From: "Sheng Wei" <w.sheng@intel.com>
To: devel@edk2.groups.io
Cc: Ray Ni, Rangasai V Chaganty, Jenny Huang, Robert Kowalewski
Subject: [edk2-devel] [PATCH] IntelSiliconPkg/Vtd: Add Vtd core drivers
Date: Tue, 23 May 2023 16:11:45 +0800
Message-Id: <20230523081145.452-1-w.sheng@intel.com>

Add two drivers (IntelVTdCorePei and IntelVTdCoreDxe) for the pre-boot
DMA protection feature.
Signed-off-by: Sheng Wei
Cc: Ray Ni
Cc: Rangasai V Chaganty
Cc: Jenny Huang
Cc: Robert Kowalewski
---
 .../Feature/VTd/IntelVTdCoreDxe/BmDma.c       |  547 +++++
 .../VTd/IntelVTdCoreDxe/DmaProtection.c       |  705 +++++++
 .../VTd/IntelVTdCoreDxe/DmaProtection.h       |  668 ++++++
 .../VTd/IntelVTdCoreDxe/DmarAcpiTable.c       |  398 ++++
 .../VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.c     |  412 ++++
 .../VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.inf   |   93 +
 .../VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.uni   |   14 +
 .../IntelVTdCoreDxe/IntelVTdCoreDxeExtra.uni  |   14 +
 .../Feature/VTd/IntelVTdCoreDxe/PciInfo.c     |  418 ++++
 .../VTd/IntelVTdCoreDxe/TranslationTable.c    | 1112 ++++++++++
 .../VTd/IntelVTdCoreDxe/TranslationTableEx.c  |  108 +
 .../Feature/VTd/IntelVTdCoreDxe/VtdLog.c      |  383 ++++
 .../Feature/VTd/IntelVTdCoreDxe/VtdReg.c      |  757 +++++++
 .../Feature/VTd/IntelVTdCorePei/DmarTable.c   |   63 +
 .../VTd/IntelVTdCorePei/IntelVTdCorePei.c     | 1099 ++++++++++
 .../VTd/IntelVTdCorePei/IntelVTdCorePei.h     |  262 +++
 .../VTd/IntelVTdCorePei/IntelVTdCorePei.inf   |   70 +
 .../VTd/IntelVTdCorePei/IntelVTdCorePei.uni   |   14 +
 .../IntelVTdCorePei/IntelVTdCorePeiExtra.uni  |   14 +
 .../VTd/IntelVTdCorePei/IntelVTdDmar.c        |  727 +++++++
 .../VTd/IntelVTdCorePei/TranslationTable.c    |  926 +++++++++
 .../Include/Guid/VtdLogDataHob.h              |  151 ++
 .../Include/Library/IntelVTdPeiDxeLib.h       |  423 ++++
 .../IntelSiliconPkg/Include/Protocol/VtdLog.h |   59 +
 .../Intel/IntelSiliconPkg/IntelSiliconPkg.dec |   21 +
 .../Intel/IntelSiliconPkg/IntelSiliconPkg.dsc |    1 +
 .../IntelVTdPeiDxeLib/IntelVTdPeiDxeLib.c     | 1812 +++++++++++++++++
 .../IntelVTdPeiDxeLib/IntelVTdPeiDxeLib.inf   |   30 +
 .../IntelVTdPeiDxeLibExt.inf                  |   34 +
 29 files changed, 11335 insertions(+)
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/BmDma.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmaProtection.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmaProtection.h
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmarAcpiTable.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.inf
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.uni
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxeExtra.uni
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/PciInfo.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/TranslationTable.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/TranslationTableEx.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/VtdLog.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/VtdReg.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/DmarTable.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCorePei.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCorePei.h
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCorePei.inf
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCorePei.uni
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCorePeiExtra.uni
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdDmar.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/TranslationTable.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Include/Guid/VtdLogDataHob.h
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Include/Library/IntelVTdPeiDxeLib.h
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Include/Protocol/VtdLog.h
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiDxeLib.c
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiDxeLib.inf
 create mode 100644 Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiDxeLibExt.inf

diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/BmDma.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/BmDma.c
new file mode 100644
index 000000000..41917a004
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/BmDma.c
@@ -0,0 +1,547 @@
+/** @file
+  BmDma related function
+
+  Copyright (c) 2017 - 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "DmaProtection.h"
+
+// TBD: May make it a policy
+#define DMA_MEMORY_TOP  MAX_UINTN
+//#define DMA_MEMORY_TOP  0x0000000001FFFFFFULL
+
+#define MAP_HANDLE_INFO_SIGNATURE  SIGNATURE_32 ('H', 'M', 'A', 'P')
+typedef struct {
+  UINT32                 Signature;
+  LIST_ENTRY             Link;
+  EFI_HANDLE             DeviceHandle;
+  UINT64                 IoMmuAccess;
+} MAP_HANDLE_INFO;
+#define MAP_HANDLE_INFO_FROM_LINK(a) CR (a, MAP_HANDLE_INFO, Link, MAP_HANDLE_INFO_SIGNATURE)
+
+#define MAP_INFO_SIGNATURE  SIGNATURE_32 ('D', 'M', 'A', 'P')
+typedef struct {
+  UINT32                 Signature;
+  LIST_ENTRY             Link;
+  EDKII_IOMMU_OPERATION  Operation;
+  UINTN                  NumberOfBytes;
+  UINTN                  NumberOfPages;
+  EFI_PHYSICAL_ADDRESS   HostAddress;
+  EFI_PHYSICAL_ADDRESS   DeviceAddress;
+  LIST_ENTRY             HandleList;
+} MAP_INFO;
+#define MAP_INFO_FROM_LINK(a) CR (a, MAP_INFO, Link, MAP_INFO_SIGNATURE)
+
+LIST_ENTRY  gMaps = INITIALIZE_LIST_HEAD_VARIABLE(gMaps);
+
+/**
+  This function fills DeviceHandle/IoMmuAccess to the MAP_HANDLE_INFO,
+  based upon the DeviceAddress.
+
+  @param[in]  DeviceHandle      The device who initiates the DMA access request.
+  @param[in]  DeviceAddress     The base of device memory address to be used as the DMA memory.
+  @param[in]  Length            The length of device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess       The IOMMU access.
+
+**/
+VOID
+SyncDeviceHandleToMapInfo (
+  IN EFI_HANDLE            DeviceHandle,
+  IN EFI_PHYSICAL_ADDRESS  DeviceAddress,
+  IN UINT64                Length,
+  IN UINT64                IoMmuAccess
+  )
+{
+  MAP_INFO         *MapInfo;
+  MAP_HANDLE_INFO  *MapHandleInfo;
+  LIST_ENTRY       *Link;
+  EFI_TPL          OriginalTpl;
+
+  //
+  // Find MapInfo according to DeviceAddress
+  //
+  OriginalTpl = gBS->RaiseTPL (VTD_TPL_LEVEL);
+  MapInfo = NULL;
+  for (Link = GetFirstNode (&gMaps)
+       ; !IsNull (&gMaps, Link)
+       ; Link = GetNextNode (&gMaps, Link)
+       ) {
+    MapInfo = MAP_INFO_FROM_LINK (Link);
+    if (MapInfo->DeviceAddress == DeviceAddress) {
+      break;
+    }
+  }
+  if ((MapInfo == NULL) || (MapInfo->DeviceAddress != DeviceAddress)) {
+    DEBUG ((DEBUG_ERROR, "SyncDeviceHandleToMapInfo: DeviceAddress(0x%lx) - not found\n", DeviceAddress));
+    gBS->RestoreTPL (OriginalTpl);
+    return ;
+  }
+
+  //
+  // Find MapHandleInfo according to DeviceHandle
+  //
+  MapHandleInfo = NULL;
+  for (Link = GetFirstNode (&MapInfo->HandleList)
+       ; !IsNull (&MapInfo->HandleList, Link)
+       ; Link = GetNextNode (&MapInfo->HandleList, Link)
+       ) {
+    MapHandleInfo = MAP_HANDLE_INFO_FROM_LINK (Link);
+    if (MapHandleInfo->DeviceHandle == DeviceHandle) {
+      break;
+    }
+  }
+  if ((MapHandleInfo != NULL) && (MapHandleInfo->DeviceHandle == DeviceHandle)) {
+    MapHandleInfo->IoMmuAccess = IoMmuAccess;
+    gBS->RestoreTPL (OriginalTpl);
+    return ;
+  }
+
+  //
+  // No DeviceHandle
+  // Initialize and insert the MAP_HANDLE_INFO structure
+  //
+  MapHandleInfo = AllocatePool (sizeof (MAP_HANDLE_INFO));
+  if (MapHandleInfo == NULL) {
+    DEBUG ((DEBUG_ERROR, "SyncDeviceHandleToMapInfo: %r\n", EFI_OUT_OF_RESOURCES));
+    gBS->RestoreTPL (OriginalTpl);
+    return ;
+  }
+
+  MapHandleInfo->Signature    = MAP_HANDLE_INFO_SIGNATURE;
+  MapHandleInfo->DeviceHandle = DeviceHandle;
+  MapHandleInfo->IoMmuAccess  = IoMmuAccess;
+
+  InsertTailList (&MapInfo->HandleList, &MapHandleInfo->Link);
+  gBS->RestoreTPL (OriginalTpl);
+
+  return ;
+}
+
+/**
+  Provides the controller-specific addresses required to access system memory from a
+  DMA bus master.
+
+  @param  This                  The protocol instance pointer.
+  @param  Operation             Indicates if the bus master is going to read or write to system memory.
+  @param  HostAddress           The system memory address to map to the PCI controller.
+  @param  NumberOfBytes         On input the number of bytes to map. On output the number of bytes
+                                that were mapped.
+  @param  DeviceAddress         The resulting map address for the bus master PCI controller to use to
+                                access the hosts HostAddress.
+  @param  Mapping               A resulting value to pass to Unmap().
+
+  @retval EFI_SUCCESS           The range was mapped for the returned NumberOfBytes.
+  @retval EFI_UNSUPPORTED       The HostAddress cannot be mapped as a common buffer.
+  @retval EFI_INVALID_PARAMETER One or more parameters are invalid.
+  @retval EFI_OUT_OF_RESOURCES  The request could not be completed due to a lack of resources.
+  @retval EFI_DEVICE_ERROR      The system hardware could not map the requested address.
+
+**/
+EFI_STATUS
+EFIAPI
+IoMmuMap (
+  IN     EDKII_IOMMU_PROTOCOL   *This,
+  IN     EDKII_IOMMU_OPERATION  Operation,
+  IN     VOID                   *HostAddress,
+  IN OUT UINTN                  *NumberOfBytes,
+  OUT    EFI_PHYSICAL_ADDRESS   *DeviceAddress,
+  OUT    VOID                   **Mapping
+  )
+{
+  EFI_STATUS            Status;
+  EFI_PHYSICAL_ADDRESS  PhysicalAddress;
+  MAP_INFO              *MapInfo;
+  EFI_PHYSICAL_ADDRESS  DmaMemoryTop;
+  BOOLEAN               NeedRemap;
+  EFI_TPL               OriginalTpl;
+
+  if (NumberOfBytes == NULL || DeviceAddress == NULL ||
+      Mapping == NULL) {
+    DEBUG ((DEBUG_ERROR, "IoMmuMap: %r\n", EFI_INVALID_PARAMETER));
+    return EFI_INVALID_PARAMETER;
+  }
+
+  DEBUG ((DEBUG_VERBOSE, "IoMmuMap: ==> 0x%08x - 0x%08x (%x)\n", HostAddress, *NumberOfBytes, Operation));
+
+  //
+  // Make sure that Operation is valid
+  //
+  if ((UINT32) Operation >= EdkiiIoMmuOperationMaximum) {
+    DEBUG ((DEBUG_ERROR, "IoMmuMap: %r\n", EFI_INVALID_PARAMETER));
+    return EFI_INVALID_PARAMETER;
+  }
+  NeedRemap = FALSE;
+  PhysicalAddress = (EFI_PHYSICAL_ADDRESS) (UINTN) HostAddress;
+
+  DmaMemoryTop = DMA_MEMORY_TOP;
+
+  //
+  // Alignment check
+  //
+  if ((*NumberOfBytes != ALIGN_VALUE(*NumberOfBytes, SIZE_4KB)) ||
+      (PhysicalAddress != ALIGN_VALUE(PhysicalAddress, SIZE_4KB))) {
+    if ((Operation == EdkiiIoMmuOperationBusMasterCommonBuffer) ||
+        (Operation == EdkiiIoMmuOperationBusMasterCommonBuffer64)) {
+      //
+      // The input buffer might be a subset from IoMmuAllocateBuffer.
+      // Skip the check.
+      //
+    } else {
+      NeedRemap = TRUE;
+    }
+  }
+
+  if ((PhysicalAddress + *NumberOfBytes) >= DMA_MEMORY_TOP) {
+    NeedRemap = TRUE;
+  }
+
+  if (((Operation != EdkiiIoMmuOperationBusMasterRead64 &&
+        Operation != EdkiiIoMmuOperationBusMasterWrite64 &&
+        Operation != EdkiiIoMmuOperationBusMasterCommonBuffer64)) &&
+      ((PhysicalAddress + *NumberOfBytes) > SIZE_4GB)) {
+    //
+    // If the root bridge or the device cannot handle performing DMA above
+    // 4GB but any part of the DMA transfer being mapped is above 4GB, then
+    // map the DMA transfer to a buffer below 4GB.
+    //
+    NeedRemap = TRUE;
+    DmaMemoryTop = MIN (DmaMemoryTop, SIZE_4GB - 1);
+  }
+
+  if (Operation == EdkiiIoMmuOperationBusMasterCommonBuffer ||
+      Operation == EdkiiIoMmuOperationBusMasterCommonBuffer64) {
+    if (NeedRemap) {
+      //
+      // Common Buffer operations can not be remapped.  If the common buffer
+      // is above 4GB, then it is not possible to generate a mapping, so return
+      // an error.
+      //
+      DEBUG ((DEBUG_ERROR, "IoMmuMap: %r\n", EFI_UNSUPPORTED));
+      return EFI_UNSUPPORTED;
+    }
+  }
+
+  //
+  // Allocate a MAP_INFO structure to remember the mapping when Unmap() is
+  // called later.
+  //
+  MapInfo = AllocatePool (sizeof (MAP_INFO));
+  if (MapInfo == NULL) {
+    *NumberOfBytes = 0;
+    DEBUG ((DEBUG_ERROR, "IoMmuMap: %r\n", EFI_OUT_OF_RESOURCES));
+    return EFI_OUT_OF_RESOURCES;
+  }
+
+  //
+  // Initialize the MAP_INFO structure
+  //
+  MapInfo->Signature     = MAP_INFO_SIGNATURE;
+  MapInfo->Operation     = Operation;
+  MapInfo->NumberOfBytes = *NumberOfBytes;
+  MapInfo->NumberOfPages = EFI_SIZE_TO_PAGES (MapInfo->NumberOfBytes);
+  MapInfo->HostAddress   = PhysicalAddress;
+  MapInfo->DeviceAddress = DmaMemoryTop;
+  InitializeListHead(&MapInfo->HandleList);
+
+  //
+  // Allocate a buffer below 4GB to map the transfer to.
+  //
+  if (NeedRemap) {
+    Status = gBS->AllocatePages (
+                    AllocateMaxAddress,
+                    EfiBootServicesData,
+                    MapInfo->NumberOfPages,
+                    &MapInfo->DeviceAddress
+                    );
+    if (EFI_ERROR (Status)) {
+      FreePool (MapInfo);
+      *NumberOfBytes = 0;
+      DEBUG ((DEBUG_ERROR, "IoMmuMap: %r\n", Status));
+      return Status;
+    }
+
+    //
+    // If this is a read operation from the Bus Master's point of view,
+    // then copy the contents of the real buffer into the mapped buffer
+    // so the Bus Master can read the contents of the real buffer.
+    //
+    if (Operation == EdkiiIoMmuOperationBusMasterRead ||
+        Operation == EdkiiIoMmuOperationBusMasterRead64) {
+      CopyMem (
+        (VOID *) (UINTN) MapInfo->DeviceAddress,
+        (VOID *) (UINTN) MapInfo->HostAddress,
+        MapInfo->NumberOfBytes
+        );
+    }
+  } else {
+    MapInfo->DeviceAddress = MapInfo->HostAddress;
+  }
+
+  OriginalTpl = gBS->RaiseTPL (VTD_TPL_LEVEL);
+  InsertTailList (&gMaps, &MapInfo->Link);
+  gBS->RestoreTPL (OriginalTpl);
+
+  //
+  // The DeviceAddress is the address of the mapped buffer below 4GB
+  //
+  *DeviceAddress = MapInfo->DeviceAddress;
+  //
+  // Return a pointer to the MAP_INFO structure in Mapping
+  //
+  *Mapping = MapInfo;
+
+  DEBUG ((DEBUG_VERBOSE, "IoMmuMap: 0x%08x - 0x%08x <==\n", *DeviceAddress, *Mapping));
+
+  VTdLogAddEvent (VTDLOG_DXE_IOMMU_MAP, (UINT64) (*DeviceAddress), (UINT64) Operation);
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Completes the Map() operation and releases any corresponding resources.
+
+  @param  This                  The protocol instance pointer.
+  @param  Mapping               The mapping value returned from Map().
+
+  @retval EFI_SUCCESS           The range was unmapped.
+  @retval EFI_INVALID_PARAMETER Mapping is not a value that was returned by Map().
+  @retval EFI_DEVICE_ERROR      The data was not committed to the target system memory.
+**/
+EFI_STATUS
+EFIAPI
+IoMmuUnmap (
+  IN  EDKII_IOMMU_PROTOCOL  *This,
+  IN  VOID                  *Mapping
+  )
+{
+  MAP_INFO         *MapInfo;
+  MAP_HANDLE_INFO  *MapHandleInfo;
+  LIST_ENTRY       *Link;
+  EFI_TPL          OriginalTpl;
+
+  DEBUG ((DEBUG_VERBOSE, "IoMmuUnmap: 0x%08x\n", Mapping));
+
+  if (Mapping == NULL) {
+    DEBUG ((DEBUG_ERROR, "IoMmuUnmap: %r\n", EFI_INVALID_PARAMETER));
+    return EFI_INVALID_PARAMETER;
+  }
+
+  OriginalTpl = gBS->RaiseTPL (VTD_TPL_LEVEL);
+  MapInfo = NULL;
+  for (Link = GetFirstNode (&gMaps)
+       ; !IsNull (&gMaps, Link)
+       ; Link = GetNextNode (&gMaps, Link)
+       ) {
+    MapInfo = MAP_INFO_FROM_LINK (Link);
+    if (MapInfo == Mapping) {
+      break;
+    }
+  }
+  //
+  // Mapping is not a valid value returned by Map()
+  //
+  if (MapInfo != Mapping) {
+    gBS->RestoreTPL (OriginalTpl);
+    DEBUG ((DEBUG_ERROR, "IoMmuUnmap: %r\n", EFI_INVALID_PARAMETER));
+    return EFI_INVALID_PARAMETER;
+  }
+  RemoveEntryList (&MapInfo->Link);
+  gBS->RestoreTPL (OriginalTpl);
+
+  //
+  // remove all nodes in MapInfo->HandleList
+  //
+  while (!IsListEmpty (&MapInfo->HandleList)) {
+    MapHandleInfo = MAP_HANDLE_INFO_FROM_LINK (MapInfo->HandleList.ForwardLink);
+    RemoveEntryList (&MapHandleInfo->Link);
+    FreePool (MapHandleInfo);
+  }
+
+  if (MapInfo->DeviceAddress != MapInfo->HostAddress) {
+    //
+    // If this is a write operation from the Bus Master's point of view,
+    // then copy the contents of the mapped buffer into the real buffer
+    // so the processor can read the contents of the real buffer.
+    //
+    if (MapInfo->Operation == EdkiiIoMmuOperationBusMasterWrite ||
+        MapInfo->Operation == EdkiiIoMmuOperationBusMasterWrite64) {
+      CopyMem (
+        (VOID *) (UINTN) MapInfo->HostAddress,
+        (VOID *) (UINTN) MapInfo->DeviceAddress,
+        MapInfo->NumberOfBytes
+        );
+    }
+
+    //
+    // Free the mapped buffer and the MAP_INFO structure.
+    //
+    gBS->FreePages (MapInfo->DeviceAddress, MapInfo->NumberOfPages);
+  }
+
+  VTdLogAddEvent (VTDLOG_DXE_IOMMU_UNMAP, MapInfo->NumberOfBytes, MapInfo->DeviceAddress);
+
+  FreePool (Mapping);
+  return EFI_SUCCESS;
+}
+
+/**
+  Allocates pages that are suitable for an OperationBusMasterCommonBuffer or
+  OperationBusMasterCommonBuffer64 mapping.
+
+  @param  This                  The protocol instance pointer.
+  @param  Type                  This parameter is not used and must be ignored.
+  @param  MemoryType            The type of memory to allocate, EfiBootServicesData or
+                                EfiRuntimeServicesData.
+  @param  Pages                 The number of pages to allocate.
+  @param  HostAddress           A pointer to store the base system memory address of the
+                                allocated range.
+  @param  Attributes            The requested bit mask of attributes for the allocated range.
+
+  @retval EFI_SUCCESS           The requested memory pages were allocated.
+  @retval EFI_UNSUPPORTED       Attributes is unsupported. The only legal attribute bits are
+                                MEMORY_WRITE_COMBINE, MEMORY_CACHED and DUAL_ADDRESS_CYCLE.
+  @retval EFI_INVALID_PARAMETER One or more parameters are invalid.
+  @retval EFI_OUT_OF_RESOURCES  The memory pages could not be allocated.
+
+**/
+EFI_STATUS
+EFIAPI
+IoMmuAllocateBuffer (
+  IN     EDKII_IOMMU_PROTOCOL  *This,
+  IN     EFI_ALLOCATE_TYPE     Type,
+  IN     EFI_MEMORY_TYPE       MemoryType,
+  IN     UINTN                 Pages,
+  IN OUT VOID                  **HostAddress,
+  IN     UINT64                Attributes
+  )
+{
+  EFI_STATUS            Status;
+  EFI_PHYSICAL_ADDRESS  PhysicalAddress;
+
+  DEBUG ((DEBUG_VERBOSE, "IoMmuAllocateBuffer: ==> 0x%08x\n", Pages));
+
+  //
+  // Validate Attributes
+  //
+  if ((Attributes & EDKII_IOMMU_ATTRIBUTE_INVALID_FOR_ALLOCATE_BUFFER) != 0) {
+    DEBUG ((DEBUG_ERROR, "IoMmuAllocateBuffer: %r\n", EFI_UNSUPPORTED));
+    return EFI_UNSUPPORTED;
+  }
+
+  //
+  // Check for invalid inputs
+  //
+  if (HostAddress == NULL) {
+    DEBUG ((DEBUG_ERROR, "IoMmuAllocateBuffer: %r\n", EFI_INVALID_PARAMETER));
+    return EFI_INVALID_PARAMETER;
+  }
+
+  //
+  // The only valid memory types are EfiBootServicesData and
+  // EfiRuntimeServicesData
+  //
+  if (MemoryType != EfiBootServicesData &&
+      MemoryType != EfiRuntimeServicesData) {
+    DEBUG ((DEBUG_ERROR, "IoMmuAllocateBuffer: %r\n", EFI_INVALID_PARAMETER));
+    return EFI_INVALID_PARAMETER;
+  }
+
+  PhysicalAddress = DMA_MEMORY_TOP;
+  if ((Attributes & EDKII_IOMMU_ATTRIBUTE_DUAL_ADDRESS_CYCLE) == 0) {
+    //
+    // Limit allocations to memory below 4GB
+    //
+    PhysicalAddress = MIN (PhysicalAddress, SIZE_4GB - 1);
+  }
+  Status = gBS->AllocatePages (
+                  AllocateMaxAddress,
+                  MemoryType,
+                  Pages,
+                  &PhysicalAddress
+                  );
+  if (!EFI_ERROR (Status)) {
+    *HostAddress = (VOID *) (UINTN) PhysicalAddress;
+
+    VTdLogAddEvent (VTDLOG_DXE_IOMMU_ALLOC_BUFFER, (UINT64) Pages, (UINT64) (*HostAddress));
+  }
+
+  DEBUG ((DEBUG_VERBOSE, "IoMmuAllocateBuffer: 0x%08x <==\n", *HostAddress));
+
+  return Status;
+}
+
+/**
+  Frees memory that was allocated with AllocateBuffer().
+
+  @param  This                  The protocol instance pointer.
+  @param  Pages                 The number of pages to free.
+  @param  HostAddress           The base system memory address of the allocated range.
+
+  @retval EFI_SUCCESS           The requested memory pages were freed.
+  @retval EFI_INVALID_PARAMETER The memory range specified by HostAddress and Pages
+                                was not allocated with AllocateBuffer().
+
+**/
+EFI_STATUS
+EFIAPI
+IoMmuFreeBuffer (
+  IN  EDKII_IOMMU_PROTOCOL  *This,
+  IN  UINTN                 Pages,
+  IN  VOID                  *HostAddress
+  )
+{
+  DEBUG ((DEBUG_VERBOSE, "IoMmuFreeBuffer: 0x%x\n", Pages));
+
+  VTdLogAddEvent (VTDLOG_DXE_IOMMU_FREE_BUFFER, Pages, (UINT64) HostAddress);
+
+  return gBS->FreePages ((EFI_PHYSICAL_ADDRESS) (UINTN) HostAddress, Pages);
+}
+
+/**
+  Get device information from mapping.
+
+  @param[in]  Mapping        The mapping.
+  @param[out] DeviceAddress  The device address of the mapping.
+  @param[out] NumberOfPages  The number of pages of the mapping.
+
+  @retval EFI_SUCCESS            The device information is returned.
+  @retval EFI_INVALID_PARAMETER  The mapping is invalid.
+**/
+EFI_STATUS
+GetDeviceInfoFromMapping (
+  IN  VOID                  *Mapping,
+  OUT EFI_PHYSICAL_ADDRESS  *DeviceAddress,
+  OUT UINTN                 *NumberOfPages
+  )
+{
+  MAP_INFO    *MapInfo;
+  LIST_ENTRY  *Link;
+
+  if (Mapping == NULL) {
+    return EFI_INVALID_PARAMETER;
+  }
+
+  MapInfo = NULL;
+  for (Link = GetFirstNode (&gMaps)
+       ; !IsNull (&gMaps, Link)
+       ; Link = GetNextNode (&gMaps, Link)
+       ) {
+    MapInfo = MAP_INFO_FROM_LINK (Link);
+    if (MapInfo == Mapping) {
+      break;
+    }
+  }
+  //
+  // Mapping is not a valid value returned by Map()
+  //
+  if (MapInfo != Mapping) {
+    return EFI_INVALID_PARAMETER;
+  }
+
+  *DeviceAddress = MapInfo->DeviceAddress;
+  *NumberOfPages = MapInfo->NumberOfPages;
+  return EFI_SUCCESS;
+}
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmaProtection.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmaProtection.c
new file mode 100644
index 000000000..9fd2b4a44
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmaProtection.c
@@ -0,0 +1,705 @@
+/** @file
+
+  Copyright (c) 2017 - 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "DmaProtection.h"
+
+UINT64                              mBelow4GMemoryLimit;
+UINT64                              mAbove4GMemoryLimit;
+
+EDKII_PLATFORM_VTD_POLICY_PROTOCOL  *mPlatformVTdPolicy;
+
+VTD_ACCESS_REQUEST                  *mAccessRequest = NULL;
+UINTN                               mAccessRequestCount = 0;
+UINTN                               mAccessRequestMaxCount = 0;
+
+/**
+  Append VTd Access Request to global.
+
+  @param[in]  Segment           The Segment used to identify a VTd engine.
+  @param[in]  SourceId          The SourceId used to identify a VTd engine and table entry.
+  @param[in]  BaseAddress       The base of device memory address to be used as the DMA memory.
+  @param[in]  Length            The length of device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess       The IOMMU access.
+
+  @retval EFI_SUCCESS           The IoMmuAccess is set for the memory range specified by BaseAddress and Length.
+  @retval EFI_INVALID_PARAMETER BaseAddress is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER Length is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER Length is 0.
+  @retval EFI_INVALID_PARAMETER IoMmuAccess specified an illegal combination of access.
+  @retval EFI_UNSUPPORTED       The bit mask of IoMmuAccess is not supported by the IOMMU.
+  @retval EFI_UNSUPPORTED       The IOMMU does not support the memory range specified by BaseAddress and Length.
+  @retval EFI_OUT_OF_RESOURCES  There are not enough resources available to modify the IOMMU access.
+  @retval EFI_DEVICE_ERROR      The IOMMU device reported an error while attempting the operation.
+
+**/
+EFI_STATUS
+RequestAccessAttribute (
+  IN UINT16         Segment,
+  IN VTD_SOURCE_ID  SourceId,
+  IN UINT64         BaseAddress,
+  IN UINT64         Length,
+  IN UINT64         IoMmuAccess
+  )
+{
+  VTD_ACCESS_REQUEST  *NewAccessRequest;
+  UINTN               Index;
+
+  //
+  // Optimization for memory.
+  //
+  // If the last record is to IoMmuAccess=0,
+  // Check previous records and remove the matched entry.
+  //
+  if (IoMmuAccess == 0) {
+    for (Index = 0; Index < mAccessRequestCount; Index++) {
+      if ((mAccessRequest[Index].Segment == Segment) &&
+          (mAccessRequest[Index].SourceId.Uint16 == SourceId.Uint16) &&
+          (mAccessRequest[Index].BaseAddress == BaseAddress) &&
+          (mAccessRequest[Index].Length == Length) &&
+          (mAccessRequest[Index].IoMmuAccess != 0)) {
+        //
+        // Remove this record [Index].
+        // No need to add the new record.
+        //
+        if (Index != mAccessRequestCount - 1) {
+          CopyMem (
+            &mAccessRequest[Index],
+            &mAccessRequest[Index + 1],
+            sizeof (VTD_ACCESS_REQUEST) * (mAccessRequestCount - 1 - Index)
+            );
+        }
+        ZeroMem (&mAccessRequest[mAccessRequestCount - 1], sizeof(VTD_ACCESS_REQUEST));
+        mAccessRequestCount--;
+        return EFI_SUCCESS;
+      }
+    }
+  }
+
+  if (mAccessRequestCount >= mAccessRequestMaxCount) {
+    NewAccessRequest = AllocateZeroPool (sizeof(*NewAccessRequest) * (mAccessRequestMaxCount + MAX_VTD_ACCESS_REQUEST));
+    if (NewAccessRequest == NULL) {
+      return EFI_OUT_OF_RESOURCES;
+    }
+    mAccessRequestMaxCount += MAX_VTD_ACCESS_REQUEST;
+    if (mAccessRequest != NULL) {
+      CopyMem (NewAccessRequest, mAccessRequest, sizeof(*NewAccessRequest) * mAccessRequestCount);
+      FreePool (mAccessRequest);
+    }
+    mAccessRequest = NewAccessRequest;
+  }
+
+  ASSERT (mAccessRequestCount < mAccessRequestMaxCount);
+
+  mAccessRequest[mAccessRequestCount].Segment     = Segment;
+  mAccessRequest[mAccessRequestCount].SourceId    = SourceId;
+  mAccessRequest[mAccessRequestCount].BaseAddress = BaseAddress;
+  mAccessRequest[mAccessRequestCount].Length      = Length;
+  mAccessRequest[mAccessRequestCount].IoMmuAccess = IoMmuAccess;
+
+  mAccessRequestCount++;
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Process Access Requests from before DMAR table is installed.
+
+**/
+VOID
+ProcessRequestedAccessAttribute (
+  VOID
+  )
+{
+  UINTN       Index;
+  EFI_STATUS  Status;
+
+  DEBUG ((DEBUG_INFO, "ProcessRequestedAccessAttribute ...\n"));
+
+  for (Index = 0; Index < mAccessRequestCount; Index++) {
+    DEBUG ((
+      DEBUG_INFO,
+      "PCI(S%x.B%x.D%x.F%x) ",
+      mAccessRequest[Index].Segment,
+      mAccessRequest[Index].SourceId.Bits.Bus,
+      mAccessRequest[Index].SourceId.Bits.Device,
+      mAccessRequest[Index].SourceId.Bits.Function
+      ));
+    DEBUG ((
+      DEBUG_INFO,
+      "(0x%lx~0x%lx) - %lx\n",
+      mAccessRequest[Index].BaseAddress,
+      mAccessRequest[Index].Length,
+      mAccessRequest[Index].IoMmuAccess
+      ));
+    Status = SetAccessAttribute (
+               mAccessRequest[Index].Segment,
+               mAccessRequest[Index].SourceId,
+               mAccessRequest[Index].BaseAddress,
+               mAccessRequest[Index].Length,
+               mAccessRequest[Index].IoMmuAccess
+               );
+    if (EFI_ERROR (Status)) {
+      DEBUG ((DEBUG_ERROR, "SetAccessAttribute %r: ", Status));
+    }
+  }
+
+  if (mAccessRequest != NULL) {
+    FreePool (mAccessRequest);
+  }
+  mAccessRequest = NULL;
+  mAccessRequestCount = 0;
+  mAccessRequestMaxCount = 0;
+
+  DEBUG ((DEBUG_INFO, "ProcessRequestedAccessAttribute Done\n"));
+}
+
+/**
+  Return UEFI memory map information.
+
+  @param[out] Below4GMemoryLimit  The below 4GiB memory limit address or 0 if insufficient resources exist to
+                                  determine the address.
+  @param[out] Above4GMemoryLimit  The above 4GiB memory limit address or 0 if insufficient resources exist to
+                                  determine the address.
+
+**/
+VOID
+ReturnUefiMemoryMap (
+  OUT UINT64  *Below4GMemoryLimit,
+  OUT UINT64  *Above4GMemoryLimit
+  )
+{
+  EFI_STATUS             Status;
+  EFI_MEMORY_DESCRIPTOR  *EfiMemoryMap;
+  EFI_MEMORY_DESCRIPTOR  *EfiMemoryMapEnd;
+  EFI_MEMORY_DESCRIPTOR  *EfiEntry;
+  EFI_MEMORY_DESCRIPTOR  *NextEfiEntry;
+  EFI_MEMORY_DESCRIPTOR  TempEfiEntry;
+  UINTN                  EfiMemoryMapSize;
+  UINTN                  EfiMapKey;
+  UINTN                  EfiDescriptorSize;
+  UINT32                 EfiDescriptorVersion;
+  UINT64                 MemoryBlockLength;
+
+  *Below4GMemoryLimit = 0;
+  *Above4GMemoryLimit = 0;
+
+  //
+  // Get the EFI memory map.
+  //
+  EfiMemoryMapSize = 0;
+  EfiMemoryMap     = NULL;
+  Status = gBS->GetMemoryMap (
+                  &EfiMemoryMapSize,
+                  EfiMemoryMap,
+                  &EfiMapKey,
+                  &EfiDescriptorSize,
+                  &EfiDescriptorVersion
+                  );
+  ASSERT (Status == EFI_BUFFER_TOO_SMALL);
+
+  do {
+    //
+    // Use size returned back plus 1 descriptor for the AllocatePool.
+    // We don't just multiply by 2 since the "for" loop below terminates on
+    // EfiMemoryMapEnd which is dependent upon EfiMemoryMapSize. Otherwise
+    // we process bogus entries and create bogus E820 entries.
+    //
+    EfiMemoryMap = (EFI_MEMORY_DESCRIPTOR *) AllocatePool (EfiMemoryMapSize);
+    if (EfiMemoryMap == NULL) {
+      ASSERT (EfiMemoryMap != NULL);
+      return;
+    }
+
+    Status = gBS->GetMemoryMap (
+                    &EfiMemoryMapSize,
+                    EfiMemoryMap,
+                    &EfiMapKey,
+                    &EfiDescriptorSize,
+                    &EfiDescriptorVersion
+                    );
+    if (EFI_ERROR (Status)) {
+      FreePool (EfiMemoryMap);
+    }
+  } while (Status == EFI_BUFFER_TOO_SMALL);
+  ASSERT_EFI_ERROR (Status);
+
+  //
+  // Sort memory map from low to high
+  //
+  EfiEntry        = EfiMemoryMap;
+  NextEfiEntry    = NEXT_MEMORY_DESCRIPTOR (EfiEntry, EfiDescriptorSize);
+  EfiMemoryMapEnd = (EFI_MEMORY_DESCRIPTOR *) ((UINT8 *) EfiMemoryMap + EfiMemoryMapSize);
+  while (EfiEntry < EfiMemoryMapEnd) {
+    while (NextEfiEntry < EfiMemoryMapEnd) {
+      if (EfiEntry->PhysicalStart > NextEfiEntry->PhysicalStart) {
+        CopyMem (&TempEfiEntry, EfiEntry, sizeof (EFI_MEMORY_DESCRIPTOR));
+        CopyMem (EfiEntry, NextEfiEntry, sizeof (EFI_MEMORY_DESCRIPTOR));
+        CopyMem (NextEfiEntry, &TempEfiEntry, sizeof (EFI_MEMORY_DESCRIPTOR));
+      }
+
+      NextEfiEntry = NEXT_MEMORY_DESCRIPTOR (NextEfiEntry, EfiDescriptorSize);
+    }
+
+    EfiEntry     = NEXT_MEMORY_DESCRIPTOR (EfiEntry, EfiDescriptorSize);
+    NextEfiEntry = NEXT_MEMORY_DESCRIPTOR (EfiEntry, EfiDescriptorSize);
+  }
+
+  DEBUG ((DEBUG_INFO, "MemoryMap:\n"));
+  EfiEntry        = EfiMemoryMap;
+  EfiMemoryMapEnd = (EFI_MEMORY_DESCRIPTOR *) ((UINT8 *) EfiMemoryMap + EfiMemoryMapSize);
+  while (EfiEntry < EfiMemoryMapEnd) {
+    MemoryBlockLength = (UINT64) (LShiftU64 (EfiEntry->NumberOfPages, 12));
+    DEBUG ((DEBUG_INFO, "Entry(0x%02x) 0x%016lx - 0x%016lx\n", EfiEntry->Type, EfiEntry->PhysicalStart, EfiEntry->PhysicalStart + MemoryBlockLength));
+    switch (EfiEntry->Type) {
+    case EfiLoaderCode:
+    case EfiLoaderData:
+    case EfiBootServicesCode:
+    case EfiBootServicesData:
+    case EfiConventionalMemory:
+    case EfiRuntimeServicesCode:
+    case EfiRuntimeServicesData:
+    case EfiACPIReclaimMemory:
+    case EfiACPIMemoryNVS:
+    case EfiReservedMemoryType:
+      if ((EfiEntry->PhysicalStart + MemoryBlockLength) <= BASE_1MB) {
+        //
+        // Skip the memory block if it is under 1MB
+        //
+      } else if (EfiEntry->PhysicalStart >= BASE_4GB) {
+        if (*Above4GMemoryLimit < EfiEntry->PhysicalStart + MemoryBlockLength) {
+          *Above4GMemoryLimit = EfiEntry->PhysicalStart + MemoryBlockLength;
+        }
+      } else {
+        if (*Below4GMemoryLimit < EfiEntry->PhysicalStart + MemoryBlockLength) {
+          *Below4GMemoryLimit = EfiEntry->PhysicalStart + MemoryBlockLength;
+        }
+      }
+      break;
+    }
+    EfiEntry = NEXT_MEMORY_DESCRIPTOR (EfiEntry, EfiDescriptorSize);
+  }
+
+  FreePool (EfiMemoryMap);
+
+  DEBUG ((DEBUG_INFO, "Result:\n"));
+  DEBUG ((DEBUG_INFO, "Below4GMemoryLimit: 0x%016lx\n", *Below4GMemoryLimit));
+  DEBUG ((DEBUG_INFO, "Above4GMemoryLimit: 0x%016lx\n", *Above4GMemoryLimit));
+
+  return ;
+}
+
+/**
+  The scan bus callback function to always enable page attribute.
+
+  @param[in]  Context           The context of the callback.
+  @param[in]  Segment           The segment of the source.
+  @param[in]  Bus               The bus of the source.
+  @param[in]  Device            The device of the source.
+  @param[in]  Function          The function of the source.
+
+  @retval EFI_SUCCESS           The VTd entry is updated to always enable all DMA access for the specific device.
+**/
+EFI_STATUS
+EFIAPI
+ScanBusCallbackAlwaysEnablePageAttribute (
+  IN VOID    *Context,
+  IN UINT16  Segment,
+  IN UINT8   Bus,
+  IN UINT8   Device,
+  IN UINT8   Function
+  )
+{
+  VTD_SOURCE_ID  SourceId;
+  EFI_STATUS     Status;
+
+  SourceId.Bits.Bus      = Bus;
+  SourceId.Bits.Device   = Device;
+  SourceId.Bits.Function = Function;
+  Status = AlwaysEnablePageAttribute (Segment, SourceId);
+  return Status;
+}
+
+/**
+  Always enable the VTd page attribute for the device in the DeviceScope.
+
+  @param[in]  DeviceScope       the input device scope data structure
+
+  @retval EFI_SUCCESS           The VTd entry is updated to always enable all DMA access for the specific device in the device scope.
+**/
+EFI_STATUS
+AlwaysEnablePageAttributeDeviceScope (
+  IN EDKII_PLATFORM_VTD_DEVICE_SCOPE  *DeviceScope
+  )
+{
+  UINT8          Bus;
+  UINT8          Device;
+  UINT8          Function;
+  VTD_SOURCE_ID  SourceId;
+  UINT8          SecondaryBusNumber;
+  EFI_STATUS     Status;
+
+  Status = GetPciBusDeviceFunction (DeviceScope->SegmentNumber, &DeviceScope->DeviceScope, &Bus, &Device, &Function);
+
+  if (DeviceScope->DeviceScope.Type == EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE) {
+    //
+    // Need to scan the bridge and add all devices behind it.
+    //
+    SecondaryBusNumber = PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS (DeviceScope->SegmentNumber, Bus, Device, Function, PCI_BRIDGE_SECONDARY_BUS_REGISTER_OFFSET));
+    Status = ScanPciBus (NULL, DeviceScope->SegmentNumber, SecondaryBusNumber, ScanBusCallbackAlwaysEnablePageAttribute);
+    return Status;
+  } else {
+    SourceId.Bits.Bus      = Bus;
+    SourceId.Bits.Device   = Device;
+    SourceId.Bits.Function = Function;
+    Status = AlwaysEnablePageAttribute (DeviceScope->SegmentNumber, SourceId);
+    return Status;
+  }
+}
+
+/**
+  Always enable the VTd page attribute for the device matching DeviceId.
+
+  @param[in]  PciDeviceId  The input PCI device ID.
+
+  @retval EFI_SUCCESS  The VTd entry is updated to always enable all DMA access for the specific device matching DeviceId.
+**/
+EFI_STATUS
+AlwaysEnablePageAttributePciDeviceId (
+  IN EDKII_PLATFORM_VTD_PCI_DEVICE_ID  *PciDeviceId
+  )
+{
+  UINTN            VtdIndex;
+  UINTN            PciIndex;
+  PCI_DEVICE_DATA  *PciDeviceData;
+  EFI_STATUS       Status;
+
+  for (VtdIndex = 0; VtdIndex < mVtdUnitNumber; VtdIndex++) {
+    for (PciIndex = 0; PciIndex < mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceDataNumber; PciIndex++) {
+      PciDeviceData = &mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceData[PciIndex];
+
+      if (((PciDeviceId->VendorId == 0xFFFF) || (PciDeviceId->VendorId == PciDeviceData->PciDeviceId.VendorId)) &&
+          ((PciDeviceId->DeviceId == 0xFFFF) || (PciDeviceId->DeviceId == PciDeviceData->PciDeviceId.DeviceId)) &&
+          ((PciDeviceId->RevisionId == 0xFF) || (PciDeviceId->RevisionId == PciDeviceData->PciDeviceId.RevisionId)) &&
+          ((PciDeviceId->SubsystemVendorId == 0xFFFF) || (PciDeviceId->SubsystemVendorId == PciDeviceData->PciDeviceId.SubsystemVendorId)) &&
+          ((PciDeviceId->SubsystemDeviceId == 0xFFFF) || (PciDeviceId->SubsystemDeviceId == PciDeviceData->PciDeviceId.SubsystemDeviceId)) ) {
+        Status = AlwaysEnablePageAttribute (mVtdUnitInformation[VtdIndex].Segment, PciDeviceData->PciSourceId);
+        if (EFI_ERROR (Status)) {
+          continue;
+        }
+      }
+    }
+  }
+  return EFI_SUCCESS;
+}
+
+/**
+  Always enable the VTd page attribute for the device.
+
+  @param[in]  DeviceInfo  The exception device information.
+
+  @retval EFI_SUCCESS  The VTd entry is updated to always enable all DMA access for the specific device in the device info.
+**/
+EFI_STATUS
+AlwaysEnablePageAttributeExceptionDeviceInfo (
+  IN EDKII_PLATFORM_VTD_EXCEPTION_DEVICE_INFO  *DeviceInfo
+  )
+{
+  switch (DeviceInfo->Type) {
+  case EDKII_PLATFORM_VTD_EXCEPTION_DEVICE_INFO_TYPE_DEVICE_SCOPE:
+    return AlwaysEnablePageAttributeDeviceScope ((VOID *)(DeviceInfo + 1));
+  case EDKII_PLATFORM_VTD_EXCEPTION_DEVICE_INFO_TYPE_PCI_DEVICE_ID:
+    return AlwaysEnablePageAttributePciDeviceId ((VOID *)(DeviceInfo + 1));
+  default:
+    return EFI_UNSUPPORTED;
+  }
+}
+
+/**
+  Initialize platform VTd policy.
+**/
+VOID
+InitializePlatformVTdPolicy (
+  VOID
+  )
+{
+  EFI_STATUS                                Status;
+  UINTN                                     DeviceInfoCount;
+  VOID                                      *DeviceInfo;
+  EDKII_PLATFORM_VTD_EXCEPTION_DEVICE_INFO  *ThisDeviceInfo;
+  UINTN                                     Index;
+
+  //
+  // The platform VTd policy protocol is optional.
+  //
+  Status = gBS->LocateProtocol (
+                  &gEdkiiPlatformVTdPolicyProtocolGuid,
+                  NULL,
+                  (VOID **)&mPlatformVTdPolicy
+                  );
+  if (!EFI_ERROR (Status)) {
+    DEBUG ((DEBUG_INFO, "InitializePlatformVTdPolicy\n"));
+    Status = mPlatformVTdPolicy->GetExceptionDeviceList (mPlatformVTdPolicy, &DeviceInfoCount, &DeviceInfo);
+    if (!EFI_ERROR (Status)) {
+      ThisDeviceInfo = DeviceInfo;
+      for (Index = 0; Index < DeviceInfoCount; Index++) {
+        if (ThisDeviceInfo->Type == EDKII_PLATFORM_VTD_EXCEPTION_DEVICE_INFO_TYPE_END) {
+          break;
+        }
+        AlwaysEnablePageAttributeExceptionDeviceInfo (ThisDeviceInfo);
+        ThisDeviceInfo = (VOID *)((UINTN)ThisDeviceInfo + ThisDeviceInfo->Length);
+      }
+      FreePool (DeviceInfo);
+    }
+  }
+}
+
+/**
+  Setup VTd engine.
+**/
+VOID
+SetupVtd (
+  VOID
+  )
+{
+  EFI_STATUS           Status;
+  VOID                 *PciEnumerationComplete;
+  UINTN                Index;
+  UINT64               Below4GMemoryLimit;
+  UINT64               Above4GMemoryLimit;
+  VTD_ROOT_TABLE_INFO  RootTableInfo;
+
+  //
+  // PCI enumeration must be done before VTd setup.
+  //
+  Status = gBS->LocateProtocol (
+                  &gEfiPciEnumerationCompleteProtocolGuid,
+                  NULL,
+                  &PciEnumerationComplete
+                  );
+  ASSERT_EFI_ERROR (Status);
+
+  ReturnUefiMemoryMap (&Below4GMemoryLimit, &Above4GMemoryLimit);
+  Below4GMemoryLimit = ALIGN_VALUE_UP (Below4GMemoryLimit, SIZE_256MB);
+  DEBUG ((DEBUG_INFO, " Adjusted Below4GMemoryLimit: 0x%016lx\n", Below4GMemoryLimit));
+
+  mBelow4GMemoryLimit = Below4GMemoryLimit;
+  mAbove4GMemoryLimit = Above4GMemoryLimit;
+
+  VTdLogAddEvent (VTDLOG_DXE_SETUP_VTD, Below4GMemoryLimit, Above4GMemoryLimit);
+
+  //
+  // 1. setup
+  //
+  DEBUG ((DEBUG_INFO, "ParseDmarAcpiTable\n"));
+  Status = ParseDmarAcpiTableDrhd ();
+  if (EFI_ERROR (Status)) {
+    return;
+  }
+
+  DumpVtdIfError ();
+
+  DEBUG ((DEBUG_INFO, "PrepareVtdConfig\n"));
+  PrepareVtdConfig ();
+
+  //
+  // 2. initialization
+  //
+  DEBUG ((DEBUG_INFO, "SetupTranslationTable\n"));
+  Status = SetupTranslationTable ();
+  if (EFI_ERROR (Status)) {
+    return;
+  }
+
+  InitializePlatformVTdPolicy ();
+
+  ParseDmarAcpiTableRmrr ();
+
+  if ((PcdGet8 (PcdVTdPolicyPropertyMask) & BIT2) == 0) {
+    //
+    // IOMMU access attribute requests may be recorded before the DMAR table is installed.
+    // Process those pending requests here.
+    //
+    ProcessRequestedAccessAttribute ();
+  }
+
+  for (Index = 0; Index < mVtdUnitNumber; Index++) {
+    DEBUG ((DEBUG_INFO, "VTD Unit %d (Segment: %04x)\n", Index, mVtdUnitInformation[Index].Segment));
+
+    if (mVtdUnitInformation[Index].ExtRootEntryTable != NULL) {
+      VtdLibDumpDmarExtContextEntryTable (NULL, NULL, mVtdUnitInformation[Index].ExtRootEntryTable, mVtdUnitInformation[Index].Is5LevelPaging);
+
+      RootTableInfo.BaseAddress    = mVtdUnitInformation[Index].VtdUnitBaseAddress;
+      RootTableInfo.TableAddress   = (UINT64) (UINTN) mVtdUnitInformation[Index].RootEntryTable;
+      RootTableInfo.Is5LevelPaging = mVtdUnitInformation[Index].Is5LevelPaging;
+      VTdLogAddDataEvent (VTDLOG_DXE_ROOT_TABLE, 1, &RootTableInfo, sizeof (VTD_ROOT_TABLE_INFO));
+    }
+
+    if (mVtdUnitInformation[Index].RootEntryTable != NULL) {
+      VtdLibDumpDmarContextEntryTable (NULL, NULL, mVtdUnitInformation[Index].RootEntryTable, mVtdUnitInformation[Index].Is5LevelPaging);
+
+      RootTableInfo.BaseAddress    = mVtdUnitInformation[Index].VtdUnitBaseAddress;
+      RootTableInfo.TableAddress   = (UINT64) (UINTN) mVtdUnitInformation[Index].RootEntryTable;
+      RootTableInfo.Is5LevelPaging = mVtdUnitInformation[Index].Is5LevelPaging;
+      VTdLogAddDataEvent (VTDLOG_DXE_ROOT_TABLE, 0, &RootTableInfo, sizeof (VTD_ROOT_TABLE_INFO));
+    }
+  }
+
+  //
+  // 3. enable
+  //
+  DEBUG ((DEBUG_INFO, "EnableDmar\n"));
+  Status = EnableDmar ();
+  if (EFI_ERROR (Status)) {
+    return;
+  }
+  DEBUG ((DEBUG_INFO, "DumpVtdRegs\n"));
+  DumpVtdRegsAll ();
+}
+
+/**
+  Notification function of ACPI Table change.
+
+  This is a notification function registered on ACPI Table change event.
+
+  @param  Event    Event whose notification function is being invoked.
+  @param  Context  Pointer to the notification function's context.
+
+**/
+VOID
+EFIAPI
+AcpiNotificationFunc (
+  IN EFI_EVENT  Event,
+  IN VOID       *Context
+  )
+{
+  EFI_STATUS  Status;
+
+  Status = GetDmarAcpiTable ();
+  if (EFI_ERROR (Status)) {
+    if (Status == EFI_ALREADY_STARTED) {
+      gBS->CloseEvent (Event);
+    }
+    return;
+  }
+  SetupVtd ();
+  gBS->CloseEvent (Event);
+}
+
+/**
+  Exit boot service callback function.
+
+  @param[in]  Event    The event handle.
+  @param[in]  Context  The event content.
+**/
+VOID
+EFIAPI
+OnExitBootServices (
+  IN EFI_EVENT  Event,
+  IN VOID       *Context
+  )
+{
+  UINTN  VtdIndex;
+
+  DEBUG ((DEBUG_INFO, "Vtd OnExitBootServices\n"));
+
+  DumpVtdRegsAll ();
+
+  DEBUG ((DEBUG_INFO, "Invalidate all\n"));
+  for (VtdIndex = 0; VtdIndex < mVtdUnitNumber; VtdIndex++) {
+    VtdLibFlushWriteBuffer (mVtdUnitInformation[VtdIndex].VtdUnitBaseAddress);
+
+    InvalidateContextCache (VtdIndex);
+
+    InvalidateIOTLB (VtdIndex);
+  }
+
+  if ((PcdGet8 (PcdVTdPolicyPropertyMask) & BIT1) == 0) {
+    DisableDmar ();
+    DumpVtdRegsAll ();
+  }
+}
+
+/**
+  Legacy boot callback function.
+
+  @param[in]  Event    The event handle.
+  @param[in]  Context  The event content.
+**/
+VOID
+EFIAPI
+OnLegacyBoot (
+  EFI_EVENT  Event,
+  VOID       *Context
+  )
+{
+  DEBUG ((DEBUG_INFO, "Vtd OnLegacyBoot\n"));
+  DumpVtdRegsAll ();
+  DisableDmar ();
+  DumpVtdRegsAll ();
+}
+
+/**
+  Initialize DMA protection.
+**/
+VOID
+InitializeDmaProtection (
+  VOID
+  )
+{
+  EFI_STATUS  Status;
+  EFI_EVENT   ExitBootServicesEvent;
+  EFI_EVENT   LegacyBootEvent;
+  EFI_EVENT   EventAcpi10;
+  EFI_EVENT   EventAcpi20;
+
+  Status = gBS->CreateEventEx (
+                  EVT_NOTIFY_SIGNAL,
+                  VTD_TPL_LEVEL,
+                  AcpiNotificationFunc,
+                  NULL,
+                  &gEfiAcpi10TableGuid,
+                  &EventAcpi10
+                  );
+  ASSERT_EFI_ERROR (Status);
+
+  Status = gBS->CreateEventEx (
+                  EVT_NOTIFY_SIGNAL,
+                  VTD_TPL_LEVEL,
+                  AcpiNotificationFunc,
+                  NULL,
+                  &gEfiAcpi20TableGuid,
+                  &EventAcpi20
+                  );
+  ASSERT_EFI_ERROR (Status);
+
+  //
+  // Signal the events initially for the case
+  // that the DMAR table has been installed.
+  //
+  gBS->SignalEvent (EventAcpi20);
+  gBS->SignalEvent (EventAcpi10);
+
+  Status = gBS->CreateEventEx (
+                  EVT_NOTIFY_SIGNAL,
+                  TPL_CALLBACK,
+                  OnExitBootServices,
+                  NULL,
+                  &gEfiEventExitBootServicesGuid,
+                  &ExitBootServicesEvent
+                  );
+  ASSERT_EFI_ERROR (Status);
+
+  Status = EfiCreateEventLegacyBootEx (
+             TPL_CALLBACK,
+             OnLegacyBoot,
+             NULL,
+             &LegacyBootEvent
+             );
+  ASSERT_EFI_ERROR (Status);
+
+  return;
+}
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmaProtection.h b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmaProtection.h
new file mode 100644
index 000000000..4b2f451b1
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmaProtection.h
@@ -0,0 +1,668 @@
+/** @file
+
+  Copyright (c) 2017 - 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#ifndef _DMAR_PROTECTION_H_
+#define _DMAR_PROTECTION_H_
+
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#include
+
+#define VTD_64BITS_ADDRESS(Lo, Hi)  (LShiftU64 (Lo, 12) | LShiftU64 (Hi, 32))
+
+#define ALIGN_VALUE_UP(Value, Alignment)   (((Value) + (Alignment) - 1) & (~((Alignment) - 1)))
+#define ALIGN_VALUE_LOW(Value, Alignment)  ((Value) & (~((Alignment) - 1)))
+
+#define VTD_TPL_LEVEL  TPL_NOTIFY
+
+//
+// Use 256-bit invalidation descriptors (descriptor width 1).
+// Queue size field 0 selects one 4KB page, which holds 128 such descriptors.
+//
+#define VTD_QUEUED_INVALIDATION_DESCRIPTOR_WIDTH  1
+#define VTD_INVALIDATION_QUEUE_SIZE               0
+
+//
+// This is the initial max PCI DATA number.
+// The number may be enlarged later.
+//
+#define MAX_VTD_PCI_DATA_NUMBER  0x100
+
+typedef struct {
+  UINTN                          VtdUnitBaseAddress;
+  UINT16                         Segment;
+  VTD_VER_REG                    VerReg;
+  VTD_CAP_REG                    CapReg;
+  VTD_ECAP_REG                   ECapReg;
+  VTD_ROOT_ENTRY                 *RootEntryTable;
+  VTD_EXT_ROOT_ENTRY             *ExtRootEntryTable;
+  VTD_SECOND_LEVEL_PAGING_ENTRY  *FixedSecondLevelPagingEntry;
+  BOOLEAN                        HasDirtyContext;
+  BOOLEAN                        HasDirtyPages;
+  PCI_DEVICE_INFORMATION         *PciDeviceInfo;
+  BOOLEAN                        Is5LevelPaging;
+  UINT8                          EnableQueuedInvalidation;
+  VOID                           *QiDescBuffer;
+  UINTN                          QiDescBufferSize;
+} VTD_UNIT_INFORMATION;
+
+//
+// This is the initial max ACCESS request number.
+// The number may be enlarged later.
+//
+#define MAX_VTD_ACCESS_REQUEST  0x100
+
+typedef struct {
+  UINT16         Segment;
+  VTD_SOURCE_ID  SourceId;
+  UINT64         BaseAddress;
+  UINT64         Length;
+  UINT64         IoMmuAccess;
+} VTD_ACCESS_REQUEST;
+
+
+/**
+  The scan bus callback function.
+
+  It is called in the PCI bus scan for each PCI device under the bus.
+
+  @param[in]  Context  The context of the callback.
+  @param[in]  Segment  The segment of the source.
+  @param[in]  Bus       The bus of the source.
+  @param[in]  Device    The device of the source.
+  @param[in]  Function  The function of the source.
+
+  @retval EFI_SUCCESS  The specific PCI device is processed in the callback.
+**/
+typedef
+EFI_STATUS
+(EFIAPI *SCAN_BUS_FUNC_CALLBACK_FUNC) (
+  IN VOID    *Context,
+  IN UINT16  Segment,
+  IN UINT8   Bus,
+  IN UINT8   Device,
+  IN UINT8   Function
+  );
+
+extern EFI_ACPI_DMAR_HEADER  *mAcpiDmarTable;
+
+extern UINTN                 mVtdUnitNumber;
+extern VTD_UNIT_INFORMATION  *mVtdUnitInformation;
+
+extern UINT64  mBelow4GMemoryLimit;
+extern UINT64  mAbove4GMemoryLimit;
+
+extern EDKII_PLATFORM_VTD_POLICY_PROTOCOL  *mPlatformVTdPolicy;
+
+/**
+  Prepare VTD configuration.
+**/
+VOID
+PrepareVtdConfig (
+  VOID
+  );
+
+/**
+  Setup VTd translation table.
+
+  @retval EFI_SUCCESS           Setup translation table successfully.
+  @retval EFI_OUT_OF_RESOURCES  Setup translation table failed.
+**/
+EFI_STATUS
+SetupTranslationTable (
+  VOID
+  );
+
+/**
+  Enable DMAR translation.
+
+  @retval EFI_SUCCESS       DMAR translation is enabled.
+  @retval EFI_DEVICE_ERROR  DMAR translation is not enabled.
+**/
+EFI_STATUS
+EnableDmar (
+  VOID
+  );
+
+/**
+  Disable DMAR translation.
+
+  @retval EFI_SUCCESS       DMAR translation is disabled.
+  @retval EFI_DEVICE_ERROR  DMAR translation is not disabled.
+**/
+EFI_STATUS
+DisableDmar (
+  VOID
+  );
+
+/**
+  Prepare the cache invalidation interface.
+
+  @param[in]  VtdIndex  The index used to identify a VTd engine.
+
+  @retval EFI_SUCCESS           The operation was successful.
+  @retval EFI_UNSUPPORTED       Invalidation method is not supported.
+  @retval EFI_OUT_OF_RESOURCES  A memory allocation failed.
+**/
+EFI_STATUS
+PerpareCacheInvalidationInterface (
+  IN UINTN  VtdIndex
+  );
+
+/**
+  Invalidate VTd context cache.
+
+  @param[in]  VtdIndex  The index used to identify a VTd engine.
+**/
+EFI_STATUS
+InvalidateContextCache (
+  IN UINTN  VtdIndex
+  );
+
+/**
+  Invalidate VTd IOTLB.
+
+  @param[in]  VtdIndex  The index used to identify a VTd engine.
+**/
+EFI_STATUS
+InvalidateIOTLB (
+  IN UINTN  VtdIndex
+  );
+
+/**
+  Invalidate VTd global IOTLB.
+
+  @param[in]  VtdIndex  The index of VTd engine.
+
+  @retval EFI_SUCCESS       VTd global IOTLB is invalidated.
+  @retval EFI_DEVICE_ERROR  VTd global IOTLB is not invalidated.
+**/
+EFI_STATUS
+InvalidateVtdIOTLBGlobal (
+  IN UINTN  VtdIndex
+  );
+
+/**
+  Dump VTd registers.
+
+  @param[in]  VtdUnitBaseAddress  The base address of the VTd engine.
+**/
+VOID
+DumpVtdRegs (
+  IN UINTN  VtdUnitBaseAddress
+  );
+
+/**
+  Dump VTd registers for all VTd engines.
+**/
+VOID
+DumpVtdRegsAll (
+  VOID
+  );
+
+/**
+  Dump VTd version registers.
+
+  @param[in]  VerReg  The version register.
+**/
+VOID
+DumpVtdVerRegs (
+  IN VTD_VER_REG  *VerReg
+  );
+
+/**
+  Dump VTd capability registers.
+
+  @param[in]  CapReg  The capability register.
+**/
+VOID
+DumpVtdCapRegs (
+  IN VTD_CAP_REG  *CapReg
+  );
+
+/**
+  Dump VTd extended capability registers.
+
+  @param[in]  ECapReg  The extended capability register.
+**/
+VOID
+DumpVtdECapRegs (
+  IN VTD_ECAP_REG  *ECapReg
+  );
+
+/**
+  Register a PCI device to the VTd engine.
+
+  @param[in]  VtdIndex    The index of VTd engine.
+  @param[in]  Segment     The segment of the source.
+  @param[in]  SourceId    The SourceId of the source.
+  @param[in]  DeviceType  The DMAR device scope type.
+  @param[in]  CheckExist  TRUE: ERROR will be returned if the PCI device is already registered.
+                          FALSE: SUCCESS will be returned if the PCI device is registered.
+
+  @retval EFI_SUCCESS           The PCI device is registered.
+  @retval EFI_OUT_OF_RESOURCES  Not enough resources to register a new PCI device.
+  @retval EFI_ALREADY_STARTED   The device is already registered.
+**/
+EFI_STATUS
+RegisterPciDevice (
+  IN UINTN          VtdIndex,
+  IN UINT16         Segment,
+  IN VTD_SOURCE_ID  SourceId,
+  IN UINT8          DeviceType,
+  IN BOOLEAN        CheckExist
+  );
+
+/**
+  The scan bus callback function to always enable page attribute.
+
+  @param[in]  Context  The context of the callback.
+  @param[in]  Segment  The segment of the source.
+  @param[in]  Bus       The bus of the source.
+  @param[in]  Device    The device of the source.
+  @param[in]  Function  The function of the source.
+
+  @retval EFI_SUCCESS  The VTd entry is updated to always enable all DMA access for the specific device.
+**/
+EFI_STATUS
+EFIAPI
+ScanBusCallbackRegisterPciDevice (
+  IN VOID    *Context,
+  IN UINT16  Segment,
+  IN UINT8   Bus,
+  IN UINT8   Device,
+  IN UINT8   Function
+  );
+
+/**
+  Scan a PCI bus and invoke the callback function for each PCI device under the bus.
+
+  @param[in]  Context   The context of the callback function.
+  @param[in]  Segment   The segment of the source.
+  @param[in]  Bus       The bus of the source.
+  @param[in]  Callback  The callback function in PCI scan.
+
+  @retval EFI_SUCCESS  The PCI devices under the bus are scanned.
+**/
+EFI_STATUS
+ScanPciBus (
+  IN VOID                         *Context,
+  IN UINT16                       Segment,
+  IN UINT8                        Bus,
+  IN SCAN_BUS_FUNC_CALLBACK_FUNC  Callback
+  );
+
+/**
+  Scan all PCI root buses and invoke the callback function for each PCI device under them.
+
+  @param[in]  Context   The context of the callback function.
+  @param[in]  Segment   The segment of the source.
+  @param[in]  Callback  The callback function in PCI scan.
+
+  @retval EFI_SUCCESS  The PCI devices under the buses are scanned.
+**/
+EFI_STATUS
+ScanAllPciBus (
+  IN VOID                         *Context,
+  IN UINT16                       Segment,
+  IN SCAN_BUS_FUNC_CALLBACK_FUNC  Callback
+  );
+
+/**
+  Find the VTd index by the Segment and SourceId.
+
+  @param[in]   Segment          The segment of the source.
+  @param[in]   SourceId         The SourceId of the source.
+  @param[out]  ExtContextEntry  The ExtContextEntry of the source.
+  @param[out]  ContextEntry     The ContextEntry of the source.
+
+  @return The index of the VTd engine.
+  @retval (UINTN)-1  The VTd engine is not found.
+**/
+UINTN
+FindVtdIndexByPciDevice (
+  IN  UINT16                 Segment,
+  IN  VTD_SOURCE_ID          SourceId,
+  OUT VTD_EXT_CONTEXT_ENTRY  **ExtContextEntry,
+  OUT VTD_CONTEXT_ENTRY      **ContextEntry
+  );
+
+/**
+  Get the DMAR ACPI table.
+
+  @retval EFI_SUCCESS  The DMAR ACPI table is retrieved.
+  @retval EFI_ALREADY_STARTED  The DMAR ACPI table has been retrieved previously.
+  @retval EFI_NOT_FOUND        The DMAR ACPI table is not found.
+**/
+EFI_STATUS
+GetDmarAcpiTable (
+  VOID
+  );
+
+/**
+  Parse the DMAR DRHD table.
+
+  @return EFI_SUCCESS  The DMAR DRHD table is parsed.
+**/
+EFI_STATUS
+ParseDmarAcpiTableDrhd (
+  VOID
+  );
+
+/**
+  Parse the DMAR RMRR table.
+
+  @return EFI_SUCCESS  The DMAR RMRR table is parsed.
+**/
+EFI_STATUS
+ParseDmarAcpiTableRmrr (
+  VOID
+  );
+
+/**
+  Set the VTd attribute for a system memory range.
+
+  @param[in]  VtdIndex                The index used to identify a VTd engine.
+  @param[in]  DomainIdentifier        The domain ID of the source.
+  @param[in]  SecondLevelPagingEntry  The second level paging entry in VTd table for the device.
+  @param[in]  BaseAddress             The base of device memory address to be used as the DMA memory.
+  @param[in]  Length                  The length of device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess             The IOMMU access.
+
+  @retval EFI_SUCCESS            The IoMmuAccess is set for the memory range specified by BaseAddress and Length.
+  @retval EFI_INVALID_PARAMETER  BaseAddress is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is 0.
+  @retval EFI_INVALID_PARAMETER  IoMmuAccess specified an illegal combination of access.
+  @retval EFI_UNSUPPORTED        The bit mask of IoMmuAccess is not supported by the IOMMU.
+  @retval EFI_UNSUPPORTED        The IOMMU does not support the memory range specified by BaseAddress and Length.
+  @retval EFI_OUT_OF_RESOURCES   There are not enough resources available to modify the IOMMU access.
+  @retval EFI_DEVICE_ERROR       The IOMMU device reported an error while attempting the operation.
+**/
+EFI_STATUS
+SetPageAttribute (
+  IN UINTN                          VtdIndex,
+  IN UINT16                         DomainIdentifier,
+  IN VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry,
+  IN UINT64                         BaseAddress,
+  IN UINT64                         Length,
+  IN UINT64                         IoMmuAccess
+  );
+
+/**
+  Set the VTd attribute for a system memory range.
+
+  @param[in]  Segment      The Segment used to identify a VTd engine.
+  @param[in]  SourceId     The SourceId used to identify a VTd engine and table entry.
+  @param[in]  BaseAddress  The base of device memory address to be used as the DMA memory.
+  @param[in]  Length       The length of device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess  The IOMMU access.
+
+  @retval EFI_SUCCESS            The IoMmuAccess is set for the memory range specified by BaseAddress and Length.
+  @retval EFI_INVALID_PARAMETER  BaseAddress is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is 0.
+  @retval EFI_INVALID_PARAMETER  IoMmuAccess specified an illegal combination of access.
+  @retval EFI_UNSUPPORTED        The bit mask of IoMmuAccess is not supported by the IOMMU.
+  @retval EFI_UNSUPPORTED        The IOMMU does not support the memory range specified by BaseAddress and Length.
+  @retval EFI_OUT_OF_RESOURCES   There are not enough resources available to modify the IOMMU access.
+  @retval EFI_DEVICE_ERROR       The IOMMU device reported an error while attempting the operation.
+**/
+EFI_STATUS
+SetAccessAttribute (
+  IN UINT16         Segment,
+  IN VTD_SOURCE_ID  SourceId,
+  IN UINT64         BaseAddress,
+  IN UINT64         Length,
+  IN UINT64         IoMmuAccess
+  );
+
+/**
+  Return the index of PCI data.
+
+  @param[in]  VtdIndex  The index used to identify a VTd engine.
+  @param[in]  Segment   The Segment used to identify a VTd engine.
+  @param[in]  SourceId  The SourceId used to identify a VTd engine and table entry.
+
+  @return The index of the PCI data.
+  @retval (UINTN)-1  The PCI data is not found.
+**/
+UINTN
+GetPciDataIndex (
+  IN UINTN          VtdIndex,
+  IN UINT16         Segment,
+  IN VTD_SOURCE_ID  SourceId
+  );
+
+/**
+  Dump VTd registers if there is an error.
+**/
+VOID
+DumpVtdIfError (
+  VOID
+  );
+
+/**
+  Initialize platform VTd policy.
+**/
+VOID
+InitializePlatformVTdPolicy (
+  VOID
+  );
+
+/**
+  Always enable the VTd page attribute for the device.
+
+  @param[in]  Segment   The Segment used to identify a VTd engine.
+  @param[in]  SourceId  The SourceId used to identify a VTd engine and table entry.
+
+  @retval EFI_SUCCESS  The VTd entry is updated to always enable all DMA access for the specific device.
+**/
+EFI_STATUS
+AlwaysEnablePageAttribute (
+  IN UINT16         Segment,
+  IN VTD_SOURCE_ID  SourceId
+  );
+
+/**
+  Convert the DeviceHandle to SourceId and Segment.
+
+  @param[in]   DeviceHandle  The device that initiates the DMA access request.
+  @param[out]  Segment       The Segment used to identify a VTd engine.
+  @param[out]  SourceId      The SourceId used to identify a VTd engine and table entry.
+
+  @retval EFI_SUCCESS            The Segment and SourceId are returned.
+  @retval EFI_INVALID_PARAMETER  DeviceHandle is an invalid handle.
+  @retval EFI_UNSUPPORTED        DeviceHandle is unknown by the IOMMU.
+**/
+EFI_STATUS
+DeviceHandleToSourceId (
+  IN  EFI_HANDLE     DeviceHandle,
+  OUT UINT16         *Segment,
+  OUT VTD_SOURCE_ID  *SourceId
+  );
+
+/**
+  Get device information from mapping.
+
+  @param[in]   Mapping        The mapping.
+  @param[out]  DeviceAddress  The device address of the mapping.
+  @param[out]  NumberOfPages  The number of pages of the mapping.
+
+  @retval EFI_SUCCESS            The device information is returned.
+  @retval EFI_INVALID_PARAMETER  The mapping is invalid.
+**/
+EFI_STATUS
+GetDeviceInfoFromMapping (
+  IN  VOID                  *Mapping,
+  OUT EFI_PHYSICAL_ADDRESS  *DeviceAddress,
+  OUT UINTN                 *NumberOfPages
+  );
+
+/**
+  Initialize DMA protection.
+**/
+VOID
+InitializeDmaProtection (
+  VOID
+  );
+
+/**
+  Allocate zero pages.
+
+  @param[in]  Pages  The number of pages.
+
+  @return The page address.
+  @retval NULL  Not enough resources to allocate pages.
+**/
+VOID *
+EFIAPI
+AllocateZeroPages (
+  IN UINTN  Pages
+  );
+
+/**
+  Flush VTd page table and context table memory.
+
+  This action is to make sure the IOMMU engine can get the final data in memory.
+
+  @param[in]  VtdIndex  The index used to identify a VTd engine.
+  @param[in]  Base      The base address of memory to be flushed.
+  @param[in]  Size      The size of memory in bytes to be flushed.
+**/
+VOID
+FlushPageTableMemory (
+  IN UINTN  VtdIndex,
+  IN UINTN  Base,
+  IN UINTN  Size
+  );
+
+/**
+  Get PCI device information from a DMAR DevScopeEntry.
+
+  @param[in]   Segment            The segment number.
+  @param[in]   DmarDevScopeEntry  The DMAR DevScopeEntry.
+  @param[out]  Bus                The bus number.
+  @param[out]  Device             The device number.
+  @param[out]  Function           The function number.
+
+  @retval EFI_SUCCESS  The PCI device information is returned.
+**/
+EFI_STATUS
+GetPciBusDeviceFunction (
+  IN  UINT16                                       Segment,
+  IN  EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER  *DmarDevScopeEntry,
+  OUT UINT8                                        *Bus,
+  OUT UINT8                                        *Device,
+  OUT UINT8                                        *Function
+  );
+
+/**
+  Append a VTd access request to the global request list.
+
+  @param[in]  Segment      The Segment used to identify a VTd engine.
+  @param[in]  SourceId     The SourceId used to identify a VTd engine and table entry.
+  @param[in]  BaseAddress  The base of device memory address to be used as the DMA memory.
+  @param[in]  Length       The length of device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess  The IOMMU access.
+
+  @retval EFI_SUCCESS            The IoMmuAccess is set for the memory range specified by BaseAddress and Length.
+  @retval EFI_INVALID_PARAMETER  BaseAddress is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is 0.
+  @retval EFI_INVALID_PARAMETER  IoMmuAccess specified an illegal combination of access.
+  @retval EFI_UNSUPPORTED        The bit mask of IoMmuAccess is not supported by the IOMMU.
+  @retval EFI_UNSUPPORTED        The IOMMU does not support the memory range specified by BaseAddress and Length.
+  @retval EFI_OUT_OF_RESOURCES   There are not enough resources available to modify the IOMMU access.
+  @retval EFI_DEVICE_ERROR       The IOMMU device reported an error while attempting the operation.
+
+**/
+EFI_STATUS
+RequestAccessAttribute (
+  IN UINT16         Segment,
+  IN VTD_SOURCE_ID  SourceId,
+  IN UINT64         BaseAddress,
+  IN UINT64         Length,
+  IN UINT64         IoMmuAccess
+  );
+
+/**
+  Add a new VTd log event.
+
+  @param[in]  EventType  Event type.
+  @param[in]  Data1      First parameter.
+  @param[in]  Data2      Second parameter.
+
+**/
+VOID
+EFIAPI
+VTdLogAddEvent (
+  IN CONST VTDLOG_EVENT_TYPE  EventType,
+  IN CONST UINT64             Data1,
+  IN CONST UINT64             Data2
+  );
+
+/**
+  Add a new VTd log event with data.
+
+  @param[in]  EventType  Event type.
+  @param[in]  Param      Parameter.
+  @param[in]  Data       Data.
+  @param[in]  DataSize   Data size.
+
+**/
+VOID
+EFIAPI
+VTdLogAddDataEvent (
+  IN CONST VTDLOG_EVENT_TYPE  EventType,
+  IN CONST UINT64             Param,
+  IN CONST VOID               *Data,
+  IN CONST UINT32             DataSize
+  );
+
+/**
+  Initializes the VTd log.
+
+**/
+VOID
+EFIAPI
+VTdLogInitialize (
+  VOID
+  );
+
+#endif
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmarAcpiTable.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmarAcpiTable.c
new file mode 100644
index 000000000..21f559983
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/DmarAcpiTable.c
@@ -0,0 +1,398 @@
+/** @file
+
+  Copyright (c) 2017 - 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "DmaProtection.h"
+
+#pragma pack(1)
+
+typedef struct {
+  EFI_ACPI_DESCRIPTION_HEADER  Header;
+  UINT32                       Entry;
+} RSDT_TABLE;
+
+typedef struct {
+  EFI_ACPI_DESCRIPTION_HEADER  Header;
+  UINT64                       Entry;
+} XSDT_TABLE;
+
+#pragma pack()
+
+EFI_ACPI_DMAR_HEADER  *mAcpiDmarTable = NULL;
+
+/**
+  Dump the DMAR ACPI table.
+**/
+VOID
+VtdDumpDmarTable (
+  VOID
+  )
+{
+  VtdLibDumpAcpiDmar (NULL, NULL, (EFI_ACPI_DMAR_HEADER *) (UINTN) mAcpiDmarTable);
+
+  VTdLogAddDataEvent (VTDLOG_DXE_DMAR_TABLE, mAcpiDmarTable->Header.Length, (VOID *)mAcpiDmarTable, mAcpiDmarTable->Header.Length);
+}
+
+/**
+  Get PCI device information from a DMAR DevScopeEntry.
+
+  @param[in]   Segment            The segment number.
+  @param[in]   DmarDevScopeEntry  The DMAR DevScopeEntry.
+  @param[out]  Bus                The bus number.
+  @param[out]  Device             The device number.
+  @param[out]  Function           The function number.
+
+  @retval EFI_SUCCESS  The PCI device information is returned.
+**/
+EFI_STATUS
+GetPciBusDeviceFunction (
+  IN  UINT16                                       Segment,
+  IN  EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER  *DmarDevScopeEntry,
+  OUT UINT8                                        *Bus,
+  OUT UINT8                                        *Device,
+  OUT UINT8                                        *Function
+  )
+{
+  EFI_ACPI_DMAR_PCI_PATH  *DmarPciPath;
+  UINT8                   MyBus;
+  UINT8                   MyDevice;
+  UINT8                   MyFunction;
+
+  DmarPciPath = (EFI_ACPI_DMAR_PCI_PATH *)((UINTN)(DmarDevScopeEntry + 1));
+  MyBus      = DmarDevScopeEntry->StartBusNumber;
+  MyDevice   = DmarPciPath->Device;
+  MyFunction = DmarPciPath->Function;
+
+  switch (DmarDevScopeEntry->Type) {
+  case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT:
+  case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE:
+    while ((UINTN)DmarPciPath + sizeof(EFI_ACPI_DMAR_PCI_PATH) < (UINTN)DmarDevScopeEntry + DmarDevScopeEntry->Length) {
+      MyBus = PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS (Segment, MyBus, MyDevice, MyFunction, PCI_BRIDGE_SECONDARY_BUS_REGISTER_OFFSET));
+      DmarPciPath++;
+      MyDevice   = DmarPciPath->Device;
+      MyFunction = DmarPciPath->Function;
+    }
+    break;
+  case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_IOAPIC:
+  case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_MSI_CAPABLE_HPET:
+  case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_ACPI_NAMESPACE_DEVICE:
+    break;
+  }
+
+  *Bus      = MyBus;
+  *Device   = MyDevice;
+  *Function = MyFunction;
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Process the DMAR DRHD table.
+
+  @param[in]  VtdIndex  The index of VTd engine.
+  @param[in]  DmarDrhd  The DRHD table.
+
+  @retval EFI_SUCCESS  The DRHD table is processed.
+**/
+EFI_STATUS
+ProcessDrhd (
+  IN UINTN                      VtdIndex,
+  IN EFI_ACPI_DMAR_DRHD_HEADER  *DmarDrhd
+  )
+{
+  EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER  *DmarDevScopeEntry;
+  UINT8                                        Bus;
+  UINT8                                        Device;
+  UINT8                                        Function;
+  UINT8                                        SecondaryBusNumber;
+  EFI_STATUS                                   Status;
+  VTD_SOURCE_ID                                SourceId;
+
+  mVtdUnitInformation[VtdIndex].PciDeviceInfo = AllocateZeroPool (sizeof (PCI_DEVICE_INFORMATION) + sizeof (PCI_DEVICE_DATA) * MAX_VTD_PCI_DATA_NUMBER);
+  if (mVtdUnitInformation[VtdIndex].PciDeviceInfo == NULL) {
+    ASSERT (FALSE);
+    return EFI_OUT_OF_RESOURCES;
+  }
+
+  mVtdUnitInformation[VtdIndex].Segment            = DmarDrhd->SegmentNumber;
+  mVtdUnitInformation[VtdIndex].VtdUnitBaseAddress = (UINTN)DmarDrhd->RegisterBaseAddress;
+  DEBUG ((DEBUG_INFO, "  VTD (%d) BaseAddress - 0x%016lx\n", VtdIndex, DmarDrhd->RegisterBaseAddress));
+
+  mVtdUnitInformation[VtdIndex].PciDeviceInfo->Segment                = DmarDrhd->SegmentNumber;
+  mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceDataMaxNumber = MAX_VTD_PCI_DATA_NUMBER;
+
+  if ((DmarDrhd->Flags & EFI_ACPI_DMAR_DRHD_FLAGS_INCLUDE_PCI_ALL) != 0) {
+    mVtdUnitInformation[VtdIndex].PciDeviceInfo->IncludeAllFlag = TRUE;
+    DEBUG ((DEBUG_INFO, "  ProcessDrhd: with INCLUDE ALL\n"));
+
+    Status = ScanAllPciBus ((VOID *)VtdIndex, DmarDrhd->SegmentNumber, ScanBusCallbackRegisterPciDevice);
+    if (EFI_ERROR (Status)) {
+      return Status;
+    }
+  } else {
+    mVtdUnitInformation[VtdIndex].PciDeviceInfo->IncludeAllFlag = FALSE;
+    DEBUG ((DEBUG_INFO, "  ProcessDrhd: without INCLUDE ALL\n"));
+
+  }
+
+  DmarDevScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *)((UINTN)(DmarDrhd + 1));
+  while ((UINTN)DmarDevScopeEntry < (UINTN)DmarDrhd + DmarDrhd->Header.Length) {
+
+    Status = GetPciBusDeviceFunction (DmarDrhd->SegmentNumber, DmarDevScopeEntry, &Bus, &Device, &Function);
+    if (EFI_ERROR (Status)) {
+      return Status;
+    }
+
+    DEBUG ((DEBUG_INFO,"  ProcessDrhd: "));
+    switch (DmarDevScopeEntry->Type) {
+    case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT:
+      DEBUG ((DEBUG_INFO,"PCI Endpoint"));
+      break;
+    case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE:
+      DEBUG ((DEBUG_INFO,"PCI-PCI bridge"));
+      break;
+    case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_IOAPIC:
+      DEBUG ((DEBUG_INFO,"IOAPIC"));
+      break;
+    case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_MSI_CAPABLE_HPET:
+      DEBUG ((DEBUG_INFO,"MSI Capable HPET"));
+      break;
+    case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_ACPI_NAMESPACE_DEVICE:
+      DEBUG ((DEBUG_INFO,"ACPI Namespace Device"));
+      break;
+    }
+    DEBUG ((DEBUG_INFO," S%04x B%02x D%02x F%02x\n", DmarDrhd->SegmentNumber, Bus, Device, Function));
+
+    SourceId.Bits.Bus      = Bus;
+    SourceId.Bits.Device   = Device;
+    SourceId.Bits.Function = Function;
+
+    Status = RegisterPciDevice (VtdIndex, DmarDrhd->SegmentNumber, SourceId, DmarDevScopeEntry->Type, TRUE);
+    if (EFI_ERROR (Status)) {
+      //
+      // There might be duplication for special device other than standard PCI device.
+      //
+      switch (DmarDevScopeEntry->Type) {
+      case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT:
+      case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE:
+        return Status;
+      }
+    }
+
+    switch (DmarDevScopeEntry->Type) {
+    case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE:
+      SecondaryBusNumber = PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS(DmarDrhd->SegmentNumber, Bus, Device, Function, PCI_BRIDGE_SECONDARY_BUS_REGISTER_OFFSET));
+      Status = ScanPciBus ((VOID *)VtdIndex, DmarDrhd->SegmentNumber, SecondaryBusNumber, ScanBusCallbackRegisterPciDevice);
+      if (EFI_ERROR (Status)) {
+        return Status;
+      }
+      break;
+    default:
+      break;
+    }
+
+    DmarDevScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *)((UINTN)DmarDevScopeEntry + DmarDevScopeEntry->Length);
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Process DMAR RMRR table.
+
+  @param[in]  DmarRmrr  The RMRR table.
+
+  @retval EFI_SUCCESS The RMRR table is processed.
+**/
+EFI_STATUS
+ProcessRmrr (
+  IN EFI_ACPI_DMAR_RMRR_HEADER  *DmarRmrr
+  )
+{
+  EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER  *DmarDevScopeEntry;
+  UINT8                                        Bus;
+  UINT8                                        Device;
+  UINT8                                        Function;
+  EFI_STATUS                                   Status;
+  VTD_SOURCE_ID                                SourceId;
+
+  DEBUG ((DEBUG_INFO,"  RMRR (Base 0x%016lx, Limit 0x%016lx)\n", DmarRmrr->ReservedMemoryRegionBaseAddress, DmarRmrr->ReservedMemoryRegionLimitAddress));
+
+  DmarDevScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *)((UINTN)(DmarRmrr + 1));
+  while ((UINTN)DmarDevScopeEntry < (UINTN)DmarRmrr + DmarRmrr->Header.Length) {
+    if (DmarDevScopeEntry->Type != EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT) {
+      DEBUG ((DEBUG_INFO,"RMRR DevScopeEntryType is not endpoint, type[0x%x] \n", DmarDevScopeEntry->Type));
+      return EFI_DEVICE_ERROR;
+    }
+
+    Status = GetPciBusDeviceFunction (DmarRmrr->SegmentNumber, DmarDevScopeEntry, &Bus, &Device, &Function);
+    if (EFI_ERROR (Status)) {
+      return Status;
+    }
+
+    DEBUG ((DEBUG_INFO,"RMRR S%04x B%02x D%02x F%02x\n", DmarRmrr->SegmentNumber, Bus, Device, Function));
+
+    SourceId.Bits.Bus      = Bus;
+    SourceId.Bits.Device   = Device;
+    SourceId.Bits.Function = Function;
+    Status = SetAccessAttribute (
+               DmarRmrr->SegmentNumber,
+               SourceId,
+               DmarRmrr->ReservedMemoryRegionBaseAddress,
+               DmarRmrr->ReservedMemoryRegionLimitAddress + 1 - DmarRmrr->ReservedMemoryRegionBaseAddress,
+               EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE
+               );
+    if (EFI_ERROR (Status)) {
+      return Status;
+    }
+
+    DmarDevScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *)((UINTN)DmarDevScopeEntry + DmarDevScopeEntry->Length);
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Get VTd engine number.
+**/
+UINTN
+GetVtdEngineNumber (
+  VOID
+  )
+{
+  EFI_ACPI_DMAR_STRUCTURE_HEADER  *DmarHeader;
+  UINTN                           VtdIndex;
+
+  VtdIndex = 0;
+  DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *)((UINTN)(mAcpiDmarTable + 1));
+  while ((UINTN)DmarHeader < (UINTN)mAcpiDmarTable + mAcpiDmarTable->Header.Length) {
+    switch (DmarHeader->Type) {
+    case EFI_ACPI_DMAR_TYPE_DRHD:
+      VtdIndex++;
+      break;
+    default:
+      break;
+    }
+    DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *)((UINTN)DmarHeader + DmarHeader->Length);
+  }
+  return VtdIndex;
+}
+
+/**
+  Parse DMAR DRHD table.
+
+  @return EFI_SUCCESS  The DMAR DRHD table is parsed.
+**/
+EFI_STATUS
+ParseDmarAcpiTableDrhd (
+  VOID
+  )
+{
+  EFI_ACPI_DMAR_STRUCTURE_HEADER  *DmarHeader;
+  EFI_STATUS                      Status;
+  UINTN                           VtdIndex;
+
+  mVtdUnitNumber = GetVtdEngineNumber ();
+  DEBUG ((DEBUG_INFO,"  VtdUnitNumber - %d\n", mVtdUnitNumber));
+  ASSERT (mVtdUnitNumber > 0);
+  if (mVtdUnitNumber == 0) {
+    return EFI_DEVICE_ERROR;
+  }
+
+  mVtdUnitInformation = AllocateZeroPool (sizeof(*mVtdUnitInformation) * mVtdUnitNumber);
+  ASSERT (mVtdUnitInformation != NULL);
+  if (mVtdUnitInformation == NULL) {
+    return EFI_OUT_OF_RESOURCES;
+  }
+
+  VtdIndex = 0;
+  DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *)((UINTN)(mAcpiDmarTable + 1));
+  while ((UINTN)DmarHeader < (UINTN)mAcpiDmarTable + mAcpiDmarTable->Header.Length) {
+    switch (DmarHeader->Type) {
+    case EFI_ACPI_DMAR_TYPE_DRHD:
+      ASSERT (VtdIndex < mVtdUnitNumber);
+      Status = ProcessDrhd (VtdIndex, (EFI_ACPI_DMAR_DRHD_HEADER *)DmarHeader);
+      if (EFI_ERROR (Status)) {
+        return Status;
+      }
+      VtdIndex++;
+
+      break;
+
+    default:
+      break;
+    }
+    DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *)((UINTN)DmarHeader + DmarHeader->Length);
+  }
+  ASSERT (VtdIndex == mVtdUnitNumber);
+
+  for (VtdIndex = 0; VtdIndex < mVtdUnitNumber; VtdIndex++) {
+    VtdLibDumpPciDeviceInfo (NULL, NULL, mVtdUnitInformation[VtdIndex].PciDeviceInfo);
+
+    VTdLogAddDataEvent (VTDLOG_DXE_PCI_DEVICE,
+                        mVtdUnitInformation[VtdIndex].VtdUnitBaseAddress,
+                        mVtdUnitInformation[VtdIndex].PciDeviceInfo,
+                        sizeof (PCI_DEVICE_INFORMATION) + sizeof (PCI_DEVICE_DATA) * mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceDataNumber);
+  }
+  return EFI_SUCCESS;
+}
+
+/**
+  Parse DMAR RMRR table.
+
+  @return EFI_SUCCESS  The DMAR RMRR table is parsed.
+**/
+EFI_STATUS
+ParseDmarAcpiTableRmrr (
+  VOID
+  )
+{
+  EFI_ACPI_DMAR_STRUCTURE_HEADER  *DmarHeader;
+  EFI_STATUS                      Status;
+
+  DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *)((UINTN)(mAcpiDmarTable + 1));
+  while ((UINTN)DmarHeader < (UINTN)mAcpiDmarTable + mAcpiDmarTable->Header.Length) {
+    switch (DmarHeader->Type) {
+    case EFI_ACPI_DMAR_TYPE_RMRR:
+      Status = ProcessRmrr ((EFI_ACPI_DMAR_RMRR_HEADER *)DmarHeader);
+      if (EFI_ERROR (Status)) {
+        return Status;
+      }
+      break;
+    default:
+      break;
+    }
+    DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *)((UINTN)DmarHeader + DmarHeader->Length);
+  }
+  return EFI_SUCCESS;
+}
+
+/**
+  Get the DMAR ACPI table.
+
+  @retval EFI_SUCCESS           The DMAR ACPI table is found.
+  @retval EFI_ALREADY_STARTED   The DMAR ACPI table has already been located.
+  @retval EFI_NOT_FOUND         The DMAR ACPI table is not found.
+**/
+EFI_STATUS
+GetDmarAcpiTable (
+  VOID
+  )
+{
+  if (mAcpiDmarTable != NULL) {
+    return EFI_ALREADY_STARTED;
+  }
+
+  mAcpiDmarTable = (EFI_ACPI_DMAR_HEADER *) EfiLocateFirstAcpiTable (
+                                              EFI_ACPI_4_0_DMA_REMAPPING_TABLE_SIGNATURE
+                                              );
+  if (mAcpiDmarTable == NULL) {
+    return EFI_NOT_FOUND;
+  }
+  DEBUG ((DEBUG_INFO,"DMAR Table - 0x%08x\n", mAcpiDmarTable));
+  VtdDumpDmarTable();
+
+  return EFI_SUCCESS;
+}
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.c
new file mode 100644
index 000000000..8449b2885
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.c
@@ -0,0 +1,412 @@
+/** @file
+  Intel VTd driver.
+
+  Copyright (c) 2017 - 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "DmaProtection.h"
+
+/**
+  Provides the controller-specific addresses required to access system memory from a
+  DMA bus master.
+
+  @param  This                  The protocol instance pointer.
+  @param  Operation             Indicates if the bus master is going to read or write to system memory.
+  @param  HostAddress           The system memory address to map to the PCI controller.
+  @param  NumberOfBytes         On input the number of bytes to map. On output the number of bytes
+                                that were mapped.
+  @param  DeviceAddress         The resulting map address for the bus master PCI controller to use to
+                                access the hosts HostAddress.
+  @param  Mapping               A resulting value to pass to Unmap().
+
+  @retval EFI_SUCCESS           The range was mapped for the returned NumberOfBytes.
+  @retval EFI_UNSUPPORTED       The HostAddress cannot be mapped as a common buffer.
+  @retval EFI_INVALID_PARAMETER One or more parameters are invalid.
+  @retval EFI_OUT_OF_RESOURCES  The request could not be completed due to a lack of resources.
+  @retval EFI_DEVICE_ERROR      The system hardware could not map the requested address.
+
+**/
+EFI_STATUS
+EFIAPI
+IoMmuMap (
+  IN     EDKII_IOMMU_PROTOCOL   *This,
+  IN     EDKII_IOMMU_OPERATION  Operation,
+  IN     VOID                   *HostAddress,
+  IN OUT UINTN                  *NumberOfBytes,
+  OUT    EFI_PHYSICAL_ADDRESS   *DeviceAddress,
+  OUT    VOID                   **Mapping
+  );
+
+/**
+  Completes the Map() operation and releases any corresponding resources.
+
+  @param  This                  The protocol instance pointer.
+  @param  Mapping               The mapping value returned from Map().
+
+  @retval EFI_SUCCESS           The range was unmapped.
+  @retval EFI_INVALID_PARAMETER Mapping is not a value that was returned by Map().
+  @retval EFI_DEVICE_ERROR      The data was not committed to the target system memory.
+**/
+EFI_STATUS
+EFIAPI
+IoMmuUnmap (
+  IN EDKII_IOMMU_PROTOCOL  *This,
+  IN VOID                  *Mapping
+  );
+
+/**
+  Allocates pages that are suitable for an OperationBusMasterCommonBuffer or
+  OperationBusMasterCommonBuffer64 mapping.
+
+  @param  This                  The protocol instance pointer.
+  @param  Type                  This parameter is not used and must be ignored.
+  @param  MemoryType            The type of memory to allocate, EfiBootServicesData or
+                                EfiRuntimeServicesData.
+  @param  Pages                 The number of pages to allocate.
+  @param  HostAddress           A pointer to store the base system memory address of the
+                                allocated range.
+  @param  Attributes            The requested bit mask of attributes for the allocated range.
+
+  @retval EFI_SUCCESS           The requested memory pages were allocated.
+  @retval EFI_UNSUPPORTED       Attributes is unsupported. The only legal attribute bits are
+                                MEMORY_WRITE_COMBINE, MEMORY_CACHED and DUAL_ADDRESS_CYCLE.
+  @retval EFI_INVALID_PARAMETER One or more parameters are invalid.
+  @retval EFI_OUT_OF_RESOURCES  The memory pages could not be allocated.
+
+**/
+EFI_STATUS
+EFIAPI
+IoMmuAllocateBuffer (
+  IN     EDKII_IOMMU_PROTOCOL  *This,
+  IN     EFI_ALLOCATE_TYPE     Type,
+  IN     EFI_MEMORY_TYPE       MemoryType,
+  IN     UINTN                 Pages,
+  IN OUT VOID                  **HostAddress,
+  IN     UINT64                Attributes
+  );
+
+/**
+  Frees memory that was allocated with AllocateBuffer().
+
+  @param  This                  The protocol instance pointer.
+  @param  Pages                 The number of pages to free.
+  @param  HostAddress           The base system memory address of the allocated range.
+
+  @retval EFI_SUCCESS           The requested memory pages were freed.
+  @retval EFI_INVALID_PARAMETER The memory range specified by HostAddress and Pages
+                                was not allocated with AllocateBuffer().
+
+**/
+EFI_STATUS
+EFIAPI
+IoMmuFreeBuffer (
+  IN EDKII_IOMMU_PROTOCOL  *This,
+  IN UINTN                 Pages,
+  IN VOID                  *HostAddress
+  );
+
+/**
+  This function fills DeviceHandle/IoMmuAccess to the MAP_HANDLE_INFO,
+  based upon the DeviceAddress.
+
+  @param[in]  DeviceHandle      The device who initiates the DMA access request.
+  @param[in]  DeviceAddress     The base of device memory address to be used as the DMA memory.
+  @param[in]  Length            The length of device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess       The IOMMU access.
+
+**/
+VOID
+SyncDeviceHandleToMapInfo (
+  IN EFI_HANDLE            DeviceHandle,
+  IN EFI_PHYSICAL_ADDRESS  DeviceAddress,
+  IN UINT64                Length,
+  IN UINT64                IoMmuAccess
+  );
+
+/**
+  Convert the DeviceHandle to SourceId and Segment.
+
+  @param[in]  DeviceHandle      The device who initiates the DMA access request.
+  @param[out] Segment           The Segment used to identify a VTd engine.
+  @param[out] SourceId          The SourceId used to identify a VTd engine and table entry.
+
+  @retval EFI_SUCCESS           The Segment and SourceId are returned.
+  @retval EFI_INVALID_PARAMETER DeviceHandle is an invalid handle.
+  @retval EFI_UNSUPPORTED       DeviceHandle is unknown by the IOMMU.
+**/
+EFI_STATUS
+DeviceHandleToSourceId (
+  IN EFI_HANDLE      DeviceHandle,
+  OUT UINT16         *Segment,
+  OUT VTD_SOURCE_ID  *SourceId
+  )
+{
+  EFI_PCI_IO_PROTOCOL             *PciIo;
+  UINTN                           Seg;
+  UINTN                           Bus;
+  UINTN                           Dev;
+  UINTN                           Func;
+  EFI_STATUS                      Status;
+  EDKII_PLATFORM_VTD_DEVICE_INFO  DeviceInfo;
+
+  Status = EFI_NOT_FOUND;
+  if (mPlatformVTdPolicy != NULL) {
+    Status = mPlatformVTdPolicy->GetDeviceId (mPlatformVTdPolicy, DeviceHandle, &DeviceInfo);
+    if (!EFI_ERROR(Status)) {
+      *Segment  = DeviceInfo.Segment;
+      *SourceId = DeviceInfo.SourceId;
+      return EFI_SUCCESS;
+    }
+  }
+
+  Status = gBS->HandleProtocol (DeviceHandle, &gEfiPciIoProtocolGuid, (VOID **)&PciIo);
+  if (EFI_ERROR(Status)) {
+    return EFI_UNSUPPORTED;
+  }
+  Status = PciIo->GetLocation (PciIo, &Seg, &Bus, &Dev, &Func);
+  if (EFI_ERROR(Status)) {
+    return EFI_UNSUPPORTED;
+  }
+  *Segment = (UINT16)Seg;
+  SourceId->Bits.Bus      = (UINT8)Bus;
+  SourceId->Bits.Device   = (UINT8)Dev;
+  SourceId->Bits.Function = (UINT8)Func;
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Set IOMMU attribute for a system memory.
+
+  If the IOMMU protocol exists, the system memory cannot be used
+  for DMA by default.
+
+  When a device requests a DMA access for a system memory,
+  the device driver needs to use SetAttribute() to update the IOMMU
+  attribute to request DMA access (read and/or write).
+
+  The DeviceHandle is used to identify which device submits the request.
+  The IOMMU implementation needs to translate the device path to an IOMMU device ID,
+  and set the IOMMU hardware register accordingly.
+  1) DeviceHandle can be a standard PCI device.
+     The memory for BusMasterRead needs EDKII_IOMMU_ACCESS_READ set.
+     The memory for BusMasterWrite needs EDKII_IOMMU_ACCESS_WRITE set.
+     The memory for BusMasterCommonBuffer needs EDKII_IOMMU_ACCESS_READ|EDKII_IOMMU_ACCESS_WRITE set.
+     After the memory is used, the attributes need to be set to 0 to keep the memory protected.
+  2) DeviceHandle can be an ACPI device (ISA, I2C, SPI, etc).
+     The memory for DMA access needs EDKII_IOMMU_ACCESS_READ and/or EDKII_IOMMU_ACCESS_WRITE set.
+
+  @param[in]  This              The protocol instance pointer.
+  @param[in]  DeviceHandle      The device who initiates the DMA access request.
+  @param[in]  DeviceAddress     The base of device memory address to be used as the DMA memory.
+  @param[in]  Length            The length of device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess       The IOMMU access.
+
+  @retval EFI_SUCCESS           The IoMmuAccess is set for the memory range specified by DeviceAddress and Length.
+  @retval EFI_INVALID_PARAMETER DeviceHandle is an invalid handle.
+  @retval EFI_INVALID_PARAMETER DeviceAddress is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER Length is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER Length is 0.
+  @retval EFI_INVALID_PARAMETER IoMmuAccess specified an illegal combination of access.
+  @retval EFI_UNSUPPORTED       DeviceHandle is unknown by the IOMMU.
+  @retval EFI_UNSUPPORTED       The bit mask of IoMmuAccess is not supported by the IOMMU.
+  @retval EFI_UNSUPPORTED       The IOMMU does not support the memory range specified by DeviceAddress and Length.
+  @retval EFI_OUT_OF_RESOURCES  There are not enough resources available to modify the IOMMU access.
+  @retval EFI_DEVICE_ERROR      The IOMMU device reported an error while attempting the operation.
+
+**/
+EFI_STATUS
+VTdSetAttribute (
+  IN EDKII_IOMMU_PROTOCOL  *This,
+  IN EFI_HANDLE            DeviceHandle,
+  IN EFI_PHYSICAL_ADDRESS  DeviceAddress,
+  IN UINT64                Length,
+  IN UINT64                IoMmuAccess
+  )
+{
+  EFI_STATUS                  Status;
+  UINT16                      Segment;
+  VTD_SOURCE_ID               SourceId;
+  CHAR8                       PerfToken[sizeof("VTD(S0000.B00.D00.F00)")];
+  UINT32                      Identifier;
+  VTD_PROTOCOL_SET_ATTRIBUTE  LogSetAttribute;
+
+  DumpVtdIfError ();
+
+  Status = DeviceHandleToSourceId (DeviceHandle, &Segment, &SourceId);
+  if (EFI_ERROR(Status)) {
+    return Status;
+  }
+
+  DEBUG ((DEBUG_VERBOSE, "IoMmuSetAttribute: "));
+  DEBUG ((DEBUG_VERBOSE, "PCI(S%x.B%x.D%x.F%x) ", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+  DEBUG ((DEBUG_VERBOSE, "(0x%lx~0x%lx) - %lx\n", DeviceAddress, Length, IoMmuAccess));
+
+  if (mAcpiDmarTable == NULL) {
+    //
+    // Record the entry to driver global variable.
+    // As such once VTd is activated, the setting can be adopted.
+    //
+    if ((PcdGet8 (PcdVTdPolicyPropertyMask) & BIT2) != 0) {
+      //
+      // Force no IOMMU access attribute request recording before DMAR table is installed.
+      //
+      ASSERT_EFI_ERROR (EFI_NOT_READY);
+      return EFI_NOT_READY;
+    }
+    Status = RequestAccessAttribute (Segment, SourceId, DeviceAddress, Length, IoMmuAccess);
+  } else {
+    PERF_CODE (
+      AsciiSPrint (PerfToken, sizeof(PerfToken), "S%04xB%02xD%02xF%01x", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function);
+      Identifier = (Segment << 16) | SourceId.Uint16;
+      PERF_START_EX (gImageHandle, PerfToken, "IntelVTD", 0, Identifier);
+    );
+
+    Status = SetAccessAttribute (Segment, SourceId, DeviceAddress, Length, IoMmuAccess);
+
+    PERF_CODE (
+      Identifier = (Segment << 16) | SourceId.Uint16;
+      PERF_END_EX (gImageHandle, PerfToken, "IntelVTD", 0, Identifier);
+    );
+  }
+
+  if (!EFI_ERROR(Status)) {
+    SyncDeviceHandleToMapInfo (
+      DeviceHandle,
+      DeviceAddress,
+      Length,
+      IoMmuAccess
+      );
+  }
+
+  LogSetAttribute.SourceId.Uint16 = SourceId.Uint16;
+  LogSetAttribute.DeviceAddress   = DeviceAddress;
+  LogSetAttribute.Length          = Length;
+  LogSetAttribute.IoMmuAccess     = IoMmuAccess;
+  LogSetAttribute.Status          = Status;
+  VTdLogAddDataEvent (VTDLOG_DXE_IOMMU_SET_ATTRIBUTE, 0, &LogSetAttribute, sizeof (VTD_PROTOCOL_SET_ATTRIBUTE));
+
+  return Status;
+}
+
+/**
+  Set IOMMU attribute for a system memory.
+
+  If the IOMMU protocol exists, the system memory cannot be used
+  for DMA by default.
+
+  When a device requests a DMA access for a system memory,
+  the device driver needs to use SetAttribute() to update the IOMMU
+  attribute to request DMA access (read and/or write).
+
+  The DeviceHandle is used to identify which device submits the request.
+  The IOMMU implementation needs to translate the device path to an IOMMU device ID,
+  and set the IOMMU hardware register accordingly.
+  1) DeviceHandle can be a standard PCI device.
+     The memory for BusMasterRead needs EDKII_IOMMU_ACCESS_READ set.
+     The memory for BusMasterWrite needs EDKII_IOMMU_ACCESS_WRITE set.
+     The memory for BusMasterCommonBuffer needs EDKII_IOMMU_ACCESS_READ|EDKII_IOMMU_ACCESS_WRITE set.
+     After the memory is used, the attributes need to be set to 0 to keep the memory protected.
+  2) DeviceHandle can be an ACPI device (ISA, I2C, SPI, etc).
+     The memory for DMA access needs EDKII_IOMMU_ACCESS_READ and/or EDKII_IOMMU_ACCESS_WRITE set.
+
+  @param[in]  This              The protocol instance pointer.
+  @param[in]  DeviceHandle      The device who initiates the DMA access request.
+  @param[in]  Mapping           The mapping value returned from Map().
+  @param[in]  IoMmuAccess       The IOMMU access.
+
+  @retval EFI_SUCCESS           The IoMmuAccess is set for the memory range specified by DeviceAddress and Length.
+  @retval EFI_INVALID_PARAMETER DeviceHandle is an invalid handle.
+  @retval EFI_INVALID_PARAMETER Mapping is not a value that was returned by Map().
+  @retval EFI_INVALID_PARAMETER IoMmuAccess specified an illegal combination of access.
+  @retval EFI_UNSUPPORTED       DeviceHandle is unknown by the IOMMU.
+  @retval EFI_UNSUPPORTED       The bit mask of IoMmuAccess is not supported by the IOMMU.
+  @retval EFI_UNSUPPORTED       The IOMMU does not support the memory range specified by Mapping.
+  @retval EFI_OUT_OF_RESOURCES  There are not enough resources available to modify the IOMMU access.
+  @retval EFI_DEVICE_ERROR      The IOMMU device reported an error while attempting the operation.
+
+**/
+EFI_STATUS
+EFIAPI
+IoMmuSetAttribute (
+  IN EDKII_IOMMU_PROTOCOL  *This,
+  IN EFI_HANDLE            DeviceHandle,
+  IN VOID                  *Mapping,
+  IN UINT64                IoMmuAccess
+  )
+{
+  EFI_STATUS            Status;
+  EFI_PHYSICAL_ADDRESS  DeviceAddress;
+  UINTN                 NumberOfPages;
+  EFI_TPL               OriginalTpl;
+
+  OriginalTpl = gBS->RaiseTPL (VTD_TPL_LEVEL);
+
+  Status = GetDeviceInfoFromMapping (Mapping, &DeviceAddress, &NumberOfPages);
+  if (!EFI_ERROR(Status)) {
+    Status = VTdSetAttribute (
+               This,
+               DeviceHandle,
+               DeviceAddress,
+               EFI_PAGES_TO_SIZE(NumberOfPages),
+               IoMmuAccess
+               );
+  }
+
+  gBS->RestoreTPL (OriginalTpl);
+
+  return Status;
+}
+
+EDKII_IOMMU_PROTOCOL  mIntelVTd = {
+  EDKII_IOMMU_PROTOCOL_REVISION,
+  IoMmuSetAttribute,
+  IoMmuMap,
+  IoMmuUnmap,
+  IoMmuAllocateBuffer,
+  IoMmuFreeBuffer,
+};
+
+/**
+  Initialize the VTd driver.
+
+  @param[in]  ImageHandle       ImageHandle of the loaded driver
+  @param[in]  SystemTable       Pointer to the System Table
+
+  @retval EFI_SUCCESS           The Protocol is installed.
+  @retval EFI_OUT_OF_RESOURCES  Not enough resources available to initialize driver.
+  @retval EFI_DEVICE_ERROR      A device error occurred attempting to initialize the driver.
+
+**/
+EFI_STATUS
+EFIAPI
+IntelVTdInitialize (
+  IN EFI_HANDLE        ImageHandle,
+  IN EFI_SYSTEM_TABLE  *SystemTable
+  )
+{
+  EFI_STATUS  Status;
+  EFI_HANDLE  Handle;
+
+  if ((PcdGet8(PcdVTdPolicyPropertyMask) & BIT0) == 0) {
+    return EFI_UNSUPPORTED;
+  }
+
+  VTdLogInitialize ();
+
+  InitializeDmaProtection ();
+
+  Handle = NULL;
+  Status = gBS->InstallMultipleProtocolInterfaces (
+                  &Handle,
+                  &gEdkiiIoMmuProtocolGuid, &mIntelVTd,
+                  NULL
+                  );
+  ASSERT_EFI_ERROR (Status);
+
+  VTdLogAddEvent (VTDLOG_DXE_INSTALL_IOMMU_PROTOCOL, Status, 0);
+
+  return Status;
+}
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.inf b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.inf
new file mode 100644
index 000000000..210d6963f
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.inf
@@ -0,0 +1,93 @@
+## @file
+# Intel VTd DXE Driver.
+#
+# This driver initializes the VTd engine based upon the DMAR ACPI tables
+# and provides DMA protection to PCI or ACPI devices.
+#
+# Copyright (c) 2017 - 2023, Intel Corporation. All rights reserved.
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+##
+
+[Defines]
+  INF_VERSION                    = 0x00010005
+  BASE_NAME                      = IntelVTdCoreDxe
+  MODULE_UNI_FILE                = IntelVTdCoreDxe.uni
+  FILE_GUID                      = 5c83381f-34d3-4672-b8f3-83c3d6f3b00e
+  MODULE_TYPE                    = DXE_DRIVER
+  VERSION_STRING                 = 1.0
+  ENTRY_POINT                    = IntelVTdInitialize
+
+#
+# The following information is for reference only and not required by the build tools.
+#
+#  VALID_ARCHITECTURES           = IA32 X64 EBC
+#
+#
+
+[Sources]
+  IntelVTdCoreDxe.c
+  BmDma.c
+  DmaProtection.c
+  DmaProtection.h
+  DmarAcpiTable.c
+  PciInfo.c
+  TranslationTable.c
+  TranslationTableEx.c
+  VtdLog.c
+  VtdReg.c
+
+[Packages]
+  MdePkg/MdePkg.dec
+  MdeModulePkg/MdeModulePkg.dec
+  IntelSiliconPkg/IntelSiliconPkg.dec
+
+[LibraryClasses]
+  DebugLib
+  UefiDriverEntryPoint
+  UefiBootServicesTableLib
+  BaseLib
+  IoLib
+  HobLib
+  PciSegmentLib
+  BaseMemoryLib
+  MemoryAllocationLib
+  UefiLib
+  CacheMaintenanceLib
+  PerformanceLib
+  PrintLib
+  ReportStatusCodeLib
+  IntelVTdPeiDxeLib
+
+[Guids]
+  gVTdLogBufferHobGuid            ## CONSUMES
+  gEfiEventExitBootServicesGuid   ## CONSUMES ## Event
+  ## CONSUMES ## SystemTable
+  ## CONSUMES ## Event
+  gEfiAcpi20TableGuid
+  ## CONSUMES ## SystemTable
+  ## CONSUMES ## Event
+  gEfiAcpi10TableGuid
+
+[Protocols]
+  gEdkiiIoMmuProtocolGuid                 ## PRODUCES
+  gEfiPciIoProtocolGuid                   ## CONSUMES
+  gEfiPciEnumerationCompleteProtocolGuid  ## CONSUMES
+  gEdkiiPlatformVTdPolicyProtocolGuid     ## SOMETIMES_CONSUMES
+  gEfiPciRootBridgeIoProtocolGuid         ## CONSUMES
+  gEdkiiVTdLogProtocolGuid                ## PRODUCES
+
+[Pcd]
+  gIntelSiliconPkgTokenSpaceGuid.PcdVTdPolicyPropertyMask       ## CONSUMES
+  gIntelSiliconPkgTokenSpaceGuid.PcdErrorCodeVTdError           ## CONSUMES
+  gIntelSiliconPkgTokenSpaceGuid.PcdVTdSupportAbortDmaMode      ## CONSUMES
+  gIntelSiliconPkgTokenSpaceGuid.PcdVTdLogLevel                 ## CONSUMES
+  gIntelSiliconPkgTokenSpaceGuid.PcdVTdPeiPostMemLogBufferSize  ## CONSUMES
+  gIntelSiliconPkgTokenSpaceGuid.PcdVTdDxeLogBufferSize         ## CONSUMES
+
+[Depex]
+  gEfiPciRootBridgeIoProtocolGuid
+
+[UserExtensions.TianoCore."ExtraFiles"]
+  IntelVTdCoreDxeExtra.uni
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.uni b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.uni
new file mode 100644
index 000000000..73d2c83c4
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxe.uni
@@ -0,0 +1,14 @@
+// /** @file
+// IntelVTdDxe Module Localized Abstract and Description Content
+//
+// Copyright (c) 2017, Intel Corporation. All rights reserved.
+//
+// SPDX-License-Identifier: BSD-2-Clause-Patent
+//
+// **/
+
+
+#string STR_MODULE_ABSTRACT             #language en-US "Intel VTd CORE DXE Driver."
+
+#string STR_MODULE_DESCRIPTION          #language en-US "This driver initializes the VTd engine based upon the DMAR ACPI tables and provides DMA protection to PCI or ACPI devices."
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxeExtra.uni b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxeExtra.uni
new file mode 100644
index 000000000..7f1aec65e
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/IntelVTdCoreDxeExtra.uni
@@ -0,0 +1,14 @@
+// /** @file
+// IntelVTdDxe Localized Strings and Content
+//
+// Copyright (c) 2017, Intel Corporation. All rights reserved.
+//
+// SPDX-License-Identifier: BSD-2-Clause-Patent
+//
+// **/
+
+#string STR_PROPERTIES_MODULE_NAME
+#language en-US
+"Intel VTd CORE DXE Driver"
+
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/PciInfo.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/PciInfo.c
new file mode 100644
index 000000000..0eb832d6e
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/PciInfo.c
@@ -0,0 +1,418 @@
+/** @file
+
+  Copyright (c) 2017 - 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "DmaProtection.h"
+
+/**
+  Return the index of PCI data.
+
+  @param[in]  VtdIndex  The index used to identify a VTd engine.
+  @param[in]  Segment   The Segment used to identify a VTd engine.
+  @param[in]  SourceId  The SourceId used to identify a VTd engine and table entry.
+
+  @return The index of the PCI data.
+  @retval (UINTN)-1  The PCI data is not found.
+**/
+UINTN
+GetPciDataIndex (
+  IN UINTN          VtdIndex,
+  IN UINT16         Segment,
+  IN VTD_SOURCE_ID  SourceId
+  )
+{
+  UINTN          Index;
+  VTD_SOURCE_ID  *PciSourceId;
+
+  if (Segment != mVtdUnitInformation[VtdIndex].Segment) {
+    return (UINTN)-1;
+  }
+
+  for (Index = 0; Index < mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceDataNumber; Index++) {
+    PciSourceId = &mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceData[Index].PciSourceId;
+    if ((PciSourceId->Bits.Bus == SourceId.Bits.Bus) &&
+        (PciSourceId->Bits.Device == SourceId.Bits.Device) &&
+        (PciSourceId->Bits.Function == SourceId.Bits.Function) ) {
+      return Index;
+    }
+  }
+
+  return (UINTN)-1;
+}
+
+/**
+  Register PCI device to VTd engine.
+
+  @param[in]  VtdIndex    The index of VTd engine.
+  @param[in]  Segment     The segment of the source.
+  @param[in]  SourceId    The SourceId of the source.
+  @param[in]  DeviceType  The DMAR device scope type.
+  @param[in]  CheckExist  TRUE:  ERROR will be returned if the PCI device is already registered.
+                          FALSE: SUCCESS will be returned if the PCI device is registered.
+
+  @retval EFI_SUCCESS           The PCI device is registered.
+  @retval EFI_OUT_OF_RESOURCES  Not enough resources to register a new PCI device.
+  @retval EFI_ALREADY_STARTED   The device is already registered.
+**/
+EFI_STATUS
+RegisterPciDevice (
+  IN UINTN          VtdIndex,
+  IN UINT16         Segment,
+  IN VTD_SOURCE_ID  SourceId,
+  IN UINT8          DeviceType,
+  IN BOOLEAN        CheckExist
+  )
+{
+  PCI_DEVICE_INFORMATION            *PciDeviceInfo;
+  VTD_SOURCE_ID                     *PciSourceId;
+  UINTN                             PciDataIndex;
+  UINTN                             Index;
+  PCI_DEVICE_INFORMATION            *NewPciDeviceInfo;
+  EDKII_PLATFORM_VTD_PCI_DEVICE_ID  *PciDeviceId;
+
+  PciDeviceInfo = mVtdUnitInformation[VtdIndex].PciDeviceInfo;
+
+  if (PciDeviceInfo->IncludeAllFlag) {
+    //
+    // Do not register device in other VTD Unit
+    //
+    for (Index = 0; Index < VtdIndex; Index++) {
+      PciDataIndex = GetPciDataIndex (Index, Segment, SourceId);
+      if (PciDataIndex != (UINTN)-1) {
+        DEBUG ((DEBUG_INFO, "  RegisterPciDevice: PCI S%04x B%02x D%02x F%02x already registered by Other Vtd(%d)\n", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function, Index));
+        return EFI_SUCCESS;
+      }
+    }
+  }
+
+  PciDataIndex = GetPciDataIndex (VtdIndex, Segment, SourceId);
+  if (PciDataIndex == (UINTN)-1) {
+    //
+    // Register new
+    //
+
+    if (PciDeviceInfo->PciDeviceDataNumber >= PciDeviceInfo->PciDeviceDataMaxNumber) {
+      //
+      // Reallocate
+      //
+      NewPciDeviceInfo = AllocateZeroPool (sizeof (PCI_DEVICE_INFORMATION) + sizeof (PCI_DEVICE_DATA) * (PciDeviceInfo->PciDeviceDataMaxNumber + MAX_VTD_PCI_DATA_NUMBER));
+      if (NewPciDeviceInfo == NULL) {
+        return EFI_OUT_OF_RESOURCES;
+      }
+
+      //
+      // Copy only the old buffer's contents; the new tail is already zeroed.
+      //
+      CopyMem (NewPciDeviceInfo, PciDeviceInfo, sizeof (PCI_DEVICE_INFORMATION) + sizeof (PCI_DEVICE_DATA) * PciDeviceInfo->PciDeviceDataMaxNumber);
+      FreePool (PciDeviceInfo);
+
+      NewPciDeviceInfo->PciDeviceDataMaxNumber += MAX_VTD_PCI_DATA_NUMBER;
+      mVtdUnitInformation[VtdIndex].PciDeviceInfo = NewPciDeviceInfo;
+      PciDeviceInfo = NewPciDeviceInfo;
+    }
+
+    ASSERT (PciDeviceInfo->PciDeviceDataNumber < PciDeviceInfo->PciDeviceDataMaxNumber);
+
+    PciSourceId = &PciDeviceInfo->PciDeviceData[PciDeviceInfo->PciDeviceDataNumber].PciSourceId;
+    PciSourceId->Bits.Bus      = SourceId.Bits.Bus;
+    PciSourceId->Bits.Device   = SourceId.Bits.Device;
+    PciSourceId->Bits.Function = SourceId.Bits.Function;
+
+    DEBUG ((DEBUG_INFO, "  RegisterPciDevice: PCI S%04x B%02x D%02x F%02x", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+
+    PciDeviceId = &PciDeviceInfo->PciDeviceData[PciDeviceInfo->PciDeviceDataNumber].PciDeviceId;
+    if ((DeviceType == EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT) ||
+        (DeviceType == EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE)) {
+      PciDeviceId->VendorId   = PciSegmentRead16 (PCI_SEGMENT_LIB_ADDRESS(Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function, PCI_VENDOR_ID_OFFSET));
+      PciDeviceId->DeviceId   = PciSegmentRead16 (PCI_SEGMENT_LIB_ADDRESS(Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function, PCI_DEVICE_ID_OFFSET));
+      PciDeviceId->RevisionId = PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS(Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function, PCI_REVISION_ID_OFFSET));
+
+      DEBUG ((DEBUG_INFO, " (%04x:%04x:%02x", PciDeviceId->VendorId, PciDeviceId->DeviceId, PciDeviceId->RevisionId));
+
+      if (DeviceType == EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT) {
+        PciDeviceId->SubsystemVendorId = PciSegmentRead16 (PCI_SEGMENT_LIB_ADDRESS(Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function, PCI_SUBSYSTEM_VENDOR_ID_OFFSET));
+        PciDeviceId->SubsystemDeviceId = PciSegmentRead16 (PCI_SEGMENT_LIB_ADDRESS(Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function, PCI_SUBSYSTEM_ID_OFFSET));
+        DEBUG ((DEBUG_INFO, ":%04x:%04x", PciDeviceId->SubsystemVendorId, PciDeviceId->SubsystemDeviceId));
+      }
+      DEBUG ((DEBUG_INFO, ")"));
+    }
+
+    PciDeviceInfo->PciDeviceData[PciDeviceInfo->PciDeviceDataNumber].DeviceType = DeviceType;
+
+    if ((DeviceType != EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT) &&
EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE)) { + DEBUG ((DEBUG_INFO, " (*)")); + } + DEBUG ((DEBUG_INFO, "\n")); + + PciDeviceInfo->PciDeviceDataNumber++; + } else { + if (CheckExist) { + DEBUG ((DEBUG_INFO, " RegisterPciDevice: PCI S%04x B%02x D%02x F%02= x already registered\n", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, = SourceId.Bits.Function)); + return EFI_ALREADY_STARTED; + } + } + + return EFI_SUCCESS; +} + +/** + The scan bus callback function to register PCI device. + + @param[in] Context The context of the callback. + @param[in] Segment The segment of the source. + @param[in] Bus The bus of the source. + @param[in] Device The device of the source. + @param[in] Function The function of the source. + + @retval EFI_SUCCESS The PCI device is registered. +**/ +EFI_STATUS +EFIAPI +ScanBusCallbackRegisterPciDevice ( + IN VOID *Context, + IN UINT16 Segment, + IN UINT8 Bus, + IN UINT8 Device, + IN UINT8 Function + ) +{ + VTD_SOURCE_ID SourceId; + UINTN VtdIndex; + UINT8 BaseClass; + UINT8 SubClass; + UINT8 DeviceType; + EFI_STATUS Status; + + VtdIndex =3D (UINTN)Context; + SourceId.Bits.Bus =3D Bus; + SourceId.Bits.Device =3D Device; + SourceId.Bits.Function =3D Function; + + DeviceType =3D EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT; + BaseClass =3D PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS(Segment, Bus, Dev= ice, Function, PCI_CLASSCODE_OFFSET + 2)); + if (BaseClass =3D=3D PCI_CLASS_BRIDGE) { + SubClass =3D PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS(Segment, Bus, De= vice, Function, PCI_CLASSCODE_OFFSET + 1)); + if (SubClass =3D=3D PCI_CLASS_BRIDGE_P2P) { + DeviceType =3D EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE; + } + } + + Status =3D RegisterPciDevice (VtdIndex, Segment, SourceId, DeviceType, F= ALSE); + return Status; +} + +/** + Scan PCI bus and invoke callback function for each PCI devices under the= bus. + + @param[in] Context The context of the callback function. + @param[in] Segment The segment of the source. 
+  @param[in]  Bus                   The bus of the source.
+  @param[in]  Callback              The callback function in PCI scan.
+
+  @retval EFI_SUCCESS           The PCI devices under the bus are scanned.
+**/
+EFI_STATUS
+ScanPciBus (
+  IN VOID                         *Context,
+  IN UINT16                       Segment,
+  IN UINT8                        Bus,
+  IN SCAN_BUS_FUNC_CALLBACK_FUNC  Callback
+  )
+{
+  UINT8       Device;
+  UINT8       Function;
+  UINT8       SecondaryBusNumber;
+  UINT8       HeaderType;
+  UINT8       BaseClass;
+  UINT8       SubClass;
+  UINT16      VendorID;
+  UINT16      DeviceID;
+  EFI_STATUS  Status;
+
+  //
+  // Scan the PCI bus for devices
+  //
+  for (Device = 0; Device <= PCI_MAX_DEVICE; Device++) {
+    for (Function = 0; Function <= PCI_MAX_FUNC; Function++) {
+      VendorID = PciSegmentRead16 (PCI_SEGMENT_LIB_ADDRESS(Segment, Bus, Device, Function, PCI_VENDOR_ID_OFFSET));
+      DeviceID = PciSegmentRead16 (PCI_SEGMENT_LIB_ADDRESS(Segment, Bus, Device, Function, PCI_DEVICE_ID_OFFSET));
+      if (VendorID == 0xFFFF && DeviceID == 0xFFFF) {
+        if (Function == 0) {
+          //
+          // If function 0 is not implemented, do not scan other functions.
+          //
+          break;
+        }
+        continue;
+      }
+
+      Status = Callback (Context, Segment, Bus, Device, Function);
+      if (EFI_ERROR (Status)) {
+        return Status;
+      }
+
+      BaseClass = PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS(Segment, Bus, Device, Function, PCI_CLASSCODE_OFFSET + 2));
+      if (BaseClass == PCI_CLASS_BRIDGE) {
+        SubClass = PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS(Segment, Bus, Device, Function, PCI_CLASSCODE_OFFSET + 1));
+        if (SubClass == PCI_CLASS_BRIDGE_P2P) {
+          SecondaryBusNumber = PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS(Segment, Bus, Device, Function, PCI_BRIDGE_SECONDARY_BUS_REGISTER_OFFSET));
+          DEBUG ((DEBUG_INFO,"  ScanPciBus: PCI bridge S%04x B%02x D%02x F%02x (SecondBus:%02x)\n", Segment, Bus, Device, Function, SecondaryBusNumber));
+          if (SecondaryBusNumber != 0) {
+            Status = ScanPciBus (Context, Segment, SecondaryBusNumber, Callback);
+            if (EFI_ERROR (Status)) {
+              return Status;
+            }
+          }
+        }
+      }
+
+      if (Function == 0) {
+        HeaderType = PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS(Segment, Bus, Device, 0, PCI_HEADER_TYPE_OFFSET));
+        if ((HeaderType & HEADER_TYPE_MULTI_FUNCTION) == 0x00) {
+          //
+          // It is not a multi-function device, do not scan other functions.
+          //
+          break;
+        }
+      }
+    }
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Scan PCI bus and invoke the callback function for each PCI device under all root buses.
+
+  @param[in]  Context               The context of the callback function.
+  @param[in]  Segment               The segment of the source.
+  @param[in]  Callback              The callback function in PCI scan.
+
+  @retval EFI_SUCCESS           The PCI devices under the bus are scanned.
+**/ +EFI_STATUS +ScanAllPciBus ( + IN VOID *Context, + IN UINT16 Segment, + IN SCAN_BUS_FUNC_CALLBACK_FUNC Callback + ) +{ + EFI_STATUS Status; + UINTN Index; + UINTN HandleCount; + EFI_HANDLE *HandleBuffer; + EFI_PCI_ROOT_BRIDGE_IO_PROTOCOL *PciRootBridgeIo; + EFI_ACPI_ADDRESS_SPACE_DESCRIPTOR *Descriptors; + + DEBUG ((DEBUG_INFO, "ScanAllPciBus ()\n")); + + Status =3D gBS->LocateHandleBuffer ( + ByProtocol, + &gEfiPciRootBridgeIoProtocolGuid, + NULL, + &HandleCount, + &HandleBuffer + ); + ASSERT_EFI_ERROR (Status); + + DEBUG ((DEBUG_INFO,"Find %d root bridges\n", HandleCount)); + + for (Index =3D 0; Index < HandleCount; Index++) { + Status =3D gBS->HandleProtocol ( + HandleBuffer[Index], + &gEfiPciRootBridgeIoProtocolGuid, + (VOID **) &PciRootBridgeIo + ); + ASSERT_EFI_ERROR (Status); + + Status =3D PciRootBridgeIo->Configuration (PciRootBridgeIo, (VOID **) = &Descriptors); + ASSERT_EFI_ERROR (Status); + + while (Descriptors->Desc !=3D ACPI_END_TAG_DESCRIPTOR) { + if (Descriptors->ResType =3D=3D ACPI_ADDRESS_SPACE_TYPE_BUS) { + break; + } + Descriptors++; + } + + if (Descriptors->Desc =3D=3D ACPI_END_TAG_DESCRIPTOR) { + continue; + } + + DEBUG ((DEBUG_INFO,"Scan root bridges : %d, Segment : %d, Bus : 0x%02X= \n", Index, PciRootBridgeIo->SegmentNumber, Descriptors->AddrRangeMin)); + Status =3D ScanPciBus(Context, (UINT16) PciRootBridgeIo->SegmentNumber= , (UINT8) Descriptors->AddrRangeMin, Callback); + if (EFI_ERROR (Status)) { + break; + } + } + + FreePool(HandleBuffer); + + return Status; +} + +/** + Find the VTd index by the Segment and SourceId. + + @param[in] Segment The segment of the source. + @param[in] SourceId The SourceId of the source. + @param[out] ExtContextEntry The ExtContextEntry of the source. + @param[out] ContextEntry The ContextEntry of the source. + + @return The index of the VTd engine. + @retval (UINTN)-1 The VTd engine is not found. 
+**/ +UINTN +FindVtdIndexByPciDevice ( + IN UINT16 Segment, + IN VTD_SOURCE_ID SourceId, + OUT VTD_EXT_CONTEXT_ENTRY **ExtContextEntry, + OUT VTD_CONTEXT_ENTRY **ContextEntry + ) +{ + UINTN VtdIndex; + VTD_ROOT_ENTRY *RootEntry; + VTD_CONTEXT_ENTRY *ContextEntryTable; + VTD_CONTEXT_ENTRY *ThisContextEntry; + VTD_EXT_ROOT_ENTRY *ExtRootEntry; + VTD_EXT_CONTEXT_ENTRY *ExtContextEntryTable; + VTD_EXT_CONTEXT_ENTRY *ThisExtContextEntry; + UINTN PciDataIndex; + + for (VtdIndex =3D 0; VtdIndex < mVtdUnitNumber; VtdIndex++) { + if (Segment !=3D mVtdUnitInformation[VtdIndex].Segment) { + continue; + } + + PciDataIndex =3D GetPciDataIndex (VtdIndex, Segment, SourceId); + if (PciDataIndex =3D=3D (UINTN)-1) { + continue; + } + +// DEBUG ((DEBUG_INFO,"FindVtdIndex(0x%x) for S%04x B%02x D%02x F%02x\n= ", VtdIndex, Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bit= s.Function)); + + if (mVtdUnitInformation[VtdIndex].ExtRootEntryTable !=3D 0) { + ExtRootEntry =3D &mVtdUnitInformation[VtdIndex].ExtRootEntryTable[So= urceId.Index.RootIndex]; + ExtContextEntryTable =3D (VTD_EXT_CONTEXT_ENTRY *)(UINTN)VTD_64BITS_= ADDRESS(ExtRootEntry->Bits.LowerContextTablePointerLo, ExtRootEntry->Bits.L= owerContextTablePointerHi) ; + ThisExtContextEntry =3D &ExtContextEntryTable[SourceId.Index.Contex= tIndex]; + if (ThisExtContextEntry->Bits.AddressWidth =3D=3D 0) { + continue; + } + *ExtContextEntry =3D ThisExtContextEntry; + *ContextEntry =3D NULL; + } else { + RootEntry =3D &mVtdUnitInformation[VtdIndex].RootEntryTable[SourceId= .Index.RootIndex]; + ContextEntryTable =3D (VTD_CONTEXT_ENTRY *)(UINTN)VTD_64BITS_ADDRESS= (RootEntry->Bits.ContextTablePointerLo, RootEntry->Bits.ContextTablePointer= Hi) ; + ThisContextEntry =3D &ContextEntryTable[SourceId.Index.ContextIndex= ]; + if (ThisContextEntry->Bits.AddressWidth =3D=3D 0) { + continue; + } + *ExtContextEntry =3D NULL; + *ContextEntry =3D ThisContextEntry; + } + + return VtdIndex; + } + + return (UINTN)-1; +} + diff --git 
a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/Tran= slationTable.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/= TranslationTable.c new file mode 100644 index 000000000..37ca6e405 --- /dev/null +++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/Translation= Table.c @@ -0,0 +1,1112 @@ +/** @file + + Copyright (c) 2017 - 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "DmaProtection.h"
+
+/**
+  Create extended context entry.
+
+  @param[in]  VtdIndex  The index of the VTd engine.
+
+  @retval EFI_SUCCESS           The extended context entry is created.
+  @retval EFI_OUT_OF_RESOURCES  Not enough resources to create the extended context entry.
+**/
+EFI_STATUS
+CreateExtContextEntry (
+  IN UINTN  VtdIndex
+  );
+
+/**
+  Allocate zero pages.
+
+  @param[in]  Pages  the number of pages.
+
+  @return the page address.
+  @retval NULL No resource to allocate pages.
+**/
+VOID *
+EFIAPI
+AllocateZeroPages (
+  IN UINTN  Pages
+  )
+{
+  VOID  *Addr;
+
+  Addr = AllocatePages (Pages);
+  if (Addr == NULL) {
+    return NULL;
+  }
+  ZeroMem (Addr, EFI_PAGES_TO_SIZE(Pages));
+  return Addr;
+}
+
+/**
+  Set second level paging entry attribute based upon IoMmuAccess.
+
+  @param[in]  PtEntry      The paging entry.
+  @param[in]  IoMmuAccess  The IOMMU access.
+**/
+VOID
+SetSecondLevelPagingEntryAttribute (
+  IN VTD_SECOND_LEVEL_PAGING_ENTRY  *PtEntry,
+  IN UINT64                         IoMmuAccess
+  )
+{
+  PtEntry->Bits.Read  = ((IoMmuAccess & EDKII_IOMMU_ACCESS_READ) != 0);
+  PtEntry->Bits.Write = ((IoMmuAccess & EDKII_IOMMU_ACCESS_WRITE) != 0);
+}
+
+/**
+  Create context entry.
+
+  @param[in]  VtdIndex  The index of the VTd engine.
+
+  @retval EFI_SUCCESS           The context entry is created.
+  @retval EFI_OUT_OF_RESOURCES  Not enough resources to create the context entry.
+**/ +EFI_STATUS +CreateContextEntry ( + IN UINTN VtdIndex + ) +{ + UINTN Index; + VOID *Buffer; + UINTN RootPages; + UINTN ContextPages; + VTD_ROOT_ENTRY *RootEntry; + VTD_CONTEXT_ENTRY *ContextEntryTable; + VTD_CONTEXT_ENTRY *ContextEntry; + VTD_SOURCE_ID *PciSourceId; + VTD_SOURCE_ID SourceId; + UINTN MaxBusNumber; + UINTN EntryTablePages; + + MaxBusNumber =3D 0; + for (Index =3D 0; Index < mVtdUnitInformation[VtdIndex].PciDeviceInfo->P= ciDeviceDataNumber; Index++) { + PciSourceId =3D &mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDevic= eData[Index].PciSourceId; + if (PciSourceId->Bits.Bus > MaxBusNumber) { + MaxBusNumber =3D PciSourceId->Bits.Bus; + } + } + DEBUG ((DEBUG_INFO," MaxBusNumber - 0x%x\n", MaxBusNumber)); + + RootPages =3D EFI_SIZE_TO_PAGES (sizeof (VTD_ROOT_ENTRY) * VTD_ROOT_ENTR= Y_NUMBER); + ContextPages =3D EFI_SIZE_TO_PAGES (sizeof (VTD_CONTEXT_ENTRY) * VTD_CON= TEXT_ENTRY_NUMBER); + EntryTablePages =3D RootPages + ContextPages * (MaxBusNumber + 1); + Buffer =3D AllocateZeroPages (EntryTablePages); + if (Buffer =3D=3D NULL) { + DEBUG ((DEBUG_INFO,"Could not Alloc Root Entry Table.. 
\n")); + return EFI_OUT_OF_RESOURCES; + } + mVtdUnitInformation[VtdIndex].RootEntryTable =3D (VTD_ROOT_ENTRY *)Buffe= r; + Buffer =3D (UINT8 *)Buffer + EFI_PAGES_TO_SIZE (RootPages); + + for (Index =3D 0; Index < mVtdUnitInformation[VtdIndex].PciDeviceInfo->P= ciDeviceDataNumber; Index++) { + PciSourceId =3D &mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDevic= eData[Index].PciSourceId; + + SourceId.Bits.Bus =3D PciSourceId->Bits.Bus; + SourceId.Bits.Device =3D PciSourceId->Bits.Device; + SourceId.Bits.Function =3D PciSourceId->Bits.Function; + + RootEntry =3D &mVtdUnitInformation[VtdIndex].RootEntryTable[SourceId.I= ndex.RootIndex]; + if (RootEntry->Bits.Present =3D=3D 0) { + RootEntry->Bits.ContextTablePointerLo =3D (UINT32) RShiftU64 ((UINT= 64)(UINTN)Buffer, 12); + RootEntry->Bits.ContextTablePointerHi =3D (UINT32) RShiftU64 ((UINT= 64)(UINTN)Buffer, 32); + RootEntry->Bits.Present =3D 1; + Buffer =3D (UINT8 *)Buffer + EFI_PAGES_TO_SIZE (ContextPages); + } + + ContextEntryTable =3D (VTD_CONTEXT_ENTRY *)(UINTN)VTD_64BITS_ADDRESS(R= ootEntry->Bits.ContextTablePointerLo, RootEntry->Bits.ContextTablePointerHi= ) ; + ContextEntry =3D &ContextEntryTable[SourceId.Index.ContextIndex]; + ContextEntry->Bits.TranslationType =3D 0; + ContextEntry->Bits.FaultProcessingDisable =3D 0; + ContextEntry->Bits.Present =3D 0; + + DEBUG ((DEBUG_INFO,"Source: S%04x B%02x D%02x F%02x\n", mVtdUnitInform= ation[VtdIndex].Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.= Bits.Function)); + + mVtdUnitInformation[VtdIndex].Is5LevelPaging =3D FALSE; + if ((mVtdUnitInformation[VtdIndex].CapReg.Bits.SAGAW & BIT3) !=3D 0) { + mVtdUnitInformation[VtdIndex].Is5LevelPaging =3D TRUE; + if ((mAcpiDmarTable->HostAddressWidth <=3D 48) && + ((mVtdUnitInformation[VtdIndex].CapReg.Bits.SAGAW & BIT2) !=3D 0= )) { + mVtdUnitInformation[VtdIndex].Is5LevelPaging =3D FALSE; + } + } else if ((mVtdUnitInformation[VtdIndex].CapReg.Bits.SAGAW & BIT2) = =3D=3D 0) { + DEBUG((DEBUG_ERROR, "!!!! 
Page-table type is not supported on VTD %d= !!!!\n", VtdIndex)); + return EFI_UNSUPPORTED; + } + + if (mVtdUnitInformation[VtdIndex].Is5LevelPaging) { + ContextEntry->Bits.AddressWidth =3D 0x3; + DEBUG((DEBUG_INFO, "Using 5-level page-table on VTD %d\n", VtdIndex)= ); + } else { + ContextEntry->Bits.AddressWidth =3D 0x2; + DEBUG((DEBUG_INFO, "Using 4-level page-table on VTD %d\n", VtdIndex)= ); + } + } + + FlushPageTableMemory (VtdIndex, (UINTN)mVtdUnitInformation[VtdIndex].Roo= tEntryTable, EFI_PAGES_TO_SIZE(EntryTablePages)); + + return EFI_SUCCESS; +} + +/** + Create second level paging entry table. + + @param[in] VtdIndex The index of the VTd engine. + @param[in] SecondLevelPagingEntry The second level paging entry. + @param[in] MemoryBase The base of the memory. + @param[in] MemoryLimit The limit of the memory. + @param[in] IoMmuAccess The IOMMU access. + @param[in] Is5LevelPaging If it is the 5 level paging. + + @return The second level paging entry. +**/ +VTD_SECOND_LEVEL_PAGING_ENTRY * +CreateSecondLevelPagingEntryTable ( + IN UINTN VtdIndex, + IN VTD_SECOND_LEVEL_PAGING_ENTRY *SecondLevelPagingEntry, + IN UINT64 MemoryBase, + IN UINT64 MemoryLimit, + IN UINT64 IoMmuAccess, + IN BOOLEAN Is5LevelPaging + ) +{ + UINTN Index5; + UINTN Index4; + UINTN Index3; + UINTN Index2; + UINTN Lvl5Start; + UINTN Lvl5End; + UINTN Lvl4PagesStart; + UINTN Lvl4PagesEnd; + UINTN Lvl4Start; + UINTN Lvl4End; + UINTN Lvl3Start; + UINTN Lvl3End; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl5PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl4PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl3PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl2PtEntry; + UINT64 BaseAddress; + UINT64 EndAddress; + + if (MemoryLimit =3D=3D 0) { + return NULL; + } + + Lvl4PagesStart =3D 0; + Lvl4PagesEnd =3D 0; + Lvl4PtEntry =3D NULL; + Lvl5PtEntry =3D NULL; + + BaseAddress =3D ALIGN_VALUE_LOW(MemoryBase, SIZE_2MB); + EndAddress =3D ALIGN_VALUE_UP(MemoryLimit, SIZE_2MB); + DEBUG ((DEBUG_INFO,"CreateSecondLevelPagingEntryTable: 
BaseAddress - 0x%= 016lx, EndAddress - 0x%016lx\n", BaseAddress, EndAddress)); + + if (SecondLevelPagingEntry =3D=3D NULL) { + SecondLevelPagingEntry =3D AllocateZeroPages (1); + if (SecondLevelPagingEntry =3D=3D NULL) { + DEBUG ((DEBUG_ERROR,"Could not Alloc LVL4 or LVL5 PT. \n")); + return NULL; + } + FlushPageTableMemory (VtdIndex, (UINTN)SecondLevelPagingEntry, EFI_PAG= ES_TO_SIZE(1)); + } + + // + // If no access is needed, just create not present entry. + // + if (IoMmuAccess =3D=3D 0) { + return SecondLevelPagingEntry; + } + + if (Is5LevelPaging) { + Lvl5Start =3D RShiftU64 (BaseAddress, 48) & 0x1FF; + Lvl5End =3D RShiftU64 (EndAddress - 1, 48) & 0x1FF; + DEBUG ((DEBUG_INFO," Lvl5Start - 0x%x, Lvl5End - 0x%x\n", Lvl5Start, = Lvl5End)); + + Lvl4Start =3D RShiftU64 (BaseAddress, 39) & 0x1FF; + Lvl4End =3D RShiftU64 (EndAddress - 1, 39) & 0x1FF; + + Lvl4PagesStart =3D (Lvl5Start<<9) | Lvl4Start; + Lvl4PagesEnd =3D (Lvl5End<<9) | Lvl4End; + DEBUG ((DEBUG_INFO," Lvl4PagesStart - 0x%x, Lvl4PagesEnd - 0x%x\n", L= vl4PagesStart, Lvl4PagesEnd)); + + Lvl5PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)SecondLevelPagingEntr= y; + } else { + Lvl5Start =3D RShiftU64 (BaseAddress, 48) & 0x1FF; + Lvl5End =3D Lvl5Start; + + Lvl4Start =3D RShiftU64 (BaseAddress, 39) & 0x1FF; + Lvl4End =3D RShiftU64 (EndAddress - 1, 39) & 0x1FF; + DEBUG ((DEBUG_INFO," Lvl4Start - 0x%x, Lvl4End - 0x%x\n", Lvl4Start, = Lvl4End)); + + Lvl4PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)SecondLevelPagingEntr= y; + } + + for (Index5 =3D Lvl5Start; Index5 <=3D Lvl5End; Index5++) { + if (Is5LevelPaging) { + if (Lvl5PtEntry[Index5].Uint64 =3D=3D 0) { + Lvl5PtEntry[Index5].Uint64 =3D (UINT64)(UINTN)AllocateZeroPages (1= ); + if (Lvl5PtEntry[Index5].Uint64 =3D=3D 0) { + DEBUG ((DEBUG_ERROR,"!!!!!! 
 ALLOCATE LVL4 PAGE FAIL (0x%x)!!!!!!\n", Index5));
+          ASSERT(FALSE);
+          return NULL;
+        }
+        FlushPageTableMemory (VtdIndex, (UINTN)Lvl5PtEntry[Index5].Uint64, SIZE_4KB);
+        SetSecondLevelPagingEntryAttribute (&Lvl5PtEntry[Index5], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+      }
+      Lvl4Start = Lvl4PagesStart & 0x1FF;
+      if (((Index5+1)<<9) > Lvl4PagesEnd) {
+        Lvl4End = SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_ENTRY) - 1;
+        Lvl4PagesStart = (Index5+1)<<9;
+      } else {
+        Lvl4End = Lvl4PagesEnd & 0x1FF;
+      }
+      DEBUG ((DEBUG_INFO,"  Lvl5(0x%x): Lvl4Start - 0x%x, Lvl4End - 0x%x\n", Index5, Lvl4Start, Lvl4End));
+      Lvl4PtEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64BITS_ADDRESS(Lvl5PtEntry[Index5].Bits.AddressLo, Lvl5PtEntry[Index5].Bits.AddressHi);
+    }
+
+    for (Index4 = Lvl4Start; Index4 <= Lvl4End; Index4++) {
+      if (Lvl4PtEntry[Index4].Uint64 == 0) {
+        Lvl4PtEntry[Index4].Uint64 = (UINT64)(UINTN)AllocateZeroPages (1);
+        if (Lvl4PtEntry[Index4].Uint64 == 0) {
+          DEBUG ((DEBUG_ERROR,"!!!!!!
ALLOCATE LVL4 PAGE FAIL (0x%x)!!!!!!= \n", Index4)); + ASSERT(FALSE); + return NULL; + } + FlushPageTableMemory (VtdIndex, (UINTN)Lvl4PtEntry[Index4].Uint64,= SIZE_4KB); + SetSecondLevelPagingEntryAttribute (&Lvl4PtEntry[Index4], EDKII_IO= MMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE); + } + + Lvl3Start =3D RShiftU64 (BaseAddress, 30) & 0x1FF; + if (ALIGN_VALUE_LOW(BaseAddress + SIZE_1GB, SIZE_1GB) <=3D EndAddres= s) { + Lvl3End =3D SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_ENTRY) - 1; + } else { + Lvl3End =3D RShiftU64 (EndAddress - 1, 30) & 0x1FF; + } + DEBUG ((DEBUG_INFO," Lvl4(0x%x): Lvl3Start - 0x%x, Lvl3End - 0x%x\n= ", Index4, Lvl3Start, Lvl3End)); + + Lvl3PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64BITS_A= DDRESS(Lvl4PtEntry[Index4].Bits.AddressLo, Lvl4PtEntry[Index4].Bits.Address= Hi); + for (Index3 =3D Lvl3Start; Index3 <=3D Lvl3End; Index3++) { + if (Lvl3PtEntry[Index3].Uint64 =3D=3D 0) { + Lvl3PtEntry[Index3].Uint64 =3D (UINT64)(UINTN)AllocateZeroPages = (1); + if (Lvl3PtEntry[Index3].Uint64 =3D=3D 0) { + DEBUG ((DEBUG_ERROR,"!!!!!! 
ALLOCATE LVL3 PAGE FAIL (0x%x, 0x%= x)!!!!!!\n", Index4, Index3)); + ASSERT(FALSE); + return NULL; + } + FlushPageTableMemory (VtdIndex, (UINTN)Lvl3PtEntry[Index3].Uint6= 4, SIZE_4KB); + SetSecondLevelPagingEntryAttribute (&Lvl3PtEntry[Index3], EDKII_= IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE); + } + + Lvl2PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64BITS= _ADDRESS(Lvl3PtEntry[Index3].Bits.AddressLo, Lvl3PtEntry[Index3].Bits.Addre= ssHi); + for (Index2 =3D 0; Index2 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGIN= G_ENTRY); Index2++) { + Lvl2PtEntry[Index2].Uint64 =3D BaseAddress; + SetSecondLevelPagingEntryAttribute (&Lvl2PtEntry[Index2], IoMmuA= ccess); + Lvl2PtEntry[Index2].Bits.PageSize =3D 1; + BaseAddress +=3D SIZE_2MB; + if (BaseAddress >=3D MemoryLimit) { + break; + } + } + FlushPageTableMemory (VtdIndex, (UINTN)Lvl2PtEntry, SIZE_4KB); + if (BaseAddress >=3D MemoryLimit) { + break; + } + } + FlushPageTableMemory (VtdIndex, (UINTN)&Lvl3PtEntry[Lvl3Start], (UIN= TN)&Lvl3PtEntry[Lvl3End + 1] - (UINTN)&Lvl3PtEntry[Lvl3Start]); + if (BaseAddress >=3D MemoryLimit) { + break; + } + } + FlushPageTableMemory (VtdIndex, (UINTN)&Lvl4PtEntry[Lvl4Start], (UINTN= )&Lvl4PtEntry[Lvl4End + 1] - (UINTN)&Lvl4PtEntry[Lvl4Start]); + } + FlushPageTableMemory (VtdIndex, (UINTN)&Lvl5PtEntry[Lvl5Start], (UINTN)&= Lvl5PtEntry[Lvl5End + 1] - (UINTN)&Lvl5PtEntry[Lvl5Start]); + + return SecondLevelPagingEntry; +} + +/** + Create second level paging entry. + + @param[in] VtdIndex The index of the VTd engine. + @param[in] IoMmuAccess The IOMMU access. + @param[in] Is5LevelPaging If it is the 5 level paging. + + @return The second level paging entry. 
+**/
+VTD_SECOND_LEVEL_PAGING_ENTRY *
+CreateSecondLevelPagingEntry (
+  IN UINTN    VtdIndex,
+  IN UINT64   IoMmuAccess,
+  IN BOOLEAN  Is5LevelPaging
+  )
+{
+  VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry;
+
+  SecondLevelPagingEntry = NULL;
+  SecondLevelPagingEntry = CreateSecondLevelPagingEntryTable (VtdIndex, SecondLevelPagingEntry, 0, mBelow4GMemoryLimit, IoMmuAccess, Is5LevelPaging);
+  if (SecondLevelPagingEntry == NULL) {
+    return NULL;
+  }
+
+  if (mAbove4GMemoryLimit != 0) {
+    ASSERT (mAbove4GMemoryLimit > BASE_4GB);
+    SecondLevelPagingEntry = CreateSecondLevelPagingEntryTable (VtdIndex, SecondLevelPagingEntry, SIZE_4GB, mAbove4GMemoryLimit, IoMmuAccess, Is5LevelPaging);
+    if (SecondLevelPagingEntry == NULL) {
+      return NULL;
+    }
+  }
+
+  return SecondLevelPagingEntry;
+}
+
+/**
+  Setup VTd translation table.
+
+  @retval EFI_SUCCESS           Setup translation table successfully.
+  @retval EFI_OUT_OF_RESOURCES  Failed to setup translation table.
+**/
+EFI_STATUS
+SetupTranslationTable (
+  VOID
+  )
+{
+  EFI_STATUS  Status;
+  UINTN       Index;
+
+  for (Index = 0; Index < mVtdUnitNumber; Index++) {
+    DEBUG((DEBUG_INFO, "CreateContextEntry - %d\n", Index));
+
+    if (mVtdUnitInformation[Index].ECapReg.Bits.SMTS) {
+      if (mVtdUnitInformation[Index].ECapReg.Bits.DEP_24) {
+        DEBUG ((DEBUG_ERROR,"ECapReg.bit24 is not zero\n"));
+        ASSERT(FALSE);
+        Status = EFI_UNSUPPORTED;
+      } else {
+        Status = CreateContextEntry (Index);
+      }
+    } else {
+      if (mVtdUnitInformation[Index].ECapReg.Bits.DEP_24) {
+        //
+        // To be compatible with previous VTd engines,
+        // where this was the ECS (Extended Context Support) bit.
+        //
+        Status = CreateExtContextEntry (Index);
+      } else {
+        Status = CreateContextEntry (Index);
+      }
+    }
+
+    if (EFI_ERROR (Status)) {
+      return Status;
+    }
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Dump DMAR second level paging entry.
+
+  @param[in]  SecondLevelPagingEntry The second level paging entry.
+  @param[in]  Is5LevelPaging         If it is the 5 level paging.
+**/ +VOID +DumpSecondLevelPagingEntry ( + IN VOID *SecondLevelPagingEntry, + IN BOOLEAN Is5LevelPaging + ) +{ + UINTN Index5; + UINTN Index4; + UINTN Index3; + UINTN Index2; + UINTN Index1; + UINTN Lvl5IndexEnd; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl5PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl4PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl3PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl2PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl1PtEntry; + + DEBUG ((DEBUG_VERBOSE,"=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\= n")); + DEBUG ((DEBUG_VERBOSE,"DMAR Second Level Page Table:\n")); + DEBUG ((DEBUG_VERBOSE,"SecondLevelPagingEntry Base - 0x%x, Is5LevelPagin= g - %d\n", SecondLevelPagingEntry, Is5LevelPaging)); + + Lvl5IndexEnd =3D Is5LevelPaging ? SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGIN= G_ENTRY) : 1; + Lvl4PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)SecondLevelPagingEntry; + Lvl5PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)SecondLevelPagingEntry; + + for (Index5 =3D 0; Index5 < Lvl5IndexEnd; Index5++) { + if (Is5LevelPaging) { + if (Lvl5PtEntry[Index5].Uint64 !=3D 0) { + DEBUG ((DEBUG_VERBOSE," Lvl5Pt Entry(0x%03x) - 0x%016lx\n", Index= 5, Lvl5PtEntry[Index5].Uint64)); + } + if (Lvl5PtEntry[Index5].Uint64 =3D=3D 0) { + continue; + } + Lvl4PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64BITS_A= DDRESS(Lvl5PtEntry[Index5].Bits.AddressLo, Lvl5PtEntry[Index5].Bits.Address= Hi); + } + + for (Index4 =3D 0; Index4 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_EN= TRY); Index4++) { + if (Lvl4PtEntry[Index4].Uint64 !=3D 0) { + DEBUG ((DEBUG_VERBOSE," Lvl4Pt Entry(0x%03x) - 0x%016lx\n", Index= 4, Lvl4PtEntry[Index4].Uint64)); + } + if (Lvl4PtEntry[Index4].Uint64 =3D=3D 0) { + continue; + } + Lvl3PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64BITS_A= DDRESS(Lvl4PtEntry[Index4].Bits.AddressLo, Lvl4PtEntry[Index4].Bits.Address= Hi); + for (Index3 =3D 0; Index3 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_= ENTRY); Index3++) { + if (Lvl3PtEntry[Index3].Uint64 !=3D 0) 
 {
+          DEBUG ((DEBUG_VERBOSE,"   Lvl3Pt Entry(0x%03x) - 0x%016lx\n", Index3, Lvl3PtEntry[Index3].Uint64));
+        }
+        if (Lvl3PtEntry[Index3].Uint64 == 0) {
+          continue;
+        }
+
+        Lvl2PtEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64BITS_ADDRESS(Lvl3PtEntry[Index3].Bits.AddressLo, Lvl3PtEntry[Index3].Bits.AddressHi);
+        for (Index2 = 0; Index2 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_ENTRY); Index2++) {
+          if (Lvl2PtEntry[Index2].Uint64 != 0) {
+            DEBUG ((DEBUG_VERBOSE,"    Lvl2Pt Entry(0x%03x) - 0x%016lx\n", Index2, Lvl2PtEntry[Index2].Uint64));
+          }
+          if (Lvl2PtEntry[Index2].Uint64 == 0) {
+            continue;
+          }
+          if (Lvl2PtEntry[Index2].Bits.PageSize == 0) {
+            Lvl1PtEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64BITS_ADDRESS(Lvl2PtEntry[Index2].Bits.AddressLo, Lvl2PtEntry[Index2].Bits.AddressHi);
+            for (Index1 = 0; Index1 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_ENTRY); Index1++) {
+              if (Lvl1PtEntry[Index1].Uint64 != 0) {
+                DEBUG ((DEBUG_VERBOSE,"     Lvl1Pt Entry(0x%03x) - 0x%016lx\n", Index1, Lvl1PtEntry[Index1].Uint64));
+              }
+            }
+          }
+        }
+      }
+    }
+  }
+  DEBUG ((DEBUG_VERBOSE,"================\n"));
+}
+
+/**
+  Invalidate page entry.
+
+  @param VtdIndex  The VTd engine index.
+**/ +VOID +InvalidatePageEntry ( + IN UINTN VtdIndex + ) +{ + if (mVtdUnitInformation[VtdIndex].HasDirtyContext || mVtdUnitInformation= [VtdIndex].HasDirtyPages) { + InvalidateVtdIOTLBGlobal (VtdIndex); + } + mVtdUnitInformation[VtdIndex].HasDirtyContext =3D FALSE; + mVtdUnitInformation[VtdIndex].HasDirtyPages =3D FALSE; +} + +#define VTD_PG_R BIT0 +#define VTD_PG_W BIT1 +#define VTD_PG_X BIT2 +#define VTD_PG_EMT (BIT3 | BIT4 | BIT5) +#define VTD_PG_TM (BIT62) + +#define VTD_PG_PS BIT7 + +#define PAGE_PROGATE_BITS (VTD_PG_TM | VTD_PG_EMT | VTD_PG_W | VT= D_PG_R) + +#define PAGING_4K_MASK 0xFFF +#define PAGING_2M_MASK 0x1FFFFF +#define PAGING_1G_MASK 0x3FFFFFFF + +#define PAGING_VTD_INDEX_MASK 0x1FF + +#define PAGING_4K_ADDRESS_MASK_64 0x000FFFFFFFFFF000ull +#define PAGING_2M_ADDRESS_MASK_64 0x000FFFFFFFE00000ull +#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull + +typedef enum { + PageNone, + Page4K, + Page2M, + Page1G, +} PAGE_ATTRIBUTE; + +typedef struct { + PAGE_ATTRIBUTE Attribute; + UINT64 Length; + UINT64 AddressMask; +} PAGE_ATTRIBUTE_TABLE; + +PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] =3D { + {Page4K, SIZE_4KB, PAGING_4K_ADDRESS_MASK_64}, + {Page2M, SIZE_2MB, PAGING_2M_ADDRESS_MASK_64}, + {Page1G, SIZE_1GB, PAGING_1G_ADDRESS_MASK_64}, +}; + +/** + Return length according to page attributes. + + @param[in] PageAttributes The page attribute of the page entry. + + @return The length of page entry. +**/ +UINTN +PageAttributeToLength ( + IN PAGE_ATTRIBUTE PageAttribute + ) +{ + UINTN Index; + for (Index =3D 0; Index < sizeof(mPageAttributeTable)/sizeof(mPageAttrib= uteTable[0]); Index++) { + if (PageAttribute =3D=3D mPageAttributeTable[Index].Attribute) { + return (UINTN)mPageAttributeTable[Index].Length; + } + } + return 0; +} + +/** + Return page table entry to match the address. + + @param[in] VtdIndex The index used to identify a VTd e= ngine. + @param[in] SecondLevelPagingEntry The second level paging entry in V= Td table for the device. 
+  @param[in]  Address                The address to be checked.
+  @param[in]  Is5LevelPaging         If it is the 5 level paging.
+  @param[out] PageAttribute          The page attribute of the page entry.
+
+  @return The page entry.
+**/
+VOID *
+GetSecondLevelPageTableEntry (
+  IN  UINTN                          VtdIndex,
+  IN  VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry,
+  IN  PHYSICAL_ADDRESS               Address,
+  IN  BOOLEAN                        Is5LevelPaging,
+  OUT PAGE_ATTRIBUTE                 *PageAttribute
+  )
+{
+  UINTN   Index1;
+  UINTN   Index2;
+  UINTN   Index3;
+  UINTN   Index4;
+  UINTN   Index5;
+  UINT64  *L1PageTable;
+  UINT64  *L2PageTable;
+  UINT64  *L3PageTable;
+  UINT64  *L4PageTable;
+  UINT64  *L5PageTable;
+
+  Index5 = ((UINTN)RShiftU64 (Address, 48)) & PAGING_VTD_INDEX_MASK;
+  Index4 = ((UINTN)RShiftU64 (Address, 39)) & PAGING_VTD_INDEX_MASK;
+  Index3 = ((UINTN)Address >> 30) & PAGING_VTD_INDEX_MASK;
+  Index2 = ((UINTN)Address >> 21) & PAGING_VTD_INDEX_MASK;
+  Index1 = ((UINTN)Address >> 12) & PAGING_VTD_INDEX_MASK;
+
+  if (Is5LevelPaging) {
+    L5PageTable = (UINT64 *)SecondLevelPagingEntry;
+    if (L5PageTable[Index5] == 0) {
+      L5PageTable[Index5] = (UINT64)(UINTN)AllocateZeroPages (1);
+      if (L5PageTable[Index5] == 0) {
+        DEBUG ((DEBUG_ERROR,"!!!!!! ALLOCATE LVL5 PAGE FAIL (0x%x)!!!!!!\n", Index5));
+        ASSERT(FALSE);
+        *PageAttribute = PageNone;
+        return NULL;
+      }
+      FlushPageTableMemory (VtdIndex, (UINTN)L5PageTable[Index5], SIZE_4KB);
+      SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *)&L5PageTable[Index5], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+      FlushPageTableMemory (VtdIndex, (UINTN)&L5PageTable[Index5], sizeof(L5PageTable[Index5]));
+    }
+    L4PageTable = (UINT64 *)(UINTN)(L5PageTable[Index5] & PAGING_4K_ADDRESS_MASK_64);
+  } else {
+    L4PageTable = (UINT64 *)SecondLevelPagingEntry;
+  }
+
+  if (L4PageTable[Index4] == 0) {
+    L4PageTable[Index4] = (UINT64)(UINTN)AllocateZeroPages (1);
+    if (L4PageTable[Index4] == 0) {
+      DEBUG ((DEBUG_ERROR,"!!!!!!
ALLOCATE LVL4 PAGE FAIL (0x%x)!!!!!!\n", Index4));
+      ASSERT(FALSE);
+      *PageAttribute = PageNone;
+      return NULL;
+    }
+    FlushPageTableMemory (VtdIndex, (UINTN)L4PageTable[Index4], SIZE_4KB);
+    SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *)&L4PageTable[Index4], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+    FlushPageTableMemory (VtdIndex, (UINTN)&L4PageTable[Index4], sizeof(L4PageTable[Index4]));
+  }
+
+  L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & PAGING_4K_ADDRESS_MASK_64);
+  if (L3PageTable[Index3] == 0) {
+    L3PageTable[Index3] = (UINT64)(UINTN)AllocateZeroPages (1);
+    if (L3PageTable[Index3] == 0) {
+      DEBUG ((DEBUG_ERROR,"!!!!!! ALLOCATE LVL3 PAGE FAIL (0x%x, 0x%x)!!!!!!\n", Index4, Index3));
+      ASSERT(FALSE);
+      *PageAttribute = PageNone;
+      return NULL;
+    }
+    FlushPageTableMemory (VtdIndex, (UINTN)L3PageTable[Index3], SIZE_4KB);
+    SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *)&L3PageTable[Index3], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+    FlushPageTableMemory (VtdIndex, (UINTN)&L3PageTable[Index3], sizeof(L3PageTable[Index3]));
+  }
+  if ((L3PageTable[Index3] & VTD_PG_PS) != 0) {
+    // 1G
+    *PageAttribute = Page1G;
+    return &L3PageTable[Index3];
+  }
+
+  L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+  if (L2PageTable[Index2] == 0) {
+    L2PageTable[Index2] = Address & PAGING_2M_ADDRESS_MASK_64;
+    SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *)&L2PageTable[Index2], 0);
+    L2PageTable[Index2] |= VTD_PG_PS;
+    FlushPageTableMemory (VtdIndex, (UINTN)&L2PageTable[Index2], sizeof(L2PageTable[Index2]));
+  }
+  if ((L2PageTable[Index2] & VTD_PG_PS) != 0) {
+    // 2M
+    *PageAttribute = Page2M;
+    return &L2PageTable[Index2];
+  }
+
+  // 4k
+  L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+  if ((L1PageTable[Index1] == 0) && (Address != 0)) {
+    *PageAttribute = PageNone;
+    return NULL;
+  }
+  *PageAttribute = Page4K;
+  return &L1PageTable[Index1];
+}
+
+/**
+  Modify memory attributes of page entry.
+
+  @param[in]  VtdIndex     The index used to identify a VTd engine.
+  @param[in]  PageEntry    The page entry.
+  @param[in]  IoMmuAccess  The IOMMU access.
+  @param[out] IsModified   TRUE means page table modified. FALSE means page table not modified.
+**/
+VOID
+ConvertSecondLevelPageEntryAttribute (
+  IN UINTN                          VtdIndex,
+  IN VTD_SECOND_LEVEL_PAGING_ENTRY  *PageEntry,
+  IN UINT64                         IoMmuAccess,
+  OUT BOOLEAN                       *IsModified
+  )
+{
+  UINT64  CurrentPageEntry;
+  UINT64  NewPageEntry;
+
+  CurrentPageEntry = PageEntry->Uint64;
+  SetSecondLevelPagingEntryAttribute (PageEntry, IoMmuAccess);
+  FlushPageTableMemory (VtdIndex, (UINTN)PageEntry, sizeof(*PageEntry));
+  NewPageEntry = PageEntry->Uint64;
+  if (CurrentPageEntry != NewPageEntry) {
+    *IsModified = TRUE;
+    DEBUG ((DEBUG_VERBOSE, "ConvertSecondLevelPageEntryAttribute 0x%lx", CurrentPageEntry));
+    DEBUG ((DEBUG_VERBOSE, "->0x%lx\n", NewPageEntry));
+  } else {
+    *IsModified = FALSE;
+  }
+}
+
+/**
+  This function returns if there is need to split page entry.
+
+  @param[in]  BaseAddress    The base address to be checked.
+  @param[in]  Length         The length to be checked.
+  @param[in]  PageAttribute  The page attribute of the page entry.
+
+  @retval SplitAttributes on if there is need to split page entry.
+**/
+PAGE_ATTRIBUTE
+NeedSplitPage (
+  IN PHYSICAL_ADDRESS  BaseAddress,
+  IN UINT64            Length,
+  IN PAGE_ATTRIBUTE    PageAttribute
+  )
+{
+  UINT64  PageEntryLength;
+
+  PageEntryLength = PageAttributeToLength (PageAttribute);
+
+  if (((BaseAddress & (PageEntryLength - 1)) == 0) && (Length >= PageEntryLength)) {
+    return PageNone;
+  }
+
+  if (((BaseAddress & PAGING_2M_MASK) != 0) || (Length < SIZE_2MB)) {
+    return Page4K;
+  }
+
+  return Page2M;
+}
+
+/**
+  This function splits one page entry to small page entries.
+
+  @param[in]  VtdIndex       The index used to identify a VTd engine.
+  @param[in]  PageEntry      The page entry to be split.
+  @param[in]  PageAttribute  The page attribute of the page entry.
+  @param[in]  SplitAttribute How to split the page entry.
+
+  @retval RETURN_SUCCESS           The page entry is split.
+  @retval RETURN_UNSUPPORTED       The page entry does not support being split.
+  @retval RETURN_OUT_OF_RESOURCES  No resource to split page entry.
+**/
+RETURN_STATUS
+SplitSecondLevelPage (
+  IN UINTN                          VtdIndex,
+  IN VTD_SECOND_LEVEL_PAGING_ENTRY  *PageEntry,
+  IN PAGE_ATTRIBUTE                 PageAttribute,
+  IN PAGE_ATTRIBUTE                 SplitAttribute
+  )
+{
+  UINT64  BaseAddress;
+  UINT64  *NewPageEntry;
+  UINTN   Index;
+
+  ASSERT (PageAttribute == Page2M || PageAttribute == Page1G);
+
+  if (PageAttribute == Page2M) {
+    //
+    // Split 2M to 4K
+    //
+    ASSERT (SplitAttribute == Page4K);
+    if (SplitAttribute == Page4K) {
+      NewPageEntry = AllocateZeroPages (1);
+      DEBUG ((DEBUG_VERBOSE, "Split - 0x%x\n", NewPageEntry));
+      if (NewPageEntry == NULL) {
+        return RETURN_OUT_OF_RESOURCES;
+      }
+      BaseAddress = PageEntry->Uint64 & PAGING_2M_ADDRESS_MASK_64;
+      for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
+        NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) | (PageEntry->Uint64 & PAGE_PROGATE_BITS);
+      }
+      FlushPageTableMemory (VtdIndex, (UINTN)NewPageEntry, SIZE_4KB);
+
+      PageEntry->Uint64 = (UINT64)(UINTN)NewPageEntry;
+      SetSecondLevelPagingEntryAttribute (PageEntry, EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+      FlushPageTableMemory (VtdIndex, (UINTN)PageEntry, sizeof(*PageEntry));
+      return RETURN_SUCCESS;
+    } else {
+      return RETURN_UNSUPPORTED;
+    }
+  } else if (PageAttribute == Page1G) {
+    //
+    // Split 1G to 2M
+    // No need to support 1G->4K directly; use 1G->2M, then 2M->4K, to get a more compact page table.
+    //
+    ASSERT (SplitAttribute == Page2M || SplitAttribute == Page4K);
+    if ((SplitAttribute == Page2M || SplitAttribute == Page4K)) {
+      NewPageEntry = AllocateZeroPages (1);
+      DEBUG ((DEBUG_VERBOSE, "Split - 0x%x\n", NewPageEntry));
+      if (NewPageEntry == NULL) {
+        return RETURN_OUT_OF_RESOURCES;
+      }
+      BaseAddress = PageEntry->Uint64 & PAGING_1G_ADDRESS_MASK_64;
+      for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
+        NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) | VTD_PG_PS | (PageEntry->Uint64 & PAGE_PROGATE_BITS);
+      }
+      FlushPageTableMemory (VtdIndex, (UINTN)NewPageEntry, SIZE_4KB);
+
+      PageEntry->Uint64 = (UINT64)(UINTN)NewPageEntry;
+      SetSecondLevelPagingEntryAttribute (PageEntry, EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+      FlushPageTableMemory (VtdIndex, (UINTN)PageEntry, sizeof(*PageEntry));
+      return RETURN_SUCCESS;
+    } else {
+      return RETURN_UNSUPPORTED;
+    }
+  } else {
+    return RETURN_UNSUPPORTED;
+  }
+}
+
+/**
+  Set VTd attribute for a system memory on second level page entry.
+
+  @param[in]  VtdIndex                The index used to identify a VTd engine.
+  @param[in]  DomainIdentifier        The domain ID of the source.
+  @param[in]  SecondLevelPagingEntry  The second level paging entry in VTd table for the device.
+  @param[in]  BaseAddress             The base of device memory address to be used as the DMA memory.
+  @param[in]  Length                  The length of device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess             The IOMMU access.
+
+  @retval EFI_SUCCESS            The IoMmuAccess is set for the memory range specified by BaseAddress and Length.
+  @retval EFI_INVALID_PARAMETER  BaseAddress is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is 0.
+  @retval EFI_INVALID_PARAMETER  IoMmuAccess specified an illegal combination of access.
+  @retval EFI_UNSUPPORTED        The bit mask of IoMmuAccess is not supported by the IOMMU.
+  @retval EFI_UNSUPPORTED        The IOMMU does not support the memory range specified by BaseAddress and Length.
+  @retval EFI_OUT_OF_RESOURCES   There are not enough resources available to modify the IOMMU access.
+  @retval EFI_DEVICE_ERROR       The IOMMU device reported an error while attempting the operation.
+**/
+EFI_STATUS
+SetSecondLevelPagingAttribute (
+  IN UINTN                          VtdIndex,
+  IN UINT16                         DomainIdentifier,
+  IN VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry,
+  IN UINT64                         BaseAddress,
+  IN UINT64                         Length,
+  IN UINT64                         IoMmuAccess
+  )
+{
+  VTD_SECOND_LEVEL_PAGING_ENTRY  *PageEntry;
+  PAGE_ATTRIBUTE                 PageAttribute;
+  UINTN                          PageEntryLength;
+  PAGE_ATTRIBUTE                 SplitAttribute;
+  EFI_STATUS                     Status;
+  BOOLEAN                        IsEntryModified;
+
+  DEBUG ((DEBUG_VERBOSE,"SetSecondLevelPagingAttribute (%d) (0x%016lx - 0x%016lx : %x) \n", VtdIndex, BaseAddress, Length, IoMmuAccess));
+  DEBUG ((DEBUG_VERBOSE,"  SecondLevelPagingEntry Base - 0x%x\n", SecondLevelPagingEntry));
+
+  if (BaseAddress != ALIGN_VALUE(BaseAddress, SIZE_4KB)) {
+    DEBUG ((DEBUG_ERROR, "SetSecondLevelPagingAttribute - Invalid Alignment\n"));
+    return EFI_UNSUPPORTED;
+  }
+  if (Length != ALIGN_VALUE(Length, SIZE_4KB)) {
+    DEBUG ((DEBUG_ERROR, "SetSecondLevelPagingAttribute - Invalid Alignment\n"));
+    return EFI_UNSUPPORTED;
+  }
+
+  while (Length != 0) {
+    PageEntry = GetSecondLevelPageTableEntry (VtdIndex, SecondLevelPagingEntry, BaseAddress, mVtdUnitInformation[VtdIndex].Is5LevelPaging, &PageAttribute);
+    if (PageEntry == NULL) {
+      DEBUG ((DEBUG_ERROR, "PageEntry - NULL\n"));
+      return RETURN_UNSUPPORTED;
+    }
+    PageEntryLength = PageAttributeToLength (PageAttribute);
+    SplitAttribute = NeedSplitPage (BaseAddress, Length, PageAttribute);
+    if (SplitAttribute == PageNone) {
+      ConvertSecondLevelPageEntryAttribute (VtdIndex, PageEntry, IoMmuAccess, &IsEntryModified);
+      if (IsEntryModified) {
+        mVtdUnitInformation[VtdIndex].HasDirtyPages = TRUE;
+      }
+      //
+      // Convert success, move to next
+      //
+      BaseAddress += PageEntryLength;
+      Length -= PageEntryLength;
+    } else {
+      Status = SplitSecondLevelPage (VtdIndex, PageEntry, PageAttribute, SplitAttribute);
+      if (RETURN_ERROR (Status)) {
+        DEBUG ((DEBUG_ERROR, "SplitSecondLevelPage - %r\n", Status));
+        return RETURN_UNSUPPORTED;
+      }
+      mVtdUnitInformation[VtdIndex].HasDirtyPages = TRUE;
+      //
+      // Just split current page
+      // Convert success in next round
+      //
+    }
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Set VTd attribute for a system memory.
+
+  @param[in]  VtdIndex                The index used to identify a VTd engine.
+  @param[in]  DomainIdentifier        The domain ID of the source.
+  @param[in]  SecondLevelPagingEntry  The second level paging entry in VTd table for the device.
+  @param[in]  BaseAddress             The base of device memory address to be used as the DMA memory.
+  @param[in]  Length                  The length of device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess             The IOMMU access.
+
+  @retval EFI_SUCCESS            The IoMmuAccess is set for the memory range specified by BaseAddress and Length.
+  @retval EFI_INVALID_PARAMETER  BaseAddress is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is 0.
+  @retval EFI_INVALID_PARAMETER  IoMmuAccess specified an illegal combination of access.
+  @retval EFI_UNSUPPORTED        The bit mask of IoMmuAccess is not supported by the IOMMU.
+  @retval EFI_UNSUPPORTED        The IOMMU does not support the memory range specified by BaseAddress and Length.
+  @retval EFI_OUT_OF_RESOURCES   There are not enough resources available to modify the IOMMU access.
+  @retval EFI_DEVICE_ERROR       The IOMMU device reported an error while attempting the operation.
+**/
+EFI_STATUS
+SetPageAttribute (
+  IN UINTN                          VtdIndex,
+  IN UINT16                         DomainIdentifier,
+  IN VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry,
+  IN UINT64                         BaseAddress,
+  IN UINT64                         Length,
+  IN UINT64                         IoMmuAccess
+  )
+{
+  EFI_STATUS  Status;
+  Status = EFI_NOT_FOUND;
+  if (SecondLevelPagingEntry != NULL) {
+    Status = SetSecondLevelPagingAttribute (VtdIndex, DomainIdentifier, SecondLevelPagingEntry, BaseAddress, Length, IoMmuAccess);
+  }
+  return Status;
+}
+
+/**
+  Set VTd attribute for a system memory.
+
+  @param[in]  Segment      The Segment used to identify a VTd engine.
+  @param[in]  SourceId     The SourceId used to identify a VTd engine and table entry.
+  @param[in]  BaseAddress  The base of device memory address to be used as the DMA memory.
+  @param[in]  Length       The length of device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess  The IOMMU access.
+
+  @retval EFI_SUCCESS            The IoMmuAccess is set for the memory range specified by BaseAddress and Length.
+  @retval EFI_INVALID_PARAMETER  BaseAddress is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is 0.
+  @retval EFI_INVALID_PARAMETER  IoMmuAccess specified an illegal combination of access.
+  @retval EFI_UNSUPPORTED        The bit mask of IoMmuAccess is not supported by the IOMMU.
+  @retval EFI_UNSUPPORTED        The IOMMU does not support the memory range specified by BaseAddress and Length.
+  @retval EFI_OUT_OF_RESOURCES   There are not enough resources available to modify the IOMMU access.
+  @retval EFI_DEVICE_ERROR       The IOMMU device reported an error while attempting the operation.
+**/
+EFI_STATUS
+SetAccessAttribute (
+  IN UINT16         Segment,
+  IN VTD_SOURCE_ID  SourceId,
+  IN UINT64         BaseAddress,
+  IN UINT64         Length,
+  IN UINT64         IoMmuAccess
+  )
+{
+  UINTN                          VtdIndex;
+  EFI_STATUS                     Status;
+  VTD_EXT_CONTEXT_ENTRY          *ExtContextEntry;
+  VTD_CONTEXT_ENTRY              *ContextEntry;
+  VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry;
+  UINT64                         Pt;
+  UINTN                          PciDataIndex;
+  UINT16                         DomainIdentifier;
+
+  SecondLevelPagingEntry = NULL;
+
+  DEBUG ((DEBUG_VERBOSE,"SetAccessAttribute (S%04x B%02x D%02x F%02x) (0x%016lx - 0x%08x, %x)\n", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function, BaseAddress, (UINTN)Length, IoMmuAccess));
+
+  VtdIndex = FindVtdIndexByPciDevice (Segment, SourceId, &ExtContextEntry, &ContextEntry);
+  if (VtdIndex == (UINTN)-1) {
+    DEBUG ((DEBUG_ERROR,"SetAccessAttribute - Pci device (S%04x B%02x D%02x F%02x) not found!\n", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+    return EFI_DEVICE_ERROR;
+  }
+
+  PciDataIndex = GetPciDataIndex (VtdIndex, Segment, SourceId);
+  mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceData[PciDataIndex].AccessCount++;
+  //
+  // DomainId should not be 0.
+  //
+  DomainIdentifier = (UINT16)(PciDataIndex + 1);
+
+  if (ExtContextEntry != NULL) {
+    if (ExtContextEntry->Bits.Present == 0) {
+      SecondLevelPagingEntry = CreateSecondLevelPagingEntry (VtdIndex, 0, mVtdUnitInformation[VtdIndex].Is5LevelPaging);
+      DEBUG ((DEBUG_VERBOSE,"SecondLevelPagingEntry - 0x%x (S%04x B%02x D%02x F%02x) New\n", SecondLevelPagingEntry, Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+      Pt = (UINT64)RShiftU64 ((UINT64)(UINTN)SecondLevelPagingEntry, 12);
+
+      ExtContextEntry->Bits.SecondLevelPageTranslationPointerLo = (UINT32) Pt;
+      ExtContextEntry->Bits.SecondLevelPageTranslationPointerHi = (UINT32) RShiftU64(Pt, 20);
+      ExtContextEntry->Bits.DomainIdentifier = DomainIdentifier;
+      ExtContextEntry->Bits.Present = 1;
+      FlushPageTableMemory (VtdIndex, (UINTN)ExtContextEntry, sizeof(*ExtContextEntry));
+      VtdLibDumpDmarExtContextEntryTable (NULL, NULL, mVtdUnitInformation[VtdIndex].ExtRootEntryTable, mVtdUnitInformation[VtdIndex].Is5LevelPaging);
+      mVtdUnitInformation[VtdIndex].HasDirtyContext = TRUE;
+    } else {
+      SecondLevelPagingEntry = (VOID *)(UINTN)VTD_64BITS_ADDRESS(ExtContextEntry->Bits.SecondLevelPageTranslationPointerLo, ExtContextEntry->Bits.SecondLevelPageTranslationPointerHi);
+      DEBUG ((DEBUG_VERBOSE,"SecondLevelPagingEntry - 0x%x (S%04x B%02x D%02x F%02x)\n", SecondLevelPagingEntry, Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+    }
+  } else if (ContextEntry != NULL) {
+    if (ContextEntry->Bits.Present == 0) {
+      SecondLevelPagingEntry = CreateSecondLevelPagingEntry (VtdIndex, 0, mVtdUnitInformation[VtdIndex].Is5LevelPaging);
+      DEBUG ((DEBUG_VERBOSE,"SecondLevelPagingEntry - 0x%x (S%04x B%02x D%02x F%02x) New\n", SecondLevelPagingEntry, Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+      Pt = (UINT64)RShiftU64 ((UINT64)(UINTN)SecondLevelPagingEntry, 12);
+
+      ContextEntry->Bits.SecondLevelPageTranslationPointerLo = (UINT32) Pt;
+      ContextEntry->Bits.SecondLevelPageTranslationPointerHi = (UINT32) RShiftU64(Pt, 20);
+      ContextEntry->Bits.DomainIdentifier = DomainIdentifier;
+      ContextEntry->Bits.Present = 1;
+      FlushPageTableMemory (VtdIndex, (UINTN)ContextEntry, sizeof(*ContextEntry));
+      VtdLibDumpDmarContextEntryTable (NULL, NULL, mVtdUnitInformation[VtdIndex].RootEntryTable, mVtdUnitInformation[VtdIndex].Is5LevelPaging);
+      mVtdUnitInformation[VtdIndex].HasDirtyContext = TRUE;
+    } else {
+      SecondLevelPagingEntry = (VOID *)(UINTN)VTD_64BITS_ADDRESS(ContextEntry->Bits.SecondLevelPageTranslationPointerLo, ContextEntry->Bits.SecondLevelPageTranslationPointerHi);
+      DEBUG ((DEBUG_VERBOSE,"SecondLevelPagingEntry - 0x%x (S%04x B%02x D%02x F%02x)\n", SecondLevelPagingEntry, Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+    }
+  }
+
+  //
+  // Do not update FixedSecondLevelPagingEntry
+  //
+  if (SecondLevelPagingEntry != mVtdUnitInformation[VtdIndex].FixedSecondLevelPagingEntry) {
+    Status = SetPageAttribute (
+               VtdIndex,
+               DomainIdentifier,
+               SecondLevelPagingEntry,
+               BaseAddress,
+               Length,
+               IoMmuAccess
+               );
+    if (EFI_ERROR (Status)) {
+      DEBUG ((DEBUG_ERROR,"SetPageAttribute - %r\n", Status));
+      return Status;
+    }
+  }
+
+  InvalidatePageEntry (VtdIndex);
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Always enable the VTd page attribute for the device.
+
+  @param[in]  Segment   The Segment used to identify a VTd engine.
+  @param[in]  SourceId  The SourceId used to identify a VTd engine and table entry.
+
+  @retval EFI_SUCCESS  The VTd entry is updated to always enable all DMA access for the specific device.
+**/
+EFI_STATUS
+AlwaysEnablePageAttribute (
+  IN UINT16         Segment,
+  IN VTD_SOURCE_ID  SourceId
+  )
+{
+  UINTN                          VtdIndex;
+  VTD_EXT_CONTEXT_ENTRY          *ExtContextEntry;
+  VTD_CONTEXT_ENTRY              *ContextEntry;
+  VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry;
+  UINT64                         Pt;
+
+  DEBUG ((DEBUG_INFO,"AlwaysEnablePageAttribute (S%04x B%02x D%02x F%02x)\n", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+
+  VtdIndex = FindVtdIndexByPciDevice (Segment, SourceId, &ExtContextEntry, &ContextEntry);
+  if (VtdIndex == (UINTN)-1) {
+    DEBUG ((DEBUG_ERROR,"AlwaysEnablePageAttribute - Pci device (S%04x B%02x D%02x F%02x) not found!\n", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+    return EFI_DEVICE_ERROR;
+  }
+
+  if (mVtdUnitInformation[VtdIndex].FixedSecondLevelPagingEntry == 0) {
+    DEBUG((DEBUG_INFO, "CreateSecondLevelPagingEntry - %d\n", VtdIndex));
+    mVtdUnitInformation[VtdIndex].FixedSecondLevelPagingEntry = CreateSecondLevelPagingEntry (VtdIndex, EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE, mVtdUnitInformation[VtdIndex].Is5LevelPaging);
+  }
+
+  SecondLevelPagingEntry = mVtdUnitInformation[VtdIndex].FixedSecondLevelPagingEntry;
+  Pt = (UINT64)RShiftU64 ((UINT64)(UINTN)SecondLevelPagingEntry, 12);
+  if (ExtContextEntry != NULL) {
+    ExtContextEntry->Bits.SecondLevelPageTranslationPointerLo = (UINT32) Pt;
+    ExtContextEntry->Bits.SecondLevelPageTranslationPointerHi = (UINT32) RShiftU64(Pt, 20);
+    ExtContextEntry->Bits.DomainIdentifier = ((1 << (UINT8)((UINTN)mVtdUnitInformation[VtdIndex].CapReg.Bits.ND * 2 + 4)) - 1);
+    ExtContextEntry->Bits.Present = 1;
+    FlushPageTableMemory (VtdIndex, (UINTN)ExtContextEntry, sizeof(*ExtContextEntry));
+  } else if (ContextEntry != NULL) {
+    ContextEntry->Bits.SecondLevelPageTranslationPointerLo = (UINT32) Pt;
+    ContextEntry->Bits.SecondLevelPageTranslationPointerHi = (UINT32) RShiftU64(Pt, 20);
+    ContextEntry->Bits.DomainIdentifier = ((1 << (UINT8)((UINTN)mVtdUnitInformation[VtdIndex].CapReg.Bits.ND * 2 + 4)) - 1);
+    ContextEntry->Bits.Present = 1;
+    FlushPageTableMemory (VtdIndex, (UINTN)ContextEntry, sizeof(*ContextEntry));
+  }
+
+  return EFI_SUCCESS;
+}
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/TranslationTableEx.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/TranslationTableEx.c
new file mode 100644
index 000000000..c07afaf2b
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/TranslationTableEx.c
@@ -0,0 +1,108 @@
+/** @file
+
+  Copyright (c) 2017 - 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "DmaProtection.h"
+
+/**
+  Create extended context entry.
+
+  @param[in] VtdIndex  The index of the VTd engine.
+
+  @retval EFI_SUCCESS          The extended context entry is created.
+  @retval EFI_OUT_OF_RESOURCE  Not enough resource to create extended context entry.
+**/
+EFI_STATUS
+CreateExtContextEntry (
+  IN UINTN  VtdIndex
+  )
+{
+  UINTN                  Index;
+  VOID                   *Buffer;
+  UINTN                  RootPages;
+  UINTN                  ContextPages;
+  VTD_EXT_ROOT_ENTRY     *ExtRootEntry;
+  VTD_EXT_CONTEXT_ENTRY  *ExtContextEntryTable;
+  VTD_EXT_CONTEXT_ENTRY  *ExtContextEntry;
+  VTD_SOURCE_ID          *PciSourceId;
+  VTD_SOURCE_ID          SourceId;
+  UINTN                  MaxBusNumber;
+  UINTN                  EntryTablePages;
+
+  MaxBusNumber = 0;
+  for (Index = 0; Index < mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceDataNumber; Index++) {
+    PciSourceId = &mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceData[Index].PciSourceId;
+    if (PciSourceId->Bits.Bus > MaxBusNumber) {
+      MaxBusNumber = PciSourceId->Bits.Bus;
+    }
+  }
+  DEBUG ((DEBUG_INFO,"  MaxBusNumber - 0x%x\n", MaxBusNumber));
+
+  RootPages = EFI_SIZE_TO_PAGES (sizeof (VTD_EXT_ROOT_ENTRY) * VTD_ROOT_ENTRY_NUMBER);
+  ContextPages = EFI_SIZE_TO_PAGES (sizeof (VTD_EXT_CONTEXT_ENTRY) * VTD_CONTEXT_ENTRY_NUMBER);
+  EntryTablePages = RootPages + ContextPages * (MaxBusNumber + 1);
+  Buffer = AllocateZeroPages (EntryTablePages);
+  if (Buffer == NULL) {
+    DEBUG ((DEBUG_INFO,"Could not Alloc Root Entry Table..\n"));
+    return EFI_OUT_OF_RESOURCES;
+  }
+  mVtdUnitInformation[VtdIndex].ExtRootEntryTable = (VTD_EXT_ROOT_ENTRY *)Buffer;
+  Buffer = (UINT8 *)Buffer + EFI_PAGES_TO_SIZE (RootPages);
+
+  for (Index = 0; Index < mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceDataNumber; Index++) {
+    PciSourceId = &mVtdUnitInformation[VtdIndex].PciDeviceInfo->PciDeviceData[Index].PciSourceId;
+
+    SourceId.Bits.Bus = PciSourceId->Bits.Bus;
+    SourceId.Bits.Device = PciSourceId->Bits.Device;
+    SourceId.Bits.Function = PciSourceId->Bits.Function;
+
+    ExtRootEntry = &mVtdUnitInformation[VtdIndex].ExtRootEntryTable[SourceId.Index.RootIndex];
+    if (ExtRootEntry->Bits.LowerPresent == 0) {
+      ExtRootEntry->Bits.LowerContextTablePointerLo = (UINT32) RShiftU64 ((UINT64)(UINTN)Buffer, 12);
+      ExtRootEntry->Bits.LowerContextTablePointerHi = (UINT32) RShiftU64 ((UINT64)(UINTN)Buffer, 32);
+      ExtRootEntry->Bits.LowerPresent = 1;
+      ExtRootEntry->Bits.UpperContextTablePointerLo = (UINT32) RShiftU64 ((UINT64)(UINTN)Buffer, 12) + 1;
+      ExtRootEntry->Bits.UpperContextTablePointerHi = (UINT32) RShiftU64 (RShiftU64 ((UINT64)(UINTN)Buffer, 12) + 1, 20);
+      ExtRootEntry->Bits.UpperPresent = 1;
+      Buffer = (UINT8 *)Buffer + EFI_PAGES_TO_SIZE (ContextPages);
+    }
+
+    ExtContextEntryTable = (VTD_EXT_CONTEXT_ENTRY *)(UINTN)VTD_64BITS_ADDRESS(ExtRootEntry->Bits.LowerContextTablePointerLo, ExtRootEntry->Bits.LowerContextTablePointerHi);
+    ExtContextEntry = &ExtContextEntryTable[SourceId.Index.ContextIndex];
+    ExtContextEntry->Bits.TranslationType = 0;
+    ExtContextEntry->Bits.FaultProcessingDisable = 0;
+    ExtContextEntry->Bits.Present = 0;
+
+    DEBUG ((DEBUG_INFO,"DOMAIN: S%04x, B%02x D%02x F%02x\n", mVtdUnitInformation[VtdIndex].Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+
+    mVtdUnitInformation[VtdIndex].Is5LevelPaging = FALSE;
+    if ((mVtdUnitInformation[VtdIndex].CapReg.Bits.SAGAW & BIT3) != 0) {
+      mVtdUnitInformation[VtdIndex].Is5LevelPaging = TRUE;
+      if ((mAcpiDmarTable->HostAddressWidth <= 48) &&
+          ((mVtdUnitInformation[VtdIndex].CapReg.Bits.SAGAW & BIT2) != 0)) {
+        mVtdUnitInformation[VtdIndex].Is5LevelPaging = FALSE;
+      }
+    } else if ((mVtdUnitInformation[VtdIndex].CapReg.Bits.SAGAW & BIT2) == 0) {
+      DEBUG((DEBUG_ERROR, "!!!! Page-table type is not supported on VTD %d !!!!\n", VtdIndex));
+      return EFI_UNSUPPORTED;
+    }
+
+    if (mVtdUnitInformation[VtdIndex].Is5LevelPaging) {
+      ExtContextEntry->Bits.AddressWidth = 0x3;
+      DEBUG((DEBUG_INFO, "Using 5-level page-table on VTD %d\n", VtdIndex));
+    } else {
+      ExtContextEntry->Bits.AddressWidth = 0x2;
+      DEBUG((DEBUG_INFO, "Using 4-level page-table on VTD %d\n", VtdIndex));
+    }
+
+  }
+
+  FlushPageTableMemory (VtdIndex, (UINTN)mVtdUnitInformation[VtdIndex].ExtRootEntryTable, EFI_PAGES_TO_SIZE(EntryTablePages));
+
+  return EFI_SUCCESS;
+}
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/VtdLog.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/VtdLog.c
new file mode 100644
index 000000000..0ac4758ff
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/VtdLog.c
@@ -0,0 +1,383 @@
+/** @file
+
+  Copyright (c) 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "DmaProtection.h"
+
+UINT8   *mVtdLogBuffer = NULL;
+
+UINT8   *mVtdLogDxeFreeBuffer = NULL;
+UINT32  mVtdLogDxeBufferUsed = 0;
+
+UINT32  mVtdLogPeiPostMemBufferUsed = 0;
+
+UINT8   mVtdLogPeiError = 0;
+UINT16  mVtdLogDxeError = 0;
+
+/**
+  Allocate memory buffer for VTd log items.
+
+  @param[in] MemorySize  Required memory buffer size.
+
+  @retval Buffer address
+
+**/
+UINT8 *
+EFIAPI
+VTdLogAllocMemory (
+  IN CONST UINT32  MemorySize
+  )
+{
+  UINT8  *Buffer;
+
+  Buffer = NULL;
+  if (mVtdLogDxeFreeBuffer != NULL) {
+    if ((mVtdLogDxeBufferUsed + MemorySize) <= PcdGet32 (PcdVTdDxeLogBufferSize)) {
+      Buffer = mVtdLogDxeFreeBuffer;
+
+      mVtdLogDxeFreeBuffer += MemorySize;
+      mVtdLogDxeBufferUsed += MemorySize;
+    } else {
+      mVtdLogDxeError |= VTD_LOG_ERROR_BUFFER_FULL;
+    }
+  }
+  return Buffer;
+}
+
+/**
+  Add a new VTd log event.
+
+  @param[in] EventType  Event type
+  @param[in] Data1      First parameter
+  @param[in] Data2      Second parameter
+
+**/
+VOID
+EFIAPI
+VTdLogAddEvent (
+  IN CONST VTDLOG_EVENT_TYPE  EventType,
+  IN CONST UINT64             Data1,
+  IN CONST UINT64             Data2
+  )
+{
+  VTDLOG_EVENT_2PARAM  *Item;
+
+  if (PcdGet8 (PcdVTdLogLevel) == 0) {
+    return;
+  } else if ((PcdGet8 (PcdVTdLogLevel) == 1) && (EventType >= VTDLOG_DXE_ADVANCED)) {
+    return;
+  }
+
+  Item = (VTDLOG_EVENT_2PARAM *) VTdLogAllocMemory (sizeof (VTDLOG_EVENT_2PARAM));
+  if (Item != NULL) {
+    Item->Data1 = Data1;
+    Item->Data2 = Data2;
+
+    Item->Header.DataSize  = sizeof (VTDLOG_EVENT_2PARAM);
+    Item->Header.LogType   = (UINT64) 1 << EventType;
+    Item->Header.Timestamp = AsmReadTsc ();
+  }
+}
+
+/**
+  Add a new VTd log event with data.
+
+  @param[in] EventType  Event type
+  @param[in] Param      parameter
+  @param[in] Data       Data
+  @param[in] DataSize   Data size
+
+**/
+VOID
+EFIAPI
+VTdLogAddDataEvent (
+  IN CONST VTDLOG_EVENT_TYPE  EventType,
+  IN CONST UINT64             Param,
+  IN CONST VOID               *Data,
+  IN CONST UINT32             DataSize
+  )
+{
+  VTDLOG_EVENT_CONTEXT  *Item;
+  UINT32                EventSize;
+
+  if (PcdGet8 (PcdVTdLogLevel) == 0) {
+    return;
+  } else if ((PcdGet8 (PcdVTdLogLevel) == 1) && (EventType >= VTDLOG_DXE_ADVANCED)) {
+    return;
+  }
+
+  EventSize = sizeof (VTDLOG_EVENT_CONTEXT) + DataSize - 1;
+
+  Item = (VTDLOG_EVENT_CONTEXT *) VTdLogAllocMemory (EventSize);
+  if (Item != NULL) {
+    Item->Param = Param;
+    CopyMem (Item->Data, Data, DataSize);
+
+    Item->Header.DataSize  = EventSize;
+    Item->Header.LogType   = (UINT64) 1 << EventType;
+    Item->Header.Timestamp = AsmReadTsc ();
+  }
+}
+
+/**
+  Get Event Items From Pei Pre-Mem Buffer
+
+  @param[in]     Buffer          Pre-Memory data buffer.
+  @param[in]     Context         Event context
+  @param[in out] CallbackHandle  Callback function for each VTd log event
+**/
+UINT64
+EFIAPI
+VTdGetEventItemsFromPeiPreMemBuffer (
+  IN VTDLOG_PEI_PRE_MEM_INFO         *InfoBuffer,
+  IN VOID                            *Context,
+  IN OUT EDKII_VTD_LOG_HANDLE_EVENT  CallbackHandle
+  )
+{
+  UINTN                Index;
+  UINT64               EventCount;
+  VTDLOG_EVENT_2PARAM  Event;
+
+  if (InfoBuffer == NULL) {
+    return 0;
+  }
+
+  EventCount = 0;
+  for (Index = 0; Index < VTD_LOG_PEI_PRE_MEM_BAR_MAX; Index++) {
+    if (InfoBuffer[Index].Mode == VTD_LOG_PEI_PRE_MEM_NOT_USED) {
+      continue;
+    }
+    if (CallbackHandle) {
+      Event.Header.DataSize  = sizeof (VTDLOG_EVENT_2PARAM);
+      Event.Header.Timestamp = 0;
+
+      Event.Header.LogType = ((UINT64) 1) << VTDLOG_PEI_PRE_MEM_DMA_PROTECT;
+      Event.Data1 = InfoBuffer[Index].BarAddress;
+      Event.Data2 = InfoBuffer[Index].Mode;
+      Event.Data2 |= InfoBuffer[Index].Status<<8;
+      CallbackHandle (Context, &Event.Header);
+    }
+    EventCount++;
+  }
+
+  return EventCount;
+}
+
+/**
+  Get Event Items From Pei Post-Mem/Dxe Buffer
+
+  @param[in]     Buffer          Data buffer.
+  @param[in]     BufferUsed      Data buffer used.
+  @param[in]     Context         Event context
+  @param[in out] CallbackHandle  Callback function for each VTd log event
+**/
+UINT64
+EFIAPI
+VTdGetEventItemsFromBuffer (
  IN UINT8                           *Buffer,
+  IN UINT32                          BufferUsed,
+  IN VOID                            *Context,
+  IN OUT EDKII_VTD_LOG_HANDLE_EVENT  CallbackHandle
+  )
+{
+  UINT64               Count;
+  VTDLOG_EVENT_HEADER  *Header;
+
+  Count = 0;
+  if (Buffer != NULL) {
+    while (BufferUsed > 0) {
+      Header = (VTDLOG_EVENT_HEADER *) Buffer;
+      if (BufferUsed >= Header->DataSize) {
+        if (CallbackHandle) {
+          CallbackHandle (Context, Header);
+        }
+        Buffer += Header->DataSize;
+        BufferUsed -= Header->DataSize;
+        Count++;
+      } else {
+        BufferUsed = 0;
+      }
+    }
+  }
+
+  return Count;
+}
+
+/**
+  Generate the VTd log state.
+
+  @param[in]     EventType       Event type
+  @param[in]     Data1           First parameter
+  @param[in]     Data2           Second parameter
+  @param[in]     Context         Event context
+  @param[in out] CallbackHandle  Callback function for each VTd log event
+**/
+VOID
+EFIAPI
+VTdGenerateStateEvent (
+  IN VTDLOG_EVENT_TYPE               EventType,
+  IN UINT64                          Data1,
+  IN UINT64                          Data2,
+  IN VOID                            *Context,
+  IN OUT EDKII_VTD_LOG_HANDLE_EVENT  CallbackHandle
+  )
+{
+  VTDLOG_EVENT_2PARAM  Item;
+
+  Item.Data1 = Data1;
+  Item.Data2 = Data2;
+
+  Item.Header.DataSize  = sizeof (VTDLOG_EVENT_2PARAM);
+  Item.Header.LogType   = (UINT64) 1 << EventType;
+  Item.Header.Timestamp = 0;
+
+  if (CallbackHandle) {
+    CallbackHandle (Context, &Item.Header);
+  }
+}
+
+/**
+  Get the VTd log events.
+
+  @param[in]     Context         Event context
+  @param[in out] CallbackHandle  Callback function for each VTd log event
+
+  @retval UINT32  Number of events
+**/
+UINT64
+EFIAPI
+VTdLogGetEvents (
+  IN VOID                            *Context,
+  IN OUT EDKII_VTD_LOG_HANDLE_EVENT  CallbackHandle
+  )
+{
+  UINT64  CountPeiPreMem;
+  UINT64  CountPeiPostMem;
+  UINT64  CountDxe;
+  UINT8   *Buffer;
+
+  if (mVtdLogBuffer == NULL) {
+    return 0;
+  }
+
+  //
+  // PEI pre-memory phase
+  //
+  Buffer = &mVtdLogBuffer[PcdGet32 (PcdVTdDxeLogBufferSize) + PcdGet32 (PcdVTdPeiPostMemLogBufferSize)];
+  CountPeiPreMem = VTdGetEventItemsFromPeiPreMemBuffer ((VTDLOG_PEI_PRE_MEM_INFO *) Buffer, Context, CallbackHandle);
+  DEBUG ((DEBUG_INFO, "Find %d in PEI pre mem phase\n", CountPeiPreMem));
+
+  //
+  // PEI post memory phase
+  //
+  Buffer = &mVtdLogBuffer[PcdGet32 (PcdVTdDxeLogBufferSize)];
+  CountPeiPostMem = VTdGetEventItemsFromBuffer (Buffer, mVtdLogPeiPostMemBufferUsed, Context, CallbackHandle);
+  if (mVtdLogPeiError != 0) {
+    VTdGenerateStateEvent (VTDLOG_PEI_BASIC, mVtdLogPeiError, 0, Context, CallbackHandle);
+    CountPeiPostMem++;
+  }
+  DEBUG ((DEBUG_INFO, "Find %d in PEI post mem phase\n", CountPeiPostMem));
+
+  //
+  // DXE phase
+  //
+  Buffer = &mVtdLogBuffer[0];
+  CountDxe = VTdGetEventItemsFromBuffer (Buffer, mVtdLogDxeBufferUsed, Context, CallbackHandle);
+  if (mVtdLogDxeError != 0) {
+    VTdGenerateStateEvent (VTDLOG_DXE_BASIC, mVtdLogDxeError, 0, Context, CallbackHandle);
+    CountDxe++;
+  }
+  DEBUG ((DEBUG_INFO, "Find %d in DXE phase\n", CountDxe));
+
+  return CountPeiPreMem + CountPeiPostMem + CountDxe;
+}
+
+EDKII_VTD_LOG_PROTOCOL mIntelVTdLog = {
+  EDKII_VTD_LOG_PROTOCOL_REVISION,
+  VTdLogGetEvents
+};
+
+/**
+  Initializes the VTd Log.
+
+**/
+VOID
+EFIAPI
+VTdLogInitialize(
+  VOID
+  )
+{
+  UINT32                 TotalBufferSize;
+  EFI_STATUS             Status;
+  VOID                   *HobPtr;
+  VTDLOG_PEI_BUFFER_HOB  *HobPeiBuffer;
+  EFI_HANDLE             Handle;
+  UINT32                 BufferOffset;
+
+  if (PcdGet8 (PcdVTdLogLevel) == 0) {
+    return;
+  }
+
+  if (mVtdLogBuffer != NULL) {
+    return;
+  }
+
+  TotalBufferSize = PcdGet32 (PcdVTdDxeLogBufferSize) + PcdGet32 (PcdVTdPeiPostMemLogBufferSize) + sizeof (VTDLOG_PEI_PRE_MEM_INFO) * VTD_LOG_PEI_PRE_MEM_BAR_MAX;
+
+  Status = gBS->AllocatePool (EfiBootServicesData, TotalBufferSize, &mVtdLogBuffer);
+  if (EFI_ERROR (Status)) {
+    return;
+  }
+
+  //
+  // DXE Buffer
+  //
+  if (PcdGet32 (PcdVTdDxeLogBufferSize) > 0) {
+    mVtdLogDxeFreeBuffer = mVtdLogBuffer;
+    mVtdLogDxeBufferUsed = 0;
+  }
+
+  //
+  // Get PEI pre-memory buffer offset
+  //
+  BufferOffset = PcdGet32 (PcdVTdDxeLogBufferSize) + PcdGet32 (PcdVTdPeiPostMemLogBufferSize);
+
+  HobPtr = GetFirstGuidHob (&gVTdLogBufferHobGuid);
+  if (HobPtr != NULL) {
+    HobPeiBuffer = GET_GUID_HOB_DATA (HobPtr);
+
+    //
+    // Copy PEI pre-memory phase VTd log.
+    //
+    CopyMem (&mVtdLogBuffer[BufferOffset], &HobPeiBuffer->PreMemInfo, sizeof (VTDLOG_PEI_PRE_MEM_INFO) * VTD_LOG_PEI_PRE_MEM_BAR_MAX);
+
+    //
+    // Copy PEI post-memory phase VTd log.
+    //
+    BufferOffset = PcdGet32 (PcdVTdDxeLogBufferSize);
+    if (PcdGet32 (PcdVTdPeiPostMemLogBufferSize) > 0) {
+      if (HobPeiBuffer->PostMemBufferUsed > 0) {
+        mVtdLogPeiPostMemBufferUsed = HobPeiBuffer->PostMemBufferUsed;
+        CopyMem (&mVtdLogBuffer[BufferOffset], (UINT8 *) (UINTN) HobPeiBuffer->PostMemBuffer, mVtdLogPeiPostMemBufferUsed);
+      }
+    }
+
+    mVtdLogPeiError = HobPeiBuffer->VtdLogPeiError;
+  } else {
+    //
+    // Did not find PEI VTd log; clear PEI pre-memory phase buffer.
+    //
+    ZeroMem (&mVtdLogBuffer[BufferOffset], sizeof (VTDLOG_PEI_PRE_MEM_INFO) * VTD_LOG_PEI_PRE_MEM_BAR_MAX);
+  }
+
+  Handle = NULL;
+  Status = gBS->InstallMultipleProtocolInterfaces (
+                  &Handle,
+                  &gEdkiiVTdLogProtocolGuid,
+                  &mIntelVTdLog,
+                  NULL
+                  );
+  ASSERT_EFI_ERROR (Status);
+}
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/VtdReg.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/VtdReg.c
new file mode 100644
index 000000000..dd0c49698
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCoreDxe/VtdReg.c
@@ -0,0 +1,757 @@
+/** @file
+
+  Copyright (c) 2017 - 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "DmaProtection.h"
+
+#define VTD_CAP_REG_NFR_MAX (256)
+
+UINTN                 mVtdUnitNumber       = 0;
+VTD_UNIT_INFORMATION  *mVtdUnitInformation = NULL;
+VTD_REGESTER_INFO     *mVtdRegsInfoBuffer  = NULL;
+
+BOOLEAN               mVtdEnabled;
+
+/**
+  Flush VTd page table and context table memory.
+
+  This action is to make sure the IOMMU engine can get final data in memory.
+
+  @param[in] VtdIndex The index used to identify a VTd engine.
+  @param[in] Base     The base address of memory to be flushed.
+  @param[in] Size     The size of memory in bytes to be flushed.
+**/
+VOID
+FlushPageTableMemory (
+  IN UINTN VtdIndex,
+  IN UINTN Base,
+  IN UINTN Size
+  )
+{
+  if (mVtdUnitInformation[VtdIndex].ECapReg.Bits.C == 0) {
+    WriteBackDataCacheRange ((VOID *) Base, Size);
+  }
+}
+
+/**
+  Prepare the cache invalidation interface.
+
+  @param[in] VtdIndex The index used to identify a VTd engine.
+
+  @retval EFI_SUCCESS          The operation was successful.
+  @retval EFI_UNSUPPORTED      Invalidation method is not supported.
+  @retval EFI_OUT_OF_RESOURCES A memory allocation failed.
+**/ +EFI_STATUS +PerpareCacheInvalidationInterface ( + IN UINTN VtdIndex + ) +{ + UINT32 Reg32; + VTD_IQA_REG IqaReg; + VTD_UNIT_INFORMATION *VtdUnitInfo; + UINTN VtdUnitBaseAddress; + + VtdUnitInfo =3D &mVtdUnitInformation[VtdIndex]; + VtdUnitBaseAddress =3D VtdUnitInfo->VtdUnitBaseAddress; + + if (VtdUnitInfo->VerReg.Bits.Major <=3D 5) { + VtdUnitInfo->EnableQueuedInvalidation =3D 0; + DEBUG ((DEBUG_INFO, "Use Register-based Invalidation Interface for eng= ine [%d]\n", VtdIndex)); + return EFI_SUCCESS; + } + + if (VtdUnitInfo->ECapReg.Bits.QI =3D=3D 0) { + DEBUG ((DEBUG_ERROR, "Hardware does not support queued invalidations i= nterface for engine [%d]\n", VtdIndex)); + return EFI_UNSUPPORTED; + } + + VtdUnitInfo->EnableQueuedInvalidation =3D 1; + DEBUG ((DEBUG_INFO, "Use Queued Invalidation Interface for engine [%d]\n= ", VtdIndex)); + + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + if ((Reg32 & B_GSTS_REG_QIES) !=3D 0) { + DEBUG ((DEBUG_ERROR,"Queued Invalidation Interface was enabled.\n")); + + VtdLibDisableQueuedInvalidationInterface (VtdUnitBaseAddress); + } + + // + // Initialize the Invalidation Queue Tail Register to zero. 
+ // + MmioWrite64 (VtdUnitBaseAddress + R_IQT_REG, 0); + + // + // Setup the IQ address, size and descriptor width through the Invalidat= ion Queue Address Register + // + if (VtdUnitInfo->QiDescBuffer =3D=3D NULL) { + VtdUnitInfo->QiDescBufferSize =3D (sizeof (QI_256_DESC) * ((UINTN) 1 <= < (VTD_INVALIDATION_QUEUE_SIZE + 7))); + VtdUnitInfo->QiDescBuffer =3D AllocatePages (EFI_SIZE_TO_PAGES (VtdUni= tInfo->QiDescBufferSize)); + if (VtdUnitInfo->QiDescBuffer =3D=3D NULL) { + DEBUG ((DEBUG_ERROR,"Could not Alloc Invalidation Queue Buffer.\n")); + VTdLogAddEvent (VTDLOG_DXE_QUEUED_INVALIDATION, VTD_LOG_QI_ERROR_OUT= _OF_RESOURCES, VtdUnitBaseAddress); + return EFI_OUT_OF_RESOURCES; + } + } + + DEBUG ((DEBUG_INFO, "Invalidation Queue Buffer Size : %d\n", VtdUnitInfo= ->QiDescBufferSize)); + // + // 4KB Aligned address + // + IqaReg.Uint64 =3D (UINT64) (UINTN) VtdUnitInfo->QiDescBuffer; + IqaReg.Bits.DW =3D VTD_QUEUED_INVALIDATION_DESCRIPTOR_WIDTH; + IqaReg.Bits.QS =3D VTD_INVALIDATION_QUEUE_SIZE; + MmioWrite64 (VtdUnitBaseAddress + R_IQA_REG, IqaReg.Uint64); + IqaReg.Uint64 =3D MmioRead64 (VtdUnitBaseAddress + R_IQA_REG); + DEBUG ((DEBUG_INFO, "IQA_REG =3D 0x%lx, IQH_REG =3D 0x%lx\n", IqaReg.Uin= t64, MmioRead64 (VtdUnitBaseAddress + R_IQH_REG))); + + // + // Enable the queued invalidation interface through the Global Command R= egister. + // When enabled, hardware sets the QIES field in the Global Status Regis= ter. + // + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + Reg32 |=3D B_GMCD_REG_QIE; + MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, Reg32); + DEBUG ((DEBUG_INFO, "Enable Queued Invalidation Interface. 
GCMD_REG = 0x%x\n", Reg32));
+  do {
+    Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+  } while ((Reg32 & B_GSTS_REG_QIES) == 0);
+
+  VTdLogAddEvent (VTDLOG_DXE_QUEUED_INVALIDATION, VTD_LOG_QI_ENABLE, VtdUnitBaseAddress);
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Submit the queued invalidation descriptor to the remapping
+  hardware unit and wait for its completion.
+
+  @param[in] VtdUnitBaseAddress The base address of the VTd engine.
+  @param[in] Desc               The invalidate descriptor
+
+  @retval EFI_SUCCESS           The operation was successful.
+  @retval RETURN_DEVICE_ERROR   A fault is detected.
+  @retval EFI_INVALID_PARAMETER Parameter is invalid.
+**/
+EFI_STATUS
+SubmitQueuedInvalidationDescriptor (
+  IN UINTN       VtdUnitBaseAddress,
+  IN QI_256_DESC *Desc
+  )
+{
+  EFI_STATUS           Status;
+  VTD_REGESTER_QI_INFO RegisterQi;
+
+  Status = VtdLibSubmitQueuedInvalidationDescriptor (VtdUnitBaseAddress, Desc, FALSE);
+  if (Status == EFI_DEVICE_ERROR) {
+    RegisterQi.BaseAddress = VtdUnitBaseAddress;
+    RegisterQi.FstsReg     = MmioRead32 (VtdUnitBaseAddress + R_FSTS_REG);
+    RegisterQi.IqercdReg   = MmioRead64 (VtdUnitBaseAddress + R_IQERCD_REG);
+    VTdLogAddDataEvent (VTDLOG_DXE_REGISTER, VTDLOG_REGISTER_QI, &RegisterQi, sizeof (VTD_REGESTER_QI_INFO));
+
+    MmioWrite32 (VtdUnitBaseAddress + R_FSTS_REG, RegisterQi.FstsReg & (B_FSTS_REG_IQE | B_FSTS_REG_ITE | B_FSTS_REG_ICE));
+  }
+
+  return Status;
+}
+
+/**
+  Invalidate VTd context cache.
+
+  @param[in] VtdIndex The index used to identify a VTd engine.
+**/ +EFI_STATUS +InvalidateContextCache ( + IN UINTN VtdIndex + ) +{ + UINT64 Reg64; + QI_256_DESC QiDesc; + + if (mVtdUnitInformation[VtdIndex].EnableQueuedInvalidation =3D=3D 0) { + // + // Register-based Invalidation + // + Reg64 =3D MmioRead64 (mVtdUnitInformation[VtdIndex].VtdUnitBaseAddress= + R_CCMD_REG); + if ((Reg64 & B_CCMD_REG_ICC) !=3D 0) { + DEBUG ((DEBUG_ERROR,"ERROR: InvalidateContextCache: B_CCMD_REG_ICC i= s set for VTD(%d)\n",VtdIndex)); + return EFI_DEVICE_ERROR; + } + + Reg64 &=3D ((~B_CCMD_REG_ICC) & (~B_CCMD_REG_CIRG_MASK)); + Reg64 |=3D (B_CCMD_REG_ICC | V_CCMD_REG_CIRG_GLOBAL); + MmioWrite64 (mVtdUnitInformation[VtdIndex].VtdUnitBaseAddress + R_CCMD= _REG, Reg64); + + do { + Reg64 =3D MmioRead64 (mVtdUnitInformation[VtdIndex].VtdUnitBaseAddre= ss + R_CCMD_REG); + } while ((Reg64 & B_CCMD_REG_ICC) !=3D 0); + } else { + // + // Queued Invalidation + // + QiDesc.Uint64[0] =3D QI_CC_FM(0) | QI_CC_SID(0) | QI_CC_DID(0) | QI_CC= _GRAN(1) | QI_CC_TYPE; + QiDesc.Uint64[1] =3D 0; + QiDesc.Uint64[2] =3D 0; + QiDesc.Uint64[3] =3D 0; + + return SubmitQueuedInvalidationDescriptor(mVtdUnitInformation[VtdIndex= ].VtdUnitBaseAddress, &QiDesc); + } + return EFI_SUCCESS; +} + +/** + Invalidate VTd IOTLB. + + @param[in] VtdIndex The index used to identify a VTd engine. 
+**/ +EFI_STATUS +InvalidateIOTLB ( + IN UINTN VtdIndex + ) +{ + UINT64 Reg64; + QI_256_DESC QiDesc; + + if (mVtdUnitInformation[VtdIndex].EnableQueuedInvalidation =3D=3D 0) { + // + // Register-based Invalidation + // + Reg64 =3D MmioRead64 (mVtdUnitInformation[VtdIndex].VtdUnitBaseAddress= + (mVtdUnitInformation[VtdIndex].ECapReg.Bits.IRO * 16) + R_IOTLB_REG); + if ((Reg64 & B_IOTLB_REG_IVT) !=3D 0) { + DEBUG ((DEBUG_ERROR,"ERROR: InvalidateIOTLB: B_IOTLB_REG_IVT is set = for VTD(%d)\n", VtdIndex)); + return EFI_DEVICE_ERROR; + } + + Reg64 &=3D ((~B_IOTLB_REG_IVT) & (~B_IOTLB_REG_IIRG_MASK)); + Reg64 |=3D (B_IOTLB_REG_IVT | V_IOTLB_REG_IIRG_GLOBAL); + MmioWrite64 (mVtdUnitInformation[VtdIndex].VtdUnitBaseAddress + (mVtdU= nitInformation[VtdIndex].ECapReg.Bits.IRO * 16) + R_IOTLB_REG, Reg64); + + do { + Reg64 =3D MmioRead64 (mVtdUnitInformation[VtdIndex].VtdUnitBaseAddre= ss + (mVtdUnitInformation[VtdIndex].ECapReg.Bits.IRO * 16) + R_IOTLB_REG); + } while ((Reg64 & B_IOTLB_REG_IVT) !=3D 0); + } else { + // + // Queued Invalidation + // + QiDesc.Uint64[0] =3D QI_IOTLB_DID(0) | QI_IOTLB_DR(CAP_READ_DRAIN(mVtd= UnitInformation[VtdIndex].CapReg.Uint64)) | QI_IOTLB_DW(CAP_WRITE_DRAIN(mVt= dUnitInformation[VtdIndex].CapReg.Uint64)) | QI_IOTLB_GRAN(1) | QI_IOTLB_TY= PE; + QiDesc.Uint64[1] =3D QI_IOTLB_ADDR(0) | QI_IOTLB_IH(0) | QI_IOTLB_AM(0= ); + QiDesc.Uint64[2] =3D 0; + QiDesc.Uint64[3] =3D 0; + + return SubmitQueuedInvalidationDescriptor(mVtdUnitInformation[VtdIndex= ].VtdUnitBaseAddress, &QiDesc); + } + + return EFI_SUCCESS; +} + +/** + Invalid VTd global IOTLB. + + @param[in] VtdIndex The index of VTd engine. + + @retval EFI_SUCCESS VTd global IOTLB is invalidated. + @retval EFI_DEVICE_ERROR VTd global IOTLB is not invalidated. 
+**/ +EFI_STATUS +InvalidateVtdIOTLBGlobal ( + IN UINTN VtdIndex + ) +{ + if (!mVtdEnabled) { + return EFI_SUCCESS; + } + + DEBUG((DEBUG_VERBOSE, "InvalidateVtdIOTLBGlobal(%d)\n", VtdIndex)); + + // + // Write Buffer Flush before invalidation + // + VtdLibFlushWriteBuffer (mVtdUnitInformation[VtdIndex].VtdUnitBaseAddress= ); + + // + // Invalidate the context cache + // + if (mVtdUnitInformation[VtdIndex].HasDirtyContext) { + InvalidateContextCache (VtdIndex); + } + + // + // Invalidate the IOTLB cache + // + if (mVtdUnitInformation[VtdIndex].HasDirtyContext || mVtdUnitInformation= [VtdIndex].HasDirtyPages) { + InvalidateIOTLB (VtdIndex); + } + + return EFI_SUCCESS; +} + +/** + Prepare VTD configuration. +**/ +VOID +PrepareVtdConfig ( + VOID + ) +{ + UINTN Index; + UINTN DomainNumber; + EFI_STATUS Status; + + if (mVtdRegsInfoBuffer =3D=3D NULL) { + mVtdRegsInfoBuffer =3D AllocateZeroPool (sizeof (VTD_REGESTER_INFO) + = sizeof (VTD_UINT128) * VTD_CAP_REG_NFR_MAX); + ASSERT (mVtdRegsInfoBuffer !=3D NULL); + } + + // + // Dump VTd error before DXE phase + // + DumpVtdIfError (); + + for (Index =3D 0; Index < mVtdUnitNumber; Index++) { + DEBUG ((DEBUG_INFO, "Dump VTd Capability (%d)\n", Index)); + mVtdUnitInformation[Index].VerReg.Uint32 =3D MmioRead32 (mVtdUnitInfor= mation[Index].VtdUnitBaseAddress + R_VER_REG); + DumpVtdVerRegs (&mVtdUnitInformation[Index].VerReg); + mVtdUnitInformation[Index].CapReg.Uint64 =3D MmioRead64 (mVtdUnitInfor= mation[Index].VtdUnitBaseAddress + R_CAP_REG); + DumpVtdCapRegs (&mVtdUnitInformation[Index].CapReg); + mVtdUnitInformation[Index].ECapReg.Uint64 =3D MmioRead64 (mVtdUnitInfo= rmation[Index].VtdUnitBaseAddress + R_ECAP_REG); + DumpVtdECapRegs (&mVtdUnitInformation[Index].ECapReg); + + if ((mVtdUnitInformation[Index].CapReg.Bits.SLLPS & BIT0) =3D=3D 0) { + DEBUG((DEBUG_WARN, "!!!! 
2MB super page is not supported on VTD %d != !!!\n", Index)); + } + if ((mVtdUnitInformation[Index].CapReg.Bits.SAGAW & BIT3) !=3D 0) { + DEBUG((DEBUG_INFO, "Support 5-level page-table on VTD %d\n", Index)); + } + if ((mVtdUnitInformation[Index].CapReg.Bits.SAGAW & BIT2) !=3D 0) { + DEBUG((DEBUG_INFO, "Support 4-level page-table on VTD %d\n", Index)); + } + if ((mVtdUnitInformation[Index].CapReg.Bits.SAGAW & (BIT3 | BIT2)) =3D= =3D 0) { + DEBUG((DEBUG_ERROR, "!!!! Page-table type 0x%X is not supported on V= TD %d !!!!\n", Index, mVtdUnitInformation[Index].CapReg.Bits.SAGAW)); + return ; + } + + DomainNumber =3D (UINTN)1 << (UINT8)((UINTN)mVtdUnitInformation[Index]= .CapReg.Bits.ND * 2 + 4); + if (mVtdUnitInformation[Index].PciDeviceInfo->PciDeviceDataNumber >=3D= DomainNumber) { + DEBUG((DEBUG_ERROR, "!!!! Pci device Number(0x%x) >=3D DomainNumber(= 0x%x) !!!!\n", mVtdUnitInformation[Index].PciDeviceInfo->PciDeviceDataNumbe= r, DomainNumber)); + return ; + } + + Status =3D PerpareCacheInvalidationInterface(Index); + if (EFI_ERROR (Status)) { + ASSERT(FALSE); + return; + } + } + return ; +} + +/** + Disable PMR in all VTd engine. +**/ +VOID +DisablePmr ( + VOID + ) +{ + UINTN Index; + EFI_STATUS Status; + + DEBUG ((DEBUG_INFO,"DisablePmr\n")); + + for (Index =3D 0; Index < mVtdUnitNumber; Index++) { + Status =3D VtdLibDisablePmr (mVtdUnitInformation[Index].VtdUnitBaseAdd= ress); + VTdLogAddEvent (VTDLOG_DXE_DISABLE_PMR, mVtdUnitInformation[Index].Vtd= UnitBaseAddress, Status); + } + + return ; +} + +/** + Update Root Table Address Register + + @param[in] VtdIndex The index used to identify a VTd engine. 
+ @param[in] EnableADM TRUE - Enable ADM in TTM bits +**/ +VOID +UpdateRootTableAddressRegister ( + IN UINTN VtdIndex, + IN BOOLEAN EnableADM + ) +{ + UINT64 Reg64; + + if (mVtdUnitInformation[VtdIndex].ExtRootEntryTable !=3D NULL) { + DEBUG((DEBUG_INFO, "ExtRootEntryTable 0x%x \n", mVtdUnitInformation[Vt= dIndex].ExtRootEntryTable)); + Reg64 =3D (UINT64)(UINTN)mVtdUnitInformation[VtdIndex].ExtRootEntryTab= le | (EnableADM ? V_RTADDR_REG_TTM_ADM : BIT11); + } else { + DEBUG((DEBUG_INFO, "RootEntryTable 0x%x \n", mVtdUnitInformation[VtdIn= dex].RootEntryTable)); + Reg64 =3D (UINT64)(UINTN)mVtdUnitInformation[VtdIndex].RootEntryTable = | (EnableADM ? V_RTADDR_REG_TTM_ADM : 0); + } + MmioWrite64 (mVtdUnitInformation[VtdIndex].VtdUnitBaseAddress + R_RTADDR= _REG, Reg64); +} + +/** + Enable DMAR translation. + + @retval EFI_SUCCESS DMAR translation is enabled. + @retval EFI_DEVICE_ERROR DMAR translation is not enabled. +**/ +EFI_STATUS +EnableDmar ( + VOID + ) +{ + UINTN Index; + UINTN VtdUnitBaseAddress; + BOOLEAN TEWasEnabled; + + for (Index =3D 0; Index < mVtdUnitNumber; Index++) { + VtdUnitBaseAddress =3D mVtdUnitInformation[Index].VtdUnitBaseAddress; + DEBUG((DEBUG_INFO, ">>>>>>EnableDmar() for engine [%d] BAR [0x%x]\n", = Index, VtdUnitBaseAddress)); + + // + // Check TE was enabled or not. + // + TEWasEnabled =3D ((MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG) & B_GS= TS_REG_TE) =3D=3D B_GSTS_REG_TE); + + if (TEWasEnabled && (mVtdUnitInformation[Index].ECapReg.Bits.ADMS =3D= =3D 1) && PcdGetBool (PcdVTdSupportAbortDmaMode)) { + // + // For implementations reporting Enhanced SRTP Support (ESRTPS) fiel= d as + // Clear in the Capability register, software must not modify this f= ield while + // DMA remapping is active (TES=3D1 in Global Status register). 
+ // + if (mVtdUnitInformation[Index].CapReg.Bits.ESRTPS =3D=3D 0) { + VtdLibClearGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_R= EG_TE); + } + + // + // Enable ADM + // + UpdateRootTableAddressRegister (Index, TRUE); + + DEBUG((DEBUG_INFO, "EnableDmar: waiting for RTPS bit to be set... \n= ")); + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_S= RTP); + + DEBUG((DEBUG_INFO, "Enable Abort DMA Mode...\n")); + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_T= E); + + } else { + UpdateRootTableAddressRegister (Index, FALSE); + + DEBUG((DEBUG_INFO, "EnableDmar: waiting for RTPS bit to be set... \n= ")); + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_S= RTP); + } + + // + // Write Buffer Flush before invalidation + // + VtdLibFlushWriteBuffer (VtdUnitBaseAddress); + + // + // Invalidate the context cache + // + InvalidateContextCache (Index); + + // + // Invalidate the IOTLB cache + // + InvalidateIOTLB (Index); + + if (TEWasEnabled && (mVtdUnitInformation[Index].ECapReg.Bits.ADMS =3D= =3D 1) && PcdGetBool (PcdVTdSupportAbortDmaMode)) { + if (mVtdUnitInformation[Index].CapReg.Bits.ESRTPS =3D=3D 0) { + VtdLibClearGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_R= EG_TE); + } + + UpdateRootTableAddressRegister (Index, FALSE); + + DEBUG((DEBUG_INFO, "EnableDmar: waiting for RTPS bit to be set... \n= ")); + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_S= RTP); + } + + // + // Enable VTd + // + DEBUG ((DEBUG_INFO, "EnableDmar: Waiting B_GSTS_REG_TE ...\n")); + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_TE); + + DEBUG ((DEBUG_INFO,"VTD (%d) enabled!<<<<<<\n",Index)); + + VTdLogAddEvent (VTDLOG_DXE_ENABLE_DMAR, mVtdUnitInformation[Index].Vtd= UnitBaseAddress, 0); + } + + // + // Need disable PMR, since we already setup translation table. + // + DisablePmr (); + + mVtdEnabled =3D TRUE; + + return EFI_SUCCESS; +} + +/** + Disable DMAR translation. 
+
+  @retval EFI_SUCCESS      DMAR translation is disabled.
+  @retval EFI_DEVICE_ERROR DMAR translation is not disabled.
+**/
+EFI_STATUS
+DisableDmar (
+  VOID
+  )
+{
+  UINTN                 Index;
+  UINTN                 SubIndex;
+  VTD_UNIT_INFORMATION  *VtdUnitInfo;
+
+  for (Index = 0; Index < mVtdUnitNumber; Index++) {
+    VtdUnitInfo = &mVtdUnitInformation[Index];
+
+    VtdLibDisableDmar (VtdUnitInfo->VtdUnitBaseAddress);
+    VTdLogAddEvent (VTDLOG_DXE_DISABLE_DMAR, VtdUnitInfo->VtdUnitBaseAddress, 0);
+
+    if (VtdUnitInfo->EnableQueuedInvalidation != 0) {
+      //
+      // Disable queued invalidation interface.
+      //
+      VtdLibDisableQueuedInvalidationInterface (VtdUnitInfo->VtdUnitBaseAddress);
+      VTdLogAddEvent (VTDLOG_DXE_QUEUED_INVALIDATION, VTD_LOG_QI_DISABLE, VtdUnitInfo->VtdUnitBaseAddress);
+
+      //
+      // Free descriptor queue memory
+      //
+      if (VtdUnitInfo->QiDescBuffer != NULL) {
+        FreePages (VtdUnitInfo->QiDescBuffer, EFI_SIZE_TO_PAGES (VtdUnitInfo->QiDescBufferSize));
+        VtdUnitInfo->QiDescBuffer = NULL;
+        VtdUnitInfo->QiDescBufferSize = 0;
+      }
+
+      VtdUnitInfo->EnableQueuedInvalidation = 0;
+    }
+  }
+
+  mVtdEnabled = FALSE;
+
+  for (Index = 0; Index < mVtdUnitNumber; Index++) {
+    VtdUnitInfo = &mVtdUnitInformation[Index];
+    DEBUG ((DEBUG_INFO, "engine [%d] access\n", Index));
+    for (SubIndex = 0; SubIndex < VtdUnitInfo->PciDeviceInfo->PciDeviceDataNumber; SubIndex++) {
+      DEBUG ((DEBUG_INFO, "  PCI S%04X B%02x D%02x F%02x - %d\n",
+        VtdUnitInfo->Segment,
+        VtdUnitInfo->PciDeviceInfo->PciDeviceData[SubIndex].PciSourceId.Bits.Bus,
+        VtdUnitInfo->PciDeviceInfo->PciDeviceData[SubIndex].PciSourceId.Bits.Device,
+        VtdUnitInfo->PciDeviceInfo->PciDeviceData[SubIndex].PciSourceId.Bits.Function,
+        VtdUnitInfo->PciDeviceInfo->PciDeviceData[SubIndex].AccessCount
+        ));
+    }
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Dump VTd version registers.
+
+  @param[in] VerReg The version register.
+**/ +VOID +DumpVtdVerRegs ( + IN VTD_VER_REG *VerReg + ) +{ + DEBUG ((DEBUG_INFO, " VerReg - 0x%x\n", VerReg->Uint32)); + DEBUG ((DEBUG_INFO, " Major - 0x%x\n", VerReg->Bits.Major)); + DEBUG ((DEBUG_INFO, " Minor - 0x%x\n", VerReg->Bits.Minor)); +} + +/** + Dump VTd capability registers. + + @param[in] CapReg The capability register. +**/ +VOID +DumpVtdCapRegs ( + IN VTD_CAP_REG *CapReg + ) +{ + DEBUG((DEBUG_INFO, " CapReg - 0x%x\n", CapReg->Uint64)); + DEBUG((DEBUG_INFO, " ND - 0x%x\n", CapReg->Bits.ND)); + DEBUG((DEBUG_INFO, " AFL - 0x%x\n", CapReg->Bits.AFL)); + DEBUG((DEBUG_INFO, " RWBF - 0x%x\n", CapReg->Bits.RWBF)); + DEBUG((DEBUG_INFO, " PLMR - 0x%x\n", CapReg->Bits.PLMR)); + DEBUG((DEBUG_INFO, " PHMR - 0x%x\n", CapReg->Bits.PHMR)); + DEBUG((DEBUG_INFO, " CM - 0x%x\n", CapReg->Bits.CM)); + DEBUG((DEBUG_INFO, " SAGAW - 0x%x\n", CapReg->Bits.SAGAW)); + DEBUG((DEBUG_INFO, " MGAW - 0x%x\n", CapReg->Bits.MGAW)); + DEBUG((DEBUG_INFO, " ZLR - 0x%x\n", CapReg->Bits.ZLR)); + DEBUG((DEBUG_INFO, " FRO - 0x%x\n", CapReg->Bits.FRO)); + DEBUG((DEBUG_INFO, " SLLPS - 0x%x\n", CapReg->Bits.SLLPS)); + DEBUG((DEBUG_INFO, " PSI - 0x%x\n", CapReg->Bits.PSI)); + DEBUG((DEBUG_INFO, " NFR - 0x%x\n", CapReg->Bits.NFR)); + DEBUG((DEBUG_INFO, " MAMV - 0x%x\n", CapReg->Bits.MAMV)); + DEBUG((DEBUG_INFO, " DWD - 0x%x\n", CapReg->Bits.DWD)); + DEBUG((DEBUG_INFO, " DRD - 0x%x\n", CapReg->Bits.DRD)); + DEBUG((DEBUG_INFO, " FL1GP - 0x%x\n", CapReg->Bits.FL1GP)); + DEBUG((DEBUG_INFO, " PI - 0x%x\n", CapReg->Bits.PI)); +} + +/** + Dump VTd extended capability registers. + + @param[in] ECapReg The extended capability register. 
+**/ +VOID +DumpVtdECapRegs ( + IN VTD_ECAP_REG *ECapReg + ) +{ + DEBUG((DEBUG_INFO, " ECapReg - 0x%lx\n", ECapReg->Uint64)); + DEBUG((DEBUG_INFO, " C - 0x%x\n", ECapReg->Bits.C)); + DEBUG((DEBUG_INFO, " QI - 0x%x\n", ECapReg->Bits.QI)); + DEBUG((DEBUG_INFO, " DT - 0x%x\n", ECapReg->Bits.DT)); + DEBUG((DEBUG_INFO, " IR - 0x%x\n", ECapReg->Bits.IR)); + DEBUG((DEBUG_INFO, " EIM - 0x%x\n", ECapReg->Bits.EIM)); + DEBUG((DEBUG_INFO, " PT - 0x%x\n", ECapReg->Bits.PT)); + DEBUG((DEBUG_INFO, " SC - 0x%x\n", ECapReg->Bits.SC)); + DEBUG((DEBUG_INFO, " IRO - 0x%x\n", ECapReg->Bits.IRO)); + DEBUG((DEBUG_INFO, " MHMV - 0x%x\n", ECapReg->Bits.MHMV)); + DEBUG((DEBUG_INFO, " MTS - 0x%x\n", ECapReg->Bits.MTS)); + DEBUG((DEBUG_INFO, " NEST - 0x%x\n", ECapReg->Bits.NEST)); + DEBUG((DEBUG_INFO, " PASID - 0x%x\n", ECapReg->Bits.PASID)); + DEBUG((DEBUG_INFO, " PRS - 0x%x\n", ECapReg->Bits.PRS)); + DEBUG((DEBUG_INFO, " ERS - 0x%x\n", ECapReg->Bits.ERS)); + DEBUG((DEBUG_INFO, " SRS - 0x%x\n", ECapReg->Bits.SRS)); + DEBUG((DEBUG_INFO, " NWFS - 0x%x\n", ECapReg->Bits.NWFS)); + DEBUG((DEBUG_INFO, " EAFS - 0x%x\n", ECapReg->Bits.EAFS)); + DEBUG((DEBUG_INFO, " PSS - 0x%x\n", ECapReg->Bits.PSS)); + DEBUG((DEBUG_INFO, " SMTS - 0x%x\n", ECapReg->Bits.SMTS)); + DEBUG((DEBUG_INFO, " ADMS - 0x%x\n", ECapReg->Bits.ADMS)); + DEBUG((DEBUG_INFO, " PDS - 0x%x\n", ECapReg->Bits.PDS)); +} + +/** + Dump VTd registers. + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. 
+**/ +VOID +DumpVtdRegs ( + IN UINTN VtdUnitBaseAddress + ) +{ + VTD_REGESTER_INFO *VtdRegInfo; + VTD_ECAP_REG ECapReg; + VTD_CAP_REG CapReg; + + if (mVtdRegsInfoBuffer =3D=3D NULL) { + return; + } + + VtdRegInfo =3D mVtdRegsInfoBuffer; + VtdRegInfo->BaseAddress =3D VtdUnitBaseAddress; + VtdRegInfo->VerReg =3D MmioRead32 (VtdUnitBaseAddress + R_VER_REG); + VtdRegInfo->CapReg =3D MmioRead64 (VtdUnitBaseAddress + R_CAP_REG); + VtdRegInfo->EcapReg =3D MmioRead64 (VtdUnitBaseAddress + R_ECAP_REG); + VtdRegInfo->GstsReg =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + VtdRegInfo->RtaddrReg =3D MmioRead64 (VtdUnitBaseAddress + R_RTADDR_RE= G); + VtdRegInfo->CcmdReg =3D MmioRead64 (VtdUnitBaseAddress + R_CCMD_REG); + VtdRegInfo->FstsReg =3D MmioRead32 (VtdUnitBaseAddress + R_FSTS_REG); + VtdRegInfo->FectlReg =3D MmioRead32 (VtdUnitBaseAddress + R_FECTL_REG= ); + VtdRegInfo->FedataReg =3D MmioRead32 (VtdUnitBaseAddress + R_FEDATA_RE= G); + VtdRegInfo->FeaddrReg =3D MmioRead32 (VtdUnitBaseAddress + R_FEADDR_RE= G); + VtdRegInfo->FeuaddrReg =3D MmioRead32 (VtdUnitBaseAddress + R_FEUADDR_R= EG); + VtdRegInfo->IqercdReg =3D MmioRead64 (VtdUnitBaseAddress + R_IQERCD_RE= G); + + CapReg.Uint64 =3D VtdRegInfo->CapReg; + for (VtdRegInfo->FrcdRegNum =3D 0; VtdRegInfo->FrcdRegNum < (UINT16) Cap= Reg.Bits.NFR + 1; VtdRegInfo->FrcdRegNum++) { + VtdRegInfo->FrcdReg[VtdRegInfo->FrcdRegNum].Uint64Lo =3D MmioRead64 (V= tdUnitBaseAddress + ((CapReg.Bits.FRO * 16) + (VtdRegInfo->FrcdRegNum * 16)= + R_FRCD_REG)); + VtdRegInfo->FrcdReg[VtdRegInfo->FrcdRegNum].Uint64Hi =3D MmioRead64 (V= tdUnitBaseAddress + ((CapReg.Bits.FRO * 16) + (VtdRegInfo->FrcdRegNum * 16)= + R_FRCD_REG + sizeof(UINT64))); + } + + ECapReg.Uint64 =3D VtdRegInfo->EcapReg; + VtdRegInfo->IvaReg =3D MmioRead64 (VtdUnitBaseAddress + (ECapReg.Bits.IR= O * 16) + R_IVA_REG); + VtdRegInfo->IotlbReg =3D MmioRead64 (VtdUnitBaseAddress + (ECapReg.Bits.= IRO * 16) + R_IOTLB_REG); + + DEBUG((DEBUG_INFO, "#### DumpVtdRegs(0x%016lx) 
Begin ####\n", VtdUnitBas= eAddress)); + + VtdLibDumpVtdRegsAll (NULL, NULL, VtdRegInfo); + + DEBUG((DEBUG_INFO, "#### DumpVtdRegs(0x%016lx) End ####\n", VtdUnitBaseA= ddress)); + + VTdLogAddDataEvent (VTDLOG_DXE_REGISTER, VTDLOG_REGISTER_ALL, (VOID *) V= tdRegInfo, sizeof (VTD_REGESTER_INFO) + sizeof (VTD_UINT128) * (VtdRegInfo-= >FrcdRegNum - 1)); +} + +/** + Dump VTd registers for all VTd engine. +**/ +VOID +DumpVtdRegsAll ( + VOID + ) +{ + UINTN VtdIndex; + + for (VtdIndex =3D 0; VtdIndex < mVtdUnitNumber; VtdIndex++) { + DumpVtdRegs (mVtdUnitInformation[VtdIndex].VtdUnitBaseAddress); + } +} + +/** + Dump VTd registers if there is error. +**/ +VOID +DumpVtdIfError ( + VOID + ) +{ + UINTN Num; + UINTN Index; + VTD_FRCD_REG FrcdReg; + VTD_CAP_REG CapReg; + UINT32 Reg32; + BOOLEAN HasError; + + for (Num =3D 0; Num < mVtdUnitNumber; Num++) { + HasError =3D FALSE; + Reg32 =3D MmioRead32 (mVtdUnitInformation[Num].VtdUnitBaseAddress + R_= FSTS_REG); + if (Reg32 !=3D 0) { + HasError =3D TRUE; + } + Reg32 =3D MmioRead32 (mVtdUnitInformation[Num].VtdUnitBaseAddress + R_= FECTL_REG); + if ((Reg32 & BIT30) !=3D 0) { + HasError =3D TRUE; + } + + CapReg.Uint64 =3D MmioRead64 (mVtdUnitInformation[Num].VtdUnitBaseAddr= ess + R_CAP_REG); + for (Index =3D 0; Index < (UINTN)CapReg.Bits.NFR + 1; Index++) { + FrcdReg.Uint64[0] =3D MmioRead64 (mVtdUnitInformation[Num].VtdUnitBa= seAddress + ((CapReg.Bits.FRO * 16) + (Index * 16) + R_FRCD_REG)); + FrcdReg.Uint64[1] =3D MmioRead64 (mVtdUnitInformation[Num].VtdUnitBa= seAddress + ((CapReg.Bits.FRO * 16) + (Index * 16) + R_FRCD_REG + sizeof(UI= NT64))); + if (FrcdReg.Bits.F !=3D 0) { + HasError =3D TRUE; + } + } + + if (HasError) { + REPORT_STATUS_CODE (EFI_ERROR_CODE, PcdGet32 (PcdErrorCodeVTdError)); + DEBUG((DEBUG_INFO, "\n#### ERROR ####\n")); + DumpVtdRegs (Num); + DEBUG((DEBUG_INFO, "#### ERROR ####\n\n")); + // + // Clear + // + for (Index =3D 0; Index < (UINTN)CapReg.Bits.NFR + 1; Index++) { + FrcdReg.Uint64[1] =3D MmioRead64 
(mVtdUnitInformation[Num].VtdUnit= BaseAddress + ((CapReg.Bits.FRO * 16) + (Index * 16) + R_FRCD_REG + sizeof(= UINT64))); + if (FrcdReg.Bits.F !=3D 0) { + // + // Software writes the value read from this field (F) to Clear i= t. + // + MmioWrite64 (mVtdUnitInformation[Num].VtdUnitBaseAddress + ((Cap= Reg.Bits.FRO * 16) + (Index * 16) + R_FRCD_REG + sizeof(UINT64)), FrcdReg.U= int64[1]); + } + } + MmioWrite32 (mVtdUnitInformation[Num].VtdUnitBaseAddress + R_FSTS_RE= G, MmioRead32 (mVtdUnitInformation[Num].VtdUnitBaseAddress + R_FSTS_REG)); + } + } +} diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/Dmar= Table.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/DmarTab= le.c new file mode 100644 index 000000000..91c89de47 --- /dev/null +++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/DmarTable.c @@ -0,0 +1,63 @@ +/** @file + + Copyright (c) 2023, Intel Corporation. All rights reserved.
+ + SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "IntelVTdCorePei.h" + +/** + Parse DMAR DRHD table. + + @param[in] AcpiDmarTable DMAR ACPI table + @param[in] Callback Callback function for handle DRHD + @param[in] Context Callback function Context + + @return the VTd engine number. + +**/ +UINTN +ParseDmarAcpiTableDrhd ( + IN EFI_ACPI_DMAR_HEADER *AcpiDmarTable, + IN PROCESS_DRHD_CALLBACK_FUNC Callback, + IN VOID *Context + ) +{ + EFI_ACPI_DMAR_STRUCTURE_HEADER *DmarHeader; + UINTN VtdIndex; + + VtdIndex =3D 0; + DmarHeader =3D (EFI_ACPI_DMAR_STRUCTURE_HEADER *) ((UINTN) (AcpiDmarTabl= e + 1)); + + while ((UINTN) DmarHeader < (UINTN) AcpiDmarTable + AcpiDmarTable->Heade= r.Length) { + switch (DmarHeader->Type) { + case EFI_ACPI_DMAR_TYPE_DRHD: + if (Callback !=3D NULL) { + Callback (Context, VtdIndex, (EFI_ACPI_DMAR_DRHD_HEADER *) DmarHea= der); + } + VtdIndex++; + break; + default: + break; + } + DmarHeader =3D (EFI_ACPI_DMAR_STRUCTURE_HEADER *) ((UINTN) DmarHeader = + DmarHeader->Length); + } + + return VtdIndex; +} + diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/Inte= lVTdCorePei.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/I= ntelVTdCorePei.c new file mode 100644 index 000000000..0160c3604 --- /dev/null +++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCor= ePei.c @@ -0,0 +1,1099 @@ +/** @file + + Copyright (c) 2023, Intel Corporation. All rights reserved.
+ + SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "IntelVTdCorePei.h" + +#define VTD_UNIT_MAX 64 + +EFI_GUID mVTdInfoGuid =3D { + 0x222f5e30, 0x5cd, 0x49c6, { 0x8a, 0xc, 0x36, 0xd6, 0x58, 0x41, 0xe0, 0x= 82 } +}; + +EFI_GUID mDmaBufferInfoGuid =3D { + 0x7b624ec7, 0xfb67, 0x4f9c, { 0xb6, 0xb0, 0x4d, 0xfa, 0x9c, 0x88, 0x20, = 0x39 } +}; + +#define MAP_INFO_SIGNATURE SIGNATURE_32 ('D', 'M', 'A', 'P') +typedef struct { + UINT32 Signature; + EDKII_IOMMU_OPERATION Operation; + UINTN NumberOfBytes; + EFI_PHYSICAL_ADDRESS HostAddress; + EFI_PHYSICAL_ADDRESS DeviceAddress; +} MAP_INFO; + +/** + Allocate memory buffer for VTd log events. + + @param[in] MemorySize Required memory buffer size. + + @retval Buffer address + +**/ +UINT8 * +EFIAPI +VTdLogAllocMemory ( + IN CONST UINT32 MemorySize + ) +{ + VOID *HobPtr; + VTDLOG_PEI_BUFFER_HOB *BufferHob; + UINT8 *ReturnBuffer; + + ReturnBuffer =3D NULL; + HobPtr =3D GetFirstGuidHob (&gVTdLogBufferHobGuid); + if (HobPtr !=3D NULL) { + BufferHob =3D GET_GUID_HOB_DATA (HobPtr); + + if (BufferHob->PostMemBuffer !=3D 0) { + // + // Post-memory phase + // + if ((BufferHob->PostMemBufferUsed + MemorySize) < PcdGet32 (PcdVTdPe= iPostMemLogBufferSize)) { + ReturnBuffer =3D &((UINT8 *) (UINTN) BufferHob->PostMemBuffer)[Buf= ferHob->PostMemBufferUsed]; + BufferHob->PostMemBufferUsed +=3D MemorySize; + } else { + BufferHob->VtdLogPeiError |=3D VTD_LOG_ERROR_BUFFER_FULL; + } + } + } + + return ReturnBuffer; +} + +/** + Add the VTd log event in post memory phase. 
+ + @param[in] EventType Event type + @param[in] Data1 First parameter + @param[in] Data2 Second parameter + +**/ +VOID +EFIAPI +VTdLogAddEvent ( + IN CONST VTDLOG_EVENT_TYPE EventType, + IN CONST UINT64 Data1, + IN CONST UINT64 Data2 + ) +{ + VTDLOG_EVENT_2PARAM *Item; + + if (PcdGet8 (PcdVTdLogLevel) =3D=3D 0) { + return; + } else if ((PcdGet8 (PcdVTdLogLevel) =3D=3D 1) && (EventType >=3D VTDLOG= _PEI_ADVANCED)) { + return; + } + + Item =3D (VTDLOG_EVENT_2PARAM *) VTdLogAllocMemory (sizeof (VTDLOG_EVENT= _2PARAM)); + if (Item !=3D NULL) { + Item->Data1 =3D Data1; + Item->Data2 =3D Data2; + Item->Header.DataSize =3D sizeof (VTDLOG_EVENT_2PARAM); + Item->Header.LogType =3D (UINT64) (1 << EventType); + Item->Header.Timestamp =3D AsmReadTsc (); + } +} + +/** + Add a new VTd log event with data. + + @param[in] EventType Event type + @param[in] Param parameter + @param[in] Data Data + @param[in] DataSize Data size + +**/ +VOID +EFIAPI +VTdLogAddDataEvent ( + IN CONST VTDLOG_EVENT_TYPE EventType, + IN CONST UINT64 Param, + IN CONST VOID *Data, + IN CONST UINT32 DataSize + ) +{ + VTDLOG_EVENT_CONTEXT *Item; + UINT32 EventSize; + + if (PcdGet8 (PcdVTdLogLevel) =3D=3D 0) { + return; + } else if ((PcdGet8 (PcdVTdLogLevel) =3D=3D 1) && (EventType >=3D VTDLOG= _PEI_ADVANCED)) { + return; + } + + EventSize =3D sizeof (VTDLOG_EVENT_CONTEXT) + DataSize - 1; + + Item =3D (VTDLOG_EVENT_CONTEXT *) VTdLogAllocMemory (EventSize); + if (Item !=3D NULL) { + Item->Param =3D Param; + CopyMem (Item->Data, Data, DataSize); + + Item->Header.DataSize =3D EventSize; + Item->Header.LogType =3D (UINT64) (1 << EventType); + Item->Header.Timestamp =3D AsmReadTsc (); =20 + } +} +/** + Add the VTd log event in pre-memory phase. + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. + @param[in] Mode Pre-memory DMA protection mode. 
+  @param[in] Status             Status
+
+**/
+VOID
+EFIAPI
+VTdLogAddPreMemoryEvent (
+  IN UINTN VtdUnitBaseAddress,
+  IN UINT8 Mode,
+  IN UINT8 Status
+  )
+{
+  VTDLOG_PEI_BUFFER_HOB *BufferHob;
+  VOID                  *HobPtr;
+  UINT8                 Index;
+
+  HobPtr = GetFirstGuidHob (&gVTdLogBufferHobGuid);
+  if (HobPtr != NULL) {
+    BufferHob = GET_GUID_HOB_DATA (HobPtr);
+
+    for (Index = 0; Index < VTD_LOG_PEI_PRE_MEM_BAR_MAX; Index++) {
+      if (BufferHob->PreMemInfo[Index].Mode == VTD_LOG_PEI_PRE_MEM_NOT_USED) {
+        //
+        // Found a free position
+        //
+        BufferHob->PreMemInfo[Index].BarAddress = (UINT32) VtdUnitBaseAddress;
+        BufferHob->PreMemInfo[Index].Mode = Mode;
+        BufferHob->PreMemInfo[Index].Status = Status;
+        break;
+      }
+    }
+  }
+}
+
+/**
+  Initializes the VTd Log.
+
+  @param[in] MemoryInitialized TRUE:  It is post-memory phase
+                               FALSE: It is pre-memory phase
+**/
+VOID
+EFIAPI
+VTdLogInitialize (
+  BOOLEAN MemoryInitialized
+  )
+{
+  VTDLOG_PEI_BUFFER_HOB *BufferHob;
+  VOID                  *HobPtr;
+
+  if (PcdGet8 (PcdVTdLogLevel) > 0) {
+    HobPtr = GetFirstGuidHob (&gVTdLogBufferHobGuid);
+    if (HobPtr == NULL) {
+      BufferHob = BuildGuidHob (&gVTdLogBufferHobGuid, sizeof (VTDLOG_PEI_BUFFER_HOB));
+      ASSERT (BufferHob != NULL);
+
+      ZeroMem (BufferHob, sizeof (VTDLOG_PEI_BUFFER_HOB));
+    } else {
+      BufferHob = GET_GUID_HOB_DATA (HobPtr);
+    }
+
+    if (MemoryInitialized) {
+      if ((BufferHob->PostMemBuffer == 0) && (PcdGet32 (PcdVTdPeiPostMemLogBufferSize) > 0)) {
+        BufferHob->PostMemBufferUsed = 0;
+        BufferHob->PostMemBuffer = (UINTN) AllocateAlignedPages (EFI_SIZE_TO_PAGES (PcdGet32 (PcdVTdPeiPostMemLogBufferSize)), sizeof (UINT8));
+      }
+    }
+  }
+}
+
+/**
+  Set IOMMU attribute for a system memory.
+
+  If the IOMMU PPI exists, the system memory cannot be used
+  for DMA by default.
+
+  When a device requests a DMA access for a system memory,
+  the device driver needs to use SetAttribute() to update the IOMMU
+  attribute to request DMA access (read and/or write).
+ + @param[in] This The PPI instance pointer. + @param[in] DeviceHandle The device who initiates the DMA acces= s request. + @param[in] Mapping The mapping value returned from Map(). + @param[in] IoMmuAccess The IOMMU access. + + @retval EFI_SUCCESS The IoMmuAccess is set for the memory = range specified by DeviceAddress and Length. + @retval EFI_INVALID_PARAMETER Mapping is not a value that was return= ed by Map(). + @retval EFI_INVALID_PARAMETER IoMmuAccess specified an illegal combi= nation of access. + @retval EFI_UNSUPPORTED The bit mask of IoMmuAccess is not sup= ported by the IOMMU. + @retval EFI_UNSUPPORTED The IOMMU does not support the memory = range specified by Mapping. + @retval EFI_OUT_OF_RESOURCES There are not enough resources availab= le to modify the IOMMU access. + @retval EFI_DEVICE_ERROR The IOMMU device reported an error whi= le attempting the operation. + @retval EFI_NOT_AVAILABLE_YET DMA protection has been enabled, but D= MA buffer are + not available to be allocated yet. +**/ +EFI_STATUS +EFIAPI +PeiIoMmuSetAttribute ( + IN EDKII_IOMMU_PPI *This, + IN VOID *Mapping, + IN UINT64 IoMmuAccess + ) +{ + VOID *Hob; + DMA_BUFFER_INFO *DmaBufferInfo; + + // + // check and clear VTd error + // + DumpVtdIfError (); + + DEBUG ((DEBUG_INFO, "PeiIoMmuSetAttribute:\n")); + + Hob =3D GetFirstGuidHob (&mDmaBufferInfoGuid); + DmaBufferInfo =3D GET_GUID_HOB_DATA(Hob); + + if (DmaBufferInfo->DmaBufferCurrentTop =3D=3D 0) { + DEBUG ((DEBUG_INFO, "PeiIoMmuSetAttribute: DmaBufferCurrentTop =3D=3D = 0\n")); + return EFI_NOT_AVAILABLE_YET; + } + + return EFI_SUCCESS; +} + +/** + Provides the controller-specific addresses required to access system mem= ory from a + DMA bus master. + + @param [in] This The PPI instance pointer. + @param [in] Operation Indicates if the bus master is going t= o read or write to system memory. + @param [in] HostAddress The system memory address to map to th= e PCI controller. 
+ @param [in] [out] NumberOfBytes On input the number of bytes to map. O= n output the number of bytes + that were mapped. + @param [out] DeviceAddress The resulting map address for the bus = master PCI controller to use to + access the hosts HostAddress. + @param [out] Mapping A resulting value to pass to Unmap(). + + @retval EFI_SUCCESS The range was mapped for the returned = NumberOfBytes. + @retval EFI_UNSUPPORTED The HostAddress cannot be mapped as a = common buffer. + @retval EFI_INVALID_PARAMETER One or more parameters are invalid. + @retval EFI_OUT_OF_RESOURCES The request could not be completed due= to a lack of resources. + @retval EFI_DEVICE_ERROR The system hardware could not map the = requested address. + @retval EFI_NOT_AVAILABLE_YET DMA protection has been enabled, but D= MA buffer are + not available to be allocated yet. +**/ +EFI_STATUS +EFIAPI +PeiIoMmuMap ( + IN EDKII_IOMMU_PPI *This, + IN EDKII_IOMMU_OPERATION Operation, + IN VOID *HostAddress, + IN OUT UINTN *NumberOfBytes, + OUT EFI_PHYSICAL_ADDRESS *DeviceAddress, + OUT VOID **Mapping + ) +{ + MAP_INFO *MapInfo; + UINTN Length; + VOID *Hob; + DMA_BUFFER_INFO *DmaBufferInfo; + + Hob =3D GetFirstGuidHob (&mDmaBufferInfoGuid); + DmaBufferInfo =3D GET_GUID_HOB_DATA(Hob); + + DEBUG ((DEBUG_INFO, "PeiIoMmuMap - HostAddress - 0x%x, NumberOfBytes - %= x\n", HostAddress, *NumberOfBytes)); + DEBUG ((DEBUG_INFO, " DmaBufferCurrentTop - %x\n", DmaBufferInfo->DmaBu= fferCurrentTop)); + DEBUG ((DEBUG_INFO, " DmaBufferCurrentBottom - %x\n", DmaBufferInfo->Dm= aBufferCurrentBottom)); + DEBUG ((DEBUG_INFO, " Operation - %x\n", Operation)); + + if (DmaBufferInfo->DmaBufferCurrentTop =3D=3D 0) { + return EFI_NOT_AVAILABLE_YET; + } + + if (Operation =3D=3D EdkiiIoMmuOperationBusMasterCommonBuffer || + Operation =3D=3D EdkiiIoMmuOperationBusMasterCommonBuffer64) { + *DeviceAddress =3D (UINTN) HostAddress; + *Mapping =3D NULL; + return EFI_SUCCESS; + } + + Length =3D *NumberOfBytes + sizeof (MAP_INFO); + if 
(Length > DmaBufferInfo->DmaBufferCurrentTop - DmaBufferInfo->DmaBuff= erCurrentBottom) { + DEBUG ((DEBUG_ERROR, "PeiIoMmuMap - OUT_OF_RESOURCE\n")); + VTdLogAddEvent (VTDLOG_PEI_VTD_ERROR, VTD_LOG_PEI_VTD_ERROR_PPI_MAP, L= ength); + ASSERT (FALSE); + return EFI_OUT_OF_RESOURCES; + } + + *DeviceAddress =3D DmaBufferInfo->DmaBufferCurrentBottom; + DmaBufferInfo->DmaBufferCurrentBottom +=3D Length; + + MapInfo =3D (VOID *) (UINTN) (*DeviceAddress + *NumberOfBytes); + MapInfo->Signature =3D MAP_INFO_SIGNATURE; + MapInfo->Operation =3D Operation; + MapInfo->NumberOfBytes =3D *NumberOfBytes; + MapInfo->HostAddress =3D (UINTN) HostAddress; + MapInfo->DeviceAddress =3D *DeviceAddress; + *Mapping =3D MapInfo; + DEBUG ((DEBUG_INFO, " Op(%x):DeviceAddress - %x, Mapping - %x\n", Opera= tion, (UINTN) *DeviceAddress, MapInfo)); + + // + // If this is a read operation from the Bus Master's point of view, + // then copy the contents of the real buffer into the mapped buffer + // so the Bus Master can read the contents of the real buffer. + // + if (Operation =3D=3D EdkiiIoMmuOperationBusMasterRead || + Operation =3D=3D EdkiiIoMmuOperationBusMasterRead64) { + CopyMem ( + (VOID *) (UINTN) MapInfo->DeviceAddress, + (VOID *) (UINTN) MapInfo->HostAddress, + MapInfo->NumberOfBytes + ); + } + + VTdLogAddEvent (VTDLOG_PEI_PPI_MAP, (UINT64) HostAddress, Length); + return EFI_SUCCESS; +} + +/** + Completes the Map() operation and releases any corresponding resources. + + @param [in] This The PPI instance pointer. + @param [in] Mapping The mapping value returned from Map(). + + @retval EFI_SUCCESS The range was unmapped. + @retval EFI_INVALID_PARAMETER Mapping is not a value that was return= ed by Map(). + @retval EFI_DEVICE_ERROR The data was not committed to the targ= et system memory. + @retval EFI_NOT_AVAILABLE_YET DMA protection has been enabled, but D= MA buffer are + not available to be allocated yet. 
+**/ +EFI_STATUS +EFIAPI +PeiIoMmuUnmap ( + IN EDKII_IOMMU_PPI *This, + IN VOID *Mapping + ) +{ + MAP_INFO *MapInfo; + UINTN Length; + VOID *Hob; + DMA_BUFFER_INFO *DmaBufferInfo; + + Hob =3D GetFirstGuidHob (&mDmaBufferInfoGuid); + DmaBufferInfo =3D GET_GUID_HOB_DATA(Hob); + + DEBUG ((DEBUG_INFO, "PeiIoMmuUnmap - Mapping - %x\n", Mapping)); + DEBUG ((DEBUG_INFO, " DmaBufferCurrentTop - %x\n", DmaBufferInfo->DmaBu= fferCurrentTop)); + DEBUG ((DEBUG_INFO, " DmaBufferCurrentBottom - %x\n", DmaBufferInfo->Dm= aBufferCurrentBottom)); + + if (DmaBufferInfo->DmaBufferCurrentTop =3D=3D 0) { + return EFI_NOT_AVAILABLE_YET; + } + + if (Mapping =3D=3D NULL) { + return EFI_SUCCESS; + } + + MapInfo =3D Mapping; + ASSERT (MapInfo->Signature =3D=3D MAP_INFO_SIGNATURE); + DEBUG ((DEBUG_INFO, " Op(%x):DeviceAddress - %x, NumberOfBytes - %x\n",= MapInfo->Operation, (UINTN) MapInfo->DeviceAddress, MapInfo->NumberOfBytes= )); + + // + // If this is a write operation from the Bus Master's point of view, + // then copy the contents of the mapped buffer into the real buffer + // so the processor can read the contents of the real buffer. + // + if (MapInfo->Operation =3D=3D EdkiiIoMmuOperationBusMasterWrite || + MapInfo->Operation =3D=3D EdkiiIoMmuOperationBusMasterWrite64) { + CopyMem ( + (VOID *) (UINTN) MapInfo->HostAddress, + (VOID *) (UINTN) MapInfo->DeviceAddress, + MapInfo->NumberOfBytes + ); + } + + Length =3D MapInfo->NumberOfBytes + sizeof (MAP_INFO); + if (DmaBufferInfo->DmaBufferCurrentBottom =3D=3D MapInfo->DeviceAddress = + Length) { + DmaBufferInfo->DmaBufferCurrentBottom -=3D Length; + } + + return EFI_SUCCESS; +} + +/** + Allocates pages that are suitable for an OperationBusMasterCommonBuffer = or + OperationBusMasterCommonBuffer64 mapping. + + @param [in] This The PPI instance pointer. + @param [in] MemoryType The type of memory to allocate, EfiBoo= tServicesData or + EfiRuntimeServicesData. + @param [in] Pages The number of pages to allocate. 
+ @param [in] [out] HostAddress A pointer to store the base system mem= ory address of the + allocated range. + @param [in] Attributes The requested bit mask of attributes f= or the allocated range. + + @retval EFI_SUCCESS The requested memory pages were alloca= ted. + @retval EFI_UNSUPPORTED Attributes is unsupported. The only le= gal attribute bits are + MEMORY_WRITE_COMBINE, MEMORY_CACHED an= d DUAL_ADDRESS_CYCLE. + @retval EFI_INVALID_PARAMETER One or more parameters are invalid. + @retval EFI_OUT_OF_RESOURCES The memory pages could not be allocate= d. + @retval EFI_NOT_AVAILABLE_YET DMA protection has been enabled, but D= MA buffer are + not available to be allocated yet. +**/ +EFI_STATUS +EFIAPI +PeiIoMmuAllocateBuffer ( + IN EDKII_IOMMU_PPI *This, + IN EFI_MEMORY_TYPE MemoryType, + IN UINTN Pages, + IN OUT VOID **HostAddress, + IN UINT64 Attributes + ) +{ + UINTN Length; + VOID *Hob; + DMA_BUFFER_INFO *DmaBufferInfo; + + Hob =3D GetFirstGuidHob (&mDmaBufferInfoGuid); + DmaBufferInfo =3D GET_GUID_HOB_DATA(Hob); + + DEBUG ((DEBUG_INFO, "PeiIoMmuAllocateBuffer - page - %x\n", Pages)); + DEBUG ((DEBUG_INFO, " DmaBufferCurrentTop - %x\n", DmaBufferInfo->DmaBu= fferCurrentTop)); + DEBUG ((DEBUG_INFO, " DmaBufferCurrentBottom - %x\n", DmaBufferInfo->Dm= aBufferCurrentBottom)); + + if (DmaBufferInfo->DmaBufferCurrentTop =3D=3D 0) { + return EFI_NOT_AVAILABLE_YET; + } + + Length =3D EFI_PAGES_TO_SIZE (Pages); + if (Length > DmaBufferInfo->DmaBufferCurrentTop - DmaBufferInfo->DmaBuff= erCurrentBottom) { + DEBUG ((DEBUG_ERROR, "PeiIoMmuAllocateBuffer - OUT_OF_RESOURCE\n")); + VTdLogAddEvent (VTDLOG_PEI_VTD_ERROR, VTD_LOG_PEI_VTD_ERROR_PPI_ALLOC,= Length); + ASSERT (FALSE); + return EFI_OUT_OF_RESOURCES; + } + *HostAddress =3D (VOID *) (UINTN) (DmaBufferInfo->DmaBufferCurrentTop - = Length); + DmaBufferInfo->DmaBufferCurrentTop -=3D Length; + + DEBUG ((DEBUG_INFO, "PeiIoMmuAllocateBuffer - allocate - %x\n", *HostAdd= ress)); + + VTdLogAddEvent 
(VTDLOG_PEI_PPI_ALLOC_BUFFER, (UINT64) (*HostAddress), Le= ngth); + + return EFI_SUCCESS; +} + +/** + Frees memory that was allocated with AllocateBuffer(). + + @param [in] This The PPI instance pointer. + @param [in] Pages The number of pages to free. + @param [in] HostAddress The base system memory address of the = allocated range. + + @retval EFI_SUCCESS The requested memory pages were freed. + @retval EFI_INVALID_PARAMETER The memory range specified by HostAddr= ess and Pages + was not allocated with AllocateBuffer(= ). + @retval EFI_NOT_AVAILABLE_YET DMA protection has been enabled, but D= MA buffer are + not available to be allocated yet. +**/ +EFI_STATUS +EFIAPI +PeiIoMmuFreeBuffer ( + IN EDKII_IOMMU_PPI *This, + IN UINTN Pages, + IN VOID *HostAddress + ) +{ + UINTN Length; + VOID *Hob; + DMA_BUFFER_INFO *DmaBufferInfo; + + Hob =3D GetFirstGuidHob (&mDmaBufferInfoGuid); + DmaBufferInfo =3D GET_GUID_HOB_DATA (Hob); + + DEBUG ((DEBUG_INFO, "PeiIoMmuFreeBuffer - page - %x, HostAddr - %x\n", P= ages, HostAddress)); + DEBUG ((DEBUG_INFO, " DmaBufferCurrentTop - %x\n", DmaBufferInfo->DmaBu= fferCurrentTop)); + DEBUG ((DEBUG_INFO, " DmaBufferCurrentBottom - %x\n", DmaBufferInfo->Dm= aBufferCurrentBottom)); + + if (DmaBufferInfo->DmaBufferCurrentTop =3D=3D 0) { + return EFI_NOT_AVAILABLE_YET; + } + + Length =3D EFI_PAGES_TO_SIZE (Pages); + if ((UINTN)HostAddress =3D=3D DmaBufferInfo->DmaBufferCurrentTop) { + DmaBufferInfo->DmaBufferCurrentTop +=3D Length; + } + + return EFI_SUCCESS; +} + +EDKII_IOMMU_PPI mIoMmuPpi =3D { + EDKII_IOMMU_PPI_REVISION, + PeiIoMmuSetAttribute, + PeiIoMmuMap, + PeiIoMmuUnmap, + PeiIoMmuAllocateBuffer, + PeiIoMmuFreeBuffer, +}; + +CONST EFI_PEI_PPI_DESCRIPTOR mIoMmuPpiList =3D { + EFI_PEI_PPI_DESCRIPTOR_PPI | EFI_PEI_PPI_DESCRIPTOR_TERMINATE_LIST, + &gEdkiiIoMmuPpiGuid, + (VOID *) &mIoMmuPpi +}; + +/** + Get ACPI DMAT Table from EdkiiVTdInfo PPI + + @retval Address ACPI DMAT Table address + @retval NULL Failed to get ACPI DMAT Table +**/ 
+EFI_ACPI_DMAR_HEADER * GetAcpiDmarTable ( + VOID + ) +{ + EFI_STATUS Status; + EFI_ACPI_DMAR_HEADER *AcpiDmarTable; + + // + // Get the DMAR table + // + Status =3D PeiServicesLocatePpi ( + &gEdkiiVTdInfoPpiGuid, + 0, + NULL, + (VOID **)&AcpiDmarTable + ); + if (EFI_ERROR (Status)) { + DEBUG ((DEBUG_ERROR, "Fail to get ACPI DMAR Table : %r\n", Status)); + AcpiDmarTable =3D NULL; + } else { + VtdLibDumpAcpiDmarDrhd (NULL, NULL, AcpiDmarTable); + } + + return AcpiDmarTable; +} + +/** + Get the VTd engine context information hob. + + @retval The VTd engine context information. + +**/ +VTD_INFO * +GetVTdInfoHob ( + VOID + ) +{ + VOID *Hob; + VTD_INFO *VTdInfo; + + Hob =3D GetFirstGuidHob (&mVTdInfoGuid); + if (Hob =3D=3D NULL) { + VTdInfo =3D BuildGuidHob (&mVTdInfoGuid, sizeof (VTD_INFO)); + if (VTdInfo !=3D NULL) { + ZeroMem (VTdInfo, sizeof (VTD_INFO)); + } + } else { + VTdInfo =3D GET_GUID_HOB_DATA(Hob); + } + return VTdInfo; +} + +/** + Callback function of parse DMAR DRHD table in pre-memory phase. + + @param [in] [out] Context Callback function context. + @param [in] VTdIndex The VTd engine index. + @param [in] DmarDrhd The DRHD table. + +**/ +VOID +ProcessDhrdPreMemory ( + IN OUT VOID *Context, + IN UINTN VTdIndex, + IN EFI_ACPI_DMAR_DRHD_HEADER *DmarDrhd + ) +{ + DEBUG ((DEBUG_INFO,"VTD (%d) BaseAddress - 0x%016lx\n", VTdIndex, DmarD= rhd->RegisterBaseAddress)); + + EnableVTdTranslationProtectionBlockDma ((UINTN) DmarDrhd->RegisterBaseAd= dress); +} + +/** + Callback function of parse DMAR DRHD table in post memory phase. + + @param [in] [out] Context Callback function context. + @param [in] VTdIndex The VTd engine index. + @param [in] DmarDrhd The DRHD table. 
+
+**/
+VOID
+ProcessDrhdPostMemory (
+  IN OUT VOID                      *Context,
+  IN     UINTN                     VTdIndex,
+  IN     EFI_ACPI_DMAR_DRHD_HEADER *DmarDrhd
+  )
+{
+  VTD_UNIT_INFO *VtdUnitInfo;
+  UINTN         Index;
+
+  VtdUnitInfo = (VTD_UNIT_INFO *) Context;
+
+  if (DmarDrhd->RegisterBaseAddress == 0) {
+    DEBUG ((DEBUG_INFO, "VTd Base Address is 0\n"));
+    ASSERT (FALSE);
+    return;
+  }
+
+  for (Index = 0; Index < VTD_UNIT_MAX; Index++) {
+    if (VtdUnitInfo[Index].VtdUnitBaseAddress == DmarDrhd->RegisterBaseAddress) {
+      DEBUG ((DEBUG_INFO, "VTd (%d) [0x%08x] already exists\n", VTdIndex, DmarDrhd->RegisterBaseAddress));
+      return;
+    }
+  }
+
+  for (VTdIndex = 0; VTdIndex < VTD_UNIT_MAX; VTdIndex++) {
+    if (VtdUnitInfo[VTdIndex].VtdUnitBaseAddress == 0) {
+      VtdUnitInfo[VTdIndex].VtdUnitBaseAddress = (UINTN) DmarDrhd->RegisterBaseAddress;
+      VtdUnitInfo[VTdIndex].Segment            = DmarDrhd->SegmentNumber;
+      VtdUnitInfo[VTdIndex].Flags              = DmarDrhd->Flags;
+      VtdUnitInfo[VTdIndex].Done               = FALSE;
+
+      DEBUG ((DEBUG_INFO, "VTD (%d) BaseAddress - 0x%016lx\n", VTdIndex, DmarDrhd->RegisterBaseAddress));
+      DEBUG ((DEBUG_INFO, "  Segment - %d, Flags - 0x%x\n", DmarDrhd->SegmentNumber, DmarDrhd->Flags));
+      return;
+    }
+  }
+
+  DEBUG ((DEBUG_INFO, "VtdUnitInfo Table is full\n"));
+  ASSERT (FALSE);
+  return;
+}
+
+/**
+  Initializes the Intel VTd Info in post memory phase.
+
+  @retval EFI_SUCCESS          The Intel VTd Info is successfully initialized.
+  @retval EFI_OUT_OF_RESOURCES Can't initialize the driver.
+**/
+EFI_STATUS
+InitVTdInfo (
+  VOID
+  )
+{
+  VTD_INFO             *VTdInfo;
+  EFI_ACPI_DMAR_HEADER *AcpiDmarTable;
+  UINTN                VtdUnitNumber;
+  VTD_UNIT_INFO        *VtdUnitInfo;
+
+  VTdInfo = GetVTdInfoHob ();
+  ASSERT (VTdInfo != NULL);
+
+  AcpiDmarTable = GetAcpiDmarTable ();
+  ASSERT (AcpiDmarTable != NULL);
+
+  if (VTdInfo->VtdUnitInfo == NULL) {
+    //
+    // Generate a new Vtd Unit Info Table
+    //
+    VTdInfo->VtdUnitInfo = AllocateZeroPages (EFI_SIZE_TO_PAGES (sizeof (VTD_UNIT_INFO) * VTD_UNIT_MAX));
+    if (VTdInfo->VtdUnitInfo == NULL) {
+      DEBUG ((DEBUG_ERROR, "InitVTdInfo - OUT_OF_RESOURCE\n"));
+      ASSERT (FALSE);
+      return EFI_OUT_OF_RESOURCES;
+    }
+  }
+  VtdUnitInfo = VTdInfo->VtdUnitInfo;
+
+  if (VTdInfo->HostAddressWidth == 0) {
+    VTdInfo->HostAddressWidth = AcpiDmarTable->HostAddressWidth;
+  }
+
+  if (VTdInfo->HostAddressWidth != AcpiDmarTable->HostAddressWidth) {
+    DEBUG ((DEBUG_ERROR, "Host Address Width does not match.\n"));
+    ASSERT (FALSE);
+    return EFI_UNSUPPORTED;
+  }
+
+  //
+  // Parse the DMAR ACPI Table to the new Vtd Unit Info Table
+  //
+  VtdUnitNumber = ParseDmarAcpiTableDrhd (AcpiDmarTable, ProcessDrhdPostMemory, VtdUnitInfo);
+  if (VtdUnitNumber == 0) {
+    return EFI_UNSUPPORTED;
+  }
+
+  for (VTdInfo->VTdEngineCount = 0; VTdInfo->VTdEngineCount < VTD_UNIT_MAX; VTdInfo->VTdEngineCount++) {
+    if (VtdUnitInfo[VTdInfo->VTdEngineCount].VtdUnitBaseAddress == 0) {
+      break;
+    }
+  }
+
+  VTdInfo->AcpiDmarTable = AcpiDmarTable;
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Initializes the Intel VTd DMAR to block all DMA.
+
+  @retval EFI_SUCCESS      Driver is successfully initialized.
+  @retval RETURN_NOT_READY Failed to get the VTdInfo HOB.
+**/
+EFI_STATUS
+InitVTdDmarBlockAll (
+  VOID
+  )
+{
+  EFI_ACPI_DMAR_HEADER *AcpiDmarTable;
+
+  //
+  // Get the DMAR table
+  //
+  AcpiDmarTable = GetAcpiDmarTable ();
+  ASSERT (AcpiDmarTable != NULL);
+
+  //
+  // Parse the DMAR table and block all DMA
+  //
+  return ParseDmarAcpiTableDrhd (AcpiDmarTable, ProcessDhrdPreMemory, NULL);
+}
+
+/**
+  Initializes DMA buffer.
+
+  @retval EFI_SUCCESS           DMA buffer is successfully initialized.
+  @retval EFI_INVALID_PARAMETER Invalid DMA buffer size.
+  @retval EFI_OUT_OF_RESOURCES  Can't initialize DMA buffer.
+**/
+EFI_STATUS
+InitDmaBuffer (
+  VOID
+  )
+{
+  DMA_BUFFER_INFO  *DmaBufferInfo;
+  VOID             *Hob;
+  VOID             *VtdPmrHobPtr;
+  VTD_PMR_INFO_HOB *VtdPmrHob;
+
+  DEBUG ((DEBUG_INFO, "InitDmaBuffer :\n"));
+
+  Hob = GetFirstGuidHob (&mDmaBufferInfoGuid);
+  ASSERT (Hob != NULL);
+  DmaBufferInfo = GET_GUID_HOB_DATA (Hob);
+
+  /**
+    When gVtdPmrInfoDataHobGuid exists, it means:
+    1. The DMA buffer is reserved by the memory initialization code
+    2. PeiGetVtdPmrAlignmentLib is used to get the alignment
+    3. Protection regions are determined by the system memory map
+    4. Protection regions will be conveyed through VTD_PMR_INFO_HOB
+
+    When gVtdPmrInfoDataHobGuid doesn't exist, it means:
+    1. The IntelVTdDmarPei driver will calculate the protected memory alignment
+    2.
 The DMA buffer is reserved by AllocateAlignedPages()
+  **/
+
+  if (DmaBufferInfo->DmaBufferSize == 0) {
+    DEBUG ((DEBUG_INFO, " DmaBufferSize is 0\n"));
+    return EFI_INVALID_PARAMETER;
+  }
+
+  if (DmaBufferInfo->DmaBufferBase == 0) {
+    VtdPmrHobPtr = GetFirstGuidHob (&gVtdPmrInfoDataHobGuid);
+    if (VtdPmrHobPtr != NULL) {
+      //
+      // Get the protected memory ranges information from the VTd PMR hob
+      //
+      VtdPmrHob = GET_GUID_HOB_DATA (VtdPmrHobPtr);
+
+      if ((VtdPmrHob->ProtectedHighBase - VtdPmrHob->ProtectedLowLimit) < DmaBufferInfo->DmaBufferSize) {
+        DEBUG ((DEBUG_ERROR, " DmaBufferSize is not large enough\n"));
+        return EFI_INVALID_PARAMETER;
+      }
+      DmaBufferInfo->DmaBufferBase = VtdPmrHob->ProtectedLowLimit;
+    } else {
+      //
+      // Allocate memory for the DMA buffer
+      //
+      DmaBufferInfo->DmaBufferBase = (UINTN) AllocateAlignedPages (EFI_SIZE_TO_PAGES (DmaBufferInfo->DmaBufferSize), 0);
+      if (DmaBufferInfo->DmaBufferBase == 0) {
+        DEBUG ((DEBUG_ERROR, " InitDmaBuffer : OutOfResource\n"));
+        return EFI_OUT_OF_RESOURCES;
+      }
+      DEBUG ((DEBUG_INFO, "Allocated DMA buffer successfully.\n"));
+    }
+
+    DmaBufferInfo->DmaBufferCurrentTop    = DmaBufferInfo->DmaBufferBase + DmaBufferInfo->DmaBufferSize;
+    DmaBufferInfo->DmaBufferCurrentBottom = DmaBufferInfo->DmaBufferBase;
+
+    DEBUG ((DEBUG_INFO, " DmaBufferSize          : 0x%x\n", DmaBufferInfo->DmaBufferSize));
+    DEBUG ((DEBUG_INFO, " DmaBufferBase          : 0x%x\n", DmaBufferInfo->DmaBufferBase));
+  }
+
+  DEBUG ((DEBUG_INFO, " DmaBufferCurrentTop    : 0x%x\n", DmaBufferInfo->DmaBufferCurrentTop));
+  DEBUG ((DEBUG_INFO, " DmaBufferCurrentBottom : 0x%x\n", DmaBufferInfo->DmaBufferCurrentBottom));
+
+  VTdLogAddEvent (VTDLOG_PEI_PROTECT_MEMORY_RANGE, DmaBufferInfo->DmaBufferCurrentBottom, DmaBufferInfo->DmaBufferCurrentTop);
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Initializes the Intel VTd DMAR for DMA buffer.
+
+  @retval EFI_SUCCESS          The Intel VTd DMAR is successfully initialized for the DMA buffer.
+  @retval EFI_OUT_OF_RESOURCES Can't initialize the driver.
+ @retval EFI_DEVICE_ERROR DMAR translation is not enabled. +**/ +EFI_STATUS +InitVTdDmarForDma ( + VOID + ) +{ + VTD_INFO *VTdInfo; + + EFI_STATUS Status; + EFI_PEI_PPI_DESCRIPTOR *OldDescriptor; + EDKII_IOMMU_PPI *OldIoMmuPpi; + + VTdInfo =3D GetVTdInfoHob (); + ASSERT (VTdInfo !=3D NULL); + + DEBUG ((DEBUG_INFO, "PrepareVtdConfig\n")); + Status =3D PrepareVtdConfig (VTdInfo); + if (EFI_ERROR (Status)) { + ASSERT_EFI_ERROR (Status); + return Status; + } + + // create root entry table + DEBUG ((DEBUG_INFO, "SetupTranslationTable\n")); + Status =3D SetupTranslationTable (VTdInfo); + if (EFI_ERROR (Status)) { + ASSERT_EFI_ERROR (Status); + return Status; + } + + DEBUG ((DEBUG_INFO, "EnableVtdDmar\n")); + Status =3D EnableVTdTranslationProtection(VTdInfo); + if (EFI_ERROR (Status)) { + return Status; + } + + DEBUG ((DEBUG_INFO, "Install gEdkiiIoMmuPpiGuid\n")); + // install protocol + // + // (Re)Install PPI. + // + Status =3D PeiServicesLocatePpi ( + &gEdkiiIoMmuPpiGuid, + 0, + &OldDescriptor, + (VOID **) &OldIoMmuPpi + ); + if (!EFI_ERROR (Status)) { + Status =3D PeiServicesReInstallPpi (OldDescriptor, &mIoMmuPpiList); + } else { + Status =3D PeiServicesInstallPpi (&mIoMmuPpiList); + } + ASSERT_EFI_ERROR (Status); + + return Status; +} + +/** + This function handles S3 resume task at the end of PEI + + @param[in] PeiServices Pointer to PEI Services Table. + @param[in] NotifyDesc Pointer to the descriptor for the Notifica= tion event that + caused this function to execute. + @param[in] Ppi Pointer to the PPI data associated with th= is function. 
+ + @retval EFI_STATUS Always return EFI_SUCCESS +**/ +EFI_STATUS +EFIAPI +S3EndOfPeiNotify( + IN EFI_PEI_SERVICES **PeiServices, + IN EFI_PEI_NOTIFY_DESCRIPTOR *NotifyDesc, + IN VOID *Ppi + ) +{ + DEBUG ((DEBUG_INFO, "VTd DMAR PEI S3EndOfPeiNotify\n")); + + if ((PcdGet8 (PcdVTdPolicyPropertyMask) & BIT1) =3D=3D 0) { + DumpVtdIfError (); + + DisableVTdTranslationProtection (GetVTdInfoHob ()); + } + return EFI_SUCCESS; +} + +EFI_PEI_NOTIFY_DESCRIPTOR mS3EndOfPeiNotifyDesc =3D { + (EFI_PEI_PPI_DESCRIPTOR_NOTIFY_CALLBACK | EFI_PEI_PPI_DESCRIPTOR_TERMINA= TE_LIST), + &gEfiEndOfPeiSignalPpiGuid, + S3EndOfPeiNotify +}; + +/** + This function handles VTd engine setup + + @param[in] PeiServices Pointer to PEI Services Table. + @param[in] NotifyDesc Pointer to the descriptor for the Notifica= tion event that + caused this function to execute. + @param[in] Ppi Pointer to the PPI data associated with th= is function. + + @retval EFI_STATUS Always return EFI_SUCCESS +**/ +EFI_STATUS +EFIAPI +VTdInfoNotify ( + IN EFI_PEI_SERVICES **PeiServices, + IN EFI_PEI_NOTIFY_DESCRIPTOR *NotifyDesc, + IN VOID *Ppi + ) +{ + EFI_STATUS Status; + VOID *MemoryDiscovered; + BOOLEAN MemoryInitialized; + + DEBUG ((DEBUG_INFO, "VTdInfoNotify\n")); + + // + // Check if memory is initialized. + // + MemoryInitialized =3D FALSE; + Status =3D PeiServicesLocatePpi ( + &gEfiPeiMemoryDiscoveredPpiGuid, + 0, + NULL, + &MemoryDiscovered + ); + if (!EFI_ERROR(Status)) { + MemoryInitialized =3D TRUE; + } + + DEBUG ((DEBUG_INFO, "MemoryInitialized - %x\n", MemoryInitialized)); + + if (!MemoryInitialized) { + // + // If the memory is not initialized, + // Protect all system memory + // + + InitVTdDmarBlockAll (); + + // + // Install PPI. 
+    //
+    Status = PeiServicesInstallPpi (&mIoMmuPpiList);
+    ASSERT_EFI_ERROR (Status);
+  } else {
+    //
+    // If the memory is initialized,
+    // allocate the DMA buffer and protect the rest of system memory
+    //
+
+    VTdLogInitialize (TRUE);
+
+    Status = InitDmaBuffer ();
+    ASSERT_EFI_ERROR (Status);
+
+    //
+    // NOTE: We need to reinitialize VTdInfo because the previous information might be overridden.
+    //
+    Status = InitVTdInfo ();
+    ASSERT_EFI_ERROR (Status);
+
+    Status = InitVTdDmarForDma ();
+    ASSERT_EFI_ERROR (Status);
+  }
+
+  return EFI_SUCCESS;
+}
+
+EFI_PEI_NOTIFY_DESCRIPTOR mVTdInfoNotifyDesc = {
+  (EFI_PEI_PPI_DESCRIPTOR_NOTIFY_CALLBACK | EFI_PEI_PPI_DESCRIPTOR_TERMINATE_LIST),
+  &gEdkiiVTdInfoPpiGuid,
+  VTdInfoNotify
+};
+
+/**
+  Initializes the Intel VTd DMAR PEIM.
+
+  @param[in] FileHandle  Handle of the file being invoked.
+  @param[in] PeiServices Describes the list of possible PEI Services.
+
+  @retval EFI_SUCCESS          The Intel VTd DMAR PEIM is successfully initialized.
+  @retval EFI_OUT_OF_RESOURCES Can't initialize the driver.
+**/ +EFI_STATUS +EFIAPI +IntelVTdDmarInitialize ( + IN EFI_PEI_FILE_HANDLE FileHandle, + IN CONST EFI_PEI_SERVICES **PeiServices + ) +{ + EFI_STATUS Status; + EFI_BOOT_MODE BootMode; + DMA_BUFFER_INFO *DmaBufferInfo; + + DEBUG ((DEBUG_INFO, "IntelVTdDmarInitialize\n")); + + if ((PcdGet8(PcdVTdPolicyPropertyMask) & BIT0) =3D=3D 0) { + return EFI_UNSUPPORTED; + } + + VTdLogInitialize (FALSE); + + DmaBufferInfo =3D BuildGuidHob (&mDmaBufferInfoGuid, sizeof (DMA_BUFFER_= INFO)); + ASSERT(DmaBufferInfo !=3D NULL); + if (DmaBufferInfo =3D=3D NULL) { + return EFI_OUT_OF_RESOURCES; + } + ZeroMem (DmaBufferInfo, sizeof (DMA_BUFFER_INFO)); + + PeiServicesGetBootMode (&BootMode); + + if (BootMode =3D=3D BOOT_ON_S3_RESUME) { + DmaBufferInfo->DmaBufferSize =3D PcdGet32 (PcdVTdPeiDmaBufferSizeS3); + } else { + DmaBufferInfo->DmaBufferSize =3D PcdGet32 (PcdVTdPeiDmaBufferSize); + } + + Status =3D PeiServicesNotifyPpi (&mVTdInfoNotifyDesc); + ASSERT_EFI_ERROR (Status); + + // + // Register EndOfPei Notify for S3 + // + if (BootMode =3D=3D BOOT_ON_S3_RESUME) { + Status =3D PeiServicesNotifyPpi (&mS3EndOfPeiNotifyDesc); + ASSERT_EFI_ERROR (Status); + } + + return EFI_SUCCESS; +} diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/Inte= lVTdCorePei.h b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/I= ntelVTdCorePei.h new file mode 100644 index 000000000..cca9c7f02 --- /dev/null +++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCor= ePei.h @@ -0,0 +1,262 @@ +/** @file + The definition for DMA access Library. + + Copyright (c) 2023, Intel Corporation. All rights reserved.
+ SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ + +#ifndef __DMA_ACCESS_LIB_H__ +#define __DMA_ACCESS_LIB_H__ + +#include + +#define VTD_64BITS_ADDRESS(Lo, Hi) (LShiftU64 (Lo, 12) | LShiftU64 (Hi, 32= )) + +// +// Use 256-bit descriptor +// Queue size is 128. +// +#define VTD_QUEUED_INVALIDATION_DESCRIPTOR_WIDTH 1 +#define VTD_INVALIDATION_QUEUE_SIZE 0 + +typedef struct { + BOOLEAN Done; + UINTN VtdUnitBaseAddress; + UINT16 Segment; + UINT8 Flags; + VTD_VER_REG VerReg; + VTD_CAP_REG CapReg; + VTD_ECAP_REG ECapReg; + BOOLEAN Is5LevelPaging; + UINT8 EnableQueuedInvalidation; + VOID *QiDescBuffer; + UINTN QiDescBufferSize; + UINTN FixedSecondLevelPagingEntry; + UINTN RootEntryTable; + UINTN ExtRootEntryTable; + UINTN RootEntryTablePageSize; + UINTN ExtRootEntryTablePageSize; +} VTD_UNIT_INFO; + +typedef struct { + EFI_ACPI_DMAR_HEADER *AcpiDmarTable; + UINT8 HostAddressWidth; + VTD_REGESTER_THIN_INFO *RegsInfoBuffer; + UINTN VTdEngineCount; + VTD_UNIT_INFO *VtdUnitInfo; +} VTD_INFO; + +typedef struct { + UINTN DmaBufferBase; + UINTN DmaBufferSize; + UINTN DmaBufferCurrentTop; + UINTN DmaBufferCurrentBottom; +} DMA_BUFFER_INFO; + +typedef +VOID +(*PROCESS_DRHD_CALLBACK_FUNC) ( + IN OUT VOID *Context, + IN UINTN VTdIndex, + IN EFI_ACPI_DMAR_DRHD_HEADER *DmarDrhd + ); + +/** + Enable VTd translation table protection for block DMA + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. + + @retval EFI_SUCCESS DMAR translation is enabled. + @retval EFI_DEVICE_ERROR DMAR translation is not enabled. +**/ +EFI_STATUS +EnableVTdTranslationProtectionBlockDma ( + IN UINTN VtdUnitBaseAddress + ); + +/** + Enable VTd translation table protection. + + @param[in] VTdInfo The VTd engine context information. + + @retval EFI_SUCCESS DMAR translation is enabled. + @retval EFI_DEVICE_ERROR DMAR translation is not enabled. +**/ +EFI_STATUS +EnableVTdTranslationProtection ( + IN VTD_INFO *VTdInfo + ); + +/** + Disable VTd translation table protection. 
+
+  @param[in] VTdInfo The VTd engine context information.
+**/
+VOID
+DisableVTdTranslationProtection (
+  IN VTD_INFO *VTdInfo
+  );
+
+/**
+  Parse DMAR DRHD table.
+
+  @param[in] AcpiDmarTable DMAR ACPI table
+  @param[in] Callback      Callback function for handling DRHD
+  @param[in] Context       Callback function context
+
+  @return the VTd engine number.
+
+**/
+UINTN
+ParseDmarAcpiTableDrhd (
+  IN EFI_ACPI_DMAR_HEADER       *AcpiDmarTable,
+  IN PROCESS_DRHD_CALLBACK_FUNC Callback,
+  IN VOID                       *Context
+  );
+
+/**
+  Prepare VTD configuration.
+
+  @param[in] VTdInfo The VTd engine context information.
+
+  @retval EFI_SUCCESS The VTd configuration was prepared successfully.
+**/
+EFI_STATUS
+PrepareVtdConfig (
+  IN VTD_INFO *VTdInfo
+  );
+
+/**
+  Setup VTd translation table.
+
+  @param[in] VTdInfo The VTd engine context information.
+
+  @retval EFI_SUCCESS          Setup translation table successfully.
+  @retval EFI_OUT_OF_RESOURCES Failed to set up the translation table.
+**/
+EFI_STATUS
+SetupTranslationTable (
+  IN VTD_INFO *VTdInfo
+  );
+
+/**
+  Flush VTD page table and context table memory.
+
+  This action is to make sure the IOMMU engine can get final data in memory.
+
+  @param[in] VtdUnitInfo The VTd engine unit information.
+  @param[in] Base        The base address of memory to be flushed.
+  @param[in] Size        The size of memory in bytes to be flushed.
+**/
+VOID
+FlushPageTableMemory (
+  IN VTD_UNIT_INFO *VtdUnitInfo,
+  IN UINTN         Base,
+  IN UINTN         Size
+  );
+
+/**
+  Allocate zero pages.
+
+  @param[in] Pages the number of pages.
+
+  @return the page address.
+  @retval NULL No resource to allocate pages.
+**/
+VOID *
+EFIAPI
+AllocateZeroPages (
+  IN UINTN Pages
+  );
+
+/**
+  Return the index of PCI data.
+
+  @param[in] VtdUnitInfo The VTd engine unit information.
+  @param[in] Segment     The Segment used to identify a VTd engine.
+  @param[in] SourceId    The SourceId used to identify a VTd engine and table entry.
+
+  @return The index of the PCI data.
+  @retval (UINTN)-1 The PCI data is not found.
+**/ +UINTN +GetPciDataIndex ( + IN VTD_UNIT_INFO *VtdUnitInfo, + IN UINT16 Segment, + IN VTD_SOURCE_ID SourceId + ); + +/** + Get the VTd engine context information hob. + + @retval The VTd engine context information. + +**/ +VTD_INFO * +GetVTdInfoHob ( + VOID + ); + +/** + Dump VTd registers if there is error. +**/ +VOID +DumpVtdIfError ( + VOID + ); + +/** + Add the VTd log event in post memory phase. + + @param[in] EventType Event type + @param[in] Data1 First parameter + @param[in] Data2 Second parameter + +**/ +VOID +EFIAPI +VTdLogAddEvent ( + IN CONST VTDLOG_EVENT_TYPE EventType, + IN CONST UINT64 Data1, + IN CONST UINT64 Data2 + ); + +/** + Add a new VTd log event with data. + + @param[in] EventType Event type + @param[in] Param parameter + @param[in] Data Data + @param[in] DataSize Data size + +**/ +VOID +EFIAPI +VTdLogAddDataEvent ( + IN CONST VTDLOG_EVENT_TYPE EventType, + IN CONST UINT64 Param, + IN CONST VOID *Data, + IN CONST UINT32 DataSize + ); + +/** + Add the VTd log event in pre-memory phase. + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. + @param[in] Mode Pre-memory DMA protection mode. + @param[in] Status Status + +**/ +VOID +EFIAPI +VTdLogAddPreMemoryEvent ( + IN UINTN VtdUnitBaseAddress, + IN UINT8 Mode, + IN UINT8 Status + ); + +extern EFI_GUID mVTdInfoGuid; +extern EFI_GUID mDmaBufferInfoGuid; + +#endif diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/Inte= lVTdCorePei.inf b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei= /IntelVTdCorePei.inf new file mode 100644 index 000000000..f756c543c --- /dev/null +++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCor= ePei.inf @@ -0,0 +1,70 @@ +## @file +# Component INF file for the Intel VTd DMAR PEIM. +# +# This driver initializes VTd engine based upon EDKII_VTD_INFO_PPI +# and provide DMA protection in PEI. +# +# Copyright (c) 2023, Intel Corporation. All rights reserved.
+# SPDX-License-Identifier: BSD-2-Clause-Patent +# +## + +[Defines] + INF_VERSION =3D 0x00010017 + BASE_NAME =3D IntelVTdCorePei + MODULE_UNI_FILE =3D IntelVTdCorePei.uni + FILE_GUID =3D 9311b0cc-5c08-4c0a-bec8-23afab024e48 + MODULE_TYPE =3D PEIM + VERSION_STRING =3D 2.0 + ENTRY_POINT =3D IntelVTdDmarInitialize + +[Packages] + MdePkg/MdePkg.dec + MdeModulePkg/MdeModulePkg.dec + IntelSiliconPkg/IntelSiliconPkg.dec + +[Sources] + IntelVTdCorePei.c + IntelVTdCorePei.h + IntelVTdDmar.c + DmarTable.c + TranslationTable.c + +[LibraryClasses] + DebugLib + BaseMemoryLib + BaseLib + PeimEntryPoint + PeiServicesLib + HobLib + IoLib + CacheMaintenanceLib + PciSegmentLib + IntelVTdPeiDxeLib + +[Guids] + gVTdLogBufferHobGuid ## PRODUCES CONSUMES + gVtdPmrInfoDataHobGuid ## CONSUMES + +[Ppis] + gEdkiiIoMmuPpiGuid ## PRODUCES + gEdkiiVTdInfoPpiGuid ## CONSUMES + gEfiPeiMemoryDiscoveredPpiGuid ## CONSUMES + gEfiEndOfPeiSignalPpiGuid ## CONSUMES + gEdkiiVTdNullRootEntryTableGuid ## CONSUMES + +[Pcd] + gIntelSiliconPkgTokenSpaceGuid.PcdVTdPolicyPropertyMask ## CONSUMES + gIntelSiliconPkgTokenSpaceGuid.PcdVTdPeiDmaBufferSize ## CONSUMES + gIntelSiliconPkgTokenSpaceGuid.PcdVTdPeiDmaBufferSizeS3 ## CONSUMES + gIntelSiliconPkgTokenSpaceGuid.PcdVTdSupportAbortDmaMode ## CONSUMES + gIntelSiliconPkgTokenSpaceGuid.PcdVTdLogLevel ## CONSUMES + gIntelSiliconPkgTokenSpaceGuid.PcdVTdPeiPostMemLogBufferSize ## CONSUMES + +[Depex] + gEfiPeiMasterBootModePpiGuid AND + gEdkiiVTdInfoPpiGuid + +[UserExtensions.TianoCore."ExtraFiles"] + IntelVTdCorePeiExtra.uni + diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/Inte= lVTdCorePei.uni b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei= /IntelVTdCorePei.uni new file mode 100644 index 000000000..2b5b260f5 --- /dev/null +++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCor= ePei.uni @@ -0,0 +1,14 @@ +// /** @file +// IntelVTdDmarPei Module Localized Abstract and Description Content +// +// Copyright (c) 
2020, Intel Corporation. All rights reserved.
+//
+// SPDX-License-Identifier: BSD-2-Clause-Patent
+//
+// **/
+
+
+#string STR_MODULE_ABSTRACT             #language en-US "Intel VTd CORE PEI Driver."
+
+#string STR_MODULE_DESCRIPTION          #language en-US "This driver initializes the VTd engine based upon EDKII_VTD_INFO_PPI and provides DMA protection to devices in PEI."
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCorePeiExtra.uni b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCorePeiExtra.uni
new file mode 100644
index 000000000..14848f924
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdCorePeiExtra.uni
@@ -0,0 +1,14 @@
+// /** @file
+// IntelVTdDmarPei Localized Strings and Content
+//
+// Copyright (c) 2020, Intel Corporation. All rights reserved.
+// +// SPDX-License-Identifier: BSD-2-Clause-Patent +// +// **/ + +#string STR_PROPERTIES_MODULE_NAME +#language en-US +"Intel VTd CORE PEI Driver" + + diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/Inte= lVTdDmar.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/Inte= lVTdDmar.c new file mode 100644 index 000000000..93207ba52 --- /dev/null +++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/IntelVTdDma= r.c @@ -0,0 +1,727 @@ +/** @file + + Copyright (c) 2023, Intel Corporation. All rights reserved.
+ + SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "IntelVTdCorePei.h" + +#define VTD_CAP_REG_NFR_MAX (256) + +/** + Flush VTD page table and context table memory. + + This action is to make sure the IOMMU engine can get final data in memor= y. + + @param[in] VtdUnitInfo The VTd engine unit information. + @param[in] Base The base address of memory to be flushed. + @param[in] Size The size of memory in bytes to be flushed. +**/ +VOID +FlushPageTableMemory ( + IN VTD_UNIT_INFO *VtdUnitInfo, + IN UINTN Base, + IN UINTN Size + ) +{ + if (VtdUnitInfo->ECapReg.Bits.C =3D=3D 0) { + WriteBackDataCacheRange ((VOID *) Base, Size); + } +} + +/** + Perpare cache invalidation interface. + + @param[in] VtdUnitInfo The VTd engine unit information. + + @retval EFI_SUCCESS The operation was successful. + @retval EFI_UNSUPPORTED Invalidation method is not supported. + @retval EFI_OUT_OF_RESOURCES A memory allocation failed. 
+**/ +EFI_STATUS +PerpareCacheInvalidationInterface ( + IN VTD_UNIT_INFO *VtdUnitInfo + ) +{ + UINT32 Reg32; + VTD_ECAP_REG ECapReg; + VTD_IQA_REG IqaReg; + UINTN VtdUnitBaseAddress; + + VtdUnitBaseAddress =3D VtdUnitInfo->VtdUnitBaseAddress; + + if (VtdUnitInfo->VerReg.Bits.Major <=3D 5) { + VtdUnitInfo->EnableQueuedInvalidation =3D 0; + DEBUG ((DEBUG_INFO, "Use Register-based Invalidation Interface for eng= ine [0x%x]\n", VtdUnitBaseAddress)); + return EFI_SUCCESS; + } + + ECapReg.Uint64 =3D MmioRead64 (VtdUnitBaseAddress + R_ECAP_REG); + if (ECapReg.Bits.QI =3D=3D 0) { + DEBUG ((DEBUG_ERROR, "Hardware does not support queued invalidations i= nterface for engine [0x%x]\n", VtdUnitBaseAddress)); + return EFI_UNSUPPORTED; + } + + VtdUnitInfo->EnableQueuedInvalidation =3D 1; + DEBUG ((DEBUG_INFO, "Use Queued Invalidation Interface for engine [0x%x]= \n", VtdUnitBaseAddress)); + + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + if ((Reg32 & B_GSTS_REG_QIES) !=3D 0) { + DEBUG ((DEBUG_INFO,"Queued Invalidation Interface was enabled.\n")); + + VtdLibDisableQueuedInvalidationInterface (VtdUnitBaseAddress); + } + + // + // Initialize the Invalidation Queue Tail Register to zero. 
+ // + MmioWrite64 (VtdUnitBaseAddress + R_IQT_REG, 0); + + // + // Setup the IQ address, size and descriptor width through the Invalidat= ion Queue Address Register + // + if (VtdUnitInfo->QiDescBuffer =3D=3D NULL) { + VtdUnitInfo->QiDescBufferSize =3D (sizeof (QI_256_DESC) * ((UINTN) 1 <= < (VTD_INVALIDATION_QUEUE_SIZE + 7))); + VtdUnitInfo->QiDescBuffer =3D AllocatePages (EFI_SIZE_TO_PAGES (VtdUni= tInfo->QiDescBufferSize)); + if (VtdUnitInfo->QiDescBuffer =3D=3D NULL) { + DEBUG ((DEBUG_ERROR,"Could not Alloc Invalidation Queue Buffer.\n")); + VTdLogAddEvent (VTDLOG_PEI_QUEUED_INVALIDATION, VTD_LOG_QI_ERROR_OUT= _OF_RESOURCES, VtdUnitBaseAddress); + return EFI_OUT_OF_RESOURCES; + } + } + + DEBUG ((DEBUG_INFO, "Invalidation Queue Buffer Size : %d\n", VtdUnitInfo= ->QiDescBufferSize)); + // + // 4KB Aligned address + // + IqaReg.Uint64 =3D (UINT64) (UINTN) VtdUnitInfo->QiDescBuffer; + IqaReg.Bits.DW =3D VTD_QUEUED_INVALIDATION_DESCRIPTOR_WIDTH; + IqaReg.Bits.QS =3D VTD_INVALIDATION_QUEUE_SIZE; + MmioWrite64 (VtdUnitBaseAddress + R_IQA_REG, IqaReg.Uint64); + IqaReg.Uint64 =3D MmioRead64 (VtdUnitBaseAddress + R_IQA_REG); + DEBUG ((DEBUG_INFO, "IQA_REG =3D 0x%lx, IQH_REG =3D 0x%lx\n", IqaReg.Uin= t64, MmioRead64 (VtdUnitBaseAddress + R_IQH_REG))); + + // + // Enable the queued invalidation interface through the Global Command R= egister. + // When enabled, hardware sets the QIES field in the Global Status Regis= ter. + // + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + Reg32 |=3D B_GMCD_REG_QIE; + MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, Reg32); + DEBUG ((DEBUG_INFO, "Enable Queued Invalidation Interface. 
GCMD_REG = 0x%x\n", Reg32));
+  do {
+    Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+  } while ((Reg32 & B_GSTS_REG_QIES) == 0);
+
+  VTdLogAddEvent (VTDLOG_PEI_QUEUED_INVALIDATION, VTD_LOG_QI_ENABLE, VtdUnitBaseAddress);
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Submit the queued invalidation descriptor to the remapping
+  hardware unit and wait for its completion.
+
+  @param[in] VtdUnitBaseAddress The base address of the VTd engine.
+  @param[in] Desc               The invalidation descriptor.
+
+  @retval EFI_SUCCESS           The operation was successful.
+  @retval RETURN_DEVICE_ERROR   A fault is detected.
+  @retval EFI_INVALID_PARAMETER Parameter is invalid.
+**/
+EFI_STATUS
+SubmitQueuedInvalidationDescriptor (
+  IN UINTN                      VtdUnitBaseAddress,
+  IN QI_256_DESC                *Desc
+  )
+{
+  EFI_STATUS                    Status;
+  VTD_REGESTER_QI_INFO          RegisterQi;
+
+  Status = VtdLibSubmitQueuedInvalidationDescriptor (VtdUnitBaseAddress, Desc, FALSE);
+  if (Status == EFI_DEVICE_ERROR) {
+    RegisterQi.BaseAddress = VtdUnitBaseAddress;
+    RegisterQi.FstsReg     = MmioRead32 (VtdUnitBaseAddress + R_FSTS_REG);
+    RegisterQi.IqercdReg   = MmioRead64 (VtdUnitBaseAddress + R_IQERCD_REG);
+    VTdLogAddDataEvent (VTDLOG_PEI_REGISTER, VTDLOG_REGISTER_QI, &RegisterQi, sizeof (VTD_REGESTER_QI_INFO));
+
+    MmioWrite32 (VtdUnitBaseAddress + R_FSTS_REG, RegisterQi.FstsReg & (B_FSTS_REG_IQE | B_FSTS_REG_ITE | B_FSTS_REG_ICE));
+  }
+
+  return Status;
+}
+
+/**
+  Invalidate VTd context cache.
+
+  @param[in] VtdUnitInfo        The VTd engine unit information.
+
+  @retval EFI_SUCCESS           The operation was successful.
+  @retval EFI_DEVICE_ERROR      The context cache invalidation failed.
+**/
+EFI_STATUS
+InvalidateContextCache (
+  IN VTD_UNIT_INFO              *VtdUnitInfo
+  )
+{
+  UINT64                        Reg64;
+  QI_256_DESC                   QiDesc;
+
+  if (VtdUnitInfo->EnableQueuedInvalidation == 0) {
+    //
+    // Register-based Invalidation
+    //
+    Reg64 = MmioRead64 (VtdUnitInfo->VtdUnitBaseAddress + R_CCMD_REG);
+    if ((Reg64 & B_CCMD_REG_ICC) != 0) {
+      DEBUG ((DEBUG_ERROR, "ERROR: InvalidateContextCache: B_CCMD_REG_ICC is set for VTD(%x)\n", VtdUnitInfo->VtdUnitBaseAddress));
+      return EFI_DEVICE_ERROR;
+    }
+
+    Reg64 &= ((~B_CCMD_REG_ICC) & (~B_CCMD_REG_CIRG_MASK));
+    Reg64 |= (B_CCMD_REG_ICC | V_CCMD_REG_CIRG_GLOBAL);
+    MmioWrite64 (VtdUnitInfo->VtdUnitBaseAddress + R_CCMD_REG, Reg64);
+
+    do {
+      Reg64 = MmioRead64 (VtdUnitInfo->VtdUnitBaseAddress + R_CCMD_REG);
+    } while ((Reg64 & B_CCMD_REG_ICC) != 0);
+  } else {
+    //
+    // Queued Invalidation
+    //
+    QiDesc.Uint64[0] = QI_CC_FM(0) | QI_CC_SID(0) | QI_CC_DID(0) | QI_CC_GRAN(1) | QI_CC_TYPE;
+    QiDesc.Uint64[1] = 0;
+    QiDesc.Uint64[2] = 0;
+    QiDesc.Uint64[3] = 0;
+
+    return SubmitQueuedInvalidationDescriptor (VtdUnitInfo->VtdUnitBaseAddress, &QiDesc);
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Invalidate VTd IOTLB.
+
+  @param[in] VtdUnitInfo        The VTd engine unit information.
+
+  @retval EFI_SUCCESS           The operation was successful.
+  @retval EFI_DEVICE_ERROR      The IOTLB invalidation failed.
+**/ +EFI_STATUS +InvalidateIOTLB ( + IN VTD_UNIT_INFO *VtdUnitInfo + ) +{ + UINT64 Reg64; + VTD_ECAP_REG ECapReg; + VTD_CAP_REG CapReg; + QI_256_DESC QiDesc; + + if (VtdUnitInfo->EnableQueuedInvalidation =3D=3D 0) { + // + // Register-based Invalidation + // + ECapReg.Uint64 =3D MmioRead64 (VtdUnitInfo->VtdUnitBaseAddress + R_ECA= P_REG); + + Reg64 =3D MmioRead64 (VtdUnitInfo->VtdUnitBaseAddress + (ECapReg.Bits.= IRO * 16) + R_IOTLB_REG); + if ((Reg64 & B_IOTLB_REG_IVT) !=3D 0) { + DEBUG ((DEBUG_ERROR, "ERROR: InvalidateIOTLB: B_IOTLB_REG_IVT is se= t for VTD(%x)\n", VtdUnitInfo->VtdUnitBaseAddress)); + return EFI_DEVICE_ERROR; + } + + Reg64 &=3D ((~B_IOTLB_REG_IVT) & (~B_IOTLB_REG_IIRG_MASK)); + Reg64 |=3D (B_IOTLB_REG_IVT | V_IOTLB_REG_IIRG_GLOBAL); + MmioWrite64 (VtdUnitInfo->VtdUnitBaseAddress + (ECapReg.Bits.IRO * 16)= + R_IOTLB_REG, Reg64); + + do { + Reg64 =3D MmioRead64 (VtdUnitInfo->VtdUnitBaseAddress + (ECapReg.Bit= s.IRO * 16) + R_IOTLB_REG); + } while ((Reg64 & B_IOTLB_REG_IVT) !=3D 0); + } else { + // + // Queued Invalidation + // + CapReg.Uint64 =3D MmioRead64 (VtdUnitInfo->VtdUnitBaseAddress + R_CAP_= REG); + QiDesc.Uint64[0] =3D QI_IOTLB_DID(0) | (CapReg.Bits.DRD ? QI_IOTLB_DR(= 1) : QI_IOTLB_DR(0)) | (CapReg.Bits.DWD ? QI_IOTLB_DW(1) : QI_IOTLB_DW(0)) = | QI_IOTLB_GRAN(1) | QI_IOTLB_TYPE; + QiDesc.Uint64[1] =3D QI_IOTLB_ADDR(0) | QI_IOTLB_IH(0) | QI_IOTLB_AM(0= ); + QiDesc.Uint64[2] =3D 0; + QiDesc.Uint64[3] =3D 0; + + return SubmitQueuedInvalidationDescriptor(VtdUnitInfo->VtdUnitBaseAddr= ess, &QiDesc); + } + + return EFI_SUCCESS; +} + +/** + Enable DMAR translation in pre-mem phase. + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. + @param[in] RtaddrRegValue The value of RTADDR_REG. + + @retval EFI_SUCCESS DMAR translation is enabled. + @retval EFI_DEVICE_ERROR DMAR translation is not enabled. 
+**/ +EFI_STATUS +EnableDmarPreMem ( + IN UINTN VtdUnitBaseAddress, + IN UINT64 RtaddrRegValue + ) +{ + UINT32 Reg32; + + DEBUG ((DEBUG_INFO, ">>>>>>EnableDmarPreMem() for engine [%x] \n", VtdUn= itBaseAddress)); + + DEBUG ((DEBUG_INFO, "RTADDR_REG : 0x%016lx \n", RtaddrRegValue)); + MmioWrite64 (VtdUnitBaseAddress + R_RTADDR_REG, RtaddrRegValue); + + DEBUG ((DEBUG_INFO, "EnableDmarPreMem: waiting for RTPS bit to be set...= \n")); + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_SRTP); + + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + DEBUG ((DEBUG_INFO, "EnableDmarPreMem: R_GSTS_REG =3D 0x%x \n", Reg32)); + + // + // Write Buffer Flush + // + VtdLibFlushWriteBuffer (VtdUnitBaseAddress); + + // + // Enable VTd + // + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_TE); + + DEBUG ((DEBUG_INFO, "VTD () enabled!<<<<<<\n")); + + return EFI_SUCCESS; +} + +/** + Enable DMAR translation. + + @param[in] VtdUnitInfo The VTd engine unit information. + @param[in] RootEntryTable The address of the VTd RootEntryTable. + + @retval EFI_SUCCESS DMAR translation is enabled. + @retval EFI_DEVICE_ERROR DMAR translation is not enabled. +**/ +EFI_STATUS +EnableDmar ( + IN VTD_UNIT_INFO *VtdUnitInfo, + IN UINTN RootEntryTable + ) +{ + UINTN VtdUnitBaseAddress; + BOOLEAN TEWasEnabled; + + VtdUnitBaseAddress =3D VtdUnitInfo->VtdUnitBaseAddress; + + DEBUG ((DEBUG_INFO, ">>>>>>EnableDmar() for engine [%x] \n", VtdUnitBase= Address)); + + // + // Check TE was enabled or not. + // + TEWasEnabled =3D ((MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG) & B_GSTS= _REG_TE) =3D=3D B_GSTS_REG_TE); + + if (TEWasEnabled && (VtdUnitInfo->ECapReg.Bits.ADMS =3D=3D 1) && PcdGetB= ool (PcdVTdSupportAbortDmaMode)) { + // + // For implementations reporting Enhanced SRTP Support (ESRTPS) field = as + // Clear in the Capability register, software must not modify this fie= ld while + // DMA remapping is active (TES=3D1 in Global Status register). 
+ // + if (VtdUnitInfo->CapReg.Bits.ESRTPS =3D=3D 0) { + VtdLibClearGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG= _TE); + } + + // + // Enable ADM + // + MmioWrite64 (VtdUnitBaseAddress + R_RTADDR_REG, (UINT64) (RootEntryTab= le | V_RTADDR_REG_TTM_ADM)); + + DEBUG ((DEBUG_INFO, "EnableDmar: waiting for RTPS bit to be set... \n"= )); + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_SRT= P); + + DEBUG ((DEBUG_INFO, "Enable Abort DMA Mode...\n")); + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_TE); + + } else { + DEBUG ((DEBUG_INFO, "RootEntryTable 0x%x \n", RootEntryTable)); + MmioWrite64 (VtdUnitBaseAddress + R_RTADDR_REG, (UINT64) RootEntryTabl= e); + + DEBUG ((DEBUG_INFO, "EnableDmar: waiting for RTPS bit to be set... \n"= )); + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_SRT= P); + } + + // + // Write Buffer Flush before invalidation + // + VtdLibFlushWriteBuffer (VtdUnitBaseAddress); + + // + // Invalidate the context cache + // + InvalidateContextCache (VtdUnitInfo); + + // + // Invalidate the IOTLB cache + // + InvalidateIOTLB (VtdUnitInfo); + + if (TEWasEnabled && (VtdUnitInfo->ECapReg.Bits.ADMS =3D=3D 1) && PcdGetB= ool (PcdVTdSupportAbortDmaMode)) { + if (VtdUnitInfo->CapReg.Bits.ESRTPS =3D=3D 0) { + VtdLibClearGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG= _TE); + } + + DEBUG ((DEBUG_INFO, "RootEntryTable 0x%x \n", RootEntryTable)); + MmioWrite64 (VtdUnitBaseAddress + R_RTADDR_REG, (UINT64) RootEntryTabl= e); + + DEBUG ((DEBUG_INFO, "EnableDmar: waiting for RTPS bit to be set... 
\n"= )); + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_SRT= P); + } + + // + // Enable VTd + // + VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_TE); + + DEBUG ((DEBUG_INFO, "VTD () enabled!<<<<<<\n")); + + return EFI_SUCCESS; +} + +/** + Enable VTd translation table protection for block DMA + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. + + @retval EFI_SUCCESS DMAR translation is enabled. + @retval EFI_DEVICE_ERROR DMAR translation is not enabled. +**/ +EFI_STATUS +EnableVTdTranslationProtectionBlockDma ( + IN UINTN VtdUnitBaseAddress + ) +{ + EFI_STATUS Status; + VTD_ECAP_REG ECapReg; + EDKII_VTD_NULL_ROOT_ENTRY_TABLE_PPI *RootEntryTable; + UINT8 Mode; + + DEBUG ((DEBUG_INFO, "EnableVTdTranslationProtectionBlockDma - 0x%08x\n",= VtdUnitBaseAddress)); + + DEBUG ((DEBUG_INFO, "PcdVTdSupportAbortDmaMode : %d\n", PcdGetBool (PcdV= TdSupportAbortDmaMode))); + + ECapReg.Uint64 =3D MmioRead64 (VtdUnitBaseAddress + R_ECAP_REG); + DEBUG ((DEBUG_INFO, "ECapReg.ADMS : %d\n", ECapReg.Bits.ADMS)); + + if ((ECapReg.Bits.ADMS =3D=3D 1) && PcdGetBool (PcdVTdSupportAbortDmaMod= e)) { + Mode =3D VTD_LOG_PEI_PRE_MEM_ADM; + // + // Use Abort DMA Mode + // + DEBUG ((DEBUG_INFO, "Enable abort DMA mode.\n")); + Status =3D EnableDmarPreMem (VtdUnitBaseAddress, V_RTADDR_REG_TTM_ADM); + } else { + // + // Use Null Root Entry Table + // + Status =3D PeiServicesLocatePpi ( + &gEdkiiVTdNullRootEntryTableGuid, + 0, + NULL, + (VOID **)&RootEntryTable + ); + if (EFI_ERROR (Status)) { + Mode =3D VTD_LOG_PEI_PRE_MEM_DISABLE; + DEBUG ((DEBUG_ERROR, "Locate Null Root Entry Table Ppi Failed : %r\n= ", Status)); + ASSERT (FALSE); + } else { + Mode =3D VTD_LOG_PEI_PRE_MEM_TE; + DEBUG ((DEBUG_INFO, "Block All DMA by TE.\n")); + Status =3D EnableDmarPreMem (VtdUnitBaseAddress, (UINT64) (*RootEntr= yTable)); + } + } + + VTdLogAddPreMemoryEvent (VtdUnitBaseAddress, Mode, EFI_ERROR (Status) ? 
= 0 : 1); + + return Status; +} + +/** + Enable VTd translation table protection. + + @param[in] VTdInfo The VTd engine context information. + + @retval EFI_SUCCESS DMAR translation is enabled. + @retval EFI_DEVICE_ERROR DMAR translation is not enabled. +**/ +EFI_STATUS +EnableVTdTranslationProtection ( + IN VTD_INFO *VTdInfo + ) +{ + EFI_STATUS Status; + UINTN Index; + VTD_UNIT_INFO *VtdUnitInfo; + + for (Index =3D 0; Index < VTdInfo->VTdEngineCount; Index++) { + VtdUnitInfo =3D &VTdInfo->VtdUnitInfo[Index]; + if (VtdUnitInfo->Done) { + DEBUG ((DEBUG_INFO, "EnableVtdDmar (%d) was enabled\n", Index)); + continue; + } + + if (VtdUnitInfo->ExtRootEntryTable !=3D 0) { + DEBUG ((DEBUG_INFO, "EnableVtdDmar (%d) ExtRootEntryTable 0x%x\n", I= ndex, VtdUnitInfo->ExtRootEntryTable)); + Status =3D EnableDmar (VtdUnitInfo, VtdUnitInfo->ExtRootEntryTable |= BIT11); + } else { + DEBUG ((DEBUG_INFO, "EnableVtdDmar (%d) RootEntryTable 0x%x\n", Inde= x, VtdUnitInfo->RootEntryTable)); + Status =3D EnableDmar (VtdUnitInfo, VtdUnitInfo->RootEntryTable); + } + + VTdLogAddEvent (VTDLOG_PEI_POST_MEM_ENABLE_DMA_PROTECT, VTdInfo->VtdUn= itInfo[Index].VtdUnitBaseAddress, Status); + + if (EFI_ERROR (Status)) { + DEBUG ((DEBUG_ERROR, "EnableVtdDmar (%d) Failed !\n", Index)); + return Status; + } + VtdUnitInfo->Done =3D TRUE; + } + return EFI_SUCCESS; +} + +/** + Disable VTd translation table protection. + + @param[in] VTdInfo The VTd engine context information. 
+**/
+VOID
+DisableVTdTranslationProtection (
+  IN VTD_INFO                   *VTdInfo
+  )
+{
+  UINTN                         Index;
+  VTD_UNIT_INFO                 *VtdUnitInfo;
+
+  if (VTdInfo == NULL) {
+    return;
+  }
+
+  DEBUG ((DEBUG_INFO, "DisableVTdTranslationProtection - %d Vtd Engine\n", VTdInfo->VTdEngineCount));
+
+  for (Index = 0; Index < VTdInfo->VTdEngineCount; Index++) {
+    VtdUnitInfo = &VTdInfo->VtdUnitInfo[Index];
+
+    VtdLibDisableDmar (VtdUnitInfo->VtdUnitBaseAddress);
+    VTdLogAddEvent (VTDLOG_PEI_POST_MEM_DISABLE_DMA_PROTECT, VtdUnitInfo->VtdUnitBaseAddress, 0);
+
+    if (VtdUnitInfo->EnableQueuedInvalidation != 0) {
+      //
+      // Disable queued invalidation interface.
+      //
+      VtdLibDisableQueuedInvalidationInterface (VtdUnitInfo->VtdUnitBaseAddress);
+
+      if (VtdUnitInfo->QiDescBuffer != NULL) {
+        FreePages (VtdUnitInfo->QiDescBuffer, EFI_SIZE_TO_PAGES (VtdUnitInfo->QiDescBufferSize));
+        VtdUnitInfo->QiDescBuffer = NULL;
+        VtdUnitInfo->QiDescBufferSize = 0;
+      }
+
+      VtdUnitInfo->EnableQueuedInvalidation = 0;
+      VTdLogAddEvent (VTDLOG_PEI_QUEUED_INVALIDATION, VTD_LOG_QI_DISABLE, VtdUnitInfo->VtdUnitBaseAddress);
+    }
+  }
+
+  return;
+}
+
+/**
+  Check if the VTd engine uses 5-level paging.
+
+  @param[in] HostAddressWidth   Host Address Width.
+  @param[in] CapReg             The VTd engine capability register.
+  @param[out] Is5LevelPaging    Use 5-level paging or not.
+
+  @retval EFI_SUCCESS           Success.
+  @retval EFI_UNSUPPORTED       The SAGAW capability is not supported.
+
+**/
+EFI_STATUS
+VtdCheckUsing5LevelPaging (
+  IN  UINT8                     HostAddressWidth,
+  IN  VTD_CAP_REG               CapReg,
+  OUT BOOLEAN                   *Is5LevelPaging
+  )
+{
+  DEBUG ((DEBUG_INFO, "  CapReg SAGAW bits : 0x%02x\n", CapReg.Bits.SAGAW));
+
+  *Is5LevelPaging = FALSE;
+  if ((CapReg.Bits.SAGAW & BIT3) != 0) {
+    *Is5LevelPaging = TRUE;
+    if ((HostAddressWidth <= 48) &&
+        ((CapReg.Bits.SAGAW & BIT2) != 0)) {
+      *Is5LevelPaging = FALSE;
+    } else {
+      return EFI_UNSUPPORTED;
+    }
+  }
+  if ((CapReg.Bits.SAGAW & (BIT3 | BIT2)) == 0) {
+    return EFI_UNSUPPORTED;
+  }
+  DEBUG ((DEBUG_INFO, "  Using %d Level Paging\n", *Is5LevelPaging ? 5 : 4));
+  return EFI_SUCCESS;
+}
+
+
+/**
+  Prepare VTD configuration.
+
+  @param[in] VTdInfo            The VTd engine context information.
+
+  @retval EFI_SUCCESS           The VTd configuration was prepared successfully.
+**/
+EFI_STATUS
+PrepareVtdConfig (
+  IN VTD_INFO                   *VTdInfo
+  )
+{
+  EFI_STATUS                    Status;
+  UINTN                         Index;
+  VTD_UNIT_INFO                 *VtdUnitInfo;
+  UINTN                         VtdUnitBaseAddress;
+
+  if (VTdInfo->RegsInfoBuffer == NULL) {
+    VTdInfo->RegsInfoBuffer = AllocateZeroPages (EFI_SIZE_TO_PAGES (sizeof (VTD_REGESTER_THIN_INFO) + sizeof (VTD_UINT128) * VTD_CAP_REG_NFR_MAX));
+    ASSERT (VTdInfo->RegsInfoBuffer != NULL);
+  }
+
+  for (Index = 0; Index < VTdInfo->VTdEngineCount; Index++) {
+    VtdUnitInfo = &VTdInfo->VtdUnitInfo[Index];
+    if (VtdUnitInfo->Done) {
+      continue;
+    }
+    VtdUnitBaseAddress = VtdUnitInfo->VtdUnitBaseAddress;
+    DEBUG ((DEBUG_INFO, "VTd Engine: 0x%08X\n", VtdUnitBaseAddress));
+
+    VtdUnitInfo->VerReg.Uint32 = MmioRead32 (VtdUnitBaseAddress + R_VER_REG);
+    VtdUnitInfo->CapReg.Uint64 = MmioRead64 (VtdUnitBaseAddress + R_CAP_REG);
+    VtdUnitInfo->ECapReg.Uint64 = MmioRead64 (VtdUnitBaseAddress + R_ECAP_REG);
+    DEBUG ((DEBUG_INFO, "  VER_REG  : 0x%08X\n", VtdUnitInfo->VerReg.Uint32));
+    DEBUG ((DEBUG_INFO, "  CAP_REG  :
0x%016lX\n", VtdUnitInfo->CapReg.Uin= t64)); + DEBUG ((DEBUG_INFO, " ECAP_REG : 0x%016lX\n", VtdUnitInfo->ECapReg.Ui= nt64)); + + Status =3D VtdCheckUsing5LevelPaging (VTdInfo->HostAddressWidth, VtdUn= itInfo->CapReg, &(VtdUnitInfo->Is5LevelPaging)); + if (EFI_ERROR (Status)) { + DEBUG ((DEBUG_ERROR, "!!!! Page-table type 0x%X is not supported!!!!= \n", VtdUnitInfo->CapReg.Bits.SAGAW)); + return Status; + } + + Status =3D PerpareCacheInvalidationInterface(&VTdInfo->VtdUnitInfo[Ind= ex]); + if (EFI_ERROR (Status)) { + return Status; + } + } + + return EFI_SUCCESS; +} + +/** + Dump VTd registers if there is error. +**/ +VOID +DumpVtdIfError ( + VOID + ) +{ + VTD_INFO *VTdInfo; + UINTN Num; + UINTN VtdUnitBaseAddress; + UINT16 Index; + VTD_REGESTER_THIN_INFO *VtdRegInfo; + VTD_FRCD_REG FrcdReg; + VTD_CAP_REG CapReg; + UINT32 FstsReg32; + UINT32 FectlReg32; + BOOLEAN HasError; + + VTdInfo =3D GetVTdInfoHob (); + if (VTdInfo =3D=3D NULL) { + return; + } + + VtdRegInfo =3D VTdInfo->RegsInfoBuffer; + if (VtdRegInfo =3D=3D NULL) { + return; + } + + for (Num =3D 0; Num < VTdInfo->VTdEngineCount; Num++) { + HasError =3D FALSE; + VtdUnitBaseAddress =3D VTdInfo->VtdUnitInfo[Num].VtdUnitBaseAddress; + FstsReg32 =3D MmioRead32 (VtdUnitBaseAddress + R_FSTS_REG); + if (FstsReg32 !=3D 0) { + HasError =3D TRUE; + } + FectlReg32 =3D MmioRead32 (VtdUnitBaseAddress + R_FECTL_REG); + if ((FectlReg32 & BIT30) !=3D 0) { + HasError =3D TRUE; + } + + CapReg.Uint64 =3D MmioRead64 (VtdUnitBaseAddress + R_CAP_REG); + for (Index =3D 0; Index < (UINT16) CapReg.Bits.NFR + 1; Index++) { + FrcdReg.Uint64[0] =3D MmioRead64 (VtdUnitBaseAddress + ((CapReg.Bits= .FRO * 16) + (Index * 16) + R_FRCD_REG)); + FrcdReg.Uint64[1] =3D MmioRead64 (VtdUnitBaseAddress + ((CapReg.Bits= .FRO * 16) + (Index * 16) + R_FRCD_REG + sizeof(UINT64))); + if (FrcdReg.Bits.F !=3D 0) { + HasError =3D TRUE; + break; + } + } + + if (HasError) { + DEBUG ((DEBUG_INFO, "\n#### ERROR ####\n")); + + VtdRegInfo->BaseAddress =3D 
VtdUnitBaseAddress;
+      VtdRegInfo->GstsReg    = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+      VtdRegInfo->RtaddrReg  = MmioRead64 (VtdUnitBaseAddress + R_RTADDR_REG);
+      VtdRegInfo->FstsReg    = FstsReg32;
+      VtdRegInfo->FectlReg   = FectlReg32;
+      VtdRegInfo->IqercdReg  = MmioRead64 (VtdUnitBaseAddress + R_IQERCD_REG);
+
+      CapReg.Uint64 = MmioRead64 (VtdUnitBaseAddress + R_CAP_REG);
+      for (Index = 0; Index < (UINT16) CapReg.Bits.NFR + 1; Index++) {
+        VtdRegInfo->FrcdReg[Index].Uint64Lo = MmioRead64 (VtdUnitBaseAddress + ((CapReg.Bits.FRO * 16) + (Index * 16) + R_FRCD_REG));
+        VtdRegInfo->FrcdReg[Index].Uint64Hi = MmioRead64 (VtdUnitBaseAddress + ((CapReg.Bits.FRO * 16) + (Index * 16) + R_FRCD_REG + sizeof(UINT64)));
+      }
+      VtdRegInfo->FrcdRegNum = Index;
+
+      DEBUG ((DEBUG_INFO, "\n#### ERROR ####\n"));
+
+      VtdLibDumpVtdRegsThin (NULL, NULL, VtdRegInfo);
+
+      DEBUG ((DEBUG_INFO, "#### ERROR ####\n\n"));
+
+      VTdLogAddDataEvent (VTDLOG_PEI_REGISTER, VTDLOG_REGISTER_THIN, VtdRegInfo, sizeof (VTD_REGESTER_THIN_INFO) + sizeof (VTD_UINT128) * (VtdRegInfo->FrcdRegNum - 1));
+
+      //
+      // Clear the fault recording registers and fault status.
+      //
+      for (Index = 0; Index < (UINT16) CapReg.Bits.NFR + 1; Index++) {
+        FrcdReg.Uint64[1] = MmioRead64 (VtdUnitBaseAddress + ((CapReg.Bits.FRO * 16) + (Index * 16) + R_FRCD_REG + sizeof(UINT64)));
+        if (FrcdReg.Bits.F != 0) {
+          //
+          // Software writes the value read from this field (F) to clear it.
+ // + MmioWrite64 (VtdUnitBaseAddress + ((CapReg.Bits.FRO * 16) + (Ind= ex * 16) + R_FRCD_REG + sizeof(UINT64)), FrcdReg.Uint64[1]); + } + } + MmioWrite32 (VtdUnitBaseAddress + R_FSTS_REG, MmioRead32 (VtdUnitBas= eAddress + R_FSTS_REG)); + } + } +} \ No newline at end of file diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/Tran= slationTable.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/= TranslationTable.c new file mode 100644 index 000000000..03a4544a0 --- /dev/null +++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdCorePei/Translation= Table.c @@ -0,0 +1,926 @@ +/** @file + + Copyright (c) 2020 - 2021, Intel Corporation. All rights reserved.
+ + SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "IntelVTdCorePei.h" + +#define ALIGN_VALUE_UP(Value, Alignment) (((Value) + (Alignment) - 1) & (= ~((Alignment) - 1))) +#define ALIGN_VALUE_LOW(Value, Alignment) ((Value) & (~((Alignment) - 1))) + +/** + Allocate zero pages. + + @param[in] Pages the number of pages. + + @return the page address. + @retval NULL No resource to allocate pages. +**/ +VOID * +EFIAPI +AllocateZeroPages ( + IN UINTN Pages + ) +{ + VOID *Addr; + + Addr =3D AllocatePages (Pages); + if (Addr =3D=3D NULL) { + return NULL; + } + ZeroMem (Addr, EFI_PAGES_TO_SIZE (Pages)); + return Addr; +} + +/** + Set second level paging entry attribute based upon IoMmuAccess. + + @param[in] PtEntry The paging entry. + @param[in] IoMmuAccess The IOMMU access. +**/ +VOID +SetSecondLevelPagingEntryAttribute ( + IN VTD_SECOND_LEVEL_PAGING_ENTRY *PtEntry, + IN UINT64 IoMmuAccess + ) +{ + PtEntry->Bits.Read =3D ((IoMmuAccess & EDKII_IOMMU_ACCESS_READ) !=3D 0); + PtEntry->Bits.Write =3D ((IoMmuAccess & EDKII_IOMMU_ACCESS_WRITE) !=3D 0= ); + DEBUG ((DEBUG_VERBOSE, "SetSecondLevelPagingEntryAttribute - 0x%x - 0x%x= \n", PtEntry, IoMmuAccess)); +} + +/** + Create second level paging entry table. + + @param[in] VtdUnitInfo The VTd engine unit information. + @param[in] SecondLevelPagingEntry The second level paging entry. + @param[in] MemoryBase The base of the memory. + @param[in] MemoryLimit The limit of the memory. + @param[in] IoMmuAccess The IOMMU access. + + @return The second level paging entry. 
+**/ +VTD_SECOND_LEVEL_PAGING_ENTRY * +CreateSecondLevelPagingEntryTable ( + IN VTD_UNIT_INFO *VtdUnitInfo, + IN VTD_SECOND_LEVEL_PAGING_ENTRY *SecondLevelPagingEntry, + IN UINT64 MemoryBase, + IN UINT64 MemoryLimit, + IN UINT64 IoMmuAccess + ) +{ + UINTN Index5; + UINTN Index4; + UINTN Index3; + UINTN Index2; + UINTN Lvl5Start; + UINTN Lvl5End; + UINTN Lvl4PagesStart; + UINTN Lvl4PagesEnd; + UINTN Lvl4Start; + UINTN Lvl4End; + UINTN Lvl3Start; + UINTN Lvl3End; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl5PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl4PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl3PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl2PtEntry; + UINT64 BaseAddress; + UINT64 EndAddress; + BOOLEAN Is5LevelPaging; + + if (MemoryLimit =3D=3D 0) { + return NULL; + } + + Lvl4PagesStart =3D 0; + Lvl4PagesEnd =3D 0; + Lvl4PtEntry =3D NULL; + Lvl5PtEntry =3D NULL; + + BaseAddress =3D ALIGN_VALUE_LOW (MemoryBase, SIZE_2MB); + EndAddress =3D ALIGN_VALUE_UP (MemoryLimit, SIZE_2MB); + DEBUG ((DEBUG_INFO, "CreateSecondLevelPagingEntryTable: BaseAddress - 0x= %016lx, EndAddress - 0x%016lx\n", BaseAddress, EndAddress)); + + if (SecondLevelPagingEntry =3D=3D NULL) { + SecondLevelPagingEntry =3D AllocateZeroPages (1); + if (SecondLevelPagingEntry =3D=3D NULL) { + DEBUG ((DEBUG_ERROR, "Could not Alloc LVL4 or LVL5 PT. \n")); + return NULL; + } + FlushPageTableMemory (VtdUnitInfo, (UINTN) SecondLevelPagingEntry, EFI= _PAGES_TO_SIZE (1)); + } + + DEBUG ((DEBUG_INFO, " SecondLevelPagingEntry:0x%016lx\n", (UINT64) (UINT= N) SecondLevelPagingEntry)); + // + // If no access is needed, just create not present entry. 
+ // + if (IoMmuAccess =3D=3D 0) { + DEBUG ((DEBUG_INFO, " SecondLevelPagingEntry:0x%016lx Access 0\n", (UI= NT64) (UINTN) SecondLevelPagingEntry)); + return SecondLevelPagingEntry; + } + + Is5LevelPaging =3D VtdUnitInfo->Is5LevelPaging; + + if (Is5LevelPaging) { + Lvl5Start =3D RShiftU64 (BaseAddress, 48) & 0x1FF; + Lvl5End =3D RShiftU64 (EndAddress - 1, 48) & 0x1FF; + DEBUG ((DEBUG_INFO, " Lvl5Start - 0x%x, Lvl5End - 0x%x\n", Lvl5Start,= Lvl5End)); + + Lvl4Start =3D RShiftU64 (BaseAddress, 39) & 0x1FF; + Lvl4End =3D RShiftU64 (EndAddress - 1, 39) & 0x1FF; + + Lvl4PagesStart =3D (Lvl5Start<<9) | Lvl4Start; + Lvl4PagesEnd =3D (Lvl5End<<9) | Lvl4End; + DEBUG ((DEBUG_INFO, " Lvl4PagesStart - 0x%x, Lvl4PagesEnd - 0x%x\n", = Lvl4PagesStart, Lvl4PagesEnd)); + + Lvl5PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *) SecondLevelPagingEnt= ry; + } else { + Lvl5Start =3D RShiftU64 (BaseAddress, 48) & 0x1FF; + Lvl5End =3D Lvl5Start; + + Lvl4Start =3D RShiftU64 (BaseAddress, 39) & 0x1FF; + Lvl4End =3D RShiftU64 (EndAddress - 1, 39) & 0x1FF; + DEBUG ((DEBUG_INFO, " Lvl4Start - 0x%x, Lvl4End - 0x%x\n", Lvl4Start,= Lvl4End)); + + Lvl4PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *) SecondLevelPagingEnt= ry; + } + + for (Index5 =3D Lvl5Start; Index5 <=3D Lvl5End; Index5++) { + if (Is5LevelPaging) { + if (Lvl5PtEntry[Index5].Uint64 =3D=3D 0) { + Lvl5PtEntry[Index5].Uint64 =3D (UINT64) (UINTN) AllocateZeroPages = (1); + if (Lvl5PtEntry[Index5].Uint64 =3D=3D 0) { + DEBUG ((DEBUG_ERROR, "!!!!!! 
ALLOCATE LVL4 PAGE FAIL (0x%x)!!!!!!\n", Index5));
+          ASSERT (FALSE);
+          return NULL;
+        }
+        FlushPageTableMemory (VtdUnitInfo, (UINTN) Lvl5PtEntry[Index5].Uint64, SIZE_4KB);
+        SetSecondLevelPagingEntryAttribute (&Lvl5PtEntry[Index5], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+      }
+      Lvl4Start = Lvl4PagesStart & 0x1FF;
+      if (((Index5+1)<<9) > Lvl4PagesEnd) {
+        Lvl4End = SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_ENTRY) - 1;
+        Lvl4PagesStart = (Index5+1)<<9;
+      } else {
+        Lvl4End = Lvl4PagesEnd & 0x1FF;
+      }
+      DEBUG ((DEBUG_INFO, "  Lvl5(0x%x): Lvl4Start - 0x%x, Lvl4End - 0x%x\n", Index5, Lvl4Start, Lvl4End));
+      Lvl4PtEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) (UINTN) VTD_64BITS_ADDRESS(Lvl5PtEntry[Index5].Bits.AddressLo, Lvl5PtEntry[Index5].Bits.AddressHi);
+    }
+
+    for (Index4 = Lvl4Start; Index4 <= Lvl4End; Index4++) {
+      if (Lvl4PtEntry[Index4].Uint64 == 0) {
+        Lvl4PtEntry[Index4].Uint64 = (UINT64) (UINTN) AllocateZeroPages (1);
+        if (Lvl4PtEntry[Index4].Uint64 == 0) {
+          DEBUG ((DEBUG_ERROR, "!!!!!!
ALLOCATE LVL4 PAGE FAIL (0x%x)!!!!!= !\n", Index4)); + ASSERT(FALSE); + return NULL; + } + FlushPageTableMemory (VtdUnitInfo, (UINTN) Lvl4PtEntry[Index4].Uin= t64, SIZE_4KB); + SetSecondLevelPagingEntryAttribute (&Lvl4PtEntry[Index4], EDKII_IO= MMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE); + } + + Lvl3Start =3D RShiftU64 (BaseAddress, 30) & 0x1FF; + if (ALIGN_VALUE_LOW(BaseAddress + SIZE_1GB, SIZE_1GB) <=3D EndAddres= s) { + Lvl3End =3D SIZE_4KB / sizeof (VTD_SECOND_LEVEL_PAGING_ENTRY) - 1; + } else { + Lvl3End =3D RShiftU64 (EndAddress - 1, 30) & 0x1FF; + } + DEBUG ((DEBUG_INFO, " Lvl4(0x%x): Lvl3Start - 0x%x, Lvl3End - 0x%x\= n", Index4, Lvl3Start, Lvl3End)); + + Lvl3PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *) (UINTN) VTD_64BITS= _ADDRESS(Lvl4PtEntry[Index4].Bits.AddressLo, Lvl4PtEntry[Index4].Bits.Addre= ssHi); + for (Index3 =3D Lvl3Start; Index3 <=3D Lvl3End; Index3++) { + if (Lvl3PtEntry[Index3].Uint64 =3D=3D 0) { + Lvl3PtEntry[Index3].Uint64 =3D (UINT64) (UINTN) AllocateZeroPage= s (1); + if (Lvl3PtEntry[Index3].Uint64 =3D=3D 0) { + DEBUG ((DEBUG_ERROR, "!!!!!! 
ALLOCATE LVL3 PAGE FAIL (0x%x, 0x%x)!!!!!!\n", Index4, Index3));
+            ASSERT (FALSE);
+            return NULL;
+          }
+          FlushPageTableMemory (VtdUnitInfo, (UINTN) Lvl3PtEntry[Index3].Uint64, SIZE_4KB);
+          SetSecondLevelPagingEntryAttribute (&Lvl3PtEntry[Index3], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+        }
+
+        Lvl2PtEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) (UINTN) VTD_64BITS_ADDRESS(Lvl3PtEntry[Index3].Bits.AddressLo, Lvl3PtEntry[Index3].Bits.AddressHi);
+        for (Index2 = 0; Index2 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_ENTRY); Index2++) {
+          Lvl2PtEntry[Index2].Uint64 = BaseAddress;
+          SetSecondLevelPagingEntryAttribute (&Lvl2PtEntry[Index2], IoMmuAccess);
+          Lvl2PtEntry[Index2].Bits.PageSize = 1;
+          BaseAddress += SIZE_2MB;
+          if (BaseAddress >= MemoryLimit) {
+            break;
+          }
+        }
+        FlushPageTableMemory (VtdUnitInfo, (UINTN) Lvl2PtEntry, SIZE_4KB);
+        if (BaseAddress >= MemoryLimit) {
+          break;
+        }
+      }
+      FlushPageTableMemory (VtdUnitInfo, (UINTN) &Lvl3PtEntry[Lvl3Start], (UINTN) &Lvl3PtEntry[Lvl3End + 1] - (UINTN) &Lvl3PtEntry[Lvl3Start]);
+      if (BaseAddress >= MemoryLimit) {
+        break;
+      }
+    }
+    FlushPageTableMemory (VtdUnitInfo, (UINTN) &Lvl4PtEntry[Lvl4Start], (UINTN) &Lvl4PtEntry[Lvl4End + 1] - (UINTN) &Lvl4PtEntry[Lvl4Start]);
+  }
+  FlushPageTableMemory (VtdUnitInfo, (UINTN) &Lvl5PtEntry[Lvl5Start], (UINTN) &Lvl5PtEntry[Lvl5End + 1] - (UINTN) &Lvl5PtEntry[Lvl5Start]);
+
+  DEBUG ((DEBUG_INFO, " SecondLevelPagingEntry:0x%016lx\n", (UINT64) (UINTN) SecondLevelPagingEntry));
+  return SecondLevelPagingEntry;
+}
+
+/**
+  Create context entry.
+
+  @param[in] VtdUnitInfo        The VTd engine unit information.
+
+  @retval EFI_SUCCESS           The context entry is created.
+  @retval EFI_OUT_OF_RESOURCES  Not enough resources to create the context entry.
+
+**/
+EFI_STATUS
+CreateContextEntry (
+  IN  VTD_UNIT_INFO  *VtdUnitInfo
+  )
+{
+  UINTN                          RootPages;
+  UINTN                          ContextPages;
+  UINTN                          EntryTablePages;
+  VOID                           *Buffer;
+  UINTN                          RootIndex;
+  UINTN                          ContextIndex;
+  VTD_ROOT_ENTRY                 *RootEntryBase;
+  VTD_ROOT_ENTRY                 *RootEntry;
+  VTD_CONTEXT_ENTRY              *ContextEntryTable;
+  VTD_CONTEXT_ENTRY              *ContextEntry;
+  VTD_SOURCE_ID                  SourceId;
+  VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry;
+  UINT64                         Pt;
+
+  if (VtdUnitInfo->RootEntryTable != 0) {
+    return EFI_SUCCESS;
+  }
+
+  RootPages = EFI_SIZE_TO_PAGES (sizeof (VTD_ROOT_ENTRY) * VTD_ROOT_ENTRY_NUMBER);
+  ContextPages = EFI_SIZE_TO_PAGES (sizeof (VTD_CONTEXT_ENTRY) * VTD_CONTEXT_ENTRY_NUMBER);
+  EntryTablePages = RootPages + ContextPages * (VTD_ROOT_ENTRY_NUMBER);
+  Buffer = AllocateZeroPages (EntryTablePages);
+  if (Buffer == NULL) {
+    DEBUG ((DEBUG_ERROR, "Could not Alloc Root Entry Table.. \n"));
+    return EFI_OUT_OF_RESOURCES;
+  }
+
+  DEBUG ((DEBUG_ERROR, "RootEntryTable address - 0x%x\n", Buffer));
+  VtdUnitInfo->RootEntryTable = (UINTN) Buffer;
+  VtdUnitInfo->RootEntryTablePageSize = EntryTablePages;
+  RootEntryBase = (VTD_ROOT_ENTRY *) Buffer;
+  Buffer = (UINT8 *) Buffer + EFI_PAGES_TO_SIZE (RootPages);
+
+  if (VtdUnitInfo->FixedSecondLevelPagingEntry == 0) {
+    DEBUG ((DEBUG_ERROR, "FixedSecondLevelPagingEntry is empty\n"));
+    ASSERT(FALSE);
+  }
+
+  for (RootIndex = 0; RootIndex < VTD_ROOT_ENTRY_NUMBER; RootIndex++) {
+    SourceId.Index.RootIndex = (UINT8) RootIndex;
+
+    RootEntry = &RootEntryBase[SourceId.Index.RootIndex];
+    RootEntry->Bits.ContextTablePointerLo = (UINT32) RShiftU64 ((UINT64) (UINTN) Buffer, 12);
+    RootEntry->Bits.ContextTablePointerHi = (UINT32) RShiftU64 ((UINT64) (UINTN) Buffer, 32);
+    RootEntry->Bits.Present = 1;
+    Buffer = (UINT8 *)Buffer + EFI_PAGES_TO_SIZE (ContextPages);
+    ContextEntryTable = (VTD_CONTEXT_ENTRY *) (UINTN) VTD_64BITS_ADDRESS (RootEntry->Bits.ContextTablePointerLo, RootEntry->Bits.ContextTablePointerHi);
+
+    for (ContextIndex = 0; ContextIndex < VTD_CONTEXT_ENTRY_NUMBER; ContextIndex++) {
+      SourceId.Index.ContextIndex = (UINT8) ContextIndex;
+      ContextEntry = &ContextEntryTable[SourceId.Index.ContextIndex];
+
+      ContextEntry->Bits.TranslationType = 0;
+      ContextEntry->Bits.FaultProcessingDisable = 0;
+      ContextEntry->Bits.Present = 0;
+
+      ContextEntry->Bits.AddressWidth = VtdUnitInfo->Is5LevelPaging ? 0x3 : 0x2;
+
+      if (VtdUnitInfo->FixedSecondLevelPagingEntry != 0) {
+        SecondLevelPagingEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) VtdUnitInfo->FixedSecondLevelPagingEntry;
+        Pt = (UINT64)RShiftU64 ((UINT64) (UINTN) SecondLevelPagingEntry, 12);
+        ContextEntry->Bits.SecondLevelPageTranslationPointerLo = (UINT32) Pt;
+        ContextEntry->Bits.SecondLevelPageTranslationPointerHi = (UINT32) RShiftU64(Pt, 20);
+        ContextEntry->Bits.DomainIdentifier = ((1 << (UINT8)((UINTN)VtdUnitInfo->CapReg.Bits.ND * 2 + 4)) - 1);
+        ContextEntry->Bits.Present = 1;
+      }
+    }
+  }
+
+  FlushPageTableMemory (VtdUnitInfo, VtdUnitInfo->RootEntryTable, EFI_PAGES_TO_SIZE(EntryTablePages));
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Create extended context entry.
+
+  @param[in]  VtdUnitInfo       The VTd engine unit information.
+
+  @retval EFI_SUCCESS           The extended context entry is created.
+  @retval EFI_OUT_OF_RESOURCES  Not enough resources to create the extended context entry.
+**/
+EFI_STATUS
+CreateExtContextEntry (
+  IN  VTD_UNIT_INFO  *VtdUnitInfo
+  )
+{
+  UINTN                          RootPages;
+  UINTN                          ContextPages;
+  UINTN                          EntryTablePages;
+  VOID                           *Buffer;
+  UINTN                          RootIndex;
+  UINTN                          ContextIndex;
+  VTD_EXT_ROOT_ENTRY             *ExtRootEntryBase;
+  VTD_EXT_ROOT_ENTRY             *ExtRootEntry;
+  VTD_EXT_CONTEXT_ENTRY          *ExtContextEntryTable;
+  VTD_EXT_CONTEXT_ENTRY          *ExtContextEntry;
+  VTD_SOURCE_ID                  SourceId;
+  VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry;
+  UINT64                         Pt;
+
+  if (VtdUnitInfo->ExtRootEntryTable != 0) {
+    return EFI_SUCCESS;
+  }
+
+  RootPages = EFI_SIZE_TO_PAGES (sizeof (VTD_EXT_ROOT_ENTRY) * VTD_ROOT_ENTRY_NUMBER);
+  ContextPages = EFI_SIZE_TO_PAGES (sizeof (VTD_EXT_CONTEXT_ENTRY) * VTD_CONTEXT_ENTRY_NUMBER);
+  EntryTablePages = RootPages + ContextPages * (VTD_ROOT_ENTRY_NUMBER);
+  Buffer = AllocateZeroPages (EntryTablePages);
+  if (Buffer == NULL) {
+    DEBUG ((DEBUG_INFO, "Could not Alloc Root Entry Table !\n"));
+    return EFI_OUT_OF_RESOURCES;
+  }
+
+  DEBUG ((DEBUG_ERROR, "ExtRootEntryTable address - 0x%x\n", Buffer));
+  VtdUnitInfo->ExtRootEntryTable = (UINTN) Buffer;
+  VtdUnitInfo->ExtRootEntryTablePageSize = EntryTablePages;
+  ExtRootEntryBase = (VTD_EXT_ROOT_ENTRY *) Buffer;
+  Buffer = (UINT8 *) Buffer + EFI_PAGES_TO_SIZE (RootPages);
+
+  if (VtdUnitInfo->FixedSecondLevelPagingEntry == 0) {
+    DEBUG ((DEBUG_ERROR, "FixedSecondLevelPagingEntry is empty\n"));
+    ASSERT(FALSE);
+  }
+
+  for (RootIndex = 0; RootIndex < VTD_ROOT_ENTRY_NUMBER; RootIndex++) {
+    SourceId.Index.RootIndex = (UINT8)RootIndex;
+
+    ExtRootEntry = &ExtRootEntryBase[SourceId.Index.RootIndex];
+    ExtRootEntry->Bits.LowerContextTablePointerLo = (UINT32) RShiftU64 ((UINT64) (UINTN) Buffer, 12);
+    ExtRootEntry->Bits.LowerContextTablePointerHi = (UINT32) RShiftU64 ((UINT64) (UINTN) Buffer, 32);
+    ExtRootEntry->Bits.LowerPresent = 1;
+    ExtRootEntry->Bits.UpperContextTablePointerLo = (UINT32) RShiftU64 ((UINT64) (UINTN) Buffer, 12) + 1;
+    ExtRootEntry->Bits.UpperContextTablePointerHi = (UINT32) RShiftU64 (RShiftU64 ((UINT64) (UINTN) Buffer, 12) + 1, 20);
+    ExtRootEntry->Bits.UpperPresent = 1;
+    Buffer = (UINT8 *) Buffer + EFI_PAGES_TO_SIZE (ContextPages);
+    ExtContextEntryTable = (VTD_EXT_CONTEXT_ENTRY *) (UINTN) VTD_64BITS_ADDRESS (ExtRootEntry->Bits.LowerContextTablePointerLo, ExtRootEntry->Bits.LowerContextTablePointerHi);
+
+    for (ContextIndex = 0; ContextIndex < VTD_CONTEXT_ENTRY_NUMBER; ContextIndex++) {
+      SourceId.Index.ContextIndex = (UINT8) ContextIndex;
+      ExtContextEntry = &ExtContextEntryTable[SourceId.Index.ContextIndex];
+
+      ExtContextEntry->Bits.TranslationType = 0;
+      ExtContextEntry->Bits.FaultProcessingDisable = 0;
+      ExtContextEntry->Bits.Present = 0;
+
+      ExtContextEntry->Bits.AddressWidth = VtdUnitInfo->Is5LevelPaging ? 0x3 : 0x2;
+
+      if (VtdUnitInfo->FixedSecondLevelPagingEntry != 0) {
+        SecondLevelPagingEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) VtdUnitInfo->FixedSecondLevelPagingEntry;
+        Pt = (UINT64)RShiftU64 ((UINT64) (UINTN) SecondLevelPagingEntry, 12);
+
+        ExtContextEntry->Bits.SecondLevelPageTranslationPointerLo = (UINT32) Pt;
+        ExtContextEntry->Bits.SecondLevelPageTranslationPointerHi = (UINT32) RShiftU64(Pt, 20);
+        ExtContextEntry->Bits.DomainIdentifier = ((1 << (UINT8) ((UINTN)VtdUnitInfo->CapReg.Bits.ND * 2 + 4)) - 1);
+        ExtContextEntry->Bits.Present = 1;
+      }
+    }
+  }
+
+  FlushPageTableMemory (VtdUnitInfo, VtdUnitInfo->ExtRootEntryTable, EFI_PAGES_TO_SIZE(EntryTablePages));
+
+  return EFI_SUCCESS;
+}
+
+#define VTD_PG_R   BIT0
+#define VTD_PG_W   BIT1
+#define VTD_PG_X   BIT2
+#define VTD_PG_EMT (BIT3 | BIT4 | BIT5)
+#define VTD_PG_TM  (BIT62)
+
+#define VTD_PG_PS  BIT7
+
+#define PAGE_PROGATE_BITS      (VTD_PG_TM | VTD_PG_EMT | VTD_PG_W | VTD_PG_R)
+
+#define PAGING_4K_MASK  0xFFF
+#define PAGING_2M_MASK  0x1FFFFF
+#define PAGING_1G_MASK  0x3FFFFFFF
+
+#define PAGING_VTD_INDEX_MASK  0x1FF
+
+#define PAGING_4K_ADDRESS_MASK_64  0x000FFFFFFFFFF000ull
+#define PAGING_2M_ADDRESS_MASK_64  0x000FFFFFFFE00000ull
+#define PAGING_1G_ADDRESS_MASK_64  0x000FFFFFC0000000ull
+
+typedef enum {
+  PageNone,
+  Page4K,
+  Page2M,
+  Page1G,
+} PAGE_ATTRIBUTE;
+
+typedef struct {
+  PAGE_ATTRIBUTE  Attribute;
+  UINT64          Length;
+  UINT64          AddressMask;
+} PAGE_ATTRIBUTE_TABLE;
+
+PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
+  {Page4K, SIZE_4KB, PAGING_4K_ADDRESS_MASK_64},
+  {Page2M, SIZE_2MB, PAGING_2M_ADDRESS_MASK_64},
+  {Page1G, SIZE_1GB, PAGING_1G_ADDRESS_MASK_64},
+};
+
+/**
+  Return the length according to the page attribute.
+
+  @param[in]  PageAttribute   The page attribute of the page entry.
+
+  @return The length of the page entry.
+**/
+UINTN
+PageAttributeToLength (
+  IN PAGE_ATTRIBUTE  PageAttribute
+  )
+{
+  UINTN  Index;
+  for (Index = 0; Index < sizeof (mPageAttributeTable) / sizeof (mPageAttributeTable[0]); Index++) {
+    if (PageAttribute == mPageAttributeTable[Index].Attribute) {
+      return (UINTN) mPageAttributeTable[Index].Length;
+    }
+  }
+  return 0;
+}
+
+/**
+  Return the page table entry that matches the address.
+
+  @param[in]   VtdUnitInfo              The VTd engine unit information.
+  @param[in]   SecondLevelPagingEntry   The second level paging entry in the VTd table for the device.
+  @param[in]   Address                  The address to be checked.
+  @param[out]  PageAttribute            The page attribute of the page entry.
+
+  @return The page entry.
+**/
+VOID *
+GetSecondLevelPageTableEntry (
+  IN  VTD_UNIT_INFO                  *VtdUnitInfo,
+  IN  VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry,
+  IN  PHYSICAL_ADDRESS               Address,
+  OUT PAGE_ATTRIBUTE                 *PageAttribute
+  )
+{
+  UINTN    Index1;
+  UINTN    Index2;
+  UINTN    Index3;
+  UINTN    Index4;
+  UINTN    Index5;
+  UINT64   *L1PageTable;
+  UINT64   *L2PageTable;
+  UINT64   *L3PageTable;
+  UINT64   *L4PageTable;
+  UINT64   *L5PageTable;
+  BOOLEAN  Is5LevelPaging;
+
+  Index5 = ((UINTN) RShiftU64 (Address, 48)) & PAGING_VTD_INDEX_MASK;
+  Index4 = ((UINTN) RShiftU64 (Address, 39)) & PAGING_VTD_INDEX_MASK;
+  Index3 = ((UINTN) Address >> 30) & PAGING_VTD_INDEX_MASK;
+  Index2 = ((UINTN) Address >> 21) & PAGING_VTD_INDEX_MASK;
+  Index1 = ((UINTN) Address >> 12) & PAGING_VTD_INDEX_MASK;
+
+  Is5LevelPaging = VtdUnitInfo->Is5LevelPaging;
+
+  if (Is5LevelPaging) {
+    L5PageTable = (UINT64 *) SecondLevelPagingEntry;
+    if (L5PageTable[Index5] == 0) {
+      L5PageTable[Index5] = (UINT64) (UINTN) AllocateZeroPages (1);
+      if (L5PageTable[Index5] == 0) {
+        DEBUG ((DEBUG_ERROR, "!!!!!! ALLOCATE LVL5 PAGE FAIL (0x%x)!!!!!!\n", Index4));
+        ASSERT(FALSE);
+        *PageAttribute = PageNone;
+        return NULL;
+      }
+      FlushPageTableMemory (VtdUnitInfo, (UINTN) L5PageTable[Index5], SIZE_4KB);
+      SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *) &L5PageTable[Index5], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+      FlushPageTableMemory (VtdUnitInfo, (UINTN) &L5PageTable[Index5], sizeof(L5PageTable[Index5]));
+    }
+    L4PageTable = (UINT64 *) (UINTN) (L5PageTable[Index5] & PAGING_4K_ADDRESS_MASK_64);
+  } else {
+    L4PageTable = (UINT64 *)SecondLevelPagingEntry;
+  }
+
+  if (L4PageTable[Index4] == 0) {
+    L4PageTable[Index4] = (UINT64) (UINTN) AllocateZeroPages (1);
+    if (L4PageTable[Index4] == 0) {
+      DEBUG ((DEBUG_ERROR, "!!!!!! ALLOCATE LVL4 PAGE FAIL (0x%x)!!!!!!\n", Index4));
+      ASSERT(FALSE);
+      *PageAttribute = PageNone;
+      return NULL;
+    }
+    FlushPageTableMemory (VtdUnitInfo, (UINTN) L4PageTable[Index4], SIZE_4KB);
+    SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *) &L4PageTable[Index4], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+    FlushPageTableMemory (VtdUnitInfo, (UINTN) &L4PageTable[Index4], sizeof(L4PageTable[Index4]));
+  }
+
+  L3PageTable = (UINT64 *) (UINTN) (L4PageTable[Index4] & PAGING_4K_ADDRESS_MASK_64);
+  if (L3PageTable[Index3] == 0) {
+    L3PageTable[Index3] = (UINT64) (UINTN) AllocateZeroPages (1);
+    if (L3PageTable[Index3] == 0) {
+      DEBUG ((DEBUG_ERROR, "!!!!!! ALLOCATE LVL3 PAGE FAIL (0x%x, 0x%x)!!!!!!\n", Index4, Index3));
+      ASSERT(FALSE);
+      *PageAttribute = PageNone;
+      return NULL;
+    }
+    FlushPageTableMemory (VtdUnitInfo, (UINTN) L3PageTable[Index3], SIZE_4KB);
+    SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *) &L3PageTable[Index3], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+    FlushPageTableMemory (VtdUnitInfo, (UINTN) &L3PageTable[Index3], sizeof (L3PageTable[Index3]));
+  }
+  if ((L3PageTable[Index3] & VTD_PG_PS) != 0) {
+    // 1G
+    *PageAttribute = Page1G;
+    return &L3PageTable[Index3];
+  }
+
+  L2PageTable = (UINT64 *) (UINTN) (L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+  if (L2PageTable[Index2] == 0) {
+    L2PageTable[Index2] = Address & PAGING_2M_ADDRESS_MASK_64;
+    SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *) &L2PageTable[Index2], 0);
+    L2PageTable[Index2] |= VTD_PG_PS;
+    FlushPageTableMemory (VtdUnitInfo, (UINTN) &L2PageTable[Index2], sizeof (L2PageTable[Index2]));
+  }
+  if ((L2PageTable[Index2] & VTD_PG_PS) != 0) {
+    // 2M
+    *PageAttribute = Page2M;
+    return &L2PageTable[Index2];
+  }
+
+  // 4k
+  L1PageTable = (UINT64 *) (UINTN) (L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+  if ((L1PageTable[Index1] == 0) && (Address != 0)) {
+    *PageAttribute = PageNone;
+    return NULL;
+  }
+  *PageAttribute = Page4K;
+  return &L1PageTable[Index1];
+}
+
+/**
+  Modify memory attributes of a page entry.
+
+  @param[in]   VtdUnitInfo    The VTd engine unit information.
+  @param[in]   PageEntry      The page entry.
+  @param[in]   IoMmuAccess    The IOMMU access.
+  @param[out]  IsModified     TRUE means the page table was modified. FALSE means the page table was not modified.
+**/
+VOID
+ConvertSecondLevelPageEntryAttribute (
+  IN  VTD_UNIT_INFO                  *VtdUnitInfo,
+  IN  VTD_SECOND_LEVEL_PAGING_ENTRY  *PageEntry,
+  IN  UINT64                         IoMmuAccess,
+  OUT BOOLEAN                        *IsModified
+  )
+{
+  UINT64  CurrentPageEntry;
+  UINT64  NewPageEntry;
+
+  CurrentPageEntry = PageEntry->Uint64;
+  SetSecondLevelPagingEntryAttribute (PageEntry, IoMmuAccess);
+  FlushPageTableMemory (VtdUnitInfo, (UINTN) PageEntry, sizeof(*PageEntry));
+  NewPageEntry = PageEntry->Uint64;
+  if (CurrentPageEntry != NewPageEntry) {
+    *IsModified = TRUE;
+    DEBUG ((DEBUG_VERBOSE, "ConvertSecondLevelPageEntryAttribute 0x%lx", CurrentPageEntry));
+    DEBUG ((DEBUG_VERBOSE, "->0x%lx\n", NewPageEntry));
+  } else {
+    *IsModified = FALSE;
+  }
+}
+
+/**
+  This function returns whether the page entry needs to be split.
+
+  @param[in]  BaseAddress      The base address to be checked.
+  @param[in]  Length           The length to be checked.
+  @param[in]  PageAttribute    The page attribute of the page entry.
+
+  @retval SplitAttribute       The attribute to split the page entry to, or PageNone if no split is needed.
+**/
+PAGE_ATTRIBUTE
+NeedSplitPage (
+  IN PHYSICAL_ADDRESS  BaseAddress,
+  IN UINT64            Length,
+  IN PAGE_ATTRIBUTE    PageAttribute
+  )
+{
+  UINT64  PageEntryLength;
+
+  PageEntryLength = PageAttributeToLength (PageAttribute);
+
+  if (((BaseAddress & (PageEntryLength - 1)) == 0) && (Length >= PageEntryLength)) {
+    return PageNone;
+  }
+
+  if (((BaseAddress & PAGING_2M_MASK) != 0) || (Length < SIZE_2MB)) {
+    return Page4K;
+  }
+
+  return Page2M;
+}
+
+/**
+  This function splits one page entry to smaller page entries.
+
+  @param[in]  VtdUnitInfo      The VTd engine unit information.
+  @param[in]  PageEntry        The page entry to be split.
+  @param[in]  PageAttribute    The page attribute of the page entry.
+  @param[in]  SplitAttribute   How to split the page entry.
+
+  @retval RETURN_SUCCESS           The page entry is split.
+  @retval RETURN_UNSUPPORTED       The page entry does not support being split.
+  @retval RETURN_OUT_OF_RESOURCES  No resource to split the page entry.
+**/
+RETURN_STATUS
+SplitSecondLevelPage (
+  IN  VTD_UNIT_INFO                  *VtdUnitInfo,
+  IN  VTD_SECOND_LEVEL_PAGING_ENTRY  *PageEntry,
+  IN  PAGE_ATTRIBUTE                 PageAttribute,
+  IN  PAGE_ATTRIBUTE                 SplitAttribute
+  )
+{
+  UINT64  BaseAddress;
+  UINT64  *NewPageEntry;
+  UINTN   Index;
+
+  ASSERT (PageAttribute == Page2M || PageAttribute == Page1G);
+
+  if (PageAttribute == Page2M) {
+    //
+    // Split 2M to 4K
+    //
+    ASSERT (SplitAttribute == Page4K);
+    if (SplitAttribute == Page4K) {
+      NewPageEntry = AllocateZeroPages (1);
+      DEBUG ((DEBUG_INFO, "Split - 0x%x\n", NewPageEntry));
+      if (NewPageEntry == NULL) {
+        return RETURN_OUT_OF_RESOURCES;
+      }
+      BaseAddress = PageEntry->Uint64 & PAGING_2M_ADDRESS_MASK_64;
+      for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
+        NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) | (PageEntry->Uint64 & PAGE_PROGATE_BITS);
+      }
+      FlushPageTableMemory (VtdUnitInfo, (UINTN)NewPageEntry, SIZE_4KB);
+
+      PageEntry->Uint64 = (UINT64)(UINTN)NewPageEntry;
+      SetSecondLevelPagingEntryAttribute (PageEntry, EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+      FlushPageTableMemory (VtdUnitInfo, (UINTN)PageEntry, sizeof(*PageEntry));
+      return RETURN_SUCCESS;
+    } else {
+      return RETURN_UNSUPPORTED;
+    }
+  } else if (PageAttribute == Page1G) {
+    //
+    // Split 1G to 2M
+    // No need to support 1G->4K directly; use 1G->2M, then 2M->4K, to get a more compact page table.
+    //
+    ASSERT (SplitAttribute == Page2M || SplitAttribute == Page4K);
+    if ((SplitAttribute == Page2M || SplitAttribute == Page4K)) {
+      NewPageEntry = AllocateZeroPages (1);
+      DEBUG ((DEBUG_INFO, "Split - 0x%x\n", NewPageEntry));
+      if (NewPageEntry == NULL) {
+        return RETURN_OUT_OF_RESOURCES;
+      }
+      BaseAddress = PageEntry->Uint64 & PAGING_1G_ADDRESS_MASK_64;
+      for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
+        NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) | VTD_PG_PS | (PageEntry->Uint64 & PAGE_PROGATE_BITS);
+      }
+      FlushPageTableMemory (VtdUnitInfo, (UINTN)NewPageEntry, SIZE_4KB);
+
+      PageEntry->Uint64 = (UINT64)(UINTN)NewPageEntry;
+      SetSecondLevelPagingEntryAttribute (PageEntry, EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+      FlushPageTableMemory (VtdUnitInfo, (UINTN)PageEntry, sizeof(*PageEntry));
+      return RETURN_SUCCESS;
+    } else {
+      return RETURN_UNSUPPORTED;
+    }
+  } else {
+    return RETURN_UNSUPPORTED;
+  }
+}
+
+/**
+  Set the VTd attribute for a system memory range on a second level page entry.
+
+  @param[in]  VtdUnitInfo             The VTd engine unit information.
+  @param[in]  SecondLevelPagingEntry  The second level paging entry in the VTd table for the device.
+  @param[in]  BaseAddress             The base of the device memory address to be used as the DMA memory.
+  @param[in]  Length                  The length of the device memory address to be used as the DMA memory.
+  @param[in]  IoMmuAccess             The IOMMU access.
+
+  @retval EFI_SUCCESS            The IoMmuAccess is set for the memory range specified by BaseAddress and Length.
+  @retval EFI_INVALID_PARAMETER  BaseAddress is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is not IoMmu Page size aligned.
+  @retval EFI_INVALID_PARAMETER  Length is 0.
+  @retval EFI_INVALID_PARAMETER  IoMmuAccess specified an illegal combination of access.
+  @retval EFI_UNSUPPORTED        The bit mask of IoMmuAccess is not supported by the IOMMU.
+  @retval EFI_UNSUPPORTED        The IOMMU does not support the memory range specified by BaseAddress and Length.
+  @retval EFI_OUT_OF_RESOURCES   There are not enough resources available to modify the IOMMU access.
+  @retval EFI_DEVICE_ERROR       The IOMMU device reported an error while attempting the operation.
+**/
+EFI_STATUS
+SetSecondLevelPagingAttribute (
+  IN  VTD_UNIT_INFO                  *VtdUnitInfo,
+  IN  VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry,
+  IN  UINT64                         BaseAddress,
+  IN  UINT64                         Length,
+  IN  UINT64                         IoMmuAccess
+  )
+{
+  VTD_SECOND_LEVEL_PAGING_ENTRY  *PageEntry;
+  PAGE_ATTRIBUTE                 PageAttribute;
+  UINTN                          PageEntryLength;
+  PAGE_ATTRIBUTE                 SplitAttribute;
+  EFI_STATUS                     Status;
+  BOOLEAN                        IsEntryModified;
+
+  DEBUG ((DEBUG_INFO, "SetSecondLevelPagingAttribute (0x%016lx - 0x%016lx : %x) \n", BaseAddress, Length, IoMmuAccess));
+  DEBUG ((DEBUG_INFO, "  SecondLevelPagingEntry Base - 0x%x\n", SecondLevelPagingEntry));
+
+  if (BaseAddress != ALIGN_VALUE(BaseAddress, SIZE_4KB)) {
+    DEBUG ((DEBUG_ERROR, "SetSecondLevelPagingAttribute - Invalid Alignment\n"));
+    return EFI_UNSUPPORTED;
+  }
+  if (Length != ALIGN_VALUE(Length, SIZE_4KB)) {
+    DEBUG ((DEBUG_ERROR, "SetSecondLevelPagingAttribute - Invalid Alignment\n"));
+    return EFI_UNSUPPORTED;
+  }
+
+  while (Length != 0) {
+    PageEntry = GetSecondLevelPageTableEntry (VtdUnitInfo, SecondLevelPagingEntry, BaseAddress, &PageAttribute);
+    if (PageEntry == NULL) {
+      DEBUG ((DEBUG_ERROR, "PageEntry - NULL\n"));
+      return RETURN_UNSUPPORTED;
+    }
+    PageEntryLength = PageAttributeToLength (PageAttribute);
+    SplitAttribute = NeedSplitPage (BaseAddress, Length, PageAttribute);
+    if (SplitAttribute == PageNone) {
+      ConvertSecondLevelPageEntryAttribute (VtdUnitInfo, PageEntry, IoMmuAccess, &IsEntryModified);
+      //
+      // Convert success, move to next
+      //
+      BaseAddress += PageEntryLength;
+      Length -= PageEntryLength;
+    } else {
+      Status = SplitSecondLevelPage (VtdUnitInfo, PageEntry, PageAttribute, SplitAttribute);
+      if (RETURN_ERROR (Status)) {
+        DEBUG ((DEBUG_ERROR, "SplitSecondLevelPage - %r\n", Status));
+        return RETURN_UNSUPPORTED;
+      }
+      //
+      // Just split the current page; the conversion succeeds in the next round.
+      //
+    }
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Create Fixed Second Level Paging Entry.
+
+  @param[in]  VtdUnitInfo   The VTd engine unit information.
+
+  @retval EFI_SUCCESS           Setup translation table successfully.
+  @retval EFI_OUT_OF_RESOURCES  Setup translation table fail.
+
+**/
+EFI_STATUS
+CreateFixedSecondLevelPagingEntry (
+  IN  VTD_UNIT_INFO  *VtdUnitInfo
+  )
+{
+  EFI_STATUS       Status;
+  UINT64           IoMmuAccess;
+  UINT64           BaseAddress;
+  UINT64           Length;
+  VOID             *Hob;
+  DMA_BUFFER_INFO  *DmaBufferInfo;
+
+  if (VtdUnitInfo->FixedSecondLevelPagingEntry != 0) {
+    return EFI_SUCCESS;
+  }
+
+  VtdUnitInfo->FixedSecondLevelPagingEntry = (UINTN) CreateSecondLevelPagingEntryTable (VtdUnitInfo, NULL, 0, SIZE_4GB, 0);
+  if (VtdUnitInfo->FixedSecondLevelPagingEntry == 0) {
+    DEBUG ((DEBUG_ERROR, "FixedSecondLevelPagingEntry is empty\n"));
+    return EFI_OUT_OF_RESOURCES;
+  }
+
+  Hob = GetFirstGuidHob (&mDmaBufferInfoGuid);
+  DmaBufferInfo = GET_GUID_HOB_DATA (Hob);
+  BaseAddress = DmaBufferInfo->DmaBufferBase;
+  Length = DmaBufferInfo->DmaBufferSize;
+  IoMmuAccess = EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE;
+
+  DEBUG ((DEBUG_INFO, "  BaseAddress = 0x%lx\n", BaseAddress));
+  DEBUG ((DEBUG_INFO, "  Length = 0x%lx\n", Length));
+  DEBUG ((DEBUG_INFO, "  IoMmuAccess = 0x%lx\n", IoMmuAccess));
+
+  Status = SetSecondLevelPagingAttribute (VtdUnitInfo, (VTD_SECOND_LEVEL_PAGING_ENTRY*) VtdUnitInfo->FixedSecondLevelPagingEntry, BaseAddress, Length, IoMmuAccess);
+
+  return Status;
+}
+
+/**
+  Setup VTd translation table.
+
+  @param[in]  VTdInfo   The VTd engine context information.
+
+  @retval EFI_SUCCESS           Setup translation table successfully.
+  @retval EFI_OUT_OF_RESOURCES  Setup translation table fail.
+
+**/
+EFI_STATUS
+SetupTranslationTable (
+  IN  VTD_INFO  *VTdInfo
+  )
+{
+  EFI_STATUS     Status;
+  UINTN          Index;
+  VTD_UNIT_INFO  *VtdUnitInfo;
+
+  for (Index = 0; Index < VTdInfo->VTdEngineCount; Index++) {
+    VtdUnitInfo = &VTdInfo->VtdUnitInfo[Index];
+    if (VtdUnitInfo->Done) {
+      continue;
+    }
+
+    Status = CreateFixedSecondLevelPagingEntry (VtdUnitInfo);
+    if (EFI_ERROR (Status)) {
+      DEBUG ((DEBUG_INFO, "CreateFixedSecondLevelPagingEntry failed - %r\n", Status));
+      return Status;
+    }
+
+    if (VtdUnitInfo->ECapReg.Bits.SMTS) {
+      if (VtdUnitInfo->ECapReg.Bits.DEP_24) {
+        DEBUG ((DEBUG_ERROR,"ECapReg.bit24 is not zero\n"));
+        ASSERT(FALSE);
+        Status = EFI_UNSUPPORTED;
+      } else {
+        Status = CreateContextEntry (VtdUnitInfo);
+      }
+    } else {
+      if (VtdUnitInfo->ECapReg.Bits.DEP_24) {
+        //
+        // For compatibility with previous VTd engines:
+        // this was the ECS (Extended Context Support) bit.
+        //
+        Status = CreateExtContextEntry (VtdUnitInfo);
+      } else {
+        Status = CreateContextEntry (VtdUnitInfo);
+      }
+    }
+
+    if (EFI_ERROR (Status)) {
+      return Status;
+    }
+  }
+  return EFI_SUCCESS;
+}
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Include/Guid/VtdLogDataHob.h b/Silicon/Intel/IntelSiliconPkg/Include/Guid/VtdLogDataHob.h
new file mode 100644
index 000000000..7863a257a
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Include/Guid/VtdLogDataHob.h
@@ -0,0 +1,151 @@
+/** @file
+  The definition for VTD Log Data Hob.
+
+  Copyright (c) 2023, Intel Corporation. All rights reserved.
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+
+#ifndef _VTD_LOG_DATA_HOB_H_
+#define _VTD_LOG_DATA_HOB_H_
+
+#include
+
+#define VTDLOG_LOG_TYPE(_id_)  ((UINT64) 1 << (_id_))
+
+typedef enum {
+  VTDLOG_PEI_BASIC                        = 0,   // Start ID for PEI basic log
+  VTDLOG_PEI_PRE_MEM_DMA_PROTECT          = 1,   // PRE-MEM phase
+  VTDLOG_PEI_PMR_LOW_MEMORY_RANGE         = 2,
+  VTDLOG_PEI_PMR_HIGH_MEMORY_RANGE        = 3,
+  VTDLOG_PEI_PROTECT_MEMORY_RANGE         = 4,
+  VTDLOG_PEI_POST_MEM_ENABLE_DMA_PROTECT  = 5,
+  VTDLOG_PEI_POST_MEM_DISABLE_DMA_PROTECT = 6,
+  VTDLOG_PEI_QUEUED_INVALIDATION          = 7,
+  VTDLOG_PEI_REGISTER                     = 8,
+  VTDLOG_PEI_VTD_ERROR                    = 9,
+
+  VTDLOG_PEI_ADVANCED                     = 16,  // Start ID for PEI advanced log
+  VTDLOG_PEI_PPI_ALLOC_BUFFER             = 17,
+  VTDLOG_PEI_PPI_MAP                      = 18,
+
+  VTDLOG_DXE_BASIC                        = 24,  // Start ID for DXE basic log
+  VTDLOG_DXE_DMAR_TABLE                   = 25,
+  VTDLOG_DXE_SETUP_VTD                    = 26,
+  VTDLOG_DXE_PCI_DEVICE                   = 27,
+  VTDLOG_DXE_REGISTER                     = 28,
+  VTDLOG_DXE_ENABLE_DMAR                  = 29,
+  VTDLOG_DXE_DISABLE_DMAR                 = 30,
+  VTDLOG_DXE_DISABLE_PMR                  = 31,
+  VTDLOG_DXE_INSTALL_IOMMU_PROTOCOL       = 32,
+  VTDLOG_DXE_QUEUED_INVALIDATION          = 33,
+
+  VTDLOG_DXE_ADVANCED                     = 44,  // Start ID for DXE advanced log
+  VTDLOG_DXE_IOMMU_ALLOC_BUFFER           = 45,
+  VTDLOG_DXE_IOMMU_FREE_BUFFER            = 46,
+  VTDLOG_DXE_IOMMU_MAP                    = 47,
+  VTDLOG_DXE_IOMMU_UNMAP                  = 48,
+  VTDLOG_DXE_IOMMU_SET_ATTRIBUTE          = 49,
+  VTDLOG_DXE_ROOT_TABLE                   = 50,
+} VTDLOG_EVENT_TYPE;
+
+#define VTD_LOG_PEI_PRE_MEM_BAR_MAX  64
+
+//
+// Code of VTDLOG_PEI_BASIC / VTDLOG_DXE_BASIC
+//
+#define VTD_LOG_ERROR_BUFFER_FULL  (1<<0)
+
+//
+// Code of VTDLOG_PEI_PRE_MEM_DMA_PROTECT_MODE
+//
+#define VTD_LOG_PEI_PRE_MEM_NOT_USED  0
+#define VTD_LOG_PEI_PRE_MEM_DISABLE   1
+#define VTD_LOG_PEI_PRE_MEM_ADM       2
+#define VTD_LOG_PEI_PRE_MEM_TE        3
+#define VTD_LOG_PEI_PRE_MEM_PMR       4
+
+//
+// Code of VTDLOG_PEI_QUEUED_INVALIDATION
+//
+#define VTD_LOG_QI_DISABLE                 0
+#define VTD_LOG_QI_ENABLE                  1
+#define VTD_LOG_QI_ERROR_OUT_OF_RESOURCES  2
+
+//
+// Code of VTDLOG_PEI_VTD_ERROR
+//
+#define VTD_LOG_PEI_VTD_ERROR_PPI_ALLOC  1
+#define VTD_LOG_PEI_VTD_ERROR_PPI_MAP    2
+
+// Code of VTDLOG_PEI_REGISTER / VTDLOG_DXE_REGISTER
+#define VTDLOG_REGISTER_ALL   0
+#define VTDLOG_REGISTER_THIN  1
+#define VTDLOG_REGISTER_QI    2
+
+#pragma pack(1)
+
+//
+// Item head
+//
+typedef struct {
+  UINT32  DataSize;
+  UINT64  LogType;
+  UINT64  Timestamp;
+} VTDLOG_EVENT_HEADER;
+
+//
+// Struct for type = VTDLOG_PEI_REGISTER
+//                   VTDLOG_DXE_REGISTER
+//                   VTDLOG_DXE_DMAR_TABLE
+//                   VTDLOG_DXE_IOMMU_SET_ATTRIBUTE
+//                   VTDLOG_DXE_PCI_DEVICE
+//                   VTDLOG_DXE_ROOT_TABLE
+//
+typedef struct {
+  VTDLOG_EVENT_HEADER  Header;
+  UINT64               Param;
+  UINT8                Data[1];
+} VTDLOG_EVENT_CONTEXT;
+
+//
+// Struct for the rest of the types
+//
+typedef struct {
+  VTDLOG_EVENT_HEADER  Header;
+  UINT64               Data1;
+  UINT64               Data2;
+} VTDLOG_EVENT_2PARAM;
+
+//
+// Struct for VTd log event
+//
+typedef union {
+  VTDLOG_EVENT_HEADER   EventHeader;
+  VTDLOG_EVENT_2PARAM   CommenEvent;
+  VTDLOG_EVENT_CONTEXT  ContextEvent;
+} VTDLOG_EVENT;
+
+//
+// Information for PEI pre-memory phase
+//
+typedef struct {
+  UINT8   Mode;
+  UINT8   Status;
+  UINT32  BarAddress;
+} VTDLOG_PEI_PRE_MEM_INFO;
+
+//
+// Buffer struct for PEI phase
+//
+typedef struct {
+  UINT8                    VtdLogPeiError;
+  VTDLOG_PEI_PRE_MEM_INFO  PreMemInfo[VTD_LOG_PEI_PRE_MEM_BAR_MAX];
+  UINT32                   PostMemBufferUsed;
+  UINT64                   PostMemBuffer;
+} VTDLOG_PEI_BUFFER_HOB;
+
+#pragma pack()
+
+#endif // _VTD_LOG_DATA_HOB_H_
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Include/Library/IntelVTdPeiDxeLib.h b/Silicon/Intel/IntelSiliconPkg/Include/Library/IntelVTdPeiDxeLib.h
new file mode 100644
index 000000000..e39619a71
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Include/Library/IntelVTdPeiDxeLib.h
@@ -0,0 +1,423 @@
+/** @file
+  Intel VTd library definitions.
+
+  Copyright (c) 2023 Intel Corporation. All rights reserved.
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+#ifndef _INTEL_VTD_PEI_DXE_LIB_H_
+#define _INTEL_VTD_PEI_DXE_LIB_H_
+
+//
+// Include files
+//
+#include
+#include
+#include
+#include
+
+#if defined (EXT_CALLBACK)
+  #define _VTDLIB_DEBUG(PrintLevel, ...)                                         \
+    do {                                                                         \
+      VtdLogEventCallback (Context, CallbackHandle, PrintLevel, ##__VA_ARGS__);  \
+    } while (FALSE)
+  #define VTDLIB_DEBUG(Expression)  _VTDLIB_DEBUG Expression
+#else
+  #define VTDLIB_DEBUG(Expression)  DEBUG(Expression)
+#endif
+
+#pragma pack(1)
+
+typedef struct {
+  UINT8                             DeviceType;
+  VTD_SOURCE_ID                     PciSourceId;
+  EDKII_PLATFORM_VTD_PCI_DEVICE_ID  PciDeviceId;
+  // for statistic analysis
+  UINT64                            AccessCount;
+} PCI_DEVICE_DATA;
+
+typedef struct {
+  BOOLEAN          IncludeAllFlag;
+  UINT16           Segment;
+  UINT32           PciDeviceDataMaxNumber;
+  UINT32           PciDeviceDataNumber;
+  PCI_DEVICE_DATA  PciDeviceData[1];
+} PCI_DEVICE_INFORMATION;
+
+typedef struct {
+  UINT64  Uint64Lo;
+  UINT64  Uint64Hi;
+} VTD_UINT128;
+
+typedef struct {
+  UINT64       BaseAddress;
+  UINT32       VerReg;
+  UINT64       CapReg;
+  UINT64       EcapReg;
+  UINT32       GstsReg;
+  UINT64       RtaddrReg;
+  UINT64       CcmdReg;
+  UINT32       FstsReg;
+  UINT32       FectlReg;
+  UINT32       FedataReg;
+  UINT32       FeaddrReg;
+  UINT32       FeuaddrReg;
+  UINT64       IqercdReg;
+  UINT64       IvaReg;
+  UINT64       IotlbReg;
+  UINT16       FrcdRegNum;  // Number of FRCD Registers
+  VTD_UINT128  FrcdReg[1];
+} VTD_REGESTER_INFO;
+
+typedef struct {
+  UINT64  BaseAddress;
+  UINT32  FstsReg;
+  UINT64  IqercdReg;
+} VTD_REGESTER_QI_INFO;
+
+typedef struct {
+  UINT64       BaseAddress;
+  UINT32       GstsReg;
+  UINT64       RtaddrReg;
+  UINT32       FstsReg;
+  UINT32       FectlReg;
+  UINT64       IqercdReg;
+  UINT16       FrcdRegNum;  // Number of FRCD Registers
+  VTD_UINT128  FrcdReg[1];
+} VTD_REGESTER_THIN_INFO;
+
+typedef struct {
+  VTD_SOURCE_ID         SourceId;
+  EFI_PHYSICAL_ADDRESS  DeviceAddress;
+  UINT64                Length;
+  UINT64                IoMmuAccess;
+  EFI_STATUS            Status;
+} VTD_PROTOCOL_SET_ATTRIBUTE;
+
+typedef struct {
+  UINT64   BaseAddress;
+  UINT64   TableAddress;
+  BOOLEAN  Is5LevelPaging;
+} VTD_ROOT_TABLE_INFO;
+
+#pragma pack()
+
+/**
+  @brief This callback function is to handle the VTd log strings.
+
+  [Consumption]
+    Dump VTd log
+
+  @param[in]  Context      Context
+  @param[in]  ErrorLevel   The error level of the debug message.
+  @param[in]  Buffer       Event string
+**/
+typedef
+VOID
+(EFIAPI *EDKII_VTD_LIB_STRING_CB) (
+  IN  VOID   *Context,
+  IN  UINTN  ErrorLevel,
+  IN  CHAR8  *Buffer
+  );
+
+/**
+  @brief This function is to dump the DMAR ACPI table.
+
+  [Consumption]
+    Dump VTd log
+
+  @param[in]      Context          Event Context
+  @param[in out]  CallbackHandle   Callback Handler
+  @param[in]      Dmar             DMAR ACPI table
+**/
+VOID
+VtdLibDumpAcpiDmar (
+  IN     VOID                     *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB  CallbackHandle,
+  IN     EFI_ACPI_DMAR_HEADER     *Dmar
+  );
+
+/**
+  @brief This function is to dump the DRHD DMAR ACPI table.
+
+  [Consumption]
+    Dump VTd log
+
+  @param[in]      Context          Event Context
+  @param[in out]  CallbackHandle   Callback Handler
+  @param[in]      Dmar             DMAR ACPI table
+**/
+VOID
+VtdLibDumpAcpiDmarDrhd (
+  IN     VOID                     *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB  CallbackHandle,
+  IN     EFI_ACPI_DMAR_HEADER     *Dmar
+  );
+
+/**
+  @brief This function is to dump the PCI device information of the VTd engine.
+
+  [Consumption]
+    Dump VTd log
+
+  @param[in]      Context          Event Context
+  @param[in out]  CallbackHandle   Callback Handler
+  @param[in]      PciDeviceInfo    PCI device information
+**/
+VOID
+VtdLibDumpPciDeviceInfo (
+  IN     VOID                     *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB  CallbackHandle,
+  IN     PCI_DEVICE_INFORMATION   *PciDeviceInfo
+  );
+
+/**
+  @brief This function is to dump the DMAR context entry table.
+
+  [Consumption]
+    Dump VTd log
+
+  @param[in]      Context          Event context
+  @param[in out]  CallbackHandle   Callback handler
+  @param[in]      RootEntry        DMAR root entry.
+  @param[in]      Is5LevelPaging   If it is the 5 level paging.
+**/
+VOID
+VtdLibDumpDmarContextEntryTable (
+  IN     VOID                     *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB  CallbackHandle,
+  IN     VTD_ROOT_ENTRY           *RootEntry,
+  IN     BOOLEAN                  Is5LevelPaging
+  );
+
+/**
+  @brief This function is to dump the DMAR extended context entry table.
+
+  [Consumption]
+    Dump VTd log
+
+  @param[in]      Context          Event context
+  @param[in out]  CallbackHandle   Callback handler
+  @param[in]      ExtRootEntry     DMAR extended root entry.
+  @param[in]      Is5LevelPaging   If it is the 5 level paging.
+**/
+VOID
+VtdLibDumpDmarExtContextEntryTable (
+  IN     VOID                     *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB  CallbackHandle,
+  IN     VTD_EXT_ROOT_ENTRY       *ExtRootEntry,
+  IN     BOOLEAN                  Is5LevelPaging
+  );
+
+/**
+  @brief This function is to dump VTd registers.
+
+  [Consumption]
+    Dump VTd log
+
+  @param[in]      Context          Event context
+  @param[in out]  CallbackHandle   Callback handler
+  @param[in]      VtdRegInfo       Registers Information
+**/
+VOID
+VtdLibDumpVtdRegsAll (
+  IN     VOID                     *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB  CallbackHandle,
+  IN     VTD_REGESTER_INFO        *VtdRegInfo
+  );
+
+/**
+  @brief This function is to dump VTd registers.
+
+  [Consumption]
+    Dump VTd log
+
+  @param[in]      Context          Event context
+  @param[in out]  CallbackHandle   Callback handler
+  @param[in]      VtdRegInfo       Registers Information
+**/
+VOID
+VtdLibDumpVtdRegsThin (
+  IN     VOID                     *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB  CallbackHandle,
+  IN     VTD_REGESTER_THIN_INFO   *VtdRegInfo
+  );
+
+/**
+  @brief This function is to decode a log event context.
+
+  [Consumption]
+    Dump VTd log
+
+  @param[in]      Context          Event context
+  @param[in out]  CallbackHandle   Callback handler
+  @param[in]      Event            Event struct
+
+  @retval TRUE    Decode event success
+  @retval FALSE   Unknown event
+**/
+BOOLEAN
+VtdLibDecodeEvent (
+  IN     VOID                     *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB  CallbackHandle,
+  IN     VTDLOG_EVENT             *Event
+  );
+
+/**
+  @brief Pre-boot DMA protection End Process
+
+            +----------------------+
+            |  OnExitBootServices  |
+            +----------------------+
+                       ||
+                       \/
+  +-------------------------------------------------+
+  | Flush Write Buffer                              |
+  |   VtdLibFlushWriteBuffer ()                     |
+  +-------------------------------------------------+
+                       ||
+                       \/
+  +-------------------------------------------------+
+  | Invalidate Context Cache                        |
+  |   VtdLibSubmitQueuedInvalidationDescriptor ()   |
+  +-------------------------------------------------+
+                       ||
+                       \/
+  +-------------------------------------------------+
+  | Invalidate IOTLB                                |
+  |   VtdLibSubmitQueuedInvalidationDescriptor ()   |
+  +-------------------------------------------------+
+                       ||
+                       \/
+  +-------------------------------------------------+
+  | Disable DMAR                                    |
+  |   VtdLibDisableDmar ()                          |
+  +-------------------------------------------------+
+                       ||
+                       \/
+  +-------------------------------------------------+
+  | Disable Queued Invalidation interface           |
+  |   VtdLibDisableQueuedInvalidationInterface ()   |
+  +-------------------------------------------------+
+
+**/
+
+/**
+  @brief This function is to flush the VTd engine write buffer.
+
+  [Consumption]
+    Operate VTd engine
+
+  @param[in]  VtdUnitBaseAddress   The base address of the VTd engine.
+**/
+VOID
+VtdLibFlushWriteBuffer (
+  IN UINTN  VtdUnitBaseAddress
+  );
+
+/**
+  @brief This function is to clear Global Command Register bits.
+
+  [Consumption]
+    Operate VTd engine
+
+  @param[in]  VtdUnitBaseAddress   The base address of the VTd engine.
+  @param[in]  BitMask              Bit mask.
+**/
+VOID
+VtdLibClearGlobalCommandRegisterBits (
+  IN UINTN   VtdUnitBaseAddress,
+  IN UINT32  BitMask
+  );
+
+/**
+  @brief This function is to set VTd Global Command Register bits.
+
+  [Consumption]
+    Operate VTd engine
+
+  @param[in]  VtdUnitBaseAddress   The base address of the VTd engine.
+  @param[in]  BitMask              Bit mask.
+**/
+VOID
+VtdLibSetGlobalCommandRegisterBits (
+  IN UINTN   VtdUnitBaseAddress,
+  IN UINT32  BitMask
+  );
+
+/**
+  @brief This function is to disable DMAR.
+
+  [Consumption]
+    Operate VTd engine
+
+  @param[in]  VtdUnitBaseAddress   The base address of the VTd engine.
+
+  @retval EFI_SUCCESS   DMAR translation is disabled.
+**/
+EFI_STATUS
+VtdLibDisableDmar (
+  IN UINTN  VtdUnitBaseAddress
+  );
+
+/**
+  @brief This function is to disable PMR.
+
+  [Consumption]
+    Operate VTd engine
+
+  @param[in]  VtdUnitBaseAddress   The base address of the VTd engine.
+
+  @retval EFI_SUCCESS      PMR is disabled.
+  @retval EFI_UNSUPPORTED  PMR is not supported.
+  @retval EFI_NOT_STARTED  PMR was not enabled.
+**/
+EFI_STATUS
+VtdLibDisablePmr (
+  IN UINTN  VtdUnitBaseAddress
+  );
+
+/**
+  @brief This function is to disable the queued invalidation interface.
+
+  [Introduction]
+    Disable queued invalidation interface.
+
+  [Consumption]
+    Operate VTd engine
+
+  @param[in]  VtdUnitBaseAddress   The base address of the VTd engine.
+**/
+VOID
+VtdLibDisableQueuedInvalidationInterface (
+  IN UINTN  VtdUnitBaseAddress
+  );
+
+/**
+  @brief This function is to submit a queued invalidation descriptor.
+
+  [Introduction]
+    Submit the queued invalidation descriptor to the remapping
+    hardware unit and wait for its completion.
+
+  [Consumption]
+    Operate VTd engine
+
+  @param[in]  VtdUnitBaseAddress   The base address of the VTd engine.
+  @param[in]  Desc                 The invalidate descriptor
+  @param[in]  ClearFaultBits       TRUE  - This API will clear the queued invalidation fault bits if any.
+                                   FALSE - The caller needs to check and clear the queued invalidation fault bits.
+ + @retval EFI_SUCCESS The operation was successful. + @retval RETURN_DEVICE_ERROR A fault is detected. + @retval EFI_INVALID_PARAMETER Parameter is invalid. + @retval EFI_DEVICE_ERROR Detect fault, need to clear fault bits= if ClearFaultBits is FALSE +**/ +EFI_STATUS +VtdLibSubmitQueuedInvalidationDescriptor ( + IN UINTN VtdUnitBaseAddress, + IN VOID *Desc, + IN BOOLEAN ClearFaultBits + ); + +#endif diff --git a/Silicon/Intel/IntelSiliconPkg/Include/Protocol/VtdLog.h b/Sili= con/Intel/IntelSiliconPkg/Include/Protocol/VtdLog.h new file mode 100644 index 000000000..7c2894e81 --- /dev/null +++ b/Silicon/Intel/IntelSiliconPkg/Include/Protocol/VtdLog.h @@ -0,0 +1,59 @@ +/** @file + The definition for VTD Log. + + Copyright (c) 2023, Intel Corporation. All rights reserved.
+ SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ + +#ifndef __VTD_LOG_PROTOCOL_H__ +#define __VTD_LOG_PROTOCOL_H__ + +#include + +#define EDKII_VTD_LOG_PROTOCOL_GUID \ + { \ + 0x1e271819, 0xa3ca, 0x481f, { 0xbd, 0xff, 0x92, 0x78, 0x2f, 0x9a, 0x= 99, 0x3c } \ + } + +typedef struct _EDKII_VTD_LOG_PROTOCOL EDKII_VTD_LOG_PROTOCOL; + +#define EDKII_VTD_LOG_PROTOCOL_REVISION 0x00010000 + +/** + Callback function of each VTd log event. + @param[in] Context Event context + @param[in] Header Event header + + @retval UINT32 Number of events +**/ +typedef +VOID +(EFIAPI *EDKII_VTD_LOG_HANDLE_EVENT) ( + IN VOID *Context, + IN VTDLOG_EVENT_HEADER *Header + ); + +/** + Get the VTd log events. + @param[in] Context Event context + @param[in out] CallbackHandle Callback function for each VTd log eve= nt + + @retval UINT32 Number of events +**/ +typedef +UINT64 +(EFIAPI *EDKII_VTD_LOG_GET_EVENTS) ( + IN VOID *Context, + IN OUT EDKII_VTD_LOG_HANDLE_EVENT CallbackHandle + ); + +struct _EDKII_VTD_LOG_PROTOCOL { + UINT64 Revision; + EDKII_VTD_LOG_GET_EVENTS GetEvents; +}; + +extern EFI_GUID gEdkiiVTdLogProtocolGuid; + +#endif + diff --git a/Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dec b/Silicon/In= tel/IntelSiliconPkg/IntelSiliconPkg.dec index cad22acda..ec8690a8d 100644 --- a/Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dec +++ b/Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dec @@ -73,6 +73,9 @@ ## HOB GUID to get memory information after MRC is done. The hob data wi= ll be used to set the PMR ranges gVtdPmrInfoDataHobGuid =3D {0x6fb61645, 0xf168, 0x46be, { 0x80, 0xec, 0x= b5, 0x02, 0x38, 0x5e, 0xe7, 0xe7 } } =20 + ## HOB GUID to get VTd log data. 
+ gVTdLogBufferHobGuid =3D {0xc8049121, 0xdf91, 0x4dfd, { 0xad, 0xcb, 0x1c= , 0x55, 0x85, 0x09, 0x6d, 0x3b } } + ## Include/Guid/MicrocodeShadowInfoHob.h gEdkiiMicrocodeShadowInfoHobGuid =3D { 0x658903f9, 0xda66, 0x460d, { 0x8= b, 0xb0, 0x9d, 0x2d, 0xdf, 0x65, 0x44, 0x59 } } =20 @@ -119,6 +122,8 @@ gPchSmmSpi2ProtocolGuid =3D { 0x2d1c0c43, 0x20d3, 0x40ae, { 0x99, 0x07, = 0x2d, 0xf0, 0xe7, 0x91, 0x21, 0xa5 } } =20 gEdkiiPlatformVTdPolicyProtocolGuid =3D { 0x3d17e448, 0x466, 0x4e20, { 0= x99, 0x9f, 0xb2, 0xe1, 0x34, 0x88, 0xee, 0x22 }} + gEdkiiVTdLogProtocolGuid =3D { 0x1e271819, 0xa3ca, 0x481f, { 0xbd, 0xff,= 0x92, 0x78, 0x2f, 0x9a, 0x99, 0x3c }} + gIntelDieInfoProtocolGuid =3D { 0xAED8A0A1, 0xFDE6, 0x4CF2, { 0xA3, 0x85= , 0x08, 0xF1, 0x25, 0xF2, 0x40, 0x37 }} =20 ## Protocol for device security policy. @@ -207,3 +212,19 @@ # non-zero: The size of an additional NVS region following the Regular = variable region.
# @Prompt Additional NVS Region Size. gIntelSiliconPkgTokenSpaceGuid.PcdFlashNvStorageAdditionalSize|0x0000000= 0|UINT32|0x0000000F + + ## Declares VTd LOG Output Level.
+ # 0 : Disable VTd Log + # 1 : Enable Basic Log + # 2 : Enable All Log + # @Prompt The VTd Log Output Level. + gIntelSiliconPkgTokenSpaceGuid.PcdVTdLogLevel|0x02|UINT8|0x00000017 + + ## Declares VTd PEI POST-MEM LOG buffer size.
+ # @Prompt The VTd PEI Post-Mem Log buffer size. 8k + gIntelSiliconPkgTokenSpaceGuid.PcdVTdPeiPostMemLogBufferSize|0x00002000|= UINT32|0x00000019 + + ## Declares VTd DXE LOG buffer size.
+ # @Prompt The VTd DXE Log buffer size. 4M + gIntelSiliconPkgTokenSpaceGuid.PcdVTdDxeLogBufferSize|0x00400000|UINT32|= 0x0000001A + diff --git a/Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dsc b/Silicon/In= tel/IntelSiliconPkg/IntelSiliconPkg.dsc index 170eb480a..c8ff40b38 100644 --- a/Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dsc +++ b/Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dsc @@ -45,6 +45,7 @@ UefiBootServicesTableLib|MdePkg/Library/UefiBootServicesTableLib/UefiBoo= tServicesTableLib.inf UefiDriverEntryPoint|MdePkg/Library/UefiDriverEntryPoint/UefiDriverEntry= Point.inf VariableFlashInfoLib|MdeModulePkg/Library/BaseVariableFlashInfoLib/BaseV= ariableFlashInfoLib.inf + IntelVTdPeiDxeLib|IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiD= xeLib.inf =20 [LibraryClasses.common.PEIM] PeimEntryPoint|MdePkg/Library/PeimEntryPoint/PeimEntryPoint.inf diff --git a/Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelV= TdPeiDxeLib.c b/Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/Int= elVTdPeiDxeLib.c new file mode 100644 index 000000000..1e65115cb --- /dev/null +++ b/Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiDx= eLib.c @@ -0,0 +1,1812 @@ +/** @file + Source code file for Intel VTd PEI DXE library. + +Copyright (c) 2023, Intel Corporation. All rights reserved.
+SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ + +#include +#include +#include +#include +#include +#include +#include + +// +// Define the maximum message length that this library supports +// +#define MAX_STRING_LENGTH (0x100) + +#define VTD_64BITS_ADDRESS(Lo, Hi) (LShiftU64 (Lo, 12) | LShiftU64 (Hi, 32= )) + +/** + Produces a Null-terminated ASCII string in an output buffer based on a N= ull-terminated + ASCII format string and variable argument list. + =20 + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] ErrorLevel The error level of the debug message. + @param[in] FormatString A Null-terminated ASCII format string. + @param[in] ... Variable argument list whose contents a= re accessed based on the format string specified by FormatString. + + @return The number of ASCII characters in the produced output buffer not= including the + Null-terminator. +**/ +UINTN +EFIAPI +VtdLogEventCallback ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN UINTN ErrorLevel, + IN CONST CHAR8 *FormatString, + ... + ) +{ + CHAR8 Buffer[MAX_STRING_LENGTH]; + VA_LIST Marker; + UINTN NumberOfPrinted; + + if ((CallbackHandle =3D=3D NULL) || (FormatString =3D=3D NULL)) { + return 0; + } + + VA_START (Marker, FormatString); + NumberOfPrinted =3D AsciiVSPrint (Buffer, sizeof (Buffer), FormatString,= Marker); + VA_END (Marker); + + if (NumberOfPrinted > 0) { + CallbackHandle (Context, ErrorLevel, Buffer); + } + + return NumberOfPrinted; +} + +/** + Dump DMAR DeviceScopeEntry. 
+ + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] DmarDeviceScopeEntry DMAR DeviceScopeEntry +**/ +VOID +VtdLibDumpDmarDeviceScopeEntry ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDeviceScopeEntry + ) +{ + UINTN PciPathNumber; + UINTN PciPathIndex; + EFI_ACPI_DMAR_PCI_PATH *PciPath; + + if (DmarDeviceScopeEntry =3D=3D NULL) { + return; + } + + VTDLIB_DEBUG ((DEBUG_INFO, + " *****************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " * DMA-Remapping Device Scope Entry Structure = *\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *****************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO, + (sizeof(UINTN) =3D=3D sizeof(UINT64)) ? + " DMAR Device Scope Entry address ...................... 0x%016lx\n= " : + " DMAR Device Scope Entry address ...................... 0x%08x\n", + DmarDeviceScopeEntry + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Device Scope Entry Type ............................ 0x%02x\n", + DmarDeviceScopeEntry->Type + )); + switch (DmarDeviceScopeEntry->Type) { + case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT: + VTDLIB_DEBUG ((DEBUG_INFO, + " PCI Endpoint Device\n" + )); + break; + case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE: + VTDLIB_DEBUG ((DEBUG_INFO, + " PCI Sub-hierachy\n" + )); + break; + case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_IOAPIC: + VTDLIB_DEBUG ((DEBUG_INFO, + " IOAPIC\n" + )); + break; + case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_MSI_CAPABLE_HPET: + VTDLIB_DEBUG ((DEBUG_INFO, + " MSI Capable HPET\n" + )); + break; + case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_ACPI_NAMESPACE_DEVICE: + VTDLIB_DEBUG ((DEBUG_INFO, + " ACPI Namespace Device\n" + )); + break; + default: + break; + } + VTDLIB_DEBUG ((DEBUG_INFO, + " Length ............................................. 
0x%02x\n", + DmarDeviceScopeEntry->Length + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Enumeration ID ..................................... 0x%02x\n", + DmarDeviceScopeEntry->EnumerationId + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Starting Bus Number ................................ 0x%02x\n", + DmarDeviceScopeEntry->StartBusNumber + )); + + PciPathNumber =3D (DmarDeviceScopeEntry->Length - sizeof(EFI_ACPI_DMAR_D= EVICE_SCOPE_STRUCTURE_HEADER)) / sizeof(EFI_ACPI_DMAR_PCI_PATH); + PciPath =3D (EFI_ACPI_DMAR_PCI_PATH *)(DmarDeviceScopeEntry + 1); + for (PciPathIndex =3D 0; PciPathIndex < PciPathNumber; PciPathIndex++) { + VTDLIB_DEBUG ((DEBUG_INFO, + " Device ............................................. 0x%02x\n= ", + PciPath[PciPathIndex].Device + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Function ........................................... 0x%02x\n= ", + PciPath[PciPathIndex].Function + )); + } + + VTDLIB_DEBUG ((DEBUG_INFO, + " *****************************************************************= ********\n\n" + )); +} + +/** + Dump DMAR SIDP table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Sidp DMAR SIDP table +**/ +VOID +VtdLibDumpDmarSidp ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_SIDP_HEADER *Sidp + ) +{ + EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDeviceScopeEntry; + INTN SidpLen; + + if (Sidp =3D=3D NULL) { + return; + } + + VTDLIB_DEBUG ((DEBUG_INFO, + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " * SoC Integrated Device Property Reporting Structure = *\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO, + (sizeof(UINTN) =3D=3D sizeof(UINT64)) ? + " SIDP address ........................................... 0x%016lx\n= " : + " SIDP address ........................................... 
0x%08x\n", + Sidp + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Type ................................................. 0x%04x\n", + Sidp->Header.Type + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Length ............................................... 0x%04x\n", + Sidp->Header.Length + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Segment Number ....................................... 0x%04x\n", + Sidp->SegmentNumber + )); + + SidpLen =3D Sidp->Header.Length - sizeof(EFI_ACPI_DMAR_SIDP_HEADER); + DmarDeviceScopeEntry =3D (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *)= (Sidp + 1); + while (SidpLen > 0) { + VtdLibDumpDmarDeviceScopeEntry (Context, CallbackHandle, DmarDeviceSco= peEntry); + SidpLen -=3D DmarDeviceScopeEntry->Length; + DmarDeviceScopeEntry =3D (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER = *)((UINTN)DmarDeviceScopeEntry + DmarDeviceScopeEntry->Length); + } + + VTDLIB_DEBUG ((DEBUG_INFO, + " *******************************************************************= ********\n\n" + )); +} + +/** + Dump DMAR SATC table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Satc DMAR SATC table +**/ +VOID +VtdLibDumpDmarSatc ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_SATC_HEADER *Satc + ) +{ + EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDeviceScopeEntry; + INTN SatcLen; + + if (Satc =3D=3D NULL) { + return; + } + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " * ACPI Soc Integrated Address Translation Cache reporting Str= ucture *\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + (sizeof(UINTN) =3D=3D sizeof(UINT64)) ? + " SATC address ........................................... 0x%016lx\n= " : + " SATC address ........................................... 
0x%08x\n", + Satc + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Type ................................................. 0x%04x\n", + Satc->Header.Type + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Length ............................................... 0x%04x\n", + Satc->Header.Length + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Flags ................................................ 0x%02x\n", + Satc->Flags + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Segment Number ....................................... 0x%04x\n", + Satc->SegmentNumber + )); + + SatcLen =3D Satc->Header.Length - sizeof(EFI_ACPI_DMAR_SATC_HEADER); + DmarDeviceScopeEntry =3D (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *)= (Satc + 1); + while (SatcLen > 0) { + VtdLibDumpDmarDeviceScopeEntry (Context, CallbackHandle, DmarDeviceSco= peEntry); + SatcLen -=3D DmarDeviceScopeEntry->Length; + DmarDeviceScopeEntry =3D (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER = *)((UINTN)DmarDeviceScopeEntry + DmarDeviceScopeEntry->Length); + } + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n\n" + )); +} + +/** + Dump DMAR ANDD table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Andd DMAR ANDD table +**/ +VOID +VtdLibDumpDmarAndd ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_ANDD_HEADER *Andd + ) +{ + if (Andd =3D=3D NULL) { + return; + } + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " * ACPI Name-space Device Declaration Structure = *\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + (sizeof(UINTN) =3D=3D sizeof(UINT64)) ? + " ANDD address ........................................... 
0x%016lx\n= " : + " ANDD address ........................................... 0x%08x\n", + Andd + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Type ................................................. 0x%04x\n", + Andd->Header.Type + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Length ............................................... 0x%04x\n", + Andd->Header.Length + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " ACPI Device Number ................................... 0x%02x\n", + Andd->AcpiDeviceNumber + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " ACPI Object Name ..................................... '%a'\n", + (Andd + 1) + )); + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n\n" + )); +} + +/** + Dump DMAR RHSA table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Rhsa DMAR RHSA table +**/ +VOID +VtdLibDumpDmarRhsa ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_RHSA_HEADER *Rhsa + ) +{ + if (Rhsa =3D=3D NULL) { + return; + } + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " * Remapping Hardware Status Affinity Structure = *\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + (sizeof(UINTN) =3D=3D sizeof(UINT64)) ? + " RHSA address ........................................... 0x%016lx\n= " : + " RHSA address ........................................... 0x%08x\n", + Rhsa + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Type ................................................. 0x%04x\n", + Rhsa->Header.Type + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Length ............................................... 
0x%04x\n", + Rhsa->Header.Length + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Register Base Address ................................ 0x%016lx\n= ", + Rhsa->RegisterBaseAddress + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Proximity Domain ..................................... 0x%08x\n", + Rhsa->ProximityDomain + )); + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n\n" + )); +} + +/** + Dump DMAR ATSR table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Atsr DMAR ATSR table +**/ +VOID +VtdLibDumpDmarAtsr ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_ATSR_HEADER *Atsr + ) +{ + EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDeviceScopeEntry; + INTN AtsrLen; + + if (Atsr =3D=3D NULL) { + return; + } + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " * Root Port ATS Capability Reporting Structure = *\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + (sizeof(UINTN) =3D=3D sizeof(UINT64)) ? + " ATSR address ........................................... 0x%016lx\n= " : + " ATSR address ........................................... 0x%08x\n", + Atsr + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Type ................................................. 0x%04x\n", + Atsr->Header.Type + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Length ............................................... 0x%04x\n", + Atsr->Header.Length + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Flags ................................................ 0x%02x\n", + Atsr->Flags + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " ALL_PORTS .......................................... 
0x%02x\n", + Atsr->Flags & EFI_ACPI_DMAR_ATSR_FLAGS_ALL_PORTS + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Segment Number ....................................... 0x%04x\n", + Atsr->SegmentNumber + )); + + AtsrLen =3D Atsr->Header.Length - sizeof(EFI_ACPI_DMAR_ATSR_HEADER); + DmarDeviceScopeEntry =3D (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *)= (Atsr + 1); + while (AtsrLen > 0) { + VtdLibDumpDmarDeviceScopeEntry (Context, CallbackHandle, DmarDeviceSco= peEntry); + AtsrLen -=3D DmarDeviceScopeEntry->Length; + DmarDeviceScopeEntry =3D (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER = *)((UINTN)DmarDeviceScopeEntry + DmarDeviceScopeEntry->Length); + } + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n\n" + )); +} + +/** + Dump DMAR RMRR table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Rmrr DMAR RMRR table +**/ +VOID +VtdLibDumpDmarRmrr ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_RMRR_HEADER *Rmrr + ) +{ + EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDeviceScopeEntry; + INTN RmrrLen; + + if (Rmrr =3D=3D NULL) { + return; + } + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " * Reserved Memory Region Reporting Structure = *\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + (sizeof(UINTN) =3D=3D sizeof(UINT64)) ? + " RMRR address ........................................... 0x%016lx\n= " : + " RMRR address ........................................... 0x%08x\n", + Rmrr + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Type ................................................. 
0x%04x\n", + Rmrr->Header.Type + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Length ............................................... 0x%04x\n", + Rmrr->Header.Length + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Segment Number ....................................... 0x%04x\n", + Rmrr->SegmentNumber + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Reserved Memory Region Base Address .................. 0x%016lx\n= ", + Rmrr->ReservedMemoryRegionBaseAddress + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Reserved Memory Region Limit Address ................. 0x%016lx\n= ", + Rmrr->ReservedMemoryRegionLimitAddress + )); + + RmrrLen =3D Rmrr->Header.Length - sizeof(EFI_ACPI_DMAR_RMRR_HEADER); + DmarDeviceScopeEntry =3D (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *)= (Rmrr + 1); + while (RmrrLen > 0) { + VtdLibDumpDmarDeviceScopeEntry (Context, CallbackHandle, DmarDeviceSco= peEntry); + RmrrLen -=3D DmarDeviceScopeEntry->Length; + DmarDeviceScopeEntry =3D (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER = *)((UINTN)DmarDeviceScopeEntry + DmarDeviceScopeEntry->Length); + } + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n\n" + )); +} + +/** + Dump DMAR DRHD table. 
+ + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Drhd DMAR DRHD table +**/ +VOID +VtdLibDumpDmarDrhd ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_DRHD_HEADER *Drhd + ) +{ + EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDeviceScopeEntry; + INTN DrhdLen; + + if (Drhd =3D=3D NULL) { + return; + } + + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " * DMA-Remapping Hardware Definition Structure = *\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " *******************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + (sizeof(UINTN) =3D=3D sizeof(UINT64)) ? + " DRHD address ........................................... 0x%016lx\n= " : + " DRHD address ........................................... 0x%08x\n", + Drhd + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Type ................................................. 0x%04x\n", + Drhd->Header.Type + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Length ............................................... 0x%04x\n", + Drhd->Header.Length + )); + VTDLIB_DEBUG ((DEBUG_INFO,=20 + " Flags ................................................ 0x%02x\n", + Drhd->Flags + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " INCLUDE_PCI_ALL .................................... 0x%02x\n", + Drhd->Flags & EFI_ACPI_DMAR_DRHD_FLAGS_INCLUDE_PCI_ALL + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Size ................................................. 0x%02x\n", + Drhd->Size + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Segment Number ....................................... 0x%04x\n", + Drhd->SegmentNumber + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Register Base Address ................................ 
0x%016lx\n= ", + Drhd->RegisterBaseAddress + )); + + DrhdLen =3D Drhd->Header.Length - sizeof(EFI_ACPI_DMAR_DRHD_HEADER); + DmarDeviceScopeEntry =3D (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *)= (Drhd + 1); + while (DrhdLen > 0) { + VtdLibDumpDmarDeviceScopeEntry (Context, CallbackHandle, DmarDeviceSco= peEntry); + DrhdLen -=3D DmarDeviceScopeEntry->Length; + DmarDeviceScopeEntry =3D (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER = *)((UINTN)DmarDeviceScopeEntry + DmarDeviceScopeEntry->Length); + } + + VTDLIB_DEBUG ((DEBUG_INFO, + " *******************************************************************= ********\n\n" + )); +} + +/** + Dump Header of DMAR ACPI table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Dmar DMAR ACPI table +**/ +VOID +VtdLibDumpAcpiDmarHeader ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_HEADER *Dmar + ) +{ + // + // Dump Dmar table + // + VTDLIB_DEBUG ((DEBUG_INFO, + "*********************************************************************= ********\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO, + "* DMAR Table = *\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO, + "*********************************************************************= ********\n" + )); + + VTDLIB_DEBUG ((DEBUG_INFO, + (sizeof(UINTN) =3D=3D sizeof(UINT64)) ? + "DMAR address ............................................. 0x%016lx\n= " : + "DMAR address ............................................. 0x%08x\n", + Dmar + )); + + VTDLIB_DEBUG ((DEBUG_INFO, + " Table Contents:\n" + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Host Address Width ................................... 0x%02x\n", + Dmar->HostAddressWidth + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " Flags ................................................ 0x%02x\n", + Dmar->Flags + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " INTR_REMAP ......................................... 
0x%02x\n", + Dmar->Flags & EFI_ACPI_DMAR_FLAGS_INTR_REMAP + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " X2APIC_OPT_OUT_SET ................................. 0x%02x\n", + Dmar->Flags & EFI_ACPI_DMAR_FLAGS_X2APIC_OPT_OUT + )); + VTDLIB_DEBUG ((DEBUG_INFO, + " DMA_CTRL_PLATFORM_OPT_IN_FLAG ...................... 0x%02x\n", + Dmar->Flags & EFI_ACPI_DMAR_FLAGS_DMA_CTRL_PLATFORM_OPT_IN_FLAG + )); +} + +/** + Dump DMAR ACPI table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Dmar DMAR ACPI table +**/ +VOID +VtdLibDumpAcpiDmar ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_HEADER *Dmar + ) +{ + EFI_ACPI_DMAR_STRUCTURE_HEADER *DmarHeader; + INTN DmarLen; + + if (Dmar =3D=3D NULL) { + return; + } + + // + // Dump Dmar table + // + VtdLibDumpAcpiDmarHeader (Context, CallbackHandle, Dmar); + + DmarLen =3D Dmar->Header.Length - sizeof(EFI_ACPI_DMAR_HEADER); + DmarHeader =3D (EFI_ACPI_DMAR_STRUCTURE_HEADER *)(Dmar + 1); + while (DmarLen > 0) { + switch (DmarHeader->Type) { + case EFI_ACPI_DMAR_TYPE_DRHD: + VtdLibDumpDmarDrhd (Context, CallbackHandle, (EFI_ACPI_DMAR_DRHD_HEA= DER *)DmarHeader); + break; + case EFI_ACPI_DMAR_TYPE_RMRR: + VtdLibDumpDmarRmrr (Context, CallbackHandle, (EFI_ACPI_DMAR_RMRR_HEA= DER *)DmarHeader); + break; + case EFI_ACPI_DMAR_TYPE_ATSR: + VtdLibDumpDmarAtsr (Context, CallbackHandle, (EFI_ACPI_DMAR_ATSR_HEA= DER *)DmarHeader); + break; + case EFI_ACPI_DMAR_TYPE_RHSA: + VtdLibDumpDmarRhsa (Context, CallbackHandle, (EFI_ACPI_DMAR_RHSA_HEA= DER *)DmarHeader); + break; + case EFI_ACPI_DMAR_TYPE_ANDD: + VtdLibDumpDmarAndd (Context, CallbackHandle, (EFI_ACPI_DMAR_ANDD_HEA= DER *)DmarHeader); + break; + case EFI_ACPI_DMAR_TYPE_SATC: + VtdLibDumpDmarSatc (Context, CallbackHandle, (EFI_ACPI_DMAR_SATC_HEA= DER *)DmarHeader); + break; + case EFI_ACPI_DMAR_TYPE_SIDP: + VtdLibDumpDmarSidp (Context, CallbackHandle, (EFI_ACPI_DMAR_SIDP_HEA= DER *)DmarHeader); + break; + default: 
+ break; + } + DmarLen -=3D DmarHeader->Length; + DmarHeader =3D (EFI_ACPI_DMAR_STRUCTURE_HEADER *)((UINTN)DmarHeader + = DmarHeader->Length); + } + + VTDLIB_DEBUG ((DEBUG_INFO, + "*********************************************************************= ********\n\n" + )); +} + +/** + Dump DRHD DMAR ACPI table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Dmar DMAR ACPI table +**/ +VOID +VtdLibDumpAcpiDmarDrhd ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN EFI_ACPI_DMAR_HEADER *Dmar + ) +{ + EFI_ACPI_DMAR_STRUCTURE_HEADER *DmarHeader; + INTN DmarLen; + + if (Dmar =3D=3D NULL) { + return; + } + + // + // Dump Dmar table + // + VtdLibDumpAcpiDmarHeader (Context, CallbackHandle, Dmar); + + DmarLen =3D Dmar->Header.Length - sizeof(EFI_ACPI_DMAR_HEADER); + DmarHeader =3D (EFI_ACPI_DMAR_STRUCTURE_HEADER *)(Dmar + 1); + while (DmarLen > 0) { + switch (DmarHeader->Type) { + case EFI_ACPI_DMAR_TYPE_DRHD: + VtdLibDumpDmarDrhd (Context, CallbackHandle, (EFI_ACPI_DMAR_DRHD_HEA= DER *)DmarHeader); + break; + default: + break; + } + DmarLen -=3D DmarHeader->Length; + DmarHeader =3D (EFI_ACPI_DMAR_STRUCTURE_HEADER *)((UINTN)DmarHeader + = DmarHeader->Length); + } + + VTDLIB_DEBUG ((DEBUG_INFO, + "*********************************************************************= ********\n\n" + )); +} + +/** + Dump the PCI device information managed by this VTd engine. 
+ + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] PciDeviceInfo VTd Unit Information +**/ +VOID +VtdLibDumpPciDeviceInfo ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN PCI_DEVICE_INFORMATION *PciDeviceInfo + ) +{ + UINTN Index; + + if (PciDeviceInfo !=3D NULL) { + VTDLIB_DEBUG ((DEBUG_INFO, "PCI Device Information (Number 0x%x, Inclu= deAll - %d):\n", + PciDeviceInfo->PciDeviceDataNumber, + PciDeviceInfo->IncludeAllFlag + )); + for (Index =3D 0; Index < PciDeviceInfo->PciDeviceDataNumber; Index++)= { + VTDLIB_DEBUG ((DEBUG_INFO, " S%04x B%02x D%02x F%02x\n", + PciDeviceInfo->Segment, + PciDeviceInfo->PciDeviceData[Index].PciSourceId.Bits.Bus, + PciDeviceInfo->PciDeviceData[Index].PciSourceId.Bits.Device, + PciDeviceInfo->PciDeviceData[Index].PciSourceId.Bits.Function + )); + } + } +} + +/** + Dump DMAR second level paging entry. + + @param[in] Context Event context + @param[in] CallbackHandle Callback handler + @param[in] SecondLevelPagingEntry The second level paging entry. + @param[in] Is5LevelPaging If it is the 5 level paging. +**/ +VOID +VtdLibDumpSecondLevelPagingEntry ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN VOID *SecondLevelPagingEntry, + IN BOOLEAN Is5LevelPaging + ) +{ + UINTN Index5; + UINTN Index4; + UINTN Index3; + UINTN Index2; + UINTN Index1; + UINTN Lvl5IndexEnd; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl5PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl4PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl3PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl2PtEntry; + VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl1PtEntry; + + VTDLIB_DEBUG ((DEBUG_VERBOSE, "=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D\n")); + VTDLIB_DEBUG ((DEBUG_VERBOSE, "DMAR Second Level Page Table:\n")); + VTDLIB_DEBUG ((DEBUG_VERBOSE, "SecondLevelPagingEntry Base - 0x%x, Is5Le= velPaging - %d\n", SecondLevelPagingEntry, Is5LevelPaging)); + + Lvl5IndexEnd =3D Is5LevelPaging ? 
SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGIN= G_ENTRY) : 1; + Lvl4PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)SecondLevelPagingEntry; + Lvl5PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)SecondLevelPagingEntry; + + for (Index5 =3D 0; Index5 < Lvl5IndexEnd; Index5++) { + if (Is5LevelPaging) { + if (Lvl5PtEntry[Index5].Uint64 !=3D 0) { + VTDLIB_DEBUG ((DEBUG_VERBOSE, " Lvl5Pt Entry(0x%03x) - 0x%016lx\n= ", Index5, Lvl5PtEntry[Index5].Uint64)); + } + if (Lvl5PtEntry[Index5].Uint64 =3D=3D 0) { + continue; + } + Lvl4PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64BITS_A= DDRESS(Lvl5PtEntry[Index5].Bits.AddressLo, Lvl5PtEntry[Index5].Bits.Address= Hi); + } + + for (Index4 =3D 0; Index4 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_EN= TRY); Index4++) { + if (Lvl4PtEntry[Index4].Uint64 !=3D 0) { + VTDLIB_DEBUG ((DEBUG_VERBOSE, " Lvl4Pt Entry(0x%03x) - 0x%016lx\n= ", Index4, Lvl4PtEntry[Index4].Uint64)); + } + if (Lvl4PtEntry[Index4].Uint64 =3D=3D 0) { + continue; + } + Lvl3PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64BITS_A= DDRESS(Lvl4PtEntry[Index4].Bits.AddressLo, Lvl4PtEntry[Index4].Bits.Address= Hi); + for (Index3 =3D 0; Index3 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_= ENTRY); Index3++) { + if (Lvl3PtEntry[Index3].Uint64 !=3D 0) { + VTDLIB_DEBUG ((DEBUG_VERBOSE, " Lvl3Pt Entry(0x%03x) - 0x%016l= x\n", Index3, Lvl3PtEntry[Index3].Uint64)); + } + if (Lvl3PtEntry[Index3].Uint64 =3D=3D 0) { + continue; + } + + Lvl2PtEntry =3D (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64BITS= _ADDRESS(Lvl3PtEntry[Index3].Bits.AddressLo, Lvl3PtEntry[Index3].Bits.Addre= ssHi); + for (Index2 =3D 0; Index2 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGIN= G_ENTRY); Index2++) { + if (Lvl2PtEntry[Index2].Uint64 !=3D 0) { + VTDLIB_DEBUG ((DEBUG_VERBOSE, " Lvl2Pt Entry(0x%03x) - 0x%0= 16lx\n", Index2, Lvl2PtEntry[Index2].Uint64)); + } + if (Lvl2PtEntry[Index2].Uint64 =3D=3D 0) { + continue; + } + if (Lvl2PtEntry[Index2].Bits.PageSize =3D=3D 0) { + Lvl1PtEntry =3D 
(VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTD_64= BITS_ADDRESS(Lvl2PtEntry[Index2].Bits.AddressLo, Lvl2PtEntry[Index2].Bits.A= ddressHi); + for (Index1 =3D 0; Index1 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_P= AGING_ENTRY); Index1++) { + if (Lvl1PtEntry[Index1].Uint64 !=3D 0) { + VTDLIB_DEBUG ((DEBUG_VERBOSE, " Lvl1Pt Entry(0x%03x) = - 0x%016lx\n", Index1, Lvl1PtEntry[Index1].Uint64)); + } + } + } + } + } + } + } + VTDLIB_DEBUG ((DEBUG_VERBOSE, "=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D\n")); +} + +/** + Dump DMAR context entry table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] RootEntry DMAR root entry. + @param[in] Is5LevelPaging If it is the 5 level paging. +**/ +VOID +VtdLibDumpDmarContextEntryTable ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN VTD_ROOT_ENTRY *RootEntry, + IN BOOLEAN Is5LevelPaging + ) +{ + UINTN Index; + UINTN Index2; + VTD_CONTEXT_ENTRY *ContextEntry; + + VTDLIB_DEBUG ((DEBUG_INFO, "=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\n")); + VTDLIB_DEBUG ((DEBUG_INFO, "DMAR Context Entry Table:\n")); + + VTDLIB_DEBUG ((DEBUG_INFO, "RootEntry Address - 0x%x\n", RootEntry)); + + for (Index =3D 0; Index < VTD_ROOT_ENTRY_NUMBER; Index++) { + if ((RootEntry[Index].Uint128.Uint64Lo !=3D 0) || (RootEntry[Index].Ui= nt128.Uint64Hi !=3D 0)) { + VTDLIB_DEBUG ((DEBUG_INFO, " RootEntry(0x%02x) B%02x - 0x%016lx %01= 6lx\n", + Index, Index, RootEntry[Index].Uint128.Uint64Hi, RootEntry[Index].= Uint128.Uint64Lo)); + } + if (RootEntry[Index].Bits.Present =3D=3D 0) { + continue; + } + ContextEntry =3D (VTD_CONTEXT_ENTRY *) (UINTN) VTD_64BITS_ADDRESS (Roo= tEntry[Index].Bits.ContextTablePointerLo, RootEntry[Index].Bits.ContextTabl= ePointerHi); + for (Index2 =3D 0; Index2 < VTD_CONTEXT_ENTRY_NUMBER; Index2++) { + if ((ContextEntry[Index2].Uint128.Uint64Lo !=3D 0) || (ContextEntry[= Index2].Uint128.Uint64Hi !=3D 0)) { + VTDLIB_DEBUG ((DEBUG_INFO, " 
ContextEntry(0x%02x) D%02xF%02x - = 0x%016lx %016lx\n", + Index2, Index2 >> 3, Index2 & 0x7, ContextEntry[Index2].Uint128.= Uint64Hi, ContextEntry[Index2].Uint128.Uint64Lo)); + } + if (ContextEntry[Index2].Bits.Present =3D=3D 0) { + continue; + } + VtdLibDumpSecondLevelPagingEntry (Context, CallbackHandle, (VOID *) = (UINTN) VTD_64BITS_ADDRESS (ContextEntry[Index2].Bits.SecondLevelPageTransl= ationPointerLo, ContextEntry[Index2].Bits.SecondLevelPageTranslationPointer= Hi), Is5LevelPaging); + } + } + VTDLIB_DEBUG ((DEBUG_INFO, "=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\n")); +} + +/** + Dump DMAR extended context entry table. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] ExtRootEntry DMAR extended root entry. + @param[in] Is5LevelPaging If it is the 5 level paging. +**/ +VOID +VtdLibDumpDmarExtContextEntryTable ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN VTD_EXT_ROOT_ENTRY *ExtRootEntry, + IN BOOLEAN Is5LevelPaging + ) +{ + UINTN Index; + UINTN Index2; + VTD_EXT_CONTEXT_ENTRY *ExtContextEntry; + + VTDLIB_DEBUG ((DEBUG_INFO, "=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\n")); + VTDLIB_DEBUG ((DEBUG_INFO, "DMAR ExtContext Entry Table:\n")); + + VTDLIB_DEBUG ((DEBUG_INFO, "ExtRootEntry Address - 0x%x\n", ExtRootEntry= )); + + for (Index =3D 0; Index < VTD_ROOT_ENTRY_NUMBER; Index++) { + if ((ExtRootEntry[Index].Uint128.Uint64Lo !=3D 0) || (ExtRootEntry[Ind= ex].Uint128.Uint64Hi !=3D 0)) { + VTDLIB_DEBUG ((DEBUG_INFO, " ExtRootEntry(0x%02x) B%02x - 0x%016lx = %016lx\n", + Index, Index, ExtRootEntry[Index].Uint128.Uint64Hi, ExtRootEntry[I= ndex].Uint128.Uint64Lo)); + } + if (ExtRootEntry[Index].Bits.LowerPresent =3D=3D 0) { + continue; + } + ExtContextEntry =3D (VTD_EXT_CONTEXT_ENTRY *) (UINTN) VTD_64BITS_ADDRE= SS (ExtRootEntry[Index].Bits.LowerContextTablePointerLo, ExtRootEntry[Index= 
].Bits.LowerContextTablePointerHi); + for (Index2 =3D 0; Index2 < VTD_CONTEXT_ENTRY_NUMBER/2; Index2++) { + if ((ExtContextEntry[Index2].Uint256.Uint64_1 !=3D 0) || (ExtContext= Entry[Index2].Uint256.Uint64_2 !=3D 0) || + (ExtContextEntry[Index2].Uint256.Uint64_3 !=3D 0) || (ExtContext= Entry[Index2].Uint256.Uint64_4 !=3D 0)) { + VTDLIB_DEBUG ((DEBUG_INFO, " ExtContextEntryLower(0x%02x) D%02x= F%02x - 0x%016lx %016lx %016lx %016lx\n", + Index2, Index2 >> 3, Index2 & 0x7, ExtContextEntry[Index2].Uint2= 56.Uint64_4, ExtContextEntry[Index2].Uint256.Uint64_3, ExtContextEntry[Inde= x2].Uint256.Uint64_2, ExtContextEntry[Index2].Uint256.Uint64_1)); + } + if (ExtContextEntry[Index2].Bits.Present =3D=3D 0) { + continue; + } + VtdLibDumpSecondLevelPagingEntry (Context, CallbackHandle, (VOID *) = (UINTN) VTD_64BITS_ADDRESS (ExtContextEntry[Index2].Bits.SecondLevelPageTra= nslationPointerLo, ExtContextEntry[Index2].Bits.SecondLevelPageTranslationP= ointerHi), Is5LevelPaging); + } + + if (ExtRootEntry[Index].Bits.UpperPresent =3D=3D 0) { + continue; + } + ExtContextEntry =3D (VTD_EXT_CONTEXT_ENTRY *) (UINTN) VTD_64BITS_ADDRE= SS (ExtRootEntry[Index].Bits.UpperContextTablePointerLo, ExtRootEntry[Index= ].Bits.UpperContextTablePointerHi); + for (Index2 =3D 0; Index2 < VTD_CONTEXT_ENTRY_NUMBER/2; Index2++) { + if ((ExtContextEntry[Index2].Uint256.Uint64_1 !=3D 0) || (ExtContext= Entry[Index2].Uint256.Uint64_2 !=3D 0) || + (ExtContextEntry[Index2].Uint256.Uint64_3 !=3D 0) || (ExtContext= Entry[Index2].Uint256.Uint64_4 !=3D 0)) { + VTDLIB_DEBUG ((DEBUG_INFO, " ExtContextEntryUpper(0x%02x) D%02x= F%02x - 0x%016lx %016lx %016lx %016lx\n", + Index2, (Index2 + 128) >> 3, (Index2 + 128) & 0x7, ExtContextEnt= ry[Index2].Uint256.Uint64_4, ExtContextEntry[Index2].Uint256.Uint64_3, ExtC= ontextEntry[Index2].Uint256.Uint64_2, ExtContextEntry[Index2].Uint256.Uint6= 4_1)); + } + if (ExtContextEntry[Index2].Bits.Present =3D=3D 0) { + continue; + } + } + } + VTDLIB_DEBUG ((DEBUG_INFO, 
"=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\n")); +} + +/** + Dump VTd FRCD register. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] FrcdRegNum FRCD Register Number + @param[in] FrcdRegTab FRCD Register Table +**/ +VOID +VtdLibDumpVtdFrcdRegs ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN UINT16 FrcdRegNum, + IN VTD_UINT128 *FrcdRegTab + ) +{ + UINT16 Index; + VTD_FRCD_REG FrcdReg; + VTD_SOURCE_ID SourceId; + + for (Index =3D 0; Index < FrcdRegNum; Index++) { + FrcdReg.Uint64[0] =3D FrcdRegTab[Index].Uint64Lo; + FrcdReg.Uint64[1] =3D FrcdRegTab[Index].Uint64Hi; + VTDLIB_DEBUG ((DEBUG_INFO, " FRCD_REG[%d] - 0x%016lx %016lx\n", Index= , FrcdReg.Uint64[1], FrcdReg.Uint64[0])); + if (FrcdReg.Uint64[1] !=3D 0 || FrcdReg.Uint64[0] !=3D 0) { + VTDLIB_DEBUG ((DEBUG_INFO, " Fault Info - 0x%016lx\n", VTD_64BITS= _ADDRESS(FrcdReg.Bits.FILo, FrcdReg.Bits.FIHi))); + VTDLIB_DEBUG ((DEBUG_INFO, " Fault Bit - %d\n", FrcdReg.Bits.F)); + SourceId.Uint16 =3D (UINT16)FrcdReg.Bits.SID; + VTDLIB_DEBUG ((DEBUG_INFO, " Source - B%02x D%02x F%02x\n", Sourc= eId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function)); + VTDLIB_DEBUG ((DEBUG_INFO, " Type - 0x%02x\n", (FrcdReg.Bits.T1 <= < 1) | FrcdReg.Bits.T2)); + VTDLIB_DEBUG ((DEBUG_INFO, " Reason - %x (Refer to VTd Spec, Appe= ndix A)\n", FrcdReg.Bits.FR)); + } + } +} + +/** + Dump VTd registers. 
+ + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] VtdRegInfo Registers information +**/ +VOID +VtdLibDumpVtdRegsAll ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN VTD_REGESTER_INFO *VtdRegInfo + ) +{ + if (VtdRegInfo !=3D NULL) { + VTDLIB_DEBUG ((DEBUG_INFO, "VTd Engine: [0x%016lx]\n", VtdRegInfo->Bas= eAddress)); + VTDLIB_DEBUG ((DEBUG_INFO, " VER_REG - 0x%08x\n", VtdRegInfo->V= erReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " CAP_REG - 0x%016lx\n", VtdRegInfo->C= apReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " ECAP_REG - 0x%016lx\n", VtdRegInfo->E= capReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " GSTS_REG - 0x%08x \n", VtdRegInfo->G= stsReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " RTADDR_REG - 0x%016lx\n", VtdRegInfo->R= taddrReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " CCMD_REG - 0x%016lx\n", VtdRegInfo->C= cmdReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " FSTS_REG - 0x%08x\n", VtdRegInfo->F= stsReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " FECTL_REG - 0x%08x\n", VtdRegInfo->F= ectlReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " FEDATA_REG - 0x%08x\n", VtdRegInfo->F= edataReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " FEADDR_REG - 0x%08x\n", VtdRegInfo->F= eaddrReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " FEUADDR_REG - 0x%08x\n", VtdRegInfo->F= euaddrReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " IQERCD_REG - 0x%016lx\n", VtdRegInfo->I= qercdReg)); + + VtdLibDumpVtdFrcdRegs (Context, CallbackHandle, VtdRegInfo->FrcdRegNum= , VtdRegInfo->FrcdReg); + + VTDLIB_DEBUG ((DEBUG_INFO, " IVA_REG - 0x%016lx\n", VtdRegInfo->I= vaReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " IOTLB_REG - 0x%016lx\n", VtdRegInfo->I= otlbReg)); + } +} + +/** + Dump VTd registers. 
+ + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] VtdRegInfo Registers information +**/ +VOID +VtdLibDumpVtdRegsThin ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN VTD_REGESTER_THIN_INFO *VtdRegInfo + ) +{ + if (VtdRegInfo !=3D NULL) { + VTDLIB_DEBUG ((DEBUG_INFO, "VTd Engine: [0x%016lx]\n", VtdRegInfo->Bas= eAddress)); + VTDLIB_DEBUG ((DEBUG_INFO, " GSTS_REG - 0x%08x \n", VtdRegInfo->G= stsReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " RTADDR_REG - 0x%016lx\n", VtdRegInfo->R= taddrReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " FSTS_REG - 0x%08x\n", VtdRegInfo->F= stsReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " FECTL_REG - 0x%08x\n", VtdRegInfo->F= ectlReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " IQERCD_REG - 0x%016lx\n", VtdRegInfo->I= qercdReg)); + + VtdLibDumpVtdFrcdRegs (Context, CallbackHandle, VtdRegInfo->FrcdRegNum= , VtdRegInfo->FrcdReg); + } +} + +/** + Dump VTd registers. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] VtdRegInfo Registers information +**/ +VOID +VtdLibDumpVtdRegsQi ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN VTD_REGESTER_QI_INFO *VtdRegInfo + ) +{ + if (VtdRegInfo !=3D NULL) { + VTDLIB_DEBUG ((DEBUG_INFO, "VTd Engine: [0x%016lx]\n", VtdRegInfo->Bas= eAddress)); + VTDLIB_DEBUG ((DEBUG_INFO, " FSTS_REG - 0x%08x\n", VtdRegInfo->F= stsReg)); + VTDLIB_DEBUG ((DEBUG_INFO, " IQERCD_REG - 0x%016lx\n", VtdRegInfo->I= qercdReg)); + } +} + +/** + Dump Vtd PEI pre-mem event. 
+ + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Event VTDLOG_EVENT_2PARAM event + +**/ +VOID +VtdLibDumpPeiPreMemInfo ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN VTDLOG_EVENT_2PARAM *Event + ) +{ + UINT64 VtdBarAddress; + UINT64 Mode; + UINT64 Status; + + VtdBarAddress =3D Event->Data1; + Mode =3D Event->Data2 & 0xFF; + Status =3D (Event->Data2>>8) & 0xFF; + + switch (Mode) { + case VTD_LOG_PEI_PRE_MEM_DISABLE: + VTDLIB_DEBUG ((DEBUG_INFO, "PEI (pre-memory): Disabled [0x%016lx] 0x%x= \n", VtdBarAddress, Status)); + break; + case VTD_LOG_PEI_PRE_MEM_ADM: + VTDLIB_DEBUG ((DEBUG_INFO, "PEI (pre-memory): Enable Abort DMA Mode [0= x%016lx] 0x%x\n", VtdBarAddress, Status)); + break; + case VTD_LOG_PEI_PRE_MEM_TE: + VTDLIB_DEBUG ((DEBUG_INFO, "PEI (pre-memory): Enable NULL Root Entry T= able [0x%016lx] 0x%x\n", VtdBarAddress, Status)); + break; + case VTD_LOG_PEI_PRE_MEM_PMR: + VTDLIB_DEBUG ((DEBUG_INFO, "PEI (pre-memory): Enable PMR [0x%016lx] 0x= %x\n", VtdBarAddress, Status)); + break; + case VTD_LOG_PEI_PRE_MEM_NOT_USED: + // + // Not used + // + break; + default: + VTDLIB_DEBUG ((DEBUG_INFO, "PEI (pre-memory): Unknown [0x%016lx] 0x%x\= n", VtdBarAddress, Status)); + break; + } +} + +/** + Dump Vtd Queued Invaildation event. 
+ + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Event VTDLOG_EVENT_2PARAM event + +**/ +VOID +VtdLibDumpQueuedInvaildation ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN VTDLOG_EVENT_2PARAM *Event + ) +{ + switch (Event->Data1) { + case VTD_LOG_QI_DISABLE: + VTDLIB_DEBUG ((DEBUG_INFO, " [0x%016lx] Disable\n", Event->Data2)); + break; + case VTD_LOG_QI_ENABLE: + VTDLIB_DEBUG ((DEBUG_INFO, " [0x%016lx] Enable\n", Event->Data2)); + break; + case VTD_LOG_QI_ERROR_OUT_OF_RESOURCES: + VTDLIB_DEBUG ((DEBUG_INFO, " [0x%016lx] error - Out of resources\n", E= vent->Data2)); + break; + default: + VTDLIB_DEBUG ((DEBUG_INFO, " [0x%016lx] error - (0x%x)\n", Event->Data= 2, Event->Data1)); + break; + } +} + +/** + Dump Vtd registers event. + + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Event VTDLOG_EVENT_CONTEXT event + +**/ +VOID +VtdLibDumpRegisters ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN VTDLOG_EVENT_CONTEXT *Event + ) +{ + switch (Event->Param) { + case VTDLOG_REGISTER_ALL: + VtdLibDumpVtdRegsAll (Context, CallbackHandle, (VTD_REGESTER_INFO *) E= vent->Data); + break; + case VTDLOG_REGISTER_THIN: + VtdLibDumpVtdRegsThin (Context, CallbackHandle, (VTD_REGESTER_THIN_INF= O *) Event->Data); + break; + case VTDLOG_REGISTER_QI: + VtdLibDumpVtdRegsQi (Context, CallbackHandle, (VTD_REGESTER_QI_INFO *)= Event->Data); + break; + default: + VTDLIB_DEBUG ((DEBUG_INFO, " Unknown format (%d)\n", Event->Param)); + break; + } +} + +/** + Dump Vtd PEI Error event. 
+
+  @param[in]      Context           Event context
+  @param[in out]  CallbackHandle    Callback handler
+  @param[in]      Event             VTDLOG_EVENT_2PARAM event
+
+**/
+VOID
+VtdLibDumpPeiError (
+  IN VOID                           *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB    CallbackHandle,
+  IN VTDLOG_EVENT_2PARAM            *Event
+  )
+{
+  UINT64                            Timestamp;
+
+  Timestamp = Event->Header.Timestamp;
+
+  switch (Event->Data1) {
+  case VTD_LOG_PEI_VTD_ERROR_PPI_ALLOC:
+    VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: Error - PPI alloc length [0x%016lx]\n", Timestamp, Event->Data2));
+    break;
+  case VTD_LOG_PEI_VTD_ERROR_PPI_MAP:
+    VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: Error - PPI map length [0x%016lx]\n", Timestamp, Event->Data2));
+    break;
+  default:
+    VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: Error - Unknown (%d) 0x%x\n", Timestamp, Event->Data1, Event->Data2));
+    break;
+  }
+}
+
+/**
+  Dump Vtd SetAttribute event.
+
+  @param[in]      Context           Event context
+  @param[in out]  CallbackHandle    Callback handler
+  @param[in]      Event             VTDLOG_EVENT_CONTEXT event
+
+**/
+VOID
+VtdLibDumpSetAttribute (
+  IN VOID                           *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB    CallbackHandle,
+  IN VTDLOG_EVENT_CONTEXT           *Event
+  )
+{
+  VTD_PROTOCOL_SET_ATTRIBUTE        *SetAttributeInfo;
+
+  SetAttributeInfo = (VTD_PROTOCOL_SET_ATTRIBUTE *) Event->Data;
+
+  VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: SetAttribute SourceId = 0x%04x, Address = 0x%lx, Length = 0x%lx, IoMmuAccess = 0x%lx, %r\n",
+    Event->Header.Timestamp,
+    SetAttributeInfo->SourceId.Uint16,
+    SetAttributeInfo->DeviceAddress,
+    SetAttributeInfo->Length,
+    SetAttributeInfo->IoMmuAccess,
+    SetAttributeInfo->Status));
+}
+
+/**
+  Dump Vtd Root Table event.
+ + @param[in] Context Event context + @param[in out] CallbackHandle Callback handler + @param[in] Event VTDLOG_EVENT_CONTEXT event + +**/ +VOID +VtdLibDumpRootTable ( + IN VOID *Context, + IN OUT EDKII_VTD_LIB_STRING_CB CallbackHandle, + IN VTDLOG_EVENT_CONTEXT *Event + ) +{ + VTD_ROOT_TABLE_INFO *RootTableInfo; + + RootTableInfo =3D (VTD_ROOT_TABLE_INFO *) Event->Data; + if (Event->Param =3D=3D 0) { + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Root Entry Table [0x%016lx]\n",= Event->Header.Timestamp, RootTableInfo->BaseAddress)); + VtdLibDumpDmarContextEntryTable (Context, CallbackHandle, (VTD_ROOT_EN= TRY *) (UINTN) RootTableInfo->TableAddress, RootTableInfo->Is5LevelPaging); + + } else if (Event->Param =3D=3D 1) { + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Ext Root Entry Table [0x%016lx]= \n", Event->Header.Timestamp, RootTableInfo->BaseAddress)); + VtdLibDumpDmarExtContextEntryTable (Context, CallbackHandle, (VTD_EXT_= ROOT_ENTRY *) (UINTN) RootTableInfo->TableAddress, RootTableInfo->Is5Level= Paging); + + } else { + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Unknown Root Table Type (%d)\n"= , Event->Header.Timestamp, Event->Param)); + } +} + +/** + Decode log event. 
+
+  @param[in]      Context           Event context
+  @param[in out]  CallbackHandle    Callback handler
+  @param[in]      Event             Event struct
+
+  @retval TRUE   Decode event success
+  @retval FALSE  Unknown event
+**/
+BOOLEAN
+VtdLibDecodeEvent (
+  IN VOID                           *Context,
+  IN OUT EDKII_VTD_LIB_STRING_CB    CallbackHandle,
+  IN VTDLOG_EVENT                   *Event
+  )
+{
+  BOOLEAN                           Result;
+  UINT64                            Timestamp;
+  UINT64                            Data1;
+  UINT64                            Data2;
+
+  Result    = TRUE;
+  Timestamp = Event->EventHeader.Timestamp;
+  Data1     = Event->CommenEvent.Data1;
+  Data2     = Event->CommenEvent.Data2;
+
+  switch (Event->EventHeader.LogType) {
+  case VTDLOG_LOG_TYPE (VTDLOG_PEI_BASIC):
+    if (Data1 & VTD_LOG_ERROR_BUFFER_FULL) {
+      VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: Info : Log Buffer Full\n", Timestamp));
+      Data1 &= ~VTD_LOG_ERROR_BUFFER_FULL;
+    }
+    if (Data1 != 0) {
+      VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: Info : 0x%x, 0x%x\n", Timestamp, Data1, Data2));
+    }
+    break;
+  case VTDLOG_LOG_TYPE (VTDLOG_PEI_PRE_MEM_DMA_PROTECT):
+    VtdLibDumpPeiPreMemInfo (Context, CallbackHandle, &(Event->CommenEvent));
+    break;
+  case VTDLOG_LOG_TYPE (VTDLOG_PEI_PMR_LOW_MEMORY_RANGE):
+    VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: PMR Low Memory Range [0x%x, 0x%x]\n", Timestamp, Data1, Data2));
+    break;
+  case VTDLOG_LOG_TYPE (VTDLOG_PEI_PMR_HIGH_MEMORY_RANGE):
+    VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: PMR High Memory Range [0x%016lx, 0x%016lx]\n", Timestamp, Data1, Data2));
+    break;
+  case VTDLOG_LOG_TYPE (VTDLOG_PEI_PROTECT_MEMORY_RANGE):
+    VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: Protected DMA Memory Range [0x%016lx, 0x%016lx]\n", Timestamp, Data1, Data2));
+    break;
+  case VTDLOG_LOG_TYPE (VTDLOG_PEI_POST_MEM_ENABLE_DMA_PROTECT):
+    VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: Enable DMA protection [0x%016lx] %r\n", Timestamp, Data1, Data2));
+    break;
+  case VTDLOG_LOG_TYPE (VTDLOG_PEI_POST_MEM_DISABLE_DMA_PROTECT):
+    VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: Disable DMA protection [0x%016lx]\n", Timestamp, Data1));
+    break;
+  case VTDLOG_LOG_TYPE
(VTDLOG_PEI_QUEUED_INVALIDATION): + VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: Queued Invalidation", Timestamp= )); + VtdLibDumpQueuedInvaildation (Context, CallbackHandle, &(Event->Commen= Event)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_PEI_REGISTER): + VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: Dump Registers\n", Timestamp)); + VtdLibDumpRegisters (Context, CallbackHandle, &(Event->ContextEvent)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_PEI_VTD_ERROR): + VtdLibDumpPeiError (Context, CallbackHandle, &(Event->CommenEvent)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_PEI_PPI_ALLOC_BUFFER): + VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: PPI AllocateBuffer 0x%x, Length= =3D 0x%x\n", Timestamp, Data1, Data2)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_PEI_PPI_MAP): + VTDLIB_DEBUG ((DEBUG_INFO, "PEI [%ld]: PPI Map 0x%x, Length =3D 0x%x\n= ", Timestamp, Data1, Data2)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_BASIC): + if (Data1 & VTD_LOG_ERROR_BUFFER_FULL) { + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Info : Log Buffer Full\n", Ti= mestamp)); + Data1 &=3D ~VTD_LOG_ERROR_BUFFER_FULL; + } + if (Data1 !=3D 0) { + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Info : 0x%x, 0x%x\n", Timesta= mp, Data1, Data2)); + } + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_DMAR_TABLE): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: DMAR Table\n", Timestamp)); + VtdLibDumpAcpiDmar (Context, CallbackHandle, (EFI_ACPI_DMAR_HEADER *) = Event->ContextEvent.Data); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_SETUP_VTD): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Setup VTd Below/Above 4G Memory= Limit =3D [0x%016lx, 0x%016lx]\n", Timestamp, Data1, Data2)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_PCI_DEVICE): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: PCI Devices [0x%016lx]\n", Time= stamp, Event->ContextEvent.Param)); + VtdLibDumpPciDeviceInfo (Context, CallbackHandle, (PCI_DEVICE_INFORMAT= ION *) Event->ContextEvent.Data); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_REGISTER): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Dump 
Registers\n", Timestamp)); + VtdLibDumpRegisters (Context, CallbackHandle, &(Event->ContextEvent)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_ENABLE_DMAR): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Enable DMAR [0x%016lx]\n", Time= stamp, Data1)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_DISABLE_DMAR): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Disable DMAR [0x%016lx]\n", Tim= estamp, Data1)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_DISABLE_PMR): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Disable PMR [0x%016lx] %r\n", T= imestamp, Data1, Data2)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_INSTALL_IOMMU_PROTOCOL): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Install IOMMU Protocol %r\n", T= imestamp, Data1)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_QUEUED_INVALIDATION): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Queued Invalidation", Timestamp= )); + VtdLibDumpQueuedInvaildation (Context, CallbackHandle, &(Event->Commen= Event)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_ROOT_TABLE): + VtdLibDumpRootTable (Context, CallbackHandle, &(Event->ContextEvent)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_IOMMU_ALLOC_BUFFER): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: AllocateBuffer 0x%x, Page =3D 0= x%x\n", Timestamp, Data2, Data1)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_IOMMU_FREE_BUFFER): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: FreeBuffer 0x%x, Page =3D 0x%x\= n", Timestamp, Data2, Data1)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_IOMMU_MAP): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Map 0x%x, Operation =3D 0x%x\n"= , Timestamp, Data1, Data2)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_IOMMU_UNMAP): + VTDLIB_DEBUG ((DEBUG_INFO, "DXE [%ld]: Unmap 0x%x, NumberOfBytes =3D 0= x%x\n", Timestamp, Data2, Data1)); + break; + case VTDLOG_LOG_TYPE (VTDLOG_DXE_IOMMU_SET_ATTRIBUTE): + VtdLibDumpSetAttribute (Context, CallbackHandle, &(Event->ContextEvent= )); + break; + default: + VTDLIB_DEBUG ((DEBUG_INFO, "## Unknown VTd Event Type=3D%d Timestamp= =3D%ld 
Size=3D%d\n", Event->EventHeader.LogType, Event->EventHeader.Timesta= mp, Event->EventHeader.DataSize)); + Result =3D FALSE; + break; + } + + return Result; +} + +/** + Flush VTd engine write buffer. + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. +**/ +VOID +VtdLibFlushWriteBuffer ( + IN UINTN VtdUnitBaseAddress + ) +{ + UINT32 Reg32; + VTD_CAP_REG CapReg; + + CapReg.Uint64 =3D MmioRead64 (VtdUnitBaseAddress + R_CAP_REG); + + if (CapReg.Bits.RWBF !=3D 0) { + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, Reg32 | B_GMCD_REG_WBF); + do { + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + } while ((Reg32 & B_GSTS_REG_WBF) !=3D 0); + } +} + +/** + Clear Global Command Register Bits + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. + @param[in] BitMask Bit mask +**/ +VOID +VtdLibClearGlobalCommandRegisterBits ( + IN UINTN VtdUnitBaseAddress, + IN UINT32 BitMask + ) +{ + UINT32 Reg32; + UINT32 Status; + UINT32 Command; + + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + Status =3D (Reg32 & 0x96FFFFFF); // Reset the one-shot bits + Command =3D (Status & (~BitMask)); + MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, Command); + + DEBUG ((DEBUG_INFO, "Clear GCMD_REG bits 0x%x.\n", BitMask)); + + // + // Poll on Status bit of Global status register to become zero + // + do { + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + } while ((Reg32 & BitMask) =3D=3D BitMask); + DEBUG ((DEBUG_INFO, "GSTS_REG : 0x%08x \n", Reg32)); +} + +/** + Set Global Command Register Bits + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. 
+
+  @param[in] BitMask            Bit mask
+**/
+VOID
+VtdLibSetGlobalCommandRegisterBits (
+  IN UINTN     VtdUnitBaseAddress,
+  IN UINT32    BitMask
+  )
+{
+  UINT32    Reg32;
+  UINT32    Status;
+  UINT32    Command;
+
+  Reg32   = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+  Status  = (Reg32 & 0x96FFFFFF);       // Reset the one-shot bits
+  Command = (Status | BitMask);
+  MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, Command);
+
+  DEBUG ((DEBUG_INFO, "Set GCMD_REG bits 0x%x.\n", BitMask));
+
+  //
+  // Poll on Status bit of Global status register to become not zero
+  //
+  do {
+    Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+  } while ((Reg32 & BitMask) == 0);
+  DEBUG ((DEBUG_INFO, "GSTS_REG : 0x%08x \n", Reg32));
+}
+
+/**
+  Disable DMAR translation.
+
+  @param[in] VtdUnitBaseAddress The base address of the VTd engine.
+
+  @retval EFI_SUCCESS           DMAR translation is disabled.
+**/
+EFI_STATUS
+VtdLibDisableDmar (
+  IN UINTN    VtdUnitBaseAddress
+  )
+{
+  UINT32    Reg32;
+
+  DEBUG ((DEBUG_INFO, ">>>>>>DisableDmar() for engine [%x]\n", VtdUnitBaseAddress));
+
+  //
+  // Write Buffer Flush before invalidation
+  //
+  VtdLibFlushWriteBuffer (VtdUnitBaseAddress);
+
+  //
+  // Disable Dmar
+  //
+  // Set TE (Translation Enable: BIT31) of Global command register to zero
+  //
+  VtdLibClearGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_TE);
+
+  //
+  // Set SRTP (Set Root Table Pointer: BIT30) of Global command register in order to update the root table pointer
+  //
+  VtdLibSetGlobalCommandRegisterBits (VtdUnitBaseAddress, B_GMCD_REG_SRTP);
+
+  Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+  DEBUG ((DEBUG_INFO, "DisableDmar: GSTS_REG - 0x%08x\n", Reg32));
+
+  MmioWrite64 (VtdUnitBaseAddress + R_RTADDR_REG, 0);
+
+  DEBUG ((DEBUG_INFO, "VTD () Disabled!<<<<<<\n"));
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Disable PMR.
+
+  @param[in] VtdUnitBaseAddress The base address of the VTd engine.
+
+  @retval EFI_SUCCESS           PMR is disabled.
+ @retval EFI_UNSUPPORTED PMR is not supported. + @retval EFI_NOT_STARTED PMR was not enabled. +**/ +EFI_STATUS +VtdLibDisablePmr ( + IN UINTN VtdUnitBaseAddress + ) +{ + UINT32 Reg32; + VTD_CAP_REG CapReg; + EFI_STATUS Status; + + CapReg.Uint64 =3D MmioRead64 (VtdUnitBaseAddress + R_CAP_REG); + if (CapReg.Bits.PLMR =3D=3D 0 || CapReg.Bits.PHMR =3D=3D 0) { + // + // PMR is not supported + // + return EFI_UNSUPPORTED; + } + + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_PMEN_ENABLE_REG); + if ((Reg32 & BIT0) !=3D 0) { + MmioWrite32 (VtdUnitBaseAddress + R_PMEN_ENABLE_REG, 0x0); + do { + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_PMEN_ENABLE_REG); + } while((Reg32 & BIT0) !=3D 0); + + DEBUG ((DEBUG_INFO,"Pmr [0x%016lx] disabled\n", VtdUnitBaseAddress)); + Status =3D EFI_SUCCESS; + } else { + DEBUG ((DEBUG_INFO,"Pmr [0x%016lx] not enabled\n", VtdUnitBaseAddress)= ); + Status =3D EFI_NOT_STARTED; + } + return Status; +} + +/** + Disable queued invalidation interface. + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. +**/ +VOID +VtdLibDisableQueuedInvalidationInterface ( + IN UINTN VtdUnitBaseAddress + ) +{ + UINT32 Reg32; + QI_256_DESC QiDesc; + + QiDesc.Uint64[0] =3D QI_IWD_TYPE; + QiDesc.Uint64[1] =3D 0; + QiDesc.Uint64[2] =3D 0; + QiDesc.Uint64[3] =3D 0; + + VtdLibSubmitQueuedInvalidationDescriptor (VtdUnitBaseAddress, &QiDesc, T= RUE); + + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + Reg32 &=3D (~B_GMCD_REG_QIE); + MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, Reg32); + + DEBUG ((DEBUG_INFO, "Disable Queued Invalidation Interface. [%x] GCMD_RE= G =3D 0x%x\n", VtdUnitBaseAddress, Reg32)); + do { + Reg32 =3D MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG); + } while ((Reg32 & B_GSTS_REG_QIES) !=3D 0); + + MmioWrite64 (VtdUnitBaseAddress + R_IQA_REG, 0); +} + +/** + Submit the queued invalidation descriptor to the remapping + hardware unit and wait for its completion. + + @param[in] VtdUnitBaseAddress The base address of the VTd engine. 
+  @param[in] Desc              The invalidate descriptor
+  @param[in] ClearFaultBits    Clear Error bits
+
+  @retval EFI_SUCCESS           The operation was successful.
+  @retval EFI_INVALID_PARAMETER Parameter is invalid.
+  @retval EFI_NOT_READY         Queued invalidation is not inited.
+  @retval EFI_DEVICE_ERROR      Detect fault, need to clear fault bits if ClearFaultBits is FALSE
+
+**/
+EFI_STATUS
+VtdLibSubmitQueuedInvalidationDescriptor (
+  IN UINTN                      VtdUnitBaseAddress,
+  IN VOID                       *Desc,
+  IN BOOLEAN                    ClearFaultBits
+  )
+{
+  UINTN                         QueueSize;
+  UINTN                         QueueTail;
+  UINTN                         QueueHead;
+  QI_DESC                       *Qi128Desc;
+  QI_256_DESC                   *Qi256Desc;
+  VTD_IQA_REG                   IqaReg;
+  VTD_IQT_REG                   IqtReg;
+  VTD_IQH_REG                   IqhReg;
+  UINT32                        FaultReg;
+  UINT64                        IqercdReg;
+  UINT64                        IQBassAddress;
+
+  if (Desc == NULL) {
+    return EFI_INVALID_PARAMETER;
+  }
+
+  IqaReg.Uint64 = MmioRead64 (VtdUnitBaseAddress + R_IQA_REG);
+  //
+  // Get IQA_REG.IQA (Invalidation Queue Base Address)
+  //
+  IQBassAddress = RShiftU64 (IqaReg.Uint64, 12);
+  if (IQBassAddress == 0) {
+    DEBUG ((DEBUG_ERROR, "Invalidation Queue Buffer not ready [0x%lx]\n", IqaReg.Uint64));
+    return EFI_NOT_READY;
+  }
+  IqtReg.Uint64 = MmioRead64 (VtdUnitBaseAddress + R_IQT_REG);
+
+  //
+  // Check IQA_REG.DW (Descriptor Width)
+  //
+  if ((IqaReg.Uint64 & BIT11) == 0) {
+    //
+    // 128-bit descriptor
+    //
+    QueueSize = (UINTN) (1 << (IqaReg.Bits.QS + 8));
+    Qi128Desc = (QI_DESC *) (UINTN) LShiftU64 (IQBassAddress, VTD_PAGE_SHIFT);
+    //
+    // Get IQT_REG.QT for 128-bit descriptors
+    //
+    QueueTail = (UINTN) (RShiftU64 (IqtReg.Uint64, 4) & 0x7FFF);
+    Qi128Desc += QueueTail;
+    CopyMem (Qi128Desc, Desc, sizeof (QI_DESC));
+    QueueTail = (QueueTail + 1) % QueueSize;
+
+    DEBUG ((DEBUG_VERBOSE, "[0x%x] Submit QI Descriptor 0x%x [0x%016lx, 0x%016lx]\n",
+            VtdUnitBaseAddress,
+            QueueTail,
+            Qi128Desc->Low,
+            Qi128Desc->High));
+
+    IqtReg.Uint64 &= ~(0x7FFF << 4);
+    IqtReg.Uint64 |= LShiftU64 (QueueTail, 4);
+  } else {
+    //
+    // 256-bit descriptor
+    //
+    QueueSize = (UINTN) (1 << (IqaReg.Bits.QS + 7));
+    Qi256Desc = (QI_256_DESC *) (UINTN) LShiftU64 (IQBassAddress, VTD_PAGE_SHIFT);
+    //
+    // Get IQT_REG.QT for 256-bit descriptors
+    //
+    QueueTail = (UINTN) (RShiftU64 (IqtReg.Uint64, 5) & 0x3FFF);
+    Qi256Desc += QueueTail;
+    CopyMem (Qi256Desc, Desc, sizeof (QI_256_DESC));
+    QueueTail = (QueueTail + 1) % QueueSize;
+
+    DEBUG ((DEBUG_VERBOSE, "[0x%x] Submit QI Descriptor 0x%x [0x%016lx, 0x%016lx, 0x%016lx, 0x%016lx]\n",
+            VtdUnitBaseAddress,
+            QueueTail,
+            Qi256Desc->Uint64[0],
+            Qi256Desc->Uint64[1],
+            Qi256Desc->Uint64[2],
+            Qi256Desc->Uint64[3]));
+
+    IqtReg.Uint64 &= ~(0x3FFF << 5);
+    IqtReg.Uint64 |= LShiftU64 (QueueTail, 5);
+  }
+
+  //
+  // Update the HW tail register indicating the presence of new descriptors.
+  //
+  MmioWrite64 (VtdUnitBaseAddress + R_IQT_REG, IqtReg.Uint64);
+
+  do {
+    FaultReg = MmioRead32 (VtdUnitBaseAddress + R_FSTS_REG);
+    if (FaultReg & (B_FSTS_REG_IQE | B_FSTS_REG_ITE | B_FSTS_REG_ICE)) {
+      IqercdReg = MmioRead64 (VtdUnitBaseAddress + R_IQERCD_REG);
+      DEBUG ((DEBUG_ERROR, "BAR [0x%016lx] Detect Queue Invalidation Fault [0x%08x] - IQERCD [0x%016lx]\n", VtdUnitBaseAddress, FaultReg, IqercdReg));
+      if (ClearFaultBits) {
+        FaultReg &= (B_FSTS_REG_IQE | B_FSTS_REG_ITE | B_FSTS_REG_ICE);
+        MmioWrite32 (VtdUnitBaseAddress + R_FSTS_REG, FaultReg);
+      }
+      return EFI_DEVICE_ERROR;
+    }
+
+    IqhReg.Uint64 = MmioRead64 (VtdUnitBaseAddress + R_IQH_REG);
+    //
+    // Check IQA_REG.DW (Descriptor Width) and get IQH_REG.QH
+    //
+    if ((IqaReg.Uint64 & BIT11) == 0) {
+      QueueHead = (UINTN) (RShiftU64 (IqhReg.Uint64, 4) & 0x7FFF);
+    } else {
+      QueueHead = (UINTN) (RShiftU64 (IqhReg.Uint64, 5) & 0x3FFF);
+    }
+  } while (QueueTail != QueueHead);
+
+  return EFI_SUCCESS;
+}
diff --git a/Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiDxeLib.inf b/Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiDxeLib.inf
new file mode 100644
index 000000000..0d6dff5fa
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiDxeLib.inf
@@ -0,0 +1,30 @@
+### @file
+# Component information file for Intel VTd function library.
+#
+# Copyright (c) 2023, Intel Corporation. All rights reserved.
+#
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+###
+
+[Defines]
+  INF_VERSION                    = 0x00010005
+  BASE_NAME                      = IntelVTdPeiDxeLib
+  FILE_GUID                      = 6cd8b1ea-152d-4cc9-b9b1-f5c692ba63da
+  VERSION_STRING                 = 1.0
+  MODULE_TYPE                    = BASE
+  LIBRARY_CLASS                  = IntelVTdPeiDxeLib
+
+[LibraryClasses]
+  BaseLib
+  PrintLib
+  IoLib
+  CacheMaintenanceLib
+
+[Packages]
+  MdePkg/MdePkg.dec
+  MdeModulePkg/MdeModulePkg.dec
+  IntelSiliconPkg/IntelSiliconPkg.dec
+
+[Sources]
+  IntelVTdPeiDxeLib.c
diff --git a/Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiDxeLibExt.inf b/Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiDxeLibExt.inf
new file mode 100644
index 000000000..9a2b28e12
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Library/IntelVTdPeiDxeLib/IntelVTdPeiDxeLibExt.inf
@@ -0,0 +1,34 @@
+### @file
+# Component information file for Intel VTd function library.
+#
+# Copyright (c) 2023, Intel Corporation. All rights reserved.
+#
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+###
+
+[Defines]
+  INF_VERSION                    = 0x00010005
+  BASE_NAME                      = IntelVTdPeiDxeLib
+  FILE_GUID                      = 6fd8b3aa-852d-6ccA-b9b2-f5c692ba63ca
+  VERSION_STRING                 = 1.0
+  MODULE_TYPE                    = BASE
+  LIBRARY_CLASS                  = IntelVTdPeiDxeLib
+
+[LibraryClasses]
+  BaseLib
+  PrintLib
+  IoLib
+  CacheMaintenanceLib
+
+[Packages]
+  MdePkg/MdePkg.dec
+  MdeModulePkg/MdeModulePkg.dec
+  IntelSiliconPkg/IntelSiliconPkg.dec
+
+[Sources]
+  IntelVTdPeiDxeLib.c
+
+[BuildOptions]
+  *_*_X64_CC_FLAGS = -DEXT_CALLBACK
+
-- 
2.26.2.windows.1