From: "Chao Li" <lichao@loongson.cn>
To: devel@edk2.groups.io
Cc: Michael D Kinney, Liming Gao, Zhiguang Liu
Subject: [edk2-devel] [PATCH v3 25/34] MdePkg/BaseCacheMaintenanceLib: LoongArch cache maintenance implementation.
Date: Tue, 27 Sep 2022 19:13:45 +0800
Message-Id: <20220927111354.4107719-26-lichao@loongson.cn>
In-Reply-To: <20220927111354.4107719-1-lichao@loongson.cn>
References: <20220927111354.4107719-1-lichao@loongson.cn>

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4053

Implement LoongArch cache maintenance functions in BaseCacheMaintenanceLib.

Cc: Michael D Kinney
Cc: Liming Gao
Cc: Zhiguang Liu
Signed-off-by: Chao Li
Reviewed-by: Michael D Kinney
---
 .../BaseCacheMaintenanceLib.inf               |   6 +-
 .../BaseCacheMaintenanceLib/LoongArchCache.c  | 254 ++++++++++++++++++
 2 files changed, 259 insertions(+), 1 deletion(-)
 create mode 100644 MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c

diff --git a/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf b/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
index 33114243d5..6fd9cbe5f6 100644
--- a/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
+++ b/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
@@ -7,6 +7,7 @@
 #  Copyright (c) 2007 - 2018, Intel Corporation. All rights reserved.<BR>
 #  Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
 #  Copyright (c) 2020, Hewlett Packard Enterprise Development LP. All rights reserved.<BR>
+#  Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
 #
 #  SPDX-License-Identifier: BSD-2-Clause-Patent
 #
@@ -24,7 +25,7 @@
 
 
 #
-#  VALID_ARCHITECTURES           = IA32 X64 EBC ARM AARCH64
+#  VALID_ARCHITECTURES           = IA32 X64 EBC ARM AARCH64 RISCV64 LOONGARCH64
 #
 
 [Sources.IA32]
@@ -45,6 +46,9 @@
 [Sources.RISCV64]
   RiscVCache.c
 
+[Sources.LOONGARCH64]
+  LoongArchCache.c
+
 [Packages]
   MdePkg/MdePkg.dec
 
diff --git a/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c b/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c
new file mode 100644
index 0000000000..4c8773278c
--- /dev/null
+++ b/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c
@@ -0,0 +1,254 @@
+/** @file
+  Cache Maintenance Functions for LoongArch.
+  LoongArch cache maintenance functions have not yet been completed and will be added later.
+  Functions are null functions now.
+
+  Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+//
+// Include common header file for this module.
+//
+#include <Base.h>
+#include <Library/BaseLib.h>
+#include <Library/DebugLib.h>
+
+/**
+  LoongArch data barrier operation.
+**/
+VOID
+EFIAPI
+AsmDataBarrierLoongArch (
+  VOID
+  );
+
+/**
+  LoongArch instruction barrier operation.
+**/
+VOID
+EFIAPI
+AsmInstructionBarrierLoongArch (
+  VOID
+  );
+
+/**
+  Invalidates the entire instruction cache in cache coherency domain of the
+  calling CPU.
+
+**/
+VOID
+EFIAPI
+InvalidateInstructionCache (
+  VOID
+  )
+{
+  AsmInstructionBarrierLoongArch ();
+}
+
+/**
+  Invalidates a range of instruction cache lines in the cache coherency domain
+  of the calling CPU.
+
+  Invalidates the instruction cache lines specified by Address and Length. If
+  Address is not aligned on a cache line boundary, then entire instruction
+  cache line containing Address is invalidated. If Address + Length is not
+  aligned on a cache line boundary, then the entire instruction cache line
+  containing Address + Length - 1 is invalidated. This function may choose to
+  invalidate the entire instruction cache if that is more efficient than
+  invalidating the specified range. If Length is 0, then no instruction cache
+  lines are invalidated. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param[in]  Address  The base address of the instruction cache lines to
+                       invalidate. If the CPU is in a physical addressing mode,
+                       then Address is a physical address. If the CPU is in a
+                       virtual addressing mode, then Address is a virtual address.
+
+  @param[in]  Length   The number of bytes to invalidate from the instruction cache.
+
+  @return Address.
+
+**/
+VOID *
+EFIAPI
+InvalidateInstructionCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  AsmInstructionBarrierLoongArch ();
+  return Address;
+}
+
+/**
+  Writes Back and Invalidates the entire data cache in cache coherency domain
+  of the calling CPU.
+
+  Writes Back and Invalidates the entire data cache in cache coherency domain
+  of the calling CPU. This function guarantees that all dirty cache lines are
+  written back to system memory, and also invalidates all the data cache lines
+  in the cache coherency domain of the calling CPU.
+
+**/
+VOID
+EFIAPI
+WriteBackInvalidateDataCache (
+  VOID
+  )
+{
+  DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
+}
+
+/**
+  Writes Back and Invalidates a range of data cache lines in the cache
+  coherency domain of the calling CPU.
+
+  Writes Back and Invalidate the data cache lines specified by Address and
+  Length. If Address is not aligned on a cache line boundary, then entire data
+  cache line containing Address is written back and invalidated. If Address +
+  Length is not aligned on a cache line boundary, then the entire data cache
+  line containing Address + Length - 1 is written back and invalidated. This
+  function may choose to write back and invalidate the entire data cache if
+  that is more efficient than writing back and invalidating the specified
+  range. If Length is 0, then no data cache lines are written back and
+  invalidated. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param[in]  Address  The base address of the data cache lines to write back and
+                       invalidate. If the CPU is in a physical addressing mode,
+                       then Address is a physical address. If the CPU is in a
+                       virtual addressing mode, then Address is a virtual address.
+  @param[in]  Length   The number of bytes to write back and invalidate from the
+                       data cache.
+
+  @return Address of cache invalidation.
+
+**/
+VOID *
+EFIAPI
+WriteBackInvalidateDataCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
+  return Address;
+}
+
+/**
+  Writes Back the entire data cache in cache coherency domain of the calling
+  CPU.
+
+  Writes Back the entire data cache in cache coherency domain of the calling
+  CPU. This function guarantees that all dirty cache lines are written back to
+  system memory. This function may also invalidate all the data cache lines in
+  the cache coherency domain of the calling CPU.
+
+**/
+VOID
+EFIAPI
+WriteBackDataCache (
+  VOID
+  )
+{
+  WriteBackInvalidateDataCache ();
+}
+
+/**
+  Writes Back a range of data cache lines in the cache coherency domain of the
+  calling CPU.
+
+  Writes Back the data cache lines specified by Address and Length. If Address
+  is not aligned on a cache line boundary, then entire data cache line
+  containing Address is written back. If Address + Length is not aligned on a
+  cache line boundary, then the entire data cache line containing Address +
+  Length - 1 is written back. This function may choose to write back the entire
+  data cache if that is more efficient than writing back the specified range.
+  If Length is 0, then no data cache lines are written back. This function may
+  also invalidate all the data cache lines in the specified range of the cache
+  coherency domain of the calling CPU. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param[in]  Address  The base address of the data cache lines to write back. If
+                       the CPU is in a physical addressing mode, then Address is a
+                       physical address. If the CPU is in a virtual addressing
+                       mode, then Address is a virtual address.
+  @param[in]  Length   The number of bytes to write back from the data cache.
+
+  @return Address of cache written in main memory.
+
+**/
+VOID *
+EFIAPI
+WriteBackDataCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
+  return Address;
+}
+
+/**
+  Invalidates the entire data cache in cache coherency domain of the calling
+  CPU.
+
+  Invalidates the entire data cache in cache coherency domain of the calling
+  CPU. This function must be used with care because dirty cache lines are not
+  written back to system memory. It is typically used for cache diagnostics. If
+  the CPU does not support invalidation of the entire data cache, then a write
+  back and invalidate operation should be performed on the entire data cache.
+
+**/
+VOID
+EFIAPI
+InvalidateDataCache (
+  VOID
+  )
+{
+  AsmDataBarrierLoongArch ();
+}
+
+/**
+  Invalidates a range of data cache lines in the cache coherency domain of the
+  calling CPU.
+
+  Invalidates the data cache lines specified by Address and Length. If Address
+  is not aligned on a cache line boundary, then entire data cache line
+  containing Address is invalidated. If Address + Length is not aligned on a
+  cache line boundary, then the entire data cache line containing Address +
+  Length - 1 is invalidated. This function must never invalidate any cache
+  lines outside the specified range. If Length is 0, then no data cache lines
+  are invalidated. Address is returned.
+  This function must be used with care because dirty cache lines are not
+  written back to system memory. It is typically used for cache diagnostics.
+  If the CPU does not support invalidation of a data cache range, then a write
+  back and invalidate operation should be performed on the data cache range.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param[in]  Address  The base address of the data cache lines to invalidate. If
+                       the CPU is in a physical addressing mode, then Address is a
+                       physical address. If the CPU is in a virtual addressing
+                       mode, then Address is a virtual address.
+  @param[in]  Length   The number of bytes to invalidate from the data cache.
+
+  @return Address.
+
+**/
+VOID *
+EFIAPI
+InvalidateDataCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  AsmDataBarrierLoongArch ();
+  return Address;
+}
-- 
2.27.0
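[Editor's note, not part of the patch] The functions added above implement the public CacheMaintenanceLib interface declared in MdePkg/Include/Library/CacheMaintenanceLib.h. A minimal usage sketch follows, showing the typical pattern of a caller that copies executable code into memory and then synchronizes the caches for the destination range. The function LoadAndSyncCode and its parameters are hypothetical and only illustrate the library API.

#include <Base.h>
#include <Library/BaseMemoryLib.h>        // CopyMem()
#include <Library/CacheMaintenanceLib.h>  // WriteBackDataCacheRange(), InvalidateInstructionCacheRange()

//
// Hypothetical caller: copy a code image and make it coherent for execution.
// With this patch, InvalidateInstructionCacheRange() on LoongArch issues an
// instruction barrier, while the data-cache write-back variants are still stubs.
//
VOID
EFIAPI
LoadAndSyncCode (
  OUT VOID        *Destination,
  IN  CONST VOID  *Source,
  IN  UINTN       Length
  )
{
  CopyMem (Destination, Source, Length);

  //
  // Push the newly written bytes out of the data cache, then discard any stale
  // instruction-cache contents covering the destination range.
  //
  WriteBackDataCacheRange (Destination, Length);
  InvalidateInstructionCacheRange (Destination, Length);
}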
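The patch declares AsmDataBarrierLoongArch() and AsmInstructionBarrierLoongArch() but does not define them; the series supplies them as assembly routines in a separate source not shown in this mail. Purely as an illustration of what such barriers amount to, a hypothetical inline-assembly rendering for a GCC-compatible toolchain could look like the sketch below. This is an assumption for exposition, not the edk2 implementation; the LoongArch DBAR and IBAR instructions with hint 0 are the architecture's full data and instruction barriers.

#include <Base.h>

//
// Illustration only: full data barrier (DBAR 0) and instruction barrier
// (IBAR 0). The actual library provides these functions in assembly rather
// than as inline asm.
//
VOID
EFIAPI
AsmDataBarrierLoongArch (
  VOID
  )
{
  __asm__ __volatile__ ("dbar 0" : : : "memory");
}

VOID
EFIAPI
AsmInstructionBarrierLoongArch (
  VOID
  )
{
  __asm__ __volatile__ ("ibar 0" : : : "memory");
}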