From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Jonathan Corbet
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger, Jules Maselbas, Guillaume Thouvenin, linux-doc@vger.kernel.org
Subject: [RFC PATCH v3 01/37] Documentation: kvx: Add basic documentation
Date: Mon, 22 Jul 2024 11:41:12 +0200
Message-ID: <20240722094226.21602-2-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add some documentation for the kvx architecture and its Linux port.

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Co-developed-by: Guillaume Thouvenin
Signed-off-by: Guillaume Thouvenin
Signed-off-by: Yann Sionneau
---

Notes:
    V1 -> V2:
     - converted to .rst, typos and formatting fixes
     - reworded a few paragraphs
    V2 -> V3:
     - moved rst index into new arch index file
     - fixed a few typos
     - remarks from V2 (inconsistencies) will be addressed in V4

 Documentation/arch/index.rst              |   1 +
 Documentation/arch/kvx/index.rst          |  17 ++
 Documentation/arch/kvx/kvx-exceptions.rst | 258 ++++++++++++++++++++
 Documentation/arch/kvx/kvx-iommu.rst      | 191 +++++++++++++++
 Documentation/arch/kvx/kvx-mmu.rst        | 256 ++++++++++++++++++++
 Documentation/arch/kvx/kvx-smp.rst        |  41 ++++
 Documentation/arch/kvx/kvx.rst            | 273 ++++++++++++++++++++++
 7 files changed, 1037 insertions(+)
 create mode 100644 Documentation/arch/kvx/index.rst
 create mode 100644 Documentation/arch/kvx/kvx-exceptions.rst
 create mode 100644 Documentation/arch/kvx/kvx-iommu.rst
 create mode 100644 Documentation/arch/kvx/kvx-mmu.rst
 create mode 100644 Documentation/arch/kvx/kvx-smp.rst
 create mode 100644 Documentation/arch/kvx/kvx.rst

diff --git a/Documentation/arch/index.rst b/Documentation/arch/index.rst
index 3f9962e45c098..c99cf4cccf3b3 100644
--- a/Documentation/arch/index.rst
+++ b/Documentation/arch/index.rst
@@ -13,6 +13,7 @@ implementation.
    arm/index
    arm64/index
+   kvx/index
    loongarch/index
    m68k/index
    mips/index
    nios2/index
diff --git a/Documentation/arch/kvx/index.rst b/Documentation/arch/kvx/index.rst
new file mode 100644
index 0000000000000..0bac597b5fbe9
--- /dev/null
+++ b/Documentation/arch/kvx/index.rst
@@ -0,0 +1,17 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=======================
+Kalray kvx Architecture
+=======================
+
+.. toctree::
+   :maxdepth: 2
+   :numbered:
+
+   kvx
+   kvx-exceptions
+   kvx-smp
+   kvx-mmu
+   kvx-iommu
+
+
diff --git a/Documentation/arch/kvx/kvx-exceptions.rst b/Documentation/arch/kvx/kvx-exceptions.rst
new file mode 100644
index 0000000000000..3810d3489de3b
--- /dev/null
+++ b/Documentation/arch/kvx/kvx-exceptions.rst
@@ -0,0 +1,258 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========
+Exceptions
+==========
+
+On kvx, handlers are set using the ``$ev`` (exception vector) register, which
+specifies a base address. An offset is added to ``$ev`` upon exception
+and the result is used as the new ``$pc``.
+The offset depends on which exception vector the cpu wants to jump to:
+
+ ============== =========
+ ``$ev + 0x00`` debug
+ ``$ev + 0x40`` trap
+ ``$ev + 0x80`` interrupt
+ ``$ev + 0xc0`` syscall
+ ============== =========
+
+Then, handlers are laid out in the following order::
+
+            _____________
+           |             |
+           |   Syscall   |
+           |_____________|
+           |             |
+           |  Interrupts |
+           |_____________|
+           |             |
+           |    Traps    |
+           |_____________|
+           |             | ^
+           |    Debug    | | Stride
+   BASE -> |_____________| v
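+
+As a toy illustration of this dispatch (not code from the port: the enum
+and helper names below are made up), the new ``$pc`` is simply the base
+plus the per-class offset::
+
+  /* Hypothetical sketch: deriving the handler entry point from $ev.
+   * The offsets match the table above; the names are illustrative. */
+  enum kvx_ev_offset {
+          EV_DEBUG     = 0x00,
+          EV_TRAP      = 0x40,
+          EV_INTERRUPT = 0x80,
+          EV_SYSCALL   = 0xc0,
+  };
+
+  static unsigned long kvx_handler_addr(unsigned long ev_base,
+                                        enum kvx_ev_offset off)
+  {
+          /* The CPU jumps to $ev + offset on exception entry. */
+          return ev_base + off;
+  }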
+
+Interrupts and traps are serviced similarly, i.e.:
+
+ - Jump to handler
+ - Save all registers
+ - Prepare the call (do_IRQ or trap_handler)
+ - Restore all registers
+ - Return from exception
+
+The entry.S file is (as for other architectures) the entry point into the
+kernel. It contains all assembly routines related to interrupts/traps/syscalls.
+
+Syscall handling
+----------------
+
+A syscall must be issued using ``scall $r6``, where ``$r6`` contains the
+syscall number. This convention allows the kernel to modify and restart
+a syscall.
+
+Syscalls are handled differently than interrupts/exceptions. From an ABI
+point of view, syscalls are like function calls: any caller-saved register
+can be clobbered by the syscall. However, syscall parameters are passed
+using registers r0 through r7. These registers must be preserved to avoid
+clobbering them before the actual syscall function.
+
+On a syscall from userspace (``scall`` instruction), the processor will put
+the syscall number in $es.sn and switch from user to kernel privilege
+mode. ``kvx_syscall_handler`` will be called in kernel mode.
+
+The following steps are then taken:
+
+ 1. Switch to the kernel stack
+ 2. Extract the syscall number
+ 3. If the syscall number is bogus, set the syscall function to ``sys_ni_syscall``
+ 4. If tracing is enabled
+
+    - Jump to ``trace_syscall_enter``
+    - Save the syscall arguments (``r0`` -> ``r7``) on the stack in ``pt_regs``.
+    - Call the ``do_trace_syscall_enter`` function.
+
+ 5. Restore the syscall arguments, since they may have been modified by the C call
+ 6. Call the syscall function
+ 7. Save ``$r0`` in ``pt_regs``, since it can be clobbered afterwards
+ 8. If tracing is enabled, call ``trace_syscall_exit``.
+ 9. Call ``work_pending``
+ 10. Return to user!
+
+The trace call is handled out of the fast path. All slow path handling
+is done in another part of code to avoid messing with the cache.
+
+Signals
+-------
+
+Signals are handled when exiting the kernel, before returning to user.
+When handling a signal, the path is the following:
+
+ 1. The user application is executing normally,
+    then an exception happens (syscall, interrupt, trap).
+ 2. The exception handling path is taken,
+    and before returning to user, pending signals are checked.
+ 3. Signals are handled by ``do_signal``.
+    Registers are saved and a special part of the stack is modified
+    to create a trampoline that calls ``rt_sigreturn``;
+    ``$spc`` is modified to jump to the user signal handler;
+    ``$ra`` is modified to jump to the sigreturn trampoline directly after
+    returning from the user signal handler.
+ 4. The user signal handler is called after the ``rfe`` from the exception;
+    when returning, ``$ra`` is restored into ``$pc``, resulting in a call
+    to the sigreturn trampoline.
+ 5. The sigreturn trampoline is executed, leading to the rt_sigreturn syscall.
+ 6. The ``rt_sigreturn`` syscall is executed. The previous registers are
+    restored to allow returning to user correctly.
+ 7. The user application is resumed at the exact point where it was
+    interrupted.
+
+::
+
+   +----------+
+   |    1     |
+   | User app |   @func
+   |  (user)  |
+   +---+------+
+       |
+       | it/trap/scall
+       |
+   +---v-------+
+   |    2      |
+   | exception |
+   | handling  |
+   | (kernel)  |
+   +---+-------+
+       |
+       | Check if signals are pending; if so, handle them
+       |
+   +---v--------+
+   |    3       |
+   | do_signal  |
+   | handling   |
+   | (kernel)   |
+   +----+-------+
+        |
+        | Return to user signal handler
+        |
+   +----v------+
+   |    4      |
+   |  signal   |
+   |  handler  |
+   |  (user)   |
+   +----+------+
+        |
+        | Return to sigreturn trampoline
+        |
+   +----v-------+
+   |    5       |
+   |  syscall   |
+   |rt_sigreturn|
+   |  (user)    |
+   +----+-------+
+        |
+        | Syscall to rt_sigreturn
+        |
+   +----v-------+
+   |    6       |
+   | sigreturn  |
+   |  handler   |
+   |  (kernel)  |
+   +----+-------+
+        |
+        | Modify context to return to original func
+        |
+   +----v-----+
+   |    7     |
+   | User app |   @func
+   |  (user)  |
+   +----------+
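+
+From userspace, this whole path is driven by the standard POSIX signal API;
+the sketch below (plain user code, not part of the port) maps onto the steps
+above::
+
+  /* Steps 1-7 seen from user code: the kernel runs do_signal() (3),
+   * calls handler() (4), and the trampoline triggers rt_sigreturn (5-6)
+   * when handler() returns. */
+  #include <signal.h>
+  #include <stdio.h>
+  #include <unistd.h>
+
+  static void handler(int sig)
+  {
+          /* Step 4: running on the stack prepared by do_signal(). */
+          write(1, "in handler\n", 11);
+          /* Returning here jumps to the sigreturn trampoline ($ra). */
+  }
+
+  int main(void)
+  {
+          struct sigaction sa = { .sa_handler = handler };
+
+          sigemptyset(&sa.sa_mask);
+          sigaction(SIGUSR1, &sa, NULL);
+          raise(SIGUSR1);    /* steps 1-3: exception, then do_signal() */
+          puts("restored");  /* step 7: back at the interruption point */
+          return 0;
+  }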
+
+Registers handling
+------------------
+
+The MMU is disabled in all exception paths, during register save and
+restoration. This prevents triggering an MMU fault (such as a TLB miss) which
+could clobber the current register state. Such an event can occur when RWX
+mode is enabled and the memory accessed to save the registers triggers a TLB
+miss. Aside from that, which is common to all exception paths, registers are
+saved differently depending on the exception type.
+
+Interrupts and traps
+--------------------
+
+When interrupts and traps are triggered, only caller-saved registers are saved.
+Indeed, we rely on the fact that C code will save and restore the callee-saved
+registers, hence there is no need to save them. The path is::
+
+      +------------+          +-----------+        +---------------+
+   IT | Save caller|  C Call  | Execute C |  Ret   | Restore caller| Ret from IT
+  --->+ saved      +--------->+ handler   +------->+ saved         +----->
+      | registers  |          +-----------+        | registers     |
+      +------------+                               +---------------+
+
+However, when returning to user, we check if there is pending work
+(work_pending). If a signal is pending and there is a signal handler to be
+called, then all registers are saved on the stack in ``pt_regs`` before
+executing the signal handler and restored afterwards. Since only the
+caller-saved registers have been saved before checking for pending work, the
+callee-saved registers also need to be saved to restore everything correctly
+before returning to user.
+This path is the following (a bit more complicated!)::
+
+      +------------+
+      | Save caller|          +-----------+  Ret   +------------+
+   IT | saved      |  C Call  | Execute C | to asm | Check work |
+  --->+ registers  +--------->+ handler   +------->+ pending    |
+      | to pt_regs |          +-----------+        +--+---+-----+
+      +------------+                                  |   |
+                    Work pending                      |   | No work pending
+      +-----------------------------------------------+   |
+      |                                                    |
+      |                                       +------------+
+      v                                       |
+  +---+---------+                             v
+  | Save callee |                    +--------+------+
+  | saved       |                    | Restore caller| RFE from IT
+  | registers   |                    | saved         +-------->
+  | to pt_regs  |                    | registers     |
+  +--+-------+--+                    | from pt_regs  |
+     |       |                       +--------+------+
+     |       |        +---------+             ^
+     |       |        | Execute |             |
+     |       +------->+ needed  +-------------+
+     |                | work    |
+     |                +---------+
+     | Signal handler ?
+     v
+  +--+------------+  RFE to user  +-------------+        +--------------+
+  | Copy all      |    handler    | Execute     |  ret   | rt_sigreturn |
+  | registers     +-------------->+ user signal +------->+ trampoline   |
+  | from pt_regs  |               | handler     |        | to kernel    |
+  | to user stack |               +-------------+        +------+-------+
+  +---------------+                                             |
+                              syscall rt_sigreturn              |
+     +----------------------------------------------------------+
+     |
+     v
+  +--+-------------+                        +-------------+
+  | Recopy all     |                        | Restore all |  RFE
+  | registers from +----------------------->+ saved       +------->
+  | user stack     |  Return                | registers   |
+  | to pt_regs     |  from sigreturn        | from pt_regs|
+  +----------------+  (via ret_from_fork)   +-------------+
+
+Syscalls
+--------
+
+As explained before, for syscalls we can use whichever callee-saved registers
+we want, since syscalls are seen as "classic" function calls from the ABI
+point of view.
+The only path that differs is the one for clone. For this path, since the
+child expects to find the same callee-saved register contents as its parent,
+we must save them before executing the clone syscall and restore them
+afterwards for the child. This is done via a redefinition of __sys_clone in
+assembly, which is called in place of the standard sys_clone. This new call
+saves the callee-saved registers in pt_regs. The parent returns using the
+standard syscall path. The freshly spawned child, however, will be woken up
+via ret_from_fork, which restores all registers (even though the caller-saved
+ones are not needed).
diff --git a/Documentation/arch/kvx/kvx-iommu.rst b/Documentation/arch/kvx/kvx-iommu.rst
new file mode 100644
index 0000000000000..240995d315ce4
--- /dev/null
+++ b/Documentation/arch/kvx/kvx-iommu.rst
@@ -0,0 +1,191 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====
+IOMMU
+=====
+
+General Overview
+----------------
+
+To exchange data between a device and users through memory, the driver
+has to set up a buffer by doing some kernel allocation. The buffer uses
+virtual addresses, and the physical address is obtained through the MMU.
+When the device wants to access the same physical memory space, it uses
+a bus address, which is obtained by using the DMA mapping API. The
+Coolidge SoC includes several IOMMUs for clusters, PCIe peripherals,
+SoC peripherals, and more, that will translate this "bus address" into
+a physical one for DMA operations.
+
+Bus addresses are IOVAs (I/O Virtual Addresses) or DMA addresses. These
+addresses can be obtained by calling the allocation functions of the DMA APIs.
+They can also be obtained through classical kernel allocation of physically
+contiguous memory followed by calls to the mapping functions of the DMA API.
+
+In order to use the kvx IOMMU, we have implemented the IOMMU DMA
+interface in arch/kvx/mm/dma-mapping.c. DMA functions are registered by
+implementing arch_setup_dma_ops() and the generic IOMMU functions. The
+generic IOMMU functions call our specific IOMMU functions, which add or
+remove mappings between DMA addresses and physical addresses in the IOMMU
+TLB.
+
+The specific IOMMU functions are defined in the kvx IOMMU driver. The kvx
+IOMMU driver manages two physical hardware IOMMUs: one used for TX and one
+for RX. The next sections describe the HW IOMMUs.
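+
+As an illustration of the flow described above, here is a generic use of the
+DMA mapping API (standard kernel API; the ``dev`` pointer and the function
+are only examples, not code from this driver). On kvx, such a call goes
+through the IOMMU layer, which inserts the IOVA -> physical mapping::
+
+  #include <linux/dma-mapping.h>
+  #include <linux/errno.h>
+
+  static int example_map_buffer(struct device *dev, void *buf, size_t len,
+                                dma_addr_t *bus_addr)
+  {
+          /* bus_addr receives an IOVA once the IOMMU mapping exists. */
+          *bus_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
+          if (dma_mapping_error(dev, *bus_addr))
+                  return -ENOMEM;
+          return 0;
+  }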
+
+Cluster IOMMUs
+--------------
+
+The cluster IOMMUs are used for DMA and the cryptographic accelerators.
+There are six IOMMUs, connected to:
+
+ - the cluster DMA tx
+ - the cluster DMA rx
+ - the first non-secure cryptographic accelerator
+ - the second non-secure cryptographic accelerator
+ - the first secure cryptographic accelerator
+ - the second secure cryptographic accelerator
+
+SoC peripherals IOMMUs
+----------------------
+
+Since the SoC peripherals are connected to an AXI bus, two IOMMUs are used:
+one for each AXI channel (read and write). These two IOMMUs are shared
+between all master devices and DMA. They will have the same entries but need
+to be configured independently.
+
+PCIe IOMMUs
+-----------
+
+There is a slave IOMMU (read and write from the MPPA to the PCIe endpoint)
+and a master IOMMU (read and write from a PCIe endpoint to system DDR).
+The PCIe root complex and the MSI/MSI-X controller have been designed to use
+the IOMMU feature when enabled (for example, to support endpoints that only
+support 32-bit addresses and allow them to access any memory in a 64-bit
+address space). For security reasons it is highly recommended to
+activate the IOMMU for PCIe.
+
+IOMMU implementation
+--------------------
+
+The kvx provides several IOMMUs. Here is a simplified view of all IOMMUs
+and the translations that occur between memory and devices::
+
+  +---------------------------------------------------------------------+
+  | +------------+     +---------+                          | CLUSTER X |
+  | | Cores 0-15 +---->+ Crypto  |                          +-----------+
+  | +-----+------+     +----+----+                                      |
+  |       |                 |                                           |
+  |       v                 v                                           |
+  |   +-------+   +------------------------------+                     |
+  |   |  MMU  |   | IOMMU x4 (secure + insecure) |                     |
+  |   +---+---+   +------------------------------+                     |
+  |       |                                                             |
+  +-------+-------------------------------------------------------------+
+          |
+          v
+  +-------+------+
+  |    MEMORY    |     +----------+     +--------+     +-------+
+  |            <-+-----+ IOMMU Rx +<----+ DMA Rx +<----+       |
+  |              |     +----------+     +--------+     |       |
+  |              |                                     |  NoC  |
+  |              |     +----------+     +--------+     |       |
+  |            --+---->+ IOMMU Tx +---->+ DMA Tx +---->+       |
+  |              |     +----------+     +--------+     +-------+
+  |              |
+  |              |     +--------------+     +------+
+  |            <-+---->+ IOMMU Rx/Tx  +<--->+ PCIe |
+  |              |     +--------------+     +------+
+  |              |
+  |              |     +--------------+     +------------------------+
+  |            <-+---->+ IOMMU Rx/Tx  +<--->+ master SoC peripherals |
+  |              |     +--------------+     +------------------------+
+  +--------------+
+
+There is also an IOMMU dedicated to the crypto module, but this module will
+not be accessed by the operating system.
+
+We will provide one driver to manage the RX/TX IOMMUs. All of them will be
+described in the device tree so that their particularities can be obtained.
+See the example below, which describes the relationship between IOMMU, DMA
+and NoC in the cluster.
+
+When an IOMMU is related to a specific bus, like PCIe, we will be able to
+specify that all peripherals on that bus go through this IOMMU.
+
+IOMMU Page table
+~~~~~~~~~~~~~~~~
+
+We need to know which I/O virtual addresses (IOVA) are mapped in the TLB in
+order to remove entries when a device finishes a transfer and releases
+memory. This information could be extracted when needed by computing all sets
+used by the memory, then reading all sixteen ways and comparing them to the
+IOVA, but that would not be efficient. We also need to be able to translate
+an IOVA to a physical address, as required by the iova_to_phys IOMMU op used
+by DMA. As before, this can be done by extracting the set from the address
+and comparing the IOVA to each of the sixteen entries of the given set.
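+
+A minimal sketch of the software bookkeeping this implies (the sizes come
+from the hardware description above, 128 sets of 16 ways; the structure and
+names are illustrative, not the driver's actual ones)::
+
+  #define KVX_IOMMU_SETS 128
+  #define KVX_IOMMU_WAYS 16
+  #define KVX_PAGE_MASK  (~0xfffUL) /* 4KB pages assumed */
+
+  struct iommu_sw_entry {
+          unsigned long iova;
+          unsigned long paddr;
+          int valid;
+  };
+
+  static struct iommu_sw_entry sw_tlb[KVX_IOMMU_SETS][KVX_IOMMU_WAYS];
+
+  static unsigned long sw_iova_to_phys(unsigned long iova)
+  {
+          unsigned int set = (iova >> 12) % KVX_IOMMU_SETS;
+          unsigned int way;
+
+          for (way = 0; way < KVX_IOMMU_WAYS; way++) {
+                  struct iommu_sw_entry *e = &sw_tlb[set][way];
+
+                  if (e->valid && e->iova == (iova & KVX_PAGE_MASK))
+                          return e->paddr | (iova & ~KVX_PAGE_MASK);
+          }
+          return 0; /* no mapping */
+  }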
+
+A solution is to keep a page table for the IOMMU. But this method is not
+efficient for reloading a TLB entry without the help of a hardware page
+table walker. So, to avoid needing a refill, we update the TLB when a
+device requests access to memory; if there is no free slot in the TLB, we
+simply fail and the device will have to try again later. This is not
+efficient, but at least we do not need to manage TLB refills.
+
+This limits the total amount of memory that can be used for transfers
+between device and memory (see the Limitations section below).
+To manage bigger transfers, we can implement huge page support in the Linux
+kernel and use a page table whose page size matches the huge page size for a
+given IOMMU (typically the PCIe IOMMU).
+
+Since we do not refill the TLB, we know that there cannot be more than 128*16
+entries. In this case we can simply keep a table with all possible entries.
+
+Maintenance interface
+~~~~~~~~~~~~~~~~~~~~~
+
+It is possible to have several "maintainers" for the same IOMMU. The driver
+uses two of them: one interface writes the TLB and the other reads it. For
+debugging purposes, it is possible to display the content of the TLB by using
+the following command in gdb::
+
+  gdb> p kvx_iommu_dump_tlb( , 0)
+
+Since different maintenance interfaces are used for read and write, it is
+safe to execute the above command at any moment.
+
+Interrupts
+~~~~~~~~~~
+
+An IOMMU can raise 3 kinds of interrupts, corresponding to 3 different types
+of errors (no mapping, protection, parity). When the IOMMU is shared between
+clusters (SoC peripherals and PCIe), fifteen IRQs are generated according to
+the configuration of an association table. The association table is indexed
+by the ASN number (9 bits) and each entry is a subscription mask with one
+bit per destination. Currently this is not managed by the driver.
+
+The driver only manages interrupts for the cluster. The mode used is the
+stall one: when an interrupt occurs, it is managed by the driver, while all
+other incoming interrupts are stored and the IOMMU is stalled. When the
+driver clears the first interrupt, the others are managed one by one.
+
+ASN (Address Space Number)
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is also known as ASID on some other architectures. Each device is given
+an ASN through the device tree. As address spaces are managed at the IOMMU
+domain level, we use one group and one domain per ID.
+ASNs are coded on 9 bits.
+
+Device tree
+-----------
+
+Relationships between devices, DMAs and IOMMUs are described in the
+device tree (see `Documentation/devicetree/bindings/iommu/kalray,kvx-iommu.txt`
+for more details).
+
+Limitations
+-----------
+
+Supporting only a 4KB page size limits the amount of mapped memory to 8MB,
+because the IOMMU TLB can hold at most 128*16 entries.
diff --git a/Documentation/arch/kvx/kvx-mmu.rst b/Documentation/arch/kvx/kvx-mmu.rst
new file mode 100644
index 0000000000000..de2260f26f1fb
--- /dev/null
+++ b/Documentation/arch/kvx/kvx-mmu.rst
@@ -0,0 +1,256 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===
+MMU
+===
+
+Virtual addresses are 41 bits long.
+Bit 40 is set for kernel space mappings only.
+When bit 40 is set, the remaining higher bits must also be
+set to 1: the virtual address is sign-extended from bit 40 onwards.
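+
+A small sketch of this sign-extension rule (illustrative code, not from the
+port): a valid address equals its own 41-bit sign extension, and bit 40
+selects kernel space::
+
+  #include <stdbool.h>
+  #include <stdint.h>
+
+  static bool va_is_canonical(uint64_t va)
+  {
+          /* Replicate bit 40 over bits 63..41 and compare. */
+          int64_t ext = (int64_t)(va << 23) >> 23;
+
+          return (uint64_t)ext == va;
+  }
+
+  static bool va_is_kernel(uint64_t va)
+  {
+          return va_is_canonical(va) && (va & (1ULL << 40));
+  }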
+
+Memory Map
+----------
+
+ =================== =================== ==== ===== ======
+ Start               End                 Attr Size  Name
+ =================== =================== ==== ===== ======
+ 0000 0000 0000 0000 0000 003F FFFF FFFF ---  256GB User
+ 0000 0040 0000 0000 0000 007F FFFF FFFF ---  256GB MMAP
+ 0000 0080 0000 0000 FFFF FF7F FFFF FFFF ---  ---   Gap
+ FFFF FF80 0000 0000 FFFF FFFF FFFF FFFF ---  512GB Kernel
+ =================== =================== ==== ===== ======
+
+Kernel memory map
+-----------------
+
+ =================== =================== ==== ===== ==========
+ Start               End                 Attr Size  Name
+ =================== =================== ==== ===== ==========
+ FFFF FF80 0000 0000 FFFF FF8F FFFF FFFF RWX  64GB  Direct Map
+ FFFF FF90 0000 0000 FFFF FF90 3FFF FFFF RWX  1GB   Vmalloc
+ FFFF FF90 4000 0000 FFFF FFFF FFFF FFFF RW   447GB Free area
+ =================== =================== ==== ===== ==========
+
+Enable the MMU
+--------------
+
+All kernel functions and symbols are in virtual memory, except for the
+kvx_start() function, which is loaded at 0x0 in physical memory.
+To be able to switch from physical to virtual addresses, we chose to set up
+the TLB at the very beginning of the boot process, so that both pieces of
+code can be mapped. For this we added two entries in the LTLB. The first one,
+LTLB[0], contains the mapping between virtual memory and DDR. Its size is
+512MB. The second entry, LTLB[1], contains a flat mapping of the first 2MB
+of the SMEM. Once those two entries are present, we can enable the MMU.
+LTLB[1] is removed during paging_init() because, once we are really running
+in virtual space, it is not used anymore.
+In order to access more than 512MB of DDR memory, the remaining memory
+(> 512MB) is refilled using a comparison in kernel_perf_refill that does not
+walk the kernel page table, thus giving a faster refill time for the kernel.
+These entries are inserted into the LTLB for easier computation (4 LTLB
+entries). The drawback of this approach is that the mapped entries use RWX
+protection attributes, leading to no protection at all.
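+
+A hedged sketch of those two boot-time entries (the structure, field and
+constant names are invented for illustration; only the addresses and sizes
+come from this document)::
+
+  #define DDR_VIRT_BASE 0xffffff0000000000ULL /* kernel VA base */
+  #define DDR_PHYS_BASE 0x100000000ULL        /* first byte of DDR */
+
+  struct early_ltlb_entry {
+          unsigned long long virt, phys, size;
+  };
+
+  static const struct early_ltlb_entry boot_ltlb[] = {
+          /* LTLB[0]: kernel virtual space -> first 512MB of DDR */
+          { DDR_VIRT_BASE, DDR_PHYS_BASE, 512ULL << 20 },
+          /* LTLB[1]: flat mapping of the start of the SMEM, removed
+           * in paging_init() once we run in virtual space */
+          { 0x0, 0x0, 2ULL << 20 },
+  };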
+
+Privilege Level
+---------------
+
+Since we are using privilege levels on kvx, we use the virtual spaces to run
+in the same address space as the user. The kernel will have $ps.mmup set in
+kernel mode (PL1) and unset for user mode (PL2).
+As said in the kvx documentation, we have two cases when the kernel is
+booted:
+
+ - Either we have been booted by someone (a bootloader, a hypervisor, etc.)
+ - Or we are alone (boot from flash)
+
+In both cases, we will use virtual space 0. Indeed, if we are alone
+on the core, it means nobody is using the MMU and we can take the
+first virtual space. If we are not alone, then when writing an entry to the
+TLB using the writetlb instruction, the hypervisor will catch it and change
+the virtual space accordingly.
+
+Memblock
+========
+
+When the kernel starts, there is no memory allocator available. One of the
+first steps in the kernel is to detect the amount of DDR available, by
+getting this information from the device tree, and to initialize the
+low-level "memblock" allocator.
+
+We start by reserving memory for the whole kernel. For instance, with a
+device tree containing 512MB of DDR, you could see the following boot
+messages::
+
+  setup_bootmem: Memory  : 0x100000000 - 0x120000000
+  setup_bootmem: Reserved: 0x10001f000 - 0x1002d1bc0
+
+During the paging init we need to set:
+
+ - min_low_pfn: the lowest PFN available in the system
+ - max_low_pfn: the end of the NORMAL zone
+ - max_pfn: the number of pages in the system
+
+These settings are used for dividing memory into pages and for configuring
+the zones. See the memory map section for more information about zones.
+
+Zones are configured in free_area_init_core(). During start_kernel(), other
+allocations are done for the command line, cpu areas, the PID hash table and
+the different caches for the VFS. This allocator is used until mem_init() is
+called.
+
+mem_init() is provided by the architecture. For the MPPA we just call
+free_all_bootmem(), which will go through all pages that are not used by the
+low-level allocator and mark them as unused. So physical pages that are
+reserved for the kernel are still used and remain in physical memory. All
+released pages will now be used by the buddy allocator.
+
+Peripherals
+-----------
+
+Peripherals are mapped using the standard ioremap infrastructure; therefore
+the mapped addresses are located in the vmalloc space.
+
+LTLB Usage
+----------
+
+The LTLB is used to add resident mappings, which allow for faster MMU
+lookups. Currently, the LTLB is used to map some mandatory kernel pages and
+to allow fast accesses to the L2 cache (mailbox and registers).
+When CONFIG_STRICT_KERNEL_RWX is disabled, 4 entries are reserved for kernel
+TLB refill using 512MB pages. When CONFIG_STRICT_KERNEL_RWX is enabled, these
+entries are unused, since the kernel is paged using the same mechanism as for
+user processes (page walking and entries in the JTLB).
+
+Page Table
+==========
+
+We only support three levels for the page table and a 4KB page size.
+
+3 levels page table
+-------------------
+
+::
+
+  ...-----+--------+--------+--------+--------+--------+
+        40|39    32|31    24|23    16|15     8|7      0|
+  ...-----++-------+--+-----+---+----+----+---+--------+
+           |          |         |         |
+           |          |         |         +---> [11:0]  Offset (12 bits)
+           |          |         +-------------> [20:12] PTE offset (9 bits)
+           |          +-----------------------> [29:21] PMD offset (9 bits)
+           +----------------------------------> [39:30] PGD offset (10 bits)
+
+Bits 40 to 63 are sign-extended according to bit 39. If bit 39 is equal to 1,
+we are in kernel space.
+
+As 10 bits are used for the PGD, we need to allocate 2 pages for it.
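+
+The index extraction implied by the split above can be sketched as follows
+(illustrative helpers, assuming 4KB pages and three levels as described)::
+
+  #define PAGE_SHIFT  12
+  #define PMD_SHIFT   21
+  #define PGDIR_SHIFT 30
+
+  static inline unsigned long pte_index(unsigned long va)
+  {
+          return (va >> PAGE_SHIFT) & 0x1ff;  /* 9 bits */
+  }
+
+  static inline unsigned long pmd_index(unsigned long va)
+  {
+          return (va >> PMD_SHIFT) & 0x1ff;   /* 9 bits */
+  }
+
+  static inline unsigned long pgd_index(unsigned long va)
+  {
+          return (va >> PGDIR_SHIFT) & 0x3ff; /* 10 bits: 2-page PGD */
+  }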
+
+PTE format
+==========
+
+Since the hardware does not constrain the PTE format, we chose to follow the
+format described in the RISC-V implementation as a starting point::
+
+  +---------+--------+----+--------+---+---+---+---+---+---+------+---+---+
+  | 63..23  | 22..13 | 12 | 11..10 | 9 | 8 | 7 | 6 | 5 | 4 | 3..2 | 1 | 0 |
+  +---------+--------+----+--------+---+---+---+---+---+---+------+---+---+
+     PFN     Unused    S   PageSZ   H   G   X   W   R   D   CP     A   P
+       where:
+        P: Present
+        A: Accessed
+        CP: Cache policy
+        D: Dirty
+        R: Read
+        W: Write
+        X: Executable
+        G: Global
+        H: Huge page
+        PageSZ: Page size as set in TLB format (0:4KB, 1:64KB, 2:2MB, 3:512MB)
+        S: Soft/Special
+        PFN: Page frame number (depends on page size)
+
+The Huge bit must be somewhere in the first 12 bits so that it can be
+detected when reading the PMD entry.
+
+PageSZ must be on bits 10 and 11 because they match the TEL.PS bits, which
+makes it easier in assembly to set TEL.PS from PageSZ.
+
+Fast TLB refill
+===============
+
+The kvx core does not feature a hardware page table walker. This work must be
+done by the core in software. In order to optimize TLB refill, a special fast
+path is taken when entering kernel space.
+In order to speed up the process, the following actions are taken:
+
+ 1. Save some registers in a per-process scratchpad
+ 2. If the trap is a nomapping, try the fast path
+ 3. Save some more registers for this fast path
+ 4. Check if the faulting address is a memory direct mapping one.
+
+    * If the entry is a direct mapping one and RWX is not enabled, add an entry into the LTLB
+    * If not, continue
+
+ 5. Try to walk the page table
+
+    * If the entry is not present, take the slow path (do_page_fault)
+
+ 6. Refill the TLB properly
+ 7. Exit by restoring only a few registers
+
+ASN Handling
+============
+
+Disclaimer: some parts of this are taken from the ARC architecture.
+
+The kvx MMU provides a 9-bit ASN (Address Space Number) in order to tag TLB
+entries. It allows multiple processes with the same virtual space to cohabit
+without the need to flush the TLB on every context switch.
+The kvx implementation is based on other architectures (such as arc or
+xtensa) and uses a wrapping ASN counter containing both the cycle/generation
+and the asn::
+
+  +---------+--------+
+  |63     10|9      0|
+  +---------+--------+
+    Cycle      ASN
+
+This ASN counter is incremented monotonically to allocate new ASNs. When the
+counter reaches 511 (9 bits), the TLB is completely flushed and a new cycle
+is started. A new allocation cycle, post rollover, could potentially reassign
+an ASN to a different task, so the rule is to reallocate an ASN when the
+current context's cycle does not match the allocation cycle.
+The 64-bit @cpu_asn_cache (and mm->asn) holds the 9-bit MMU ASN in its low
+bits, while the remaining 55 bits serve as the cycle/generation indicator;
+natural 64-bit unsigned math automagically increments the generation when the
+lower 9 bits roll over.
+When the counter completely wraps, we reset it to the first cycle value
+(i.e. cycle = 1). This allows distinguishing a context without any ASN from
+an old-cycle value with the same operation (XOR on the cycle).
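+
+A condensed sketch of this allocation scheme (modeled on the description
+above and on ARC/xtensa; the names and details are illustrative, not the
+actual kvx code)::
+
+  #define ASN_MASK    0x1ffULL        /* 9-bit hardware ASN */
+  #define FIRST_CYCLE (ASN_MASK + 1)  /* cycle = 1, asn = 0 */
+
+  extern void local_flush_tlb_all(void); /* assumed arch helper */
+
+  static unsigned long long cpu_asn_cache = FIRST_CYCLE;
+
+  static unsigned long long get_new_context(unsigned long long mm_asn)
+  {
+          /* Same generation as the allocator: keep the current ASN. */
+          if (!((mm_asn ^ cpu_asn_cache) & ~ASN_MASK))
+                  return mm_asn;
+
+          cpu_asn_cache++;
+          if (!(cpu_asn_cache & ASN_MASK)) {  /* 9-bit rollover */
+                  local_flush_tlb_all();      /* new cycle: flush all */
+                  if (!cpu_asn_cache)         /* complete 64-bit wrap */
+                          cpu_asn_cache = FIRST_CYCLE;
+          }
+          return cpu_asn_cache;
+  }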
+
+Huge page
+=========
+
+Currently, only the 3-level page table has been implemented, for the 4KB base
+page size. So the page shift is 12 bits, the PMD shift is 21 bits and the
+PGDIR shift is 30 bits.
+This choice implies that, for a 4KB base page size, using a PMD as a huge
+page yields a 2MB size, and using a PUD as a huge page yields 1GB.
+
+To support the other huge page sizes (64KB and 512MB) we need to use several
+contiguous entries in the page table. For a 64KB huge page, 16 entries of
+the PTE are used; for a 512MB huge page, 256 entries of the PMD are used.
+
+Debug
+=====
+
+In order to debug the page table and TLB entries, the gdb scripts contain
+commands which allow dumping the page table:
+
+:``lx-kvx-page-table-walk``: Display the current process page table by default
+:``lx-kvx-tlb-decode``: Display the content of $tel and $teh into something readable
+
+Other commands available in kvx-gdb are the following:
+
+:``mppa-dump-tlb``: Display the content of the TLBs (JTLB and LTLB)
+:``mppa-lookup-addr``: Find the physical address matching a virtual one
diff --git a/Documentation/arch/kvx/kvx-smp.rst b/Documentation/arch/kvx/kvx-smp.rst
new file mode 100644
index 0000000000000..a41b0738d1699
--- /dev/null
+++ b/Documentation/arch/kvx/kvx-smp.rst
@@ -0,0 +1,41 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===
+SMP
+===
+
+The Coolidge SoC is comprised of 5 clusters, each organized as a group
+of 17 cores: 16 application cores (PE) and 1 secure core (RM).
+These 17 cores have their L1 caches coherent with the local Tightly
+Coupled Memory (TCM or SMEM). The L2 cache, which is necessary for SMP
+support, is implemented with a mix of HW support and SW firmware. The L2
+cache data and meta-data are stored in the TCM.
+The RM core is not meant to run Linux and is reserved for implementing
+hypervisor services, so only 16 processors are available for SMP.
+
+Booting
+-------
+
+When booting the kvx processor, only the RM is woken up. The RM will
+execute a portion of code located in the section named ``.rm_firmware``.
+By default, a simple power-off code is embedded in this section.
+To avoid embedding the firmware in the kernel sources, the section is patched
+using external tools to add the L2 firmware (and replace the default one).
+Before executing this firmware, the RM boots PE0. PE0 will then enable L2
+coherency, and its requests will be stalled until the RM starts the L2
+firmware.
+
+Locking primitives
+------------------
+
+spinlocks/rwlocks use the kernel standard queued spinlock/rwlock
+implementations. These primitives are based on cmpxchg and xchg. More
+precisely, they use xchg16, which is implemented as a read-modify-write with
+acswap on a 32-bit word, since kvx does not have atomic cmpxchg instructions
+for sizes below 32 bits.
+
+IPI
+---
+
+An IPI controller allows CPUs to communicate using a simple
+memory-mapped register. This register can simply be written with a
+mask to trigger interrupts directly on the cores matching the mask.
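+
+A sketch of what sending an IPI boils down to (the iomem handle and the
+register offset are hypothetical; only the mask-per-core behaviour comes
+from this document)::
+
+  #include <linux/io.h>
+
+  static void __iomem *ipi_regs; /* ioremap()ed IPI controller base */
+
+  static void kvx_ipi_send_mask_sketch(unsigned int core_mask)
+  {
+          /* One bit per PE: all cores matching the mask get the IRQ. */
+          writel(core_mask, ipi_regs /* + hypothetical trigger offset */);
+  }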
+
diff --git a/Documentation/arch/kvx/kvx.rst b/Documentation/arch/kvx/kvx.rst
new file mode 100644
index 0000000000000..c222328aa4b2a
--- /dev/null
+++ b/Documentation/arch/kvx/kvx.rst
@@ -0,0 +1,273 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========
+kvx Linux
+=========
+
+This document explains the architecture choices made in the kvx
+Linux port.
+
+Regarding peripherals, we MUST use the device tree to describe ALL
+peripherals. The bindings should always start with "kalray,kvx" for all
+core-related peripherals (watchdog, timer, etc.)
+
+System Architecture
+-------------------
+
+On kvx, we have 4 privilege levels, starting from 0 (the most privileged
+one) to 3 (the least privileged one). A system of owners allows delegating
+the ownership of resources by using special system registers.
+
+The 2 main software stacks for the Linux kernel are the following::
+
+  +-------------+       +-------------+
+  | PL0: Debug  |       | PL0: Debug  |
+  +-------------+       +-------------+
+  | PL1: Linux  |       | PL1: HyperV |
+  +-------------+       +-------------+
+  | PL2: User   |       | PL2: Linux  |
+  +-------------+       +-------------+
+  |             |       | PL3: User   |
+  +-------------+       +-------------+
+
+In both cases, the kvx support for privileges has been designed using
+only relative PLs and thus should work in both configurations without
+any modification.
+
+When booting, the CPU is executing in PL0 and owns all the privileges.
+This level is almost dedicated to the debug routines for the debugger.
+It only needs to own a few privileges (breakpoint 0 and watchpoint 0) to
+be able to debug a system executing from PL1 to PL3.
+Debug routines are not always there, for instance when the kernel is
+executing alone (booted from flash).
+In order to ease the load of the debug routines, the software convention is
+to jump directly to PL1 and leave PL0 to the debugger.
+When the kernel boots, it checks whether the current privilege level is 0
+(`$ps.pl` is the only absolute value). If so, it delegates almost all
+resources to PL1 and uses an RFE to lower its execution privilege level
+(see asm_delegate_pl in head.S).
+If the current PL is already different from 0, then it means somebody is
+above us and we need to request resources to inform it that we need them. It
+will then either delegate them to us directly or virtualize the delegation.
+All privilege levels have their set of banked registers (ps, ea, sps,
+sr, etc.) which contain privilege-level-specific values.
+`$sr` (system reserved) is banked and holds the current task_struct.
+This register is reserved and should not be touched by any other code.
+For more information, refer to the kvx system level architecture manual.
+
+Boot
+----
+
+On kvx, the RM (secure core) of Cluster 0 boots first. It is then able to
+boot a firmware. This firmware is stored in the rm_firmware section.
+The first argument ($r0) of this firmware will be a pointer to a function
+with the following prototype: void firmware_init_done(uint64_t features).
+This function is responsible for describing the features supported by the
+firmware and will start the first PE after that.
+By default, the rm_firmware function acts as the "default" firmware. This
+function does nothing except calling firmware_init_done and then going to
+sleep. In order to add another firmware, the rm_firmware section is patched
+using objcopy. The content of this section is then replaced by the provided
+firmware. This firmware will do its init and then call firmware_init_done
+before running its main loop.
+When a PE boots, it checks the firmware features in order to enable or
+disable specific core features (the L2 cache, for instance).
+
+When entering C code (kvx_lowlevel_start), the kernel looks for a special
+magic (``0x494C314B``) in ``$r0``. This magic tells the kernel whether
+arguments were passed by a bootloader.
+Currently, the following values are passed through registers:
+
+ :``$r1``: pointer to the command line set up by the bootloader
+ :``$r2``: device tree
+
+If this magic is not set, then the command line will be the one provided in
+the device tree (see bootargs). The default device tree is not built in, but
+will be patched into the dtb section by the runner used (simulator or jtag).
+
+A default stdout-path is desirable to allow early printk.
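+
+A sketch of this convention (how ``$r0``-``$r2`` reach C code is
+architecture glue, so this prototype is hypothetical; the magic value is the
+one given above)::
+
+  #define KVX_BOOT_MAGIC 0x494C314BULL
+
+  static void parse_boot_args(unsigned long r0, unsigned long r1,
+                              unsigned long r2)
+  {
+          if (r0 == KVX_BOOT_MAGIC) {
+                  const char *cmdline = (const char *)r1; /* bootloader */
+                  void *dtb = (void *)r2;
+
+                  /* use the bootloader command line and device tree */
+                  (void)cmdline;
+                  (void)dtb;
+          }
+          /* otherwise: fall back to the device tree bootargs */
+  }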
+
+Boot Memory Allocator
+---------------------
+
+The boot memory allocator is used to allocate memory before paging is
+enabled. It is initialized with the DDR and also with the shared memory. The
+first one is initialized during setup_bootmem() and the second one when
+calling early_init_fdt_scan_reserved_mem().
+
+
+Virtual and physical memory
+---------------------------
+
+The mapping used and the memory management are described in
+Documentation/arch/kvx/kvx-mmu.rst.
+The kernel is compiled using virtual addresses that start at
+``0xffffff0000000000``, but when it is started the kernel uses physical
+addresses. Before calling the first function, arch_low_level_start(), we
+configure 2 entries of the LTLB.
+
+The first entry maps the first 512MB of virtual address space to the first
+512MB of DDR::
+
+  TLB[0]: 0xffffff0000000000 -> 0x100000000 (size 512MB)
+
+The second entry is a flat mapping of the first 512KB of the SMEM. It is
+required to have this flat mapping because there is still code located at
+this address that needs to be executed::
+
+  TLB[1]: 0x0 -> 0x0 (size 512KB)
+
+Once virtual space is reached, the second entry is removed.
+
+To be able to set breakpoints when the MMU is enabled, we added a label
+called gdb_mmu_enabled. If you try to set a breakpoint on a function located
+in virtual memory before the activation of the MMU, this address has no
+meaning for GDB. So, for example, if you want to break on the function
+start_kernel(), you will need to run::
+
+  kvx-gdb -silent path_to/vmlinux \
+      -ex 'tbreak gdb_mmu_enabled' -ex 'run' \
+      -ex 'break start_kernel' \
+      -ex 'continue'
+
+We will also add an option to kvx-gdb to simplify this step.
+
+Timers
+------
+
+The free-running clock (clocksource) is based on the DSU. This clock is
+not interruptible and never stops, even if a core goes idle.
+
+Regarding the tick (clockevent), we use timer 0 available on the core.
+This timer allows setting a periodic tick, which is used as the main
+tick for each core. Note that this timer is per-cpu.
+
+The get_cycles implementation is based on a performance counter; one of them
+is used to count cycles. Note that since it is only used while the core is
+running, there is no need to worry about the core sleeping (which stops the
+cycle counter).
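+
+A minimal sketch using the generic get_cycles() interface mentioned above
+(from <linux/timex.h>; valid only while the core is running, since the
+counter stops when the core sleeps)::
+
+  #include <linux/timex.h>
+
+  static unsigned long long measure_cycles(void (*fn)(void))
+  {
+          cycles_t start = get_cycles();
+
+          fn();
+          return (unsigned long long)(get_cycles() - start);
+  }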
+
+Context switching
+-----------------
+
+Context switching is done in entry.S. When spawning a fresh thread,
+copy_thread is called. During this call, we set up the callee-saved
+registers ``r20`` and ``r21`` with special values containing the function
+to call.
+
+The normal path for a kernel thread is the following:
+
+ 1. Enter copy_thread_tls and set up the callee-saved registers which will
+    be restored in __switch_to.
+ 2. Set r20 and r21 (in thread_struct) to the function and its argument,
+    and ra to ret_from_kernel_thread.
+    These callee-saved registers will be restored in __switch_to.
+ 3. Call __switch_to at some point.
+ 4. Save all callee-saved registers, since __switch_to is seen as a
+    standard function call by the caller.
+ 5. Change the stack pointer to the new stack.
+ 6. At the end of __switch_to, set sr0 to the new task and use ret to
+    jump to ret_from_kernel_thread (address restored from ra).
+ 7. In ret_from_kernel_thread, execute the function with its arguments by
+    using r20 and r21, and we are done.
+
+For more explanations, you can refer to https://lwn.net/Articles/520227/
+
+User thread creation
+--------------------
+
+We use almost the same path as for kernel threads to create a user thread.
+The detailed path is the following:
+
+ 1. Call start_thread, which will set up the user pc and stack pointer in
+    the task regs. We also set sps and clear the privilege mode bit.
+    When returning from the exception, the thread will "flip" to user mode.
+ 2. Enter copy_thread_tls and set up the callee-saved registers which will
+    be restored in __switch_to. Also, set the "return" function to be
+    ret_from_fork, which will be called at the end of __switch_to.
+ 3. Set r20 (in thread_struct) with tracing information
+    (simply out of laziness, to avoid computing it in assembly...)
+ 4. Call __switch_to at some point.
+ 5. The current pc will then be restored to be ret_from_fork.
+ 6. ret_from_fork calls schedule_tail and then checks whether tracing is
+    enabled. If so, it calls syscall_trace_exit.
+ 7. Finally, instead of returning to the kernel, we restore all registers
+    that have been set up by start_thread, by restoring the regs stored on
+    the stack.
+
+L2 handling
+-----------
+
+On kvx, the L2 cache is handled by a firmware running on the RM. This
+firmware needs various information to be aware of its configuration and to
+communicate with the kernel. In order to do that, when the firmware is
+starting, the device tree is given as a parameter along with the "registers"
+zone. This zone is simply a memory area where data are exchanged between the
+kernel and the L2 firmware. When some commands are written to it, the kernel
+sends an interrupt using a mailbox.
+If the L2 node is not present in the device tree, the RM will directly go to
+sleep.
+
+Boot diagram::
+
+        RM                       PE 0
+
+   +---------+                     |
+   |  Boot   |                     |
+   +----+----+                     |
+        |                          |
+        v                          |
+  +-----+-----+                    |
+  |  Prepare  |                    |
+  | L2 shared |                    |
+  |  memory   |                    |
+  |(registers)|                    |
+  +-----+-----+                    |
+        |                    +-----------+
+        +------------------->+   Boot    |
+        |                    +-----+-----+
+        v                          |
+  +-----+----------+               |
+  |  L2 firmware   |               |
+  |  parameters:   |               |
+  | r0 = registers |               |
+  | r1 = DTB       |               |
+  +-----+----------+               |
+        |                          |
+        v                          |
+  +-----+--------+          +------+------+
+  | L2 firmware  |          | Wait for L2 |
+  |  execution   |          | to be ready |
+  +-----+--------+          +------+------+
+        |                          |
+  +-----v--------+                 v
+  | L2 requests  |          +------+------+
+  +-->+ handling |          |   Enable    |
+  |   +----+-----+          | L2 caching  |
+  |        |                +------+------+
+  +--------+                       |
+                                   v
+
+Since this driver is started early (before SMP boot), a lot of drivers are
+not yet probed (mailboxes, IOMMU, etc.) and thus cannot be used.
+
+Building
+--------
+
+In order to build the kernel, you will need a complete kvx toolchain.
+First, set up the config using the following command line::
+
+  $ make ARCH=kvx O=your_directory defconfig
+
+Adjust any configuration option you may need and then build the kernel::
+
+  $ make ARCH=kvx O=your_directory -j12
+
+You will finally have a vmlinux image ready to be run::
+
+  $ kvx-mppa -- vmlinux
+
+Additionally, you may want to debug it.
+To do so, use kvx-gdb::
+
+  $ kvx-gdb vmlinux
--
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Jonathan Borne, Julian Vetter, Yann Sionneau
Cc: devicetree@vger.kernel.org
Subject: [RFC PATCH v3 02/37] dt-bindings: soc: kvx: Add binding for kalray,coolidge-pwr-ctrl
Date: Mon, 22 Jul 2024 11:41:13 +0200
Message-ID: <20240722094226.21602-3-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add a binding for the Kalray Coolidge SoC cluster power controller.

Signed-off-by: Yann Sionneau
---

Notes:
    V2 -> V3: New patch

 .../soc/kvx/kalray,coolidge-pwr-ctrl.yaml     | 37 +++++++++++++++++++
 1 file changed, 37 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/soc/kvx/kalray,coolidge-pwr-ctrl.yaml

diff --git a/Documentation/devicetree/bindings/soc/kvx/kalray,coolidge-pwr-ctrl.yaml b/Documentation/devicetree/bindings/soc/kvx/kalray,coolidge-pwr-ctrl.yaml
new file mode 100644
index 0000000000000..e0363a080ac11
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/kvx/kalray,coolidge-pwr-ctrl.yaml
@@ -0,0 +1,37 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/kalray/kalray,coolidge-pwr-ctrl.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Kalray Coolidge cluster Power Controller (pwr-ctrl)
+
+maintainers:
+  - Jonathan Borne
+  - Julian Vetter
+  - Yann Sionneau
+
+description: |
+  The Power Controller (pwr-ctrl) controls the cores' reset and wake-up
+  procedure.
+
+properties:
+  compatible:
+    const: kalray,coolidge-pwr-ctrl
+
+  reg:
+    maxItems: 1
+
+additionalProperties: false
+
+required:
+  - compatible
+  - reg
+
+examples:
+  - |
+    pwr_ctrl: pwr-ctrl@a40000 {
+        compatible = "kalray,coolidge-pwr-ctrl";
+        reg = <0x00 0xa40000 0x00 0x4158>;
+    };
+
+...
--
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Thomas Gleixner, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Jonathan Borne, Julian Vetter, Yann Sionneau
Cc: Jules Maselbas, devicetree@vger.kernel.org
Subject: [RFC PATCH v3 03/37] dt-bindings: Add binding for kalray,kv3-1-intc
Date: Mon, 22 Jul 2024 11:41:14 +0200
Message-ID: <20240722094226.21602-4-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add a binding for the Kalray kv3-1 core interrupt controller.

Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Signed-off-by: Yann Sionneau
---

Notes:
    V2 -> V3: Fixed bindings to adhere to dt-schema

 .../kalray,kv3-1-intc.yaml                    | 54 +++++++++++++++++++
 1 file changed, 54 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interrupt-controller/kalray,kv3-1-intc.yaml

diff --git a/Documentation/devicetree/bindings/interrupt-controller/kalray,kv3-1-intc.yaml b/Documentation/devicetree/bindings/interrupt-controller/kalray,kv3-1-intc.yaml
new file mode 100644
index 0000000000000..9c8bb2c8c49dd
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/kalray,kv3-1-intc.yaml
@@ -0,0 +1,54 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interrupt-controller/kalray,kv3-1-intc.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Kalray kv3-1 Core Interrupt Controller
+
+maintainers:
+  - Jonathan Borne
+  - Julian Vetter
+  - Yann Sionneau
+
+description: |
+  The Kalray Core Interrupt Controller is tightly integrated in each kv3 core
+  present in the Coolidge SoC.
+
+  It provides the following features:
+  - 32 independent interrupt sources
+  - 2-bit configurable priority level
+  - 2-bit configurable ownership level
+
+properties:
+  compatible:
+    const: kalray,kv3-1-intc
+
+  "#interrupt-cells":
+    const: 1
+    description:
+      The IRQ number.
+
+  "#address-cells":
+    const: 0
+
+  interrupt-controller: true
+
+additionalProperties: false
+
+required:
+  - compatible
+  - "#interrupt-cells"
+  - "#address-cells"
+  - interrupt-controller
+
+examples:
+  - |
+    intc: interrupt-controller {
+        compatible = "kalray,kv3-1-intc";
+        #interrupt-cells = <1>;
+        #address-cells = <0>;
+        interrupt-controller;
+    };
+
+...
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
Subject: [RFC PATCH v3 04/37] dt-bindings: Add binding for kalray,coolidge-apic-gic
Date: Mon, 22 Jul 2024 11:41:15 +0200
Message-ID: <20240722094226.21602-5-ysionneau@kalrayinc.com>
From: Yann Sionneau

Add binding for Kalray Coolidge APIC GIC interrupt controller.

Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3: Fixed bindings to adhere to dt-schema
---
 .../kalray,coolidge-apic-gic.yaml | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-apic-gic.yaml

diff --git a/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-apic-gic.yaml b/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-apic-gic.yaml
new file mode 100644
index 0000000000000..02e25256c1c1d
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-apic-gic.yaml
@@ -0,0 +1,87 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interrupt-controller/kalray,coolidge-apic-gic.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Kalray kv3-1 APIC-GIC
+
+maintainers:
+  - Jonathan Borne
+  - Julian Vetter
+  - Yann Sionneau
+
+description: |
+  Each cluster in the Coolidge SoC includes an Advanced Programmable Interrupt
+  Controller (APIC) which is split into two parts:
+    - a Generic Interrupt Controller (referred to as the APIC-GIC)
+    - a Mailbox Controller (referred to as the APIC-Mailbox)
+  The APIC-GIC acts as an intermediary interrupt controller, muxing/routing
+  incoming interrupts to output interrupts connected to the kvx cores'
+  interrupt lines.
+  The 139 possible input interrupt lines are organized as follows:
+    - 128 from the mailbox controller (one interrupt per mailbox)
+    - 1 from the NoC router
+    - 5 from the IOMMUs
+    - 1 from the L2 cache DMA job FIFO
+    - 1 from the cluster watchdog
+    - 2 for SECC and DECC
+    - 1 from the Data NoC
+  The 72 possible output interrupt lines are:
+    - 68: 4 interrupts per core (17 cores)
+    - 1 for the L2 cache controller
+    - 3 extra for padding
+
+properties:
+  compatible:
+    const: kalray,coolidge-apic-gic
+
+  reg:
+    maxItems: 1
+
+  "#interrupt-cells":
+    const: 1
+    description: The IRQ number.
+
+  interrupt-controller: true
+
+  interrupts-extended:
+    maxItems: 16
+    description: |
+      Specifies the interrupt line(s) in the interrupt-parent controller node;
+      valid values depend on the type of parent interrupt controller.
+
+required:
+  - compatible
+  - reg
+  - "#interrupt-cells"
+  - interrupt-controller
+  - interrupts-extended
+
+additionalProperties: false
+
+examples:
+  - |
+    apic_gic: interrupt-controller@a20000 {
+        compatible = "kalray,coolidge-apic-gic";
+        reg = <0 0xa20000 0 0x12000>;
+        #interrupt-cells = <1>;
+        interrupt-controller;
+        interrupts-extended = <&core_intc0 0x4>,
+                              <&core_intc1 0x4>,
+                              <&core_intc2 0x4>,
+                              <&core_intc3 0x4>,
+                              <&core_intc4 0x4>,
+                              <&core_intc5 0x4>,
+                              <&core_intc6 0x4>,
+                              <&core_intc7 0x4>,
+                              <&core_intc8 0x4>,
+                              <&core_intc9 0x4>,
+                              <&core_intc10 0x4>,
+                              <&core_intc11 0x4>,
+                              <&core_intc12 0x4>,
+                              <&core_intc13 0x4>,
+                              <&core_intc14 0x4>,
+                              <&core_intc15 0x4>;
+    };
+
+...
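[The 4-interrupts-per-core arrangement described above suggests a simple index computation for the 68 per-core output lines. The standalone C sketch below encodes that arithmetic; the linear core * 4 + slot mapping is an assumption inferred from the description, not a documented register layout.]

/*
 * Hedged sketch of the APIC-GIC output-line arithmetic: 68 of the 72
 * outputs are 4 per core across 17 cores (PE0-PE15 plus the RM core),
 * line 68 goes to the L2 cache controller and the last 3 are padding.
 */
#include <assert.h>
#include <stdio.h>

#define GIC_LINES_PER_CORE 4
#define GIC_NR_CORES       17  /* PE0..PE15 + RM */
#define GIC_L2_OUTPUT      (GIC_LINES_PER_CORE * GIC_NR_CORES)  /* 68 */

static int gic_core_output(int core, int slot)
{
	assert(core >= 0 && core < GIC_NR_CORES);
	assert(slot >= 0 && slot < GIC_LINES_PER_CORE);
	return core * GIC_LINES_PER_CORE + slot;  /* assumed linear map */
}

int main(void)
{
	printf("core 3, slot 0 -> output %d\n", gic_core_output(3, 0));
	printf("L2 controller output: %d\n", GIC_L2_OUTPUT);
	return 0;
}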
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
Subject: [RFC PATCH v3 05/37] dt-bindings: Add binding for kalray,coolidge-apic-mailbox
Date: Mon, 22 Jul 2024 11:41:16 +0200
Message-ID: <20240722094226.21602-6-ysionneau@kalrayinc.com>
From: Yann Sionneau

Add binding for Kalray Coolidge APIC Mailbox interrupt-controller.

Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3: Fixed bindings to adhere to dt-schema
---
 .../kalray,coolidge-apic-mailbox.yaml | 90 +++++++++++++++++++
 1 file changed, 90 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-apic-mailbox.yaml

diff --git a/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-apic-mailbox.yaml b/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-apic-mailbox.yaml
new file mode 100644
index 0000000000000..334b816b80583
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-apic-mailbox.yaml
@@ -0,0 +1,90 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interrupt-controller/kalray,coolidge-apic-mailbox.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Kalray Coolidge APIC-Mailbox
+
+maintainers:
+  - Jonathan Borne
+  - Julian Vetter
+  - Yann Sionneau
+
+description: |
+  Each cluster in the Coolidge SoC includes an Advanced Programmable Interrupt
+  Controller (APIC) which is split into two parts:
+    - a Generic Interrupt Controller (referred to as the APIC-GIC)
+    - a Mailbox Controller (referred to as the APIC-Mailbox)
+  The APIC-Mailbox contains 128 mailboxes of 8 bytes (the size of a word);
+  this hardware block is basically 1 KB of smart memory space.
+  Each mailbox can be independently configured with a trigger condition
+  and an input mode function.
+
+  Input modes are:
+    - write
+    - bitwise OR
+    - add
+
+  Interrupts are generated on a write when the mailbox content value
+  matches the configured trigger condition.
+  Available conditions are:
+    - doorbell: always raise an interrupt on write
+    - match: when the mailbox's value equals the configured trigger value
+    - barrier: same as match, but the mailbox's value is cleared on trigger
+    - threshold: when the mailbox's value is greater than, or equal to, the
+      configured trigger value
+
+  Since this hardware block generates IRQs based on writes to some memory
+  locations, it is both an interrupt controller and an MSI controller.
+
+properties:
+  compatible:
+    const: kalray,coolidge-apic-mailbox
+
+  reg:
+    maxItems: 1
+
+  "#interrupt-cells":
+    const: 0
+    description:
+      The IRQ number.
+
+  "#address-cells":
+    const: 0
+
+  interrupt-controller: true
+
+  interrupts:
+    maxItems: 128
+    minItems: 1
+    description: |
+      Specifies the interrupt line(s) in the interrupt-parent controller node;
+      valid values depend on the type of parent interrupt controller.
+
+  msi-controller: true
+
+additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - "#interrupt-cells"
+  - "#address-cells"
+  - interrupt-controller
+  - interrupts
+  - msi-controller
+
+examples:
+  - |
+    apic_mailbox: interrupt-controller@a00000 {
+        compatible = "kalray,coolidge-apic-mailbox";
+        reg = <0 0xa00000 0 0x0f200>;
+        #interrupt-cells = <0>;
+        interrupt-controller;
+        interrupt-parent = <&apic_gic>;
+        interrupts = <0>, <1>, <2>, <3>, <4>, <5>, <6>, <7>, <8>, <9>;
+        msi-controller;
+    };
+
+...
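[The input modes and trigger conditions listed above can be modeled in plain software. The following is a behavioral sketch only, with no real MMIO; it encodes the write/OR/add input functions and the doorbell/match/barrier/threshold conditions exactly as described in the binding.]

/*
 * Hedged software model of one APIC mailbox: an input function applied
 * on write and a trigger condition deciding when an interrupt fires.
 * Purely illustrative; no real registers are involved.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum mbox_func { MBOX_WRITE, MBOX_OR, MBOX_ADD };
enum mbox_trig { MBOX_DOORBELL, MBOX_MATCH, MBOX_BARRIER, MBOX_THRESHOLD };

struct mbox {
	uint64_t value;    /* the 8-byte mailbox word */
	uint64_t trigger;  /* configured trigger value */
	enum mbox_func func;
	enum mbox_trig trig;
};

/* Returns true when the write raises an interrupt. */
static bool mbox_write(struct mbox *mb, uint64_t data)
{
	switch (mb->func) {
	case MBOX_WRITE: mb->value = data;  break;
	case MBOX_OR:    mb->value |= data; break;
	case MBOX_ADD:   mb->value += data; break;
	}

	switch (mb->trig) {
	case MBOX_DOORBELL:
		return true;
	case MBOX_MATCH:
		return mb->value == mb->trigger;
	case MBOX_BARRIER:
		if (mb->value == mb->trigger) {
			mb->value = 0;  /* cleared on trigger */
			return true;
		}
		return false;
	case MBOX_THRESHOLD:
		return mb->value >= mb->trigger;
	}
	return false;
}

int main(void)
{
	/* An "add" mailbox used as a barrier that fires after 4 writes. */
	struct mbox mb = { .trigger = 4, .func = MBOX_ADD, .trig = MBOX_BARRIER };

	for (int i = 0; i < 4; i++)
		printf("add 1 -> irq=%d value=%llu\n", mbox_write(&mb, 1),
		       (unsigned long long)mb.value);
	return 0;
}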
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
Subject: [RFC PATCH v3 06/37] dt-bindings: Add binding for kalray,coolidge-itgen
Date: Mon, 22 Jul 2024 11:41:17 +0200
Message-ID: <20240722094226.21602-7-ysionneau@kalrayinc.com>
From: Yann Sionneau

Add binding for Kalray Coolidge Interrupt Generator.

Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3: Fixed bindings to adhere to dt-schema
---
 .../kalray,coolidge-itgen.yaml | 55 +++++++++++++++++++
 1 file changed, 55 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-itgen.yaml

diff --git a/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-itgen.yaml b/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-itgen.yaml
new file mode 100644
index 0000000000000..222cacb5cbea4
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-itgen.yaml
@@ -0,0 +1,55 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interrupt-controller/kalray,coolidge-itgen.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Kalray Coolidge SoC Interrupt Generator (ITGEN)
+
+maintainers:
+  - Jonathan Borne
+  - Julian Vetter
+  - Yann Sionneau
+
+description: |
+  The Interrupt Generator (ITGEN) is an interrupt controller block.
+  Its purpose is to convert IRQ lines coming from SoC peripherals into writes
+  on the AXI bus. The ITGEN is intended to write into the APIC mailboxes.
+
+properties:
+  compatible:
+    const: kalray,coolidge-itgen
+
+  reg:
+    maxItems: 1
+
+  "#interrupt-cells":
+    const: 2
+    description: |
+      - 1st cell is the IRQ number
+      - 2nd cell is the trigger type, as defined in
+        dt-bindings/interrupt-controller/irq.h
+
+  interrupt-controller: true
+
+  msi-parent: true
+
+additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - "#interrupt-cells"
+  - interrupt-controller
+  - msi-parent
+
+examples:
+  - |
+    itgen: interrupt-controller@27000000 {
+        compatible = "kalray,coolidge-itgen";
+        reg = <0 0x27000000 0 0x1104>;
+        #interrupt-cells = <2>;
+        interrupt-controller;
+        msi-parent = <&apic_mailbox>;
+    };
+
+...
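[Conceptually, each ITGEN event is one AXI write aimed at a mailbox word supplied by the msi-parent, and the mailbox then turns that write back into a core interrupt. The sketch below models that idea; the struct and the values used are invented for illustration and are not the driver's API.]

/*
 * Hedged sketch of the ITGEN concept: a wired peripheral IRQ becomes a
 * memory write (an MSI) targeting an APIC mailbox. Address and data
 * stand in for an msi-parent-provided doorbell; both are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

struct msi_doorbell {
	volatile uint64_t *addr;  /* mailbox word, from the msi-parent */
	uint64_t data;            /* payload to write on trigger */
};

static void itgen_fire(const struct msi_doorbell *db)
{
	*db->addr = db->data;  /* one AXI write = one interrupt event */
}

int main(void)
{
	uint64_t fake_mailbox = 0;
	struct msi_doorbell db = { .addr = &fake_mailbox, .data = 1 };

	itgen_fire(&db);
	printf("mailbox now holds %llu\n", (unsigned long long)fake_mailbox);
	return 0;
}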
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
Subject: [RFC PATCH v3 07/37] dt-bindings: Add binding for kalray,coolidge-ipi-ctrl
Date: Mon, 22 Jul 2024 11:41:18 +0200
Message-ID: <20240722094226.21602-8-ysionneau@kalrayinc.com>
From: Yann Sionneau

Add binding for Kalray Coolidge IPI controller.

Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3:
      - fixed bindings to adhere to dt-schema
      - moved to interrupt-controller directory, like the related driver
---
 .../kalray,coolidge-ipi-ctrl.yaml | 79 +++++++++++++++++++
 1 file changed, 79 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-ipi-ctrl.yaml

diff --git a/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-ipi-ctrl.yaml b/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-ipi-ctrl.yaml
new file mode 100644
index 0000000000000..91e3afe4f1ca5
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/kalray,coolidge-ipi-ctrl.yaml
@@ -0,0 +1,79 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interrupt-controller/kalray,coolidge-ipi-ctrl.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Kalray Coolidge SoC Inter-Processor Interrupt Controller (IPI)
+
+maintainers:
+  - Jonathan Borne
+  - Julian Vetter
+  - Yann Sionneau
+
+description: |
+  The Inter-Processor Interrupt Controller (IPI) provides a fast
+  synchronization mechanism to the software. It exposes eight independent
+  sets of registers that can be used to notify each processor in the cluster.
+  A set of registers contains two 32-bit registers:
+    - a 17-bit interrupt control register, one bit per core; writing a bit
+      raises an interrupt
+    - a 17-bit mask register, one bit per core, to enable interrupts
+
+  Bits at offsets 0 to 15 select cores in the cluster, respectively PE0 to
+  PE15, while the bit at offset 16 is for the cluster Resource Manager (RM)
+  core.
+
+  The eight output interrupts are connected to each processor core's
+  interrupt controller (intc).
+
+properties:
+  compatible:
+    const: kalray,coolidge-ipi-ctrl
+
+  reg:
+    maxItems: 1
+
+  interrupts-extended:
+    maxItems: 16
+    description: |
+      Specifies the interrupt line the IPI controller will raise on the
+      core INTC.
+
+  interrupt-controller: true
+
+  '#interrupt-cells':
+    const: 0
+
+additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - interrupts-extended
+  - interrupt-controller
+  - '#interrupt-cells'
+
+examples:
+  - |
+    ipi: inter-processor-interrupt@ad0000 {
+        compatible = "kalray,coolidge-ipi-ctrl";
+        reg = <0x00 0xad0000 0x00 0x1000>;
+        #interrupt-cells = <0>;
+        interrupt-controller;
+        interrupts-extended = <&core_intc0 24>,
+                              <&core_intc1 24>,
+                              <&core_intc2 24>,
+                              <&core_intc3 24>,
+                              <&core_intc4 24>,
+                              <&core_intc5 24>,
+                              <&core_intc6 24>,
+                              <&core_intc7 24>,
+                              <&core_intc8 24>,
+                              <&core_intc9 24>,
+                              <&core_intc10 24>,
+                              <&core_intc11 24>,
+                              <&core_intc12 24>,
+                              <&core_intc13 24>,
+                              <&core_intc14 24>,
+                              <&core_intc15 24>;
+    };
+
+...
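[A small C model of one register set described above, a 17-bit control word plus a 17-bit enable mask, shows how a write notifies only unmasked cores. The masking behaviour is paraphrased from the description; the real MMIO semantics are not reproduced here.]

/*
 * Hedged model of one IPI register set: PE0..PE15 occupy bits 0-15,
 * the RM core occupies bit 16. Only cores whose mask bit is set see
 * the notification. All names are illustrative.
 */
#include <stdint.h>
#include <stdio.h>

#define IPI_RM_BIT  16
#define IPI_ALL_PES 0xFFFFu  /* PE0..PE15 */

static uint32_t ipi_mask;    /* model of the enable-mask register */

static uint32_t ipi_notify(uint32_t ctrl)
{
	/* A write to the control register raises IRQs on unmasked cores. */
	return ctrl & ipi_mask & ((1u << 17) - 1);
}

int main(void)
{
	ipi_mask = IPI_ALL_PES;  /* enable all PEs, but not the RM core */

	uint32_t hit = ipi_notify((1u << 3) | (1u << IPI_RM_BIT));
	printf("cores interrupted: 0x%x\n", hit);  /* 0x8: PE3 only */
	return 0;
}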
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
Subject: [RFC PATCH v3 08/37] dt-bindings: Add binding for kalray,coolidge-dsu-clock
Date: Mon, 22 Jul 2024 11:41:19 +0200
Message-ID: <20240722094226.21602-9-ysionneau@kalrayinc.com>
From: Yann Sionneau

Add binding for Kalray Coolidge DSU (Debug System Unit) clock.

Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3: New patch
---
 .../clock/kalray,coolidge-dsu-clock.yaml | 39 +++++++++++++++++++
 1 file changed, 39 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/clock/kalray,coolidge-dsu-clock.yaml

diff --git a/Documentation/devicetree/bindings/clock/kalray,coolidge-dsu-clock.yaml b/Documentation/devicetree/bindings/clock/kalray,coolidge-dsu-clock.yaml
new file mode 100644
index 0000000000000..a7f6239b17c12
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/kalray,coolidge-dsu-clock.yaml
@@ -0,0 +1,39 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/clock/kalray,coolidge-dsu-clock.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Kalray Coolidge DSU Clock
+
+maintainers:
+  - Jonathan Borne
+  - Julian Vetter
+  - Yann Sionneau
+
+description: |
+  The DSU clock is a free-running counter that runs at the cluster clock
+  frequency.
+
+properties:
+  compatible:
+    const: kalray,coolidge-dsu-clock
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+required:
+  - reg
+  - clocks
+
+additionalProperties: false
+
+examples:
+  - |
+    dsu_clock@a44180 {
+        compatible = "kalray,coolidge-dsu-clock";
+        reg = <0x00 0xa44180 0x00 0x08>;
+        clocks = <&core_clk>;
+    };
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
Subject: [RFC PATCH v3 09/37] dt-bindings: Add binding for kalray,kv3-1-timer
Date: Mon, 22 Jul 2024 11:41:20 +0200
Message-ID: <20240722094226.21602-10-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add binding for Kalray kv3-1 core timer.

Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3: New patch
---
 .../bindings/timer/kalray,kv3-1-timer.yaml | 57 +++++++++++++++++++
 1 file changed, 57 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/timer/kalray,kv3-1-timer.yaml

diff --git a/Documentation/devicetree/bindings/timer/kalray,kv3-1-timer.yaml b/Documentation/devicetree/bindings/timer/kalray,kv3-1-timer.yaml
new file mode 100644
index 0000000000000..1932f28a05a18
--- /dev/null
+++ b/Documentation/devicetree/bindings/timer/kalray,kv3-1-timer.yaml
@@ -0,0 +1,57 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/timer/kalray,kv3-1-timer.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Kalray kv3-1 core timer
+
+maintainers:
+  - Jonathan Borne
+  - Julian Vetter
+  - Yann Sionneau
+
+description: |
+  Timer tightly coupled to the kv3-1 processor core. It is configured via
+  core SFRs and triggers an interrupt directly on the core INTC.
+
+properties:
+  compatible:
+    const: kalray,kv3-1-timer
+
+  interrupts-extended:
+    maxItems: 16
+
+  clocks:
+    maxItems: 1
+
+required:
+  - compatible
+  - interrupts-extended
+  - clocks
+
+additionalProperties: false
+
+examples:
+  - |
+    core_timer {
+        compatible = "kalray,kv3-1-timer";
+        clocks = <&core_clk>;
+        interrupts-extended = <&core_intc0 0>,
+                              <&core_intc1 0>,
+                              <&core_intc2 0>,
+                              <&core_intc3 0>,
+                              <&core_intc4 0>,
+                              <&core_intc5 0>,
+                              <&core_intc6 0>,
+                              <&core_intc7 0>,
+                              <&core_intc8 0>,
+                              <&core_intc9 0>,
+                              <&core_intc10 0>,
+                              <&core_intc11 0>,
+                              <&core_intc12 0>,
+                              <&core_intc13 0>,
+                              <&core_intc14 0>,
+                              <&core_intc15 0>;
+    };
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
Subject: [RFC PATCH v3 10/37] dt-bindings: kalray: Add CPU bindings for Kalray kvx
Date: Mon, 22 Jul 2024 11:41:21 +0200
Message-ID: <20240722094226.21602-11-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add Kalray kvx CPU bindings.

Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3: New patch
---
 .../devicetree/bindings/kalray/cpus.yaml | 105 ++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/kalray/cpus.yaml

diff --git a/Documentation/devicetree/bindings/kalray/cpus.yaml b/Documentation/devicetree/bindings/kalray/cpus.yaml
new file mode 100644
index 0000000000000..cdf1f10573da7
--- /dev/null
+++ b/Documentation/devicetree/bindings/kalray/cpus.yaml
@@ -0,0 +1,105 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/kalray/cpus.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Kalray CPUs
+
+maintainers:
+  - Jonathan Borne
+  - Julian Vetter
+  - Yann Sionneau
+
+allOf:
+  - $ref: /schemas/cpu.yaml#
+
+properties:
+  reg:
+    maxItems: 1
+    description: |
+      Contains the CPU number.
+
+  compatible:
+    oneOf:
+      - items:
+          - const: kalray,kv3-1-pe
+          - const: kalray,kv3-pe
+
+  clocks:
+    maxItems: 1
+
+  enable-method:
+    const: "kalray,coolidge-pwr-ctrl"
+
+  interrupt-controller:
+    type: object
+    additionalProperties: false
+    description: Describes the CPU's local interrupt controller
+
+    properties:
+      '#interrupt-cells':
+        const: 1
+
+      compatible:
+        const: kalray,kv3-1-intc
+
+      interrupt-controller: true
+
+      '#address-cells':
+        const: 0
+
+    required:
+      - compatible
+      - '#interrupt-cells'
+      - '#address-cells'
+      - interrupt-controller
+
+if:
+  properties:
+    reg:
+      const: 0
+then:
+  required:
+    - clocks
+
+required:
+  - reg
+  - compatible
+  - interrupt-controller
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    cpus {
+        #size-cells = <0>;
+        #address-cells = <1>;
+
+        cpu@0 {
+            compatible = "kalray,kv3-1-pe";
+            device_type = "cpu";
+            reg = <0>;
+            clocks = <&core_clk>;
+            cpu_intc0: interrupt-controller {
+                compatible = "kalray,kv3-1-intc";
+                #interrupt-cells = <1>;
+                #address-cells = <0>;
+                interrupt-controller;
+            };
+        };
+
+        cpu@1 {
+            compatible = "kalray,kv3-1-pe";
+            device_type = "cpu";
+            reg = <1>;
+            cpu_intc1: interrupt-controller {
+                compatible = "kalray,kv3-1-intc";
+                #interrupt-cells = <1>;
+                #address-cells = <0>;
+                interrupt-controller;
+            };
+        };
+    };
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
Subject: [RFC PATCH v3 11/37] dt-bindings: kalray: Add Kalray SoC board compatibles
Date: Mon, 22 Jul 2024 11:41:22 +0200
Message-ID: <20240722094226.21602-12-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add Kalray SoC board bindings.
Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3: New patch
---
 .../devicetree/bindings/kalray/kalray.yaml | 22 +++++++++++++++++++
 1 file changed, 22 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/kalray/kalray.yaml

diff --git a/Documentation/devicetree/bindings/kalray/kalray.yaml b/Documentation/devicetree/bindings/kalray/kalray.yaml
new file mode 100644
index 0000000000000..3da817da9b2fe
--- /dev/null
+++ b/Documentation/devicetree/bindings/kalray/kalray.yaml
@@ -0,0 +1,22 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/kalray/kalray.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Kalray SoC-based boards
+
+maintainers:
+  - Jonathan Borne
+  - Julian Vetter
+  - Yann Sionneau
+
+properties:
+  $nodename:
+    const: '/'
+  compatible:
+    oneOf:
+      - description: Kalray Coolidge SoC on qemu
+        const: kalray,coolidge-qemu
+
+additionalProperties: true
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
Subject: [RFC PATCH v3 12/37] kvx: Add ELF-related definitions
Date: Mon, 22 Jul 2024 11:41:23 +0200
Message-ID: <20240722094226.21602-13-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add ELF-related definitions for kvx, including:
EM_KVX, AUDIT_ARCH_KVX and NT_KVX_TCA.

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3: update NT_KVX_TCA register set number macro
---
 include/uapi/linux/audit.h  | 1 +
 include/uapi/linux/elf-em.h | 1 +
 include/uapi/linux/elf.h    | 1 +
 3 files changed, 3 insertions(+)

diff --git a/include/uapi/linux/audit.h b/include/uapi/linux/audit.h
index d676ed2b246ec..4db7aa3f84c70 100644
--- a/include/uapi/linux/audit.h
+++ b/include/uapi/linux/audit.h
@@ -402,6 +402,7 @@ enum {
 #define AUDIT_ARCH_HEXAGON	(EM_HEXAGON)
 #define AUDIT_ARCH_I386	(EM_386|__AUDIT_ARCH_LE)
 #define AUDIT_ARCH_IA64	(EM_IA_64|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE)
+#define AUDIT_ARCH_KVX	(EM_KVX|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE)
 #define AUDIT_ARCH_M32R	(EM_M32R)
 #define AUDIT_ARCH_M68K	(EM_68K)
 #define AUDIT_ARCH_MICROBLAZE	(EM_MICROBLAZE)
diff --git a/include/uapi/linux/elf-em.h b/include/uapi/linux/elf-em.h
index ef38c2bc5ab7a..9cc348be7f860 100644
--- a/include/uapi/linux/elf-em.h
+++ b/include/uapi/linux/elf-em.h
@@ -51,6 +51,7 @@
 #define EM_RISCV	243	/* RISC-V */
 #define EM_BPF		247	/* Linux BPF - in-kernel virtual machine */
 #define EM_CSKY		252	/* C-SKY */
+#define EM_KVX		256	/* Kalray VLIW Architecture */
 #define EM_LOONGARCH	258	/* LoongArch */
 #define EM_FRV		0x5441	/* Fujitsu FR-V */
 
diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index b54b313bcf073..2d4885f2e4995 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -448,6 +448,7 @@ typedef struct elf64_shdr {
 #define NT_MIPS_MSA	0x802	/* MIPS SIMD registers */
 #define NT_RISCV_CSR	0x900	/* RISC-V Control and Status Registers */
 #define NT_RISCV_VECTOR	0x901	/* RISC-V vector registers */
+#define NT_KVX_TCA	0x902	/* kvx TCA registers */
 #define NT_LOONGARCH_CPUCFG	0xa00	/* LoongArch CPU config registers */
 #define NT_LOONGARCH_CSR	0xa01	/* LoongArch control and status registers */
 #define NT_LOONGARCH_LSX	0xa02	/* LoongArch Loongson SIMD Extension registers */
-- 
2.45.2
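[As a side note on the audit.h hunk above, AUDIT_ARCH_KVX packs the ELF machine number together with 64-bit and little-endian flags so that audit/seccomp consumers can compare a single value. The snippet below restates the constants locally so it builds on its own; it is an illustration, not kernel code.]

/*
 * Hedged illustration of the AUDIT_ARCH_KVX composition. The flag
 * values match include/uapi/linux/audit.h; everything else is local.
 */
#include <stdint.h>
#include <stdio.h>

#define EM_KVX             256
#define __AUDIT_ARCH_64BIT 0x80000000U
#define __AUDIT_ARCH_LE    0x40000000U
#define AUDIT_ARCH_KVX     (EM_KVX | __AUDIT_ARCH_64BIT | __AUDIT_ARCH_LE)

int main(void)
{
	uint32_t arch = AUDIT_ARCH_KVX;

	printf("machine=%u 64bit=%d le=%d\n",
	       (unsigned int)(arch & ~(__AUDIT_ARCH_64BIT | __AUDIT_ARCH_LE)),
	       !!(arch & __AUDIT_ARCH_64BIT),
	       !!(arch & __AUDIT_ARCH_LE));
	return 0;
}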
From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
Subject: [RFC PATCH v3 13/37] kvx: Add build infrastructure
Date: Mon, 22 Jul 2024 11:41:24 +0200
Message-ID: <20240722094226.21602-14-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add Kbuild, Makefile, Kconfig and link script for kvx build infrastructure.
Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Guillaume Thouvenin
Signed-off-by: Guillaume Thouvenin
Co-developed-by: Jonathan Borne
Signed-off-by: Jonathan Borne
Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Marc Poulhiès
Signed-off-by: Marc Poulhiès
Co-developed-by: Marius Gligor
Signed-off-by: Marius Gligor
Co-developed-by: Samuel Jones
Signed-off-by: Samuel Jones
Co-developed-by: Vincent Chardon
Signed-off-by: Vincent Chardon
Signed-off-by: Yann Sionneau
---
Notes:
    V1 -> V2:
      - typos and formatting fixes, removed int from PGTABLE_LEVELS
      - renamed default_defconfig to defconfig in arch/kvx/Makefile
      - fix clean target raising an error from gcc (LIBGCC)
    V2 -> V3:
      - use generic entry framework
      - add smem in kernel direct map
      - remove DEBUG_EXCEPTION_STACK
      - use global addresses for smem
      - now boot from PE (KALRAY_BOOT_BY_PE)
      - do not link with libgcc anymore
---
 arch/kvx/Kconfig                 | 226 +++++++++++++++++++++++++++++++
 arch/kvx/Kconfig.debug           |  60 ++++++++
 arch/kvx/Makefile                |  47 +++++++
 arch/kvx/include/asm/Kbuild      |  20 +++
 arch/kvx/include/uapi/asm/Kbuild |   1 +
 arch/kvx/kernel/Makefile         |  15 ++
 arch/kvx/kernel/kvx_ksyms.c      |  18 +++
 arch/kvx/kernel/vmlinux.lds.S    | 150 ++++++++++++++++++++
 arch/kvx/lib/Makefile            |   6 +
 arch/kvx/mm/Makefile             |   8 ++
 10 files changed, 551 insertions(+)
 create mode 100644 arch/kvx/Kconfig
 create mode 100644 arch/kvx/Kconfig.debug
 create mode 100644 arch/kvx/Makefile
 create mode 100644 arch/kvx/include/asm/Kbuild
 create mode 100644 arch/kvx/include/uapi/asm/Kbuild
 create mode 100644 arch/kvx/kernel/Makefile
 create mode 100644 arch/kvx/kernel/kvx_ksyms.c
 create mode 100644 arch/kvx/kernel/vmlinux.lds.S
 create mode 100644 arch/kvx/lib/Makefile
 create mode 100644 arch/kvx/mm/Makefile

diff --git a/arch/kvx/Kconfig b/arch/kvx/Kconfig
new file mode 100644
index 0000000000000..a98c1fbd8131d
--- /dev/null
+++ b/arch/kvx/Kconfig
@@ -0,0 +1,226 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/kconfig-language.txt.
diff --git a/arch/kvx/Kconfig b/arch/kvx/Kconfig
new file mode 100644
index 0000000000000..a98c1fbd8131d
--- /dev/null
+++ b/arch/kvx/Kconfig
@@ -0,0 +1,226 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/kconfig-language.txt.
+#
+
+config 64BIT
+	def_bool y
+
+config GENERIC_CALIBRATE_DELAY
+	def_bool y
+
+config FIX_EARLYCON_MEM
+	def_bool y
+
+config MMU
+	def_bool y
+
+config KALLSYMS_BASE_RELATIVE
+	def_bool n
+
+config GENERIC_CSUM
+	def_bool y
+
+config RWSEM_GENERIC_SPINLOCK
+	def_bool y
+
+config GENERIC_HWEIGHT
+	def_bool y
+
+config ARCH_MMAP_RND_BITS_MAX
+	default 24
+
+config ARCH_MMAP_RND_BITS_MIN
+	default 18
+
+config STACKTRACE_SUPPORT
+	def_bool y
+
+config LOCKDEP_SUPPORT
+	def_bool y
+
+config GENERIC_BUG
+	def_bool y
+	depends on BUG
+
+config KVX_4K_PAGES
+	def_bool y
+
+config KVX
+	def_bool y
+	select ARCH_CLOCKSOURCE_DATA
+	select ARCH_DMA_ADDR_T_64BIT
+	select ARCH_HAS_DEVMEM_IS_ALLOWED
+	select ARCH_HAS_DMA_PREP_COHERENT
+	select ARCH_HAS_ELF_RANDOMIZE
+	select ARCH_HAS_PTE_SPECIAL
+	select ARCH_HAS_SETUP_DMA_OPS if IOMMU_SUPPORT
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
+	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
+	select ARCH_USE_QUEUED_SPINLOCKS
+	select ARCH_USE_QUEUED_RWLOCKS
+	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
+	select ARCH_WANT_FRAME_POINTERS
+	select CLKSRC_OF
+	select COMMON_CLK
+	select DMA_DIRECT_REMAP
+	select GENERIC_ALLOCATOR
+	select GENERIC_CLOCKEVENTS
+	select GENERIC_CPU_DEVICES
+	select GENERIC_ENTRY
+	select GENERIC_IOMAP
+	select GENERIC_IOREMAP
+	select GENERIC_IRQ_CHIP
+	select GENERIC_IRQ_MULTI_HANDLER
+	select GENERIC_IRQ_PROBE
+	select GENERIC_IRQ_SHOW
+	select GENERIC_SCHED_CLOCK
+	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_MMAP_RND_BITS
+	select HAVE_ARCH_SECCOMP_FILTER
+	select HAVE_ASM_MODVERSIONS
+	select HAVE_DEBUG_KMEMLEAK
+	select HAVE_EFFICIENT_UNALIGNED_ACCESS
+	select HAVE_FUTEX_CMPXCHG if FUTEX
+	select HAVE_IOREMAP_PROT
+	select HAVE_MEMBLOCK_NODE_MAP
+	select HAVE_PCI
+	select HAVE_STACKPROTECTOR
+	select HAVE_SYSCALL_TRACEPOINTS
+	select IOMMU_DMA if IOMMU_SUPPORT
+	select KVX_IPI_CTRL if SMP
+	select KVX_APIC_GIC
+	select KVX_APIC_MAILBOX
+	select KVX_CORE_INTC
+	select KVX_ITGEN
+	select KVX_WATCHDOG
+	select MODULES_USE_ELF_RELA
+	select OF
+	select OF_EARLY_FLATTREE
+	select OF_RESERVED_MEM
+	select PCI_DOMAINS_GENERIC if PCI
+	select SPARSE_IRQ
+	select SYSCTL_EXCEPTION_TRACE
+	select THREAD_INFO_IN_TASK
+	select TIMER_OF
+	select TRACE_IRQFLAGS_SUPPORT
+	select WATCHDOG
+	select ZONE_DMA32
+
+config PGTABLE_LEVELS
+	default 3
+
+config HAVE_KPROBES
+	def_bool n
+
+menu "System setup"
+
+config POISON_INITMEM
+	bool "Enable to poison freed initmem"
+	default y
+	help
+	  Poisoning freed initmem makes it possible to verify whether some
+	  data or code still uses it. Enable this for debug purposes.
+
+config KVX_PHYS_OFFSET
+	hex "RAM address of memory base"
+	default 0x0
+
+config KVX_PAGE_OFFSET
+	hex "kernel virtual address of memory base"
+	default 0xFFFFFF8000000000
+
+config ARCH_SPARSEMEM_ENABLE
+	def_bool y
+
+config ARCH_SPARSEMEM_DEFAULT
+	def_bool ARCH_SPARSEMEM_ENABLE
+
+config ARCH_SELECT_MEMORY_MODEL
+	def_bool ARCH_SPARSEMEM_ENABLE
+
+config STACK_MAX_DEPTH_TO_PRINT
+	int "Maximum depth of stack to print"
+	range 1 128
+	default "24"
+
+config SECURE_DAME_HANDLING
+	bool "Secure DAME handling"
+	default y
+	help
+	  In order to securely handle Data Asynchronous Memory Errors, we need
+	  to do a barrier upon kernel entry when coming from userspace. This
+	  barrier guarantees that any pending DAME will be serviced right away.
+	  We also need to do a barrier when returning from kernel to user.
+	  This way, whether the kernel or the user triggered a DAME, it is
+	  serviced knowing which side caused it, and we avoid pulling the
+	  wrong lever (panic for a kernel DAME, a fatal signal for a user one).
+	  This can be costly, but it ensures that user space cannot interfere
+	  with the kernel.
+	  /!\ Do not disable unless you want to open a giant breach between
+	  user and kernel /!\
+
+config CACHECTL_UNSAFE_PHYS_OPERATIONS
+	bool "Enable cachectl syscall unsafe physical operations"
+	default n
+	help
+	  Enable the cachectl syscall to allow writing back/invalidating memory
+	  ranges based on physical addresses. These operations require the
+	  CAP_SYS_ADMIN capability.
+
+config ENABLE_TCA
+	bool "Enable TCA coprocessor support"
+	default y
+	help
+	  This option enables TCA coprocessor support. It allows user space to
+	  use the coprocessor, and saves its registers on context switch when
+	  the coprocessor has been used. Register contents are also cleared
+	  when switching.
+
+config SMP
+	bool "Symmetric multi-processing support"
+	default n
+	select GENERIC_SMP_IDLE_THREAD
+	select GENERIC_IRQ_IPI
+	select IRQ_DOMAIN_HIERARCHY
+	select IRQ_DOMAIN
+	help
+	  This enables support for systems with more than one CPU. If you have
+	  a system with only one CPU, say N. If you have a system with more
+	  than one CPU, say Y.
+
+	  If you say N here, the kernel will run on uni- and multiprocessor
+	  machines, but will use only one CPU of a multiprocessor machine. If
+	  you say Y here, the kernel will run on many, but not all,
+	  uniprocessor machines. On a uniprocessor machine, the kernel
+	  will run faster if you say N here.
+
+config NR_CPUS
+	int "Maximum number of CPUs"
+	range 1 16
+	default "16"
+	depends on SMP
+	help
+	  The kvx architecture support handles a maximum of 16 CPUs.
+
+config KVX_PAGE_SHIFT
+	int
+	default 12
+
+config CMDLINE
+	string "Default kernel command string"
+	default ""
+	help
+	  On some architectures there is currently no way for the boot loader
+	  to pass arguments to the kernel. For these architectures, you should
+	  supply some command-line options at build time by entering them
+	  here.
+
+endmenu
+
+menu "Kernel Features"
+source "kernel/Kconfig.hz"
+endmenu
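The SECURE_DAME_HANDLING option above boils down to "drain pending errors
at every user/kernel boundary so the error is blamed on the correct side".
A user-space toy model of that attribution logic — all names are made up
here; the real handling lives in the dame_handler code added later in this
series::

  #include <stdbool.h>
  #include <stdio.h>

  static bool dame_pending;	/* models a latched asynchronous error */

  /* Models the barrier done on kernel entry/exit: any DAME still
   * pending is serviced now, attributed to the side we come from. */
  static void drain_pending_dame(bool from_user)
  {
  	if (!dame_pending)
  		return;
  	dame_pending = false;
  	if (from_user)
  		printf("user DAME: kill the faulting task\n");
  	else
  		printf("kernel DAME: panic\n");
  }

  int main(void)
  {
  	dame_pending = true;		/* user code triggers a DAME... */
  	drain_pending_dame(true);	/* ...drained on kernel entry */
  	return 0;
  }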
diff --git a/arch/kvx/Kconfig.debug b/arch/kvx/Kconfig.debug
new file mode 100644
index 0000000000000..0d526802e9221
--- /dev/null
+++ b/arch/kvx/Kconfig.debug
@@ -0,0 +1,60 @@
+menu "KVX debugging"
+
+config KVX_DEBUG_ASN
+	bool "Check ASN before writing TLB entry"
+	default n
+	help
+	  This option allows checking whether the ASN of the current
+	  process matches the ASN found in MMC. If it does not, an
+	  error is printed.
+
+config KVX_DEBUG_TLB_WRITE
+	bool "Enable TLBs write checks"
+	default n
+	help
+	  Enabling this option will enable TLB access checks. This is
+	  particularly helpful when modifying the assembly code responsible
+	  for TLB refill. If set, mmc.e is checked each time the TLBs are
+	  written, and the kernel panics on error.
+
+config KVX_DEBUG_TLB_ACCESS
+	bool "Enable TLBs accesses logging"
+	default n
+	help
+	  Enabling this option will enable TLB entry manipulation logging.
+	  Each time an entry is added to the TLBs, it is logged in an array
+	  readable via gdb scripts. This can be useful to understand strange
+	  crashes related to suspicious virtual/physical addresses.
+
+config KVX_DEBUG_TLB_ACCESS_BITS
+	int "Number of bits used as index of entries in log table"
+	default 12
+	depends on KVX_DEBUG_TLB_ACCESS
+	help
+	  Set the number of bits used as index of the entries that will be
+	  logged in a ring buffer called kvx_tlb_access. One entry in the
+	  table contains the TEL, TEH and MMC registers. It also logs the
+	  type of the operation (0:read, 1:write, 2:probe). The buffer is
+	  per CPU. One entry uses 24 bytes, so by default it uses 96KB of
+	  memory per CPU to store 2^12 (4096) entries.
+
+config KVX_MMU_STATS
+	bool "Register MMU stats debugfs entries"
+	default n
+	depends on DEBUG_FS
+	help
+	  Enable debugfs attributes which allow inspecting various MMU
+	  metrics:
+	  - Number of nomapping traps handled
+	  - avg/min/max time for nomapping refill (user/kernel)
+
+config DEBUG_SFR_SET_MASK
+	bool "Enable SFR set_mask debugging"
+	default n
+	help
+	  Verify that values written using kvx_sfr_set_mask match the mask.
+	  This ensures that no extra SFR bits are overwritten by incorrectly
+	  truncated values, which could silently corrupt important bits in
+	  system registers.
+
+endmenu
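The KVX_DEBUG_TLB_ACCESS_BITS arithmetic above is easy to check: one logged
entry holds TEL, TEH and MMC (three 64-bit registers, 24 bytes), and the
ring holds 2^bits entries per CPU. A standalone check (the struct layout is
assumed from the help text, not copied from the patch)::

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  struct kvx_tlb_access_entry {	/* assumed layout: TEL, TEH, MMC */
  	uint64_t tel, teh, mmc;
  };

  int main(void)
  {
  	unsigned int bits = 12;		/* Kconfig default */
  	size_t entries = (size_t)1 << bits;
  	size_t bytes = entries * sizeof(struct kvx_tlb_access_entry);

  	/* 4096 entries * 24 bytes = 98304 bytes = 96 KiB per CPU */
  	printf("%zu entries, %zu KiB per CPU\n", entries, bytes / 1024);
  	return 0;
  }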
diff --git a/arch/kvx/Makefile b/arch/kvx/Makefile
new file mode 100644
index 0000000000000..305fa1d5d8692
--- /dev/null
+++ b/arch/kvx/Makefile
@@ -0,0 +1,47 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Copyright (C) 2018-2024 Kalray Inc.
+
+ifeq ($(CROSS_COMPILE),)
+CROSS_COMPILE := kvx-mbr-
+endif
+
+KBUILD_DEFCONFIG := defconfig
+
+LDFLAGS_vmlinux := -X
+OBJCOPYFLAGS := -O binary -R .comment -R .note -R .bootloader -S
+
+DEFAULT_OPTS := -nostdlib -fno-builtin -march=kv3-1
+KBUILD_CFLAGS += $(DEFAULT_OPTS)
+KBUILD_AFLAGS += $(DEFAULT_OPTS)
+KBUILD_CFLAGS_MODULE += -mfarcall
+
+KBUILD_LDFLAGS += -m elf64kvx
+
+head-y := arch/kvx/kernel/head.o
+libs-y += arch/kvx/lib/
+core-y += arch/kvx/kernel/ \
+	  arch/kvx/mm/
+# Final targets
+all: vmlinux
+
+BOOT_TARGETS = bImage bImage.bin bImage.bz2 bImage.gz bImage.lzma bImage.lzo
+
+$(BOOT_TARGETS): vmlinux
+	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
+
+install:
+	$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) install
+
+define archhelp
+  echo  '* bImage      - Alias to selected kernel format (bImage.gz by default)'
+  echo  '  bImage.bin  - Uncompressed Kernel-only image for barebox (arch/$(ARCH)/boot/bImage.bin)'
+  echo  '  bImage.bz2  - Kernel-only image for barebox (arch/$(ARCH)/boot/bImage.bz2)'
+  echo  '* bImage.gz   - Kernel-only image for barebox (arch/$(ARCH)/boot/bImage.gz)'
+  echo  '  bImage.lzma - Kernel-only image for barebox (arch/$(ARCH)/boot/bImage.lzma)'
+  echo  '  bImage.lzo  - Kernel-only image for barebox (arch/$(ARCH)/boot/bImage.lzo)'
+  echo  '  install     - Install kernel using'
+  echo  '                (your) ~/bin/$(INSTALLKERNEL) or'
+  echo  '                (distribution) PATH: $(INSTALLKERNEL) or'
+  echo  '                install to $$(INSTALL_PATH)'
endef
diff --git a/arch/kvx/include/asm/Kbuild b/arch/kvx/include/asm/Kbuild
new file mode 100644
index 0000000000000..ea73552faa103
--- /dev/null
+++ b/arch/kvx/include/asm/Kbuild
@@ -0,0 +1,20 @@
+generic-y += asm-offsets.h
+generic-y += clkdev.h
+generic-y += auxvec.h
+generic-y += bpf_perf_event.h
+generic-y += cmpxchg-local.h
+generic-y += errno.h
+generic-y += extable.h
+generic-y += export.h
+generic-y += kvm_para.h
+generic-y += mcs_spinlock.h
+generic-y += mman.h
+generic-y += param.h
+generic-y += qrwlock.h
+generic-y += qspinlock.h
+generic-y += rwsem.h
+generic-y += sockios.h
+generic-y += stat.h
+generic-y += statfs.h
+generic-y += ucontext.h
+generic-y += user.h
diff --git a/arch/kvx/include/uapi/asm/Kbuild b/arch/kvx/include/uapi/asm/Kbuild
new file mode 100644
index 0000000000000..8b137891791fe
--- /dev/null
+++ b/arch/kvx/include/uapi/asm/Kbuild
@@ -0,0 +1 @@
+
diff --git a/arch/kvx/kernel/Makefile b/arch/kvx/kernel/Makefile
new file mode 100644
index 0000000000000..a57309bbdbefd
--- /dev/null
+++ b/arch/kvx/kernel/Makefile
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Copyright (C) 2019-2024 Kalray Inc.
+#
+
+obj-y := head.o setup.o process.o traps.o common.o time.o prom.o kvx_ksyms.o \
+	irq.o cpuinfo.o entry.o ptrace.o syscall_table.o signal.o sys_kvx.o \
+	stacktrace.o dame_handler.o vdso.o debug.o break_hook.o \
+	reset.o io.o cpu.o
+
+obj-$(CONFIG_SMP) += smp.o smpboot.o
+obj-$(CONFIG_MODULES) += module.o
+CFLAGS_module.o += -Wstrict-overflow -fstrict-overflow
+
+extra-y += vmlinux.lds
diff --git a/arch/kvx/kernel/kvx_ksyms.c b/arch/kvx/kernel/kvx_ksyms.c
new file mode 100644
index 0000000000000..6a24ade5be3d9
--- /dev/null
+++ b/arch/kvx/kernel/kvx_ksyms.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * derived from arch/nios2/kernel/nios2_ksyms.c
+ *
+ * Copyright (C) 2017-2024 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Yann Sionneau
+ */
+
+#include <linux/export.h>
+#include <linux/linkage.h>
+
+#define DECLARE_EXPORT(name)	extern void name(void); EXPORT_SYMBOL(name)
+
+DECLARE_EXPORT(clear_page);
+DECLARE_EXPORT(copy_page);
+DECLARE_EXPORT(memset);
+DECLARE_EXPORT(asm_clear_user);
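The #include targets in kvx_ksyms.c (and several in vmlinux.lds.S below)
were lost in transit; <linux/export.h>, which EXPORT_SYMBOL() requires, and
<linux/linkage.h> above, plus <asm-generic/vmlinux.lds.h> after the
LOAD_OFFSET definition below, are reconstructions — the remaining bare
#include lines are left as found. The DECLARE_EXPORT() macro itself only
declares the symbol with a dummy prototype so EXPORT_SYMBOL() has an
identifier to record; a user-space sketch of the same idiom::

  #include <stdio.h>

  /* Stand-in for the kernel's EXPORT_SYMBOL(): here it just records
   * the symbol name in a variable we can print. */
  #define EXPORT_SYMBOL(name) \
  	static const char *export_##name = #name;

  /* Same shape as the macro in kvx_ksyms.c: declare, then export. */
  #define DECLARE_EXPORT(name)	extern void name(void); EXPORT_SYMBOL(name)

  DECLARE_EXPORT(asm_clear_user);

  int main(void)
  {
  	printf("exported: %s\n", export_asm_clear_user);
  	return 0;
  }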
diff --git a/arch/kvx/kernel/vmlinux.lds.S b/arch/kvx/kernel/vmlinux.lds.S
new file mode 100644
index 0000000000000..6e064e8867b82
--- /dev/null
+++ b/arch/kvx/kernel/vmlinux.lds.S
@@ -0,0 +1,150 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2024 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Guillaume Thouvenin
+ *            Marius Gligor
+ *            Marc Poulhiès
+ *            Yann Sionneau
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* Entry is linked at smem global address */
+#define BOOT_ENTRY		(16 * 1024 * 1024)
+#define DTB_DEFAULT_SIZE	(64 * 1024)
+
+#define LOAD_OFFSET  (DDR_VIRT_OFFSET - DDR_PHYS_OFFSET)
+#include <asm-generic/vmlinux.lds.h>
+
+OUTPUT_FORMAT("elf64-kvx")
+ENTRY(kvx_start)
+
+#define HANDLER_SECTION(__sec, __name) \
+	__sec ## _ ## __name ## _start = .; \
+	KEEP(*(.##__sec ##.## __name)); \
+	. = __sec ## _ ##__name ## _start + EXCEPTION_STRIDE;
+
+/**
+ * Generate correct section positioning for exception handling.
+ * Since we need it twice, for the early exception handler and the
+ * normal exception handler, factorize it here.
+ */
+#define EXCEPTION_SECTIONS(__sec) \
+	__ ## __sec ## _start = ABSOLUTE(.); \
+	HANDLER_SECTION(__sec,debug) \
+	HANDLER_SECTION(__sec,trap) \
+	HANDLER_SECTION(__sec,interrupt) \
+	HANDLER_SECTION(__sec,syscall)
+
+jiffies = jiffies_64;
+KALRAY_BOOT_BY_PE = 0;
+SECTIONS
+{
+	. = BOOT_ENTRY;
+	.boot :
+	{
+		__kernel_smem_code_start = .;
+		KEEP(*(.boot.startup));
+		KEEP(*(.boot.*));
+		__kernel_smem_code_end = .;
+	}
+
+	. = DDR_VIRT_OFFSET;
+	_start = .;
+
+	_stext = .;
+	__init_begin = .;
+	__inittext_start = .;
+	.exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET)
+	{
+		EXIT_TEXT
+	}
+
+	.early_exception ALIGN(EXCEPTION_ALIGNMENT) :
+		AT(ADDR(.early_exception) - LOAD_OFFSET)
+	{
+		EXCEPTION_SECTIONS(early_exception)
+	}
+
+	HEAD_TEXT_SECTION
+	INIT_TEXT_SECTION(PAGE_SIZE)
+	. = ALIGN(PAGE_SIZE);
+	__inittext_end = .;
+	__initdata_start = .;
+	INIT_DATA_SECTION(16)
+
+	/* we have to discard exit text and such at runtime, not link time */
+	.exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET)
+	{
+		EXIT_DATA
+	}
+
+	PERCPU_SECTION(L1_CACHE_BYTES)
+	. = ALIGN(PAGE_SIZE);
+	__initdata_end = .;
+	__init_end = .;
+
+	/* Everything below this point will be mapped RO EXEC up to _etext */
+	.text ALIGN(PAGE_SIZE) : AT(ADDR(.text) - LOAD_OFFSET)
+	{
+		_text = .;
+		EXCEPTION_SECTIONS(exception)
+		*(.exception.text)
+		. = ALIGN(PAGE_SIZE);
+		__exception_end = .;
+		TEXT_TEXT
+		SCHED_TEXT
+		LOCK_TEXT
+		KPROBES_TEXT
+		ENTRY_TEXT
+		IRQENTRY_TEXT
+		SOFTIRQENTRY_TEXT
+		*(.fixup)
+	}
+	. = ALIGN(PAGE_SIZE);
+	_etext = .;
+
+	/* Everything below this point will be mapped RO NOEXEC up to _sdata */
+	__rodata_start = .;
+	RO_DATA(PAGE_SIZE)
+	EXCEPTION_TABLE(8)
+	. = ALIGN(32);
+	.dtb : AT(ADDR(.dtb) - LOAD_OFFSET)
+	{
+		__dtb_start = .;
+		. += DTB_DEFAULT_SIZE;
+		__dtb_end = .;
+	}
+	. = ALIGN(PAGE_SIZE);
+	__rodata_end = .;
+
+	/* Everything below this point will be mapped RW NOEXEC up to _end */
+	_sdata = .;
+	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
+	_edata = .;
+
+	BSS_SECTION(32, 32, 32)
+	. = ALIGN(PAGE_SIZE);
+	_end = .;
+
+	/* This page will be mapped using a FIXMAP */
+	.gdb_page ALIGN(PAGE_SIZE) : AT(ADDR(.gdb_page) - LOAD_OFFSET)
+	{
+		_debug_start = ADDR(.gdb_page) - LOAD_OFFSET;
+		. += PAGE_SIZE;
+	}
+	_debug_start_lma = ASM_FIX_TO_VIRT(FIX_GDB_MEM_BASE_IDX);
+
+	/* Debugging sections */
+	STABS_DEBUG
+	DWARF_DEBUG
+
+	/* Sections to be discarded -- must be last */
+	DISCARDS
+}
diff --git a/arch/kvx/lib/Makefile b/arch/kvx/lib/Makefile
new file mode 100644
index 0000000000000..3e9904d1f6f9c
--- /dev/null
+++ b/arch/kvx/lib/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Copyright (C) 2017-2024 Kalray Inc.
+#
+
+lib-y := usercopy.o clear_page.o copy_page.o memcpy.o memset.o strlen.o delay.o div.o
diff --git a/arch/kvx/mm/Makefile b/arch/kvx/mm/Makefile
new file mode 100644
index 0000000000000..a96a3764aae4a
--- /dev/null
+++ b/arch/kvx/mm/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Copyright (C) 2017-2024 Kalray Inc.
+#
+
+obj-y := init.o mmu.o fault.o tlb.o extable.o dma-mapping.o cacheflush.o
+obj-$(CONFIG_KVX_MMU_STATS) += mmu_stats.o
+obj-$(CONFIG_STRICT_DEVMEM) += mmap.o
--
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger, Guillaume Thouvenin, Marius Gligor
Subject: [RFC PATCH v3 14/37] kvx: Add CPU definition headers
Date: Mon, 22 Jul 2024 11:41:25 +0200
Message-ID: <20240722094226.21602-15-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>
From: Yann Sionneau

Add common headers for basic kvx support:
 - CPU definition
 - SFR (System Function Registers)
 - Privilege level owners

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Guillaume Thouvenin
Signed-off-by: Guillaume Thouvenin
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Marius Gligor
Signed-off-by: Marius Gligor
Signed-off-by: Yann Sionneau
---

Notes:
    V2 -> V3: add kvx_of_parent_cpuid declaration.

 arch/kvx/include/asm/privilege.h |  211 ++
 arch/kvx/include/asm/processor.h |  176 ++
 arch/kvx/include/asm/sfr.h       |  107 +
 arch/kvx/include/asm/sfr_defs.h  | 5028 ++++++++++++++++++++++++++++++
 arch/kvx/include/asm/swab.h      |   48 +
 arch/kvx/include/asm/sys_arch.h  |   51 +
 6 files changed, 5621 insertions(+)
 create mode 100644 arch/kvx/include/asm/privilege.h
 create mode 100644 arch/kvx/include/asm/processor.h
 create mode 100644 arch/kvx/include/asm/sfr.h
 create mode 100644 arch/kvx/include/asm/sfr_defs.h
 create mode 100644 arch/kvx/include/asm/swab.h
 create mode 100644 arch/kvx/include/asm/sys_arch.h

diff --git a/arch/kvx/include/asm/privilege.h b/arch/kvx/include/asm/privilege.h
new file mode 100644
index 0000000000000..e639c4a541c65
--- /dev/null
+++ b/arch/kvx/include/asm/privilege.h
@@ -0,0 +1,211 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Yann Sionneau
+ *            Marius Gligor
+ */
+
+#ifndef _ASM_KVX_PRIVILEGE_H
+#define _ASM_KVX_PRIVILEGE_H
+
+#include
+
+/**
+ * Privilege level stuff
+ */
+
+/*
+ * When manipulating ring levels, we always use relative values. This means
+ * that setting a resource owner to 1 means "current privilege level + 1",
+ * and setting it to 0 means "current privilege level".
+ */
+#define PL_CUR_PLUS_1	1
+#define PL_CUR		0
+
+/**
+ * Syscall owner configuration
+ */
+#define SYO_WFXL_OWN(__field, __pl) \
+	SFR_SET_VAL_WFXL(SYO, __field, __pl)
+
+#define SYO_WFXL_VALUE(__pl)	(SYO_WFXL_OWN(Q0, __pl) | \
+				 SYO_WFXL_OWN(Q1, __pl) | \
+				 SYO_WFXL_OWN(Q2, __pl) | \
+				 SYO_WFXL_OWN(Q3, __pl))
+
+#define SYO_WFXL_VALUE_PL_CUR_PLUS_1	SYO_WFXL_VALUE(PL_CUR_PLUS_1)
+#define SYO_WFXL_VALUE_PL_CUR		SYO_WFXL_VALUE(PL_CUR)
+
+/**
+ * Hardware trap owner configuration
+ */
+#define HTO_WFXL_OWN(__field, __pl) \
+	SFR_SET_VAL_WFXL(HTO, __field, __pl)
+
+#define HTO_WFXL_VALUE_BASE(__pl)	(HTO_WFXL_OWN(OPC, __pl) | \
+					 HTO_WFXL_OWN(DMIS, __pl) | \
+					 HTO_WFXL_OWN(PSYS, __pl) | \
+					 HTO_WFXL_OWN(DSYS, __pl) | \
+					 HTO_WFXL_OWN(NOMAP, __pl) | \
+					 HTO_WFXL_OWN(PROT, __pl) | \
+					 HTO_WFXL_OWN(W2CL, __pl) | \
+					 HTO_WFXL_OWN(A2CL, __pl) | \
+					 HTO_WFXL_OWN(VSFR, __pl) | \
+					 HTO_WFXL_OWN(PLO, __pl))
+
+/*
+ * When alone on the processor, we need to request all traps, or the
+ * processor will die badly, without any information at all, by jumping
+ * to the higher privilege level even if nobody is there.
+ */
+#define HTO_WFXL_VALUE_PL_CUR_PLUS_1	(HTO_WFXL_VALUE_BASE(PL_CUR_PLUS_1) | \
+					 HTO_WFXL_OWN(DECCG, PL_CUR_PLUS_1) | \
+					 HTO_WFXL_OWN(SECCG, PL_CUR_PLUS_1) | \
+					 HTO_WFXL_OWN(DE, PL_CUR_PLUS_1))
+
+#define HTO_WFXL_VALUE_PL_CUR		HTO_WFXL_VALUE_BASE(PL_CUR)
+
+/**
+ * Interrupt owner configuration
+ */
+#define ITO_WFXL_OWN(__field, __pl) \
+	SFR_SET_VAL_WFXL(ITO, __field, __pl)
+
+#define ITO_WFXL_VALUE(__pl)	(ITO_WFXL_OWN(IT0, __pl) | \
+				 ITO_WFXL_OWN(IT1, __pl) | \
+				 ITO_WFXL_OWN(IT2, __pl) | \
+				 ITO_WFXL_OWN(IT3, __pl) | \
+				 ITO_WFXL_OWN(IT4, __pl) | \
+				 ITO_WFXL_OWN(IT5, __pl) | \
+				 ITO_WFXL_OWN(IT6, __pl) | \
+				 ITO_WFXL_OWN(IT7, __pl) | \
+				 ITO_WFXL_OWN(IT8, __pl) | \
+				 ITO_WFXL_OWN(IT9, __pl) | \
+				 ITO_WFXL_OWN(IT10, __pl) | \
+				 ITO_WFXL_OWN(IT11, __pl) | \
+				 ITO_WFXL_OWN(IT12, __pl) | \
+				 ITO_WFXL_OWN(IT13, __pl) | \
+				 ITO_WFXL_OWN(IT14, __pl) | \
+				 ITO_WFXL_OWN(IT15, __pl))
+
+#define ITO_WFXL_VALUE_PL_CUR_PLUS_1	ITO_WFXL_VALUE(PL_CUR_PLUS_1)
+#define ITO_WFXL_VALUE_PL_CUR		ITO_WFXL_VALUE(PL_CUR)
+
+#define ITO_WFXM_OWN(__field, __pl) \
+	SFR_SET_VAL_WFXM(ITO, __field, __pl)
+
+#define ITO_WFXM_VALUE(__pl)	(ITO_WFXM_OWN(IT16, __pl) | \
+				 ITO_WFXM_OWN(IT17, __pl) | \
+				 ITO_WFXM_OWN(IT18, __pl) | \
+				 ITO_WFXM_OWN(IT19, __pl) | \
+				 ITO_WFXM_OWN(IT20, __pl) | \
+				 ITO_WFXM_OWN(IT21, __pl) | \
+				 ITO_WFXM_OWN(IT22, __pl) | \
+				 ITO_WFXM_OWN(IT23, __pl) | \
+				 ITO_WFXM_OWN(IT24, __pl) | \
+				 ITO_WFXM_OWN(IT25, __pl) | \
+				 ITO_WFXM_OWN(IT26, __pl) | \
+				 ITO_WFXM_OWN(IT27, __pl) | \
+				 ITO_WFXM_OWN(IT28, __pl) | \
+				 ITO_WFXM_OWN(IT29, __pl) | \
+				 ITO_WFXM_OWN(IT30, __pl) | \
+				 ITO_WFXM_OWN(IT31, __pl))
+
+#define ITO_WFXM_VALUE_PL_CUR_PLUS_1	ITO_WFXM_VALUE(PL_CUR_PLUS_1)
+#define ITO_WFXM_VALUE_PL_CUR		ITO_WFXM_VALUE(PL_CUR)
+
+/**
+ * Debug Owner configuration
+ */
+
+#define DO_WFXL_OWN(__field, __pl) \
+	SFR_SET_VAL_WFXL(DO, __field, __pl)
+
+#define DO_WFXL_VALUE(__pl)	(DO_WFXL_OWN(B0, __pl) | \
+				 DO_WFXL_OWN(B1, __pl) | \
+				 DO_WFXL_OWN(W0, __pl) | \
+				 DO_WFXL_OWN(W1, __pl))
+
+#define DO_WFXL_VALUE_PL_CUR_PLUS_1	DO_WFXL_VALUE(PL_CUR_PLUS_1)
+#define DO_WFXL_VALUE_PL_CUR		DO_WFXL_VALUE(PL_CUR)
+
+/**
+ * Misc owner configuration
+ */
+#define MO_WFXL_OWN(__field, __pl) \
+	SFR_SET_VAL_WFXL(MO, __field, __pl)
+
+#define MO_WFXL_VALUE(__pl)	(MO_WFXL_OWN(MMI, __pl) | \
+				 MO_WFXL_OWN(RFE, __pl) | \
+				 MO_WFXL_OWN(STOP, __pl) | \
+				 MO_WFXL_OWN(SYNC, __pl) | \
+				 MO_WFXL_OWN(PCR, __pl) | \
+				 MO_WFXL_OWN(MSG, __pl) | \
+				 MO_WFXL_OWN(MEN, __pl) | \
+				 MO_WFXL_OWN(MES, __pl) | \
+				 MO_WFXL_OWN(CSIT, __pl) | \
+				 MO_WFXL_OWN(T0, __pl) | \
+				 MO_WFXL_OWN(T1, __pl) | \
+				 MO_WFXL_OWN(WD, __pl) | \
+				 MO_WFXL_OWN(PM0, __pl) | \
+				 MO_WFXL_OWN(PM1, __pl) | \
+				 MO_WFXL_OWN(PM2, __pl) | \
+				 MO_WFXL_OWN(PM3, __pl))
+
+#define MO_WFXL_VALUE_PL_CUR_PLUS_1	MO_WFXL_VALUE(PL_CUR_PLUS_1)
+#define MO_WFXL_VALUE_PL_CUR		MO_WFXL_VALUE(PL_CUR)
+
+#define MO_WFXM_OWN(__field, __pl) \
+	SFR_SET_VAL_WFXM(MO, __field, __pl)
+
+#define MO_WFXM_VALUE(__pl)	(MO_WFXM_OWN(PMIT, __pl))
+
+#define MO_WFXM_VALUE_PL_CUR_PLUS_1	MO_WFXM_VALUE(PL_CUR_PLUS_1)
+#define MO_WFXM_VALUE_PL_CUR		MO_WFXM_VALUE(PL_CUR)
+
+/**
+ * $ps owner configuration
+ */
+#define PSO_WFXL_OWN(__field, __pl) \
+	SFR_SET_VAL_WFXL(PSO, __field, __pl)
+
+#define PSO_WFXL_BASE_VALUE(__pl)	(PSO_WFXL_OWN(PL0, __pl) | \
+					 PSO_WFXL_OWN(PL1, __pl) | \
+					 PSO_WFXL_OWN(ET, __pl) | \
+					 PSO_WFXL_OWN(HTD, __pl) | \
+					 PSO_WFXL_OWN(IE, __pl) | \
+					 PSO_WFXL_OWN(HLE, __pl) | \
+					 PSO_WFXL_OWN(SRE, __pl) | \
+					 PSO_WFXL_OWN(DAUS, __pl) | \
+					 PSO_WFXL_OWN(ICE, __pl) | \
+					 PSO_WFXL_OWN(USE, __pl) | \
+					 PSO_WFXL_OWN(DCE, __pl) | \
+					 PSO_WFXL_OWN(MME, __pl) | \
+					 PSO_WFXL_OWN(IL0, __pl) | \
+					 PSO_WFXL_OWN(IL1, __pl) | \
+					 PSO_WFXL_OWN(VS0, __pl))
+/* Request additional VS1 when alone */
+#define PSO_WFXL_VALUE_PL_CUR_PLUS_1	(PSO_WFXL_BASE_VALUE(PL_CUR_PLUS_1) | \
+					 PSO_WFXL_OWN(VS1, PL_CUR_PLUS_1))
+#define PSO_WFXL_VALUE_PL_CUR		PSO_WFXL_BASE_VALUE(PL_CUR)
+
+#define PSO_WFXM_OWN(__field, __pl) \
+	SFR_SET_VAL_WFXM(PSO, __field, __pl)
+
+#define PSO_WFXM_VALUE(__pl)	(PSO_WFXM_OWN(V64, __pl) | \
+				 PSO_WFXM_OWN(L2E, __pl) | \
+				 PSO_WFXM_OWN(SME, __pl) | \
+				 PSO_WFXM_OWN(SMR, __pl) | \
+				 PSO_WFXM_OWN(PMJ0, __pl) | \
+				 PSO_WFXM_OWN(PMJ1, __pl) | \
+				 PSO_WFXM_OWN(PMJ2, __pl) | \
+				 PSO_WFXM_OWN(PMJ3, __pl) | \
+				 PSO_WFXM_OWN(MMUP, __pl))
+
+/* Request additional VS1 */
+#define PSO_WFXM_VALUE_PL_CUR_PLUS_1	PSO_WFXM_VALUE(PL_CUR_PLUS_1)
+#define PSO_WFXM_VALUE_PL_CUR		PSO_WFXM_VALUE(PL_CUR)
+
+#endif	/* _ASM_KVX_PRIVILEGE_H */
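All of these owner words follow one pattern: each resource gets a 2-bit
privilege-level field, and a wfxl-style update carries a 32-bit clear mask
in the low half of the operand and a 32-bit set value in the high half
(compare the *_WFXL_CLEAR/*_WFXL_SET constants in sfr_defs.h). A
self-contained model of how an owner word is composed — SFR_SET_VAL_WFXL is
modeled here, not copied from the series::

  #include <stdint.h>
  #include <stdio.h>

  #define PL_CUR		0	/* current privilege level */
  #define PL_CUR_PLUS_1	1	/* current privilege level + 1 */

  /* Modeled semantics: clear the 2-bit field (low half of the operand),
   * set the new owner value shifted into the high half. */
  #define SET_VAL_WFXL(shift, pl) \
  	(((uint64_t)0x3 << (shift)) | (((uint64_t)(pl) << (shift)) << 32))

  int main(void)
  {
  	/* Give syscall quarters Q0 (bits 0-1) and Q1 (bits 2-3) to PL+1. */
  	uint64_t syo = SET_VAL_WFXL(0, PL_CUR_PLUS_1) |
  		       SET_VAL_WFXL(2, PL_CUR_PLUS_1);

  	printf("wfxl operand: 0x%016llx\n", (unsigned long long)syo);
  	return 0;
  }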
diff --git a/arch/kvx/include/asm/processor.h b/arch/kvx/include/asm/processor.h
new file mode 100644
index 0000000000000..18edca066b603
--- /dev/null
+++ b/arch/kvx/include/asm/processor.h
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2024 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Guillaume Thouvenin
+ *            Marius Gligor
+ *            Yann Sionneau
+ */
+
+#ifndef _ASM_KVX_PROCESSOR_H
+#define _ASM_KVX_PROCESSOR_H
+
+#include
+#include
+#include
+#include
+
+#define ARCH_HAS_PREFETCH
+#define ARCH_HAS_PREFETCHW
+
+static inline void prefetch(const void *x)
+{
+	__builtin_prefetch(x);
+}
+
+static inline void prefetchw(const void *x)
+{
+	__builtin_prefetch(x, 1);
+}
+
+#define TASK_SIZE	_BITULL(MMU_USR_ADDR_BITS)
+#define TASK_SIZE_MAX	TASK_SIZE
+
+/*
+ * This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE >> 1)
+
+#define STACK_TOP	TASK_SIZE
+#define STACK_TOP_MAX	STACK_TOP
+
+/* Stack alignment constant */
+#define STACK_ALIGNMENT		32
+#define STACK_ALIGN_MASK	(STACK_ALIGNMENT - 1)
+
+#define cpu_relax()	barrier()
+
+/* Size of the register saving area for the refill handler (enough for 3 quad regs) */
+#define SAVE_AREA_SIZE	12
+
+#define TCA_REG_COUNT	48
+
+/* TCA registers are 256 bits wide */
+struct tca_reg {
+	uint64_t x;
+	uint64_t y;
+	uint64_t z;
+	uint64_t t;
+};
+
+/**
+ * According to the kvx ABI, the following registers are callee-saved:
+ * fp (r14) r18 r19 r20 r21 r22 r23 r24 r25 r26 r27 r28 r29 r30 r31.
+ * To switch from one task to another, we only need to save these
+ * registers plus sp (r12) and ra.
+ *
+ * WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
+ *
+ * Do not reorder the following fields!
+ * They are used by asm-offsets for octuple stores, so they must all
+ * directly follow each other.
+ */
+struct ctx_switch_regs {
+
+	uint64_t fp;
+
+	uint64_t ra;	/* Return address */
+	uint64_t sp;
+	uint64_t r18;
+	uint64_t r19;
+
+	uint64_t r20;
+	uint64_t r21;
+	uint64_t r22;
+	uint64_t r23;
+
+	uint64_t r24;
+	uint64_t r25;
+	uint64_t r26;
+	uint64_t r27;
+
+	uint64_t r28;
+	uint64_t r29;
+	uint64_t r30;
+	uint64_t r31;
+
+#ifdef CONFIG_ENABLE_TCA
+	struct tca_reg tca_regs[TCA_REG_COUNT];
+	uint8_t tca_regs_saved;
+#endif
+};
+
+struct debug_info {
+};
+
+struct thread_struct {
+	uint64_t kernel_sp;
+	uint64_t save_area[SAVE_AREA_SIZE];	/* regs save area */
+
+#ifdef CONFIG_KVX_MMU_STATS
+	uint64_t trap_entry_ts;
+#endif
+	/* Context switch related registers */
+	struct ctx_switch_regs ctx_switch;
+
+	/* debugging */
+	struct debug_info debug;
+} __packed;
+
+#define INIT_THREAD  {						\
+	.ctx_switch.sp =					\
+		sizeof(init_stack) + (unsigned long) &init_stack, \
+}
+
+#define KSTK_ESP(tsk)	(task_pt_regs(tsk)->sp)
+#define KSTK_EIP(tsk)	(task_pt_regs(tsk)->spc)
+
+#define task_pt_regs(p) \
+	((struct pt_regs *)(task_stack_page(p) + THREAD_SIZE) - 1)
+
+#define thread_saved_reg(__tsk, __reg) \
+	((unsigned long) ((__tsk)->thread.ctx_switch.__reg))
+
+void release_thread(struct task_struct *t);
+
+void start_thread(struct pt_regs *regs, unsigned long pc, unsigned long sp);
+
+unsigned long __get_wchan(struct task_struct *p);
+
+extern void ret_from_kernel_thread(void);
+
+/* User return function */
+extern void ret_from_fork(void);
+
+static inline void wait_for_interrupt(void)
+{
+	__builtin_kvx_await();
+	kvx_sfr_set_field(WS, WU0, 0);
+}
+
+static inline void local_cpu_stop(void)
+{
+	/* Clear Wake-Up 2 to allow the stop instruction to work */
+	kvx_sfr_set_field(WS, WU2, 0);
+	__asm__ __volatile__ (
+		"1: stop\n"
+		";;\n"
+		"goto 1b\n"
+		";;\n"
+		);
}
+
+struct cpuinfo_kvx {
+	u64 freq;
+	u8 arch_rev;
+	u8 uarch_rev;
+	u8 copro_enable;
+};
+
+struct device_node;
+int kvx_of_parent_cpuid(struct device_node *node, unsigned long *cpuid);
+
+DECLARE_PER_CPU_READ_MOSTLY(struct cpuinfo_kvx, cpu_info);
+
+#endif	/* _ASM_KVX_PROCESSOR_H */
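The sfr.h helpers in the next diff split one 64-bit mask/value update into
two hardware operations: wfxl touches bits 0-31 and wfxm bits 32-63, each
taking a (mask, value) pair packed as value << 32 | mask. (The two lost
#include targets in that diff are reconstructed as <linux/types.h> and
<asm/sfr_defs.h>; that is an assumption.) A user-space model of the
__kvx_sfr_set_mask() semantics::

  #include <stdint.h>
  #include <stdio.h>

  /* Model of what the hardware does for wfxl/wfxm: clear the masked
   * bits of the selected 32-bit half, then OR in the new value. */
  static void wf_half(uint64_t *reg, unsigned int hi, uint64_t packed)
  {
  	uint64_t mask = packed & 0xFFFFFFFFULL;
  	uint64_t val = packed >> 32;
  	unsigned int shift = hi ? 32 : 0;

  	*reg = (*reg & ~(mask << shift)) | (val << shift);
  }

  /* Same packing as make_sfr_val() in sfr.h. */
  static uint64_t make_sfr_val(uint64_t mask, uint64_t value)
  {
  	return ((value & 0xFFFFFFFF) << 32) | (mask & 0xFFFFFFFF);
  }

  int main(void)
  {
  	uint64_t sfr = 0xDEADBEEFCAFEF00DULL;

  	/* Set bits 4..7 of the low half to 0xA, as sfr_set_mask would. */
  	wf_half(&sfr, 0, make_sfr_val(0xF0, 0xA0));
  	printf("sfr = 0x%016llx\n", (unsigned long long)sfr);
  	return 0;
  }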
diff --git a/arch/kvx/include/asm/sfr.h b/arch/kvx/include/asm/sfr.h
new file mode 100644
index 0000000000000..d91aaa335bb9f
--- /dev/null
+++ b/arch/kvx/include/asm/sfr.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Yann Sionneau
+ *            Guillaume Thouvenin
+ */
+
+#ifndef _ASM_KVX_SFR_H
+#define _ASM_KVX_SFR_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+
+#include <asm/sfr_defs.h>
+
+#define wfxl(_sfr, _val)	__builtin_kvx_wfxl(_sfr, _val)
+
+#define wfxm(_sfr, _val)	__builtin_kvx_wfxm(_sfr, _val)
+
+static inline void
+__kvx_sfr_set_bit(unsigned char sfr, unsigned char bit)
+{
+	if (bit < 32)
+		wfxl(sfr, ((uint64_t) 1 << bit) << 32);
+	else
+		wfxm(sfr, (uint64_t) 1 << bit);
+}
+
+#define kvx_sfr_set_bit(__sfr, __bit) \
+	__kvx_sfr_set_bit(KVX_SFR_ ## __sfr, __bit)
+
+static inline uint64_t make_sfr_val(uint64_t mask, uint64_t value)
+{
+	return ((value & 0xFFFFFFFF) << 32) | (mask & 0xFFFFFFFF);
+}
+
+static inline void
+__kvx_sfr_set_mask(unsigned char sfr, uint64_t mask, uint64_t value)
+{
+	uint64_t wf_val;
+
+	/* Least significant bits */
+	if (mask & 0xFFFFFFFF) {
+		wf_val = make_sfr_val(mask, value);
+		wfxl(sfr, wf_val);
+	}
+
+	/* Most significant bits */
+	if (mask & (0xFFFFFFFFULL << 32)) {
+		value >>= 32;
+		mask >>= 32;
+		wf_val = make_sfr_val(mask, value);
+		wfxm(sfr, wf_val);
+	}
+}
+
+static inline u64 kvx_sfr_iget(unsigned char sfr)
+{
+	u64 res = sfr;
+
+	asm volatile ("iget %0" : "+r"(res) :: );
+	return res;
+}
+
+#ifdef CONFIG_DEBUG_SFR_SET_MASK
+# define kvx_sfr_set_mask(__sfr, __mask, __value) \
+	do { \
+		BUG_ON(((__value) & (__mask)) != (__value)); \
+		__kvx_sfr_set_mask(KVX_SFR_ ## __sfr, __mask, __value); \
+	} while (0)
+
+#else
+# define kvx_sfr_set_mask(__sfr, __mask, __value) \
+	__kvx_sfr_set_mask(KVX_SFR_ ## __sfr, __mask, __value)
+#endif
+
+#define kvx_sfr_set_field(sfr, field, value) \
+	kvx_sfr_set_mask(sfr, KVX_SFR_ ## sfr ## _ ## field ## _MASK, \
+		((uint64_t) (value) << KVX_SFR_ ## sfr ## _ ## field ## _SHIFT))
+
+static inline void
+__kvx_sfr_clear_bit(unsigned char sfr, unsigned char bit)
+{
+	if (bit < 32)
+		wfxl(sfr, (uint64_t) 1 << bit);
+	else
+		wfxm(sfr, (uint64_t) 1 << (bit - 32));
+}
+
+#define kvx_sfr_clear_bit(__sfr, __bit) \
+	__kvx_sfr_clear_bit(KVX_SFR_ ## __sfr, __bit)
+
+#define kvx_sfr_set(_sfr, _val)	__builtin_kvx_set(KVX_SFR_ ## _sfr, _val)
+#define kvx_sfr_get(_sfr)	__builtin_kvx_get(KVX_SFR_ ## _sfr)
+
+#define kvx_sfr_field_val(_val, _sfr, _field) \
+	(((_val) & KVX_SFR_ ## _sfr ## _ ## _field ## _MASK) \
+	 >> KVX_SFR_ ## _sfr ## _ ## _field ## _SHIFT)
+
+#define kvx_sfr_bit(_sfr, _field) \
+	BIT_ULL(KVX_SFR_ ## _sfr ## _ ## _field ## _SHIFT)
+
+#endif
+
+#endif	/* _ASM_KVX_SFR_H */
diff --git a/arch/kvx/include/asm/sfr_defs.h b/arch/kvx/include/asm/sfr_defs.h
new file mode 100644
index 0000000000000..7e6318348b83a
--- /dev/null
+++ b/arch/kvx/include/asm/sfr_defs.h
@@ -0,0 +1,5028 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_SFR_DEFS_H +#define _ASM_KVX_SFR_DEFS_H + +#include + +/* Register file indices */ +#define KVX_SFR_PC 0 /* Program Counter $pc $s0 */ +#define KVX_SFR_PS 1 /* Processor State $ps $s1 */ +#define KVX_SFR_PCR 2 /* Processing Identification $pcr $s2 */ +#define KVX_SFR_RA 3 /* Return Address $ra $s3 */ +#define KVX_SFR_CS 4 /* Compute Status $cs $s4 */ +#define KVX_SFR_CSIT 5 /* Compute Status arithmetic interrupt $csit $s5 */ +#define KVX_SFR_AESPC 6 /* Arithmetic Exception Saved PC $aespc $s6 */ +#define KVX_SFR_LS 7 /* Loop Start Address $ls $s7 */ +#define KVX_SFR_LE 8 /* Loop Exit Address $le $s8 */ +#define KVX_SFR_LC 9 /* Loop Counter $lc $s9 */ +#define KVX_SFR_IPE 10 /* Inter Processor Event $ipe $s10 */ +#define KVX_SFR_MEN 11 /* Misc External Notifications $men $s11 */ +#define KVX_SFR_PMC 12 /* Performance Monitor Control $pmc $s12 */ +#define KVX_SFR_PM0 13 /* Performance Monitor 0 $pm0 $s13 */ +#define KVX_SFR_PM1 14 /* Performance Monitor 1 $pm1 $s14 */ +#define KVX_SFR_PM2 15 /* Performance Monitor 2 $pm2 $s15 */ +#define KVX_SFR_PM3 16 /* Performance Monitor 3 $pm3 $s16 */ +#define KVX_SFR_PMSA 17 /* Performance Monitor Saved Address $pmsa $s17 */ +#define KVX_SFR_TCR 18 /* Timer Control $tcr $s18 */ +#define KVX_SFR_T0V 19 /* Timer 0 value $t0v $s19 */ +#define KVX_SFR_T1V 20 /* Timer 1 value $t1v $s20 */ +#define KVX_SFR_T0R 21 /* Timer 0 reload value $t0r $s21 */ +#define KVX_SFR_T1R 22 /* Timer 1 reload value $t1r $s22 */ +#define KVX_SFR_WDV 23 /* Watchdog Value $wdv $s23 */ +#define KVX_SFR_WDR 24 /* Watchdog Reload $wdr $s24 */ +#define KVX_SFR_ILE 25 /* Interrupt Line Enable $ile $s25 */ +#define KVX_SFR_ILL 26 /* Interrupt Line Level $ill $s26 */ +#define KVX_SFR_ILR 27 /* Interrupt Line Request $ilr $s27 */ +#define KVX_SFR_MMC 28 /* Memory Management Control $mmc $s28 */ +#define KVX_SFR_TEL 29 /* TLB Entry Low $tel $s29 */ +#define KVX_SFR_TEH 30 /* TLB Entry High $teh $s30 */ +#define KVX_SFR_SYO 32 /* Syscall Owners $syo $s32 */ +#define KVX_SFR_HTO 33 /* Hardware Trap Owners $hto $s33 */ +#define KVX_SFR_ITO 34 /* Interrupt Owners $ito $s34 */ +#define KVX_SFR_DO 35 /* Debug Owners $do $s35 */ +#define KVX_SFR_MO 36 /* Miscellaneous Owners $mo $s36 */ +#define KVX_SFR_PSO 37 /* PS register fields Owners $pso $s37 */ +#define KVX_SFR_DC 40 /* OCE (Debug) Control $dc $s40 */ +#define KVX_SFR_DBA0 41 /* Breakpoint Address 0 $dba0 $s41 */ +#define KVX_SFR_DBA1 42 /* Breakpoint Address 1 $dba1 $s42 */ +#define KVX_SFR_DWA0 43 /* Watchpoint Address 0 $dwa0 $s43 */ +#define KVX_SFR_DWA1 44 /* Watchpoint Address 1 $dwa1 $s44 */ +#define KVX_SFR_MES 45 /* Memory Error Status $mes $s45 */ +#define KVX_SFR_WS 46 /* Wake-Up Status $ws $s46 */ +#define KVX_SFR_SPC_PL0 64 /* Shadow PC PL 0 $spc_pl0 $s64 */ +#define KVX_SFR_SPC_PL1 65 /* Shadow PC PL 1 $spc_pl1 $s65 */ +#define KVX_SFR_SPC_PL2 66 /* Shadow PC PL 2 $spc_pl2 $s66 */ +#define KVX_SFR_SPC_PL3 67 /* Shadow PC PL 3 $spc_pl3 $s67 */ +#define KVX_SFR_SPS_PL0 68 /* Shadow PS PL 0 $sps_pl0 $s68 */ +#define KVX_SFR_SPS_PL1 69 /* Shadow PS PL 1 $sps_pl1 $s69 */ +#define KVX_SFR_SPS_PL2 70 /* Shadow PS PL 2 $sps_pl2 $s70 */ +#define KVX_SFR_SPS_PL3 71 /* Shadow PS PL 3 $sps_pl3 $s71 */ +#define KVX_SFR_EA_PL0 72 /* Effective Address PL0 $ea_pl0 $s72 */ +#define KVX_SFR_EA_PL1 73 /* Effective Address PL1 $ea_pl1 $s73 */ +#define KVX_SFR_EA_PL2 74 /* Effective Address PL2 $ea_pl2 $s74 */ +#define KVX_SFR_EA_PL3 75 /* Effective Address PL3 $ea_pl3 $s75 */ +#define 
KVX_SFR_EV_PL0 76 /* Exception Vector PL 0 $ev_pl0 $s76 */ +#define KVX_SFR_EV_PL1 77 /* Exception Vector PL 1 $ev_pl1 $s77 */ +#define KVX_SFR_EV_PL2 78 /* Exception Vector PL 2 $ev_pl2 $s78 */ +#define KVX_SFR_EV_PL3 79 /* Exception Vector PL 3 $ev_pl3 $s79 */ +#define KVX_SFR_SR_PL0 80 /* System Register PL 0 $sr_pl0 $s80 */ +#define KVX_SFR_SR_PL1 81 /* System Register PL 1 $sr_pl1 $s81 */ +#define KVX_SFR_SR_PL2 82 /* System Register PL 2 $sr_pl2 $s82 */ +#define KVX_SFR_SR_PL3 83 /* System Register PL 3 $sr_pl3 $s83 */ +#define KVX_SFR_ES_PL0 84 /* Exception Syndrome PL 0 $es_pl0 $s84 */ +#define KVX_SFR_ES_PL1 85 /* Exception Syndrome PL 1 $es_pl1 $s85 */ +#define KVX_SFR_ES_PL2 86 /* Exception Syndrome PL 2 $es_pl2 $s86 */ +#define KVX_SFR_ES_PL3 87 /* Exception Syndrome PL 3 $es_pl3 $s87 */ +#define KVX_SFR_SYOW 96 /* Alias to SYO register $syow $s96 */ +#define KVX_SFR_HTOW 97 /* Alias to HTO register $htow $s97 */ +#define KVX_SFR_ITOW 98 /* Alias to ITO register $itow $s98 */ +#define KVX_SFR_DOW 99 /* Alias to DO register $dow $s99 */ +#define KVX_SFR_MOW 100 /* Alias to MO register $mow $s100 */ +#define KVX_SFR_PSOW 101 /* Alias to PSO register $psow $s101 */ +#define KVX_SFR_SPC 128 /* Shadow PC alias on SPC_PL $spc $s128 */ +#define KVX_SFR_SPS 132 /* Shadow PS alias on PS_PL $sps $s132 */ +#define KVX_SFR_EA 136 /* Effective Address alias on EA_PL $ea $s136 */ +#define KVX_SFR_EV 140 /* Exception Vector alias on EV_PL $ev $s140 */ +#define KVX_SFR_SR 144 /* System Register alias on SR_PL $sr $s144 */ +#define KVX_SFR_ES 148 /* Exception Syndrome alias on ES_PL $es $s148 */ +#define KVX_SFR_VSFR0 256 /* Virtual SFR 0 $vsfr0 $s256 */ +#define KVX_SFR_VSFR1 257 /* Virtual SFR 1 $vsfr1 $s257 */ +#define KVX_SFR_VSFR2 258 /* Virtual SFR 2 $vsfr2 $s258 */ +#define KVX_SFR_VSFR3 259 /* Virtual SFR 3 $vsfr3 $s259 */ +#define KVX_SFR_VSFR4 260 /* Virtual SFR 4 $vsfr4 $s260 */ +#define KVX_SFR_VSFR5 261 /* Virtual SFR 5 $vsfr5 $s261 */ +#define KVX_SFR_VSFR6 262 /* Virtual SFR 6 $vsfr6 $s262 */ +#define KVX_SFR_VSFR7 263 /* Virtual SFR 7 $vsfr7 $s263 */ +#define KVX_SFR_VSFR8 264 /* Virtual SFR 8 $vsfr8 $s264 */ +#define KVX_SFR_VSFR9 265 /* Virtual SFR 9 $vsfr9 $s265 */ +#define KVX_SFR_VSFR10 266 /* Virtual SFR 10 $vsfr10 $s266 */ +#define KVX_SFR_VSFR11 267 /* Virtual SFR 11 $vsfr11 $s267 */ +#define KVX_SFR_VSFR12 268 /* Virtual SFR 12 $vsfr12 $s268 */ +#define KVX_SFR_VSFR13 269 /* Virtual SFR 13 $vsfr13 $s269 */ +#define KVX_SFR_VSFR14 270 /* Virtual SFR 14 $vsfr14 $s270 */ +#define KVX_SFR_VSFR15 271 /* Virtual SFR 15 $vsfr15 $s271 */ +#define KVX_SFR_VSFR16 272 /* Virtual SFR 16 $vsfr16 $s272 */ +#define KVX_SFR_VSFR17 273 /* Virtual SFR 17 $vsfr17 $s273 */ +#define KVX_SFR_VSFR18 274 /* Virtual SFR 18 $vsfr18 $s274 */ +#define KVX_SFR_VSFR19 275 /* Virtual SFR 19 $vsfr19 $s275 */ +#define KVX_SFR_VSFR20 276 /* Virtual SFR 20 $vsfr20 $s276 */ +#define KVX_SFR_VSFR21 277 /* Virtual SFR 21 $vsfr21 $s277 */ +#define KVX_SFR_VSFR22 278 /* Virtual SFR 22 $vsfr22 $s278 */ +#define KVX_SFR_VSFR23 279 /* Virtual SFR 23 $vsfr23 $s279 */ +#define KVX_SFR_VSFR24 280 /* Virtual SFR 24 $vsfr24 $s280 */ +#define KVX_SFR_VSFR25 281 /* Virtual SFR 25 $vsfr25 $s281 */ +#define KVX_SFR_VSFR26 282 /* Virtual SFR 26 $vsfr26 $s282 */ +#define KVX_SFR_VSFR27 283 /* Virtual SFR 27 $vsfr27 $s283 */ +#define KVX_SFR_VSFR28 284 /* Virtual SFR 28 $vsfr28 $s284 */ +#define KVX_SFR_VSFR29 285 /* Virtual SFR 29 $vsfr29 $s285 */ +#define KVX_SFR_VSFR30 286 /* Virtual SFR 30 $vsfr30 $s286 */ +#define 
KVX_SFR_VSFR31 287 /* Virtual SFR 31 $vsfr31 $s287 */ +#define KVX_SFR_VSFR32 288 /* Virtual SFR 32 $vsfr32 $s288 */ +#define KVX_SFR_VSFR33 289 /* Virtual SFR 33 $vsfr33 $s289 */ +#define KVX_SFR_VSFR34 290 /* Virtual SFR 34 $vsfr34 $s290 */ +#define KVX_SFR_VSFR35 291 /* Virtual SFR 35 $vsfr35 $s291 */ +#define KVX_SFR_VSFR36 292 /* Virtual SFR 36 $vsfr36 $s292 */ +#define KVX_SFR_VSFR37 293 /* Virtual SFR 37 $vsfr37 $s293 */ +#define KVX_SFR_VSFR38 294 /* Virtual SFR 38 $vsfr38 $s294 */ +#define KVX_SFR_VSFR39 295 /* Virtual SFR 39 $vsfr39 $s295 */ +#define KVX_SFR_VSFR40 296 /* Virtual SFR 40 $vsfr40 $s296 */ +#define KVX_SFR_VSFR41 297 /* Virtual SFR 41 $vsfr41 $s297 */ +#define KVX_SFR_VSFR42 298 /* Virtual SFR 42 $vsfr42 $s298 */ +#define KVX_SFR_VSFR43 299 /* Virtual SFR 43 $vsfr43 $s299 */ +#define KVX_SFR_VSFR44 300 /* Virtual SFR 44 $vsfr44 $s300 */ +#define KVX_SFR_VSFR45 301 /* Virtual SFR 45 $vsfr45 $s301 */ +#define KVX_SFR_VSFR46 302 /* Virtual SFR 46 $vsfr46 $s302 */ +#define KVX_SFR_VSFR47 303 /* Virtual SFR 47 $vsfr47 $s303 */ +#define KVX_SFR_VSFR48 304 /* Virtual SFR 48 $vsfr48 $s304 */ +#define KVX_SFR_VSFR49 305 /* Virtual SFR 49 $vsfr49 $s305 */ +#define KVX_SFR_VSFR50 306 /* Virtual SFR 50 $vsfr50 $s306 */ +#define KVX_SFR_VSFR51 307 /* Virtual SFR 51 $vsfr51 $s307 */ +#define KVX_SFR_VSFR52 308 /* Virtual SFR 52 $vsfr52 $s308 */ +#define KVX_SFR_VSFR53 309 /* Virtual SFR 53 $vsfr53 $s309 */ +#define KVX_SFR_VSFR54 310 /* Virtual SFR 54 $vsfr54 $s310 */ +#define KVX_SFR_VSFR55 311 /* Virtual SFR 55 $vsfr55 $s311 */ +#define KVX_SFR_VSFR56 312 /* Virtual SFR 56 $vsfr56 $s312 */ +#define KVX_SFR_VSFR57 313 /* Virtual SFR 57 $vsfr57 $s313 */ +#define KVX_SFR_VSFR58 314 /* Virtual SFR 58 $vsfr58 $s314 */ +#define KVX_SFR_VSFR59 315 /* Virtual SFR 59 $vsfr59 $s315 */ +#define KVX_SFR_VSFR60 316 /* Virtual SFR 60 $vsfr60 $s316 */ +#define KVX_SFR_VSFR61 317 /* Virtual SFR 61 $vsfr61 $s317 */ +#define KVX_SFR_VSFR62 318 /* Virtual SFR 62 $vsfr62 $s318 */ +#define KVX_SFR_VSFR63 319 /* Virtual SFR 63 $vsfr63 $s319 */ +#define KVX_SFR_VSFR64 320 /* Virtual SFR 64 $vsfr64 $s320 */ +#define KVX_SFR_VSFR65 321 /* Virtual SFR 65 $vsfr65 $s321 */ +#define KVX_SFR_VSFR66 322 /* Virtual SFR 66 $vsfr66 $s322 */ +#define KVX_SFR_VSFR67 323 /* Virtual SFR 67 $vsfr67 $s323 */ +#define KVX_SFR_VSFR68 324 /* Virtual SFR 68 $vsfr68 $s324 */ +#define KVX_SFR_VSFR69 325 /* Virtual SFR 69 $vsfr69 $s325 */ +#define KVX_SFR_VSFR70 326 /* Virtual SFR 70 $vsfr70 $s326 */ +#define KVX_SFR_VSFR71 327 /* Virtual SFR 71 $vsfr71 $s327 */ +#define KVX_SFR_VSFR72 328 /* Virtual SFR 72 $vsfr72 $s328 */ +#define KVX_SFR_VSFR73 329 /* Virtual SFR 73 $vsfr73 $s329 */ +#define KVX_SFR_VSFR74 330 /* Virtual SFR 74 $vsfr74 $s330 */ +#define KVX_SFR_VSFR75 331 /* Virtual SFR 75 $vsfr75 $s331 */ +#define KVX_SFR_VSFR76 332 /* Virtual SFR 76 $vsfr76 $s332 */ +#define KVX_SFR_VSFR77 333 /* Virtual SFR 77 $vsfr77 $s333 */ +#define KVX_SFR_VSFR78 334 /* Virtual SFR 78 $vsfr78 $s334 */ +#define KVX_SFR_VSFR79 335 /* Virtual SFR 79 $vsfr79 $s335 */ +#define KVX_SFR_VSFR80 336 /* Virtual SFR 80 $vsfr80 $s336 */ +#define KVX_SFR_VSFR81 337 /* Virtual SFR 81 $vsfr81 $s337 */ +#define KVX_SFR_VSFR82 338 /* Virtual SFR 82 $vsfr82 $s338 */ +#define KVX_SFR_VSFR83 339 /* Virtual SFR 83 $vsfr83 $s339 */ +#define KVX_SFR_VSFR84 340 /* Virtual SFR 84 $vsfr84 $s340 */ +#define KVX_SFR_VSFR85 341 /* Virtual SFR 85 $vsfr85 $s341 */ +#define KVX_SFR_VSFR86 342 /* Virtual SFR 86 $vsfr86 $s342 */ +#define KVX_SFR_VSFR87 343 /* 
Virtual SFR 87 $vsfr87 $s343 */ +#define KVX_SFR_VSFR88 344 /* Virtual SFR 88 $vsfr88 $s344 */ +#define KVX_SFR_VSFR89 345 /* Virtual SFR 89 $vsfr89 $s345 */ +#define KVX_SFR_VSFR90 346 /* Virtual SFR 90 $vsfr90 $s346 */ +#define KVX_SFR_VSFR91 347 /* Virtual SFR 91 $vsfr91 $s347 */ +#define KVX_SFR_VSFR92 348 /* Virtual SFR 92 $vsfr92 $s348 */ +#define KVX_SFR_VSFR93 349 /* Virtual SFR 93 $vsfr93 $s349 */ +#define KVX_SFR_VSFR94 350 /* Virtual SFR 94 $vsfr94 $s350 */ +#define KVX_SFR_VSFR95 351 /* Virtual SFR 95 $vsfr95 $s351 */ +#define KVX_SFR_VSFR96 352 /* Virtual SFR 96 $vsfr96 $s352 */ +#define KVX_SFR_VSFR97 353 /* Virtual SFR 97 $vsfr97 $s353 */ +#define KVX_SFR_VSFR98 354 /* Virtual SFR 98 $vsfr98 $s354 */ +#define KVX_SFR_VSFR99 355 /* Virtual SFR 99 $vsfr99 $s355 */ +#define KVX_SFR_VSFR100 356 /* Virtual SFR 100 $vsfr100 $s356 */ +#define KVX_SFR_VSFR101 357 /* Virtual SFR 101 $vsfr101 $s357 */ +#define KVX_SFR_VSFR102 358 /* Virtual SFR 102 $vsfr102 $s358 */ +#define KVX_SFR_VSFR103 359 /* Virtual SFR 103 $vsfr103 $s359 */ +#define KVX_SFR_VSFR104 360 /* Virtual SFR 104 $vsfr104 $s360 */ +#define KVX_SFR_VSFR105 361 /* Virtual SFR 105 $vsfr105 $s361 */ +#define KVX_SFR_VSFR106 362 /* Virtual SFR 106 $vsfr106 $s362 */ +#define KVX_SFR_VSFR107 363 /* Virtual SFR 107 $vsfr107 $s363 */ +#define KVX_SFR_VSFR108 364 /* Virtual SFR 108 $vsfr108 $s364 */ +#define KVX_SFR_VSFR109 365 /* Virtual SFR 109 $vsfr109 $s365 */ +#define KVX_SFR_VSFR110 366 /* Virtual SFR 110 $vsfr110 $s366 */ +#define KVX_SFR_VSFR111 367 /* Virtual SFR 111 $vsfr111 $s367 */ +#define KVX_SFR_VSFR112 368 /* Virtual SFR 112 $vsfr112 $s368 */ +#define KVX_SFR_VSFR113 369 /* Virtual SFR 113 $vsfr113 $s369 */ +#define KVX_SFR_VSFR114 370 /* Virtual SFR 114 $vsfr114 $s370 */ +#define KVX_SFR_VSFR115 371 /* Virtual SFR 115 $vsfr115 $s371 */ +#define KVX_SFR_VSFR116 372 /* Virtual SFR 116 $vsfr116 $s372 */ +#define KVX_SFR_VSFR117 373 /* Virtual SFR 117 $vsfr117 $s373 */ +#define KVX_SFR_VSFR118 374 /* Virtual SFR 118 $vsfr118 $s374 */ +#define KVX_SFR_VSFR119 375 /* Virtual SFR 119 $vsfr119 $s375 */ +#define KVX_SFR_VSFR120 376 /* Virtual SFR 120 $vsfr120 $s376 */ +#define KVX_SFR_VSFR121 377 /* Virtual SFR 121 $vsfr121 $s377 */ +#define KVX_SFR_VSFR122 378 /* Virtual SFR 122 $vsfr122 $s378 */ +#define KVX_SFR_VSFR123 379 /* Virtual SFR 123 $vsfr123 $s379 */ +#define KVX_SFR_VSFR124 380 /* Virtual SFR 124 $vsfr124 $s380 */ +#define KVX_SFR_VSFR125 381 /* Virtual SFR 125 $vsfr125 $s381 */ +#define KVX_SFR_VSFR126 382 /* Virtual SFR 126 $vsfr126 $s382 */ +#define KVX_SFR_VSFR127 383 /* Virtual SFR 127 $vsfr127 $s383 */ +#define KVX_SFR_VSFR128 384 /* Virtual SFR 128 $vsfr128 $s384 */ +#define KVX_SFR_VSFR129 385 /* Virtual SFR 129 $vsfr129 $s385 */ +#define KVX_SFR_VSFR130 386 /* Virtual SFR 130 $vsfr130 $s386 */ +#define KVX_SFR_VSFR131 387 /* Virtual SFR 131 $vsfr131 $s387 */ +#define KVX_SFR_VSFR132 388 /* Virtual SFR 132 $vsfr132 $s388 */ +#define KVX_SFR_VSFR133 389 /* Virtual SFR 133 $vsfr133 $s389 */ +#define KVX_SFR_VSFR134 390 /* Virtual SFR 134 $vsfr134 $s390 */ +#define KVX_SFR_VSFR135 391 /* Virtual SFR 135 $vsfr135 $s391 */ +#define KVX_SFR_VSFR136 392 /* Virtual SFR 136 $vsfr136 $s392 */ +#define KVX_SFR_VSFR137 393 /* Virtual SFR 137 $vsfr137 $s393 */ +#define KVX_SFR_VSFR138 394 /* Virtual SFR 138 $vsfr138 $s394 */ +#define KVX_SFR_VSFR139 395 /* Virtual SFR 139 $vsfr139 $s395 */ +#define KVX_SFR_VSFR140 396 /* Virtual SFR 140 $vsfr140 $s396 */ +#define KVX_SFR_VSFR141 397 /* Virtual SFR 141 $vsfr141 
$s397 */ +#define KVX_SFR_VSFR142 398 /* Virtual SFR 142 $vsfr142 $s398 */ +#define KVX_SFR_VSFR143 399 /* Virtual SFR 143 $vsfr143 $s399 */ +#define KVX_SFR_VSFR144 400 /* Virtual SFR 144 $vsfr144 $s400 */ +#define KVX_SFR_VSFR145 401 /* Virtual SFR 145 $vsfr145 $s401 */ +#define KVX_SFR_VSFR146 402 /* Virtual SFR 146 $vsfr146 $s402 */ +#define KVX_SFR_VSFR147 403 /* Virtual SFR 147 $vsfr147 $s403 */ +#define KVX_SFR_VSFR148 404 /* Virtual SFR 148 $vsfr148 $s404 */ +#define KVX_SFR_VSFR149 405 /* Virtual SFR 149 $vsfr149 $s405 */ +#define KVX_SFR_VSFR150 406 /* Virtual SFR 150 $vsfr150 $s406 */ +#define KVX_SFR_VSFR151 407 /* Virtual SFR 151 $vsfr151 $s407 */ +#define KVX_SFR_VSFR152 408 /* Virtual SFR 152 $vsfr152 $s408 */ +#define KVX_SFR_VSFR153 409 /* Virtual SFR 153 $vsfr153 $s409 */ +#define KVX_SFR_VSFR154 410 /* Virtual SFR 154 $vsfr154 $s410 */ +#define KVX_SFR_VSFR155 411 /* Virtual SFR 155 $vsfr155 $s411 */ +#define KVX_SFR_VSFR156 412 /* Virtual SFR 156 $vsfr156 $s412 */ +#define KVX_SFR_VSFR157 413 /* Virtual SFR 157 $vsfr157 $s413 */ +#define KVX_SFR_VSFR158 414 /* Virtual SFR 158 $vsfr158 $s414 */ +#define KVX_SFR_VSFR159 415 /* Virtual SFR 159 $vsfr159 $s415 */ +#define KVX_SFR_VSFR160 416 /* Virtual SFR 160 $vsfr160 $s416 */ +#define KVX_SFR_VSFR161 417 /* Virtual SFR 161 $vsfr161 $s417 */ +#define KVX_SFR_VSFR162 418 /* Virtual SFR 162 $vsfr162 $s418 */ +#define KVX_SFR_VSFR163 419 /* Virtual SFR 163 $vsfr163 $s419 */ +#define KVX_SFR_VSFR164 420 /* Virtual SFR 164 $vsfr164 $s420 */ +#define KVX_SFR_VSFR165 421 /* Virtual SFR 165 $vsfr165 $s421 */ +#define KVX_SFR_VSFR166 422 /* Virtual SFR 166 $vsfr166 $s422 */ +#define KVX_SFR_VSFR167 423 /* Virtual SFR 167 $vsfr167 $s423 */ +#define KVX_SFR_VSFR168 424 /* Virtual SFR 168 $vsfr168 $s424 */ +#define KVX_SFR_VSFR169 425 /* Virtual SFR 169 $vsfr169 $s425 */ +#define KVX_SFR_VSFR170 426 /* Virtual SFR 170 $vsfr170 $s426 */ +#define KVX_SFR_VSFR171 427 /* Virtual SFR 171 $vsfr171 $s427 */ +#define KVX_SFR_VSFR172 428 /* Virtual SFR 172 $vsfr172 $s428 */ +#define KVX_SFR_VSFR173 429 /* Virtual SFR 173 $vsfr173 $s429 */ +#define KVX_SFR_VSFR174 430 /* Virtual SFR 174 $vsfr174 $s430 */ +#define KVX_SFR_VSFR175 431 /* Virtual SFR 175 $vsfr175 $s431 */ +#define KVX_SFR_VSFR176 432 /* Virtual SFR 176 $vsfr176 $s432 */ +#define KVX_SFR_VSFR177 433 /* Virtual SFR 177 $vsfr177 $s433 */ +#define KVX_SFR_VSFR178 434 /* Virtual SFR 178 $vsfr178 $s434 */ +#define KVX_SFR_VSFR179 435 /* Virtual SFR 179 $vsfr179 $s435 */ +#define KVX_SFR_VSFR180 436 /* Virtual SFR 180 $vsfr180 $s436 */ +#define KVX_SFR_VSFR181 437 /* Virtual SFR 181 $vsfr181 $s437 */ +#define KVX_SFR_VSFR182 438 /* Virtual SFR 182 $vsfr182 $s438 */ +#define KVX_SFR_VSFR183 439 /* Virtual SFR 183 $vsfr183 $s439 */ +#define KVX_SFR_VSFR184 440 /* Virtual SFR 184 $vsfr184 $s440 */ +#define KVX_SFR_VSFR185 441 /* Virtual SFR 185 $vsfr185 $s441 */ +#define KVX_SFR_VSFR186 442 /* Virtual SFR 186 $vsfr186 $s442 */ +#define KVX_SFR_VSFR187 443 /* Virtual SFR 187 $vsfr187 $s443 */ +#define KVX_SFR_VSFR188 444 /* Virtual SFR 188 $vsfr188 $s444 */ +#define KVX_SFR_VSFR189 445 /* Virtual SFR 189 $vsfr189 $s445 */ +#define KVX_SFR_VSFR190 446 /* Virtual SFR 190 $vsfr190 $s446 */ +#define KVX_SFR_VSFR191 447 /* Virtual SFR 191 $vsfr191 $s447 */ +#define KVX_SFR_VSFR192 448 /* Virtual SFR 192 $vsfr192 $s448 */ +#define KVX_SFR_VSFR193 449 /* Virtual SFR 193 $vsfr193 $s449 */ +#define KVX_SFR_VSFR194 450 /* Virtual SFR 194 $vsfr194 $s450 */ +#define KVX_SFR_VSFR195 451 /* Virtual SFR 195 
$vsfr195 $s451 */ +#define KVX_SFR_VSFR196 452 /* Virtual SFR 196 $vsfr196 $s452 */ +#define KVX_SFR_VSFR197 453 /* Virtual SFR 197 $vsfr197 $s453 */ +#define KVX_SFR_VSFR198 454 /* Virtual SFR 198 $vsfr198 $s454 */ +#define KVX_SFR_VSFR199 455 /* Virtual SFR 199 $vsfr199 $s455 */ +#define KVX_SFR_VSFR200 456 /* Virtual SFR 200 $vsfr200 $s456 */ +#define KVX_SFR_VSFR201 457 /* Virtual SFR 201 $vsfr201 $s457 */ +#define KVX_SFR_VSFR202 458 /* Virtual SFR 202 $vsfr202 $s458 */ +#define KVX_SFR_VSFR203 459 /* Virtual SFR 203 $vsfr203 $s459 */ +#define KVX_SFR_VSFR204 460 /* Virtual SFR 204 $vsfr204 $s460 */ +#define KVX_SFR_VSFR205 461 /* Virtual SFR 205 $vsfr205 $s461 */ +#define KVX_SFR_VSFR206 462 /* Virtual SFR 206 $vsfr206 $s462 */ +#define KVX_SFR_VSFR207 463 /* Virtual SFR 207 $vsfr207 $s463 */ +#define KVX_SFR_VSFR208 464 /* Virtual SFR 208 $vsfr208 $s464 */ +#define KVX_SFR_VSFR209 465 /* Virtual SFR 209 $vsfr209 $s465 */ +#define KVX_SFR_VSFR210 466 /* Virtual SFR 210 $vsfr210 $s466 */ +#define KVX_SFR_VSFR211 467 /* Virtual SFR 211 $vsfr211 $s467 */ +#define KVX_SFR_VSFR212 468 /* Virtual SFR 212 $vsfr212 $s468 */ +#define KVX_SFR_VSFR213 469 /* Virtual SFR 213 $vsfr213 $s469 */ +#define KVX_SFR_VSFR214 470 /* Virtual SFR 214 $vsfr214 $s470 */ +#define KVX_SFR_VSFR215 471 /* Virtual SFR 215 $vsfr215 $s471 */ +#define KVX_SFR_VSFR216 472 /* Virtual SFR 216 $vsfr216 $s472 */ +#define KVX_SFR_VSFR217 473 /* Virtual SFR 217 $vsfr217 $s473 */ +#define KVX_SFR_VSFR218 474 /* Virtual SFR 218 $vsfr218 $s474 */ +#define KVX_SFR_VSFR219 475 /* Virtual SFR 219 $vsfr219 $s475 */ +#define KVX_SFR_VSFR220 476 /* Virtual SFR 220 $vsfr220 $s476 */ +#define KVX_SFR_VSFR221 477 /* Virtual SFR 221 $vsfr221 $s477 */ +#define KVX_SFR_VSFR222 478 /* Virtual SFR 222 $vsfr222 $s478 */ +#define KVX_SFR_VSFR223 479 /* Virtual SFR 223 $vsfr223 $s479 */ +#define KVX_SFR_VSFR224 480 /* Virtual SFR 224 $vsfr224 $s480 */ +#define KVX_SFR_VSFR225 481 /* Virtual SFR 225 $vsfr225 $s481 */ +#define KVX_SFR_VSFR226 482 /* Virtual SFR 226 $vsfr226 $s482 */ +#define KVX_SFR_VSFR227 483 /* Virtual SFR 227 $vsfr227 $s483 */ +#define KVX_SFR_VSFR228 484 /* Virtual SFR 228 $vsfr228 $s484 */ +#define KVX_SFR_VSFR229 485 /* Virtual SFR 229 $vsfr229 $s485 */ +#define KVX_SFR_VSFR230 486 /* Virtual SFR 230 $vsfr230 $s486 */ +#define KVX_SFR_VSFR231 487 /* Virtual SFR 231 $vsfr231 $s487 */ +#define KVX_SFR_VSFR232 488 /* Virtual SFR 232 $vsfr232 $s488 */ +#define KVX_SFR_VSFR233 489 /* Virtual SFR 233 $vsfr233 $s489 */ +#define KVX_SFR_VSFR234 490 /* Virtual SFR 234 $vsfr234 $s490 */ +#define KVX_SFR_VSFR235 491 /* Virtual SFR 235 $vsfr235 $s491 */ +#define KVX_SFR_VSFR236 492 /* Virtual SFR 236 $vsfr236 $s492 */ +#define KVX_SFR_VSFR237 493 /* Virtual SFR 237 $vsfr237 $s493 */ +#define KVX_SFR_VSFR238 494 /* Virtual SFR 238 $vsfr238 $s494 */ +#define KVX_SFR_VSFR239 495 /* Virtual SFR 239 $vsfr239 $s495 */ +#define KVX_SFR_VSFR240 496 /* Virtual SFR 240 $vsfr240 $s496 */ +#define KVX_SFR_VSFR241 497 /* Virtual SFR 241 $vsfr241 $s497 */ +#define KVX_SFR_VSFR242 498 /* Virtual SFR 242 $vsfr242 $s498 */ +#define KVX_SFR_VSFR243 499 /* Virtual SFR 243 $vsfr243 $s499 */ +#define KVX_SFR_VSFR244 500 /* Virtual SFR 244 $vsfr244 $s500 */ +#define KVX_SFR_VSFR245 501 /* Virtual SFR 245 $vsfr245 $s501 */ +#define KVX_SFR_VSFR246 502 /* Virtual SFR 246 $vsfr246 $s502 */ +#define KVX_SFR_VSFR247 503 /* Virtual SFR 247 $vsfr247 $s503 */ +#define KVX_SFR_VSFR248 504 /* Virtual SFR 248 $vsfr248 $s504 */ +#define KVX_SFR_VSFR249 505 /* 
Virtual SFR 249 $vsfr249 $s505 */ +#define KVX_SFR_VSFR250 506 /* Virtual SFR 250 $vsfr250 $s506 */ +#define KVX_SFR_VSFR251 507 /* Virtual SFR 251 $vsfr251 $s507 */ +#define KVX_SFR_VSFR252 508 /* Virtual SFR 252 $vsfr252 $s508 */ +#define KVX_SFR_VSFR253 509 /* Virtual SFR 253 $vsfr253 $s509 */ +#define KVX_SFR_VSFR254 510 /* Virtual SFR 254 $vsfr254 $s510 */ +#define KVX_SFR_VSFR255 511 /* Virtual SFR 255 $vsfr255 $s511 */ + +/* Register field masks */ + +#define KVX_SFR_MEN_MEN_MASK _ULL(0xffff) /* Miscellaneous External Notifi= cations */ +#define KVX_SFR_MEN_MEN_SHIFT 0 +#define KVX_SFR_MEN_MEN_WIDTH 16 +#define KVX_SFR_MEN_MEN_WFXL_MASK _ULL(0xffff) +#define KVX_SFR_MEN_MEN_WFXL_CLEAR _ULL(0xffff) +#define KVX_SFR_MEN_MEN_WFXL_SET _ULL(0xffff00000000) + +#define KVX_SFR_SYO_Q0_MASK _ULL(0x3) /* Quarter 0 syscalls 0 to 1023 owne= r */ +#define KVX_SFR_SYO_Q0_SHIFT 0 +#define KVX_SFR_SYO_Q0_WIDTH 2 +#define KVX_SFR_SYO_Q0_WFXL_MASK _ULL(0x3) +#define KVX_SFR_SYO_Q0_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_SYO_Q0_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_SYO_Q1_MASK _ULL(0xc) /* Quarter 1 syscalls 1024 to 2047 o= wner */ +#define KVX_SFR_SYO_Q1_SHIFT 2 +#define KVX_SFR_SYO_Q1_WIDTH 2 +#define KVX_SFR_SYO_Q1_WFXL_MASK _ULL(0xc) +#define KVX_SFR_SYO_Q1_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_SYO_Q1_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_SYO_Q2_MASK _ULL(0x30) /* Quarter 2 syscalls 2048 to 3071 = owner */ +#define KVX_SFR_SYO_Q2_SHIFT 4 +#define KVX_SFR_SYO_Q2_WIDTH 2 +#define KVX_SFR_SYO_Q2_WFXL_MASK _ULL(0x30) +#define KVX_SFR_SYO_Q2_WFXL_CLEAR _ULL(0x30) +#define KVX_SFR_SYO_Q2_WFXL_SET _ULL(0x3000000000) + +#define KVX_SFR_SYO_Q3_MASK _ULL(0xc0) /* Quarter 3 syscalls 3072 to 4095 = owner */ +#define KVX_SFR_SYO_Q3_SHIFT 6 +#define KVX_SFR_SYO_Q3_WIDTH 2 +#define KVX_SFR_SYO_Q3_WFXL_MASK _ULL(0xc0) +#define KVX_SFR_SYO_Q3_WFXL_CLEAR _ULL(0xc0) +#define KVX_SFR_SYO_Q3_WFXL_SET _ULL(0xc000000000) + +#define KVX_SFR_SYOW_Q0_MASK _ULL(0x3) /* Quarter 0 syscalls 0 to 1023 own= er */ +#define KVX_SFR_SYOW_Q0_SHIFT 0 +#define KVX_SFR_SYOW_Q0_WIDTH 2 +#define KVX_SFR_SYOW_Q0_WFXL_MASK _ULL(0x3) +#define KVX_SFR_SYOW_Q0_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_SYOW_Q0_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_SYOW_Q1_MASK _ULL(0xc) /* Quarter 1 syscalls 1024 to 2047 = owner */ +#define KVX_SFR_SYOW_Q1_SHIFT 2 +#define KVX_SFR_SYOW_Q1_WIDTH 2 +#define KVX_SFR_SYOW_Q1_WFXL_MASK _ULL(0xc) +#define KVX_SFR_SYOW_Q1_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_SYOW_Q1_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_SYOW_Q2_MASK _ULL(0x30) /* Quarter 2 syscalls 2048 to 3071= owner */ +#define KVX_SFR_SYOW_Q2_SHIFT 4 +#define KVX_SFR_SYOW_Q2_WIDTH 2 +#define KVX_SFR_SYOW_Q2_WFXL_MASK _ULL(0x30) +#define KVX_SFR_SYOW_Q2_WFXL_CLEAR _ULL(0x30) +#define KVX_SFR_SYOW_Q2_WFXL_SET _ULL(0x3000000000) + +#define KVX_SFR_SYOW_Q3_MASK _ULL(0xc0) /* Quarter 3 syscalls 3072 to 4095= owner */ +#define KVX_SFR_SYOW_Q3_SHIFT 6 +#define KVX_SFR_SYOW_Q3_WIDTH 2 +#define KVX_SFR_SYOW_Q3_WFXL_MASK _ULL(0xc0) +#define KVX_SFR_SYOW_Q3_WFXL_CLEAR _ULL(0xc0) +#define KVX_SFR_SYOW_Q3_WFXL_SET _ULL(0xc000000000) + +#define KVX_SFR_HTO_OPC_MASK _ULL(0x3) /* OPCode trap owner */ +#define KVX_SFR_HTO_OPC_SHIFT 0 +#define KVX_SFR_HTO_OPC_WIDTH 2 +#define KVX_SFR_HTO_OPC_WFXL_MASK _ULL(0x3) +#define KVX_SFR_HTO_OPC_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_HTO_OPC_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_HTO_DMIS_MASK _ULL(0xc) /* Data MISalign access trap owner= */ +#define KVX_SFR_HTO_DMIS_SHIFT 2 +#define KVX_SFR_HTO_DMIS_WIDTH 2 
+#define KVX_SFR_HTO_DMIS_WFXL_MASK _ULL(0xc)
+#define KVX_SFR_HTO_DMIS_WFXL_CLEAR _ULL(0xc)
+#define KVX_SFR_HTO_DMIS_WFXL_SET _ULL(0xc00000000)
+
+#define KVX_SFR_HTO_PSYS_MASK _ULL(0x30) /* Program System Error trap owner */
+#define KVX_SFR_HTO_PSYS_SHIFT 4
+#define KVX_SFR_HTO_PSYS_WIDTH 2
+#define KVX_SFR_HTO_PSYS_WFXL_MASK _ULL(0x30)
+#define KVX_SFR_HTO_PSYS_WFXL_CLEAR _ULL(0x30)
+#define KVX_SFR_HTO_PSYS_WFXL_SET _ULL(0x3000000000)
+
+#define KVX_SFR_HTO_DSYS_MASK _ULL(0xc0) /* Data System Error trap owner */
+#define KVX_SFR_HTO_DSYS_SHIFT 6
+#define KVX_SFR_HTO_DSYS_WIDTH 2
+#define KVX_SFR_HTO_DSYS_WFXL_MASK _ULL(0xc0)
+#define KVX_SFR_HTO_DSYS_WFXL_CLEAR _ULL(0xc0)
+#define KVX_SFR_HTO_DSYS_WFXL_SET _ULL(0xc000000000)
+
+#define KVX_SFR_HTO_DECCG_MASK _ULL(0x300) /* Double ECC traps group owner */
+#define KVX_SFR_HTO_DECCG_SHIFT 8
+#define KVX_SFR_HTO_DECCG_WIDTH 2
+#define KVX_SFR_HTO_DECCG_WFXL_MASK _ULL(0x300)
+#define KVX_SFR_HTO_DECCG_WFXL_CLEAR _ULL(0x300)
+#define KVX_SFR_HTO_DECCG_WFXL_SET _ULL(0x30000000000)
+
+#define KVX_SFR_HTO_SECCG_MASK _ULL(0xc00) /* Single ECC traps group owner */
+#define KVX_SFR_HTO_SECCG_SHIFT 10
+#define KVX_SFR_HTO_SECCG_WIDTH 2
+#define KVX_SFR_HTO_SECCG_WFXL_MASK _ULL(0xc00)
+#define KVX_SFR_HTO_SECCG_WFXL_CLEAR _ULL(0xc00)
+#define KVX_SFR_HTO_SECCG_WFXL_SET _ULL(0xc0000000000)
+
+#define KVX_SFR_HTO_NOMAP_MASK _ULL(0x3000) /* No mapping trap owner */
+#define KVX_SFR_HTO_NOMAP_SHIFT 12
+#define KVX_SFR_HTO_NOMAP_WIDTH 2
+#define KVX_SFR_HTO_NOMAP_WFXL_MASK _ULL(0x3000)
+#define KVX_SFR_HTO_NOMAP_WFXL_CLEAR _ULL(0x3000)
+#define KVX_SFR_HTO_NOMAP_WFXL_SET _ULL(0x300000000000)
+
+#define KVX_SFR_HTO_PROT_MASK _ULL(0xc000) /* PROTection trap owner */
+#define KVX_SFR_HTO_PROT_SHIFT 14
+#define KVX_SFR_HTO_PROT_WIDTH 2
+#define KVX_SFR_HTO_PROT_WFXL_MASK _ULL(0xc000)
+#define KVX_SFR_HTO_PROT_WFXL_CLEAR _ULL(0xc000)
+#define KVX_SFR_HTO_PROT_WFXL_SET _ULL(0xc00000000000)
+
+#define KVX_SFR_HTO_W2CL_MASK _ULL(0x30000) /* Write to clean trap owner */
+#define KVX_SFR_HTO_W2CL_SHIFT 16
+#define KVX_SFR_HTO_W2CL_WIDTH 2
+#define KVX_SFR_HTO_W2CL_WFXL_MASK _ULL(0x30000)
+#define KVX_SFR_HTO_W2CL_WFXL_CLEAR _ULL(0x30000)
+#define KVX_SFR_HTO_W2CL_WFXL_SET _ULL(0x3000000000000)
+
+#define KVX_SFR_HTO_A2CL_MASK _ULL(0xc0000) /* Atomic to clean trap owner */
+#define KVX_SFR_HTO_A2CL_SHIFT 18
+#define KVX_SFR_HTO_A2CL_WIDTH 2
+#define KVX_SFR_HTO_A2CL_WFXL_MASK _ULL(0xc0000)
+#define KVX_SFR_HTO_A2CL_WFXL_CLEAR _ULL(0xc0000)
+#define KVX_SFR_HTO_A2CL_WFXL_SET _ULL(0xc000000000000)
+
+#define KVX_SFR_HTO_DE_MASK _ULL(0x300000) /* Double Exception trap owner */
+#define KVX_SFR_HTO_DE_SHIFT 20
+#define KVX_SFR_HTO_DE_WIDTH 2
+#define KVX_SFR_HTO_DE_WFXL_MASK _ULL(0x300000)
+#define KVX_SFR_HTO_DE_WFXL_CLEAR _ULL(0x300000)
+#define KVX_SFR_HTO_DE_WFXL_SET _ULL(0x30000000000000)
+
+#define KVX_SFR_HTO_VSFR_MASK _ULL(0xc00000) /* Virtual SFR trap owner */
+#define KVX_SFR_HTO_VSFR_SHIFT 22
+#define KVX_SFR_HTO_VSFR_WIDTH 2
+#define KVX_SFR_HTO_VSFR_WFXL_MASK _ULL(0xc00000)
+#define KVX_SFR_HTO_VSFR_WFXL_CLEAR _ULL(0xc00000)
+#define KVX_SFR_HTO_VSFR_WFXL_SET _ULL(0xc0000000000000)
+
+#define KVX_SFR_HTO_PLO_MASK _ULL(0x3000000) /* Privilege Level Overflow trap owner */
+#define KVX_SFR_HTO_PLO_SHIFT 24
+#define KVX_SFR_HTO_PLO_WIDTH 2
+#define KVX_SFR_HTO_PLO_WFXL_MASK _ULL(0x3000000)
+#define KVX_SFR_HTO_PLO_WFXL_CLEAR _ULL(0x3000000)
+#define KVX_SFR_HTO_PLO_WFXL_SET _ULL(0x300000000000000)
+
+#define KVX_SFR_HTOW_OPC_MASK _ULL(0x3) /* OPCode trap owner */
+#define KVX_SFR_HTOW_OPC_SHIFT 0
+#define KVX_SFR_HTOW_OPC_WIDTH 2
+#define KVX_SFR_HTOW_OPC_WFXL_MASK _ULL(0x3)
+#define KVX_SFR_HTOW_OPC_WFXL_CLEAR _ULL(0x3)
+#define KVX_SFR_HTOW_OPC_WFXL_SET _ULL(0x300000000)
+
+#define KVX_SFR_HTOW_DMIS_MASK _ULL(0xc) /* Data MISalign access trap owner */
+#define KVX_SFR_HTOW_DMIS_SHIFT 2
+#define KVX_SFR_HTOW_DMIS_WIDTH 2
+#define KVX_SFR_HTOW_DMIS_WFXL_MASK _ULL(0xc)
+#define KVX_SFR_HTOW_DMIS_WFXL_CLEAR _ULL(0xc)
+#define KVX_SFR_HTOW_DMIS_WFXL_SET _ULL(0xc00000000)
+
+#define KVX_SFR_HTOW_PSYS_MASK _ULL(0x30) /* Program System Error trap owner */
+#define KVX_SFR_HTOW_PSYS_SHIFT 4
+#define KVX_SFR_HTOW_PSYS_WIDTH 2
+#define KVX_SFR_HTOW_PSYS_WFXL_MASK _ULL(0x30)
+#define KVX_SFR_HTOW_PSYS_WFXL_CLEAR _ULL(0x30)
+#define KVX_SFR_HTOW_PSYS_WFXL_SET _ULL(0x3000000000)
+
+#define KVX_SFR_HTOW_DSYS_MASK _ULL(0xc0) /* Data System Error trap owner */
+#define KVX_SFR_HTOW_DSYS_SHIFT 6
+#define KVX_SFR_HTOW_DSYS_WIDTH 2
+#define KVX_SFR_HTOW_DSYS_WFXL_MASK _ULL(0xc0)
+#define KVX_SFR_HTOW_DSYS_WFXL_CLEAR _ULL(0xc0)
+#define KVX_SFR_HTOW_DSYS_WFXL_SET _ULL(0xc000000000)
+
+#define KVX_SFR_HTOW_DECCG_MASK _ULL(0x300) /* Double ECC traps group owner */
+#define KVX_SFR_HTOW_DECCG_SHIFT 8
+#define KVX_SFR_HTOW_DECCG_WIDTH 2
+#define KVX_SFR_HTOW_DECCG_WFXL_MASK _ULL(0x300)
+#define KVX_SFR_HTOW_DECCG_WFXL_CLEAR _ULL(0x300)
+#define KVX_SFR_HTOW_DECCG_WFXL_SET _ULL(0x30000000000)
+
+#define KVX_SFR_HTOW_SECCG_MASK _ULL(0xc00) /* Single ECC traps group owner */
+#define KVX_SFR_HTOW_SECCG_SHIFT 10
+#define KVX_SFR_HTOW_SECCG_WIDTH 2
+#define KVX_SFR_HTOW_SECCG_WFXL_MASK _ULL(0xc00)
+#define KVX_SFR_HTOW_SECCG_WFXL_CLEAR _ULL(0xc00)
+#define KVX_SFR_HTOW_SECCG_WFXL_SET _ULL(0xc0000000000)
+
+#define KVX_SFR_HTOW_NOMAP_MASK _ULL(0x3000) /* No mapping trap owner */
+#define KVX_SFR_HTOW_NOMAP_SHIFT 12
+#define KVX_SFR_HTOW_NOMAP_WIDTH 2
+#define KVX_SFR_HTOW_NOMAP_WFXL_MASK _ULL(0x3000)
+#define KVX_SFR_HTOW_NOMAP_WFXL_CLEAR _ULL(0x3000)
+#define KVX_SFR_HTOW_NOMAP_WFXL_SET _ULL(0x300000000000)
+
+#define KVX_SFR_HTOW_PROT_MASK _ULL(0xc000) /* PROTection trap owner */
+#define KVX_SFR_HTOW_PROT_SHIFT 14
+#define KVX_SFR_HTOW_PROT_WIDTH 2
+#define KVX_SFR_HTOW_PROT_WFXL_MASK _ULL(0xc000)
+#define KVX_SFR_HTOW_PROT_WFXL_CLEAR _ULL(0xc000)
+#define KVX_SFR_HTOW_PROT_WFXL_SET _ULL(0xc00000000000)
+
+#define KVX_SFR_HTOW_W2CL_MASK _ULL(0x30000) /* Write to clean trap owner */
+#define KVX_SFR_HTOW_W2CL_SHIFT 16
+#define KVX_SFR_HTOW_W2CL_WIDTH 2
+#define KVX_SFR_HTOW_W2CL_WFXL_MASK _ULL(0x30000)
+#define KVX_SFR_HTOW_W2CL_WFXL_CLEAR _ULL(0x30000)
+#define KVX_SFR_HTOW_W2CL_WFXL_SET _ULL(0x3000000000000)
+
+#define KVX_SFR_HTOW_A2CL_MASK _ULL(0xc0000) /* Atomic to clean trap owner */
+#define KVX_SFR_HTOW_A2CL_SHIFT 18
+#define KVX_SFR_HTOW_A2CL_WIDTH 2
+#define KVX_SFR_HTOW_A2CL_WFXL_MASK _ULL(0xc0000)
+#define KVX_SFR_HTOW_A2CL_WFXL_CLEAR _ULL(0xc0000)
+#define KVX_SFR_HTOW_A2CL_WFXL_SET _ULL(0xc000000000000)
+
+#define KVX_SFR_HTOW_DE_MASK _ULL(0x300000) /* Double Exception trap owner */
+#define KVX_SFR_HTOW_DE_SHIFT 20
+#define KVX_SFR_HTOW_DE_WIDTH 2
+#define KVX_SFR_HTOW_DE_WFXL_MASK _ULL(0x300000)
+#define KVX_SFR_HTOW_DE_WFXL_CLEAR _ULL(0x300000)
+#define KVX_SFR_HTOW_DE_WFXL_SET _ULL(0x30000000000000)
+
+#define KVX_SFR_HTOW_VSFR_MASK _ULL(0xc00000) /* Virtual SFR trap owner */
+#define KVX_SFR_HTOW_VSFR_SHIFT 22
+#define KVX_SFR_HTOW_VSFR_WIDTH 2
+#define KVX_SFR_HTOW_VSFR_WFXL_MASK _ULL(0xc00000)
+#define
KVX_SFR_HTOW_VSFR_WFXL_CLEAR _ULL(0xc00000) +#define KVX_SFR_HTOW_VSFR_WFXL_SET _ULL(0xc0000000000000) + +#define KVX_SFR_HTOW_PLO_MASK _ULL(0x3000000) /* Privilege Level Overflow = trap owner */ +#define KVX_SFR_HTOW_PLO_SHIFT 24 +#define KVX_SFR_HTOW_PLO_WIDTH 2 +#define KVX_SFR_HTOW_PLO_WFXL_MASK _ULL(0x3000000) +#define KVX_SFR_HTOW_PLO_WFXL_CLEAR _ULL(0x3000000) +#define KVX_SFR_HTOW_PLO_WFXL_SET _ULL(0x300000000000000) + +#define KVX_SFR_ITO_IT0_MASK _ULL(0x3) /* Interrupt 0 owner */ +#define KVX_SFR_ITO_IT0_SHIFT 0 +#define KVX_SFR_ITO_IT0_WIDTH 2 +#define KVX_SFR_ITO_IT0_WFXL_MASK _ULL(0x3) +#define KVX_SFR_ITO_IT0_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_ITO_IT0_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_ITO_IT1_MASK _ULL(0xc) /* Interrupt 1 owner */ +#define KVX_SFR_ITO_IT1_SHIFT 2 +#define KVX_SFR_ITO_IT1_WIDTH 2 +#define KVX_SFR_ITO_IT1_WFXL_MASK _ULL(0xc) +#define KVX_SFR_ITO_IT1_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_ITO_IT1_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_ITO_IT2_MASK _ULL(0x30) /* Interrupt 2 owner */ +#define KVX_SFR_ITO_IT2_SHIFT 4 +#define KVX_SFR_ITO_IT2_WIDTH 2 +#define KVX_SFR_ITO_IT2_WFXL_MASK _ULL(0x30) +#define KVX_SFR_ITO_IT2_WFXL_CLEAR _ULL(0x30) +#define KVX_SFR_ITO_IT2_WFXL_SET _ULL(0x3000000000) + +#define KVX_SFR_ITO_IT3_MASK _ULL(0xc0) /* Interrupt 3 owner */ +#define KVX_SFR_ITO_IT3_SHIFT 6 +#define KVX_SFR_ITO_IT3_WIDTH 2 +#define KVX_SFR_ITO_IT3_WFXL_MASK _ULL(0xc0) +#define KVX_SFR_ITO_IT3_WFXL_CLEAR _ULL(0xc0) +#define KVX_SFR_ITO_IT3_WFXL_SET _ULL(0xc000000000) + +#define KVX_SFR_ITO_IT4_MASK _ULL(0x300) /* Interrupt 4 owner */ +#define KVX_SFR_ITO_IT4_SHIFT 8 +#define KVX_SFR_ITO_IT4_WIDTH 2 +#define KVX_SFR_ITO_IT4_WFXL_MASK _ULL(0x300) +#define KVX_SFR_ITO_IT4_WFXL_CLEAR _ULL(0x300) +#define KVX_SFR_ITO_IT4_WFXL_SET _ULL(0x30000000000) + +#define KVX_SFR_ITO_IT5_MASK _ULL(0xc00) /* Interrupt 5 owner */ +#define KVX_SFR_ITO_IT5_SHIFT 10 +#define KVX_SFR_ITO_IT5_WIDTH 2 +#define KVX_SFR_ITO_IT5_WFXL_MASK _ULL(0xc00) +#define KVX_SFR_ITO_IT5_WFXL_CLEAR _ULL(0xc00) +#define KVX_SFR_ITO_IT5_WFXL_SET _ULL(0xc0000000000) + +#define KVX_SFR_ITO_IT6_MASK _ULL(0x3000) /* Interrupt 6 owner */ +#define KVX_SFR_ITO_IT6_SHIFT 12 +#define KVX_SFR_ITO_IT6_WIDTH 2 +#define KVX_SFR_ITO_IT6_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_ITO_IT6_WFXL_CLEAR _ULL(0x3000) +#define KVX_SFR_ITO_IT6_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_ITO_IT7_MASK _ULL(0xc000) /* Interrupt 7 owner */ +#define KVX_SFR_ITO_IT7_SHIFT 14 +#define KVX_SFR_ITO_IT7_WIDTH 2 +#define KVX_SFR_ITO_IT7_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_ITO_IT7_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_ITO_IT7_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_ITO_IT8_MASK _ULL(0x30000) /* Interrupt 8 owner */ +#define KVX_SFR_ITO_IT8_SHIFT 16 +#define KVX_SFR_ITO_IT8_WIDTH 2 +#define KVX_SFR_ITO_IT8_WFXL_MASK _ULL(0x30000) +#define KVX_SFR_ITO_IT8_WFXL_CLEAR _ULL(0x30000) +#define KVX_SFR_ITO_IT8_WFXL_SET _ULL(0x3000000000000) + +#define KVX_SFR_ITO_IT9_MASK _ULL(0xc0000) /* Interrupt 9 owner */ +#define KVX_SFR_ITO_IT9_SHIFT 18 +#define KVX_SFR_ITO_IT9_WIDTH 2 +#define KVX_SFR_ITO_IT9_WFXL_MASK _ULL(0xc0000) +#define KVX_SFR_ITO_IT9_WFXL_CLEAR _ULL(0xc0000) +#define KVX_SFR_ITO_IT9_WFXL_SET _ULL(0xc000000000000) + +#define KVX_SFR_ITO_IT10_MASK _ULL(0x300000) /* Interrupt 10 owner */ +#define KVX_SFR_ITO_IT10_SHIFT 20 +#define KVX_SFR_ITO_IT10_WIDTH 2 +#define KVX_SFR_ITO_IT10_WFXL_MASK _ULL(0x300000) +#define KVX_SFR_ITO_IT10_WFXL_CLEAR _ULL(0x300000) +#define KVX_SFR_ITO_IT10_WFXL_SET 
_ULL(0x30000000000000) + +#define KVX_SFR_ITO_IT11_MASK _ULL(0xc00000) /* Interrupt 11 owner */ +#define KVX_SFR_ITO_IT11_SHIFT 22 +#define KVX_SFR_ITO_IT11_WIDTH 2 +#define KVX_SFR_ITO_IT11_WFXL_MASK _ULL(0xc00000) +#define KVX_SFR_ITO_IT11_WFXL_CLEAR _ULL(0xc00000) +#define KVX_SFR_ITO_IT11_WFXL_SET _ULL(0xc0000000000000) + +#define KVX_SFR_ITO_IT12_MASK _ULL(0x3000000) /* Interrupt 12 owner */ +#define KVX_SFR_ITO_IT12_SHIFT 24 +#define KVX_SFR_ITO_IT12_WIDTH 2 +#define KVX_SFR_ITO_IT12_WFXL_MASK _ULL(0x3000000) +#define KVX_SFR_ITO_IT12_WFXL_CLEAR _ULL(0x3000000) +#define KVX_SFR_ITO_IT12_WFXL_SET _ULL(0x300000000000000) + +#define KVX_SFR_ITO_IT13_MASK _ULL(0xc000000) /* Interrupt 13 owner */ +#define KVX_SFR_ITO_IT13_SHIFT 26 +#define KVX_SFR_ITO_IT13_WIDTH 2 +#define KVX_SFR_ITO_IT13_WFXL_MASK _ULL(0xc000000) +#define KVX_SFR_ITO_IT13_WFXL_CLEAR _ULL(0xc000000) +#define KVX_SFR_ITO_IT13_WFXL_SET _ULL(0xc00000000000000) + +#define KVX_SFR_ITO_IT14_MASK _ULL(0x30000000) /* Interrupt 14 owner */ +#define KVX_SFR_ITO_IT14_SHIFT 28 +#define KVX_SFR_ITO_IT14_WIDTH 2 +#define KVX_SFR_ITO_IT14_WFXL_MASK _ULL(0x30000000) +#define KVX_SFR_ITO_IT14_WFXL_CLEAR _ULL(0x30000000) +#define KVX_SFR_ITO_IT14_WFXL_SET _ULL(0x3000000000000000) + +#define KVX_SFR_ITO_IT15_MASK _ULL(0xc0000000) /* Interrupt 15 owner */ +#define KVX_SFR_ITO_IT15_SHIFT 30 +#define KVX_SFR_ITO_IT15_WIDTH 2 +#define KVX_SFR_ITO_IT15_WFXL_MASK _ULL(0xc0000000) +#define KVX_SFR_ITO_IT15_WFXL_CLEAR _ULL(0xc0000000) +#define KVX_SFR_ITO_IT15_WFXL_SET _ULL(0xc000000000000000) + +#define KVX_SFR_ITO_IT16_MASK _ULL(0x300000000) /* Interrupt 16 owner */ +#define KVX_SFR_ITO_IT16_SHIFT 32 +#define KVX_SFR_ITO_IT16_WIDTH 2 +#define KVX_SFR_ITO_IT16_WFXM_MASK _ULL(0x300000000) +#define KVX_SFR_ITO_IT16_WFXM_CLEAR _ULL(0x3) +#define KVX_SFR_ITO_IT16_WFXM_SET _ULL(0x300000000) + +#define KVX_SFR_ITO_IT17_MASK _ULL(0xc00000000) /* Interrupt 17 owner */ +#define KVX_SFR_ITO_IT17_SHIFT 34 +#define KVX_SFR_ITO_IT17_WIDTH 2 +#define KVX_SFR_ITO_IT17_WFXM_MASK _ULL(0xc00000000) +#define KVX_SFR_ITO_IT17_WFXM_CLEAR _ULL(0xc) +#define KVX_SFR_ITO_IT17_WFXM_SET _ULL(0xc00000000) + +#define KVX_SFR_ITO_IT18_MASK _ULL(0x3000000000) /* Interrupt 18 owner */ +#define KVX_SFR_ITO_IT18_SHIFT 36 +#define KVX_SFR_ITO_IT18_WIDTH 2 +#define KVX_SFR_ITO_IT18_WFXM_MASK _ULL(0x3000000000) +#define KVX_SFR_ITO_IT18_WFXM_CLEAR _ULL(0x30) +#define KVX_SFR_ITO_IT18_WFXM_SET _ULL(0x3000000000) + +#define KVX_SFR_ITO_IT19_MASK _ULL(0xc000000000) /* Interrupt 19 owner */ +#define KVX_SFR_ITO_IT19_SHIFT 38 +#define KVX_SFR_ITO_IT19_WIDTH 2 +#define KVX_SFR_ITO_IT19_WFXM_MASK _ULL(0xc000000000) +#define KVX_SFR_ITO_IT19_WFXM_CLEAR _ULL(0xc0) +#define KVX_SFR_ITO_IT19_WFXM_SET _ULL(0xc000000000) + +#define KVX_SFR_ITO_IT20_MASK _ULL(0x30000000000) /* Interrupt 20 owner */ +#define KVX_SFR_ITO_IT20_SHIFT 40 +#define KVX_SFR_ITO_IT20_WIDTH 2 +#define KVX_SFR_ITO_IT20_WFXM_MASK _ULL(0x30000000000) +#define KVX_SFR_ITO_IT20_WFXM_CLEAR _ULL(0x300) +#define KVX_SFR_ITO_IT20_WFXM_SET _ULL(0x30000000000) + +#define KVX_SFR_ITO_IT21_MASK _ULL(0xc0000000000) /* Interrupt 21 owner */ +#define KVX_SFR_ITO_IT21_SHIFT 42 +#define KVX_SFR_ITO_IT21_WIDTH 2 +#define KVX_SFR_ITO_IT21_WFXM_MASK _ULL(0xc0000000000) +#define KVX_SFR_ITO_IT21_WFXM_CLEAR _ULL(0xc00) +#define KVX_SFR_ITO_IT21_WFXM_SET _ULL(0xc0000000000) + +#define KVX_SFR_ITO_IT22_MASK _ULL(0x300000000000) /* Interrupt 22 owner */ +#define KVX_SFR_ITO_IT22_SHIFT 44 +#define KVX_SFR_ITO_IT22_WIDTH 2 +#define 
KVX_SFR_ITO_IT22_WFXM_MASK _ULL(0x300000000000) +#define KVX_SFR_ITO_IT22_WFXM_CLEAR _ULL(0x3000) +#define KVX_SFR_ITO_IT22_WFXM_SET _ULL(0x300000000000) + +#define KVX_SFR_ITO_IT23_MASK _ULL(0xc00000000000) /* Interrupt 23 owner */ +#define KVX_SFR_ITO_IT23_SHIFT 46 +#define KVX_SFR_ITO_IT23_WIDTH 2 +#define KVX_SFR_ITO_IT23_WFXM_MASK _ULL(0xc00000000000) +#define KVX_SFR_ITO_IT23_WFXM_CLEAR _ULL(0xc000) +#define KVX_SFR_ITO_IT23_WFXM_SET _ULL(0xc00000000000) + +#define KVX_SFR_ITO_IT24_MASK _ULL(0x3000000000000) /* Interrupt 24 owner = */ +#define KVX_SFR_ITO_IT24_SHIFT 48 +#define KVX_SFR_ITO_IT24_WIDTH 2 +#define KVX_SFR_ITO_IT24_WFXM_MASK _ULL(0x3000000000000) +#define KVX_SFR_ITO_IT24_WFXM_CLEAR _ULL(0x30000) +#define KVX_SFR_ITO_IT24_WFXM_SET _ULL(0x3000000000000) + +#define KVX_SFR_ITO_IT25_MASK _ULL(0xc000000000000) /* Interrupt 25 owner = */ +#define KVX_SFR_ITO_IT25_SHIFT 50 +#define KVX_SFR_ITO_IT25_WIDTH 2 +#define KVX_SFR_ITO_IT25_WFXM_MASK _ULL(0xc000000000000) +#define KVX_SFR_ITO_IT25_WFXM_CLEAR _ULL(0xc0000) +#define KVX_SFR_ITO_IT25_WFXM_SET _ULL(0xc000000000000) + +#define KVX_SFR_ITO_IT26_MASK _ULL(0x30000000000000) /* Interrupt 26 owner= */ +#define KVX_SFR_ITO_IT26_SHIFT 52 +#define KVX_SFR_ITO_IT26_WIDTH 2 +#define KVX_SFR_ITO_IT26_WFXM_MASK _ULL(0x30000000000000) +#define KVX_SFR_ITO_IT26_WFXM_CLEAR _ULL(0x300000) +#define KVX_SFR_ITO_IT26_WFXM_SET _ULL(0x30000000000000) + +#define KVX_SFR_ITO_IT27_MASK _ULL(0xc0000000000000) /* Interrupt 27 owner= */ +#define KVX_SFR_ITO_IT27_SHIFT 54 +#define KVX_SFR_ITO_IT27_WIDTH 2 +#define KVX_SFR_ITO_IT27_WFXM_MASK _ULL(0xc0000000000000) +#define KVX_SFR_ITO_IT27_WFXM_CLEAR _ULL(0xc00000) +#define KVX_SFR_ITO_IT27_WFXM_SET _ULL(0xc0000000000000) + +#define KVX_SFR_ITO_IT28_MASK _ULL(0x300000000000000) /* Interrupt 28 owne= r */ +#define KVX_SFR_ITO_IT28_SHIFT 56 +#define KVX_SFR_ITO_IT28_WIDTH 2 +#define KVX_SFR_ITO_IT28_WFXM_MASK _ULL(0x300000000000000) +#define KVX_SFR_ITO_IT28_WFXM_CLEAR _ULL(0x3000000) +#define KVX_SFR_ITO_IT28_WFXM_SET _ULL(0x300000000000000) + +#define KVX_SFR_ITO_IT29_MASK _ULL(0xc00000000000000) /* Interrupt 29 owne= r */ +#define KVX_SFR_ITO_IT29_SHIFT 58 +#define KVX_SFR_ITO_IT29_WIDTH 2 +#define KVX_SFR_ITO_IT29_WFXM_MASK _ULL(0xc00000000000000) +#define KVX_SFR_ITO_IT29_WFXM_CLEAR _ULL(0xc000000) +#define KVX_SFR_ITO_IT29_WFXM_SET _ULL(0xc00000000000000) + +#define KVX_SFR_ITO_IT30_MASK _ULL(0x3000000000000000) /* Interrupt 30 own= er */ +#define KVX_SFR_ITO_IT30_SHIFT 60 +#define KVX_SFR_ITO_IT30_WIDTH 2 +#define KVX_SFR_ITO_IT30_WFXM_MASK _ULL(0x3000000000000000) +#define KVX_SFR_ITO_IT30_WFXM_CLEAR _ULL(0x30000000) +#define KVX_SFR_ITO_IT30_WFXM_SET _ULL(0x3000000000000000) + +#define KVX_SFR_ITO_IT31_MASK _ULL(0xc000000000000000) /* Interrupt 31 own= er */ +#define KVX_SFR_ITO_IT31_SHIFT 62 +#define KVX_SFR_ITO_IT31_WIDTH 2 +#define KVX_SFR_ITO_IT31_WFXM_MASK _ULL(0xc000000000000000) +#define KVX_SFR_ITO_IT31_WFXM_CLEAR _ULL(0xc0000000) +#define KVX_SFR_ITO_IT31_WFXM_SET _ULL(0xc000000000000000) + +#define KVX_SFR_ILE_IT0_MASK _ULL(0x1) /* Interrupt 0 owner */ +#define KVX_SFR_ILE_IT0_SHIFT 0 +#define KVX_SFR_ILE_IT0_WIDTH 1 +#define KVX_SFR_ILE_IT0_WFXL_MASK _ULL(0x1) +#define KVX_SFR_ILE_IT0_WFXL_CLEAR _ULL(0x1) +#define KVX_SFR_ILE_IT0_WFXL_SET _ULL(0x100000000) + +#define KVX_SFR_ILE_IT1_MASK _ULL(0x2) /* Interrupt 1 owner */ +#define KVX_SFR_ILE_IT1_SHIFT 1 +#define KVX_SFR_ILE_IT1_WIDTH 1 +#define KVX_SFR_ILE_IT1_WFXL_MASK _ULL(0x2) +#define KVX_SFR_ILE_IT1_WFXL_CLEAR _ULL(0x2) +#define 
KVX_SFR_ILE_IT1_WFXL_SET _ULL(0x200000000) + +#define KVX_SFR_ILE_IT2_MASK _ULL(0x4) /* Interrupt 2 owner */ +#define KVX_SFR_ILE_IT2_SHIFT 2 +#define KVX_SFR_ILE_IT2_WIDTH 1 +#define KVX_SFR_ILE_IT2_WFXL_MASK _ULL(0x4) +#define KVX_SFR_ILE_IT2_WFXL_CLEAR _ULL(0x4) +#define KVX_SFR_ILE_IT2_WFXL_SET _ULL(0x400000000) + +#define KVX_SFR_ILE_IT3_MASK _ULL(0x8) /* Interrupt 3 owner */ +#define KVX_SFR_ILE_IT3_SHIFT 3 +#define KVX_SFR_ILE_IT3_WIDTH 1 +#define KVX_SFR_ILE_IT3_WFXL_MASK _ULL(0x8) +#define KVX_SFR_ILE_IT3_WFXL_CLEAR _ULL(0x8) +#define KVX_SFR_ILE_IT3_WFXL_SET _ULL(0x800000000) + +#define KVX_SFR_ILE_IT4_MASK _ULL(0x10) /* Interrupt 4 owner */ +#define KVX_SFR_ILE_IT4_SHIFT 4 +#define KVX_SFR_ILE_IT4_WIDTH 1 +#define KVX_SFR_ILE_IT4_WFXL_MASK _ULL(0x10) +#define KVX_SFR_ILE_IT4_WFXL_CLEAR _ULL(0x10) +#define KVX_SFR_ILE_IT4_WFXL_SET _ULL(0x1000000000) + +#define KVX_SFR_ILE_IT5_MASK _ULL(0x20) /* Interrupt 5 owner */ +#define KVX_SFR_ILE_IT5_SHIFT 5 +#define KVX_SFR_ILE_IT5_WIDTH 1 +#define KVX_SFR_ILE_IT5_WFXL_MASK _ULL(0x20) +#define KVX_SFR_ILE_IT5_WFXL_CLEAR _ULL(0x20) +#define KVX_SFR_ILE_IT5_WFXL_SET _ULL(0x2000000000) + +#define KVX_SFR_ILE_IT6_MASK _ULL(0x40) /* Interrupt 6 owner */ +#define KVX_SFR_ILE_IT6_SHIFT 6 +#define KVX_SFR_ILE_IT6_WIDTH 1 +#define KVX_SFR_ILE_IT6_WFXL_MASK _ULL(0x40) +#define KVX_SFR_ILE_IT6_WFXL_CLEAR _ULL(0x40) +#define KVX_SFR_ILE_IT6_WFXL_SET _ULL(0x4000000000) + +#define KVX_SFR_ILE_IT7_MASK _ULL(0x80) /* Interrupt 7 owner */ +#define KVX_SFR_ILE_IT7_SHIFT 7 +#define KVX_SFR_ILE_IT7_WIDTH 1 +#define KVX_SFR_ILE_IT7_WFXL_MASK _ULL(0x80) +#define KVX_SFR_ILE_IT7_WFXL_CLEAR _ULL(0x80) +#define KVX_SFR_ILE_IT7_WFXL_SET _ULL(0x8000000000) + +#define KVX_SFR_ILE_IT8_MASK _ULL(0x100) /* Interrupt 8 owner */ +#define KVX_SFR_ILE_IT8_SHIFT 8 +#define KVX_SFR_ILE_IT8_WIDTH 1 +#define KVX_SFR_ILE_IT8_WFXL_MASK _ULL(0x100) +#define KVX_SFR_ILE_IT8_WFXL_CLEAR _ULL(0x100) +#define KVX_SFR_ILE_IT8_WFXL_SET _ULL(0x10000000000) + +#define KVX_SFR_ILE_IT9_MASK _ULL(0x200) /* Interrupt 9 owner */ +#define KVX_SFR_ILE_IT9_SHIFT 9 +#define KVX_SFR_ILE_IT9_WIDTH 1 +#define KVX_SFR_ILE_IT9_WFXL_MASK _ULL(0x200) +#define KVX_SFR_ILE_IT9_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_ILE_IT9_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_ILE_IT10_MASK _ULL(0x400) /* Interrupt 10 owner */ +#define KVX_SFR_ILE_IT10_SHIFT 10 +#define KVX_SFR_ILE_IT10_WIDTH 1 +#define KVX_SFR_ILE_IT10_WFXL_MASK _ULL(0x400) +#define KVX_SFR_ILE_IT10_WFXL_CLEAR _ULL(0x400) +#define KVX_SFR_ILE_IT10_WFXL_SET _ULL(0x40000000000) + +#define KVX_SFR_ILE_IT11_MASK _ULL(0x800) /* Interrupt 11 owner */ +#define KVX_SFR_ILE_IT11_SHIFT 11 +#define KVX_SFR_ILE_IT11_WIDTH 1 +#define KVX_SFR_ILE_IT11_WFXL_MASK _ULL(0x800) +#define KVX_SFR_ILE_IT11_WFXL_CLEAR _ULL(0x800) +#define KVX_SFR_ILE_IT11_WFXL_SET _ULL(0x80000000000) + +#define KVX_SFR_ILE_IT12_MASK _ULL(0x1000) /* Interrupt 12 owner */ +#define KVX_SFR_ILE_IT12_SHIFT 12 +#define KVX_SFR_ILE_IT12_WIDTH 1 +#define KVX_SFR_ILE_IT12_WFXL_MASK _ULL(0x1000) +#define KVX_SFR_ILE_IT12_WFXL_CLEAR _ULL(0x1000) +#define KVX_SFR_ILE_IT12_WFXL_SET _ULL(0x100000000000) + +#define KVX_SFR_ILE_IT13_MASK _ULL(0x2000) /* Interrupt 13 owner */ +#define KVX_SFR_ILE_IT13_SHIFT 13 +#define KVX_SFR_ILE_IT13_WIDTH 1 +#define KVX_SFR_ILE_IT13_WFXL_MASK _ULL(0x2000) +#define KVX_SFR_ILE_IT13_WFXL_CLEAR _ULL(0x2000) +#define KVX_SFR_ILE_IT13_WFXL_SET _ULL(0x200000000000) + +#define KVX_SFR_ILE_IT14_MASK _ULL(0x4000) /* Interrupt 14 owner */ +#define KVX_SFR_ILE_IT14_SHIFT 14 
+#define KVX_SFR_ILE_IT14_WIDTH 1 +#define KVX_SFR_ILE_IT14_WFXL_MASK _ULL(0x4000) +#define KVX_SFR_ILE_IT14_WFXL_CLEAR _ULL(0x4000) +#define KVX_SFR_ILE_IT14_WFXL_SET _ULL(0x400000000000) + +#define KVX_SFR_ILE_IT15_MASK _ULL(0x8000) /* Interrupt 15 owner */ +#define KVX_SFR_ILE_IT15_SHIFT 15 +#define KVX_SFR_ILE_IT15_WIDTH 1 +#define KVX_SFR_ILE_IT15_WFXL_MASK _ULL(0x8000) +#define KVX_SFR_ILE_IT15_WFXL_CLEAR _ULL(0x8000) +#define KVX_SFR_ILE_IT15_WFXL_SET _ULL(0x800000000000) + +#define KVX_SFR_ILE_IT16_MASK _ULL(0x10000) /* Interrupt 16 owner */ +#define KVX_SFR_ILE_IT16_SHIFT 16 +#define KVX_SFR_ILE_IT16_WIDTH 1 +#define KVX_SFR_ILE_IT16_WFXL_MASK _ULL(0x10000) +#define KVX_SFR_ILE_IT16_WFXL_CLEAR _ULL(0x10000) +#define KVX_SFR_ILE_IT16_WFXL_SET _ULL(0x1000000000000) + +#define KVX_SFR_ILE_IT17_MASK _ULL(0x20000) /* Interrupt 17 owner */ +#define KVX_SFR_ILE_IT17_SHIFT 17 +#define KVX_SFR_ILE_IT17_WIDTH 1 +#define KVX_SFR_ILE_IT17_WFXL_MASK _ULL(0x20000) +#define KVX_SFR_ILE_IT17_WFXL_CLEAR _ULL(0x20000) +#define KVX_SFR_ILE_IT17_WFXL_SET _ULL(0x2000000000000) + +#define KVX_SFR_ILE_IT18_MASK _ULL(0x40000) /* Interrupt 18 owner */ +#define KVX_SFR_ILE_IT18_SHIFT 18 +#define KVX_SFR_ILE_IT18_WIDTH 1 +#define KVX_SFR_ILE_IT18_WFXL_MASK _ULL(0x40000) +#define KVX_SFR_ILE_IT18_WFXL_CLEAR _ULL(0x40000) +#define KVX_SFR_ILE_IT18_WFXL_SET _ULL(0x4000000000000) + +#define KVX_SFR_ILE_IT19_MASK _ULL(0x80000) /* Interrupt 19 owner */ +#define KVX_SFR_ILE_IT19_SHIFT 19 +#define KVX_SFR_ILE_IT19_WIDTH 1 +#define KVX_SFR_ILE_IT19_WFXL_MASK _ULL(0x80000) +#define KVX_SFR_ILE_IT19_WFXL_CLEAR _ULL(0x80000) +#define KVX_SFR_ILE_IT19_WFXL_SET _ULL(0x8000000000000) + +#define KVX_SFR_ILE_IT20_MASK _ULL(0x100000) /* Interrupt 20 owner */ +#define KVX_SFR_ILE_IT20_SHIFT 20 +#define KVX_SFR_ILE_IT20_WIDTH 1 +#define KVX_SFR_ILE_IT20_WFXL_MASK _ULL(0x100000) +#define KVX_SFR_ILE_IT20_WFXL_CLEAR _ULL(0x100000) +#define KVX_SFR_ILE_IT20_WFXL_SET _ULL(0x10000000000000) + +#define KVX_SFR_ILE_IT21_MASK _ULL(0x200000) /* Interrupt 21 owner */ +#define KVX_SFR_ILE_IT21_SHIFT 21 +#define KVX_SFR_ILE_IT21_WIDTH 1 +#define KVX_SFR_ILE_IT21_WFXL_MASK _ULL(0x200000) +#define KVX_SFR_ILE_IT21_WFXL_CLEAR _ULL(0x200000) +#define KVX_SFR_ILE_IT21_WFXL_SET _ULL(0x20000000000000) + +#define KVX_SFR_ILE_IT22_MASK _ULL(0x400000) /* Interrupt 22 owner */ +#define KVX_SFR_ILE_IT22_SHIFT 22 +#define KVX_SFR_ILE_IT22_WIDTH 1 +#define KVX_SFR_ILE_IT22_WFXL_MASK _ULL(0x400000) +#define KVX_SFR_ILE_IT22_WFXL_CLEAR _ULL(0x400000) +#define KVX_SFR_ILE_IT22_WFXL_SET _ULL(0x40000000000000) + +#define KVX_SFR_ILE_IT23_MASK _ULL(0x800000) /* Interrupt 23 owner */ +#define KVX_SFR_ILE_IT23_SHIFT 23 +#define KVX_SFR_ILE_IT23_WIDTH 1 +#define KVX_SFR_ILE_IT23_WFXL_MASK _ULL(0x800000) +#define KVX_SFR_ILE_IT23_WFXL_CLEAR _ULL(0x800000) +#define KVX_SFR_ILE_IT23_WFXL_SET _ULL(0x80000000000000) + +#define KVX_SFR_ILE_IT24_MASK _ULL(0x1000000) /* Interrupt 24 owner */ +#define KVX_SFR_ILE_IT24_SHIFT 24 +#define KVX_SFR_ILE_IT24_WIDTH 1 +#define KVX_SFR_ILE_IT24_WFXL_MASK _ULL(0x1000000) +#define KVX_SFR_ILE_IT24_WFXL_CLEAR _ULL(0x1000000) +#define KVX_SFR_ILE_IT24_WFXL_SET _ULL(0x100000000000000) + +#define KVX_SFR_ILE_IT25_MASK _ULL(0x2000000) /* Interrupt 25 owner */ +#define KVX_SFR_ILE_IT25_SHIFT 25 +#define KVX_SFR_ILE_IT25_WIDTH 1 +#define KVX_SFR_ILE_IT25_WFXL_MASK _ULL(0x2000000) +#define KVX_SFR_ILE_IT25_WFXL_CLEAR _ULL(0x2000000) +#define KVX_SFR_ILE_IT25_WFXL_SET _ULL(0x200000000000000) + +#define KVX_SFR_ILE_IT26_MASK _ULL(0x4000000) 
/* Interrupt 26 owner */ +#define KVX_SFR_ILE_IT26_SHIFT 26 +#define KVX_SFR_ILE_IT26_WIDTH 1 +#define KVX_SFR_ILE_IT26_WFXL_MASK _ULL(0x4000000) +#define KVX_SFR_ILE_IT26_WFXL_CLEAR _ULL(0x4000000) +#define KVX_SFR_ILE_IT26_WFXL_SET _ULL(0x400000000000000) + +#define KVX_SFR_ILE_IT27_MASK _ULL(0x8000000) /* Interrupt 27 owner */ +#define KVX_SFR_ILE_IT27_SHIFT 27 +#define KVX_SFR_ILE_IT27_WIDTH 1 +#define KVX_SFR_ILE_IT27_WFXL_MASK _ULL(0x8000000) +#define KVX_SFR_ILE_IT27_WFXL_CLEAR _ULL(0x8000000) +#define KVX_SFR_ILE_IT27_WFXL_SET _ULL(0x800000000000000) + +#define KVX_SFR_ILE_IT28_MASK _ULL(0x10000000) /* Interrupt 28 owner */ +#define KVX_SFR_ILE_IT28_SHIFT 28 +#define KVX_SFR_ILE_IT28_WIDTH 1 +#define KVX_SFR_ILE_IT28_WFXL_MASK _ULL(0x10000000) +#define KVX_SFR_ILE_IT28_WFXL_CLEAR _ULL(0x10000000) +#define KVX_SFR_ILE_IT28_WFXL_SET _ULL(0x1000000000000000) + +#define KVX_SFR_ILE_IT29_MASK _ULL(0x20000000) /* Interrupt 29 owner */ +#define KVX_SFR_ILE_IT29_SHIFT 29 +#define KVX_SFR_ILE_IT29_WIDTH 1 +#define KVX_SFR_ILE_IT29_WFXL_MASK _ULL(0x20000000) +#define KVX_SFR_ILE_IT29_WFXL_CLEAR _ULL(0x20000000) +#define KVX_SFR_ILE_IT29_WFXL_SET _ULL(0x2000000000000000) + +#define KVX_SFR_ILE_IT30_MASK _ULL(0x40000000) /* Interrupt 30 owner */ +#define KVX_SFR_ILE_IT30_SHIFT 30 +#define KVX_SFR_ILE_IT30_WIDTH 1 +#define KVX_SFR_ILE_IT30_WFXL_MASK _ULL(0x40000000) +#define KVX_SFR_ILE_IT30_WFXL_CLEAR _ULL(0x40000000) +#define KVX_SFR_ILE_IT30_WFXL_SET _ULL(0x4000000000000000) + +#define KVX_SFR_ILE_IT31_MASK _ULL(0x80000000) /* Interrupt 31 owner */ +#define KVX_SFR_ILE_IT31_SHIFT 31 +#define KVX_SFR_ILE_IT31_WIDTH 1 +#define KVX_SFR_ILE_IT31_WFXL_MASK _ULL(0x80000000) +#define KVX_SFR_ILE_IT31_WFXL_CLEAR _ULL(0x80000000) +#define KVX_SFR_ILE_IT31_WFXL_SET _ULL(0x8000000000000000) + +#define KVX_SFR_ILL_IT0_MASK _ULL(0x3) /* Interrupt 0 owner */ +#define KVX_SFR_ILL_IT0_SHIFT 0 +#define KVX_SFR_ILL_IT0_WIDTH 2 +#define KVX_SFR_ILL_IT0_WFXL_MASK _ULL(0x3) +#define KVX_SFR_ILL_IT0_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_ILL_IT0_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_ILL_IT1_MASK _ULL(0xc) /* Interrupt 1 owner */ +#define KVX_SFR_ILL_IT1_SHIFT 2 +#define KVX_SFR_ILL_IT1_WIDTH 2 +#define KVX_SFR_ILL_IT1_WFXL_MASK _ULL(0xc) +#define KVX_SFR_ILL_IT1_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_ILL_IT1_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_ILL_IT2_MASK _ULL(0x30) /* Interrupt 2 owner */ +#define KVX_SFR_ILL_IT2_SHIFT 4 +#define KVX_SFR_ILL_IT2_WIDTH 2 +#define KVX_SFR_ILL_IT2_WFXL_MASK _ULL(0x30) +#define KVX_SFR_ILL_IT2_WFXL_CLEAR _ULL(0x30) +#define KVX_SFR_ILL_IT2_WFXL_SET _ULL(0x3000000000) + +#define KVX_SFR_ILL_IT3_MASK _ULL(0xc0) /* Interrupt 3 owner */ +#define KVX_SFR_ILL_IT3_SHIFT 6 +#define KVX_SFR_ILL_IT3_WIDTH 2 +#define KVX_SFR_ILL_IT3_WFXL_MASK _ULL(0xc0) +#define KVX_SFR_ILL_IT3_WFXL_CLEAR _ULL(0xc0) +#define KVX_SFR_ILL_IT3_WFXL_SET _ULL(0xc000000000) + +#define KVX_SFR_ILL_IT4_MASK _ULL(0x300) /* Interrupt 4 owner */ +#define KVX_SFR_ILL_IT4_SHIFT 8 +#define KVX_SFR_ILL_IT4_WIDTH 2 +#define KVX_SFR_ILL_IT4_WFXL_MASK _ULL(0x300) +#define KVX_SFR_ILL_IT4_WFXL_CLEAR _ULL(0x300) +#define KVX_SFR_ILL_IT4_WFXL_SET _ULL(0x30000000000) + +#define KVX_SFR_ILL_IT5_MASK _ULL(0xc00) /* Interrupt 5 owner */ +#define KVX_SFR_ILL_IT5_SHIFT 10 +#define KVX_SFR_ILL_IT5_WIDTH 2 +#define KVX_SFR_ILL_IT5_WFXL_MASK _ULL(0xc00) +#define KVX_SFR_ILL_IT5_WFXL_CLEAR _ULL(0xc00) +#define KVX_SFR_ILL_IT5_WFXL_SET _ULL(0xc0000000000) + +#define KVX_SFR_ILL_IT6_MASK _ULL(0x3000) /* Interrupt 6 owner */ 
+#define KVX_SFR_ILL_IT6_SHIFT 12 +#define KVX_SFR_ILL_IT6_WIDTH 2 +#define KVX_SFR_ILL_IT6_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_ILL_IT6_WFXL_CLEAR _ULL(0x3000) +#define KVX_SFR_ILL_IT6_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_ILL_IT7_MASK _ULL(0xc000) /* Interrupt 7 owner */ +#define KVX_SFR_ILL_IT7_SHIFT 14 +#define KVX_SFR_ILL_IT7_WIDTH 2 +#define KVX_SFR_ILL_IT7_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_ILL_IT7_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_ILL_IT7_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_ILL_IT8_MASK _ULL(0x30000) /* Interrupt 8 owner */ +#define KVX_SFR_ILL_IT8_SHIFT 16 +#define KVX_SFR_ILL_IT8_WIDTH 2 +#define KVX_SFR_ILL_IT8_WFXL_MASK _ULL(0x30000) +#define KVX_SFR_ILL_IT8_WFXL_CLEAR _ULL(0x30000) +#define KVX_SFR_ILL_IT8_WFXL_SET _ULL(0x3000000000000) + +#define KVX_SFR_ILL_IT9_MASK _ULL(0xc0000) /* Interrupt 9 owner */ +#define KVX_SFR_ILL_IT9_SHIFT 18 +#define KVX_SFR_ILL_IT9_WIDTH 2 +#define KVX_SFR_ILL_IT9_WFXL_MASK _ULL(0xc0000) +#define KVX_SFR_ILL_IT9_WFXL_CLEAR _ULL(0xc0000) +#define KVX_SFR_ILL_IT9_WFXL_SET _ULL(0xc000000000000) + +#define KVX_SFR_ILL_IT10_MASK _ULL(0x300000) /* Interrupt 10 owner */ +#define KVX_SFR_ILL_IT10_SHIFT 20 +#define KVX_SFR_ILL_IT10_WIDTH 2 +#define KVX_SFR_ILL_IT10_WFXL_MASK _ULL(0x300000) +#define KVX_SFR_ILL_IT10_WFXL_CLEAR _ULL(0x300000) +#define KVX_SFR_ILL_IT10_WFXL_SET _ULL(0x30000000000000) + +#define KVX_SFR_ILL_IT11_MASK _ULL(0xc00000) /* Interrupt 11 owner */ +#define KVX_SFR_ILL_IT11_SHIFT 22 +#define KVX_SFR_ILL_IT11_WIDTH 2 +#define KVX_SFR_ILL_IT11_WFXL_MASK _ULL(0xc00000) +#define KVX_SFR_ILL_IT11_WFXL_CLEAR _ULL(0xc00000) +#define KVX_SFR_ILL_IT11_WFXL_SET _ULL(0xc0000000000000) + +#define KVX_SFR_ILL_IT12_MASK _ULL(0x3000000) /* Interrupt 12 owner */ +#define KVX_SFR_ILL_IT12_SHIFT 24 +#define KVX_SFR_ILL_IT12_WIDTH 2 +#define KVX_SFR_ILL_IT12_WFXL_MASK _ULL(0x3000000) +#define KVX_SFR_ILL_IT12_WFXL_CLEAR _ULL(0x3000000) +#define KVX_SFR_ILL_IT12_WFXL_SET _ULL(0x300000000000000) + +#define KVX_SFR_ILL_IT13_MASK _ULL(0xc000000) /* Interrupt 13 owner */ +#define KVX_SFR_ILL_IT13_SHIFT 26 +#define KVX_SFR_ILL_IT13_WIDTH 2 +#define KVX_SFR_ILL_IT13_WFXL_MASK _ULL(0xc000000) +#define KVX_SFR_ILL_IT13_WFXL_CLEAR _ULL(0xc000000) +#define KVX_SFR_ILL_IT13_WFXL_SET _ULL(0xc00000000000000) + +#define KVX_SFR_ILL_IT14_MASK _ULL(0x30000000) /* Interrupt 14 owner */ +#define KVX_SFR_ILL_IT14_SHIFT 28 +#define KVX_SFR_ILL_IT14_WIDTH 2 +#define KVX_SFR_ILL_IT14_WFXL_MASK _ULL(0x30000000) +#define KVX_SFR_ILL_IT14_WFXL_CLEAR _ULL(0x30000000) +#define KVX_SFR_ILL_IT14_WFXL_SET _ULL(0x3000000000000000) + +#define KVX_SFR_ILL_IT15_MASK _ULL(0xc0000000) /* Interrupt 15 owner */ +#define KVX_SFR_ILL_IT15_SHIFT 30 +#define KVX_SFR_ILL_IT15_WIDTH 2 +#define KVX_SFR_ILL_IT15_WFXL_MASK _ULL(0xc0000000) +#define KVX_SFR_ILL_IT15_WFXL_CLEAR _ULL(0xc0000000) +#define KVX_SFR_ILL_IT15_WFXL_SET _ULL(0xc000000000000000) + +#define KVX_SFR_ILL_IT16_MASK _ULL(0x300000000) /* Interrupt 16 owner */ +#define KVX_SFR_ILL_IT16_SHIFT 32 +#define KVX_SFR_ILL_IT16_WIDTH 2 +#define KVX_SFR_ILL_IT16_WFXM_MASK _ULL(0x300000000) +#define KVX_SFR_ILL_IT16_WFXM_CLEAR _ULL(0x3) +#define KVX_SFR_ILL_IT16_WFXM_SET _ULL(0x300000000) + +#define KVX_SFR_ILL_IT17_MASK _ULL(0xc00000000) /* Interrupt 17 owner */ +#define KVX_SFR_ILL_IT17_SHIFT 34 +#define KVX_SFR_ILL_IT17_WIDTH 2 +#define KVX_SFR_ILL_IT17_WFXM_MASK _ULL(0xc00000000) +#define KVX_SFR_ILL_IT17_WFXM_CLEAR _ULL(0xc) +#define KVX_SFR_ILL_IT17_WFXM_SET _ULL(0xc00000000) + +#define 
KVX_SFR_ILL_IT18_MASK _ULL(0x3000000000) /* Interrupt 18 owner */ +#define KVX_SFR_ILL_IT18_SHIFT 36 +#define KVX_SFR_ILL_IT18_WIDTH 2 +#define KVX_SFR_ILL_IT18_WFXM_MASK _ULL(0x3000000000) +#define KVX_SFR_ILL_IT18_WFXM_CLEAR _ULL(0x30) +#define KVX_SFR_ILL_IT18_WFXM_SET _ULL(0x3000000000) + +#define KVX_SFR_ILL_IT19_MASK _ULL(0xc000000000) /* Interrupt 19 owner */ +#define KVX_SFR_ILL_IT19_SHIFT 38 +#define KVX_SFR_ILL_IT19_WIDTH 2 +#define KVX_SFR_ILL_IT19_WFXM_MASK _ULL(0xc000000000) +#define KVX_SFR_ILL_IT19_WFXM_CLEAR _ULL(0xc0) +#define KVX_SFR_ILL_IT19_WFXM_SET _ULL(0xc000000000) + +#define KVX_SFR_ILL_IT20_MASK _ULL(0x30000000000) /* Interrupt 20 owner */ +#define KVX_SFR_ILL_IT20_SHIFT 40 +#define KVX_SFR_ILL_IT20_WIDTH 2 +#define KVX_SFR_ILL_IT20_WFXM_MASK _ULL(0x30000000000) +#define KVX_SFR_ILL_IT20_WFXM_CLEAR _ULL(0x300) +#define KVX_SFR_ILL_IT20_WFXM_SET _ULL(0x30000000000) + +#define KVX_SFR_ILL_IT21_MASK _ULL(0xc0000000000) /* Interrupt 21 owner */ +#define KVX_SFR_ILL_IT21_SHIFT 42 +#define KVX_SFR_ILL_IT21_WIDTH 2 +#define KVX_SFR_ILL_IT21_WFXM_MASK _ULL(0xc0000000000) +#define KVX_SFR_ILL_IT21_WFXM_CLEAR _ULL(0xc00) +#define KVX_SFR_ILL_IT21_WFXM_SET _ULL(0xc0000000000) + +#define KVX_SFR_ILL_IT22_MASK _ULL(0x300000000000) /* Interrupt 22 owner */ +#define KVX_SFR_ILL_IT22_SHIFT 44 +#define KVX_SFR_ILL_IT22_WIDTH 2 +#define KVX_SFR_ILL_IT22_WFXM_MASK _ULL(0x300000000000) +#define KVX_SFR_ILL_IT22_WFXM_CLEAR _ULL(0x3000) +#define KVX_SFR_ILL_IT22_WFXM_SET _ULL(0x300000000000) + +#define KVX_SFR_ILL_IT23_MASK _ULL(0xc00000000000) /* Interrupt 23 owner */ +#define KVX_SFR_ILL_IT23_SHIFT 46 +#define KVX_SFR_ILL_IT23_WIDTH 2 +#define KVX_SFR_ILL_IT23_WFXM_MASK _ULL(0xc00000000000) +#define KVX_SFR_ILL_IT23_WFXM_CLEAR _ULL(0xc000) +#define KVX_SFR_ILL_IT23_WFXM_SET _ULL(0xc00000000000) + +#define KVX_SFR_ILL_IT24_MASK _ULL(0x3000000000000) /* Interrupt 24 owner = */ +#define KVX_SFR_ILL_IT24_SHIFT 48 +#define KVX_SFR_ILL_IT24_WIDTH 2 +#define KVX_SFR_ILL_IT24_WFXM_MASK _ULL(0x3000000000000) +#define KVX_SFR_ILL_IT24_WFXM_CLEAR _ULL(0x30000) +#define KVX_SFR_ILL_IT24_WFXM_SET _ULL(0x3000000000000) + +#define KVX_SFR_ILL_IT25_MASK _ULL(0xc000000000000) /* Interrupt 25 owner = */ +#define KVX_SFR_ILL_IT25_SHIFT 50 +#define KVX_SFR_ILL_IT25_WIDTH 2 +#define KVX_SFR_ILL_IT25_WFXM_MASK _ULL(0xc000000000000) +#define KVX_SFR_ILL_IT25_WFXM_CLEAR _ULL(0xc0000) +#define KVX_SFR_ILL_IT25_WFXM_SET _ULL(0xc000000000000) + +#define KVX_SFR_ILL_IT26_MASK _ULL(0x30000000000000) /* Interrupt 26 owner= */ +#define KVX_SFR_ILL_IT26_SHIFT 52 +#define KVX_SFR_ILL_IT26_WIDTH 2 +#define KVX_SFR_ILL_IT26_WFXM_MASK _ULL(0x30000000000000) +#define KVX_SFR_ILL_IT26_WFXM_CLEAR _ULL(0x300000) +#define KVX_SFR_ILL_IT26_WFXM_SET _ULL(0x30000000000000) + +#define KVX_SFR_ILL_IT27_MASK _ULL(0xc0000000000000) /* Interrupt 27 owner= */ +#define KVX_SFR_ILL_IT27_SHIFT 54 +#define KVX_SFR_ILL_IT27_WIDTH 2 +#define KVX_SFR_ILL_IT27_WFXM_MASK _ULL(0xc0000000000000) +#define KVX_SFR_ILL_IT27_WFXM_CLEAR _ULL(0xc00000) +#define KVX_SFR_ILL_IT27_WFXM_SET _ULL(0xc0000000000000) + +#define KVX_SFR_ILL_IT28_MASK _ULL(0x300000000000000) /* Interrupt 28 owne= r */ +#define KVX_SFR_ILL_IT28_SHIFT 56 +#define KVX_SFR_ILL_IT28_WIDTH 2 +#define KVX_SFR_ILL_IT28_WFXM_MASK _ULL(0x300000000000000) +#define KVX_SFR_ILL_IT28_WFXM_CLEAR _ULL(0x3000000) +#define KVX_SFR_ILL_IT28_WFXM_SET _ULL(0x300000000000000) + +#define KVX_SFR_ILL_IT29_MASK _ULL(0xc00000000000000) /* Interrupt 29 owne= r */ +#define KVX_SFR_ILL_IT29_SHIFT 58 
+#define KVX_SFR_ILL_IT29_WIDTH 2 +#define KVX_SFR_ILL_IT29_WFXM_MASK _ULL(0xc00000000000000) +#define KVX_SFR_ILL_IT29_WFXM_CLEAR _ULL(0xc000000) +#define KVX_SFR_ILL_IT29_WFXM_SET _ULL(0xc00000000000000) + +#define KVX_SFR_ILL_IT30_MASK _ULL(0x3000000000000000) /* Interrupt 30 own= er */ +#define KVX_SFR_ILL_IT30_SHIFT 60 +#define KVX_SFR_ILL_IT30_WIDTH 2 +#define KVX_SFR_ILL_IT30_WFXM_MASK _ULL(0x3000000000000000) +#define KVX_SFR_ILL_IT30_WFXM_CLEAR _ULL(0x30000000) +#define KVX_SFR_ILL_IT30_WFXM_SET _ULL(0x3000000000000000) + +#define KVX_SFR_ILL_IT31_MASK _ULL(0xc000000000000000) /* Interrupt 31 own= er */ +#define KVX_SFR_ILL_IT31_SHIFT 62 +#define KVX_SFR_ILL_IT31_WIDTH 2 +#define KVX_SFR_ILL_IT31_WFXM_MASK _ULL(0xc000000000000000) +#define KVX_SFR_ILL_IT31_WFXM_CLEAR _ULL(0xc0000000) +#define KVX_SFR_ILL_IT31_WFXM_SET _ULL(0xc000000000000000) + +#define KVX_SFR_ILR_IT0_MASK _ULL(0x1) /* Interrupt 0 owner */ +#define KVX_SFR_ILR_IT0_SHIFT 0 +#define KVX_SFR_ILR_IT0_WIDTH 1 +#define KVX_SFR_ILR_IT0_WFXL_MASK _ULL(0x1) +#define KVX_SFR_ILR_IT0_WFXL_CLEAR _ULL(0x1) +#define KVX_SFR_ILR_IT0_WFXL_SET _ULL(0x100000000) + +#define KVX_SFR_ILR_IT1_MASK _ULL(0x2) /* Interrupt 1 owner */ +#define KVX_SFR_ILR_IT1_SHIFT 1 +#define KVX_SFR_ILR_IT1_WIDTH 1 +#define KVX_SFR_ILR_IT1_WFXL_MASK _ULL(0x2) +#define KVX_SFR_ILR_IT1_WFXL_CLEAR _ULL(0x2) +#define KVX_SFR_ILR_IT1_WFXL_SET _ULL(0x200000000) + +#define KVX_SFR_ILR_IT2_MASK _ULL(0x4) /* Interrupt 2 owner */ +#define KVX_SFR_ILR_IT2_SHIFT 2 +#define KVX_SFR_ILR_IT2_WIDTH 1 +#define KVX_SFR_ILR_IT2_WFXL_MASK _ULL(0x4) +#define KVX_SFR_ILR_IT2_WFXL_CLEAR _ULL(0x4) +#define KVX_SFR_ILR_IT2_WFXL_SET _ULL(0x400000000) + +#define KVX_SFR_ILR_IT3_MASK _ULL(0x8) /* Interrupt 3 owner */ +#define KVX_SFR_ILR_IT3_SHIFT 3 +#define KVX_SFR_ILR_IT3_WIDTH 1 +#define KVX_SFR_ILR_IT3_WFXL_MASK _ULL(0x8) +#define KVX_SFR_ILR_IT3_WFXL_CLEAR _ULL(0x8) +#define KVX_SFR_ILR_IT3_WFXL_SET _ULL(0x800000000) + +#define KVX_SFR_ILR_IT4_MASK _ULL(0x10) /* Interrupt 4 owner */ +#define KVX_SFR_ILR_IT4_SHIFT 4 +#define KVX_SFR_ILR_IT4_WIDTH 1 +#define KVX_SFR_ILR_IT4_WFXL_MASK _ULL(0x10) +#define KVX_SFR_ILR_IT4_WFXL_CLEAR _ULL(0x10) +#define KVX_SFR_ILR_IT4_WFXL_SET _ULL(0x1000000000) + +#define KVX_SFR_ILR_IT5_MASK _ULL(0x20) /* Interrupt 5 owner */ +#define KVX_SFR_ILR_IT5_SHIFT 5 +#define KVX_SFR_ILR_IT5_WIDTH 1 +#define KVX_SFR_ILR_IT5_WFXL_MASK _ULL(0x20) +#define KVX_SFR_ILR_IT5_WFXL_CLEAR _ULL(0x20) +#define KVX_SFR_ILR_IT5_WFXL_SET _ULL(0x2000000000) + +#define KVX_SFR_ILR_IT6_MASK _ULL(0x40) /* Interrupt 6 owner */ +#define KVX_SFR_ILR_IT6_SHIFT 6 +#define KVX_SFR_ILR_IT6_WIDTH 1 +#define KVX_SFR_ILR_IT6_WFXL_MASK _ULL(0x40) +#define KVX_SFR_ILR_IT6_WFXL_CLEAR _ULL(0x40) +#define KVX_SFR_ILR_IT6_WFXL_SET _ULL(0x4000000000) + +#define KVX_SFR_ILR_IT7_MASK _ULL(0x80) /* Interrupt 7 owner */ +#define KVX_SFR_ILR_IT7_SHIFT 7 +#define KVX_SFR_ILR_IT7_WIDTH 1 +#define KVX_SFR_ILR_IT7_WFXL_MASK _ULL(0x80) +#define KVX_SFR_ILR_IT7_WFXL_CLEAR _ULL(0x80) +#define KVX_SFR_ILR_IT7_WFXL_SET _ULL(0x8000000000) + +#define KVX_SFR_ILR_IT8_MASK _ULL(0x100) /* Interrupt 8 owner */ +#define KVX_SFR_ILR_IT8_SHIFT 8 +#define KVX_SFR_ILR_IT8_WIDTH 1 +#define KVX_SFR_ILR_IT8_WFXL_MASK _ULL(0x100) +#define KVX_SFR_ILR_IT8_WFXL_CLEAR _ULL(0x100) +#define KVX_SFR_ILR_IT8_WFXL_SET _ULL(0x10000000000) + +#define KVX_SFR_ILR_IT9_MASK _ULL(0x200) /* Interrupt 9 owner */ +#define KVX_SFR_ILR_IT9_SHIFT 9 +#define KVX_SFR_ILR_IT9_WIDTH 1 +#define KVX_SFR_ILR_IT9_WFXL_MASK _ULL(0x200) +#define 
KVX_SFR_ILR_IT9_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_ILR_IT9_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_ILR_IT10_MASK _ULL(0x400) /* Interrupt 10 owner */ +#define KVX_SFR_ILR_IT10_SHIFT 10 +#define KVX_SFR_ILR_IT10_WIDTH 1 +#define KVX_SFR_ILR_IT10_WFXL_MASK _ULL(0x400) +#define KVX_SFR_ILR_IT10_WFXL_CLEAR _ULL(0x400) +#define KVX_SFR_ILR_IT10_WFXL_SET _ULL(0x40000000000) + +#define KVX_SFR_ILR_IT11_MASK _ULL(0x800) /* Interrupt 11 owner */ +#define KVX_SFR_ILR_IT11_SHIFT 11 +#define KVX_SFR_ILR_IT11_WIDTH 1 +#define KVX_SFR_ILR_IT11_WFXL_MASK _ULL(0x800) +#define KVX_SFR_ILR_IT11_WFXL_CLEAR _ULL(0x800) +#define KVX_SFR_ILR_IT11_WFXL_SET _ULL(0x80000000000) + +#define KVX_SFR_ILR_IT12_MASK _ULL(0x1000) /* Interrupt 12 owner */ +#define KVX_SFR_ILR_IT12_SHIFT 12 +#define KVX_SFR_ILR_IT12_WIDTH 1 +#define KVX_SFR_ILR_IT12_WFXL_MASK _ULL(0x1000) +#define KVX_SFR_ILR_IT12_WFXL_CLEAR _ULL(0x1000) +#define KVX_SFR_ILR_IT12_WFXL_SET _ULL(0x100000000000) + +#define KVX_SFR_ILR_IT13_MASK _ULL(0x2000) /* Interrupt 13 owner */ +#define KVX_SFR_ILR_IT13_SHIFT 13 +#define KVX_SFR_ILR_IT13_WIDTH 1 +#define KVX_SFR_ILR_IT13_WFXL_MASK _ULL(0x2000) +#define KVX_SFR_ILR_IT13_WFXL_CLEAR _ULL(0x2000) +#define KVX_SFR_ILR_IT13_WFXL_SET _ULL(0x200000000000) + +#define KVX_SFR_ILR_IT14_MASK _ULL(0x4000) /* Interrupt 14 owner */ +#define KVX_SFR_ILR_IT14_SHIFT 14 +#define KVX_SFR_ILR_IT14_WIDTH 1 +#define KVX_SFR_ILR_IT14_WFXL_MASK _ULL(0x4000) +#define KVX_SFR_ILR_IT14_WFXL_CLEAR _ULL(0x4000) +#define KVX_SFR_ILR_IT14_WFXL_SET _ULL(0x400000000000) + +#define KVX_SFR_ILR_IT15_MASK _ULL(0x8000) /* Interrupt 15 owner */ +#define KVX_SFR_ILR_IT15_SHIFT 15 +#define KVX_SFR_ILR_IT15_WIDTH 1 +#define KVX_SFR_ILR_IT15_WFXL_MASK _ULL(0x8000) +#define KVX_SFR_ILR_IT15_WFXL_CLEAR _ULL(0x8000) +#define KVX_SFR_ILR_IT15_WFXL_SET _ULL(0x800000000000) + +#define KVX_SFR_ILR_IT16_MASK _ULL(0x10000) /* Interrupt 16 owner */ +#define KVX_SFR_ILR_IT16_SHIFT 16 +#define KVX_SFR_ILR_IT16_WIDTH 1 +#define KVX_SFR_ILR_IT16_WFXL_MASK _ULL(0x10000) +#define KVX_SFR_ILR_IT16_WFXL_CLEAR _ULL(0x10000) +#define KVX_SFR_ILR_IT16_WFXL_SET _ULL(0x1000000000000) + +#define KVX_SFR_ILR_IT17_MASK _ULL(0x20000) /* Interrupt 17 owner */ +#define KVX_SFR_ILR_IT17_SHIFT 17 +#define KVX_SFR_ILR_IT17_WIDTH 1 +#define KVX_SFR_ILR_IT17_WFXL_MASK _ULL(0x20000) +#define KVX_SFR_ILR_IT17_WFXL_CLEAR _ULL(0x20000) +#define KVX_SFR_ILR_IT17_WFXL_SET _ULL(0x2000000000000) + +#define KVX_SFR_ILR_IT18_MASK _ULL(0x40000) /* Interrupt 18 owner */ +#define KVX_SFR_ILR_IT18_SHIFT 18 +#define KVX_SFR_ILR_IT18_WIDTH 1 +#define KVX_SFR_ILR_IT18_WFXL_MASK _ULL(0x40000) +#define KVX_SFR_ILR_IT18_WFXL_CLEAR _ULL(0x40000) +#define KVX_SFR_ILR_IT18_WFXL_SET _ULL(0x4000000000000) + +#define KVX_SFR_ILR_IT19_MASK _ULL(0x80000) /* Interrupt 19 owner */ +#define KVX_SFR_ILR_IT19_SHIFT 19 +#define KVX_SFR_ILR_IT19_WIDTH 1 +#define KVX_SFR_ILR_IT19_WFXL_MASK _ULL(0x80000) +#define KVX_SFR_ILR_IT19_WFXL_CLEAR _ULL(0x80000) +#define KVX_SFR_ILR_IT19_WFXL_SET _ULL(0x8000000000000) + +#define KVX_SFR_ILR_IT20_MASK _ULL(0x100000) /* Interrupt 20 owner */ +#define KVX_SFR_ILR_IT20_SHIFT 20 +#define KVX_SFR_ILR_IT20_WIDTH 1 +#define KVX_SFR_ILR_IT20_WFXL_MASK _ULL(0x100000) +#define KVX_SFR_ILR_IT20_WFXL_CLEAR _ULL(0x100000) +#define KVX_SFR_ILR_IT20_WFXL_SET _ULL(0x10000000000000) + +#define KVX_SFR_ILR_IT21_MASK _ULL(0x200000) /* Interrupt 21 owner */ +#define KVX_SFR_ILR_IT21_SHIFT 21 +#define KVX_SFR_ILR_IT21_WIDTH 1 +#define KVX_SFR_ILR_IT21_WFXL_MASK _ULL(0x200000) 
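[Editor's note: reading a field back goes through the matching _MASK/_SHIFT pair. A short sketch, with a generic helper whose name is invented for illustration and not part of this patch:

	/* Illustration only: extract a field from a 64-bit SFR value. */
	#include <stdint.h>

	static inline unsigned int kvx_sfr_field(uint64_t sfr, uint64_t mask,
						 unsigned int shift)
	{
		/* Isolate the field bits, then move them down to bit 0. */
		return (unsigned int)((sfr & mask) >> shift);
	}

	/* e.g. the IT21 field of $ilr:
	 * kvx_sfr_field(ilr, KVX_SFR_ILR_IT21_MASK, KVX_SFR_ILR_IT21_SHIFT);
	 */
]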
+#define KVX_SFR_ILR_IT21_WFXL_CLEAR _ULL(0x200000) +#define KVX_SFR_ILR_IT21_WFXL_SET _ULL(0x20000000000000) + +#define KVX_SFR_ILR_IT22_MASK _ULL(0x400000) /* Interrupt 22 owner */ +#define KVX_SFR_ILR_IT22_SHIFT 22 +#define KVX_SFR_ILR_IT22_WIDTH 1 +#define KVX_SFR_ILR_IT22_WFXL_MASK _ULL(0x400000) +#define KVX_SFR_ILR_IT22_WFXL_CLEAR _ULL(0x400000) +#define KVX_SFR_ILR_IT22_WFXL_SET _ULL(0x40000000000000) + +#define KVX_SFR_ILR_IT23_MASK _ULL(0x800000) /* Interrupt 23 owner */ +#define KVX_SFR_ILR_IT23_SHIFT 23 +#define KVX_SFR_ILR_IT23_WIDTH 1 +#define KVX_SFR_ILR_IT23_WFXL_MASK _ULL(0x800000) +#define KVX_SFR_ILR_IT23_WFXL_CLEAR _ULL(0x800000) +#define KVX_SFR_ILR_IT23_WFXL_SET _ULL(0x80000000000000) + +#define KVX_SFR_ILR_IT24_MASK _ULL(0x1000000) /* Interrupt 24 owner */ +#define KVX_SFR_ILR_IT24_SHIFT 24 +#define KVX_SFR_ILR_IT24_WIDTH 1 +#define KVX_SFR_ILR_IT24_WFXL_MASK _ULL(0x1000000) +#define KVX_SFR_ILR_IT24_WFXL_CLEAR _ULL(0x1000000) +#define KVX_SFR_ILR_IT24_WFXL_SET _ULL(0x100000000000000) + +#define KVX_SFR_ILR_IT25_MASK _ULL(0x2000000) /* Interrupt 25 owner */ +#define KVX_SFR_ILR_IT25_SHIFT 25 +#define KVX_SFR_ILR_IT25_WIDTH 1 +#define KVX_SFR_ILR_IT25_WFXL_MASK _ULL(0x2000000) +#define KVX_SFR_ILR_IT25_WFXL_CLEAR _ULL(0x2000000) +#define KVX_SFR_ILR_IT25_WFXL_SET _ULL(0x200000000000000) + +#define KVX_SFR_ILR_IT26_MASK _ULL(0x4000000) /* Interrupt 26 owner */ +#define KVX_SFR_ILR_IT26_SHIFT 26 +#define KVX_SFR_ILR_IT26_WIDTH 1 +#define KVX_SFR_ILR_IT26_WFXL_MASK _ULL(0x4000000) +#define KVX_SFR_ILR_IT26_WFXL_CLEAR _ULL(0x4000000) +#define KVX_SFR_ILR_IT26_WFXL_SET _ULL(0x400000000000000) + +#define KVX_SFR_ILR_IT27_MASK _ULL(0x8000000) /* Interrupt 27 owner */ +#define KVX_SFR_ILR_IT27_SHIFT 27 +#define KVX_SFR_ILR_IT27_WIDTH 1 +#define KVX_SFR_ILR_IT27_WFXL_MASK _ULL(0x8000000) +#define KVX_SFR_ILR_IT27_WFXL_CLEAR _ULL(0x8000000) +#define KVX_SFR_ILR_IT27_WFXL_SET _ULL(0x800000000000000) + +#define KVX_SFR_ILR_IT28_MASK _ULL(0x10000000) /* Interrupt 28 owner */ +#define KVX_SFR_ILR_IT28_SHIFT 28 +#define KVX_SFR_ILR_IT28_WIDTH 1 +#define KVX_SFR_ILR_IT28_WFXL_MASK _ULL(0x10000000) +#define KVX_SFR_ILR_IT28_WFXL_CLEAR _ULL(0x10000000) +#define KVX_SFR_ILR_IT28_WFXL_SET _ULL(0x1000000000000000) + +#define KVX_SFR_ILR_IT29_MASK _ULL(0x20000000) /* Interrupt 29 owner */ +#define KVX_SFR_ILR_IT29_SHIFT 29 +#define KVX_SFR_ILR_IT29_WIDTH 1 +#define KVX_SFR_ILR_IT29_WFXL_MASK _ULL(0x20000000) +#define KVX_SFR_ILR_IT29_WFXL_CLEAR _ULL(0x20000000) +#define KVX_SFR_ILR_IT29_WFXL_SET _ULL(0x2000000000000000) + +#define KVX_SFR_ILR_IT30_MASK _ULL(0x40000000) /* Interrupt 30 owner */ +#define KVX_SFR_ILR_IT30_SHIFT 30 +#define KVX_SFR_ILR_IT30_WIDTH 1 +#define KVX_SFR_ILR_IT30_WFXL_MASK _ULL(0x40000000) +#define KVX_SFR_ILR_IT30_WFXL_CLEAR _ULL(0x40000000) +#define KVX_SFR_ILR_IT30_WFXL_SET _ULL(0x4000000000000000) + +#define KVX_SFR_ILR_IT31_MASK _ULL(0x80000000) /* Interrupt 31 owner */ +#define KVX_SFR_ILR_IT31_SHIFT 31 +#define KVX_SFR_ILR_IT31_WIDTH 1 +#define KVX_SFR_ILR_IT31_WFXL_MASK _ULL(0x80000000) +#define KVX_SFR_ILR_IT31_WFXL_CLEAR _ULL(0x80000000) +#define KVX_SFR_ILR_IT31_WFXL_SET _ULL(0x8000000000000000) + +#define KVX_SFR_ITOW_IT0_MASK _ULL(0x3) /* Interrupt 0 owner */ +#define KVX_SFR_ITOW_IT0_SHIFT 0 +#define KVX_SFR_ITOW_IT0_WIDTH 2 +#define KVX_SFR_ITOW_IT0_WFXL_MASK _ULL(0x3) +#define KVX_SFR_ITOW_IT0_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_ITOW_IT0_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_ITOW_IT1_MASK _ULL(0xc) /* Interrupt 1 owner */ +#define 
KVX_SFR_ITOW_IT1_SHIFT 2 +#define KVX_SFR_ITOW_IT1_WIDTH 2 +#define KVX_SFR_ITOW_IT1_WFXL_MASK _ULL(0xc) +#define KVX_SFR_ITOW_IT1_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_ITOW_IT1_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_ITOW_IT2_MASK _ULL(0x30) /* Interrupt 2 owner */ +#define KVX_SFR_ITOW_IT2_SHIFT 4 +#define KVX_SFR_ITOW_IT2_WIDTH 2 +#define KVX_SFR_ITOW_IT2_WFXL_MASK _ULL(0x30) +#define KVX_SFR_ITOW_IT2_WFXL_CLEAR _ULL(0x30) +#define KVX_SFR_ITOW_IT2_WFXL_SET _ULL(0x3000000000) + +#define KVX_SFR_ITOW_IT3_MASK _ULL(0xc0) /* Interrupt 3 owner */ +#define KVX_SFR_ITOW_IT3_SHIFT 6 +#define KVX_SFR_ITOW_IT3_WIDTH 2 +#define KVX_SFR_ITOW_IT3_WFXL_MASK _ULL(0xc0) +#define KVX_SFR_ITOW_IT3_WFXL_CLEAR _ULL(0xc0) +#define KVX_SFR_ITOW_IT3_WFXL_SET _ULL(0xc000000000) + +#define KVX_SFR_ITOW_IT4_MASK _ULL(0x300) /* Interrupt 4 owner */ +#define KVX_SFR_ITOW_IT4_SHIFT 8 +#define KVX_SFR_ITOW_IT4_WIDTH 2 +#define KVX_SFR_ITOW_IT4_WFXL_MASK _ULL(0x300) +#define KVX_SFR_ITOW_IT4_WFXL_CLEAR _ULL(0x300) +#define KVX_SFR_ITOW_IT4_WFXL_SET _ULL(0x30000000000) + +#define KVX_SFR_ITOW_IT5_MASK _ULL(0xc00) /* Interrupt 5 owner */ +#define KVX_SFR_ITOW_IT5_SHIFT 10 +#define KVX_SFR_ITOW_IT5_WIDTH 2 +#define KVX_SFR_ITOW_IT5_WFXL_MASK _ULL(0xc00) +#define KVX_SFR_ITOW_IT5_WFXL_CLEAR _ULL(0xc00) +#define KVX_SFR_ITOW_IT5_WFXL_SET _ULL(0xc0000000000) + +#define KVX_SFR_ITOW_IT6_MASK _ULL(0x3000) /* Interrupt 6 owner */ +#define KVX_SFR_ITOW_IT6_SHIFT 12 +#define KVX_SFR_ITOW_IT6_WIDTH 2 +#define KVX_SFR_ITOW_IT6_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_ITOW_IT6_WFXL_CLEAR _ULL(0x3000) +#define KVX_SFR_ITOW_IT6_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_ITOW_IT7_MASK _ULL(0xc000) /* Interrupt 7 owner */ +#define KVX_SFR_ITOW_IT7_SHIFT 14 +#define KVX_SFR_ITOW_IT7_WIDTH 2 +#define KVX_SFR_ITOW_IT7_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_ITOW_IT7_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_ITOW_IT7_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_ITOW_IT8_MASK _ULL(0x30000) /* Interrupt 8 owner */ +#define KVX_SFR_ITOW_IT8_SHIFT 16 +#define KVX_SFR_ITOW_IT8_WIDTH 2 +#define KVX_SFR_ITOW_IT8_WFXL_MASK _ULL(0x30000) +#define KVX_SFR_ITOW_IT8_WFXL_CLEAR _ULL(0x30000) +#define KVX_SFR_ITOW_IT8_WFXL_SET _ULL(0x3000000000000) + +#define KVX_SFR_ITOW_IT9_MASK _ULL(0xc0000) /* Interrupt 9 owner */ +#define KVX_SFR_ITOW_IT9_SHIFT 18 +#define KVX_SFR_ITOW_IT9_WIDTH 2 +#define KVX_SFR_ITOW_IT9_WFXL_MASK _ULL(0xc0000) +#define KVX_SFR_ITOW_IT9_WFXL_CLEAR _ULL(0xc0000) +#define KVX_SFR_ITOW_IT9_WFXL_SET _ULL(0xc000000000000) + +#define KVX_SFR_ITOW_IT10_MASK _ULL(0x300000) /* Interrupt 10 owner */ +#define KVX_SFR_ITOW_IT10_SHIFT 20 +#define KVX_SFR_ITOW_IT10_WIDTH 2 +#define KVX_SFR_ITOW_IT10_WFXL_MASK _ULL(0x300000) +#define KVX_SFR_ITOW_IT10_WFXL_CLEAR _ULL(0x300000) +#define KVX_SFR_ITOW_IT10_WFXL_SET _ULL(0x30000000000000) + +#define KVX_SFR_ITOW_IT11_MASK _ULL(0xc00000) /* Interrupt 11 owner */ +#define KVX_SFR_ITOW_IT11_SHIFT 22 +#define KVX_SFR_ITOW_IT11_WIDTH 2 +#define KVX_SFR_ITOW_IT11_WFXL_MASK _ULL(0xc00000) +#define KVX_SFR_ITOW_IT11_WFXL_CLEAR _ULL(0xc00000) +#define KVX_SFR_ITOW_IT11_WFXL_SET _ULL(0xc0000000000000) + +#define KVX_SFR_ITOW_IT12_MASK _ULL(0x3000000) /* Interrupt 12 owner */ +#define KVX_SFR_ITOW_IT12_SHIFT 24 +#define KVX_SFR_ITOW_IT12_WIDTH 2 +#define KVX_SFR_ITOW_IT12_WFXL_MASK _ULL(0x3000000) +#define KVX_SFR_ITOW_IT12_WFXL_CLEAR _ULL(0x3000000) +#define KVX_SFR_ITOW_IT12_WFXL_SET _ULL(0x300000000000000) + +#define KVX_SFR_ITOW_IT13_MASK _ULL(0xc000000) /* Interrupt 13 owner */ +#define 
KVX_SFR_ITOW_IT13_SHIFT 26 +#define KVX_SFR_ITOW_IT13_WIDTH 2 +#define KVX_SFR_ITOW_IT13_WFXL_MASK _ULL(0xc000000) +#define KVX_SFR_ITOW_IT13_WFXL_CLEAR _ULL(0xc000000) +#define KVX_SFR_ITOW_IT13_WFXL_SET _ULL(0xc00000000000000) + +#define KVX_SFR_ITOW_IT14_MASK _ULL(0x30000000) /* Interrupt 14 owner */ +#define KVX_SFR_ITOW_IT14_SHIFT 28 +#define KVX_SFR_ITOW_IT14_WIDTH 2 +#define KVX_SFR_ITOW_IT14_WFXL_MASK _ULL(0x30000000) +#define KVX_SFR_ITOW_IT14_WFXL_CLEAR _ULL(0x30000000) +#define KVX_SFR_ITOW_IT14_WFXL_SET _ULL(0x3000000000000000) + +#define KVX_SFR_ITOW_IT15_MASK _ULL(0xc0000000) /* Interrupt 15 owner */ +#define KVX_SFR_ITOW_IT15_SHIFT 30 +#define KVX_SFR_ITOW_IT15_WIDTH 2 +#define KVX_SFR_ITOW_IT15_WFXL_MASK _ULL(0xc0000000) +#define KVX_SFR_ITOW_IT15_WFXL_CLEAR _ULL(0xc0000000) +#define KVX_SFR_ITOW_IT15_WFXL_SET _ULL(0xc000000000000000) + +#define KVX_SFR_ITOW_IT16_MASK _ULL(0x300000000) /* Interrupt 16 owner */ +#define KVX_SFR_ITOW_IT16_SHIFT 32 +#define KVX_SFR_ITOW_IT16_WIDTH 2 +#define KVX_SFR_ITOW_IT16_WFXM_MASK _ULL(0x300000000) +#define KVX_SFR_ITOW_IT16_WFXM_CLEAR _ULL(0x3) +#define KVX_SFR_ITOW_IT16_WFXM_SET _ULL(0x300000000) + +#define KVX_SFR_ITOW_IT17_MASK _ULL(0xc00000000) /* Interrupt 17 owner */ +#define KVX_SFR_ITOW_IT17_SHIFT 34 +#define KVX_SFR_ITOW_IT17_WIDTH 2 +#define KVX_SFR_ITOW_IT17_WFXM_MASK _ULL(0xc00000000) +#define KVX_SFR_ITOW_IT17_WFXM_CLEAR _ULL(0xc) +#define KVX_SFR_ITOW_IT17_WFXM_SET _ULL(0xc00000000) + +#define KVX_SFR_ITOW_IT18_MASK _ULL(0x3000000000) /* Interrupt 18 owner */ +#define KVX_SFR_ITOW_IT18_SHIFT 36 +#define KVX_SFR_ITOW_IT18_WIDTH 2 +#define KVX_SFR_ITOW_IT18_WFXM_MASK _ULL(0x3000000000) +#define KVX_SFR_ITOW_IT18_WFXM_CLEAR _ULL(0x30) +#define KVX_SFR_ITOW_IT18_WFXM_SET _ULL(0x3000000000) + +#define KVX_SFR_ITOW_IT19_MASK _ULL(0xc000000000) /* Interrupt 19 owner */ +#define KVX_SFR_ITOW_IT19_SHIFT 38 +#define KVX_SFR_ITOW_IT19_WIDTH 2 +#define KVX_SFR_ITOW_IT19_WFXM_MASK _ULL(0xc000000000) +#define KVX_SFR_ITOW_IT19_WFXM_CLEAR _ULL(0xc0) +#define KVX_SFR_ITOW_IT19_WFXM_SET _ULL(0xc000000000) + +#define KVX_SFR_ITOW_IT20_MASK _ULL(0x30000000000) /* Interrupt 20 owner */ +#define KVX_SFR_ITOW_IT20_SHIFT 40 +#define KVX_SFR_ITOW_IT20_WIDTH 2 +#define KVX_SFR_ITOW_IT20_WFXM_MASK _ULL(0x30000000000) +#define KVX_SFR_ITOW_IT20_WFXM_CLEAR _ULL(0x300) +#define KVX_SFR_ITOW_IT20_WFXM_SET _ULL(0x30000000000) + +#define KVX_SFR_ITOW_IT21_MASK _ULL(0xc0000000000) /* Interrupt 21 owner */ +#define KVX_SFR_ITOW_IT21_SHIFT 42 +#define KVX_SFR_ITOW_IT21_WIDTH 2 +#define KVX_SFR_ITOW_IT21_WFXM_MASK _ULL(0xc0000000000) +#define KVX_SFR_ITOW_IT21_WFXM_CLEAR _ULL(0xc00) +#define KVX_SFR_ITOW_IT21_WFXM_SET _ULL(0xc0000000000) + +#define KVX_SFR_ITOW_IT22_MASK _ULL(0x300000000000) /* Interrupt 22 owner = */ +#define KVX_SFR_ITOW_IT22_SHIFT 44 +#define KVX_SFR_ITOW_IT22_WIDTH 2 +#define KVX_SFR_ITOW_IT22_WFXM_MASK _ULL(0x300000000000) +#define KVX_SFR_ITOW_IT22_WFXM_CLEAR _ULL(0x3000) +#define KVX_SFR_ITOW_IT22_WFXM_SET _ULL(0x300000000000) + +#define KVX_SFR_ITOW_IT23_MASK _ULL(0xc00000000000) /* Interrupt 23 owner = */ +#define KVX_SFR_ITOW_IT23_SHIFT 46 +#define KVX_SFR_ITOW_IT23_WIDTH 2 +#define KVX_SFR_ITOW_IT23_WFXM_MASK _ULL(0xc00000000000) +#define KVX_SFR_ITOW_IT23_WFXM_CLEAR _ULL(0xc000) +#define KVX_SFR_ITOW_IT23_WFXM_SET _ULL(0xc00000000000) + +#define KVX_SFR_ITOW_IT24_MASK _ULL(0x3000000000000) /* Interrupt 24 owner= */ +#define KVX_SFR_ITOW_IT24_SHIFT 48 +#define KVX_SFR_ITOW_IT24_WIDTH 2 +#define KVX_SFR_ITOW_IT24_WFXM_MASK 
_ULL(0x3000000000000) +#define KVX_SFR_ITOW_IT24_WFXM_CLEAR _ULL(0x30000) +#define KVX_SFR_ITOW_IT24_WFXM_SET _ULL(0x3000000000000) + +#define KVX_SFR_ITOW_IT25_MASK _ULL(0xc000000000000) /* Interrupt 25 owner= */ +#define KVX_SFR_ITOW_IT25_SHIFT 50 +#define KVX_SFR_ITOW_IT25_WIDTH 2 +#define KVX_SFR_ITOW_IT25_WFXM_MASK _ULL(0xc000000000000) +#define KVX_SFR_ITOW_IT25_WFXM_CLEAR _ULL(0xc0000) +#define KVX_SFR_ITOW_IT25_WFXM_SET _ULL(0xc000000000000) + +#define KVX_SFR_ITOW_IT26_MASK _ULL(0x30000000000000) /* Interrupt 26 owne= r */ +#define KVX_SFR_ITOW_IT26_SHIFT 52 +#define KVX_SFR_ITOW_IT26_WIDTH 2 +#define KVX_SFR_ITOW_IT26_WFXM_MASK _ULL(0x30000000000000) +#define KVX_SFR_ITOW_IT26_WFXM_CLEAR _ULL(0x300000) +#define KVX_SFR_ITOW_IT26_WFXM_SET _ULL(0x30000000000000) + +#define KVX_SFR_ITOW_IT27_MASK _ULL(0xc0000000000000) /* Interrupt 27 owne= r */ +#define KVX_SFR_ITOW_IT27_SHIFT 54 +#define KVX_SFR_ITOW_IT27_WIDTH 2 +#define KVX_SFR_ITOW_IT27_WFXM_MASK _ULL(0xc0000000000000) +#define KVX_SFR_ITOW_IT27_WFXM_CLEAR _ULL(0xc00000) +#define KVX_SFR_ITOW_IT27_WFXM_SET _ULL(0xc0000000000000) + +#define KVX_SFR_ITOW_IT28_MASK _ULL(0x300000000000000) /* Interrupt 28 own= er */ +#define KVX_SFR_ITOW_IT28_SHIFT 56 +#define KVX_SFR_ITOW_IT28_WIDTH 2 +#define KVX_SFR_ITOW_IT28_WFXM_MASK _ULL(0x300000000000000) +#define KVX_SFR_ITOW_IT28_WFXM_CLEAR _ULL(0x3000000) +#define KVX_SFR_ITOW_IT28_WFXM_SET _ULL(0x300000000000000) + +#define KVX_SFR_ITOW_IT29_MASK _ULL(0xc00000000000000) /* Interrupt 29 own= er */ +#define KVX_SFR_ITOW_IT29_SHIFT 58 +#define KVX_SFR_ITOW_IT29_WIDTH 2 +#define KVX_SFR_ITOW_IT29_WFXM_MASK _ULL(0xc00000000000000) +#define KVX_SFR_ITOW_IT29_WFXM_CLEAR _ULL(0xc000000) +#define KVX_SFR_ITOW_IT29_WFXM_SET _ULL(0xc00000000000000) + +#define KVX_SFR_ITOW_IT30_MASK _ULL(0x3000000000000000) /* Interrupt 30 ow= ner */ +#define KVX_SFR_ITOW_IT30_SHIFT 60 +#define KVX_SFR_ITOW_IT30_WIDTH 2 +#define KVX_SFR_ITOW_IT30_WFXM_MASK _ULL(0x3000000000000000) +#define KVX_SFR_ITOW_IT30_WFXM_CLEAR _ULL(0x30000000) +#define KVX_SFR_ITOW_IT30_WFXM_SET _ULL(0x3000000000000000) + +#define KVX_SFR_ITOW_IT31_MASK _ULL(0xc000000000000000) /* Interrupt 31 ow= ner */ +#define KVX_SFR_ITOW_IT31_SHIFT 62 +#define KVX_SFR_ITOW_IT31_WIDTH 2 +#define KVX_SFR_ITOW_IT31_WFXM_MASK _ULL(0xc000000000000000) +#define KVX_SFR_ITOW_IT31_WFXM_CLEAR _ULL(0xc0000000) +#define KVX_SFR_ITOW_IT31_WFXM_SET _ULL(0xc000000000000000) + +#define KVX_SFR_DO_B0_MASK _ULL(0x3) /* Breakpoint 0 owner. */ +#define KVX_SFR_DO_B0_SHIFT 0 +#define KVX_SFR_DO_B0_WIDTH 2 +#define KVX_SFR_DO_B0_WFXL_MASK _ULL(0x3) +#define KVX_SFR_DO_B0_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_DO_B0_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_DO_B1_MASK _ULL(0xc) /* Breakpoint 1 owner. */ +#define KVX_SFR_DO_B1_SHIFT 2 +#define KVX_SFR_DO_B1_WIDTH 2 +#define KVX_SFR_DO_B1_WFXL_MASK _ULL(0xc) +#define KVX_SFR_DO_B1_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_DO_B1_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_DO_W0_MASK _ULL(0x30) /* Watchpoint 0 owner. */ +#define KVX_SFR_DO_W0_SHIFT 4 +#define KVX_SFR_DO_W0_WIDTH 2 +#define KVX_SFR_DO_W0_WFXL_MASK _ULL(0x30) +#define KVX_SFR_DO_W0_WFXL_CLEAR _ULL(0x30) +#define KVX_SFR_DO_W0_WFXL_SET _ULL(0x3000000000) + +#define KVX_SFR_DO_W1_MASK _ULL(0xc0) /* Watchpoint 1 owner. 
*/ +#define KVX_SFR_DO_W1_SHIFT 6 +#define KVX_SFR_DO_W1_WIDTH 2 +#define KVX_SFR_DO_W1_WFXL_MASK _ULL(0xc0) +#define KVX_SFR_DO_W1_WFXL_CLEAR _ULL(0xc0) +#define KVX_SFR_DO_W1_WFXL_SET _ULL(0xc000000000) + +#define KVX_SFR_DBA0_DBA0_MASK _ULL(0xffffffffffffffff) /* Debug Breakpoin= t Address 0 */ +#define KVX_SFR_DBA0_DBA0_SHIFT 0 +#define KVX_SFR_DBA0_DBA0_WIDTH 64 +#define KVX_SFR_DBA0_DBA0_WFXL_MASK _ULL(0xffffffff) +#define KVX_SFR_DBA0_DBA0_WFXL_CLEAR _ULL(0xffffffff) +#define KVX_SFR_DBA0_DBA0_WFXL_SET _ULL(0xffffffff00000000) +#define KVX_SFR_DBA0_DBA0_WFXM_MASK _ULL(0xffffffff00000000) +#define KVX_SFR_DBA0_DBA0_WFXM_CLEAR _ULL(0xffffffff) +#define KVX_SFR_DBA0_DBA0_WFXM_SET _ULL(0xffffffff00000000) + +#define KVX_SFR_DBA1_DBA1_MASK _ULL(0xffffffffffffffff) /* Debug Breakpoin= t Address 1 */ +#define KVX_SFR_DBA1_DBA1_SHIFT 0 +#define KVX_SFR_DBA1_DBA1_WIDTH 64 +#define KVX_SFR_DBA1_DBA1_WFXL_MASK _ULL(0xffffffff) +#define KVX_SFR_DBA1_DBA1_WFXL_CLEAR _ULL(0xffffffff) +#define KVX_SFR_DBA1_DBA1_WFXL_SET _ULL(0xffffffff00000000) +#define KVX_SFR_DBA1_DBA1_WFXM_MASK _ULL(0xffffffff00000000) +#define KVX_SFR_DBA1_DBA1_WFXM_CLEAR _ULL(0xffffffff) +#define KVX_SFR_DBA1_DBA1_WFXM_SET _ULL(0xffffffff00000000) + +#define KVX_SFR_DWA0_DWA0_MASK _ULL(0xffffffffffffffff) /* Debug Breakpoin= t Address 0 */ +#define KVX_SFR_DWA0_DWA0_SHIFT 0 +#define KVX_SFR_DWA0_DWA0_WIDTH 64 +#define KVX_SFR_DWA0_DWA0_WFXL_MASK _ULL(0xffffffff) +#define KVX_SFR_DWA0_DWA0_WFXL_CLEAR _ULL(0xffffffff) +#define KVX_SFR_DWA0_DWA0_WFXL_SET _ULL(0xffffffff00000000) +#define KVX_SFR_DWA0_DWA0_WFXM_MASK _ULL(0xffffffff00000000) +#define KVX_SFR_DWA0_DWA0_WFXM_CLEAR _ULL(0xffffffff) +#define KVX_SFR_DWA0_DWA0_WFXM_SET _ULL(0xffffffff00000000) + +#define KVX_SFR_DWA1_DWA1_MASK _ULL(0xffffffffffffffff) /* Debug Breakpoin= t Address 1 */ +#define KVX_SFR_DWA1_DWA1_SHIFT 0 +#define KVX_SFR_DWA1_DWA1_WIDTH 64 +#define KVX_SFR_DWA1_DWA1_WFXL_MASK _ULL(0xffffffff) +#define KVX_SFR_DWA1_DWA1_WFXL_CLEAR _ULL(0xffffffff) +#define KVX_SFR_DWA1_DWA1_WFXL_SET _ULL(0xffffffff00000000) +#define KVX_SFR_DWA1_DWA1_WFXM_MASK _ULL(0xffffffff00000000) +#define KVX_SFR_DWA1_DWA1_WFXM_CLEAR _ULL(0xffffffff) +#define KVX_SFR_DWA1_DWA1_WFXM_SET _ULL(0xffffffff00000000) + +#define KVX_SFR_DOW_B0_MASK _ULL(0x3) /* Breakpoint 0 owner. */ +#define KVX_SFR_DOW_B0_SHIFT 0 +#define KVX_SFR_DOW_B0_WIDTH 2 +#define KVX_SFR_DOW_B0_WFXL_MASK _ULL(0x3) +#define KVX_SFR_DOW_B0_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_DOW_B0_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_DOW_B1_MASK _ULL(0xc) /* Breakpoint 1 owner. */ +#define KVX_SFR_DOW_B1_SHIFT 2 +#define KVX_SFR_DOW_B1_WIDTH 2 +#define KVX_SFR_DOW_B1_WFXL_MASK _ULL(0xc) +#define KVX_SFR_DOW_B1_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_DOW_B1_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_DOW_W0_MASK _ULL(0x30) /* Watchpoint 0 owner. */ +#define KVX_SFR_DOW_W0_SHIFT 4 +#define KVX_SFR_DOW_W0_WIDTH 2 +#define KVX_SFR_DOW_W0_WFXL_MASK _ULL(0x30) +#define KVX_SFR_DOW_W0_WFXL_CLEAR _ULL(0x30) +#define KVX_SFR_DOW_W0_WFXL_SET _ULL(0x3000000000) + +#define KVX_SFR_DOW_W1_MASK _ULL(0xc0) /* Watchpoint 1 owner. */ +#define KVX_SFR_DOW_W1_SHIFT 6 +#define KVX_SFR_DOW_W1_WIDTH 2 +#define KVX_SFR_DOW_W1_WFXL_MASK _ULL(0xc0) +#define KVX_SFR_DOW_W1_WFXL_CLEAR _ULL(0xc0) +#define KVX_SFR_DOW_W1_WFXL_SET _ULL(0xc000000000) + +#define KVX_SFR_MO_MMI_MASK _ULL(0x3) /* Memory Management Instructions ow= ner. 
+#define KVX_SFR_MO_MMI_MASK _ULL(0x3) /* Memory Management Instructions owner. */
+#define KVX_SFR_MO_MMI_SHIFT 0
+#define KVX_SFR_MO_MMI_WIDTH 2
+#define KVX_SFR_MO_MMI_WFXL_MASK _ULL(0x3)
+#define KVX_SFR_MO_MMI_WFXL_CLEAR _ULL(0x3)
+#define KVX_SFR_MO_MMI_WFXL_SET _ULL(0x300000000)
+
+#define KVX_SFR_MO_RFE_MASK _ULL(0xc) /* RFE instruction owner. */
+#define KVX_SFR_MO_RFE_SHIFT 2
+#define KVX_SFR_MO_RFE_WIDTH 2
+#define KVX_SFR_MO_RFE_WFXL_MASK _ULL(0xc)
+#define KVX_SFR_MO_RFE_WFXL_CLEAR _ULL(0xc)
+#define KVX_SFR_MO_RFE_WFXL_SET _ULL(0xc00000000)
+
+#define KVX_SFR_MO_STOP_MASK _ULL(0x30) /* STOP instruction owner. */
+#define KVX_SFR_MO_STOP_SHIFT 4
+#define KVX_SFR_MO_STOP_WIDTH 2
+#define KVX_SFR_MO_STOP_WFXL_MASK _ULL(0x30)
+#define KVX_SFR_MO_STOP_WFXL_CLEAR _ULL(0x30)
+#define KVX_SFR_MO_STOP_WFXL_SET _ULL(0x3000000000)
+
+#define KVX_SFR_MO_SYNC_MASK _ULL(0xc0) /* SYNCGROUP instruction owner. */
+#define KVX_SFR_MO_SYNC_SHIFT 6
+#define KVX_SFR_MO_SYNC_WIDTH 2
+#define KVX_SFR_MO_SYNC_WFXL_MASK _ULL(0xc0)
+#define KVX_SFR_MO_SYNC_WFXL_CLEAR _ULL(0xc0)
+#define KVX_SFR_MO_SYNC_WFXL_SET _ULL(0xc000000000)
+
+#define KVX_SFR_MO_PCR_MASK _ULL(0x300) /* PCR register owner. */
+#define KVX_SFR_MO_PCR_SHIFT 8
+#define KVX_SFR_MO_PCR_WIDTH 2
+#define KVX_SFR_MO_PCR_WFXL_MASK _ULL(0x300)
+#define KVX_SFR_MO_PCR_WFXL_CLEAR _ULL(0x300)
+#define KVX_SFR_MO_PCR_WFXL_SET _ULL(0x30000000000)
+
+#define KVX_SFR_MO_MSG_MASK _ULL(0xc00) /* MMU SFR GROUP registers owner. */
+#define KVX_SFR_MO_MSG_SHIFT 10
+#define KVX_SFR_MO_MSG_WIDTH 2
+#define KVX_SFR_MO_MSG_WFXL_MASK _ULL(0xc00)
+#define KVX_SFR_MO_MSG_WFXL_CLEAR _ULL(0xc00)
+#define KVX_SFR_MO_MSG_WFXL_SET _ULL(0xc0000000000)
+
+#define KVX_SFR_MO_MEN_MASK _ULL(0x3000) /* Miscellaneous External Notifications register owner. */
+#define KVX_SFR_MO_MEN_SHIFT 12
+#define KVX_SFR_MO_MEN_WIDTH 2
+#define KVX_SFR_MO_MEN_WFXL_MASK _ULL(0x3000)
+#define KVX_SFR_MO_MEN_WFXL_CLEAR _ULL(0x3000)
+#define KVX_SFR_MO_MEN_WFXL_SET _ULL(0x300000000000)
+
+#define KVX_SFR_MO_MES_MASK _ULL(0xc000) /* Memory Error Status register owner. */
+#define KVX_SFR_MO_MES_SHIFT 14
+#define KVX_SFR_MO_MES_WIDTH 2
+#define KVX_SFR_MO_MES_WFXL_MASK _ULL(0xc000)
+#define KVX_SFR_MO_MES_WFXL_CLEAR _ULL(0xc000)
+#define KVX_SFR_MO_MES_WFXL_SET _ULL(0xc00000000000)
+
+#define KVX_SFR_MO_CSIT_MASK _ULL(0x30000) /* Compute Status Arithmetic Interrupt register owner. */
+#define KVX_SFR_MO_CSIT_SHIFT 16
+#define KVX_SFR_MO_CSIT_WIDTH 2
+#define KVX_SFR_MO_CSIT_WFXL_MASK _ULL(0x30000)
+#define KVX_SFR_MO_CSIT_WFXL_CLEAR _ULL(0x30000)
+#define KVX_SFR_MO_CSIT_WFXL_SET _ULL(0x3000000000000)
+
+#define KVX_SFR_MO_T0_MASK _ULL(0xc0000) /* Timer 0 register group owner */
+#define KVX_SFR_MO_T0_SHIFT 18
+#define KVX_SFR_MO_T0_WIDTH 2
+#define KVX_SFR_MO_T0_WFXL_MASK _ULL(0xc0000)
+#define KVX_SFR_MO_T0_WFXL_CLEAR _ULL(0xc0000)
+#define KVX_SFR_MO_T0_WFXL_SET _ULL(0xc000000000000)
+
+#define KVX_SFR_MO_T1_MASK _ULL(0x300000) /* Timer 1 register group owner */
+#define KVX_SFR_MO_T1_SHIFT 20
+#define KVX_SFR_MO_T1_WIDTH 2
+#define KVX_SFR_MO_T1_WFXL_MASK _ULL(0x300000)
+#define KVX_SFR_MO_T1_WFXL_CLEAR _ULL(0x300000)
+#define KVX_SFR_MO_T1_WFXL_SET _ULL(0x30000000000000)
+
+#define KVX_SFR_MO_WD_MASK _ULL(0xc00000) /* Watch Dog register group owner.
*/ +#define KVX_SFR_MO_WD_SHIFT 22 +#define KVX_SFR_MO_WD_WIDTH 2 +#define KVX_SFR_MO_WD_WFXL_MASK _ULL(0xc00000) +#define KVX_SFR_MO_WD_WFXL_CLEAR _ULL(0xc00000) +#define KVX_SFR_MO_WD_WFXL_SET _ULL(0xc0000000000000) + +#define KVX_SFR_MO_PM0_MASK _ULL(0x3000000) /* Performance Monitor 0 regis= ter owner. */ +#define KVX_SFR_MO_PM0_SHIFT 24 +#define KVX_SFR_MO_PM0_WIDTH 2 +#define KVX_SFR_MO_PM0_WFXL_MASK _ULL(0x3000000) +#define KVX_SFR_MO_PM0_WFXL_CLEAR _ULL(0x3000000) +#define KVX_SFR_MO_PM0_WFXL_SET _ULL(0x300000000000000) + +#define KVX_SFR_MO_PM1_MASK _ULL(0xc000000) /* Performance Monitor 1 regis= ter owner. */ +#define KVX_SFR_MO_PM1_SHIFT 26 +#define KVX_SFR_MO_PM1_WIDTH 2 +#define KVX_SFR_MO_PM1_WFXL_MASK _ULL(0xc000000) +#define KVX_SFR_MO_PM1_WFXL_CLEAR _ULL(0xc000000) +#define KVX_SFR_MO_PM1_WFXL_SET _ULL(0xc00000000000000) + +#define KVX_SFR_MO_PM2_MASK _ULL(0x30000000) /* Performance Monitor 2 regi= ster owner. */ +#define KVX_SFR_MO_PM2_SHIFT 28 +#define KVX_SFR_MO_PM2_WIDTH 2 +#define KVX_SFR_MO_PM2_WFXL_MASK _ULL(0x30000000) +#define KVX_SFR_MO_PM2_WFXL_CLEAR _ULL(0x30000000) +#define KVX_SFR_MO_PM2_WFXL_SET _ULL(0x3000000000000000) + +#define KVX_SFR_MO_PM3_MASK _ULL(0xc0000000) /* Performance Monitor 3 regi= ster owner. */ +#define KVX_SFR_MO_PM3_SHIFT 30 +#define KVX_SFR_MO_PM3_WIDTH 2 +#define KVX_SFR_MO_PM3_WFXL_MASK _ULL(0xc0000000) +#define KVX_SFR_MO_PM3_WFXL_CLEAR _ULL(0xc0000000) +#define KVX_SFR_MO_PM3_WFXL_SET _ULL(0xc000000000000000) + +#define KVX_SFR_MO_PMIT_MASK _ULL(0x300000000) /* Performance Monitor Inte= rrupt register group owner. */ +#define KVX_SFR_MO_PMIT_SHIFT 32 +#define KVX_SFR_MO_PMIT_WIDTH 2 +#define KVX_SFR_MO_PMIT_WFXM_MASK _ULL(0x300000000) +#define KVX_SFR_MO_PMIT_WFXM_CLEAR _ULL(0x3) +#define KVX_SFR_MO_PMIT_WFXM_SET _ULL(0x300000000) + +#define KVX_SFR_MOW_MMI_MASK _ULL(0x3) /* Memory Management Instructions o= wner. */ +#define KVX_SFR_MOW_MMI_SHIFT 0 +#define KVX_SFR_MOW_MMI_WIDTH 2 +#define KVX_SFR_MOW_MMI_WFXL_MASK _ULL(0x3) +#define KVX_SFR_MOW_MMI_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_MOW_MMI_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_MOW_RFE_MASK _ULL(0xc) /* RFE instruction owner. */ +#define KVX_SFR_MOW_RFE_SHIFT 2 +#define KVX_SFR_MOW_RFE_WIDTH 2 +#define KVX_SFR_MOW_RFE_WFXL_MASK _ULL(0xc) +#define KVX_SFR_MOW_RFE_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_MOW_RFE_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_MOW_STOP_MASK _ULL(0x30) /* STOP instruction owner. */ +#define KVX_SFR_MOW_STOP_SHIFT 4 +#define KVX_SFR_MOW_STOP_WIDTH 2 +#define KVX_SFR_MOW_STOP_WFXL_MASK _ULL(0x30) +#define KVX_SFR_MOW_STOP_WFXL_CLEAR _ULL(0x30) +#define KVX_SFR_MOW_STOP_WFXL_SET _ULL(0x3000000000) + +#define KVX_SFR_MOW_SYNC_MASK _ULL(0xc0) /* SYNCGROUP instruction owner. */ +#define KVX_SFR_MOW_SYNC_SHIFT 6 +#define KVX_SFR_MOW_SYNC_WIDTH 2 +#define KVX_SFR_MOW_SYNC_WFXL_MASK _ULL(0xc0) +#define KVX_SFR_MOW_SYNC_WFXL_CLEAR _ULL(0xc0) +#define KVX_SFR_MOW_SYNC_WFXL_SET _ULL(0xc000000000) + +#define KVX_SFR_MOW_PCR_MASK _ULL(0x300) /* PCR register owner. 
*/
+#define KVX_SFR_MOW_PCR_SHIFT 8
+#define KVX_SFR_MOW_PCR_WIDTH 2
+#define KVX_SFR_MOW_PCR_WFXL_MASK _ULL(0x300)
+#define KVX_SFR_MOW_PCR_WFXL_CLEAR _ULL(0x300)
+#define KVX_SFR_MOW_PCR_WFXL_SET _ULL(0x30000000000)
+
+#define KVX_SFR_MOW_MSG_MASK _ULL(0xc00) /* MMU SFR GROUP registers owner. */
+#define KVX_SFR_MOW_MSG_SHIFT 10
+#define KVX_SFR_MOW_MSG_WIDTH 2
+#define KVX_SFR_MOW_MSG_WFXL_MASK _ULL(0xc00)
+#define KVX_SFR_MOW_MSG_WFXL_CLEAR _ULL(0xc00)
+#define KVX_SFR_MOW_MSG_WFXL_SET _ULL(0xc0000000000)
+
+#define KVX_SFR_MOW_MEN_MASK _ULL(0x3000) /* Miscellaneous External Notifications register owner. */
+#define KVX_SFR_MOW_MEN_SHIFT 12
+#define KVX_SFR_MOW_MEN_WIDTH 2
+#define KVX_SFR_MOW_MEN_WFXL_MASK _ULL(0x3000)
+#define KVX_SFR_MOW_MEN_WFXL_CLEAR _ULL(0x3000)
+#define KVX_SFR_MOW_MEN_WFXL_SET _ULL(0x300000000000)
+
+#define KVX_SFR_MOW_MES_MASK _ULL(0xc000) /* Memory Error Status register owner. */
+#define KVX_SFR_MOW_MES_SHIFT 14
+#define KVX_SFR_MOW_MES_WIDTH 2
+#define KVX_SFR_MOW_MES_WFXL_MASK _ULL(0xc000)
+#define KVX_SFR_MOW_MES_WFXL_CLEAR _ULL(0xc000)
+#define KVX_SFR_MOW_MES_WFXL_SET _ULL(0xc00000000000)
+
+#define KVX_SFR_MOW_CSIT_MASK _ULL(0x30000) /* Compute Status Arithmetic Interrupt register owner. */
+#define KVX_SFR_MOW_CSIT_SHIFT 16
+#define KVX_SFR_MOW_CSIT_WIDTH 2
+#define KVX_SFR_MOW_CSIT_WFXL_MASK _ULL(0x30000)
+#define KVX_SFR_MOW_CSIT_WFXL_CLEAR _ULL(0x30000)
+#define KVX_SFR_MOW_CSIT_WFXL_SET _ULL(0x3000000000000)
+
+#define KVX_SFR_MOW_T0_MASK _ULL(0xc0000) /* Timer 0 register group owner */
+#define KVX_SFR_MOW_T0_SHIFT 18
+#define KVX_SFR_MOW_T0_WIDTH 2
+#define KVX_SFR_MOW_T0_WFXL_MASK _ULL(0xc0000)
+#define KVX_SFR_MOW_T0_WFXL_CLEAR _ULL(0xc0000)
+#define KVX_SFR_MOW_T0_WFXL_SET _ULL(0xc000000000000)
+
+#define KVX_SFR_MOW_T1_MASK _ULL(0x300000) /* Timer 1 register group owner */
+#define KVX_SFR_MOW_T1_SHIFT 20
+#define KVX_SFR_MOW_T1_WIDTH 2
+#define KVX_SFR_MOW_T1_WFXL_MASK _ULL(0x300000)
+#define KVX_SFR_MOW_T1_WFXL_CLEAR _ULL(0x300000)
+#define KVX_SFR_MOW_T1_WFXL_SET _ULL(0x30000000000000)
+
+#define KVX_SFR_MOW_WD_MASK _ULL(0xc00000) /* Watch Dog register group owner. */
+#define KVX_SFR_MOW_WD_SHIFT 22
+#define KVX_SFR_MOW_WD_WIDTH 2
+#define KVX_SFR_MOW_WD_WFXL_MASK _ULL(0xc00000)
+#define KVX_SFR_MOW_WD_WFXL_CLEAR _ULL(0xc00000)
+#define KVX_SFR_MOW_WD_WFXL_SET _ULL(0xc0000000000000)
+
+#define KVX_SFR_MOW_PM0_MASK _ULL(0x3000000) /* Performance Monitor 0 register owner. */
+#define KVX_SFR_MOW_PM0_SHIFT 24
+#define KVX_SFR_MOW_PM0_WIDTH 2
+#define KVX_SFR_MOW_PM0_WFXL_MASK _ULL(0x3000000)
+#define KVX_SFR_MOW_PM0_WFXL_CLEAR _ULL(0x3000000)
+#define KVX_SFR_MOW_PM0_WFXL_SET _ULL(0x300000000000000)
+
+#define KVX_SFR_MOW_PM1_MASK _ULL(0xc000000) /* Performance Monitor 1 register owner. */
+#define KVX_SFR_MOW_PM1_SHIFT 26
+#define KVX_SFR_MOW_PM1_WIDTH 2
+#define KVX_SFR_MOW_PM1_WFXL_MASK _ULL(0xc000000)
+#define KVX_SFR_MOW_PM1_WFXL_CLEAR _ULL(0xc000000)
+#define KVX_SFR_MOW_PM1_WFXL_SET _ULL(0xc00000000000000)
+
+#define KVX_SFR_MOW_PM2_MASK _ULL(0x30000000) /* Performance Monitor 2 register owner. */
+#define KVX_SFR_MOW_PM2_SHIFT 28
+#define KVX_SFR_MOW_PM2_WIDTH 2
+#define KVX_SFR_MOW_PM2_WFXL_MASK _ULL(0x30000000)
+#define KVX_SFR_MOW_PM2_WFXL_CLEAR _ULL(0x30000000)
+#define KVX_SFR_MOW_PM2_WFXL_SET _ULL(0x3000000000000000)
+
+#define KVX_SFR_MOW_PM3_MASK _ULL(0xc0000000) /* Performance Monitor 3 register owner.
*/ +#define KVX_SFR_MOW_PM3_SHIFT 30 +#define KVX_SFR_MOW_PM3_WIDTH 2 +#define KVX_SFR_MOW_PM3_WFXL_MASK _ULL(0xc0000000) +#define KVX_SFR_MOW_PM3_WFXL_CLEAR _ULL(0xc0000000) +#define KVX_SFR_MOW_PM3_WFXL_SET _ULL(0xc000000000000000) + +#define KVX_SFR_MOW_PMIT_MASK _ULL(0x300000000) /* Performance Monitor Int= errupt register group owner. */ +#define KVX_SFR_MOW_PMIT_SHIFT 32 +#define KVX_SFR_MOW_PMIT_WIDTH 2 +#define KVX_SFR_MOW_PMIT_WFXM_MASK _ULL(0x300000000) +#define KVX_SFR_MOW_PMIT_WFXM_CLEAR _ULL(0x3) +#define KVX_SFR_MOW_PMIT_WFXM_SET _ULL(0x300000000) + +#define KVX_SFR_PS_PL_MASK _ULL(0x3) /* Current Privilege Level */ +#define KVX_SFR_PS_PL_SHIFT 0 +#define KVX_SFR_PS_PL_WIDTH 2 +#define KVX_SFR_PS_PL_WFXL_MASK _ULL(0x3) +#define KVX_SFR_PS_PL_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_PS_PL_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_PS_ET_MASK _ULL(0x4) /* Exception Taken */ +#define KVX_SFR_PS_ET_SHIFT 2 +#define KVX_SFR_PS_ET_WIDTH 1 +#define KVX_SFR_PS_ET_WFXL_MASK _ULL(0x4) +#define KVX_SFR_PS_ET_WFXL_CLEAR _ULL(0x4) +#define KVX_SFR_PS_ET_WFXL_SET _ULL(0x400000000) + +#define KVX_SFR_PS_HTD_MASK _ULL(0x8) /* Hardware Trap Disable */ +#define KVX_SFR_PS_HTD_SHIFT 3 +#define KVX_SFR_PS_HTD_WIDTH 1 +#define KVX_SFR_PS_HTD_WFXL_MASK _ULL(0x8) +#define KVX_SFR_PS_HTD_WFXL_CLEAR _ULL(0x8) +#define KVX_SFR_PS_HTD_WFXL_SET _ULL(0x800000000) + +#define KVX_SFR_PS_IE_MASK _ULL(0x10) /* Interrupt Enable */ +#define KVX_SFR_PS_IE_SHIFT 4 +#define KVX_SFR_PS_IE_WIDTH 1 +#define KVX_SFR_PS_IE_WFXL_MASK _ULL(0x10) +#define KVX_SFR_PS_IE_WFXL_CLEAR _ULL(0x10) +#define KVX_SFR_PS_IE_WFXL_SET _ULL(0x1000000000) + +#define KVX_SFR_PS_HLE_MASK _ULL(0x20) /* Hardware Loop Enable */ +#define KVX_SFR_PS_HLE_SHIFT 5 +#define KVX_SFR_PS_HLE_WIDTH 1 +#define KVX_SFR_PS_HLE_WFXL_MASK _ULL(0x20) +#define KVX_SFR_PS_HLE_WFXL_CLEAR _ULL(0x20) +#define KVX_SFR_PS_HLE_WFXL_SET _ULL(0x2000000000) + +#define KVX_SFR_PS_SRE_MASK _ULL(0x40) /* Software REserved */ +#define KVX_SFR_PS_SRE_SHIFT 6 +#define KVX_SFR_PS_SRE_WIDTH 1 +#define KVX_SFR_PS_SRE_WFXL_MASK _ULL(0x40) +#define KVX_SFR_PS_SRE_WFXL_CLEAR _ULL(0x40) +#define KVX_SFR_PS_SRE_WFXL_SET _ULL(0x4000000000) + +#define KVX_SFR_PS_DAUS_MASK _ULL(0x80) /* Data Accesses Use SPS settings = */ +#define KVX_SFR_PS_DAUS_SHIFT 7 +#define KVX_SFR_PS_DAUS_WIDTH 1 +#define KVX_SFR_PS_DAUS_WFXL_MASK _ULL(0x80) +#define KVX_SFR_PS_DAUS_WFXL_CLEAR _ULL(0x80) +#define KVX_SFR_PS_DAUS_WFXL_SET _ULL(0x8000000000) + +#define KVX_SFR_PS_ICE_MASK _ULL(0x100) /* Instruction Cache Enable */ +#define KVX_SFR_PS_ICE_SHIFT 8 +#define KVX_SFR_PS_ICE_WIDTH 1 +#define KVX_SFR_PS_ICE_WFXL_MASK _ULL(0x100) +#define KVX_SFR_PS_ICE_WFXL_CLEAR _ULL(0x100) +#define KVX_SFR_PS_ICE_WFXL_SET _ULL(0x10000000000) + +#define KVX_SFR_PS_USE_MASK _ULL(0x200) /* Uncached Streaming Enable */ +#define KVX_SFR_PS_USE_SHIFT 9 +#define KVX_SFR_PS_USE_WIDTH 1 +#define KVX_SFR_PS_USE_WFXL_MASK _ULL(0x200) +#define KVX_SFR_PS_USE_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_PS_USE_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_PS_DCE_MASK _ULL(0x400) /* Data Cache Enable */ +#define KVX_SFR_PS_DCE_SHIFT 10 +#define KVX_SFR_PS_DCE_WIDTH 1 +#define KVX_SFR_PS_DCE_WFXL_MASK _ULL(0x400) +#define KVX_SFR_PS_DCE_WFXL_CLEAR _ULL(0x400) +#define KVX_SFR_PS_DCE_WFXL_SET _ULL(0x40000000000) + +#define KVX_SFR_PS_MME_MASK _ULL(0x800) /* Memory Management Enable */ +#define KVX_SFR_PS_MME_SHIFT 11 +#define KVX_SFR_PS_MME_WIDTH 1 +#define KVX_SFR_PS_MME_WFXL_MASK _ULL(0x800) +#define KVX_SFR_PS_MME_WFXL_CLEAR 
_ULL(0x800) +#define KVX_SFR_PS_MME_WFXL_SET _ULL(0x80000000000) + +#define KVX_SFR_PS_IL_MASK _ULL(0x3000) /* Interrupt Level */ +#define KVX_SFR_PS_IL_SHIFT 12 +#define KVX_SFR_PS_IL_WIDTH 2 +#define KVX_SFR_PS_IL_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_PS_IL_WFXL_CLEAR _ULL(0x3000) +#define KVX_SFR_PS_IL_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_PS_VS_MASK _ULL(0xc000) /* Virtual Space */ +#define KVX_SFR_PS_VS_SHIFT 14 +#define KVX_SFR_PS_VS_WIDTH 2 +#define KVX_SFR_PS_VS_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_PS_VS_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_PS_VS_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_PS_V64_MASK _ULL(0x10000) /* Virtual 64 bits mode. */ +#define KVX_SFR_PS_V64_SHIFT 16 +#define KVX_SFR_PS_V64_WIDTH 1 +#define KVX_SFR_PS_V64_WFXL_MASK _ULL(0x10000) +#define KVX_SFR_PS_V64_WFXL_CLEAR _ULL(0x10000) +#define KVX_SFR_PS_V64_WFXL_SET _ULL(0x1000000000000) + +#define KVX_SFR_PS_L2E_MASK _ULL(0x20000) /* L2 cache Enable. */ +#define KVX_SFR_PS_L2E_SHIFT 17 +#define KVX_SFR_PS_L2E_WIDTH 1 +#define KVX_SFR_PS_L2E_WFXL_MASK _ULL(0x20000) +#define KVX_SFR_PS_L2E_WFXL_CLEAR _ULL(0x20000) +#define KVX_SFR_PS_L2E_WFXL_SET _ULL(0x2000000000000) + +#define KVX_SFR_PS_SME_MASK _ULL(0x40000) /* Step Mode Enabled */ +#define KVX_SFR_PS_SME_SHIFT 18 +#define KVX_SFR_PS_SME_WIDTH 1 +#define KVX_SFR_PS_SME_WFXL_MASK _ULL(0x40000) +#define KVX_SFR_PS_SME_WFXL_CLEAR _ULL(0x40000) +#define KVX_SFR_PS_SME_WFXL_SET _ULL(0x4000000000000) + +#define KVX_SFR_PS_SMR_MASK _ULL(0x80000) /* Step Mode Ready */ +#define KVX_SFR_PS_SMR_SHIFT 19 +#define KVX_SFR_PS_SMR_WIDTH 1 +#define KVX_SFR_PS_SMR_WFXL_MASK _ULL(0x80000) +#define KVX_SFR_PS_SMR_WFXL_CLEAR _ULL(0x80000) +#define KVX_SFR_PS_SMR_WFXL_SET _ULL(0x8000000000000) + +#define KVX_SFR_PS_PMJ_MASK _ULL(0xf00000) /* Page Mask in JTLB. */ +#define KVX_SFR_PS_PMJ_SHIFT 20 +#define KVX_SFR_PS_PMJ_WIDTH 4 +#define KVX_SFR_PS_PMJ_WFXL_MASK _ULL(0xf00000) +#define KVX_SFR_PS_PMJ_WFXL_CLEAR _ULL(0xf00000) +#define KVX_SFR_PS_PMJ_WFXL_SET _ULL(0xf0000000000000) + +#define KVX_SFR_PS_MMUP_MASK _ULL(0x1000000) /* Privileged on MMU. 
*/ +#define KVX_SFR_PS_MMUP_SHIFT 24 +#define KVX_SFR_PS_MMUP_WIDTH 1 +#define KVX_SFR_PS_MMUP_WFXL_MASK _ULL(0x1000000) +#define KVX_SFR_PS_MMUP_WFXL_CLEAR _ULL(0x1000000) +#define KVX_SFR_PS_MMUP_WFXL_SET _ULL(0x100000000000000) + +#define KVX_SFR_SPS_PL_MASK _ULL(0x3) /* Current Privilege Level */ +#define KVX_SFR_SPS_PL_SHIFT 0 +#define KVX_SFR_SPS_PL_WIDTH 2 +#define KVX_SFR_SPS_PL_WFXL_MASK _ULL(0x3) +#define KVX_SFR_SPS_PL_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_SPS_PL_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_SPS_ET_MASK _ULL(0x4) /* Exception Taken */ +#define KVX_SFR_SPS_ET_SHIFT 2 +#define KVX_SFR_SPS_ET_WIDTH 1 +#define KVX_SFR_SPS_ET_WFXL_MASK _ULL(0x4) +#define KVX_SFR_SPS_ET_WFXL_CLEAR _ULL(0x4) +#define KVX_SFR_SPS_ET_WFXL_SET _ULL(0x400000000) + +#define KVX_SFR_SPS_HTD_MASK _ULL(0x8) /* Hardware Trap Disable */ +#define KVX_SFR_SPS_HTD_SHIFT 3 +#define KVX_SFR_SPS_HTD_WIDTH 1 +#define KVX_SFR_SPS_HTD_WFXL_MASK _ULL(0x8) +#define KVX_SFR_SPS_HTD_WFXL_CLEAR _ULL(0x8) +#define KVX_SFR_SPS_HTD_WFXL_SET _ULL(0x800000000) + +#define KVX_SFR_SPS_IE_MASK _ULL(0x10) /* Interrupt Enable */ +#define KVX_SFR_SPS_IE_SHIFT 4 +#define KVX_SFR_SPS_IE_WIDTH 1 +#define KVX_SFR_SPS_IE_WFXL_MASK _ULL(0x10) +#define KVX_SFR_SPS_IE_WFXL_CLEAR _ULL(0x10) +#define KVX_SFR_SPS_IE_WFXL_SET _ULL(0x1000000000) + +#define KVX_SFR_SPS_HLE_MASK _ULL(0x20) /* Hardware Loop Enable */ +#define KVX_SFR_SPS_HLE_SHIFT 5 +#define KVX_SFR_SPS_HLE_WIDTH 1 +#define KVX_SFR_SPS_HLE_WFXL_MASK _ULL(0x20) +#define KVX_SFR_SPS_HLE_WFXL_CLEAR _ULL(0x20) +#define KVX_SFR_SPS_HLE_WFXL_SET _ULL(0x2000000000) + +#define KVX_SFR_SPS_SRE_MASK _ULL(0x40) /* Software REserved */ +#define KVX_SFR_SPS_SRE_SHIFT 6 +#define KVX_SFR_SPS_SRE_WIDTH 1 +#define KVX_SFR_SPS_SRE_WFXL_MASK _ULL(0x40) +#define KVX_SFR_SPS_SRE_WFXL_CLEAR _ULL(0x40) +#define KVX_SFR_SPS_SRE_WFXL_SET _ULL(0x4000000000) + +#define KVX_SFR_SPS_DAUS_MASK _ULL(0x80) /* Data Accesses Use SPS settings= */ +#define KVX_SFR_SPS_DAUS_SHIFT 7 +#define KVX_SFR_SPS_DAUS_WIDTH 1 +#define KVX_SFR_SPS_DAUS_WFXL_MASK _ULL(0x80) +#define KVX_SFR_SPS_DAUS_WFXL_CLEAR _ULL(0x80) +#define KVX_SFR_SPS_DAUS_WFXL_SET _ULL(0x8000000000) + +#define KVX_SFR_SPS_ICE_MASK _ULL(0x100) /* Instruction Cache Enable */ +#define KVX_SFR_SPS_ICE_SHIFT 8 +#define KVX_SFR_SPS_ICE_WIDTH 1 +#define KVX_SFR_SPS_ICE_WFXL_MASK _ULL(0x100) +#define KVX_SFR_SPS_ICE_WFXL_CLEAR _ULL(0x100) +#define KVX_SFR_SPS_ICE_WFXL_SET _ULL(0x10000000000) + +#define KVX_SFR_SPS_USE_MASK _ULL(0x200) /* Uncached Streaming Enable */ +#define KVX_SFR_SPS_USE_SHIFT 9 +#define KVX_SFR_SPS_USE_WIDTH 1 +#define KVX_SFR_SPS_USE_WFXL_MASK _ULL(0x200) +#define KVX_SFR_SPS_USE_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_SPS_USE_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_SPS_DCE_MASK _ULL(0x400) /* Data Cache Enable */ +#define KVX_SFR_SPS_DCE_SHIFT 10 +#define KVX_SFR_SPS_DCE_WIDTH 1 +#define KVX_SFR_SPS_DCE_WFXL_MASK _ULL(0x400) +#define KVX_SFR_SPS_DCE_WFXL_CLEAR _ULL(0x400) +#define KVX_SFR_SPS_DCE_WFXL_SET _ULL(0x40000000000) + +#define KVX_SFR_SPS_MME_MASK _ULL(0x800) /* Memory Management Enable */ +#define KVX_SFR_SPS_MME_SHIFT 11 +#define KVX_SFR_SPS_MME_WIDTH 1 +#define KVX_SFR_SPS_MME_WFXL_MASK _ULL(0x800) +#define KVX_SFR_SPS_MME_WFXL_CLEAR _ULL(0x800) +#define KVX_SFR_SPS_MME_WFXL_SET _ULL(0x80000000000) + +#define KVX_SFR_SPS_IL_MASK _ULL(0x3000) /* Interrupt Level */ +#define KVX_SFR_SPS_IL_SHIFT 12 +#define KVX_SFR_SPS_IL_WIDTH 2 +#define KVX_SFR_SPS_IL_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_SPS_IL_WFXL_CLEAR 
_ULL(0x3000) +#define KVX_SFR_SPS_IL_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_SPS_VS_MASK _ULL(0xc000) /* Virtual Space */ +#define KVX_SFR_SPS_VS_SHIFT 14 +#define KVX_SFR_SPS_VS_WIDTH 2 +#define KVX_SFR_SPS_VS_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_SPS_VS_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_SPS_VS_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_SPS_V64_MASK _ULL(0x10000) /* Virtual 64 bits mode. */ +#define KVX_SFR_SPS_V64_SHIFT 16 +#define KVX_SFR_SPS_V64_WIDTH 1 +#define KVX_SFR_SPS_V64_WFXL_MASK _ULL(0x10000) +#define KVX_SFR_SPS_V64_WFXL_CLEAR _ULL(0x10000) +#define KVX_SFR_SPS_V64_WFXL_SET _ULL(0x1000000000000) + +#define KVX_SFR_SPS_L2E_MASK _ULL(0x20000) /* L2 cache Enable. */ +#define KVX_SFR_SPS_L2E_SHIFT 17 +#define KVX_SFR_SPS_L2E_WIDTH 1 +#define KVX_SFR_SPS_L2E_WFXL_MASK _ULL(0x20000) +#define KVX_SFR_SPS_L2E_WFXL_CLEAR _ULL(0x20000) +#define KVX_SFR_SPS_L2E_WFXL_SET _ULL(0x2000000000000) + +#define KVX_SFR_SPS_SME_MASK _ULL(0x40000) /* Step Mode Enabled */ +#define KVX_SFR_SPS_SME_SHIFT 18 +#define KVX_SFR_SPS_SME_WIDTH 1 +#define KVX_SFR_SPS_SME_WFXL_MASK _ULL(0x40000) +#define KVX_SFR_SPS_SME_WFXL_CLEAR _ULL(0x40000) +#define KVX_SFR_SPS_SME_WFXL_SET _ULL(0x4000000000000) + +#define KVX_SFR_SPS_SMR_MASK _ULL(0x80000) /* Step Mode Ready */ +#define KVX_SFR_SPS_SMR_SHIFT 19 +#define KVX_SFR_SPS_SMR_WIDTH 1 +#define KVX_SFR_SPS_SMR_WFXL_MASK _ULL(0x80000) +#define KVX_SFR_SPS_SMR_WFXL_CLEAR _ULL(0x80000) +#define KVX_SFR_SPS_SMR_WFXL_SET _ULL(0x8000000000000) + +#define KVX_SFR_SPS_PMJ_MASK _ULL(0xf00000) /* Page Mask in JTLB. */ +#define KVX_SFR_SPS_PMJ_SHIFT 20 +#define KVX_SFR_SPS_PMJ_WIDTH 4 +#define KVX_SFR_SPS_PMJ_WFXL_MASK _ULL(0xf00000) +#define KVX_SFR_SPS_PMJ_WFXL_CLEAR _ULL(0xf00000) +#define KVX_SFR_SPS_PMJ_WFXL_SET _ULL(0xf0000000000000) + +#define KVX_SFR_SPS_MMUP_MASK _ULL(0x1000000) /* Privileged on MMU. 
*/ +#define KVX_SFR_SPS_MMUP_SHIFT 24 +#define KVX_SFR_SPS_MMUP_WIDTH 1 +#define KVX_SFR_SPS_MMUP_WFXL_MASK _ULL(0x1000000) +#define KVX_SFR_SPS_MMUP_WFXL_CLEAR _ULL(0x1000000) +#define KVX_SFR_SPS_MMUP_WFXL_SET _ULL(0x100000000000000) + +#define KVX_SFR_SPS_PL0_PL_MASK _ULL(0x3) /* Current Privilege Level */ +#define KVX_SFR_SPS_PL0_PL_SHIFT 0 +#define KVX_SFR_SPS_PL0_PL_WIDTH 2 +#define KVX_SFR_SPS_PL0_PL_WFXL_MASK _ULL(0x3) +#define KVX_SFR_SPS_PL0_PL_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_SPS_PL0_PL_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_SPS_PL0_ET_MASK _ULL(0x4) /* Exception Taken */ +#define KVX_SFR_SPS_PL0_ET_SHIFT 2 +#define KVX_SFR_SPS_PL0_ET_WIDTH 1 +#define KVX_SFR_SPS_PL0_ET_WFXL_MASK _ULL(0x4) +#define KVX_SFR_SPS_PL0_ET_WFXL_CLEAR _ULL(0x4) +#define KVX_SFR_SPS_PL0_ET_WFXL_SET _ULL(0x400000000) + +#define KVX_SFR_SPS_PL0_HTD_MASK _ULL(0x8) /* Hardware Trap Disable */ +#define KVX_SFR_SPS_PL0_HTD_SHIFT 3 +#define KVX_SFR_SPS_PL0_HTD_WIDTH 1 +#define KVX_SFR_SPS_PL0_HTD_WFXL_MASK _ULL(0x8) +#define KVX_SFR_SPS_PL0_HTD_WFXL_CLEAR _ULL(0x8) +#define KVX_SFR_SPS_PL0_HTD_WFXL_SET _ULL(0x800000000) + +#define KVX_SFR_SPS_PL0_IE_MASK _ULL(0x10) /* Interrupt Enable */ +#define KVX_SFR_SPS_PL0_IE_SHIFT 4 +#define KVX_SFR_SPS_PL0_IE_WIDTH 1 +#define KVX_SFR_SPS_PL0_IE_WFXL_MASK _ULL(0x10) +#define KVX_SFR_SPS_PL0_IE_WFXL_CLEAR _ULL(0x10) +#define KVX_SFR_SPS_PL0_IE_WFXL_SET _ULL(0x1000000000) + +#define KVX_SFR_SPS_PL0_HLE_MASK _ULL(0x20) /* Hardware Loop Enable */ +#define KVX_SFR_SPS_PL0_HLE_SHIFT 5 +#define KVX_SFR_SPS_PL0_HLE_WIDTH 1 +#define KVX_SFR_SPS_PL0_HLE_WFXL_MASK _ULL(0x20) +#define KVX_SFR_SPS_PL0_HLE_WFXL_CLEAR _ULL(0x20) +#define KVX_SFR_SPS_PL0_HLE_WFXL_SET _ULL(0x2000000000) + +#define KVX_SFR_SPS_PL0_SRE_MASK _ULL(0x40) /* Software REserved */ +#define KVX_SFR_SPS_PL0_SRE_SHIFT 6 +#define KVX_SFR_SPS_PL0_SRE_WIDTH 1 +#define KVX_SFR_SPS_PL0_SRE_WFXL_MASK _ULL(0x40) +#define KVX_SFR_SPS_PL0_SRE_WFXL_CLEAR _ULL(0x40) +#define KVX_SFR_SPS_PL0_SRE_WFXL_SET _ULL(0x4000000000) + +#define KVX_SFR_SPS_PL0_DAUS_MASK _ULL(0x80) /* Data Accesses Use SPS sett= ings */ +#define KVX_SFR_SPS_PL0_DAUS_SHIFT 7 +#define KVX_SFR_SPS_PL0_DAUS_WIDTH 1 +#define KVX_SFR_SPS_PL0_DAUS_WFXL_MASK _ULL(0x80) +#define KVX_SFR_SPS_PL0_DAUS_WFXL_CLEAR _ULL(0x80) +#define KVX_SFR_SPS_PL0_DAUS_WFXL_SET _ULL(0x8000000000) + +#define KVX_SFR_SPS_PL0_ICE_MASK _ULL(0x100) /* Instruction Cache Enable */ +#define KVX_SFR_SPS_PL0_ICE_SHIFT 8 +#define KVX_SFR_SPS_PL0_ICE_WIDTH 1 +#define KVX_SFR_SPS_PL0_ICE_WFXL_MASK _ULL(0x100) +#define KVX_SFR_SPS_PL0_ICE_WFXL_CLEAR _ULL(0x100) +#define KVX_SFR_SPS_PL0_ICE_WFXL_SET _ULL(0x10000000000) + +#define KVX_SFR_SPS_PL0_USE_MASK _ULL(0x200) /* Uncached Streaming Enable = */ +#define KVX_SFR_SPS_PL0_USE_SHIFT 9 +#define KVX_SFR_SPS_PL0_USE_WIDTH 1 +#define KVX_SFR_SPS_PL0_USE_WFXL_MASK _ULL(0x200) +#define KVX_SFR_SPS_PL0_USE_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_SPS_PL0_USE_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_SPS_PL0_DCE_MASK _ULL(0x400) /* Data Cache Enable */ +#define KVX_SFR_SPS_PL0_DCE_SHIFT 10 +#define KVX_SFR_SPS_PL0_DCE_WIDTH 1 +#define KVX_SFR_SPS_PL0_DCE_WFXL_MASK _ULL(0x400) +#define KVX_SFR_SPS_PL0_DCE_WFXL_CLEAR _ULL(0x400) +#define KVX_SFR_SPS_PL0_DCE_WFXL_SET _ULL(0x40000000000) + +#define KVX_SFR_SPS_PL0_MME_MASK _ULL(0x800) /* Memory Management Enable */ +#define KVX_SFR_SPS_PL0_MME_SHIFT 11 +#define KVX_SFR_SPS_PL0_MME_WIDTH 1 +#define KVX_SFR_SPS_PL0_MME_WFXL_MASK _ULL(0x800) +#define KVX_SFR_SPS_PL0_MME_WFXL_CLEAR _ULL(0x800) 
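+/*
+ * Editorial note, not part of the original patch: the _WFXL and _WFXM
+ * constants in this file all follow one pattern, which suggests that
+ * the kvx wfxl/wfxm instructions take a 64-bit operand whose low 32
+ * bits clear bits and whose high 32 bits set bits in the low/high half
+ * of the target SFR: in every pair here, _CLEAR holds the field bits in
+ * the low word and _SET holds the same bits in the high word.  Under
+ * that assumption, a hypothetical operand builder for a low-word field
+ * (SHIFT < 32) could look like:
+ */
+#define KVX_SFR_WFXL_VAL(field, val)					\
+	(KVX_SFR_##field##_WFXL_CLEAR |					\
+	 ((unsigned long long)(val) << (KVX_SFR_##field##_SHIFT + 32)))
+/* e.g. KVX_SFR_WFXL_VAL(PS_IL, 2) clears PS.IL and then sets it to 2. */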
+#define KVX_SFR_SPS_PL0_MME_WFXL_SET _ULL(0x80000000000) + +#define KVX_SFR_SPS_PL0_IL_MASK _ULL(0x3000) /* Interrupt Level */ +#define KVX_SFR_SPS_PL0_IL_SHIFT 12 +#define KVX_SFR_SPS_PL0_IL_WIDTH 2 +#define KVX_SFR_SPS_PL0_IL_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_SPS_PL0_IL_WFXL_CLEAR _ULL(0x3000) +#define KVX_SFR_SPS_PL0_IL_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_SPS_PL0_VS_MASK _ULL(0xc000) /* Virtual Space */ +#define KVX_SFR_SPS_PL0_VS_SHIFT 14 +#define KVX_SFR_SPS_PL0_VS_WIDTH 2 +#define KVX_SFR_SPS_PL0_VS_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_SPS_PL0_VS_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_SPS_PL0_VS_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_SPS_PL0_V64_MASK _ULL(0x10000) /* Virtual 64 bits mode. */ +#define KVX_SFR_SPS_PL0_V64_SHIFT 16 +#define KVX_SFR_SPS_PL0_V64_WIDTH 1 +#define KVX_SFR_SPS_PL0_V64_WFXL_MASK _ULL(0x10000) +#define KVX_SFR_SPS_PL0_V64_WFXL_CLEAR _ULL(0x10000) +#define KVX_SFR_SPS_PL0_V64_WFXL_SET _ULL(0x1000000000000) + +#define KVX_SFR_SPS_PL0_L2E_MASK _ULL(0x20000) /* L2 cache Enable. */ +#define KVX_SFR_SPS_PL0_L2E_SHIFT 17 +#define KVX_SFR_SPS_PL0_L2E_WIDTH 1 +#define KVX_SFR_SPS_PL0_L2E_WFXL_MASK _ULL(0x20000) +#define KVX_SFR_SPS_PL0_L2E_WFXL_CLEAR _ULL(0x20000) +#define KVX_SFR_SPS_PL0_L2E_WFXL_SET _ULL(0x2000000000000) + +#define KVX_SFR_SPS_PL0_SME_MASK _ULL(0x40000) /* Step Mode Enabled */ +#define KVX_SFR_SPS_PL0_SME_SHIFT 18 +#define KVX_SFR_SPS_PL0_SME_WIDTH 1 +#define KVX_SFR_SPS_PL0_SME_WFXL_MASK _ULL(0x40000) +#define KVX_SFR_SPS_PL0_SME_WFXL_CLEAR _ULL(0x40000) +#define KVX_SFR_SPS_PL0_SME_WFXL_SET _ULL(0x4000000000000) + +#define KVX_SFR_SPS_PL0_SMR_MASK _ULL(0x80000) /* Step Mode Ready */ +#define KVX_SFR_SPS_PL0_SMR_SHIFT 19 +#define KVX_SFR_SPS_PL0_SMR_WIDTH 1 +#define KVX_SFR_SPS_PL0_SMR_WFXL_MASK _ULL(0x80000) +#define KVX_SFR_SPS_PL0_SMR_WFXL_CLEAR _ULL(0x80000) +#define KVX_SFR_SPS_PL0_SMR_WFXL_SET _ULL(0x8000000000000) + +#define KVX_SFR_SPS_PL0_PMJ_MASK _ULL(0xf00000) /* Page Mask in JTLB. */ +#define KVX_SFR_SPS_PL0_PMJ_SHIFT 20 +#define KVX_SFR_SPS_PL0_PMJ_WIDTH 4 +#define KVX_SFR_SPS_PL0_PMJ_WFXL_MASK _ULL(0xf00000) +#define KVX_SFR_SPS_PL0_PMJ_WFXL_CLEAR _ULL(0xf00000) +#define KVX_SFR_SPS_PL0_PMJ_WFXL_SET _ULL(0xf0000000000000) + +#define KVX_SFR_SPS_PL0_MMUP_MASK _ULL(0x1000000) /* Privileged on MMU. 
*/ +#define KVX_SFR_SPS_PL0_MMUP_SHIFT 24 +#define KVX_SFR_SPS_PL0_MMUP_WIDTH 1 +#define KVX_SFR_SPS_PL0_MMUP_WFXL_MASK _ULL(0x1000000) +#define KVX_SFR_SPS_PL0_MMUP_WFXL_CLEAR _ULL(0x1000000) +#define KVX_SFR_SPS_PL0_MMUP_WFXL_SET _ULL(0x100000000000000) + +#define KVX_SFR_SPS_PL1_PL_MASK _ULL(0x3) /* Current Privilege Level */ +#define KVX_SFR_SPS_PL1_PL_SHIFT 0 +#define KVX_SFR_SPS_PL1_PL_WIDTH 2 +#define KVX_SFR_SPS_PL1_PL_WFXL_MASK _ULL(0x3) +#define KVX_SFR_SPS_PL1_PL_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_SPS_PL1_PL_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_SPS_PL1_ET_MASK _ULL(0x4) /* Exception Taken */ +#define KVX_SFR_SPS_PL1_ET_SHIFT 2 +#define KVX_SFR_SPS_PL1_ET_WIDTH 1 +#define KVX_SFR_SPS_PL1_ET_WFXL_MASK _ULL(0x4) +#define KVX_SFR_SPS_PL1_ET_WFXL_CLEAR _ULL(0x4) +#define KVX_SFR_SPS_PL1_ET_WFXL_SET _ULL(0x400000000) + +#define KVX_SFR_SPS_PL1_HTD_MASK _ULL(0x8) /* Hardware Trap Disable */ +#define KVX_SFR_SPS_PL1_HTD_SHIFT 3 +#define KVX_SFR_SPS_PL1_HTD_WIDTH 1 +#define KVX_SFR_SPS_PL1_HTD_WFXL_MASK _ULL(0x8) +#define KVX_SFR_SPS_PL1_HTD_WFXL_CLEAR _ULL(0x8) +#define KVX_SFR_SPS_PL1_HTD_WFXL_SET _ULL(0x800000000) + +#define KVX_SFR_SPS_PL1_IE_MASK _ULL(0x10) /* Interrupt Enable */ +#define KVX_SFR_SPS_PL1_IE_SHIFT 4 +#define KVX_SFR_SPS_PL1_IE_WIDTH 1 +#define KVX_SFR_SPS_PL1_IE_WFXL_MASK _ULL(0x10) +#define KVX_SFR_SPS_PL1_IE_WFXL_CLEAR _ULL(0x10) +#define KVX_SFR_SPS_PL1_IE_WFXL_SET _ULL(0x1000000000) + +#define KVX_SFR_SPS_PL1_HLE_MASK _ULL(0x20) /* Hardware Loop Enable */ +#define KVX_SFR_SPS_PL1_HLE_SHIFT 5 +#define KVX_SFR_SPS_PL1_HLE_WIDTH 1 +#define KVX_SFR_SPS_PL1_HLE_WFXL_MASK _ULL(0x20) +#define KVX_SFR_SPS_PL1_HLE_WFXL_CLEAR _ULL(0x20) +#define KVX_SFR_SPS_PL1_HLE_WFXL_SET _ULL(0x2000000000) + +#define KVX_SFR_SPS_PL1_SRE_MASK _ULL(0x40) /* Software REserved */ +#define KVX_SFR_SPS_PL1_SRE_SHIFT 6 +#define KVX_SFR_SPS_PL1_SRE_WIDTH 1 +#define KVX_SFR_SPS_PL1_SRE_WFXL_MASK _ULL(0x40) +#define KVX_SFR_SPS_PL1_SRE_WFXL_CLEAR _ULL(0x40) +#define KVX_SFR_SPS_PL1_SRE_WFXL_SET _ULL(0x4000000000) + +#define KVX_SFR_SPS_PL1_DAUS_MASK _ULL(0x80) /* Data Accesses Use SPS sett= ings */ +#define KVX_SFR_SPS_PL1_DAUS_SHIFT 7 +#define KVX_SFR_SPS_PL1_DAUS_WIDTH 1 +#define KVX_SFR_SPS_PL1_DAUS_WFXL_MASK _ULL(0x80) +#define KVX_SFR_SPS_PL1_DAUS_WFXL_CLEAR _ULL(0x80) +#define KVX_SFR_SPS_PL1_DAUS_WFXL_SET _ULL(0x8000000000) + +#define KVX_SFR_SPS_PL1_ICE_MASK _ULL(0x100) /* Instruction Cache Enable */ +#define KVX_SFR_SPS_PL1_ICE_SHIFT 8 +#define KVX_SFR_SPS_PL1_ICE_WIDTH 1 +#define KVX_SFR_SPS_PL1_ICE_WFXL_MASK _ULL(0x100) +#define KVX_SFR_SPS_PL1_ICE_WFXL_CLEAR _ULL(0x100) +#define KVX_SFR_SPS_PL1_ICE_WFXL_SET _ULL(0x10000000000) + +#define KVX_SFR_SPS_PL1_USE_MASK _ULL(0x200) /* Uncached Streaming Enable = */ +#define KVX_SFR_SPS_PL1_USE_SHIFT 9 +#define KVX_SFR_SPS_PL1_USE_WIDTH 1 +#define KVX_SFR_SPS_PL1_USE_WFXL_MASK _ULL(0x200) +#define KVX_SFR_SPS_PL1_USE_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_SPS_PL1_USE_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_SPS_PL1_DCE_MASK _ULL(0x400) /* Data Cache Enable */ +#define KVX_SFR_SPS_PL1_DCE_SHIFT 10 +#define KVX_SFR_SPS_PL1_DCE_WIDTH 1 +#define KVX_SFR_SPS_PL1_DCE_WFXL_MASK _ULL(0x400) +#define KVX_SFR_SPS_PL1_DCE_WFXL_CLEAR _ULL(0x400) +#define KVX_SFR_SPS_PL1_DCE_WFXL_SET _ULL(0x40000000000) + +#define KVX_SFR_SPS_PL1_MME_MASK _ULL(0x800) /* Memory Management Enable */ +#define KVX_SFR_SPS_PL1_MME_SHIFT 11 +#define KVX_SFR_SPS_PL1_MME_WIDTH 1 +#define KVX_SFR_SPS_PL1_MME_WFXL_MASK _ULL(0x800) +#define 
KVX_SFR_SPS_PL1_MME_WFXL_CLEAR _ULL(0x800) +#define KVX_SFR_SPS_PL1_MME_WFXL_SET _ULL(0x80000000000) + +#define KVX_SFR_SPS_PL1_IL_MASK _ULL(0x3000) /* Interrupt Level */ +#define KVX_SFR_SPS_PL1_IL_SHIFT 12 +#define KVX_SFR_SPS_PL1_IL_WIDTH 2 +#define KVX_SFR_SPS_PL1_IL_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_SPS_PL1_IL_WFXL_CLEAR _ULL(0x3000) +#define KVX_SFR_SPS_PL1_IL_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_SPS_PL1_VS_MASK _ULL(0xc000) /* Virtual Space */ +#define KVX_SFR_SPS_PL1_VS_SHIFT 14 +#define KVX_SFR_SPS_PL1_VS_WIDTH 2 +#define KVX_SFR_SPS_PL1_VS_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_SPS_PL1_VS_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_SPS_PL1_VS_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_SPS_PL1_V64_MASK _ULL(0x10000) /* Virtual 64 bits mode. */ +#define KVX_SFR_SPS_PL1_V64_SHIFT 16 +#define KVX_SFR_SPS_PL1_V64_WIDTH 1 +#define KVX_SFR_SPS_PL1_V64_WFXL_MASK _ULL(0x10000) +#define KVX_SFR_SPS_PL1_V64_WFXL_CLEAR _ULL(0x10000) +#define KVX_SFR_SPS_PL1_V64_WFXL_SET _ULL(0x1000000000000) + +#define KVX_SFR_SPS_PL1_L2E_MASK _ULL(0x20000) /* L2 cache Enable. */ +#define KVX_SFR_SPS_PL1_L2E_SHIFT 17 +#define KVX_SFR_SPS_PL1_L2E_WIDTH 1 +#define KVX_SFR_SPS_PL1_L2E_WFXL_MASK _ULL(0x20000) +#define KVX_SFR_SPS_PL1_L2E_WFXL_CLEAR _ULL(0x20000) +#define KVX_SFR_SPS_PL1_L2E_WFXL_SET _ULL(0x2000000000000) + +#define KVX_SFR_SPS_PL1_SME_MASK _ULL(0x40000) /* Step Mode Enabled */ +#define KVX_SFR_SPS_PL1_SME_SHIFT 18 +#define KVX_SFR_SPS_PL1_SME_WIDTH 1 +#define KVX_SFR_SPS_PL1_SME_WFXL_MASK _ULL(0x40000) +#define KVX_SFR_SPS_PL1_SME_WFXL_CLEAR _ULL(0x40000) +#define KVX_SFR_SPS_PL1_SME_WFXL_SET _ULL(0x4000000000000) + +#define KVX_SFR_SPS_PL1_SMR_MASK _ULL(0x80000) /* Step Mode Ready */ +#define KVX_SFR_SPS_PL1_SMR_SHIFT 19 +#define KVX_SFR_SPS_PL1_SMR_WIDTH 1 +#define KVX_SFR_SPS_PL1_SMR_WFXL_MASK _ULL(0x80000) +#define KVX_SFR_SPS_PL1_SMR_WFXL_CLEAR _ULL(0x80000) +#define KVX_SFR_SPS_PL1_SMR_WFXL_SET _ULL(0x8000000000000) + +#define KVX_SFR_SPS_PL1_PMJ_MASK _ULL(0xf00000) /* Page Mask in JTLB. */ +#define KVX_SFR_SPS_PL1_PMJ_SHIFT 20 +#define KVX_SFR_SPS_PL1_PMJ_WIDTH 4 +#define KVX_SFR_SPS_PL1_PMJ_WFXL_MASK _ULL(0xf00000) +#define KVX_SFR_SPS_PL1_PMJ_WFXL_CLEAR _ULL(0xf00000) +#define KVX_SFR_SPS_PL1_PMJ_WFXL_SET _ULL(0xf0000000000000) + +#define KVX_SFR_SPS_PL1_MMUP_MASK _ULL(0x1000000) /* Privileged on MMU. 
*/ +#define KVX_SFR_SPS_PL1_MMUP_SHIFT 24 +#define KVX_SFR_SPS_PL1_MMUP_WIDTH 1 +#define KVX_SFR_SPS_PL1_MMUP_WFXL_MASK _ULL(0x1000000) +#define KVX_SFR_SPS_PL1_MMUP_WFXL_CLEAR _ULL(0x1000000) +#define KVX_SFR_SPS_PL1_MMUP_WFXL_SET _ULL(0x100000000000000) + +#define KVX_SFR_SPS_PL2_PL_MASK _ULL(0x3) /* Current Privilege Level */ +#define KVX_SFR_SPS_PL2_PL_SHIFT 0 +#define KVX_SFR_SPS_PL2_PL_WIDTH 2 +#define KVX_SFR_SPS_PL2_PL_WFXL_MASK _ULL(0x3) +#define KVX_SFR_SPS_PL2_PL_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_SPS_PL2_PL_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_SPS_PL2_ET_MASK _ULL(0x4) /* Exception Taken */ +#define KVX_SFR_SPS_PL2_ET_SHIFT 2 +#define KVX_SFR_SPS_PL2_ET_WIDTH 1 +#define KVX_SFR_SPS_PL2_ET_WFXL_MASK _ULL(0x4) +#define KVX_SFR_SPS_PL2_ET_WFXL_CLEAR _ULL(0x4) +#define KVX_SFR_SPS_PL2_ET_WFXL_SET _ULL(0x400000000) + +#define KVX_SFR_SPS_PL2_HTD_MASK _ULL(0x8) /* Hardware Trap Disable */ +#define KVX_SFR_SPS_PL2_HTD_SHIFT 3 +#define KVX_SFR_SPS_PL2_HTD_WIDTH 1 +#define KVX_SFR_SPS_PL2_HTD_WFXL_MASK _ULL(0x8) +#define KVX_SFR_SPS_PL2_HTD_WFXL_CLEAR _ULL(0x8) +#define KVX_SFR_SPS_PL2_HTD_WFXL_SET _ULL(0x800000000) + +#define KVX_SFR_SPS_PL2_IE_MASK _ULL(0x10) /* Interrupt Enable */ +#define KVX_SFR_SPS_PL2_IE_SHIFT 4 +#define KVX_SFR_SPS_PL2_IE_WIDTH 1 +#define KVX_SFR_SPS_PL2_IE_WFXL_MASK _ULL(0x10) +#define KVX_SFR_SPS_PL2_IE_WFXL_CLEAR _ULL(0x10) +#define KVX_SFR_SPS_PL2_IE_WFXL_SET _ULL(0x1000000000) + +#define KVX_SFR_SPS_PL2_HLE_MASK _ULL(0x20) /* Hardware Loop Enable */ +#define KVX_SFR_SPS_PL2_HLE_SHIFT 5 +#define KVX_SFR_SPS_PL2_HLE_WIDTH 1 +#define KVX_SFR_SPS_PL2_HLE_WFXL_MASK _ULL(0x20) +#define KVX_SFR_SPS_PL2_HLE_WFXL_CLEAR _ULL(0x20) +#define KVX_SFR_SPS_PL2_HLE_WFXL_SET _ULL(0x2000000000) + +#define KVX_SFR_SPS_PL2_SRE_MASK _ULL(0x40) /* Software REserved */ +#define KVX_SFR_SPS_PL2_SRE_SHIFT 6 +#define KVX_SFR_SPS_PL2_SRE_WIDTH 1 +#define KVX_SFR_SPS_PL2_SRE_WFXL_MASK _ULL(0x40) +#define KVX_SFR_SPS_PL2_SRE_WFXL_CLEAR _ULL(0x40) +#define KVX_SFR_SPS_PL2_SRE_WFXL_SET _ULL(0x4000000000) + +#define KVX_SFR_SPS_PL2_DAUS_MASK _ULL(0x80) /* Data Accesses Use SPS sett= ings */ +#define KVX_SFR_SPS_PL2_DAUS_SHIFT 7 +#define KVX_SFR_SPS_PL2_DAUS_WIDTH 1 +#define KVX_SFR_SPS_PL2_DAUS_WFXL_MASK _ULL(0x80) +#define KVX_SFR_SPS_PL2_DAUS_WFXL_CLEAR _ULL(0x80) +#define KVX_SFR_SPS_PL2_DAUS_WFXL_SET _ULL(0x8000000000) + +#define KVX_SFR_SPS_PL2_ICE_MASK _ULL(0x100) /* Instruction Cache Enable */ +#define KVX_SFR_SPS_PL2_ICE_SHIFT 8 +#define KVX_SFR_SPS_PL2_ICE_WIDTH 1 +#define KVX_SFR_SPS_PL2_ICE_WFXL_MASK _ULL(0x100) +#define KVX_SFR_SPS_PL2_ICE_WFXL_CLEAR _ULL(0x100) +#define KVX_SFR_SPS_PL2_ICE_WFXL_SET _ULL(0x10000000000) + +#define KVX_SFR_SPS_PL2_USE_MASK _ULL(0x200) /* Uncached Streaming Enable = */ +#define KVX_SFR_SPS_PL2_USE_SHIFT 9 +#define KVX_SFR_SPS_PL2_USE_WIDTH 1 +#define KVX_SFR_SPS_PL2_USE_WFXL_MASK _ULL(0x200) +#define KVX_SFR_SPS_PL2_USE_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_SPS_PL2_USE_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_SPS_PL2_DCE_MASK _ULL(0x400) /* Data Cache Enable */ +#define KVX_SFR_SPS_PL2_DCE_SHIFT 10 +#define KVX_SFR_SPS_PL2_DCE_WIDTH 1 +#define KVX_SFR_SPS_PL2_DCE_WFXL_MASK _ULL(0x400) +#define KVX_SFR_SPS_PL2_DCE_WFXL_CLEAR _ULL(0x400) +#define KVX_SFR_SPS_PL2_DCE_WFXL_SET _ULL(0x40000000000) + +#define KVX_SFR_SPS_PL2_MME_MASK _ULL(0x800) /* Memory Management Enable */ +#define KVX_SFR_SPS_PL2_MME_SHIFT 11 +#define KVX_SFR_SPS_PL2_MME_WIDTH 1 +#define KVX_SFR_SPS_PL2_MME_WFXL_MASK _ULL(0x800) +#define 
KVX_SFR_SPS_PL2_MME_WFXL_CLEAR _ULL(0x800) +#define KVX_SFR_SPS_PL2_MME_WFXL_SET _ULL(0x80000000000) + +#define KVX_SFR_SPS_PL2_IL_MASK _ULL(0x3000) /* Interrupt Level */ +#define KVX_SFR_SPS_PL2_IL_SHIFT 12 +#define KVX_SFR_SPS_PL2_IL_WIDTH 2 +#define KVX_SFR_SPS_PL2_IL_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_SPS_PL2_IL_WFXL_CLEAR _ULL(0x3000) +#define KVX_SFR_SPS_PL2_IL_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_SPS_PL2_VS_MASK _ULL(0xc000) /* Virtual Space */ +#define KVX_SFR_SPS_PL2_VS_SHIFT 14 +#define KVX_SFR_SPS_PL2_VS_WIDTH 2 +#define KVX_SFR_SPS_PL2_VS_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_SPS_PL2_VS_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_SPS_PL2_VS_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_SPS_PL2_V64_MASK _ULL(0x10000) /* Virtual 64 bits mode. */ +#define KVX_SFR_SPS_PL2_V64_SHIFT 16 +#define KVX_SFR_SPS_PL2_V64_WIDTH 1 +#define KVX_SFR_SPS_PL2_V64_WFXL_MASK _ULL(0x10000) +#define KVX_SFR_SPS_PL2_V64_WFXL_CLEAR _ULL(0x10000) +#define KVX_SFR_SPS_PL2_V64_WFXL_SET _ULL(0x1000000000000) + +#define KVX_SFR_SPS_PL2_L2E_MASK _ULL(0x20000) /* L2 cache Enable. */ +#define KVX_SFR_SPS_PL2_L2E_SHIFT 17 +#define KVX_SFR_SPS_PL2_L2E_WIDTH 1 +#define KVX_SFR_SPS_PL2_L2E_WFXL_MASK _ULL(0x20000) +#define KVX_SFR_SPS_PL2_L2E_WFXL_CLEAR _ULL(0x20000) +#define KVX_SFR_SPS_PL2_L2E_WFXL_SET _ULL(0x2000000000000) + +#define KVX_SFR_SPS_PL2_SME_MASK _ULL(0x40000) /* Step Mode Enabled */ +#define KVX_SFR_SPS_PL2_SME_SHIFT 18 +#define KVX_SFR_SPS_PL2_SME_WIDTH 1 +#define KVX_SFR_SPS_PL2_SME_WFXL_MASK _ULL(0x40000) +#define KVX_SFR_SPS_PL2_SME_WFXL_CLEAR _ULL(0x40000) +#define KVX_SFR_SPS_PL2_SME_WFXL_SET _ULL(0x4000000000000) + +#define KVX_SFR_SPS_PL2_SMR_MASK _ULL(0x80000) /* Step Mode Ready */ +#define KVX_SFR_SPS_PL2_SMR_SHIFT 19 +#define KVX_SFR_SPS_PL2_SMR_WIDTH 1 +#define KVX_SFR_SPS_PL2_SMR_WFXL_MASK _ULL(0x80000) +#define KVX_SFR_SPS_PL2_SMR_WFXL_CLEAR _ULL(0x80000) +#define KVX_SFR_SPS_PL2_SMR_WFXL_SET _ULL(0x8000000000000) + +#define KVX_SFR_SPS_PL2_PMJ_MASK _ULL(0xf00000) /* Page Mask in JTLB. */ +#define KVX_SFR_SPS_PL2_PMJ_SHIFT 20 +#define KVX_SFR_SPS_PL2_PMJ_WIDTH 4 +#define KVX_SFR_SPS_PL2_PMJ_WFXL_MASK _ULL(0xf00000) +#define KVX_SFR_SPS_PL2_PMJ_WFXL_CLEAR _ULL(0xf00000) +#define KVX_SFR_SPS_PL2_PMJ_WFXL_SET _ULL(0xf0000000000000) + +#define KVX_SFR_SPS_PL2_MMUP_MASK _ULL(0x1000000) /* Privileged on MMU. 
*/ +#define KVX_SFR_SPS_PL2_MMUP_SHIFT 24 +#define KVX_SFR_SPS_PL2_MMUP_WIDTH 1 +#define KVX_SFR_SPS_PL2_MMUP_WFXL_MASK _ULL(0x1000000) +#define KVX_SFR_SPS_PL2_MMUP_WFXL_CLEAR _ULL(0x1000000) +#define KVX_SFR_SPS_PL2_MMUP_WFXL_SET _ULL(0x100000000000000) + +#define KVX_SFR_SPS_PL3_PL_MASK _ULL(0x3) /* Current Privilege Level */ +#define KVX_SFR_SPS_PL3_PL_SHIFT 0 +#define KVX_SFR_SPS_PL3_PL_WIDTH 2 +#define KVX_SFR_SPS_PL3_PL_WFXL_MASK _ULL(0x3) +#define KVX_SFR_SPS_PL3_PL_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_SPS_PL3_PL_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_SPS_PL3_ET_MASK _ULL(0x4) /* Exception Taken */ +#define KVX_SFR_SPS_PL3_ET_SHIFT 2 +#define KVX_SFR_SPS_PL3_ET_WIDTH 1 +#define KVX_SFR_SPS_PL3_ET_WFXL_MASK _ULL(0x4) +#define KVX_SFR_SPS_PL3_ET_WFXL_CLEAR _ULL(0x4) +#define KVX_SFR_SPS_PL3_ET_WFXL_SET _ULL(0x400000000) + +#define KVX_SFR_SPS_PL3_HTD_MASK _ULL(0x8) /* Hardware Trap Disable */ +#define KVX_SFR_SPS_PL3_HTD_SHIFT 3 +#define KVX_SFR_SPS_PL3_HTD_WIDTH 1 +#define KVX_SFR_SPS_PL3_HTD_WFXL_MASK _ULL(0x8) +#define KVX_SFR_SPS_PL3_HTD_WFXL_CLEAR _ULL(0x8) +#define KVX_SFR_SPS_PL3_HTD_WFXL_SET _ULL(0x800000000) + +#define KVX_SFR_SPS_PL3_IE_MASK _ULL(0x10) /* Interrupt Enable */ +#define KVX_SFR_SPS_PL3_IE_SHIFT 4 +#define KVX_SFR_SPS_PL3_IE_WIDTH 1 +#define KVX_SFR_SPS_PL3_IE_WFXL_MASK _ULL(0x10) +#define KVX_SFR_SPS_PL3_IE_WFXL_CLEAR _ULL(0x10) +#define KVX_SFR_SPS_PL3_IE_WFXL_SET _ULL(0x1000000000) + +#define KVX_SFR_SPS_PL3_HLE_MASK _ULL(0x20) /* Hardware Loop Enable */ +#define KVX_SFR_SPS_PL3_HLE_SHIFT 5 +#define KVX_SFR_SPS_PL3_HLE_WIDTH 1 +#define KVX_SFR_SPS_PL3_HLE_WFXL_MASK _ULL(0x20) +#define KVX_SFR_SPS_PL3_HLE_WFXL_CLEAR _ULL(0x20) +#define KVX_SFR_SPS_PL3_HLE_WFXL_SET _ULL(0x2000000000) + +#define KVX_SFR_SPS_PL3_SRE_MASK _ULL(0x40) /* Software REserved */ +#define KVX_SFR_SPS_PL3_SRE_SHIFT 6 +#define KVX_SFR_SPS_PL3_SRE_WIDTH 1 +#define KVX_SFR_SPS_PL3_SRE_WFXL_MASK _ULL(0x40) +#define KVX_SFR_SPS_PL3_SRE_WFXL_CLEAR _ULL(0x40) +#define KVX_SFR_SPS_PL3_SRE_WFXL_SET _ULL(0x4000000000) + +#define KVX_SFR_SPS_PL3_DAUS_MASK _ULL(0x80) /* Data Accesses Use SPS sett= ings */ +#define KVX_SFR_SPS_PL3_DAUS_SHIFT 7 +#define KVX_SFR_SPS_PL3_DAUS_WIDTH 1 +#define KVX_SFR_SPS_PL3_DAUS_WFXL_MASK _ULL(0x80) +#define KVX_SFR_SPS_PL3_DAUS_WFXL_CLEAR _ULL(0x80) +#define KVX_SFR_SPS_PL3_DAUS_WFXL_SET _ULL(0x8000000000) + +#define KVX_SFR_SPS_PL3_ICE_MASK _ULL(0x100) /* Instruction Cache Enable */ +#define KVX_SFR_SPS_PL3_ICE_SHIFT 8 +#define KVX_SFR_SPS_PL3_ICE_WIDTH 1 +#define KVX_SFR_SPS_PL3_ICE_WFXL_MASK _ULL(0x100) +#define KVX_SFR_SPS_PL3_ICE_WFXL_CLEAR _ULL(0x100) +#define KVX_SFR_SPS_PL3_ICE_WFXL_SET _ULL(0x10000000000) + +#define KVX_SFR_SPS_PL3_USE_MASK _ULL(0x200) /* Uncached Streaming Enable = */ +#define KVX_SFR_SPS_PL3_USE_SHIFT 9 +#define KVX_SFR_SPS_PL3_USE_WIDTH 1 +#define KVX_SFR_SPS_PL3_USE_WFXL_MASK _ULL(0x200) +#define KVX_SFR_SPS_PL3_USE_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_SPS_PL3_USE_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_SPS_PL3_DCE_MASK _ULL(0x400) /* Data Cache Enable */ +#define KVX_SFR_SPS_PL3_DCE_SHIFT 10 +#define KVX_SFR_SPS_PL3_DCE_WIDTH 1 +#define KVX_SFR_SPS_PL3_DCE_WFXL_MASK _ULL(0x400) +#define KVX_SFR_SPS_PL3_DCE_WFXL_CLEAR _ULL(0x400) +#define KVX_SFR_SPS_PL3_DCE_WFXL_SET _ULL(0x40000000000) + +#define KVX_SFR_SPS_PL3_MME_MASK _ULL(0x800) /* Memory Management Enable */ +#define KVX_SFR_SPS_PL3_MME_SHIFT 11 +#define KVX_SFR_SPS_PL3_MME_WIDTH 1 +#define KVX_SFR_SPS_PL3_MME_WFXL_MASK _ULL(0x800) +#define 
KVX_SFR_SPS_PL3_MME_WFXL_CLEAR _ULL(0x800) +#define KVX_SFR_SPS_PL3_MME_WFXL_SET _ULL(0x80000000000) + +#define KVX_SFR_SPS_PL3_IL_MASK _ULL(0x3000) /* Interrupt Level */ +#define KVX_SFR_SPS_PL3_IL_SHIFT 12 +#define KVX_SFR_SPS_PL3_IL_WIDTH 2 +#define KVX_SFR_SPS_PL3_IL_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_SPS_PL3_IL_WFXL_CLEAR _ULL(0x3000) +#define KVX_SFR_SPS_PL3_IL_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_SPS_PL3_VS_MASK _ULL(0xc000) /* Virtual Space */ +#define KVX_SFR_SPS_PL3_VS_SHIFT 14 +#define KVX_SFR_SPS_PL3_VS_WIDTH 2 +#define KVX_SFR_SPS_PL3_VS_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_SPS_PL3_VS_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_SPS_PL3_VS_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_SPS_PL3_V64_MASK _ULL(0x10000) /* Virtual 64 bits mode. */ +#define KVX_SFR_SPS_PL3_V64_SHIFT 16 +#define KVX_SFR_SPS_PL3_V64_WIDTH 1 +#define KVX_SFR_SPS_PL3_V64_WFXL_MASK _ULL(0x10000) +#define KVX_SFR_SPS_PL3_V64_WFXL_CLEAR _ULL(0x10000) +#define KVX_SFR_SPS_PL3_V64_WFXL_SET _ULL(0x1000000000000) + +#define KVX_SFR_SPS_PL3_L2E_MASK _ULL(0x20000) /* L2 cache Enable. */ +#define KVX_SFR_SPS_PL3_L2E_SHIFT 17 +#define KVX_SFR_SPS_PL3_L2E_WIDTH 1 +#define KVX_SFR_SPS_PL3_L2E_WFXL_MASK _ULL(0x20000) +#define KVX_SFR_SPS_PL3_L2E_WFXL_CLEAR _ULL(0x20000) +#define KVX_SFR_SPS_PL3_L2E_WFXL_SET _ULL(0x2000000000000) + +#define KVX_SFR_SPS_PL3_SME_MASK _ULL(0x40000) /* Step Mode Enabled */ +#define KVX_SFR_SPS_PL3_SME_SHIFT 18 +#define KVX_SFR_SPS_PL3_SME_WIDTH 1 +#define KVX_SFR_SPS_PL3_SME_WFXL_MASK _ULL(0x40000) +#define KVX_SFR_SPS_PL3_SME_WFXL_CLEAR _ULL(0x40000) +#define KVX_SFR_SPS_PL3_SME_WFXL_SET _ULL(0x4000000000000) + +#define KVX_SFR_SPS_PL3_SMR_MASK _ULL(0x80000) /* Step Mode Ready */ +#define KVX_SFR_SPS_PL3_SMR_SHIFT 19 +#define KVX_SFR_SPS_PL3_SMR_WIDTH 1 +#define KVX_SFR_SPS_PL3_SMR_WFXL_MASK _ULL(0x80000) +#define KVX_SFR_SPS_PL3_SMR_WFXL_CLEAR _ULL(0x80000) +#define KVX_SFR_SPS_PL3_SMR_WFXL_SET _ULL(0x8000000000000) + +#define KVX_SFR_SPS_PL3_PMJ_MASK _ULL(0xf00000) /* Page Mask in JTLB. */ +#define KVX_SFR_SPS_PL3_PMJ_SHIFT 20 +#define KVX_SFR_SPS_PL3_PMJ_WIDTH 4 +#define KVX_SFR_SPS_PL3_PMJ_WFXL_MASK _ULL(0xf00000) +#define KVX_SFR_SPS_PL3_PMJ_WFXL_CLEAR _ULL(0xf00000) +#define KVX_SFR_SPS_PL3_PMJ_WFXL_SET _ULL(0xf0000000000000) + +#define KVX_SFR_SPS_PL3_MMUP_MASK _ULL(0x1000000) /* Privileged on MMU. 
*/ +#define KVX_SFR_SPS_PL3_MMUP_SHIFT 24 +#define KVX_SFR_SPS_PL3_MMUP_WIDTH 1 +#define KVX_SFR_SPS_PL3_MMUP_WFXL_MASK _ULL(0x1000000) +#define KVX_SFR_SPS_PL3_MMUP_WFXL_CLEAR _ULL(0x1000000) +#define KVX_SFR_SPS_PL3_MMUP_WFXL_SET _ULL(0x100000000000000) + +#define KVX_SFR_PSO_PL0_MASK _ULL(0x3) /* Current Privilege Level bit 0 ow= ner */ +#define KVX_SFR_PSO_PL0_SHIFT 0 +#define KVX_SFR_PSO_PL0_WIDTH 2 +#define KVX_SFR_PSO_PL0_WFXL_MASK _ULL(0x3) +#define KVX_SFR_PSO_PL0_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_PSO_PL0_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_PSO_PL1_MASK _ULL(0xc) /* Current Privilege Level bit 1 ow= ner */ +#define KVX_SFR_PSO_PL1_SHIFT 2 +#define KVX_SFR_PSO_PL1_WIDTH 2 +#define KVX_SFR_PSO_PL1_WFXL_MASK _ULL(0xc) +#define KVX_SFR_PSO_PL1_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_PSO_PL1_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_PSO_ET_MASK _ULL(0x30) /* Exception Taken owner */ +#define KVX_SFR_PSO_ET_SHIFT 4 +#define KVX_SFR_PSO_ET_WIDTH 2 +#define KVX_SFR_PSO_ET_WFXL_MASK _ULL(0x30) +#define KVX_SFR_PSO_ET_WFXL_CLEAR _ULL(0x30) +#define KVX_SFR_PSO_ET_WFXL_SET _ULL(0x3000000000) + +#define KVX_SFR_PSO_HTD_MASK _ULL(0xc0) /* Hardware Trap Disable owner */ +#define KVX_SFR_PSO_HTD_SHIFT 6 +#define KVX_SFR_PSO_HTD_WIDTH 2 +#define KVX_SFR_PSO_HTD_WFXL_MASK _ULL(0xc0) +#define KVX_SFR_PSO_HTD_WFXL_CLEAR _ULL(0xc0) +#define KVX_SFR_PSO_HTD_WFXL_SET _ULL(0xc000000000) + +#define KVX_SFR_PSO_IE_MASK _ULL(0x300) /* Interrupt Enable owner */ +#define KVX_SFR_PSO_IE_SHIFT 8 +#define KVX_SFR_PSO_IE_WIDTH 2 +#define KVX_SFR_PSO_IE_WFXL_MASK _ULL(0x300) +#define KVX_SFR_PSO_IE_WFXL_CLEAR _ULL(0x300) +#define KVX_SFR_PSO_IE_WFXL_SET _ULL(0x30000000000) + +#define KVX_SFR_PSO_HLE_MASK _ULL(0xc00) /* Hardware Loop Enable owner */ +#define KVX_SFR_PSO_HLE_SHIFT 10 +#define KVX_SFR_PSO_HLE_WIDTH 2 +#define KVX_SFR_PSO_HLE_WFXL_MASK _ULL(0xc00) +#define KVX_SFR_PSO_HLE_WFXL_CLEAR _ULL(0xc00) +#define KVX_SFR_PSO_HLE_WFXL_SET _ULL(0xc0000000000) + +#define KVX_SFR_PSO_SRE_MASK _ULL(0x3000) /* Software REserved owner */ +#define KVX_SFR_PSO_SRE_SHIFT 12 +#define KVX_SFR_PSO_SRE_WIDTH 2 +#define KVX_SFR_PSO_SRE_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_PSO_SRE_WFXL_CLEAR _ULL(0x3000) +#define KVX_SFR_PSO_SRE_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_PSO_DAUS_MASK _ULL(0xc000) /* Data Accesses Use SPS settin= gs owner */ +#define KVX_SFR_PSO_DAUS_SHIFT 14 +#define KVX_SFR_PSO_DAUS_WIDTH 2 +#define KVX_SFR_PSO_DAUS_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_PSO_DAUS_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_PSO_DAUS_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_PSO_ICE_MASK _ULL(0x30000) /* Instruction Cache Enable own= er */ +#define KVX_SFR_PSO_ICE_SHIFT 16 +#define KVX_SFR_PSO_ICE_WIDTH 2 +#define KVX_SFR_PSO_ICE_WFXL_MASK _ULL(0x30000) +#define KVX_SFR_PSO_ICE_WFXL_CLEAR _ULL(0x30000) +#define KVX_SFR_PSO_ICE_WFXL_SET _ULL(0x3000000000000) + +#define KVX_SFR_PSO_USE_MASK _ULL(0xc0000) /* Uncached Streaming Enable ow= ner */ +#define KVX_SFR_PSO_USE_SHIFT 18 +#define KVX_SFR_PSO_USE_WIDTH 2 +#define KVX_SFR_PSO_USE_WFXL_MASK _ULL(0xc0000) +#define KVX_SFR_PSO_USE_WFXL_CLEAR _ULL(0xc0000) +#define KVX_SFR_PSO_USE_WFXL_SET _ULL(0xc000000000000) + +#define KVX_SFR_PSO_DCE_MASK _ULL(0x300000) /* Data Cache Enable owner */ +#define KVX_SFR_PSO_DCE_SHIFT 20 +#define KVX_SFR_PSO_DCE_WIDTH 2 +#define KVX_SFR_PSO_DCE_WFXL_MASK _ULL(0x300000) +#define KVX_SFR_PSO_DCE_WFXL_CLEAR _ULL(0x300000) +#define KVX_SFR_PSO_DCE_WFXL_SET _ULL(0x30000000000000) + +#define KVX_SFR_PSO_MME_MASK 
_ULL(0xc00000) /* Memory Management Enable ow= ner */ +#define KVX_SFR_PSO_MME_SHIFT 22 +#define KVX_SFR_PSO_MME_WIDTH 2 +#define KVX_SFR_PSO_MME_WFXL_MASK _ULL(0xc00000) +#define KVX_SFR_PSO_MME_WFXL_CLEAR _ULL(0xc00000) +#define KVX_SFR_PSO_MME_WFXL_SET _ULL(0xc0000000000000) + +#define KVX_SFR_PSO_IL0_MASK _ULL(0x3000000) /* Interrupt Level bit 0 owne= r */ +#define KVX_SFR_PSO_IL0_SHIFT 24 +#define KVX_SFR_PSO_IL0_WIDTH 2 +#define KVX_SFR_PSO_IL0_WFXL_MASK _ULL(0x3000000) +#define KVX_SFR_PSO_IL0_WFXL_CLEAR _ULL(0x3000000) +#define KVX_SFR_PSO_IL0_WFXL_SET _ULL(0x300000000000000) + +#define KVX_SFR_PSO_IL1_MASK _ULL(0xc000000) /* Interrupt Level bit 1 owne= r */ +#define KVX_SFR_PSO_IL1_SHIFT 26 +#define KVX_SFR_PSO_IL1_WIDTH 2 +#define KVX_SFR_PSO_IL1_WFXL_MASK _ULL(0xc000000) +#define KVX_SFR_PSO_IL1_WFXL_CLEAR _ULL(0xc000000) +#define KVX_SFR_PSO_IL1_WFXL_SET _ULL(0xc00000000000000) + +#define KVX_SFR_PSO_VS0_MASK _ULL(0x30000000) /* Virtual Space bit 0 owner= */ +#define KVX_SFR_PSO_VS0_SHIFT 28 +#define KVX_SFR_PSO_VS0_WIDTH 2 +#define KVX_SFR_PSO_VS0_WFXL_MASK _ULL(0x30000000) +#define KVX_SFR_PSO_VS0_WFXL_CLEAR _ULL(0x30000000) +#define KVX_SFR_PSO_VS0_WFXL_SET _ULL(0x3000000000000000) + +#define KVX_SFR_PSO_VS1_MASK _ULL(0xc0000000) /* Virtual Space bit 1 owner= */ +#define KVX_SFR_PSO_VS1_SHIFT 30 +#define KVX_SFR_PSO_VS1_WIDTH 2 +#define KVX_SFR_PSO_VS1_WFXL_MASK _ULL(0xc0000000) +#define KVX_SFR_PSO_VS1_WFXL_CLEAR _ULL(0xc0000000) +#define KVX_SFR_PSO_VS1_WFXL_SET _ULL(0xc000000000000000) + +#define KVX_SFR_PSO_V64_MASK _ULL(0x300000000) /* Virtual 64 bits mode own= er */ +#define KVX_SFR_PSO_V64_SHIFT 32 +#define KVX_SFR_PSO_V64_WIDTH 2 +#define KVX_SFR_PSO_V64_WFXM_MASK _ULL(0x300000000) +#define KVX_SFR_PSO_V64_WFXM_CLEAR _ULL(0x3) +#define KVX_SFR_PSO_V64_WFXM_SET _ULL(0x300000000) + +#define KVX_SFR_PSO_L2E_MASK _ULL(0xc00000000) /* L2 cache Enable owner */ +#define KVX_SFR_PSO_L2E_SHIFT 34 +#define KVX_SFR_PSO_L2E_WIDTH 2 +#define KVX_SFR_PSO_L2E_WFXM_MASK _ULL(0xc00000000) +#define KVX_SFR_PSO_L2E_WFXM_CLEAR _ULL(0xc) +#define KVX_SFR_PSO_L2E_WFXM_SET _ULL(0xc00000000) + +#define KVX_SFR_PSO_SME_MASK _ULL(0x3000000000) /* Step Mode Enabled owner= */ +#define KVX_SFR_PSO_SME_SHIFT 36 +#define KVX_SFR_PSO_SME_WIDTH 2 +#define KVX_SFR_PSO_SME_WFXM_MASK _ULL(0x3000000000) +#define KVX_SFR_PSO_SME_WFXM_CLEAR _ULL(0x30) +#define KVX_SFR_PSO_SME_WFXM_SET _ULL(0x3000000000) + +#define KVX_SFR_PSO_SMR_MASK _ULL(0xc000000000) /* Step Mode Ready owner */ +#define KVX_SFR_PSO_SMR_SHIFT 38 +#define KVX_SFR_PSO_SMR_WIDTH 2 +#define KVX_SFR_PSO_SMR_WFXM_MASK _ULL(0xc000000000) +#define KVX_SFR_PSO_SMR_WFXM_CLEAR _ULL(0xc0) +#define KVX_SFR_PSO_SMR_WFXM_SET _ULL(0xc000000000) + +#define KVX_SFR_PSO_PMJ0_MASK _ULL(0x30000000000) /* Page Mask in JTLB bit= 0 owner */ +#define KVX_SFR_PSO_PMJ0_SHIFT 40 +#define KVX_SFR_PSO_PMJ0_WIDTH 2 +#define KVX_SFR_PSO_PMJ0_WFXM_MASK _ULL(0x30000000000) +#define KVX_SFR_PSO_PMJ0_WFXM_CLEAR _ULL(0x300) +#define KVX_SFR_PSO_PMJ0_WFXM_SET _ULL(0x30000000000) + +#define KVX_SFR_PSO_PMJ1_MASK _ULL(0xc0000000000) /* Page Mask in JTLB bit= 1 owner */ +#define KVX_SFR_PSO_PMJ1_SHIFT 42 +#define KVX_SFR_PSO_PMJ1_WIDTH 2 +#define KVX_SFR_PSO_PMJ1_WFXM_MASK _ULL(0xc0000000000) +#define KVX_SFR_PSO_PMJ1_WFXM_CLEAR _ULL(0xc00) +#define KVX_SFR_PSO_PMJ1_WFXM_SET _ULL(0xc0000000000) + +#define KVX_SFR_PSO_PMJ2_MASK _ULL(0x300000000000) /* Page Mask in JTLB bi= t 2 owner */ +#define KVX_SFR_PSO_PMJ2_SHIFT 44 +#define KVX_SFR_PSO_PMJ2_WIDTH 2 +#define 
KVX_SFR_PSO_PMJ2_WFXM_MASK _ULL(0x300000000000) +#define KVX_SFR_PSO_PMJ2_WFXM_CLEAR _ULL(0x3000) +#define KVX_SFR_PSO_PMJ2_WFXM_SET _ULL(0x300000000000) + +#define KVX_SFR_PSO_PMJ3_MASK _ULL(0xc00000000000) /* Page Mask in JTLB bi= t 3 owner */ +#define KVX_SFR_PSO_PMJ3_SHIFT 46 +#define KVX_SFR_PSO_PMJ3_WIDTH 2 +#define KVX_SFR_PSO_PMJ3_WFXM_MASK _ULL(0xc00000000000) +#define KVX_SFR_PSO_PMJ3_WFXM_CLEAR _ULL(0xc000) +#define KVX_SFR_PSO_PMJ3_WFXM_SET _ULL(0xc00000000000) + +#define KVX_SFR_PSO_MMUP_MASK _ULL(0x3000000000000) /* Privileged on MMU o= wner. */ +#define KVX_SFR_PSO_MMUP_SHIFT 48 +#define KVX_SFR_PSO_MMUP_WIDTH 2 +#define KVX_SFR_PSO_MMUP_WFXM_MASK _ULL(0x3000000000000) +#define KVX_SFR_PSO_MMUP_WFXM_CLEAR _ULL(0x30000) +#define KVX_SFR_PSO_MMUP_WFXM_SET _ULL(0x3000000000000) + +#define KVX_SFR_PSOW_PL0_MASK _ULL(0x3) /* Current Privilege Level bit 0 o= wner */ +#define KVX_SFR_PSOW_PL0_SHIFT 0 +#define KVX_SFR_PSOW_PL0_WIDTH 2 +#define KVX_SFR_PSOW_PL0_WFXL_MASK _ULL(0x3) +#define KVX_SFR_PSOW_PL0_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_PSOW_PL0_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_PSOW_PL1_MASK _ULL(0xc) /* Current Privilege Level bit 1 o= wner */ +#define KVX_SFR_PSOW_PL1_SHIFT 2 +#define KVX_SFR_PSOW_PL1_WIDTH 2 +#define KVX_SFR_PSOW_PL1_WFXL_MASK _ULL(0xc) +#define KVX_SFR_PSOW_PL1_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_PSOW_PL1_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_PSOW_ET_MASK _ULL(0x30) /* Exception Taken owner */ +#define KVX_SFR_PSOW_ET_SHIFT 4 +#define KVX_SFR_PSOW_ET_WIDTH 2 +#define KVX_SFR_PSOW_ET_WFXL_MASK _ULL(0x30) +#define KVX_SFR_PSOW_ET_WFXL_CLEAR _ULL(0x30) +#define KVX_SFR_PSOW_ET_WFXL_SET _ULL(0x3000000000) + +#define KVX_SFR_PSOW_HTD_MASK _ULL(0xc0) /* Hardware Trap Disable owner */ +#define KVX_SFR_PSOW_HTD_SHIFT 6 +#define KVX_SFR_PSOW_HTD_WIDTH 2 +#define KVX_SFR_PSOW_HTD_WFXL_MASK _ULL(0xc0) +#define KVX_SFR_PSOW_HTD_WFXL_CLEAR _ULL(0xc0) +#define KVX_SFR_PSOW_HTD_WFXL_SET _ULL(0xc000000000) + +#define KVX_SFR_PSOW_IE_MASK _ULL(0x300) /* Interrupt Enable owner */ +#define KVX_SFR_PSOW_IE_SHIFT 8 +#define KVX_SFR_PSOW_IE_WIDTH 2 +#define KVX_SFR_PSOW_IE_WFXL_MASK _ULL(0x300) +#define KVX_SFR_PSOW_IE_WFXL_CLEAR _ULL(0x300) +#define KVX_SFR_PSOW_IE_WFXL_SET _ULL(0x30000000000) + +#define KVX_SFR_PSOW_HLE_MASK _ULL(0xc00) /* Hardware Loop Enable owner */ +#define KVX_SFR_PSOW_HLE_SHIFT 10 +#define KVX_SFR_PSOW_HLE_WIDTH 2 +#define KVX_SFR_PSOW_HLE_WFXL_MASK _ULL(0xc00) +#define KVX_SFR_PSOW_HLE_WFXL_CLEAR _ULL(0xc00) +#define KVX_SFR_PSOW_HLE_WFXL_SET _ULL(0xc0000000000) + +#define KVX_SFR_PSOW_SRE_MASK _ULL(0x3000) /* Software REserved owner */ +#define KVX_SFR_PSOW_SRE_SHIFT 12 +#define KVX_SFR_PSOW_SRE_WIDTH 2 +#define KVX_SFR_PSOW_SRE_WFXL_MASK _ULL(0x3000) +#define KVX_SFR_PSOW_SRE_WFXL_CLEAR _ULL(0x3000) +#define KVX_SFR_PSOW_SRE_WFXL_SET _ULL(0x300000000000) + +#define KVX_SFR_PSOW_DAUS_MASK _ULL(0xc000) /* Data Accesses Use SPS setti= ngs owner */ +#define KVX_SFR_PSOW_DAUS_SHIFT 14 +#define KVX_SFR_PSOW_DAUS_WIDTH 2 +#define KVX_SFR_PSOW_DAUS_WFXL_MASK _ULL(0xc000) +#define KVX_SFR_PSOW_DAUS_WFXL_CLEAR _ULL(0xc000) +#define KVX_SFR_PSOW_DAUS_WFXL_SET _ULL(0xc00000000000) + +#define KVX_SFR_PSOW_ICE_MASK _ULL(0x30000) /* Instruction Cache Enable ow= ner */ +#define KVX_SFR_PSOW_ICE_SHIFT 16 +#define KVX_SFR_PSOW_ICE_WIDTH 2 +#define KVX_SFR_PSOW_ICE_WFXL_MASK _ULL(0x30000) +#define KVX_SFR_PSOW_ICE_WFXL_CLEAR _ULL(0x30000) +#define KVX_SFR_PSOW_ICE_WFXL_SET _ULL(0x3000000000000) + +#define KVX_SFR_PSOW_USE_MASK _ULL(0xc0000) /* 
Uncached Streaming Enable owner */
+#define KVX_SFR_PSOW_USE_SHIFT 18
+#define KVX_SFR_PSOW_USE_WIDTH 2
+#define KVX_SFR_PSOW_USE_WFXL_MASK _ULL(0xc0000)
+#define KVX_SFR_PSOW_USE_WFXL_CLEAR _ULL(0xc0000)
+#define KVX_SFR_PSOW_USE_WFXL_SET _ULL(0xc000000000000)
+
+#define KVX_SFR_PSOW_DCE_MASK _ULL(0x300000) /* Data Cache Enable owner */
+#define KVX_SFR_PSOW_DCE_SHIFT 20
+#define KVX_SFR_PSOW_DCE_WIDTH 2
+#define KVX_SFR_PSOW_DCE_WFXL_MASK _ULL(0x300000)
+#define KVX_SFR_PSOW_DCE_WFXL_CLEAR _ULL(0x300000)
+#define KVX_SFR_PSOW_DCE_WFXL_SET _ULL(0x30000000000000)
+
+#define KVX_SFR_PSOW_MME_MASK _ULL(0xc00000) /* Memory Management Enable owner */
+#define KVX_SFR_PSOW_MME_SHIFT 22
+#define KVX_SFR_PSOW_MME_WIDTH 2
+#define KVX_SFR_PSOW_MME_WFXL_MASK _ULL(0xc00000)
+#define KVX_SFR_PSOW_MME_WFXL_CLEAR _ULL(0xc00000)
+#define KVX_SFR_PSOW_MME_WFXL_SET _ULL(0xc0000000000000)
+
+#define KVX_SFR_PSOW_IL0_MASK _ULL(0x3000000) /* Interrupt Level bit 0 owner */
+#define KVX_SFR_PSOW_IL0_SHIFT 24
+#define KVX_SFR_PSOW_IL0_WIDTH 2
+#define KVX_SFR_PSOW_IL0_WFXL_MASK _ULL(0x3000000)
+#define KVX_SFR_PSOW_IL0_WFXL_CLEAR _ULL(0x3000000)
+#define KVX_SFR_PSOW_IL0_WFXL_SET _ULL(0x300000000000000)
+
+#define KVX_SFR_PSOW_IL1_MASK _ULL(0xc000000) /* Interrupt Level bit 1 owner */
+#define KVX_SFR_PSOW_IL1_SHIFT 26
+#define KVX_SFR_PSOW_IL1_WIDTH 2
+#define KVX_SFR_PSOW_IL1_WFXL_MASK _ULL(0xc000000)
+#define KVX_SFR_PSOW_IL1_WFXL_CLEAR _ULL(0xc000000)
+#define KVX_SFR_PSOW_IL1_WFXL_SET _ULL(0xc00000000000000)
+
+#define KVX_SFR_PSOW_VS0_MASK _ULL(0x30000000) /* Virtual Space bit 0 owner */
+#define KVX_SFR_PSOW_VS0_SHIFT 28
+#define KVX_SFR_PSOW_VS0_WIDTH 2
+#define KVX_SFR_PSOW_VS0_WFXL_MASK _ULL(0x30000000)
+#define KVX_SFR_PSOW_VS0_WFXL_CLEAR _ULL(0x30000000)
+#define KVX_SFR_PSOW_VS0_WFXL_SET _ULL(0x3000000000000000)
+
+#define KVX_SFR_PSOW_VS1_MASK _ULL(0xc0000000) /* Virtual Space bit 1 owner */
+#define KVX_SFR_PSOW_VS1_SHIFT 30
+#define KVX_SFR_PSOW_VS1_WIDTH 2
+#define KVX_SFR_PSOW_VS1_WFXL_MASK _ULL(0xc0000000)
+#define KVX_SFR_PSOW_VS1_WFXL_CLEAR _ULL(0xc0000000)
+#define KVX_SFR_PSOW_VS1_WFXL_SET _ULL(0xc000000000000000)
+
+#define KVX_SFR_PSOW_V64_MASK _ULL(0x300000000) /* Virtual 64 bits mode owner */
+#define KVX_SFR_PSOW_V64_SHIFT 32
+#define KVX_SFR_PSOW_V64_WIDTH 2
+#define KVX_SFR_PSOW_V64_WFXM_MASK _ULL(0x300000000)
+#define KVX_SFR_PSOW_V64_WFXM_CLEAR _ULL(0x3)
+#define KVX_SFR_PSOW_V64_WFXM_SET _ULL(0x300000000)
+
+#define KVX_SFR_PSOW_L2E_MASK _ULL(0xc00000000) /* L2 cache Enable owner */
+#define KVX_SFR_PSOW_L2E_SHIFT 34
+#define KVX_SFR_PSOW_L2E_WIDTH 2
+#define KVX_SFR_PSOW_L2E_WFXM_MASK _ULL(0xc00000000)
+#define KVX_SFR_PSOW_L2E_WFXM_CLEAR _ULL(0xc)
+#define KVX_SFR_PSOW_L2E_WFXM_SET _ULL(0xc00000000)
+
+#define KVX_SFR_PSOW_SME_MASK _ULL(0x3000000000) /* Step Mode Enabled owner */
+#define KVX_SFR_PSOW_SME_SHIFT 36
+#define KVX_SFR_PSOW_SME_WIDTH 2
+#define KVX_SFR_PSOW_SME_WFXM_MASK _ULL(0x3000000000)
+#define KVX_SFR_PSOW_SME_WFXM_CLEAR _ULL(0x30)
+#define KVX_SFR_PSOW_SME_WFXM_SET _ULL(0x3000000000)
+
+#define KVX_SFR_PSOW_SMR_MASK _ULL(0xc000000000) /* Step Mode Ready owner */
+#define KVX_SFR_PSOW_SMR_SHIFT 38
+#define KVX_SFR_PSOW_SMR_WIDTH 2
+#define KVX_SFR_PSOW_SMR_WFXM_MASK _ULL(0xc000000000)
+#define KVX_SFR_PSOW_SMR_WFXM_CLEAR _ULL(0xc0)
+#define KVX_SFR_PSOW_SMR_WFXM_SET _ULL(0xc000000000)
+
+#define KVX_SFR_PSOW_PMJ0_MASK _ULL(0x30000000000) /* Page Mask in JTLB bit 0 owner */
+#define KVX_SFR_PSOW_PMJ0_SHIFT 40
+#define KVX_SFR_PSOW_PMJ0_WIDTH 2
+#define KVX_SFR_PSOW_PMJ0_WFXM_MASK _ULL(0x30000000000)
+#define KVX_SFR_PSOW_PMJ0_WFXM_CLEAR _ULL(0x300)
+#define KVX_SFR_PSOW_PMJ0_WFXM_SET _ULL(0x30000000000)
+
+#define KVX_SFR_PSOW_PMJ1_MASK _ULL(0xc0000000000) /* Page Mask in JTLB bit 1 owner */
+#define KVX_SFR_PSOW_PMJ1_SHIFT 42
+#define KVX_SFR_PSOW_PMJ1_WIDTH 2
+#define KVX_SFR_PSOW_PMJ1_WFXM_MASK _ULL(0xc0000000000)
+#define KVX_SFR_PSOW_PMJ1_WFXM_CLEAR _ULL(0xc00)
+#define KVX_SFR_PSOW_PMJ1_WFXM_SET _ULL(0xc0000000000)
+
+#define KVX_SFR_PSOW_PMJ2_MASK _ULL(0x300000000000) /* Page Mask in JTLB bit 2 owner */
+#define KVX_SFR_PSOW_PMJ2_SHIFT 44
+#define KVX_SFR_PSOW_PMJ2_WIDTH 2
+#define KVX_SFR_PSOW_PMJ2_WFXM_MASK _ULL(0x300000000000)
+#define KVX_SFR_PSOW_PMJ2_WFXM_CLEAR _ULL(0x3000)
+#define KVX_SFR_PSOW_PMJ2_WFXM_SET _ULL(0x300000000000)
+
+#define KVX_SFR_PSOW_PMJ3_MASK _ULL(0xc00000000000) /* Page Mask in JTLB bit 3 owner */
+#define KVX_SFR_PSOW_PMJ3_SHIFT 46
+#define KVX_SFR_PSOW_PMJ3_WIDTH 2
+#define KVX_SFR_PSOW_PMJ3_WFXM_MASK _ULL(0xc00000000000)
+#define KVX_SFR_PSOW_PMJ3_WFXM_CLEAR _ULL(0xc000)
+#define KVX_SFR_PSOW_PMJ3_WFXM_SET _ULL(0xc00000000000)
+
+#define KVX_SFR_PSOW_MMUP_MASK _ULL(0x3000000000000) /* Privileged on MMU owner. */
+#define KVX_SFR_PSOW_MMUP_SHIFT 48
+#define KVX_SFR_PSOW_MMUP_WIDTH 2
+#define KVX_SFR_PSOW_MMUP_WFXM_MASK _ULL(0x3000000000000)
+#define KVX_SFR_PSOW_MMUP_WFXM_CLEAR _ULL(0x30000)
+#define KVX_SFR_PSOW_MMUP_WFXM_SET _ULL(0x3000000000000)
+
+#define KVX_SFR_CS_IC_MASK _ULL(0x1) /* Integer Carry */
+#define KVX_SFR_CS_IC_SHIFT 0
+#define KVX_SFR_CS_IC_WIDTH 1
+#define KVX_SFR_CS_IC_WFXL_MASK _ULL(0x1)
+#define KVX_SFR_CS_IC_WFXL_CLEAR _ULL(0x1)
+#define KVX_SFR_CS_IC_WFXL_SET _ULL(0x100000000)
+
+#define KVX_SFR_CS_IO_MASK _ULL(0x2) /* IEEE 754 Invalid Operation */
+#define KVX_SFR_CS_IO_SHIFT 1
+#define KVX_SFR_CS_IO_WIDTH 1
+#define KVX_SFR_CS_IO_WFXL_MASK _ULL(0x2)
+#define KVX_SFR_CS_IO_WFXL_CLEAR _ULL(0x2)
+#define KVX_SFR_CS_IO_WFXL_SET _ULL(0x200000000)
+
+#define KVX_SFR_CS_DZ_MASK _ULL(0x4) /* IEEE 754 Divide by Zero */
+#define KVX_SFR_CS_DZ_SHIFT 2
+#define KVX_SFR_CS_DZ_WIDTH 1
+#define KVX_SFR_CS_DZ_WFXL_MASK _ULL(0x4)
+#define KVX_SFR_CS_DZ_WFXL_CLEAR _ULL(0x4)
+#define KVX_SFR_CS_DZ_WFXL_SET _ULL(0x400000000)
+
+#define KVX_SFR_CS_OV_MASK _ULL(0x8) /* IEEE 754 Overflow */
+#define KVX_SFR_CS_OV_SHIFT 3
+#define KVX_SFR_CS_OV_WIDTH 1
+#define KVX_SFR_CS_OV_WFXL_MASK _ULL(0x8)
+#define KVX_SFR_CS_OV_WFXL_CLEAR _ULL(0x8)
+#define KVX_SFR_CS_OV_WFXL_SET _ULL(0x800000000)
+
+#define KVX_SFR_CS_UN_MASK _ULL(0x10) /* IEEE 754 Underflow */
+#define KVX_SFR_CS_UN_SHIFT 4
+#define KVX_SFR_CS_UN_WIDTH 1
+#define KVX_SFR_CS_UN_WFXL_MASK _ULL(0x10)
+#define KVX_SFR_CS_UN_WFXL_CLEAR _ULL(0x10)
+#define KVX_SFR_CS_UN_WFXL_SET _ULL(0x1000000000)
+
+#define KVX_SFR_CS_IN_MASK _ULL(0x20) /* IEEE 754 Inexact */
+#define KVX_SFR_CS_IN_SHIFT 5
+#define KVX_SFR_CS_IN_WIDTH 1
+#define KVX_SFR_CS_IN_WFXL_MASK _ULL(0x20)
+#define KVX_SFR_CS_IN_WFXL_CLEAR _ULL(0x20)
+#define KVX_SFR_CS_IN_WFXL_SET _ULL(0x2000000000)
+
+#define KVX_SFR_CS_XIO_MASK _ULL(0x200) /* Extension IEEE 754 Invalid Operation */
+#define KVX_SFR_CS_XIO_SHIFT 9
+#define KVX_SFR_CS_XIO_WIDTH 1
+#define KVX_SFR_CS_XIO_WFXL_MASK _ULL(0x200)
+#define KVX_SFR_CS_XIO_WFXL_CLEAR _ULL(0x200)
+#define KVX_SFR_CS_XIO_WFXL_SET _ULL(0x20000000000)
+
+#define KVX_SFR_CS_XDZ_MASK _ULL(0x400) /* Extension IEEE 754 Divide by Zero */
+#define KVX_SFR_CS_XDZ_SHIFT 10
+#define KVX_SFR_CS_XDZ_WIDTH 1
+#define KVX_SFR_CS_XDZ_WFXL_MASK _ULL(0x400)
+#define KVX_SFR_CS_XDZ_WFXL_CLEAR _ULL(0x400)
+#define KVX_SFR_CS_XDZ_WFXL_SET _ULL(0x40000000000)
+
+#define KVX_SFR_CS_XOV_MASK _ULL(0x800) /* Extension IEEE 754 Overflow */
+#define KVX_SFR_CS_XOV_SHIFT 11
+#define KVX_SFR_CS_XOV_WIDTH 1
+#define KVX_SFR_CS_XOV_WFXL_MASK _ULL(0x800)
+#define KVX_SFR_CS_XOV_WFXL_CLEAR _ULL(0x800)
+#define KVX_SFR_CS_XOV_WFXL_SET _ULL(0x80000000000)
+
+#define KVX_SFR_CS_XUN_MASK _ULL(0x1000) /* Extension IEEE 754 Underflow */
+#define KVX_SFR_CS_XUN_SHIFT 12
+#define KVX_SFR_CS_XUN_WIDTH 1
+#define KVX_SFR_CS_XUN_WFXL_MASK _ULL(0x1000)
+#define KVX_SFR_CS_XUN_WFXL_CLEAR _ULL(0x1000)
+#define KVX_SFR_CS_XUN_WFXL_SET _ULL(0x100000000000)
+
+#define KVX_SFR_CS_XIN_MASK _ULL(0x2000) /* Extension IEEE 754 Inexact */
+#define KVX_SFR_CS_XIN_SHIFT 13
+#define KVX_SFR_CS_XIN_WIDTH 1
+#define KVX_SFR_CS_XIN_WFXL_MASK _ULL(0x2000)
+#define KVX_SFR_CS_XIN_WFXL_CLEAR _ULL(0x2000)
+#define KVX_SFR_CS_XIN_WFXL_SET _ULL(0x200000000000)
+
+#define KVX_SFR_CS_RM_MASK _ULL(0x30000) /* IEEE 754 Rounding Mode */
+#define KVX_SFR_CS_RM_SHIFT 16
+#define KVX_SFR_CS_RM_WIDTH 2
+#define KVX_SFR_CS_RM_WFXL_MASK _ULL(0x30000)
+#define KVX_SFR_CS_RM_WFXL_CLEAR _ULL(0x30000)
+#define KVX_SFR_CS_RM_WFXL_SET _ULL(0x3000000000000)
+
+#define KVX_SFR_CS_XRM_MASK _ULL(0x300000) /* Extension IEEE 754 Rounding Mode */
+#define KVX_SFR_CS_XRM_SHIFT 20
+#define KVX_SFR_CS_XRM_WIDTH 2
+#define KVX_SFR_CS_XRM_WFXL_MASK _ULL(0x300000)
+#define KVX_SFR_CS_XRM_WFXL_CLEAR _ULL(0x300000)
+#define KVX_SFR_CS_XRM_WFXL_SET _ULL(0x30000000000000)
+
+#define KVX_SFR_CS_XMF_MASK _ULL(0x1000000) /* eXtension ModiFied */
+#define KVX_SFR_CS_XMF_SHIFT 24
+#define KVX_SFR_CS_XMF_WIDTH 1
+#define KVX_SFR_CS_XMF_WFXL_MASK _ULL(0x1000000)
+#define KVX_SFR_CS_XMF_WFXL_CLEAR _ULL(0x1000000)
+#define KVX_SFR_CS_XMF_WFXL_SET _ULL(0x100000000000000)
+
+#define KVX_SFR_CS_CC_MASK _ULL(0xffff00000000) /* Carry Counter */
+#define KVX_SFR_CS_CC_SHIFT 32
+#define KVX_SFR_CS_CC_WIDTH 16
+#define KVX_SFR_CS_CC_WFXM_MASK _ULL(0xffff00000000)
+#define KVX_SFR_CS_CC_WFXM_CLEAR _ULL(0xffff)
+#define KVX_SFR_CS_CC_WFXM_SET _ULL(0xffff00000000)
+
+#define KVX_SFR_CS_XDROP_MASK _ULL(0x3f000000000000) /* Extension Conversion Drop Bits */
+#define KVX_SFR_CS_XDROP_SHIFT 48
+#define KVX_SFR_CS_XDROP_WIDTH 6
+#define KVX_SFR_CS_XDROP_WFXM_MASK _ULL(0x3f000000000000)
+#define KVX_SFR_CS_XDROP_WFXM_CLEAR _ULL(0x3f0000)
+#define KVX_SFR_CS_XDROP_WFXM_SET _ULL(0x3f000000000000)
+
+#define KVX_SFR_CS_XPOW2_MASK _ULL(0xfc0000000000000) /* Extension FScale Power of Two */
+#define KVX_SFR_CS_XPOW2_SHIFT 54
+#define KVX_SFR_CS_XPOW2_WIDTH 6
+#define KVX_SFR_CS_XPOW2_WFXM_MASK _ULL(0xfc0000000000000)
+#define KVX_SFR_CS_XPOW2_WFXM_CLEAR _ULL(0xfc00000)
+#define KVX_SFR_CS_XPOW2_WFXM_SET _ULL(0xfc0000000000000)
+
+#define KVX_SFR_AESPC_AESPC_MASK _ULL(0xffffffffffffffff) /* Arithmetic Exception Saved PC */
+#define KVX_SFR_AESPC_AESPC_SHIFT 0
+#define KVX_SFR_AESPC_AESPC_WIDTH 64
+#define KVX_SFR_AESPC_AESPC_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_AESPC_AESPC_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_AESPC_AESPC_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_AESPC_AESPC_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_AESPC_AESPC_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_AESPC_AESPC_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_CSIT_ICIE_MASK _ULL(0x1) /* Integer Carry Interrupt Enable */
+#define KVX_SFR_CSIT_ICIE_SHIFT 0
+#define KVX_SFR_CSIT_ICIE_WIDTH 1
+#define KVX_SFR_CSIT_ICIE_WFXL_MASK _ULL(0x1)
+#define KVX_SFR_CSIT_ICIE_WFXL_CLEAR _ULL(0x1)
+#define KVX_SFR_CSIT_ICIE_WFXL_SET _ULL(0x100000000)
+
+#define KVX_SFR_CSIT_IOIE_MASK _ULL(0x2) /* IEEE 754 Invalid Operation Interrupt Enable */
+#define KVX_SFR_CSIT_IOIE_SHIFT 1
+#define KVX_SFR_CSIT_IOIE_WIDTH 1
+#define KVX_SFR_CSIT_IOIE_WFXL_MASK _ULL(0x2)
+#define KVX_SFR_CSIT_IOIE_WFXL_CLEAR _ULL(0x2)
+#define KVX_SFR_CSIT_IOIE_WFXL_SET _ULL(0x200000000)
+
+#define KVX_SFR_CSIT_DZIE_MASK _ULL(0x4) /* IEEE 754 Divide by Zero Interrupt Enable */
+#define KVX_SFR_CSIT_DZIE_SHIFT 2
+#define KVX_SFR_CSIT_DZIE_WIDTH 1
+#define KVX_SFR_CSIT_DZIE_WFXL_MASK _ULL(0x4)
+#define KVX_SFR_CSIT_DZIE_WFXL_CLEAR _ULL(0x4)
+#define KVX_SFR_CSIT_DZIE_WFXL_SET _ULL(0x400000000)
+
+#define KVX_SFR_CSIT_OVIE_MASK _ULL(0x8) /* IEEE 754 Overflow Interrupt Enable */
+#define KVX_SFR_CSIT_OVIE_SHIFT 3
+#define KVX_SFR_CSIT_OVIE_WIDTH 1
+#define KVX_SFR_CSIT_OVIE_WFXL_MASK _ULL(0x8)
+#define KVX_SFR_CSIT_OVIE_WFXL_CLEAR _ULL(0x8)
+#define KVX_SFR_CSIT_OVIE_WFXL_SET _ULL(0x800000000)
+
+#define KVX_SFR_CSIT_UNIE_MASK _ULL(0x10) /* IEEE 754 Underflow Interrupt Enable */
+#define KVX_SFR_CSIT_UNIE_SHIFT 4
+#define KVX_SFR_CSIT_UNIE_WIDTH 1
+#define KVX_SFR_CSIT_UNIE_WFXL_MASK _ULL(0x10)
+#define KVX_SFR_CSIT_UNIE_WFXL_CLEAR _ULL(0x10)
+#define KVX_SFR_CSIT_UNIE_WFXL_SET _ULL(0x1000000000)
+
+#define KVX_SFR_CSIT_INIE_MASK _ULL(0x20) /* IEEE 754 Inexact Interrupt Enable */
+#define KVX_SFR_CSIT_INIE_SHIFT 5
+#define KVX_SFR_CSIT_INIE_WIDTH 1
+#define KVX_SFR_CSIT_INIE_WFXL_MASK _ULL(0x20)
+#define KVX_SFR_CSIT_INIE_WFXL_CLEAR _ULL(0x20)
+#define KVX_SFR_CSIT_INIE_WFXL_SET _ULL(0x2000000000)
+
+#define KVX_SFR_CSIT_XIOIE_MASK _ULL(0x200) /* Extension IEEE 754 Invalid Operation Interrupt Enable */
+#define KVX_SFR_CSIT_XIOIE_SHIFT 9
+#define KVX_SFR_CSIT_XIOIE_WIDTH 1
+#define KVX_SFR_CSIT_XIOIE_WFXL_MASK _ULL(0x200)
+#define KVX_SFR_CSIT_XIOIE_WFXL_CLEAR _ULL(0x200)
+#define KVX_SFR_CSIT_XIOIE_WFXL_SET _ULL(0x20000000000)
+
+#define KVX_SFR_CSIT_XDZIE_MASK _ULL(0x400) /* Extension IEEE 754 Divide by Zero Interrupt Enable */
+#define KVX_SFR_CSIT_XDZIE_SHIFT 10
+#define KVX_SFR_CSIT_XDZIE_WIDTH 1
+#define KVX_SFR_CSIT_XDZIE_WFXL_MASK _ULL(0x400)
+#define KVX_SFR_CSIT_XDZIE_WFXL_CLEAR _ULL(0x400)
+#define KVX_SFR_CSIT_XDZIE_WFXL_SET _ULL(0x40000000000)
+
+#define KVX_SFR_CSIT_XOVIE_MASK _ULL(0x800) /* Extension IEEE 754 Overflow Interrupt Enable */
+#define KVX_SFR_CSIT_XOVIE_SHIFT 11
+#define KVX_SFR_CSIT_XOVIE_WIDTH 1
+#define KVX_SFR_CSIT_XOVIE_WFXL_MASK _ULL(0x800)
+#define KVX_SFR_CSIT_XOVIE_WFXL_CLEAR _ULL(0x800)
+#define KVX_SFR_CSIT_XOVIE_WFXL_SET _ULL(0x80000000000)
+
+#define KVX_SFR_CSIT_XUNIE_MASK _ULL(0x1000) /* Extension IEEE 754 Underflow Interrupt Enable */
+#define KVX_SFR_CSIT_XUNIE_SHIFT 12
+#define KVX_SFR_CSIT_XUNIE_WIDTH 1
+#define KVX_SFR_CSIT_XUNIE_WFXL_MASK _ULL(0x1000)
+#define KVX_SFR_CSIT_XUNIE_WFXL_CLEAR _ULL(0x1000)
+#define KVX_SFR_CSIT_XUNIE_WFXL_SET _ULL(0x100000000000)
+
+#define KVX_SFR_CSIT_XINIE_MASK _ULL(0x2000) /* Extension IEEE 754 Inexact Interrupt Enable */
+#define KVX_SFR_CSIT_XINIE_SHIFT 13
+#define KVX_SFR_CSIT_XINIE_WIDTH 1
+#define KVX_SFR_CSIT_XINIE_WFXL_MASK _ULL(0x2000)
+#define KVX_SFR_CSIT_XINIE_WFXL_CLEAR _ULL(0x2000)
+#define KVX_SFR_CSIT_XINIE_WFXL_SET _ULL(0x200000000000)
+
+#define KVX_SFR_CSIT_AEIR_MASK _ULL(0x10000) /* Arithmetic Exception Interrupt Raised */
+#define KVX_SFR_CSIT_AEIR_SHIFT 16
+#define KVX_SFR_CSIT_AEIR_WIDTH 1
+#define KVX_SFR_CSIT_AEIR_WFXL_MASK _ULL(0x10000)
+#define KVX_SFR_CSIT_AEIR_WFXL_CLEAR _ULL(0x10000)
+#define KVX_SFR_CSIT_AEIR_WFXL_SET _ULL(0x1000000000000)
+
+#define KVX_SFR_CSIT_AEC_MASK _ULL(0xe0000) /* Arithmetic Exception Code */
+#define KVX_SFR_CSIT_AEC_SHIFT 17
+#define KVX_SFR_CSIT_AEC_WIDTH 3
+#define KVX_SFR_CSIT_AEC_WFXL_MASK _ULL(0xe0000)
+#define KVX_SFR_CSIT_AEC_WFXL_CLEAR _ULL(0xe0000)
+#define KVX_SFR_CSIT_AEC_WFXL_SET _ULL(0xe000000000000)
+
+#define KVX_SFR_CSIT_SPCV_MASK _ULL(0x100000) /* SPC Valid */
+#define KVX_SFR_CSIT_SPCV_SHIFT 20
+#define KVX_SFR_CSIT_SPCV_WIDTH 1
+#define KVX_SFR_CSIT_SPCV_WFXL_MASK _ULL(0x100000)
+#define KVX_SFR_CSIT_SPCV_WFXL_CLEAR _ULL(0x100000)
+#define KVX_SFR_CSIT_SPCV_WFXL_SET _ULL(0x10000000000000)
+
+#define KVX_SFR_ES_EC_MASK _ULL(0xf) /* Exception Class */
+#define KVX_SFR_ES_EC_SHIFT 0
+#define KVX_SFR_ES_EC_WIDTH 4
+#define KVX_SFR_ES_EC_WFXL_MASK _ULL(0xf)
+#define KVX_SFR_ES_EC_WFXL_CLEAR _ULL(0xf)
+#define KVX_SFR_ES_EC_WFXL_SET _ULL(0xf00000000)
+
+#define KVX_SFR_ES_ED_MASK _ULL(0xfffffffffffffff0) /* Exception Details */
+#define KVX_SFR_ES_ED_SHIFT 4
+#define KVX_SFR_ES_ED_WIDTH 60
+#define KVX_SFR_ES_ED_WFXL_MASK _ULL(0xfffffff0)
+#define KVX_SFR_ES_ED_WFXL_CLEAR _ULL(0xfffffff0)
+#define KVX_SFR_ES_ED_WFXL_SET _ULL(0xfffffff000000000)
+#define KVX_SFR_ES_ED_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_ES_ED_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_ES_ED_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_ES_OAPL_MASK _ULL(0x30) /* Origin Absolute PL */
+#define KVX_SFR_ES_OAPL_SHIFT 4
+#define KVX_SFR_ES_OAPL_WIDTH 2
+#define KVX_SFR_ES_OAPL_WFXL_MASK _ULL(0x30)
+#define KVX_SFR_ES_OAPL_WFXL_CLEAR _ULL(0x30)
+#define KVX_SFR_ES_OAPL_WFXL_SET _ULL(0x3000000000)
+
+#define KVX_SFR_ES_ORPL_MASK _ULL(0xc0) /* Origin Relative PL */
+#define KVX_SFR_ES_ORPL_SHIFT 6
+#define KVX_SFR_ES_ORPL_WIDTH 2
+#define KVX_SFR_ES_ORPL_WFXL_MASK _ULL(0xc0)
+#define KVX_SFR_ES_ORPL_WFXL_CLEAR _ULL(0xc0)
+#define KVX_SFR_ES_ORPL_WFXL_SET _ULL(0xc000000000)
+
+#define KVX_SFR_ES_PTAPL_MASK _ULL(0x300) /* Primary Target Absolute PL */
+#define KVX_SFR_ES_PTAPL_SHIFT 8
+#define KVX_SFR_ES_PTAPL_WIDTH 2
+#define KVX_SFR_ES_PTAPL_WFXL_MASK _ULL(0x300)
+#define KVX_SFR_ES_PTAPL_WFXL_CLEAR _ULL(0x300)
+#define KVX_SFR_ES_PTAPL_WFXL_SET _ULL(0x30000000000)
+
+#define KVX_SFR_ES_PTRPL_MASK _ULL(0xc00) /* Primary Target Relative PL */
+#define KVX_SFR_ES_PTRPL_SHIFT 10
+#define KVX_SFR_ES_PTRPL_WIDTH 2
+#define KVX_SFR_ES_PTRPL_WFXL_MASK _ULL(0xc00)
+#define KVX_SFR_ES_PTRPL_WFXL_CLEAR _ULL(0xc00)
+#define KVX_SFR_ES_PTRPL_WFXL_SET _ULL(0xc0000000000)
+
+#define KVX_SFR_ES_ITN_MASK _ULL(0x1f000) /* InTerrupt Number */
+#define KVX_SFR_ES_ITN_SHIFT 12
+#define KVX_SFR_ES_ITN_WIDTH 5
+#define KVX_SFR_ES_ITN_WFXL_MASK _ULL(0x1f000)
+#define KVX_SFR_ES_ITN_WFXL_CLEAR _ULL(0x1f000)
+#define KVX_SFR_ES_ITN_WFXL_SET _ULL(0x1f00000000000)
+
+#define KVX_SFR_ES_ITL_MASK _ULL(0x60000) /* InTerrupt Level */
+#define KVX_SFR_ES_ITL_SHIFT 17
+#define KVX_SFR_ES_ITL_WIDTH 2
+#define KVX_SFR_ES_ITL_WFXL_MASK _ULL(0x60000)
+#define KVX_SFR_ES_ITL_WFXL_CLEAR _ULL(0x60000)
+#define KVX_SFR_ES_ITL_WFXL_SET _ULL(0x6000000000000)
+
+#define KVX_SFR_ES_ITI_MASK _ULL(0x1ff80000) /* InTerrupt Info */
+#define KVX_SFR_ES_ITI_SHIFT 19
+#define KVX_SFR_ES_ITI_WIDTH 10
+#define KVX_SFR_ES_ITI_WFXL_MASK _ULL(0x1ff80000)
+#define KVX_SFR_ES_ITI_WFXL_CLEAR _ULL(0x1ff80000)
+#define KVX_SFR_ES_ITI_WFXL_SET _ULL(0x1ff8000000000000)
+
+#define KVX_SFR_ES_SN_MASK _ULL(0xfff000) /* Syscall Number */
+#define KVX_SFR_ES_SN_SHIFT 12
+#define KVX_SFR_ES_SN_WIDTH 12
+#define KVX_SFR_ES_SN_WFXL_MASK _ULL(0xfff000)
+#define KVX_SFR_ES_SN_WFXL_CLEAR _ULL(0xfff000)
+#define KVX_SFR_ES_SN_WFXL_SET _ULL(0xfff00000000000)
+
+#define KVX_SFR_ES_HTC_MASK _ULL(0x1f000) /* Hardware Trap Cause */
+#define KVX_SFR_ES_HTC_SHIFT 12
+#define KVX_SFR_ES_HTC_WIDTH 5
+#define KVX_SFR_ES_HTC_WFXL_MASK _ULL(0x1f000)
+#define KVX_SFR_ES_HTC_WFXL_CLEAR _ULL(0x1f000)
+#define KVX_SFR_ES_HTC_WFXL_SET _ULL(0x1f00000000000)
+
+#define KVX_SFR_ES_SFRT_MASK _ULL(0x20000) /* SFR Trap */
+#define KVX_SFR_ES_SFRT_SHIFT 17
+#define KVX_SFR_ES_SFRT_WIDTH 1
+#define KVX_SFR_ES_SFRT_WFXL_MASK _ULL(0x20000)
+#define KVX_SFR_ES_SFRT_WFXL_CLEAR _ULL(0x20000)
+#define KVX_SFR_ES_SFRT_WFXL_SET _ULL(0x2000000000000)
+
+#define KVX_SFR_ES_SFRI_MASK _ULL(0x1c0000) /* SFR Instruction */
+#define KVX_SFR_ES_SFRI_SHIFT 18
+#define KVX_SFR_ES_SFRI_WIDTH 3
+#define KVX_SFR_ES_SFRI_WFXL_MASK _ULL(0x1c0000)
+#define KVX_SFR_ES_SFRI_WFXL_CLEAR _ULL(0x1c0000)
+#define KVX_SFR_ES_SFRI_WFXL_SET _ULL(0x1c000000000000)
+
+#define KVX_SFR_ES_GPRP_MASK _ULL(0x7e00000) /* GPR Pointer */
+#define KVX_SFR_ES_GPRP_SHIFT 21
+#define KVX_SFR_ES_GPRP_WIDTH 6
+#define KVX_SFR_ES_GPRP_WFXL_MASK _ULL(0x7e00000)
+#define KVX_SFR_ES_GPRP_WFXL_CLEAR _ULL(0x7e00000)
+#define KVX_SFR_ES_GPRP_WFXL_SET _ULL(0x7e0000000000000)
+
+#define KVX_SFR_ES_SFRP_MASK _ULL(0xff8000000) /* SFR Pointer */
+#define KVX_SFR_ES_SFRP_SHIFT 27
+#define KVX_SFR_ES_SFRP_WIDTH 9
+#define KVX_SFR_ES_SFRP_WFXL_MASK _ULL(0xf8000000)
+#define KVX_SFR_ES_SFRP_WFXL_CLEAR _ULL(0xf8000000)
+#define KVX_SFR_ES_SFRP_WFXL_SET _ULL(0xf800000000000000)
+#define KVX_SFR_ES_SFRP_WFXM_MASK _ULL(0xf00000000)
+#define KVX_SFR_ES_SFRP_WFXM_CLEAR _ULL(0xf)
+#define KVX_SFR_ES_SFRP_WFXM_SET _ULL(0xf00000000)
+
+#define KVX_SFR_ES_DHT_MASK _ULL(0x1000000000) /* Disabled Hardware Trap */
+#define KVX_SFR_ES_DHT_SHIFT 36
+#define KVX_SFR_ES_DHT_WIDTH 1
+#define KVX_SFR_ES_DHT_WFXM_MASK _ULL(0x1000000000)
+#define KVX_SFR_ES_DHT_WFXM_CLEAR _ULL(0x10)
+#define KVX_SFR_ES_DHT_WFXM_SET _ULL(0x1000000000)
+
+#define KVX_SFR_ES_RWX_MASK _ULL(0x38000000000) /* Read Write Execute */
+#define KVX_SFR_ES_RWX_SHIFT 39
+#define KVX_SFR_ES_RWX_WIDTH 3
+#define KVX_SFR_ES_RWX_WFXM_MASK _ULL(0x38000000000)
+#define KVX_SFR_ES_RWX_WFXM_CLEAR _ULL(0x380)
+#define KVX_SFR_ES_RWX_WFXM_SET _ULL(0x38000000000)
+
+#define KVX_SFR_ES_NTA_MASK _ULL(0x40000000000) /* Non-Trapping Access */
+#define KVX_SFR_ES_NTA_SHIFT 42
+#define KVX_SFR_ES_NTA_WIDTH 1
+#define KVX_SFR_ES_NTA_WFXM_MASK _ULL(0x40000000000)
+#define KVX_SFR_ES_NTA_WFXM_CLEAR _ULL(0x400)
+#define KVX_SFR_ES_NTA_WFXM_SET _ULL(0x40000000000)
+
+#define KVX_SFR_ES_UCA_MASK _ULL(0x80000000000) /* Un-Cached Access */
+#define KVX_SFR_ES_UCA_SHIFT 43
+#define KVX_SFR_ES_UCA_WIDTH 1
+#define KVX_SFR_ES_UCA_WFXM_MASK _ULL(0x80000000000)
+#define KVX_SFR_ES_UCA_WFXM_CLEAR _ULL(0x800)
+#define KVX_SFR_ES_UCA_WFXM_SET _ULL(0x80000000000)
+
+#define KVX_SFR_ES_AS_MASK _ULL(0x3f00000000000) /* Access Size */
+#define KVX_SFR_ES_AS_SHIFT 44
+#define KVX_SFR_ES_AS_WIDTH 6
+#define KVX_SFR_ES_AS_WFXM_MASK _ULL(0x3f00000000000)
+#define KVX_SFR_ES_AS_WFXM_CLEAR _ULL(0x3f000)
+#define KVX_SFR_ES_AS_WFXM_SET _ULL(0x3f00000000000)
+
+#define KVX_SFR_ES_BS_MASK _ULL(0x3c000000000000) /* Bundle Size */
+#define KVX_SFR_ES_BS_SHIFT 50
+#define KVX_SFR_ES_BS_WIDTH 4
+#define KVX_SFR_ES_BS_WFXM_MASK _ULL(0x3c000000000000)
+#define KVX_SFR_ES_BS_WFXM_CLEAR _ULL(0x3c0000)
+#define KVX_SFR_ES_BS_WFXM_SET _ULL(0x3c000000000000)
+
+#define KVX_SFR_ES_DRI_MASK _ULL(0xfc0000000000000) /* Data Register Index */
+#define KVX_SFR_ES_DRI_SHIFT 54
+#define KVX_SFR_ES_DRI_WIDTH 6
+#define KVX_SFR_ES_DRI_WFXM_MASK _ULL(0xfc0000000000000)
+#define KVX_SFR_ES_DRI_WFXM_CLEAR _ULL(0xfc00000)
+#define KVX_SFR_ES_DRI_WFXM_SET _ULL(0xfc0000000000000)
+
+#define KVX_SFR_ES_PIC_MASK _ULL(0xf000000000000000) /* Privileged Instruction Code */
+#define KVX_SFR_ES_PIC_SHIFT 60
+#define KVX_SFR_ES_PIC_WIDTH 4
+#define KVX_SFR_ES_PIC_WFXM_MASK _ULL(0xf000000000000000)
+#define KVX_SFR_ES_PIC_WFXM_CLEAR _ULL(0xf0000000)
+#define KVX_SFR_ES_PIC_WFXM_SET _ULL(0xf000000000000000)
+
+#define KVX_SFR_ES_DC_MASK _ULL(0x3000) /* Debug Cause */
+#define KVX_SFR_ES_DC_SHIFT 12
+#define KVX_SFR_ES_DC_WIDTH 2
+#define KVX_SFR_ES_DC_WFXL_MASK _ULL(0x3000)
+#define KVX_SFR_ES_DC_WFXL_CLEAR _ULL(0x3000)
+#define KVX_SFR_ES_DC_WFXL_SET _ULL(0x300000000000)
+
+#define KVX_SFR_ES_BN_MASK _ULL(0x4000) /* Breakpoint Number */
+#define KVX_SFR_ES_BN_SHIFT 14
+#define KVX_SFR_ES_BN_WIDTH 1
+#define KVX_SFR_ES_BN_WFXL_MASK _ULL(0x4000)
+#define KVX_SFR_ES_BN_WFXL_CLEAR _ULL(0x4000)
+#define KVX_SFR_ES_BN_WFXL_SET _ULL(0x400000000000)
+
+#define KVX_SFR_ES_WN_MASK _ULL(0x8000) /* Watchpoint Number */
+#define KVX_SFR_ES_WN_SHIFT 15
+#define KVX_SFR_ES_WN_WIDTH 1
+#define KVX_SFR_ES_WN_WFXL_MASK _ULL(0x8000)
+#define KVX_SFR_ES_WN_WFXL_CLEAR _ULL(0x8000)
+#define KVX_SFR_ES_WN_WFXL_SET _ULL(0x800000000000)
+
+#define KVX_SFR_ES_PL0_EC_MASK _ULL(0xf) /* Exception Class */
+#define KVX_SFR_ES_PL0_EC_SHIFT 0
+#define KVX_SFR_ES_PL0_EC_WIDTH 4
+#define KVX_SFR_ES_PL0_EC_WFXL_MASK _ULL(0xf)
+#define KVX_SFR_ES_PL0_EC_WFXL_CLEAR _ULL(0xf)
+#define KVX_SFR_ES_PL0_EC_WFXL_SET _ULL(0xf00000000)
+
+#define KVX_SFR_ES_PL0_ED_MASK _ULL(0xfffffffffffffff0) /* Exception Details */
+#define KVX_SFR_ES_PL0_ED_SHIFT 4
+#define KVX_SFR_ES_PL0_ED_WIDTH 60
+#define KVX_SFR_ES_PL0_ED_WFXL_MASK _ULL(0xfffffff0)
+#define KVX_SFR_ES_PL0_ED_WFXL_CLEAR _ULL(0xfffffff0)
+#define KVX_SFR_ES_PL0_ED_WFXL_SET _ULL(0xfffffff000000000)
+#define KVX_SFR_ES_PL0_ED_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_ES_PL0_ED_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_ES_PL0_ED_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_ES_PL0_OAPL_MASK _ULL(0x30) /* Origin Absolute PL */
+#define KVX_SFR_ES_PL0_OAPL_SHIFT 4
+#define KVX_SFR_ES_PL0_OAPL_WIDTH 2
+#define KVX_SFR_ES_PL0_OAPL_WFXL_MASK _ULL(0x30)
+#define KVX_SFR_ES_PL0_OAPL_WFXL_CLEAR _ULL(0x30)
+#define KVX_SFR_ES_PL0_OAPL_WFXL_SET _ULL(0x3000000000)
+
+#define KVX_SFR_ES_PL0_ORPL_MASK _ULL(0xc0) /* Origin Relative PL */
+#define KVX_SFR_ES_PL0_ORPL_SHIFT 6
+#define KVX_SFR_ES_PL0_ORPL_WIDTH 2
+#define KVX_SFR_ES_PL0_ORPL_WFXL_MASK _ULL(0xc0)
+#define KVX_SFR_ES_PL0_ORPL_WFXL_CLEAR _ULL(0xc0)
+#define KVX_SFR_ES_PL0_ORPL_WFXL_SET _ULL(0xc000000000)
+
+#define KVX_SFR_ES_PL0_PTAPL_MASK _ULL(0x300) /* Target Absolute PL */
+#define KVX_SFR_ES_PL0_PTAPL_SHIFT 8
+#define KVX_SFR_ES_PL0_PTAPL_WIDTH 2
+#define KVX_SFR_ES_PL0_PTAPL_WFXL_MASK _ULL(0x300)
+#define KVX_SFR_ES_PL0_PTAPL_WFXL_CLEAR _ULL(0x300)
+#define KVX_SFR_ES_PL0_PTAPL_WFXL_SET _ULL(0x30000000000)
+
+#define KVX_SFR_ES_PL0_PTRPL_MASK _ULL(0xc00) /* Target Relative PL */
+#define KVX_SFR_ES_PL0_PTRPL_SHIFT 10
+#define KVX_SFR_ES_PL0_PTRPL_WIDTH 2
+#define KVX_SFR_ES_PL0_PTRPL_WFXL_MASK _ULL(0xc00)
+#define KVX_SFR_ES_PL0_PTRPL_WFXL_CLEAR _ULL(0xc00)
+#define KVX_SFR_ES_PL0_PTRPL_WFXL_SET _ULL(0xc0000000000)
+
+#define KVX_SFR_ES_PL0_ITN_MASK _ULL(0x1f000) /* InTerrupt Number */
+#define KVX_SFR_ES_PL0_ITN_SHIFT 12
+#define KVX_SFR_ES_PL0_ITN_WIDTH 5
+#define KVX_SFR_ES_PL0_ITN_WFXL_MASK _ULL(0x1f000)
+#define KVX_SFR_ES_PL0_ITN_WFXL_CLEAR _ULL(0x1f000)
+#define KVX_SFR_ES_PL0_ITN_WFXL_SET _ULL(0x1f00000000000)
+
+#define KVX_SFR_ES_PL0_ITL_MASK _ULL(0x60000) /* InTerrupt Level */
+#define KVX_SFR_ES_PL0_ITL_SHIFT 17
+#define KVX_SFR_ES_PL0_ITL_WIDTH 2
+#define KVX_SFR_ES_PL0_ITL_WFXL_MASK _ULL(0x60000)
+#define KVX_SFR_ES_PL0_ITL_WFXL_CLEAR _ULL(0x60000)
+#define KVX_SFR_ES_PL0_ITL_WFXL_SET _ULL(0x6000000000000)
+
+#define KVX_SFR_ES_PL0_ITI_MASK _ULL(0x1ff80000) /* InTerrupt Info */
+#define KVX_SFR_ES_PL0_ITI_SHIFT 19
+#define KVX_SFR_ES_PL0_ITI_WIDTH 10
+#define KVX_SFR_ES_PL0_ITI_WFXL_MASK _ULL(0x1ff80000)
+#define KVX_SFR_ES_PL0_ITI_WFXL_CLEAR _ULL(0x1ff80000)
+#define KVX_SFR_ES_PL0_ITI_WFXL_SET _ULL(0x1ff8000000000000)
+
+#define KVX_SFR_ES_PL0_SN_MASK _ULL(0xfff000) /* Syscall Number */
+#define KVX_SFR_ES_PL0_SN_SHIFT 12
+#define KVX_SFR_ES_PL0_SN_WIDTH 12
+#define KVX_SFR_ES_PL0_SN_WFXL_MASK _ULL(0xfff000)
+#define KVX_SFR_ES_PL0_SN_WFXL_CLEAR _ULL(0xfff000)
+#define KVX_SFR_ES_PL0_SN_WFXL_SET _ULL(0xfff00000000000)
+
+#define KVX_SFR_ES_PL0_HTC_MASK _ULL(0x1f000) /* Hardware Trap Cause */
+#define KVX_SFR_ES_PL0_HTC_SHIFT 12
+#define KVX_SFR_ES_PL0_HTC_WIDTH 5
+#define KVX_SFR_ES_PL0_HTC_WFXL_MASK _ULL(0x1f000)
+#define KVX_SFR_ES_PL0_HTC_WFXL_CLEAR _ULL(0x1f000)
+#define KVX_SFR_ES_PL0_HTC_WFXL_SET _ULL(0x1f00000000000)
+
+#define KVX_SFR_ES_PL0_SFRT_MASK _ULL(0x20000) /* SFR Trap */
+#define KVX_SFR_ES_PL0_SFRT_SHIFT 17
+#define KVX_SFR_ES_PL0_SFRT_WIDTH 1
+#define KVX_SFR_ES_PL0_SFRT_WFXL_MASK _ULL(0x20000)
+#define KVX_SFR_ES_PL0_SFRT_WFXL_CLEAR _ULL(0x20000)
+#define KVX_SFR_ES_PL0_SFRT_WFXL_SET _ULL(0x2000000000000)
+
+#define KVX_SFR_ES_PL0_SFRI_MASK _ULL(0x1c0000) /* SFR Instruction */
+#define KVX_SFR_ES_PL0_SFRI_SHIFT 18
+#define KVX_SFR_ES_PL0_SFRI_WIDTH 3
+#define KVX_SFR_ES_PL0_SFRI_WFXL_MASK _ULL(0x1c0000)
+#define KVX_SFR_ES_PL0_SFRI_WFXL_CLEAR _ULL(0x1c0000)
+#define KVX_SFR_ES_PL0_SFRI_WFXL_SET _ULL(0x1c000000000000)
+
+#define KVX_SFR_ES_PL0_GPRP_MASK _ULL(0x7e00000) /* GPR Pointer */
+#define KVX_SFR_ES_PL0_GPRP_SHIFT 21
+#define KVX_SFR_ES_PL0_GPRP_WIDTH 6
+#define KVX_SFR_ES_PL0_GPRP_WFXL_MASK _ULL(0x7e00000)
+#define KVX_SFR_ES_PL0_GPRP_WFXL_CLEAR _ULL(0x7e00000)
+#define KVX_SFR_ES_PL0_GPRP_WFXL_SET _ULL(0x7e0000000000000)
+
+#define KVX_SFR_ES_PL0_SFRP_MASK _ULL(0xff8000000) /* SFR Pointer */
+#define KVX_SFR_ES_PL0_SFRP_SHIFT 27
+#define KVX_SFR_ES_PL0_SFRP_WIDTH 9
+#define KVX_SFR_ES_PL0_SFRP_WFXL_MASK _ULL(0xf8000000)
+#define KVX_SFR_ES_PL0_SFRP_WFXL_CLEAR _ULL(0xf8000000)
+#define KVX_SFR_ES_PL0_SFRP_WFXL_SET _ULL(0xf800000000000000)
+#define KVX_SFR_ES_PL0_SFRP_WFXM_MASK _ULL(0xf00000000)
+#define KVX_SFR_ES_PL0_SFRP_WFXM_CLEAR _ULL(0xf)
+#define KVX_SFR_ES_PL0_SFRP_WFXM_SET _ULL(0xf00000000)
+
+#define KVX_SFR_ES_PL0_DHT_MASK _ULL(0x1000000000) /* Disabled Hardware Trap */
+#define KVX_SFR_ES_PL0_DHT_SHIFT 36
+#define KVX_SFR_ES_PL0_DHT_WIDTH 1
+#define KVX_SFR_ES_PL0_DHT_WFXM_MASK _ULL(0x1000000000)
+#define KVX_SFR_ES_PL0_DHT_WFXM_CLEAR _ULL(0x10)
+#define KVX_SFR_ES_PL0_DHT_WFXM_SET _ULL(0x1000000000)
+
+#define KVX_SFR_ES_PL0_RWX_MASK _ULL(0x38000000000) /* Read Write Execute */
+#define KVX_SFR_ES_PL0_RWX_SHIFT 39
+#define KVX_SFR_ES_PL0_RWX_WIDTH 3
+#define KVX_SFR_ES_PL0_RWX_WFXM_MASK _ULL(0x38000000000)
+#define KVX_SFR_ES_PL0_RWX_WFXM_CLEAR _ULL(0x380)
+#define KVX_SFR_ES_PL0_RWX_WFXM_SET _ULL(0x38000000000)
+
+#define KVX_SFR_ES_PL0_NTA_MASK _ULL(0x40000000000) /* Non-Trapping Access */
+#define KVX_SFR_ES_PL0_NTA_SHIFT 42
+#define KVX_SFR_ES_PL0_NTA_WIDTH 1
+#define KVX_SFR_ES_PL0_NTA_WFXM_MASK _ULL(0x40000000000)
+#define KVX_SFR_ES_PL0_NTA_WFXM_CLEAR _ULL(0x400)
+#define KVX_SFR_ES_PL0_NTA_WFXM_SET _ULL(0x40000000000)
+
+#define KVX_SFR_ES_PL0_UCA_MASK _ULL(0x80000000000) /* Un-Cached Access */
+#define KVX_SFR_ES_PL0_UCA_SHIFT 43
+#define KVX_SFR_ES_PL0_UCA_WIDTH 1
+#define KVX_SFR_ES_PL0_UCA_WFXM_MASK _ULL(0x80000000000)
+#define KVX_SFR_ES_PL0_UCA_WFXM_CLEAR _ULL(0x800)
+#define KVX_SFR_ES_PL0_UCA_WFXM_SET _ULL(0x80000000000)
+
+#define KVX_SFR_ES_PL0_AS_MASK _ULL(0x3f00000000000) /* Access Size */
+#define KVX_SFR_ES_PL0_AS_SHIFT 44
+#define KVX_SFR_ES_PL0_AS_WIDTH 6
+#define KVX_SFR_ES_PL0_AS_WFXM_MASK _ULL(0x3f00000000000)
+#define KVX_SFR_ES_PL0_AS_WFXM_CLEAR _ULL(0x3f000)
+#define KVX_SFR_ES_PL0_AS_WFXM_SET _ULL(0x3f00000000000)
+
+#define KVX_SFR_ES_PL0_BS_MASK _ULL(0x3c000000000000) /* Bundle Size */
+#define KVX_SFR_ES_PL0_BS_SHIFT 50
+#define KVX_SFR_ES_PL0_BS_WIDTH 4
+#define KVX_SFR_ES_PL0_BS_WFXM_MASK _ULL(0x3c000000000000)
+#define KVX_SFR_ES_PL0_BS_WFXM_CLEAR _ULL(0x3c0000)
+#define KVX_SFR_ES_PL0_BS_WFXM_SET _ULL(0x3c000000000000)
+
+#define KVX_SFR_ES_PL0_DRI_MASK _ULL(0xfc0000000000000) /* Data Register Index */
+#define KVX_SFR_ES_PL0_DRI_SHIFT 54
+#define KVX_SFR_ES_PL0_DRI_WIDTH 6
+#define KVX_SFR_ES_PL0_DRI_WFXM_MASK _ULL(0xfc0000000000000)
+#define KVX_SFR_ES_PL0_DRI_WFXM_CLEAR _ULL(0xfc00000)
+#define KVX_SFR_ES_PL0_DRI_WFXM_SET _ULL(0xfc0000000000000)
+
+#define KVX_SFR_ES_PL0_PIC_MASK _ULL(0xf000000000000000) /* Privileged Instruction Code */
+#define KVX_SFR_ES_PL0_PIC_SHIFT 60
+#define KVX_SFR_ES_PL0_PIC_WIDTH 4
+#define KVX_SFR_ES_PL0_PIC_WFXM_MASK _ULL(0xf000000000000000)
+#define KVX_SFR_ES_PL0_PIC_WFXM_CLEAR _ULL(0xf0000000)
+#define KVX_SFR_ES_PL0_PIC_WFXM_SET _ULL(0xf000000000000000)
+
+#define KVX_SFR_ES_PL0_DC_MASK _ULL(0x3000) /* Debug Cause */
+#define KVX_SFR_ES_PL0_DC_SHIFT 12
+#define KVX_SFR_ES_PL0_DC_WIDTH 2
+#define KVX_SFR_ES_PL0_DC_WFXL_MASK _ULL(0x3000)
+#define KVX_SFR_ES_PL0_DC_WFXL_CLEAR _ULL(0x3000)
+#define KVX_SFR_ES_PL0_DC_WFXL_SET _ULL(0x300000000000)
+
+#define KVX_SFR_ES_PL0_BN_MASK _ULL(0x4000) /* Breakpoint Number */
+#define KVX_SFR_ES_PL0_BN_SHIFT 14
+#define KVX_SFR_ES_PL0_BN_WIDTH 1
+#define KVX_SFR_ES_PL0_BN_WFXL_MASK _ULL(0x4000)
+#define KVX_SFR_ES_PL0_BN_WFXL_CLEAR _ULL(0x4000)
+#define KVX_SFR_ES_PL0_BN_WFXL_SET _ULL(0x400000000000)
+
+#define KVX_SFR_ES_PL0_WN_MASK _ULL(0x8000) /* Watchpoint Number */
+#define KVX_SFR_ES_PL0_WN_SHIFT 15
+#define KVX_SFR_ES_PL0_WN_WIDTH 1
+#define KVX_SFR_ES_PL0_WN_WFXL_MASK _ULL(0x8000)
+#define KVX_SFR_ES_PL0_WN_WFXL_CLEAR _ULL(0x8000)
+#define KVX_SFR_ES_PL0_WN_WFXL_SET _ULL(0x800000000000)
+
+#define KVX_SFR_ES_PL1_EC_MASK _ULL(0xf) /* Exception Class */
+#define KVX_SFR_ES_PL1_EC_SHIFT 0
+#define KVX_SFR_ES_PL1_EC_WIDTH 4
+#define KVX_SFR_ES_PL1_EC_WFXL_MASK _ULL(0xf)
+#define KVX_SFR_ES_PL1_EC_WFXL_CLEAR _ULL(0xf)
+#define KVX_SFR_ES_PL1_EC_WFXL_SET _ULL(0xf00000000)
+
+#define KVX_SFR_ES_PL1_ED_MASK _ULL(0xfffffffffffffff0) /* Exception Details */
+#define KVX_SFR_ES_PL1_ED_SHIFT 4
+#define KVX_SFR_ES_PL1_ED_WIDTH 60
+#define KVX_SFR_ES_PL1_ED_WFXL_MASK _ULL(0xfffffff0)
+#define KVX_SFR_ES_PL1_ED_WFXL_CLEAR _ULL(0xfffffff0)
+#define KVX_SFR_ES_PL1_ED_WFXL_SET _ULL(0xfffffff000000000)
+#define KVX_SFR_ES_PL1_ED_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_ES_PL1_ED_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_ES_PL1_ED_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_ES_PL1_OAPL_MASK _ULL(0x30) /* Origin Absolute PL */
+#define KVX_SFR_ES_PL1_OAPL_SHIFT 4
+#define KVX_SFR_ES_PL1_OAPL_WIDTH 2
+#define KVX_SFR_ES_PL1_OAPL_WFXL_MASK _ULL(0x30)
+#define KVX_SFR_ES_PL1_OAPL_WFXL_CLEAR _ULL(0x30)
+#define KVX_SFR_ES_PL1_OAPL_WFXL_SET _ULL(0x3000000000)
+
+#define KVX_SFR_ES_PL1_ORPL_MASK _ULL(0xc0) /* Origin Relative PL */
+#define KVX_SFR_ES_PL1_ORPL_SHIFT 6
+#define KVX_SFR_ES_PL1_ORPL_WIDTH 2
+#define KVX_SFR_ES_PL1_ORPL_WFXL_MASK _ULL(0xc0)
+#define KVX_SFR_ES_PL1_ORPL_WFXL_CLEAR _ULL(0xc0)
+#define KVX_SFR_ES_PL1_ORPL_WFXL_SET _ULL(0xc000000000)
+
+#define KVX_SFR_ES_PL1_PTAPL_MASK _ULL(0x300) /* Target Absolute PL */
+#define KVX_SFR_ES_PL1_PTAPL_SHIFT 8
+#define KVX_SFR_ES_PL1_PTAPL_WIDTH 2
+#define KVX_SFR_ES_PL1_PTAPL_WFXL_MASK _ULL(0x300)
+#define KVX_SFR_ES_PL1_PTAPL_WFXL_CLEAR _ULL(0x300)
+#define KVX_SFR_ES_PL1_PTAPL_WFXL_SET _ULL(0x30000000000)
+
+#define KVX_SFR_ES_PL1_PTRPL_MASK _ULL(0xc00) /* Target Relative PL */
+#define KVX_SFR_ES_PL1_PTRPL_SHIFT 10
+#define KVX_SFR_ES_PL1_PTRPL_WIDTH 2
+#define KVX_SFR_ES_PL1_PTRPL_WFXL_MASK _ULL(0xc00)
+#define KVX_SFR_ES_PL1_PTRPL_WFXL_CLEAR _ULL(0xc00)
+#define KVX_SFR_ES_PL1_PTRPL_WFXL_SET _ULL(0xc0000000000)
+
+#define KVX_SFR_ES_PL1_ITN_MASK _ULL(0x1f000) /* InTerrupt Number */
+#define KVX_SFR_ES_PL1_ITN_SHIFT 12
+#define KVX_SFR_ES_PL1_ITN_WIDTH 5
+#define KVX_SFR_ES_PL1_ITN_WFXL_MASK _ULL(0x1f000)
+#define KVX_SFR_ES_PL1_ITN_WFXL_CLEAR _ULL(0x1f000)
+#define KVX_SFR_ES_PL1_ITN_WFXL_SET _ULL(0x1f00000000000)
+
+#define KVX_SFR_ES_PL1_ITL_MASK _ULL(0x60000) /* InTerrupt Level */
+#define KVX_SFR_ES_PL1_ITL_SHIFT 17
+#define KVX_SFR_ES_PL1_ITL_WIDTH 2
+#define KVX_SFR_ES_PL1_ITL_WFXL_MASK _ULL(0x60000)
+#define KVX_SFR_ES_PL1_ITL_WFXL_CLEAR _ULL(0x60000)
+#define KVX_SFR_ES_PL1_ITL_WFXL_SET _ULL(0x6000000000000)
+
+#define KVX_SFR_ES_PL1_ITI_MASK _ULL(0x1ff80000) /* InTerrupt Info */
+#define KVX_SFR_ES_PL1_ITI_SHIFT 19
+#define KVX_SFR_ES_PL1_ITI_WIDTH 10
+#define KVX_SFR_ES_PL1_ITI_WFXL_MASK _ULL(0x1ff80000)
+#define KVX_SFR_ES_PL1_ITI_WFXL_CLEAR _ULL(0x1ff80000)
+#define KVX_SFR_ES_PL1_ITI_WFXL_SET _ULL(0x1ff8000000000000)
+
+#define KVX_SFR_ES_PL1_SN_MASK _ULL(0xfff000) /* Syscall Number */
+#define KVX_SFR_ES_PL1_SN_SHIFT 12
+#define KVX_SFR_ES_PL1_SN_WIDTH 12
+#define KVX_SFR_ES_PL1_SN_WFXL_MASK _ULL(0xfff000)
+#define KVX_SFR_ES_PL1_SN_WFXL_CLEAR _ULL(0xfff000)
+#define KVX_SFR_ES_PL1_SN_WFXL_SET _ULL(0xfff00000000000)
+
+#define KVX_SFR_ES_PL1_HTC_MASK _ULL(0x1f000) /* Hardware Trap Cause */
+#define KVX_SFR_ES_PL1_HTC_SHIFT 12
+#define KVX_SFR_ES_PL1_HTC_WIDTH 5
+#define KVX_SFR_ES_PL1_HTC_WFXL_MASK _ULL(0x1f000)
+#define KVX_SFR_ES_PL1_HTC_WFXL_CLEAR _ULL(0x1f000)
+#define KVX_SFR_ES_PL1_HTC_WFXL_SET _ULL(0x1f00000000000)
+
+#define KVX_SFR_ES_PL1_SFRT_MASK _ULL(0x20000) /* SFR Trap */
+#define KVX_SFR_ES_PL1_SFRT_SHIFT 17
+#define KVX_SFR_ES_PL1_SFRT_WIDTH 1
+#define KVX_SFR_ES_PL1_SFRT_WFXL_MASK _ULL(0x20000)
+#define KVX_SFR_ES_PL1_SFRT_WFXL_CLEAR _ULL(0x20000)
+#define KVX_SFR_ES_PL1_SFRT_WFXL_SET _ULL(0x2000000000000)
+
+#define KVX_SFR_ES_PL1_SFRI_MASK _ULL(0x1c0000) /* SFR Instruction */
+#define KVX_SFR_ES_PL1_SFRI_SHIFT 18
+#define KVX_SFR_ES_PL1_SFRI_WIDTH 3
+#define KVX_SFR_ES_PL1_SFRI_WFXL_MASK _ULL(0x1c0000)
+#define KVX_SFR_ES_PL1_SFRI_WFXL_CLEAR _ULL(0x1c0000)
+#define KVX_SFR_ES_PL1_SFRI_WFXL_SET _ULL(0x1c000000000000)
+
+#define KVX_SFR_ES_PL1_GPRP_MASK _ULL(0x7e00000) /* GPR Pointer */
+#define KVX_SFR_ES_PL1_GPRP_SHIFT 21
+#define KVX_SFR_ES_PL1_GPRP_WIDTH 6
+#define KVX_SFR_ES_PL1_GPRP_WFXL_MASK _ULL(0x7e00000)
+#define KVX_SFR_ES_PL1_GPRP_WFXL_CLEAR _ULL(0x7e00000)
+#define KVX_SFR_ES_PL1_GPRP_WFXL_SET _ULL(0x7e0000000000000)
+
+#define KVX_SFR_ES_PL1_SFRP_MASK _ULL(0xff8000000) /* SFR Pointer */
+#define KVX_SFR_ES_PL1_SFRP_SHIFT 27
+#define KVX_SFR_ES_PL1_SFRP_WIDTH 9
+#define KVX_SFR_ES_PL1_SFRP_WFXL_MASK _ULL(0xf8000000)
+#define KVX_SFR_ES_PL1_SFRP_WFXL_CLEAR _ULL(0xf8000000)
+#define KVX_SFR_ES_PL1_SFRP_WFXL_SET _ULL(0xf800000000000000)
+#define KVX_SFR_ES_PL1_SFRP_WFXM_MASK _ULL(0xf00000000)
+#define KVX_SFR_ES_PL1_SFRP_WFXM_CLEAR _ULL(0xf)
+#define KVX_SFR_ES_PL1_SFRP_WFXM_SET _ULL(0xf00000000)
+
+#define KVX_SFR_ES_PL1_DHT_MASK _ULL(0x1000000000) /* Disabled Hardware Trap */
+#define KVX_SFR_ES_PL1_DHT_SHIFT 36
+#define KVX_SFR_ES_PL1_DHT_WIDTH 1
+#define KVX_SFR_ES_PL1_DHT_WFXM_MASK _ULL(0x1000000000)
+#define KVX_SFR_ES_PL1_DHT_WFXM_CLEAR _ULL(0x10)
+#define KVX_SFR_ES_PL1_DHT_WFXM_SET _ULL(0x1000000000)
+
+#define KVX_SFR_ES_PL1_RWX_MASK _ULL(0x38000000000) /* Read Write Execute */
+#define KVX_SFR_ES_PL1_RWX_SHIFT 39
+#define KVX_SFR_ES_PL1_RWX_WIDTH 3
+#define KVX_SFR_ES_PL1_RWX_WFXM_MASK _ULL(0x38000000000)
+#define KVX_SFR_ES_PL1_RWX_WFXM_CLEAR _ULL(0x380)
+#define KVX_SFR_ES_PL1_RWX_WFXM_SET _ULL(0x38000000000)
+
+#define KVX_SFR_ES_PL1_NTA_MASK _ULL(0x40000000000) /* Non-Trapping Access */
+#define KVX_SFR_ES_PL1_NTA_SHIFT 42
+#define KVX_SFR_ES_PL1_NTA_WIDTH 1
+#define KVX_SFR_ES_PL1_NTA_WFXM_MASK _ULL(0x40000000000)
+#define KVX_SFR_ES_PL1_NTA_WFXM_CLEAR _ULL(0x400)
+#define KVX_SFR_ES_PL1_NTA_WFXM_SET _ULL(0x40000000000)
+
+#define KVX_SFR_ES_PL1_UCA_MASK _ULL(0x80000000000) /* Un-Cached Access */
+#define KVX_SFR_ES_PL1_UCA_SHIFT 43
+#define KVX_SFR_ES_PL1_UCA_WIDTH 1
+#define KVX_SFR_ES_PL1_UCA_WFXM_MASK _ULL(0x80000000000)
+#define KVX_SFR_ES_PL1_UCA_WFXM_CLEAR _ULL(0x800)
+#define KVX_SFR_ES_PL1_UCA_WFXM_SET _ULL(0x80000000000)
+
+#define KVX_SFR_ES_PL1_AS_MASK _ULL(0x3f00000000000) /* Access Size */
+#define KVX_SFR_ES_PL1_AS_SHIFT 44
+#define KVX_SFR_ES_PL1_AS_WIDTH 6
+#define KVX_SFR_ES_PL1_AS_WFXM_MASK _ULL(0x3f00000000000)
+#define KVX_SFR_ES_PL1_AS_WFXM_CLEAR _ULL(0x3f000)
+#define KVX_SFR_ES_PL1_AS_WFXM_SET _ULL(0x3f00000000000)
+
+#define KVX_SFR_ES_PL1_BS_MASK _ULL(0x3c000000000000) /* Bundle Size */
+#define KVX_SFR_ES_PL1_BS_SHIFT 50
+#define KVX_SFR_ES_PL1_BS_WIDTH 4
+#define KVX_SFR_ES_PL1_BS_WFXM_MASK _ULL(0x3c000000000000)
+#define KVX_SFR_ES_PL1_BS_WFXM_CLEAR _ULL(0x3c0000)
+#define KVX_SFR_ES_PL1_BS_WFXM_SET _ULL(0x3c000000000000)
+
+#define KVX_SFR_ES_PL1_DRI_MASK _ULL(0xfc0000000000000) /* Data Register Index */
+#define KVX_SFR_ES_PL1_DRI_SHIFT 54
+#define KVX_SFR_ES_PL1_DRI_WIDTH 6
+#define KVX_SFR_ES_PL1_DRI_WFXM_MASK _ULL(0xfc0000000000000)
+#define KVX_SFR_ES_PL1_DRI_WFXM_CLEAR _ULL(0xfc00000)
+#define KVX_SFR_ES_PL1_DRI_WFXM_SET _ULL(0xfc0000000000000)
+
+#define KVX_SFR_ES_PL1_PIC_MASK _ULL(0xf000000000000000) /* Privileged Instruction Code */
+#define KVX_SFR_ES_PL1_PIC_SHIFT 60
+#define KVX_SFR_ES_PL1_PIC_WIDTH 4
+#define KVX_SFR_ES_PL1_PIC_WFXM_MASK _ULL(0xf000000000000000)
+#define KVX_SFR_ES_PL1_PIC_WFXM_CLEAR _ULL(0xf0000000)
+#define KVX_SFR_ES_PL1_PIC_WFXM_SET _ULL(0xf000000000000000)
+
+#define KVX_SFR_ES_PL1_DC_MASK _ULL(0x3000) /* Debug Cause */
+#define KVX_SFR_ES_PL1_DC_SHIFT 12
+#define KVX_SFR_ES_PL1_DC_WIDTH 2
+#define KVX_SFR_ES_PL1_DC_WFXL_MASK _ULL(0x3000)
+#define KVX_SFR_ES_PL1_DC_WFXL_CLEAR _ULL(0x3000)
+#define KVX_SFR_ES_PL1_DC_WFXL_SET _ULL(0x300000000000)
+
+#define KVX_SFR_ES_PL1_BN_MASK _ULL(0x4000) /* Breakpoint Number */
+#define KVX_SFR_ES_PL1_BN_SHIFT 14
+#define KVX_SFR_ES_PL1_BN_WIDTH 1
+#define KVX_SFR_ES_PL1_BN_WFXL_MASK _ULL(0x4000)
+#define KVX_SFR_ES_PL1_BN_WFXL_CLEAR _ULL(0x4000)
+#define KVX_SFR_ES_PL1_BN_WFXL_SET _ULL(0x400000000000)
+
+#define KVX_SFR_ES_PL1_WN_MASK _ULL(0x8000) /* Watchpoint Number */
+#define KVX_SFR_ES_PL1_WN_SHIFT 15
+#define KVX_SFR_ES_PL1_WN_WIDTH 1
+#define KVX_SFR_ES_PL1_WN_WFXL_MASK _ULL(0x8000)
+#define KVX_SFR_ES_PL1_WN_WFXL_CLEAR _ULL(0x8000)
+#define KVX_SFR_ES_PL1_WN_WFXL_SET _ULL(0x800000000000)
+
+#define KVX_SFR_ES_PL2_EC_MASK _ULL(0xf) /* Exception Class */
+#define KVX_SFR_ES_PL2_EC_SHIFT 0
+#define KVX_SFR_ES_PL2_EC_WIDTH 4
+#define KVX_SFR_ES_PL2_EC_WFXL_MASK _ULL(0xf)
+#define KVX_SFR_ES_PL2_EC_WFXL_CLEAR _ULL(0xf)
+#define KVX_SFR_ES_PL2_EC_WFXL_SET _ULL(0xf00000000)
+
+#define KVX_SFR_ES_PL2_ED_MASK _ULL(0xfffffffffffffff0) /* Exception Details */
+#define KVX_SFR_ES_PL2_ED_SHIFT 4
+#define KVX_SFR_ES_PL2_ED_WIDTH 60
+#define KVX_SFR_ES_PL2_ED_WFXL_MASK _ULL(0xfffffff0)
+#define KVX_SFR_ES_PL2_ED_WFXL_CLEAR _ULL(0xfffffff0)
+#define KVX_SFR_ES_PL2_ED_WFXL_SET _ULL(0xfffffff000000000)
+#define KVX_SFR_ES_PL2_ED_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_ES_PL2_ED_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_ES_PL2_ED_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_ES_PL2_OAPL_MASK _ULL(0x30) /* Origin Absolute PL */
+#define KVX_SFR_ES_PL2_OAPL_SHIFT 4
+#define KVX_SFR_ES_PL2_OAPL_WIDTH 2
+#define KVX_SFR_ES_PL2_OAPL_WFXL_MASK _ULL(0x30)
+#define KVX_SFR_ES_PL2_OAPL_WFXL_CLEAR _ULL(0x30)
+#define KVX_SFR_ES_PL2_OAPL_WFXL_SET _ULL(0x3000000000)
+
+#define KVX_SFR_ES_PL2_ORPL_MASK _ULL(0xc0) /* Origin Relative PL */
+#define KVX_SFR_ES_PL2_ORPL_SHIFT 6
+#define KVX_SFR_ES_PL2_ORPL_WIDTH 2
+#define KVX_SFR_ES_PL2_ORPL_WFXL_MASK _ULL(0xc0)
+#define KVX_SFR_ES_PL2_ORPL_WFXL_CLEAR _ULL(0xc0)
+#define KVX_SFR_ES_PL2_ORPL_WFXL_SET _ULL(0xc000000000)
+
+#define KVX_SFR_ES_PL2_PTAPL_MASK _ULL(0x300) /* Target Absolute PL */
+#define KVX_SFR_ES_PL2_PTAPL_SHIFT 8
+#define KVX_SFR_ES_PL2_PTAPL_WIDTH 2
+#define KVX_SFR_ES_PL2_PTAPL_WFXL_MASK _ULL(0x300)
+#define KVX_SFR_ES_PL2_PTAPL_WFXL_CLEAR _ULL(0x300)
+#define KVX_SFR_ES_PL2_PTAPL_WFXL_SET _ULL(0x30000000000)
+
+#define KVX_SFR_ES_PL2_PTRPL_MASK _ULL(0xc00) /* Target Relative PL */
+#define KVX_SFR_ES_PL2_PTRPL_SHIFT 10
+#define KVX_SFR_ES_PL2_PTRPL_WIDTH 2
+#define KVX_SFR_ES_PL2_PTRPL_WFXL_MASK _ULL(0xc00)
+#define KVX_SFR_ES_PL2_PTRPL_WFXL_CLEAR _ULL(0xc00)
+#define KVX_SFR_ES_PL2_PTRPL_WFXL_SET _ULL(0xc0000000000)
+
+#define KVX_SFR_ES_PL2_ITN_MASK _ULL(0x1f000) /* InTerrupt Number */
+#define KVX_SFR_ES_PL2_ITN_SHIFT 12
+#define KVX_SFR_ES_PL2_ITN_WIDTH 5
+#define KVX_SFR_ES_PL2_ITN_WFXL_MASK _ULL(0x1f000)
+#define KVX_SFR_ES_PL2_ITN_WFXL_CLEAR _ULL(0x1f000)
+#define KVX_SFR_ES_PL2_ITN_WFXL_SET _ULL(0x1f00000000000)
+
+#define KVX_SFR_ES_PL2_ITL_MASK _ULL(0x60000) /* InTerrupt Level */
+#define KVX_SFR_ES_PL2_ITL_SHIFT 17
+#define KVX_SFR_ES_PL2_ITL_WIDTH 2
+#define KVX_SFR_ES_PL2_ITL_WFXL_MASK _ULL(0x60000)
+#define KVX_SFR_ES_PL2_ITL_WFXL_CLEAR _ULL(0x60000)
+#define KVX_SFR_ES_PL2_ITL_WFXL_SET _ULL(0x6000000000000)
+
+#define KVX_SFR_ES_PL2_ITI_MASK _ULL(0x1ff80000) /* InTerrupt Info */
+#define KVX_SFR_ES_PL2_ITI_SHIFT 19
+#define KVX_SFR_ES_PL2_ITI_WIDTH 10
+#define KVX_SFR_ES_PL2_ITI_WFXL_MASK _ULL(0x1ff80000)
+#define KVX_SFR_ES_PL2_ITI_WFXL_CLEAR _ULL(0x1ff80000)
+#define KVX_SFR_ES_PL2_ITI_WFXL_SET _ULL(0x1ff8000000000000)
+
+#define KVX_SFR_ES_PL2_SN_MASK _ULL(0xfff000) /* Syscall Number */
+#define KVX_SFR_ES_PL2_SN_SHIFT 12
+#define KVX_SFR_ES_PL2_SN_WIDTH 12
+#define KVX_SFR_ES_PL2_SN_WFXL_MASK _ULL(0xfff000)
+#define KVX_SFR_ES_PL2_SN_WFXL_CLEAR _ULL(0xfff000)
+#define KVX_SFR_ES_PL2_SN_WFXL_SET _ULL(0xfff00000000000)
+
+#define KVX_SFR_ES_PL2_HTC_MASK _ULL(0x1f000) /* Hardware Trap Cause */
+#define KVX_SFR_ES_PL2_HTC_SHIFT 12
+#define KVX_SFR_ES_PL2_HTC_WIDTH 5
+#define KVX_SFR_ES_PL2_HTC_WFXL_MASK _ULL(0x1f000)
+#define KVX_SFR_ES_PL2_HTC_WFXL_CLEAR _ULL(0x1f000)
+#define KVX_SFR_ES_PL2_HTC_WFXL_SET _ULL(0x1f00000000000)
+
+#define KVX_SFR_ES_PL2_SFRT_MASK _ULL(0x20000) /* SFR Trap */
+#define KVX_SFR_ES_PL2_SFRT_SHIFT 17
+#define KVX_SFR_ES_PL2_SFRT_WIDTH 1
+#define KVX_SFR_ES_PL2_SFRT_WFXL_MASK _ULL(0x20000)
+#define KVX_SFR_ES_PL2_SFRT_WFXL_CLEAR _ULL(0x20000)
+#define KVX_SFR_ES_PL2_SFRT_WFXL_SET _ULL(0x2000000000000)
+
+#define KVX_SFR_ES_PL2_SFRI_MASK _ULL(0x1c0000) /* SFR Instruction */
+#define KVX_SFR_ES_PL2_SFRI_SHIFT 18
+#define KVX_SFR_ES_PL2_SFRI_WIDTH 3
+#define KVX_SFR_ES_PL2_SFRI_WFXL_MASK _ULL(0x1c0000)
+#define KVX_SFR_ES_PL2_SFRI_WFXL_CLEAR _ULL(0x1c0000)
+#define KVX_SFR_ES_PL2_SFRI_WFXL_SET _ULL(0x1c000000000000)
+
+#define KVX_SFR_ES_PL2_GPRP_MASK _ULL(0x7e00000) /* GPR Pointer */
+#define KVX_SFR_ES_PL2_GPRP_SHIFT 21
+#define KVX_SFR_ES_PL2_GPRP_WIDTH 6
+#define KVX_SFR_ES_PL2_GPRP_WFXL_MASK _ULL(0x7e00000)
+#define KVX_SFR_ES_PL2_GPRP_WFXL_CLEAR _ULL(0x7e00000)
+#define KVX_SFR_ES_PL2_GPRP_WFXL_SET _ULL(0x7e0000000000000)
+
+#define KVX_SFR_ES_PL2_SFRP_MASK _ULL(0xff8000000) /* SFR Pointer */
+#define KVX_SFR_ES_PL2_SFRP_SHIFT 27
+#define KVX_SFR_ES_PL2_SFRP_WIDTH 9
+#define KVX_SFR_ES_PL2_SFRP_WFXL_MASK _ULL(0xf8000000)
+#define KVX_SFR_ES_PL2_SFRP_WFXL_CLEAR _ULL(0xf8000000)
+#define KVX_SFR_ES_PL2_SFRP_WFXL_SET _ULL(0xf800000000000000)
+#define KVX_SFR_ES_PL2_SFRP_WFXM_MASK _ULL(0xf00000000)
+#define KVX_SFR_ES_PL2_SFRP_WFXM_CLEAR _ULL(0xf)
+#define KVX_SFR_ES_PL2_SFRP_WFXM_SET _ULL(0xf00000000)
+
+#define KVX_SFR_ES_PL2_DHT_MASK _ULL(0x1000000000) /* Disabled Hardware Trap */
+#define KVX_SFR_ES_PL2_DHT_SHIFT 36
+#define KVX_SFR_ES_PL2_DHT_WIDTH 1
+#define KVX_SFR_ES_PL2_DHT_WFXM_MASK _ULL(0x1000000000)
+#define KVX_SFR_ES_PL2_DHT_WFXM_CLEAR _ULL(0x10)
+#define KVX_SFR_ES_PL2_DHT_WFXM_SET _ULL(0x1000000000)
+
+#define KVX_SFR_ES_PL2_RWX_MASK _ULL(0x38000000000) /* Read Write Execute */
+#define KVX_SFR_ES_PL2_RWX_SHIFT 39
+#define KVX_SFR_ES_PL2_RWX_WIDTH 3
+#define KVX_SFR_ES_PL2_RWX_WFXM_MASK _ULL(0x38000000000)
+#define KVX_SFR_ES_PL2_RWX_WFXM_CLEAR _ULL(0x380)
+#define KVX_SFR_ES_PL2_RWX_WFXM_SET _ULL(0x38000000000)
+
+#define KVX_SFR_ES_PL2_NTA_MASK _ULL(0x40000000000) /* Non-Trapping Access */
+#define KVX_SFR_ES_PL2_NTA_SHIFT 42
+#define KVX_SFR_ES_PL2_NTA_WIDTH 1
+#define KVX_SFR_ES_PL2_NTA_WFXM_MASK _ULL(0x40000000000)
+#define KVX_SFR_ES_PL2_NTA_WFXM_CLEAR _ULL(0x400)
+#define KVX_SFR_ES_PL2_NTA_WFXM_SET _ULL(0x40000000000)
+
+#define KVX_SFR_ES_PL2_UCA_MASK _ULL(0x80000000000) /* Un-Cached Access */
+#define KVX_SFR_ES_PL2_UCA_SHIFT 43
+#define KVX_SFR_ES_PL2_UCA_WIDTH 1
+#define KVX_SFR_ES_PL2_UCA_WFXM_MASK _ULL(0x80000000000)
+#define KVX_SFR_ES_PL2_UCA_WFXM_CLEAR _ULL(0x800)
+#define KVX_SFR_ES_PL2_UCA_WFXM_SET _ULL(0x80000000000)
+
+#define KVX_SFR_ES_PL2_AS_MASK _ULL(0x3f00000000000) /* Access Size */
+#define KVX_SFR_ES_PL2_AS_SHIFT 44
+#define KVX_SFR_ES_PL2_AS_WIDTH 6
+#define KVX_SFR_ES_PL2_AS_WFXM_MASK _ULL(0x3f00000000000)
+#define KVX_SFR_ES_PL2_AS_WFXM_CLEAR _ULL(0x3f000)
+#define KVX_SFR_ES_PL2_AS_WFXM_SET _ULL(0x3f00000000000)
+
+#define KVX_SFR_ES_PL2_BS_MASK _ULL(0x3c000000000000) /* Bundle Size */
+#define KVX_SFR_ES_PL2_BS_SHIFT 50
+#define KVX_SFR_ES_PL2_BS_WIDTH 4
+#define KVX_SFR_ES_PL2_BS_WFXM_MASK _ULL(0x3c000000000000)
+#define KVX_SFR_ES_PL2_BS_WFXM_CLEAR _ULL(0x3c0000)
+#define KVX_SFR_ES_PL2_BS_WFXM_SET _ULL(0x3c000000000000)
+
+#define KVX_SFR_ES_PL2_DRI_MASK _ULL(0xfc0000000000000) /* Data Register Index */
+#define KVX_SFR_ES_PL2_DRI_SHIFT 54
+#define KVX_SFR_ES_PL2_DRI_WIDTH 6
+#define KVX_SFR_ES_PL2_DRI_WFXM_MASK _ULL(0xfc0000000000000)
+#define KVX_SFR_ES_PL2_DRI_WFXM_CLEAR _ULL(0xfc00000)
+#define KVX_SFR_ES_PL2_DRI_WFXM_SET _ULL(0xfc0000000000000)
+
+#define KVX_SFR_ES_PL2_PIC_MASK _ULL(0xf000000000000000) /* Privileged Instruction Code */
+#define KVX_SFR_ES_PL2_PIC_SHIFT 60
+#define KVX_SFR_ES_PL2_PIC_WIDTH 4
+#define KVX_SFR_ES_PL2_PIC_WFXM_MASK _ULL(0xf000000000000000)
+#define KVX_SFR_ES_PL2_PIC_WFXM_CLEAR _ULL(0xf0000000)
+#define KVX_SFR_ES_PL2_PIC_WFXM_SET _ULL(0xf000000000000000)
+
+#define KVX_SFR_ES_PL2_DC_MASK _ULL(0x3000) /* Debug Cause */
+#define KVX_SFR_ES_PL2_DC_SHIFT 12
+#define KVX_SFR_ES_PL2_DC_WIDTH 2
+#define KVX_SFR_ES_PL2_DC_WFXL_MASK _ULL(0x3000)
+#define KVX_SFR_ES_PL2_DC_WFXL_CLEAR _ULL(0x3000)
+#define KVX_SFR_ES_PL2_DC_WFXL_SET _ULL(0x300000000000)
+
+#define KVX_SFR_ES_PL2_BN_MASK _ULL(0x4000) /* Breakpoint Number */
+#define KVX_SFR_ES_PL2_BN_SHIFT 14
+#define KVX_SFR_ES_PL2_BN_WIDTH 1
+#define KVX_SFR_ES_PL2_BN_WFXL_MASK _ULL(0x4000)
+#define KVX_SFR_ES_PL2_BN_WFXL_CLEAR _ULL(0x4000)
+#define KVX_SFR_ES_PL2_BN_WFXL_SET _ULL(0x400000000000)
+
+#define KVX_SFR_ES_PL2_WN_MASK _ULL(0x8000) /* Watchpoint Number */
+#define KVX_SFR_ES_PL2_WN_SHIFT 15
+#define KVX_SFR_ES_PL2_WN_WIDTH 1
+#define KVX_SFR_ES_PL2_WN_WFXL_MASK _ULL(0x8000)
+#define KVX_SFR_ES_PL2_WN_WFXL_CLEAR _ULL(0x8000)
+#define KVX_SFR_ES_PL2_WN_WFXL_SET _ULL(0x800000000000)
+
+#define KVX_SFR_ES_PL3_EC_MASK _ULL(0xf) /* Exception Class */
+#define KVX_SFR_ES_PL3_EC_SHIFT 0
+#define KVX_SFR_ES_PL3_EC_WIDTH 4
+#define KVX_SFR_ES_PL3_EC_WFXL_MASK _ULL(0xf)
+#define KVX_SFR_ES_PL3_EC_WFXL_CLEAR _ULL(0xf)
+#define KVX_SFR_ES_PL3_EC_WFXL_SET _ULL(0xf00000000)
+
+#define KVX_SFR_ES_PL3_ED_MASK _ULL(0xfffffffffffffff0) /* Exception Details */
+#define KVX_SFR_ES_PL3_ED_SHIFT 4
+#define KVX_SFR_ES_PL3_ED_WIDTH 60
+#define KVX_SFR_ES_PL3_ED_WFXL_MASK _ULL(0xfffffff0)
+#define KVX_SFR_ES_PL3_ED_WFXL_CLEAR _ULL(0xfffffff0)
+#define KVX_SFR_ES_PL3_ED_WFXL_SET _ULL(0xfffffff000000000)
+#define KVX_SFR_ES_PL3_ED_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_ES_PL3_ED_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_ES_PL3_ED_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_ES_PL3_OAPL_MASK _ULL(0x30) /* Origin Absolute PL */
+#define KVX_SFR_ES_PL3_OAPL_SHIFT 4
+#define KVX_SFR_ES_PL3_OAPL_WIDTH 2
+#define KVX_SFR_ES_PL3_OAPL_WFXL_MASK _ULL(0x30)
+#define KVX_SFR_ES_PL3_OAPL_WFXL_CLEAR _ULL(0x30)
+#define KVX_SFR_ES_PL3_OAPL_WFXL_SET _ULL(0x3000000000)
+
+#define KVX_SFR_ES_PL3_ORPL_MASK _ULL(0xc0) /* Origin Relative PL */
+#define KVX_SFR_ES_PL3_ORPL_SHIFT 6
+#define KVX_SFR_ES_PL3_ORPL_WIDTH 2
+#define KVX_SFR_ES_PL3_ORPL_WFXL_MASK _ULL(0xc0)
+#define KVX_SFR_ES_PL3_ORPL_WFXL_CLEAR _ULL(0xc0)
+#define KVX_SFR_ES_PL3_ORPL_WFXL_SET _ULL(0xc000000000)
+
+#define KVX_SFR_ES_PL3_PTAPL_MASK _ULL(0x300) /* Target Absolute PL */
+#define KVX_SFR_ES_PL3_PTAPL_SHIFT 8
+#define KVX_SFR_ES_PL3_PTAPL_WIDTH 2
+#define KVX_SFR_ES_PL3_PTAPL_WFXL_MASK _ULL(0x300)
+#define KVX_SFR_ES_PL3_PTAPL_WFXL_CLEAR _ULL(0x300)
+#define KVX_SFR_ES_PL3_PTAPL_WFXL_SET _ULL(0x30000000000)
+
+#define KVX_SFR_ES_PL3_PTRPL_MASK _ULL(0xc00) /* Target Relative PL */
+#define KVX_SFR_ES_PL3_PTRPL_SHIFT 10
+#define KVX_SFR_ES_PL3_PTRPL_WIDTH 2
+#define KVX_SFR_ES_PL3_PTRPL_WFXL_MASK _ULL(0xc00)
+#define KVX_SFR_ES_PL3_PTRPL_WFXL_CLEAR _ULL(0xc00)
+#define KVX_SFR_ES_PL3_PTRPL_WFXL_SET _ULL(0xc0000000000)
+
+#define KVX_SFR_ES_PL3_ITN_MASK _ULL(0x1f000) /* InTerrupt Number */
+#define KVX_SFR_ES_PL3_ITN_SHIFT 12
+#define KVX_SFR_ES_PL3_ITN_WIDTH 5
+#define KVX_SFR_ES_PL3_ITN_WFXL_MASK _ULL(0x1f000)
+#define KVX_SFR_ES_PL3_ITN_WFXL_CLEAR _ULL(0x1f000)
+#define KVX_SFR_ES_PL3_ITN_WFXL_SET _ULL(0x1f00000000000)
+
+#define KVX_SFR_ES_PL3_ITL_MASK _ULL(0x60000) /* InTerrupt Level */
+#define KVX_SFR_ES_PL3_ITL_SHIFT 17
+#define KVX_SFR_ES_PL3_ITL_WIDTH 2
+#define KVX_SFR_ES_PL3_ITL_WFXL_MASK _ULL(0x60000)
+#define KVX_SFR_ES_PL3_ITL_WFXL_CLEAR _ULL(0x60000)
+#define KVX_SFR_ES_PL3_ITL_WFXL_SET _ULL(0x6000000000000)
+
+#define KVX_SFR_ES_PL3_ITI_MASK _ULL(0x1ff80000) /* InTerrupt Info */
+#define KVX_SFR_ES_PL3_ITI_SHIFT 19
+#define KVX_SFR_ES_PL3_ITI_WIDTH 10
+#define KVX_SFR_ES_PL3_ITI_WFXL_MASK _ULL(0x1ff80000)
+#define KVX_SFR_ES_PL3_ITI_WFXL_CLEAR _ULL(0x1ff80000)
+#define KVX_SFR_ES_PL3_ITI_WFXL_SET _ULL(0x1ff8000000000000)
+
+#define KVX_SFR_ES_PL3_SN_MASK _ULL(0xfff000) /* Syscall Number */
+#define KVX_SFR_ES_PL3_SN_SHIFT 12
+#define KVX_SFR_ES_PL3_SN_WIDTH 12
+#define KVX_SFR_ES_PL3_SN_WFXL_MASK _ULL(0xfff000)
+#define KVX_SFR_ES_PL3_SN_WFXL_CLEAR _ULL(0xfff000)
+#define KVX_SFR_ES_PL3_SN_WFXL_SET _ULL(0xfff00000000000)
+
+#define KVX_SFR_ES_PL3_HTC_MASK _ULL(0x1f000) /* Hardware Trap Cause */
+#define KVX_SFR_ES_PL3_HTC_SHIFT 12
+#define KVX_SFR_ES_PL3_HTC_WIDTH 5
+#define KVX_SFR_ES_PL3_HTC_WFXL_MASK _ULL(0x1f000)
+#define KVX_SFR_ES_PL3_HTC_WFXL_CLEAR _ULL(0x1f000)
+#define KVX_SFR_ES_PL3_HTC_WFXL_SET _ULL(0x1f00000000000)
+
+#define KVX_SFR_ES_PL3_SFRT_MASK _ULL(0x20000) /* SFR Trap */
+#define KVX_SFR_ES_PL3_SFRT_SHIFT 17
+#define KVX_SFR_ES_PL3_SFRT_WIDTH 1
+#define KVX_SFR_ES_PL3_SFRT_WFXL_MASK _ULL(0x20000)
+#define KVX_SFR_ES_PL3_SFRT_WFXL_CLEAR _ULL(0x20000)
+#define KVX_SFR_ES_PL3_SFRT_WFXL_SET _ULL(0x2000000000000)
+
+#define KVX_SFR_ES_PL3_SFRI_MASK _ULL(0x1c0000) /* SFR Instruction */
+#define KVX_SFR_ES_PL3_SFRI_SHIFT 18
+#define KVX_SFR_ES_PL3_SFRI_WIDTH 3
+#define KVX_SFR_ES_PL3_SFRI_WFXL_MASK _ULL(0x1c0000)
+#define KVX_SFR_ES_PL3_SFRI_WFXL_CLEAR _ULL(0x1c0000)
+#define KVX_SFR_ES_PL3_SFRI_WFXL_SET _ULL(0x1c000000000000)
+
+#define KVX_SFR_ES_PL3_GPRP_MASK _ULL(0x7e00000) /* GPR Pointer */
+#define KVX_SFR_ES_PL3_GPRP_SHIFT 21
+#define KVX_SFR_ES_PL3_GPRP_WIDTH 6
+#define KVX_SFR_ES_PL3_GPRP_WFXL_MASK _ULL(0x7e00000)
+#define KVX_SFR_ES_PL3_GPRP_WFXL_CLEAR _ULL(0x7e00000)
+#define KVX_SFR_ES_PL3_GPRP_WFXL_SET _ULL(0x7e0000000000000)
+
+#define KVX_SFR_ES_PL3_SFRP_MASK _ULL(0xff8000000) /* SFR Pointer */
+#define KVX_SFR_ES_PL3_SFRP_SHIFT 27
+#define KVX_SFR_ES_PL3_SFRP_WIDTH 9
+#define KVX_SFR_ES_PL3_SFRP_WFXL_MASK _ULL(0xf8000000)
+#define KVX_SFR_ES_PL3_SFRP_WFXL_CLEAR _ULL(0xf8000000)
+#define KVX_SFR_ES_PL3_SFRP_WFXL_SET _ULL(0xf800000000000000)
+#define KVX_SFR_ES_PL3_SFRP_WFXM_MASK _ULL(0xf00000000)
+#define KVX_SFR_ES_PL3_SFRP_WFXM_CLEAR _ULL(0xf)
+#define KVX_SFR_ES_PL3_SFRP_WFXM_SET _ULL(0xf00000000)
+
+#define KVX_SFR_ES_PL3_DHT_MASK _ULL(0x1000000000) /* Disabled Hardware Trap */
+#define KVX_SFR_ES_PL3_DHT_SHIFT 36
+#define KVX_SFR_ES_PL3_DHT_WIDTH 1
+#define KVX_SFR_ES_PL3_DHT_WFXM_MASK _ULL(0x1000000000)
+#define KVX_SFR_ES_PL3_DHT_WFXM_CLEAR _ULL(0x10)
+#define KVX_SFR_ES_PL3_DHT_WFXM_SET _ULL(0x1000000000)
+
+#define KVX_SFR_ES_PL3_RWX_MASK _ULL(0x38000000000) /* Read Write Execute */
+#define KVX_SFR_ES_PL3_RWX_SHIFT 39
+#define KVX_SFR_ES_PL3_RWX_WIDTH 3
+#define KVX_SFR_ES_PL3_RWX_WFXM_MASK _ULL(0x38000000000)
+#define KVX_SFR_ES_PL3_RWX_WFXM_CLEAR _ULL(0x380)
+#define KVX_SFR_ES_PL3_RWX_WFXM_SET _ULL(0x38000000000)
+
+#define KVX_SFR_ES_PL3_NTA_MASK _ULL(0x40000000000) /* Non-Trapping Access */
+#define KVX_SFR_ES_PL3_NTA_SHIFT 42
+#define KVX_SFR_ES_PL3_NTA_WIDTH 1
+#define KVX_SFR_ES_PL3_NTA_WFXM_MASK _ULL(0x40000000000)
+#define KVX_SFR_ES_PL3_NTA_WFXM_CLEAR _ULL(0x400)
+#define KVX_SFR_ES_PL3_NTA_WFXM_SET _ULL(0x40000000000)
+
+#define KVX_SFR_ES_PL3_UCA_MASK _ULL(0x80000000000) /* Un-Cached Access */
+#define KVX_SFR_ES_PL3_UCA_SHIFT 43
+#define KVX_SFR_ES_PL3_UCA_WIDTH 1
+#define KVX_SFR_ES_PL3_UCA_WFXM_MASK _ULL(0x80000000000)
+#define KVX_SFR_ES_PL3_UCA_WFXM_CLEAR _ULL(0x800)
+#define KVX_SFR_ES_PL3_UCA_WFXM_SET _ULL(0x80000000000)
+
+#define KVX_SFR_ES_PL3_AS_MASK _ULL(0x3f00000000000) /* Access Size */
+#define KVX_SFR_ES_PL3_AS_SHIFT 44
+#define KVX_SFR_ES_PL3_AS_WIDTH 6
+#define KVX_SFR_ES_PL3_AS_WFXM_MASK _ULL(0x3f00000000000)
+#define KVX_SFR_ES_PL3_AS_WFXM_CLEAR _ULL(0x3f000)
+#define KVX_SFR_ES_PL3_AS_WFXM_SET _ULL(0x3f00000000000)
+
+#define KVX_SFR_ES_PL3_BS_MASK _ULL(0x3c000000000000) /* Bundle Size */
+#define KVX_SFR_ES_PL3_BS_SHIFT 50
+#define KVX_SFR_ES_PL3_BS_WIDTH 4
+#define KVX_SFR_ES_PL3_BS_WFXM_MASK _ULL(0x3c000000000000)
+#define KVX_SFR_ES_PL3_BS_WFXM_CLEAR _ULL(0x3c0000)
+#define KVX_SFR_ES_PL3_BS_WFXM_SET _ULL(0x3c000000000000)
+
+#define KVX_SFR_ES_PL3_DRI_MASK _ULL(0xfc0000000000000) /* Data Register Index */
+#define KVX_SFR_ES_PL3_DRI_SHIFT 54
+#define KVX_SFR_ES_PL3_DRI_WIDTH 6
+#define KVX_SFR_ES_PL3_DRI_WFXM_MASK _ULL(0xfc0000000000000)
+#define KVX_SFR_ES_PL3_DRI_WFXM_CLEAR _ULL(0xfc00000)
+#define KVX_SFR_ES_PL3_DRI_WFXM_SET _ULL(0xfc0000000000000)
+
+#define KVX_SFR_ES_PL3_PIC_MASK _ULL(0xf000000000000000) /* Privileged Instruction Code */
+#define KVX_SFR_ES_PL3_PIC_SHIFT 60
+#define KVX_SFR_ES_PL3_PIC_WIDTH 4
+#define KVX_SFR_ES_PL3_PIC_WFXM_MASK _ULL(0xf000000000000000)
+#define KVX_SFR_ES_PL3_PIC_WFXM_CLEAR _ULL(0xf0000000)
+#define KVX_SFR_ES_PL3_PIC_WFXM_SET _ULL(0xf000000000000000)
+
+#define KVX_SFR_ES_PL3_DC_MASK _ULL(0x3000) /* Debug Cause */
+#define KVX_SFR_ES_PL3_DC_SHIFT 12
+#define KVX_SFR_ES_PL3_DC_WIDTH 2
+#define KVX_SFR_ES_PL3_DC_WFXL_MASK _ULL(0x3000)
+#define KVX_SFR_ES_PL3_DC_WFXL_CLEAR _ULL(0x3000)
+#define KVX_SFR_ES_PL3_DC_WFXL_SET _ULL(0x300000000000)
+
+#define KVX_SFR_ES_PL3_BN_MASK _ULL(0x4000) /* Breakpoint Number */
+#define KVX_SFR_ES_PL3_BN_SHIFT 14
+#define KVX_SFR_ES_PL3_BN_WIDTH 1
+#define KVX_SFR_ES_PL3_BN_WFXL_MASK _ULL(0x4000)
+#define KVX_SFR_ES_PL3_BN_WFXL_CLEAR _ULL(0x4000)
+#define KVX_SFR_ES_PL3_BN_WFXL_SET _ULL(0x400000000000)
+
+#define KVX_SFR_ES_PL3_WN_MASK _ULL(0x8000) /* Watchpoint Number */
+#define KVX_SFR_ES_PL3_WN_SHIFT 15
+#define KVX_SFR_ES_PL3_WN_WIDTH 1
+#define KVX_SFR_ES_PL3_WN_WFXL_MASK _ULL(0x8000)
+#define KVX_SFR_ES_PL3_WN_WFXL_CLEAR _ULL(0x8000)
+#define KVX_SFR_ES_PL3_WN_WFXL_SET _ULL(0x800000000000)
+
+#define KVX_SFR_TCR_T0CE_MASK _ULL(0x10000) /* Timer 0 Count Enable */
+#define KVX_SFR_TCR_T0CE_SHIFT 16
+#define KVX_SFR_TCR_T0CE_WIDTH 1
+#define KVX_SFR_TCR_T0CE_WFXL_MASK _ULL(0x10000)
+#define KVX_SFR_TCR_T0CE_WFXL_CLEAR _ULL(0x10000)
+#define KVX_SFR_TCR_T0CE_WFXL_SET _ULL(0x1000000000000)
+
+#define KVX_SFR_TCR_T1CE_MASK _ULL(0x20000) /* Timer 1 Count Enable */
+#define KVX_SFR_TCR_T1CE_SHIFT 17
+#define KVX_SFR_TCR_T1CE_WIDTH 1
+#define KVX_SFR_TCR_T1CE_WFXL_MASK _ULL(0x20000)
+#define KVX_SFR_TCR_T1CE_WFXL_CLEAR _ULL(0x20000)
+#define KVX_SFR_TCR_T1CE_WFXL_SET _ULL(0x2000000000000)
+
+#define KVX_SFR_TCR_T0IE_MASK _ULL(0x40000) /* Timer 0 Interrupt Enable */
+#define KVX_SFR_TCR_T0IE_SHIFT 18
+#define KVX_SFR_TCR_T0IE_WIDTH 1
+#define KVX_SFR_TCR_T0IE_WFXL_MASK _ULL(0x40000)
+#define KVX_SFR_TCR_T0IE_WFXL_CLEAR _ULL(0x40000)
+#define KVX_SFR_TCR_T0IE_WFXL_SET _ULL(0x4000000000000)
+
+#define KVX_SFR_TCR_T1IE_MASK _ULL(0x80000) /* Timer 1 Interrupt Enable */
+#define KVX_SFR_TCR_T1IE_SHIFT 19
+#define KVX_SFR_TCR_T1IE_WIDTH 1
+#define KVX_SFR_TCR_T1IE_WFXL_MASK _ULL(0x80000)
+#define KVX_SFR_TCR_T1IE_WFXL_CLEAR _ULL(0x80000)
+#define KVX_SFR_TCR_T1IE_WFXL_SET _ULL(0x8000000000000)
+
+#define KVX_SFR_TCR_T0ST_MASK _ULL(0x100000) /* Timer 0 Status */
+#define KVX_SFR_TCR_T0ST_SHIFT 20
+#define KVX_SFR_TCR_T0ST_WIDTH 1
+#define KVX_SFR_TCR_T0ST_WFXL_MASK _ULL(0x100000)
+#define KVX_SFR_TCR_T0ST_WFXL_CLEAR _ULL(0x100000)
+#define KVX_SFR_TCR_T0ST_WFXL_SET _ULL(0x10000000000000)
+
+#define KVX_SFR_TCR_T1ST_MASK _ULL(0x200000) /* Timer 1 Status */
+#define KVX_SFR_TCR_T1ST_SHIFT 21
+#define KVX_SFR_TCR_T1ST_WIDTH 1
+#define KVX_SFR_TCR_T1ST_WFXL_MASK _ULL(0x200000)
+#define KVX_SFR_TCR_T1ST_WFXL_CLEAR _ULL(0x200000)
+#define KVX_SFR_TCR_T1ST_WFXL_SET _ULL(0x20000000000000)
+
+#define KVX_SFR_TCR_T0SI_MASK _ULL(0x400000) /* Stop Timer 0 in Idle */
+#define KVX_SFR_TCR_T0SI_SHIFT 22
+#define KVX_SFR_TCR_T0SI_WIDTH 1
+#define KVX_SFR_TCR_T0SI_WFXL_MASK _ULL(0x400000)
+#define KVX_SFR_TCR_T0SI_WFXL_CLEAR _ULL(0x400000)
+#define KVX_SFR_TCR_T0SI_WFXL_SET _ULL(0x40000000000000)
+
+#define KVX_SFR_TCR_T1SI_MASK _ULL(0x800000) /* Stop Timer 1 in Idle */
+#define KVX_SFR_TCR_T1SI_SHIFT 23
+#define KVX_SFR_TCR_T1SI_WIDTH 1
+#define KVX_SFR_TCR_T1SI_WFXL_MASK _ULL(0x800000)
+#define KVX_SFR_TCR_T1SI_WFXL_CLEAR _ULL(0x800000)
+#define KVX_SFR_TCR_T1SI_WFXL_SET _ULL(0x80000000000000)
+
+#define KVX_SFR_TCR_WCE_MASK _ULL(0x1000000) /* Watchdog Counting Enable */
+#define KVX_SFR_TCR_WCE_SHIFT 24
+#define KVX_SFR_TCR_WCE_WIDTH 1
+#define KVX_SFR_TCR_WCE_WFXL_MASK _ULL(0x1000000)
+#define KVX_SFR_TCR_WCE_WFXL_CLEAR _ULL(0x1000000)
+#define KVX_SFR_TCR_WCE_WFXL_SET _ULL(0x100000000000000)
+
+#define KVX_SFR_TCR_WIE_MASK _ULL(0x2000000) /* Watchdog Interrupt Enable */
+#define KVX_SFR_TCR_WIE_SHIFT 25
+#define KVX_SFR_TCR_WIE_WIDTH 1
+#define KVX_SFR_TCR_WIE_WFXL_MASK _ULL(0x2000000)
+#define KVX_SFR_TCR_WIE_WFXL_CLEAR _ULL(0x2000000)
+#define KVX_SFR_TCR_WIE_WFXL_SET _ULL(0x200000000000000)
+
+#define KVX_SFR_TCR_WUI_MASK _ULL(0x4000000) /* Watchdog Underflow Inform */
+#define KVX_SFR_TCR_WUI_SHIFT 26
+#define KVX_SFR_TCR_WUI_WIDTH 1
+#define KVX_SFR_TCR_WUI_WFXL_MASK _ULL(0x4000000)
+#define KVX_SFR_TCR_WUI_WFXL_CLEAR _ULL(0x4000000)
+#define KVX_SFR_TCR_WUI_WFXL_SET _ULL(0x400000000000000)
+
+#define KVX_SFR_TCR_WUS_MASK _ULL(0x8000000) /* Watchdog Underflow Status */
+#define KVX_SFR_TCR_WUS_SHIFT 27
+#define KVX_SFR_TCR_WUS_WIDTH 1
+#define KVX_SFR_TCR_WUS_WFXL_MASK _ULL(0x8000000)
+#define KVX_SFR_TCR_WUS_WFXL_CLEAR _ULL(0x8000000)
+#define KVX_SFR_TCR_WUS_WFXL_SET _ULL(0x800000000000000)
+
+#define KVX_SFR_TCR_WSI_MASK _ULL(0x10000000) /* Watchdog Stop in Idle */
+#define KVX_SFR_TCR_WSI_SHIFT 28
+#define KVX_SFR_TCR_WSI_WIDTH 1
+#define KVX_SFR_TCR_WSI_WFXL_MASK _ULL(0x10000000)
+#define KVX_SFR_TCR_WSI_WFXL_CLEAR _ULL(0x10000000)
+#define KVX_SFR_TCR_WSI_WFXL_SET _ULL(0x1000000000000000)
+
+#define KVX_SFR_PM0_PM0_MASK _ULL(0xffffffffffffffff) /* Performance Monitor 0 */
+#define KVX_SFR_PM0_PM0_SHIFT 0
+#define KVX_SFR_PM0_PM0_WIDTH 64
+#define KVX_SFR_PM0_PM0_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_PM0_PM0_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_PM0_PM0_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_PM0_PM0_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_PM0_PM0_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_PM0_PM0_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_PM1_PM1_MASK _ULL(0xffffffffffffffff) /* Performance Monitor 1 */
+#define KVX_SFR_PM1_PM1_SHIFT 0
+#define KVX_SFR_PM1_PM1_WIDTH 64
+#define KVX_SFR_PM1_PM1_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_PM1_PM1_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_PM1_PM1_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_PM1_PM1_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_PM1_PM1_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_PM1_PM1_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_PM2_PM2_MASK _ULL(0xffffffffffffffff) /* Performance Monitor 2 */
+#define KVX_SFR_PM2_PM2_SHIFT 0
+#define KVX_SFR_PM2_PM2_WIDTH 64
+#define KVX_SFR_PM2_PM2_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_PM2_PM2_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_PM2_PM2_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_PM2_PM2_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_PM2_PM2_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_PM2_PM2_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_PM3_PM3_MASK _ULL(0xffffffffffffffff) /* Performance Monitor 3 */
+#define KVX_SFR_PM3_PM3_SHIFT 0
+#define KVX_SFR_PM3_PM3_WIDTH 64
+#define KVX_SFR_PM3_PM3_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_PM3_PM3_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_PM3_PM3_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_PM3_PM3_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_PM3_PM3_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_PM3_PM3_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_PMSA_PMSA_MASK _ULL(0xffffffffffffffff) /* Performance Monitor Saved Address */
+#define KVX_SFR_PMSA_PMSA_SHIFT 0
+#define KVX_SFR_PMSA_PMSA_WIDTH 64
+#define KVX_SFR_PMSA_PMSA_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_PMSA_PMSA_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_PMSA_PMSA_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_PMSA_PMSA_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_PMSA_PMSA_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_PMSA_PMSA_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_T0V_T0V_MASK _ULL(0xffffffffffffffff) /* Timer 0 value */
+#define KVX_SFR_T0V_T0V_SHIFT 0
+#define KVX_SFR_T0V_T0V_WIDTH 64
+#define KVX_SFR_T0V_T0V_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_T0V_T0V_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_T0V_T0V_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_T0V_T0V_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_T0V_T0V_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_T0V_T0V_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_T1V_T1V_MASK _ULL(0xffffffffffffffff) /* Timer 1 value */
+#define KVX_SFR_T1V_T1V_SHIFT 0
+#define KVX_SFR_T1V_T1V_WIDTH 64
+#define KVX_SFR_T1V_T1V_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_T1V_T1V_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_T1V_T1V_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_T1V_T1V_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_T1V_T1V_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_T1V_T1V_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_T0R_T0R_MASK _ULL(0xffffffffffffffff) /* Timer 0 reload value */
+#define KVX_SFR_T0R_T0R_SHIFT 0
+#define KVX_SFR_T0R_T0R_WIDTH 64
+#define KVX_SFR_T0R_T0R_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_T0R_T0R_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_T0R_T0R_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_T0R_T0R_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_T0R_T0R_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_T0R_T0R_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_T1R_T1R_MASK _ULL(0xffffffffffffffff) /* Timer 1 reload value */
+#define KVX_SFR_T1R_T1R_SHIFT 0
+#define KVX_SFR_T1R_T1R_WIDTH 64
+#define KVX_SFR_T1R_T1R_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_T1R_T1R_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_T1R_T1R_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_T1R_T1R_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_T1R_T1R_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_T1R_T1R_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_WDV_WDV_MASK _ULL(0xffffffffffffffff) /* Watchdog Value */
+#define KVX_SFR_WDV_WDV_SHIFT 0
+#define KVX_SFR_WDV_WDV_WIDTH 64
+#define KVX_SFR_WDV_WDV_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_WDV_WDV_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_WDV_WDV_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_WDV_WDV_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_WDV_WDV_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_WDV_WDV_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_WDR_WDR_MASK _ULL(0xffffffffffffffff) /* Watchdog Reload Value */
+#define KVX_SFR_WDR_WDR_SHIFT 0
+#define KVX_SFR_WDR_WDR_WIDTH 64
+#define KVX_SFR_WDR_WDR_WFXL_MASK _ULL(0xffffffff)
+#define KVX_SFR_WDR_WDR_WFXL_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_WDR_WDR_WFXL_SET _ULL(0xffffffff00000000)
+#define KVX_SFR_WDR_WDR_WFXM_MASK _ULL(0xffffffff00000000)
+#define KVX_SFR_WDR_WDR_WFXM_CLEAR _ULL(0xffffffff)
+#define KVX_SFR_WDR_WDR_WFXM_SET _ULL(0xffffffff00000000)
+
+#define KVX_SFR_PMC_PM0C_MASK _ULL(0x3f) /* PM0 Configuration */
+#define KVX_SFR_PMC_PM0C_SHIFT 0
+#define KVX_SFR_PMC_PM0C_WIDTH 6
+#define KVX_SFR_PMC_PM0C_WFXL_MASK _ULL(0x3f)
+#define KVX_SFR_PMC_PM0C_WFXL_CLEAR _ULL(0x3f)
+#define KVX_SFR_PMC_PM0C_WFXL_SET _ULL(0x3f00000000)
+
+#define KVX_SFR_PMC_PM1C_MASK _ULL(0x1f80) /* PM1 Configuration */
+#define KVX_SFR_PMC_PM1C_SHIFT 7
+#define KVX_SFR_PMC_PM1C_WIDTH 6
+#define KVX_SFR_PMC_PM1C_WFXL_MASK _ULL(0x1f80)
+#define KVX_SFR_PMC_PM1C_WFXL_CLEAR _ULL(0x1f80)
+#define KVX_SFR_PMC_PM1C_WFXL_SET _ULL(0x1f8000000000)
+
+#define KVX_SFR_PMC_PM2C_MASK _ULL(0xfc000) /* PM2 Configuration */
+#define KVX_SFR_PMC_PM2C_SHIFT 14
+#define KVX_SFR_PMC_PM2C_WIDTH 6
+#define KVX_SFR_PMC_PM2C_WFXL_MASK _ULL(0xfc000)
+#define KVX_SFR_PMC_PM2C_WFXL_CLEAR _ULL(0xfc000)
+#define KVX_SFR_PMC_PM2C_WFXL_SET _ULL(0xfc00000000000)
+
+#define KVX_SFR_PMC_PM3C_MASK _ULL(0x7e00000) /* PM3 Configuration */
+#define KVX_SFR_PMC_PM3C_SHIFT 21
+#define KVX_SFR_PMC_PM3C_WIDTH 6
+#define KVX_SFR_PMC_PM3C_WFXL_MASK _ULL(0x7e00000)
+#define KVX_SFR_PMC_PM3C_WFXL_CLEAR _ULL(0x7e00000)
+#define KVX_SFR_PMC_PM3C_WFXL_SET _ULL(0x7e0000000000000)
+
+#define KVX_SFR_PMC_SAV_MASK _ULL(0x40000000) /* Saved Address Valid */
+#define KVX_SFR_PMC_SAV_SHIFT 30
+#define KVX_SFR_PMC_SAV_WIDTH 1
+#define KVX_SFR_PMC_SAV_WFXL_MASK _ULL(0x40000000)
+#define KVX_SFR_PMC_SAV_WFXL_CLEAR _ULL(0x40000000)
+#define KVX_SFR_PMC_SAV_WFXL_SET _ULL(0x4000000000000000)
+
+#define KVX_SFR_PMC_PM0IE_MASK _ULL(0x100000000) /* PM0 Interrupt Enable */
+#define KVX_SFR_PMC_PM0IE_SHIFT 32
+#define KVX_SFR_PMC_PM0IE_WIDTH 1
+#define KVX_SFR_PMC_PM0IE_WFXM_MASK _ULL(0x100000000)
+#define KVX_SFR_PMC_PM0IE_WFXM_CLEAR _ULL(0x1)
+#define KVX_SFR_PMC_PM0IE_WFXM_SET _ULL(0x100000000)
+
+#define KVX_SFR_PMC_PM1IE_MASK _ULL(0x200000000) /* PM1 Interrupt Enable */
+#define KVX_SFR_PMC_PM1IE_SHIFT 33
+#define KVX_SFR_PMC_PM1IE_WIDTH 1
+#define KVX_SFR_PMC_PM1IE_WFXM_MASK _ULL(0x200000000)
+#define KVX_SFR_PMC_PM1IE_WFXM_CLEAR _ULL(0x2)
+#define KVX_SFR_PMC_PM1IE_WFXM_SET _ULL(0x200000000)
+
+#define KVX_SFR_PMC_PM2IE_MASK _ULL(0x400000000) /* PM2 Interrupt Enable */
+#define KVX_SFR_PMC_PM2IE_SHIFT 34
+#define KVX_SFR_PMC_PM2IE_WIDTH 1
+#define KVX_SFR_PMC_PM2IE_WFXM_MASK _ULL(0x400000000)
+#define KVX_SFR_PMC_PM2IE_WFXM_CLEAR _ULL(0x4)
+#define KVX_SFR_PMC_PM2IE_WFXM_SET _ULL(0x400000000)
+
+#define KVX_SFR_PMC_PM3IE_MASK _ULL(0x800000000) /* PM3 Interrupt Enable */
+#define KVX_SFR_PMC_PM3IE_SHIFT 35
+#define KVX_SFR_PMC_PM3IE_WIDTH 1
+#define KVX_SFR_PMC_PM3IE_WFXM_MASK _ULL(0x800000000)
+#define KVX_SFR_PMC_PM3IE_WFXM_CLEAR _ULL(0x8)
+#define KVX_SFR_PMC_PM3IE_WFXM_SET _ULL(0x800000000)
+
+#define KVX_SFR_PMC_SAT_MASK _ULL(0x3000000000) /* Saved Address Type */
+#define KVX_SFR_PMC_SAT_SHIFT 36
+#define KVX_SFR_PMC_SAT_WIDTH 2
+#define KVX_SFR_PMC_SAT_WFXM_MASK _ULL(0x3000000000)
+#define KVX_SFR_PMC_SAT_WFXM_CLEAR _ULL(0x30)
+#define KVX_SFR_PMC_SAT_WFXM_SET _ULL(0x3000000000)
+
+#define KVX_SFR_PCR_PID_MASK _ULL(0xff) /* Processing Identifier in cluster */
+#define KVX_SFR_PCR_PID_SHIFT 0
+#define KVX_SFR_PCR_PID_WIDTH 8
+#define KVX_SFR_PCR_PID_WFXL_MASK _ULL(0xff)
+#define KVX_SFR_PCR_PID_WFXL_CLEAR _ULL(0xff)
+#define KVX_SFR_PCR_PID_WFXL_SET _ULL(0xff00000000)
+
+#define KVX_SFR_PCR_CID_MASK _ULL(0xff00) /* Cluster Identifier in system */
+#define KVX_SFR_PCR_CID_SHIFT 8
+#define KVX_SFR_PCR_CID_WIDTH 8
+#define KVX_SFR_PCR_CID_WFXL_MASK _ULL(0xff00)
+#define KVX_SFR_PCR_CID_WFXL_CLEAR _ULL(0xff00)
+#define KVX_SFR_PCR_CID_WFXL_SET _ULL(0xff0000000000)
+
+#define KVX_SFR_PCR_MID_MASK _ULL(0xff0000) /* MPPA Identifier */
+#define KVX_SFR_PCR_MID_SHIFT 16
+#define KVX_SFR_PCR_MID_WIDTH 8
+#define KVX_SFR_PCR_MID_WFXL_MASK _ULL(0xff0000)
+#define KVX_SFR_PCR_MID_WFXL_CLEAR _ULL(0xff0000)
+#define KVX_SFR_PCR_MID_WFXL_SET _ULL(0xff000000000000)
+
+#define KVX_SFR_PCR_CAR_MASK _ULL(0xf000000) /* Core Architecture Revision ID */
+#define KVX_SFR_PCR_CAR_SHIFT 24
+#define KVX_SFR_PCR_CAR_WIDTH 4
+#define KVX_SFR_PCR_CAR_WFXL_MASK _ULL(0xf000000)
+#define KVX_SFR_PCR_CAR_WFXL_CLEAR _ULL(0xf000000)
+#define KVX_SFR_PCR_CAR_WFXL_SET _ULL(0xf00000000000000)
+
+#define KVX_SFR_PCR_CMA_MASK _ULL(0xf0000000) /* Core Micro-Architecture Revision ID */
+#define KVX_SFR_PCR_CMA_SHIFT 28
+#define KVX_SFR_PCR_CMA_WIDTH 4
+#define KVX_SFR_PCR_CMA_WFXL_MASK _ULL(0xf0000000)
+#define KVX_SFR_PCR_CMA_WFXL_CLEAR _ULL(0xf0000000)
+#define KVX_SFR_PCR_CMA_WFXL_SET _ULL(0xf000000000000000)
+
+#define KVX_SFR_PCR_SV_MASK _ULL(0xff00000000) /* System-On-Chip Version */
+#define KVX_SFR_PCR_SV_SHIFT 32
+#define KVX_SFR_PCR_SV_WIDTH 8
+#define KVX_SFR_PCR_SV_WFXM_MASK _ULL(0xff00000000)
+#define KVX_SFR_PCR_SV_WFXM_CLEAR _ULL(0xff)
+#define
KVX_SFR_PCR_SV_WFXM_SET _ULL(0xff00000000) + +#define KVX_SFR_PCR_ST_MASK _ULL(0xf0000000000) /* System-On-Chip Type */ +#define KVX_SFR_PCR_ST_SHIFT 40 +#define KVX_SFR_PCR_ST_WIDTH 4 +#define KVX_SFR_PCR_ST_WFXM_MASK _ULL(0xf0000000000) +#define KVX_SFR_PCR_ST_WFXM_CLEAR _ULL(0xf00) +#define KVX_SFR_PCR_ST_WFXM_SET _ULL(0xf0000000000) + +#define KVX_SFR_PCR_BM_MASK _ULL(0xff00000000000) /* Boot Mode */ +#define KVX_SFR_PCR_BM_SHIFT 44 +#define KVX_SFR_PCR_BM_WIDTH 8 +#define KVX_SFR_PCR_BM_WFXM_MASK _ULL(0xff00000000000) +#define KVX_SFR_PCR_BM_WFXM_CLEAR _ULL(0xff000) +#define KVX_SFR_PCR_BM_WFXM_SET _ULL(0xff00000000000) + +#define KVX_SFR_PCR_COE_MASK _ULL(0x10000000000000) /* COprocessor Enable = */ +#define KVX_SFR_PCR_COE_SHIFT 52 +#define KVX_SFR_PCR_COE_WIDTH 1 +#define KVX_SFR_PCR_COE_WFXM_MASK _ULL(0x10000000000000) +#define KVX_SFR_PCR_COE_WFXM_CLEAR _ULL(0x100000) +#define KVX_SFR_PCR_COE_WFXM_SET _ULL(0x10000000000000) + +#define KVX_SFR_PCR_L1CE_MASK _ULL(0x20000000000000) /* L1 cache Coherency= Enable */ +#define KVX_SFR_PCR_L1CE_SHIFT 53 +#define KVX_SFR_PCR_L1CE_WIDTH 1 +#define KVX_SFR_PCR_L1CE_WFXM_MASK _ULL(0x20000000000000) +#define KVX_SFR_PCR_L1CE_WFXM_CLEAR _ULL(0x200000) +#define KVX_SFR_PCR_L1CE_WFXM_SET _ULL(0x20000000000000) + +#define KVX_SFR_PCR_DSEM_MASK _ULL(0x40000000000000) /* Data Simple Ecc ex= ception Mode */ +#define KVX_SFR_PCR_DSEM_SHIFT 54 +#define KVX_SFR_PCR_DSEM_WIDTH 1 +#define KVX_SFR_PCR_DSEM_WFXM_MASK _ULL(0x40000000000000) +#define KVX_SFR_PCR_DSEM_WFXM_CLEAR _ULL(0x400000) +#define KVX_SFR_PCR_DSEM_WFXM_SET _ULL(0x40000000000000) + +#define KVX_SFR_MMC_ASN_MASK _ULL(0x1ff) /* Address Space Number */ +#define KVX_SFR_MMC_ASN_SHIFT 0 +#define KVX_SFR_MMC_ASN_WIDTH 9 +#define KVX_SFR_MMC_ASN_WFXL_MASK _ULL(0x1ff) +#define KVX_SFR_MMC_ASN_WFXL_CLEAR _ULL(0x1ff) +#define KVX_SFR_MMC_ASN_WFXL_SET _ULL(0x1ff00000000) + +#define KVX_SFR_MMC_S_MASK _ULL(0x200) /* Speculative */ +#define KVX_SFR_MMC_S_SHIFT 9 +#define KVX_SFR_MMC_S_WIDTH 1 +#define KVX_SFR_MMC_S_WFXL_MASK _ULL(0x200) +#define KVX_SFR_MMC_S_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_MMC_S_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_MMC_SNE_MASK _ULL(0x4000) /* Speculative NOMAPPING Enable = */ +#define KVX_SFR_MMC_SNE_SHIFT 14 +#define KVX_SFR_MMC_SNE_WIDTH 1 +#define KVX_SFR_MMC_SNE_WFXL_MASK _ULL(0x4000) +#define KVX_SFR_MMC_SNE_WFXL_CLEAR _ULL(0x4000) +#define KVX_SFR_MMC_SNE_WFXL_SET _ULL(0x400000000000) + +#define KVX_SFR_MMC_SPE_MASK _ULL(0x8000) /* Speculative PROTECTION Enable= */ +#define KVX_SFR_MMC_SPE_SHIFT 15 +#define KVX_SFR_MMC_SPE_WIDTH 1 +#define KVX_SFR_MMC_SPE_WFXL_MASK _ULL(0x8000) +#define KVX_SFR_MMC_SPE_WFXL_CLEAR _ULL(0x8000) +#define KVX_SFR_MMC_SPE_WFXL_SET _ULL(0x800000000000) + +#define KVX_SFR_MMC_PTC_MASK _ULL(0x30000) /* Protection Trap Cause */ +#define KVX_SFR_MMC_PTC_SHIFT 16 +#define KVX_SFR_MMC_PTC_WIDTH 2 +#define KVX_SFR_MMC_PTC_WFXL_MASK _ULL(0x30000) +#define KVX_SFR_MMC_PTC_WFXL_CLEAR _ULL(0x30000) +#define KVX_SFR_MMC_PTC_WFXL_SET _ULL(0x3000000000000) + +#define KVX_SFR_MMC_SW_MASK _ULL(0x3c0000) /* Select Way */ +#define KVX_SFR_MMC_SW_SHIFT 18 +#define KVX_SFR_MMC_SW_WIDTH 4 +#define KVX_SFR_MMC_SW_WFXL_MASK _ULL(0x3c0000) +#define KVX_SFR_MMC_SW_WFXL_CLEAR _ULL(0x3c0000) +#define KVX_SFR_MMC_SW_WFXL_SET _ULL(0x3c000000000000) + +#define KVX_SFR_MMC_SS_MASK _ULL(0xfc00000) /* Select Set */ +#define KVX_SFR_MMC_SS_SHIFT 22 +#define KVX_SFR_MMC_SS_WIDTH 6 +#define KVX_SFR_MMC_SS_WFXL_MASK _ULL(0xfc00000) +#define KVX_SFR_MMC_SS_WFXL_CLEAR 
_ULL(0xfc00000) +#define KVX_SFR_MMC_SS_WFXL_SET _ULL(0xfc0000000000000) + +#define KVX_SFR_MMC_SB_MASK _ULL(0x10000000) /* Select Buffer */ +#define KVX_SFR_MMC_SB_SHIFT 28 +#define KVX_SFR_MMC_SB_WIDTH 1 +#define KVX_SFR_MMC_SB_WFXL_MASK _ULL(0x10000000) +#define KVX_SFR_MMC_SB_WFXL_CLEAR _ULL(0x10000000) +#define KVX_SFR_MMC_SB_WFXL_SET _ULL(0x1000000000000000) + +#define KVX_SFR_MMC_PAR_MASK _ULL(0x40000000) /* PARity error flag */ +#define KVX_SFR_MMC_PAR_SHIFT 30 +#define KVX_SFR_MMC_PAR_WIDTH 1 +#define KVX_SFR_MMC_PAR_WFXL_MASK _ULL(0x40000000) +#define KVX_SFR_MMC_PAR_WFXL_CLEAR _ULL(0x40000000) +#define KVX_SFR_MMC_PAR_WFXL_SET _ULL(0x4000000000000000) + +#define KVX_SFR_MMC_E_MASK _ULL(0x80000000) /* Error Flag */ +#define KVX_SFR_MMC_E_SHIFT 31 +#define KVX_SFR_MMC_E_WIDTH 1 +#define KVX_SFR_MMC_E_WFXL_MASK _ULL(0x80000000) +#define KVX_SFR_MMC_E_WFXL_CLEAR _ULL(0x80000000) +#define KVX_SFR_MMC_E_WFXL_SET _ULL(0x8000000000000000) + +#define KVX_SFR_TEL_ES_MASK _ULL(0x3) /* Entry Status */ +#define KVX_SFR_TEL_ES_SHIFT 0 +#define KVX_SFR_TEL_ES_WIDTH 2 +#define KVX_SFR_TEL_ES_WFXL_MASK _ULL(0x3) +#define KVX_SFR_TEL_ES_WFXL_CLEAR _ULL(0x3) +#define KVX_SFR_TEL_ES_WFXL_SET _ULL(0x300000000) + +#define KVX_SFR_TEL_CP_MASK _ULL(0xc) /* Cache Policy */ +#define KVX_SFR_TEL_CP_SHIFT 2 +#define KVX_SFR_TEL_CP_WIDTH 2 +#define KVX_SFR_TEL_CP_WFXL_MASK _ULL(0xc) +#define KVX_SFR_TEL_CP_WFXL_CLEAR _ULL(0xc) +#define KVX_SFR_TEL_CP_WFXL_SET _ULL(0xc00000000) + +#define KVX_SFR_TEL_PA_MASK _ULL(0xf0) /* Protection Attributes */ +#define KVX_SFR_TEL_PA_SHIFT 4 +#define KVX_SFR_TEL_PA_WIDTH 4 +#define KVX_SFR_TEL_PA_WFXL_MASK _ULL(0xf0) +#define KVX_SFR_TEL_PA_WFXL_CLEAR _ULL(0xf0) +#define KVX_SFR_TEL_PA_WFXL_SET _ULL(0xf000000000) + +#define KVX_SFR_TEL_PS_MASK _ULL(0xc00) /* Page Size */ +#define KVX_SFR_TEL_PS_SHIFT 10 +#define KVX_SFR_TEL_PS_WIDTH 2 +#define KVX_SFR_TEL_PS_WFXL_MASK _ULL(0xc00) +#define KVX_SFR_TEL_PS_WFXL_CLEAR _ULL(0xc00) +#define KVX_SFR_TEL_PS_WFXL_SET _ULL(0xc0000000000) + +#define KVX_SFR_TEL_FN_MASK _ULL(0xfffffff000) /* Frame Number */ +#define KVX_SFR_TEL_FN_SHIFT 12 +#define KVX_SFR_TEL_FN_WIDTH 28 +#define KVX_SFR_TEL_FN_WFXL_MASK _ULL(0xfffff000) +#define KVX_SFR_TEL_FN_WFXL_CLEAR _ULL(0xfffff000) +#define KVX_SFR_TEL_FN_WFXL_SET _ULL(0xfffff00000000000) +#define KVX_SFR_TEL_FN_WFXM_MASK _ULL(0xff00000000) +#define KVX_SFR_TEL_FN_WFXM_CLEAR _ULL(0xff) +#define KVX_SFR_TEL_FN_WFXM_SET _ULL(0xff00000000) + +#define KVX_SFR_TEH_ASN_MASK _ULL(0x1ff) /* Address Space Number */ +#define KVX_SFR_TEH_ASN_SHIFT 0 +#define KVX_SFR_TEH_ASN_WIDTH 9 +#define KVX_SFR_TEH_ASN_WFXL_MASK _ULL(0x1ff) +#define KVX_SFR_TEH_ASN_WFXL_CLEAR _ULL(0x1ff) +#define KVX_SFR_TEH_ASN_WFXL_SET _ULL(0x1ff00000000) + +#define KVX_SFR_TEH_G_MASK _ULL(0x200) /* Global page indicator */ +#define KVX_SFR_TEH_G_SHIFT 9 +#define KVX_SFR_TEH_G_WIDTH 1 +#define KVX_SFR_TEH_G_WFXL_MASK _ULL(0x200) +#define KVX_SFR_TEH_G_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_TEH_G_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_TEH_VS_MASK _ULL(0xc00) /* Virtual Space */ +#define KVX_SFR_TEH_VS_SHIFT 10 +#define KVX_SFR_TEH_VS_WIDTH 2 +#define KVX_SFR_TEH_VS_WFXL_MASK _ULL(0xc00) +#define KVX_SFR_TEH_VS_WFXL_CLEAR _ULL(0xc00) +#define KVX_SFR_TEH_VS_WFXL_SET _ULL(0xc0000000000) + +#define KVX_SFR_TEH_PN_MASK _ULL(0x1fffffff000) /* Page Number */ +#define KVX_SFR_TEH_PN_SHIFT 12 +#define KVX_SFR_TEH_PN_WIDTH 29 +#define KVX_SFR_TEH_PN_WFXL_MASK _ULL(0xfffff000) +#define KVX_SFR_TEH_PN_WFXL_CLEAR _ULL(0xfffff000) 
+#define KVX_SFR_TEH_PN_WFXL_SET _ULL(0xfffff00000000000) +#define KVX_SFR_TEH_PN_WFXM_MASK _ULL(0x1ff00000000) +#define KVX_SFR_TEH_PN_WFXM_CLEAR _ULL(0x1ff) +#define KVX_SFR_TEH_PN_WFXM_SET _ULL(0x1ff00000000) + +#define KVX_SFR_DC_BE0_MASK _ULL(0x1) /* Breakpoint 0 Enable */ +#define KVX_SFR_DC_BE0_SHIFT 0 +#define KVX_SFR_DC_BE0_WIDTH 1 +#define KVX_SFR_DC_BE0_WFXL_MASK _ULL(0x1) +#define KVX_SFR_DC_BE0_WFXL_CLEAR _ULL(0x1) +#define KVX_SFR_DC_BE0_WFXL_SET _ULL(0x100000000) + +#define KVX_SFR_DC_BR0_MASK _ULL(0x7e) /* Breakpoint 0 Range */ +#define KVX_SFR_DC_BR0_SHIFT 1 +#define KVX_SFR_DC_BR0_WIDTH 6 +#define KVX_SFR_DC_BR0_WFXL_MASK _ULL(0x7e) +#define KVX_SFR_DC_BR0_WFXL_CLEAR _ULL(0x7e) +#define KVX_SFR_DC_BR0_WFXL_SET _ULL(0x7e00000000) + +#define KVX_SFR_DC_BE1_MASK _ULL(0x80) /* Breakpoint 1 Enable */ +#define KVX_SFR_DC_BE1_SHIFT 7 +#define KVX_SFR_DC_BE1_WIDTH 1 +#define KVX_SFR_DC_BE1_WFXL_MASK _ULL(0x80) +#define KVX_SFR_DC_BE1_WFXL_CLEAR _ULL(0x80) +#define KVX_SFR_DC_BE1_WFXL_SET _ULL(0x8000000000) + +#define KVX_SFR_DC_BR1_MASK _ULL(0x3f00) /* Breakpoint 1 Range */ +#define KVX_SFR_DC_BR1_SHIFT 8 +#define KVX_SFR_DC_BR1_WIDTH 6 +#define KVX_SFR_DC_BR1_WFXL_MASK _ULL(0x3f00) +#define KVX_SFR_DC_BR1_WFXL_CLEAR _ULL(0x3f00) +#define KVX_SFR_DC_BR1_WFXL_SET _ULL(0x3f0000000000) + +#define KVX_SFR_DC_WE0_MASK _ULL(0x4000) /* Watchpoint 0 Enable */ +#define KVX_SFR_DC_WE0_SHIFT 14 +#define KVX_SFR_DC_WE0_WIDTH 1 +#define KVX_SFR_DC_WE0_WFXL_MASK _ULL(0x4000) +#define KVX_SFR_DC_WE0_WFXL_CLEAR _ULL(0x4000) +#define KVX_SFR_DC_WE0_WFXL_SET _ULL(0x400000000000) + +#define KVX_SFR_DC_WR0_MASK _ULL(0x1f8000) /* Watchpoint 0 Range */ +#define KVX_SFR_DC_WR0_SHIFT 15 +#define KVX_SFR_DC_WR0_WIDTH 6 +#define KVX_SFR_DC_WR0_WFXL_MASK _ULL(0x1f8000) +#define KVX_SFR_DC_WR0_WFXL_CLEAR _ULL(0x1f8000) +#define KVX_SFR_DC_WR0_WFXL_SET _ULL(0x1f800000000000) + +#define KVX_SFR_DC_WE1_MASK _ULL(0x200000) /* Watchpoint 1 Enable */ +#define KVX_SFR_DC_WE1_SHIFT 21 +#define KVX_SFR_DC_WE1_WIDTH 1 +#define KVX_SFR_DC_WE1_WFXL_MASK _ULL(0x200000) +#define KVX_SFR_DC_WE1_WFXL_CLEAR _ULL(0x200000) +#define KVX_SFR_DC_WE1_WFXL_SET _ULL(0x20000000000000) + +#define KVX_SFR_DC_WR1_MASK _ULL(0xfc00000) /* Watchpoint 1 Range */ +#define KVX_SFR_DC_WR1_SHIFT 22 +#define KVX_SFR_DC_WR1_WIDTH 6 +#define KVX_SFR_DC_WR1_WFXL_MASK _ULL(0xfc00000) +#define KVX_SFR_DC_WR1_WFXL_CLEAR _ULL(0xfc00000) +#define KVX_SFR_DC_WR1_WFXL_SET _ULL(0xfc0000000000000) + +#define KVX_SFR_MES_PSE_MASK _ULL(0x1) /* Program Simple Ecc */ +#define KVX_SFR_MES_PSE_SHIFT 0 +#define KVX_SFR_MES_PSE_WIDTH 1 +#define KVX_SFR_MES_PSE_WFXL_MASK _ULL(0x1) +#define KVX_SFR_MES_PSE_WFXL_CLEAR _ULL(0x1) +#define KVX_SFR_MES_PSE_WFXL_SET _ULL(0x100000000) + +#define KVX_SFR_MES_PILSY_MASK _ULL(0x2) /* Program cache Invalidated Line= following pSYs error. */ +#define KVX_SFR_MES_PILSY_SHIFT 1 +#define KVX_SFR_MES_PILSY_WIDTH 1 +#define KVX_SFR_MES_PILSY_WFXL_MASK _ULL(0x2) +#define KVX_SFR_MES_PILSY_WFXL_CLEAR _ULL(0x2) +#define KVX_SFR_MES_PILSY_WFXL_SET _ULL(0x200000000) + +#define KVX_SFR_MES_PILDE_MASK _ULL(0x4) /* Program cache Invalidated Line= following pDEcc error. */ +#define KVX_SFR_MES_PILDE_SHIFT 2 +#define KVX_SFR_MES_PILDE_WIDTH 1 +#define KVX_SFR_MES_PILDE_WFXL_MASK _ULL(0x4) +#define KVX_SFR_MES_PILDE_WFXL_CLEAR _ULL(0x4) +#define KVX_SFR_MES_PILDE_WFXL_SET _ULL(0x400000000) + +#define KVX_SFR_MES_PILPA_MASK _ULL(0x8) /* Program cache Invalidated Line= following pPArity error. 
*/ +#define KVX_SFR_MES_PILPA_SHIFT 3 +#define KVX_SFR_MES_PILPA_WIDTH 1 +#define KVX_SFR_MES_PILPA_WFXL_MASK _ULL(0x8) +#define KVX_SFR_MES_PILPA_WFXL_CLEAR _ULL(0x8) +#define KVX_SFR_MES_PILPA_WFXL_SET _ULL(0x800000000) + +#define KVX_SFR_MES_DSE_MASK _ULL(0x10) /* Data Simple Ecc */ +#define KVX_SFR_MES_DSE_SHIFT 4 +#define KVX_SFR_MES_DSE_WIDTH 1 +#define KVX_SFR_MES_DSE_WFXL_MASK _ULL(0x10) +#define KVX_SFR_MES_DSE_WFXL_CLEAR _ULL(0x10) +#define KVX_SFR_MES_DSE_WFXL_SET _ULL(0x1000000000) + +#define KVX_SFR_MES_DILSY_MASK _ULL(0x20) /* Data cache Invalidated Line f= ollowing dSYs error. */ +#define KVX_SFR_MES_DILSY_SHIFT 5 +#define KVX_SFR_MES_DILSY_WIDTH 1 +#define KVX_SFR_MES_DILSY_WFXL_MASK _ULL(0x20) +#define KVX_SFR_MES_DILSY_WFXL_CLEAR _ULL(0x20) +#define KVX_SFR_MES_DILSY_WFXL_SET _ULL(0x2000000000) + +#define KVX_SFR_MES_DILDE_MASK _ULL(0x40) /* Data cache Invalidated Line f= ollowing dDEcc error. */ +#define KVX_SFR_MES_DILDE_SHIFT 6 +#define KVX_SFR_MES_DILDE_WIDTH 1 +#define KVX_SFR_MES_DILDE_WFXL_MASK _ULL(0x40) +#define KVX_SFR_MES_DILDE_WFXL_CLEAR _ULL(0x40) +#define KVX_SFR_MES_DILDE_WFXL_SET _ULL(0x4000000000) + +#define KVX_SFR_MES_DILPA_MASK _ULL(0x80) /* Data cache Invalidated Line f= ollowing dPArity error. */ +#define KVX_SFR_MES_DILPA_SHIFT 7 +#define KVX_SFR_MES_DILPA_WIDTH 1 +#define KVX_SFR_MES_DILPA_WFXL_MASK _ULL(0x80) +#define KVX_SFR_MES_DILPA_WFXL_CLEAR _ULL(0x80) +#define KVX_SFR_MES_DILPA_WFXL_SET _ULL(0x8000000000) + +#define KVX_SFR_MES_DDEE_MASK _ULL(0x100) /* Data DEcc Error. */ +#define KVX_SFR_MES_DDEE_SHIFT 8 +#define KVX_SFR_MES_DDEE_WIDTH 1 +#define KVX_SFR_MES_DDEE_WFXL_MASK _ULL(0x100) +#define KVX_SFR_MES_DDEE_WFXL_CLEAR _ULL(0x100) +#define KVX_SFR_MES_DDEE_WFXL_SET _ULL(0x10000000000) + +#define KVX_SFR_MES_DSYE_MASK _ULL(0x200) /* Data dSYs Error. 
*/ +#define KVX_SFR_MES_DSYE_SHIFT 9 +#define KVX_SFR_MES_DSYE_WIDTH 1 +#define KVX_SFR_MES_DSYE_WFXL_MASK _ULL(0x200) +#define KVX_SFR_MES_DSYE_WFXL_CLEAR _ULL(0x200) +#define KVX_SFR_MES_DSYE_WFXL_SET _ULL(0x20000000000) + +#define KVX_SFR_WS_WU0_MASK _ULL(0x1) /* Wake-Up 0 */ +#define KVX_SFR_WS_WU0_SHIFT 0 +#define KVX_SFR_WS_WU0_WIDTH 1 +#define KVX_SFR_WS_WU0_WFXL_MASK _ULL(0x1) +#define KVX_SFR_WS_WU0_WFXL_CLEAR _ULL(0x1) +#define KVX_SFR_WS_WU0_WFXL_SET _ULL(0x100000000) + +#define KVX_SFR_WS_WU1_MASK _ULL(0x2) /* Wake-Up 1 */ +#define KVX_SFR_WS_WU1_SHIFT 1 +#define KVX_SFR_WS_WU1_WIDTH 1 +#define KVX_SFR_WS_WU1_WFXL_MASK _ULL(0x2) +#define KVX_SFR_WS_WU1_WFXL_CLEAR _ULL(0x2) +#define KVX_SFR_WS_WU1_WFXL_SET _ULL(0x200000000) + +#define KVX_SFR_WS_WU2_MASK _ULL(0x4) /* Wake-Up 2 */ +#define KVX_SFR_WS_WU2_SHIFT 2 +#define KVX_SFR_WS_WU2_WIDTH 1 +#define KVX_SFR_WS_WU2_WFXL_MASK _ULL(0x4) +#define KVX_SFR_WS_WU2_WFXL_CLEAR _ULL(0x4) +#define KVX_SFR_WS_WU2_WFXL_SET _ULL(0x400000000) + +#define KVX_SFR_IPE_FE_MASK _ULL(0xffff) /* Forward Events */ +#define KVX_SFR_IPE_FE_SHIFT 0 +#define KVX_SFR_IPE_FE_WIDTH 16 +#define KVX_SFR_IPE_FE_WFXL_MASK _ULL(0xffff) +#define KVX_SFR_IPE_FE_WFXL_CLEAR _ULL(0xffff) +#define KVX_SFR_IPE_FE_WFXL_SET _ULL(0xffff00000000) + +#define KVX_SFR_IPE_BE_MASK _ULL(0xffff0000) /* Backward Events */ +#define KVX_SFR_IPE_BE_SHIFT 16 +#define KVX_SFR_IPE_BE_WIDTH 16 +#define KVX_SFR_IPE_BE_WFXL_MASK _ULL(0xffff0000) +#define KVX_SFR_IPE_BE_WFXL_CLEAR _ULL(0xffff0000) +#define KVX_SFR_IPE_BE_WFXL_SET _ULL(0xffff000000000000) + +#define KVX_SFR_IPE_FM_MASK _ULL(0xffff00000000) /* Forward Mode */ +#define KVX_SFR_IPE_FM_SHIFT 32 +#define KVX_SFR_IPE_FM_WIDTH 16 +#define KVX_SFR_IPE_FM_WFXM_MASK _ULL(0xffff00000000) +#define KVX_SFR_IPE_FM_WFXM_CLEAR _ULL(0xffff) +#define KVX_SFR_IPE_FM_WFXM_SET _ULL(0xffff00000000) + +#define KVX_SFR_IPE_BM_MASK _ULL(0xffff000000000000) /* Backward Modes */ +#define KVX_SFR_IPE_BM_SHIFT 48 +#define KVX_SFR_IPE_BM_WIDTH 16 +#define KVX_SFR_IPE_BM_WFXM_MASK _ULL(0xffff000000000000) +#define KVX_SFR_IPE_BM_WFXM_CLEAR _ULL(0xffff0000) +#define KVX_SFR_IPE_BM_WFXM_SET _ULL(0xffff000000000000) + +#endif /*_ASM_KVX_SFR_DEFS_H */ diff --git a/arch/kvx/include/asm/swab.h b/arch/kvx/include/asm/swab.h new file mode 100644 index 0000000000000..83a48871c8fbb --- /dev/null +++ b/arch/kvx/include/asm/swab.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. 
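Each field above follows the same pattern: a MASK/SHIFT/WIDTH triple, plus
_WFXL constants (for SFR bits 0-31) or _WFXM constants (for SFR bits 32-63).
As the _CLEAR/_SET pairs suggest, the low 32 bits of a wfxl/wfxm operand
select the bits to clear and the high 32 bits the bits to set. A minimal
sketch of composing such an operand; kvx_sfr_set_mask() is a hypothetical
helper named here only for illustration:

    /* enable Timer 0 counting: clear-then-set bit 16 of $tcr via wfxl */
    unsigned long long op = KVX_SFR_TCR_T0CE_WFXL_CLEAR  /* low word: clear */
                          | KVX_SFR_TCR_T0CE_WFXL_SET;   /* high word: set  */
    /* op == 0x0001000000010000ULL; e.g. kvx_sfr_set_mask(TCR, op) */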
diff --git a/arch/kvx/include/asm/swab.h b/arch/kvx/include/asm/swab.h
new file mode 100644
index 0000000000000..83a48871c8fbb
--- /dev/null
+++ b/arch/kvx/include/asm/swab.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_SWAB_H
+#define _ASM_KVX_SWAB_H
+
+#include
+
+#define U64_BYTE_SWAP_MATRIX	0x0102040810204080ULL
+#define U32_BYTE_SWAP_MATRIX	0x0000000001020408ULL
+#define U16_BYTE_SWAP_MATRIX	0x0000000000000102ULL
+#define U32_WORD_SWAP_MATRIX	0x0000000002010804ULL
+#define U32_HL_BYTE_SWAP_MATRIX	0x0000000004080102ULL
+
+static inline __attribute_const__ __u64 __arch_swab64(__u64 val)
+{
+	return __builtin_kvx_sbmm8(val, U64_BYTE_SWAP_MATRIX);
+}
+
+static inline __attribute_const__ __u32 __arch_swab32(__u32 val)
+{
+	return __builtin_kvx_sbmm8(val, U32_BYTE_SWAP_MATRIX);
+}
+
+static inline __attribute_const__ __u16 __arch_swab16(__u16 val)
+{
+	return __builtin_kvx_sbmm8(val, U16_BYTE_SWAP_MATRIX);
+}
+
+static inline __attribute_const__ __u32 __arch_swahw32(__u32 val)
+{
+	return __builtin_kvx_sbmm8(val, U32_WORD_SWAP_MATRIX);
+}
+
+static inline __attribute_const__ __u32 __arch_swahb32(__u32 val)
+{
+	return __builtin_kvx_sbmm8(val, U32_HL_BYTE_SWAP_MATRIX);
+}
+
+#define __arch_swab64 __arch_swab64
+#define __arch_swab32 __arch_swab32
+#define __arch_swab16 __arch_swab16
+#define __arch_swahw32 __arch_swahw32
+#define __arch_swahb32 __arch_swahb32
+#endif
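The swap constants are 8x8 bit matrices consumed by the kvx sbmm8
bit-matrix-multiply builtin: multiplying by a permutation matrix permutes
the bytes, so 0x0102040810204080 (the anti-diagonal) reverses all eight
bytes of a 64-bit value. A sketch of the resulting semantics, with
illustrative values:

    __u64 x = 0x0011223344556677ULL;
    __u64 y = __arch_swab64(x);            /* y == 0x7766554433221100ULL */
    __u32 h = __arch_swahw32(0xaaaabbbbU); /* halfword swap: 0xbbbbaaaaU */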
diff --git a/arch/kvx/include/asm/sys_arch.h b/arch/kvx/include/asm/sys_arch.h
new file mode 100644
index 0000000000000..ae518106a0a8d
--- /dev/null
+++ b/arch/kvx/include/asm/sys_arch.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_SYS_ARCH_H
+#define _ASM_KVX_SYS_ARCH_H
+
+#include
+
+#define EXCEPTION_STRIDE	0x40
+#define EXCEPTION_ALIGNMENT	0x100
+
+#define kvx_cluster_id() ((int) \
+	((kvx_sfr_get(PCR) & KVX_SFR_PCR_CID_MASK) \
+	 >> KVX_SFR_PCR_CID_SHIFT))
+
+#define KVX_SFR_START(__sfr_reg) \
+	(KVX_SFR_## __sfr_reg ## _SHIFT)
+
+#define KVX_SFR_END(__sfr_reg) \
+	(KVX_SFR_## __sfr_reg ## _SHIFT + KVX_SFR_## __sfr_reg ## _WIDTH - 1)
+
+/**
+ * Get the value to clear a sfr
+ */
+#define SFR_CLEAR(__sfr, __field, __lm) \
+	KVX_SFR_## __sfr ## _ ## __field ## _ ## __lm ## _CLEAR
+
+#define SFR_CLEAR_WFXL(__sfr, __field) SFR_CLEAR(__sfr, __field, WFXL)
+#define SFR_CLEAR_WFXM(__sfr, __field) SFR_CLEAR(__sfr, __field, WFXM)
+
+/**
+ * Get the value to set a sfr.
+ */
+#define SFR_SET_WFXL(__sfr, __field, __val) \
+	(__val << (KVX_SFR_ ## __sfr ## _ ## __field ## _SHIFT + 32))
+
+#define SFR_SET_WFXM(__sfr, __field, __val) \
+	(__val << (KVX_SFR_ ## __sfr ## _ ## __field ## _SHIFT))
+
+/**
+ * Generate the mask to clear and set a value using wfx{m|l}.
+ */
+#define SFR_SET_VAL_WFXL(__sfr, __field, __val) \
+	(SFR_SET_WFXL(__sfr, __field, __val) | SFR_CLEAR_WFXL(__sfr, __field))
+#define SFR_SET_VAL_WFXM(__sfr, __field, __val) \
+	(SFR_SET_WFXM(__sfr, __field, __val) | SFR_CLEAR_WFXM(__sfr, __field))
+
+#endif	/* _ASM_KVX_SYS_ARCH_H */
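SFR_SET_VAL_WFXL() thus ORs the field's clear mask (low word) with the
caller's value shifted into the set half (high word), producing a single
wfxl operand that clears the field and writes the new value in one go.
A sketch of the expansion, using the PS.ET field that head.S manipulates
later in this series:

    /*
     * SFR_SET_VAL_WFXL(PS, ET, 1) expands to
     *   (1 << (KVX_SFR_PS_ET_SHIFT + 32)) | KVX_SFR_PS_ET_WFXL_CLEAR
     * i.e. "clear ET, then set it to 1" in a single wfxl write.
     */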
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Will Deacon , Peter Zijlstra , Boqun Feng , Mark Rutland , Yury Norov , Rasmus Villemoes
Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Clement Leger , Jules Maselbas , Julien Villette
Subject: [RFC PATCH v3 15/37] kvx: Add atomic/locking headers
Date: Mon, 22 Jul 2024 11:41:26 +0200
Message-ID: <20240722094226.21602-16-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add common headers (atomic, bitops, barrier and locking) for basic kvx
support.

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Julien Villette
Signed-off-by: Julien Villette
Signed-off-by: Yann Sionneau
---

Notes:
    V1 -> V2:
     - use {READ,WRITE}_ONCE for arch_atomic64_{read,set}
     - use asm-generic/bitops/atomic.h instead of __test_and_*_bit
     - removed duplicated includes
     - rewrite xchg and cmpxchg in C using builtins for acswap insn
    V2 -> V3:
     - use arch_atomic64_read instead of plain pointer dereference
     - undef ATOMIC64_RETURN_OP
     - override generic atomics for:
       - arch_atomic64_{add,sub}_return
       - arch_atomic64_fetch_{add,sub,and,or,xor}
     - add missing if (!word) return 0 in __ffs
     - typos
---
 arch/kvx/include/asm/atomic.h  | 119 +++++++++++++++++++++++
 arch/kvx/include/asm/barrier.h |  15 +++
 arch/kvx/include/asm/bitops.h  | 118 +++++++++++++++++++++++
 arch/kvx/include/asm/bitrev.h  |  32 +++++++
 arch/kvx/include/asm/cmpxchg.h | 170 +++++++++++++++++++++++++++++++++
 5 files changed, 454 insertions(+)
 create mode 100644 arch/kvx/include/asm/atomic.h
 create mode 100644 arch/kvx/include/asm/barrier.h
 create mode 100644 arch/kvx/include/asm/bitops.h
 create mode 100644 arch/kvx/include/asm/bitrev.h
 create mode 100644 arch/kvx/include/asm/cmpxchg.h
diff --git a/arch/kvx/include/asm/atomic.h b/arch/kvx/include/asm/atomic.h
new file mode 100644
index 0000000000000..1ccd5ca51a763
--- /dev/null
+++ b/arch/kvx/include/asm/atomic.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_ATOMIC_H
+#define _ASM_KVX_ATOMIC_H
+
+#include
+
+#include
+
+#define ATOMIC64_INIT(i)	{ (i) }
+
+#define arch_atomic64_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), old, new))
+#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
+
+static inline long arch_atomic64_read(const atomic64_t *v)
+{
+	return READ_ONCE(v->counter);
+}
+
+static inline void arch_atomic64_set(atomic64_t *v, long i)
+{
+	WRITE_ONCE(v->counter, i);
+}
+
+#define ATOMIC64_RETURN_OP(op, c_op) \
+static inline long arch_atomic64_##op##_return(long i, atomic64_t *v) \
+{ \
+	long new, old, ret; \
+ \
+	do { \
+		old = arch_atomic64_read(v); \
+		new = old c_op i; \
+		ret = arch_cmpxchg(&v->counter, old, new); \
+	} while (ret != old); \
+ \
+	return new; \
+}
+
+#define ATOMIC64_OP(op, c_op) \
+static inline void arch_atomic64_##op(long i, atomic64_t *v) \
+{ \
+	long new, old, ret; \
+ \
+	do { \
+		old = arch_atomic64_read(v); \
+		new = old c_op i; \
+		ret = arch_cmpxchg(&v->counter, old, new); \
+	} while (ret != old); \
+}
+
+#define ATOMIC64_FETCH_OP(op, c_op) \
+static inline long arch_atomic64_fetch_##op(long i, atomic64_t *v) \
+{ \
+	long new, old, ret; \
+ \
+	do { \
+		old = arch_atomic64_read(v); \
+		new = old c_op i; \
+		ret = arch_cmpxchg(&v->counter, old, new); \
+	} while (ret != old); \
+ \
+	return old; \
+}
+
+#define ATOMIC64_OPS(op, c_op) \
+	ATOMIC64_OP(op, c_op) \
+	ATOMIC64_RETURN_OP(op, c_op) \
+	ATOMIC64_FETCH_OP(op, c_op)
+
+ATOMIC64_OPS(and, &)
+ATOMIC64_OPS(or, |)
+ATOMIC64_OPS(xor, ^)
+ATOMIC64_OPS(add, +)
+ATOMIC64_OPS(sub, -)
+
+#undef ATOMIC64_OPS
+#undef ATOMIC64_FETCH_OP
+#undef ATOMIC64_RETURN_OP
+#undef ATOMIC64_OP
+
+static inline int arch_atomic_add_return(int i, atomic_t *v)
+{
+	int new, old, ret;
+
+	do {
+		old = v->counter;
+		new = old + i;
+		ret = arch_cmpxchg(&v->counter, old, new);
+	} while (ret != old);
+
+	return new;
+}
+
+static inline int arch_atomic_sub_return(int i, atomic_t *v)
+{
+	return arch_atomic_add_return(-i, v);
+}
+
+#define arch_atomic64_add_return arch_atomic64_add_return
+
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+
+#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
+#define arch_atomic64_sub_return arch_atomic64_sub_return
+
+#include
+
+#endif	/* _ASM_KVX_ATOMIC_H */
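With only compare-and-swap as the primitive, every 64-bit atomic above is
the same read/compute/cmpxchg retry loop. A usage sketch of the resulting
semantics (values are illustrative):

    atomic64_t v = ATOMIC64_INIT(40);
    long before = arch_atomic64_fetch_add(2, &v);  /* before == 40, v == 42 */
    long after  = arch_atomic64_add_return(2, &v); /* after == 44 */
    /* on contention the loop simply retries until the acswap-backed
     * arch_cmpxchg() observes an unchanged ->counter */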
diff --git a/arch/kvx/include/asm/barrier.h b/arch/kvx/include/asm/barrier.h
new file mode 100644
index 0000000000000..371f1c70746dc
--- /dev/null
+++ b/arch/kvx/include/asm/barrier.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_BARRIER_H
+#define _ASM_KVX_BARRIER_H
+
+/* fence is sufficient to guarantee write ordering */
+#define mb()	__builtin_kvx_fence()
+
+#include
+
+#endif /* _ASM_KVX_BARRIER_H */
diff --git a/arch/kvx/include/asm/bitops.h b/arch/kvx/include/asm/bitops.h
new file mode 100644
index 0000000000000..7782ce93cfee4
--- /dev/null
+++ b/arch/kvx/include/asm/bitops.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Yann Sionneau
+ */
+
+#ifndef _ASM_KVX_BITOPS_H
+#define _ASM_KVX_BITOPS_H
+
+#ifdef __KERNEL__
+
+#ifndef _LINUX_BITOPS_H
+#error only can be included directly
+#endif
+
+#include
+
+static inline int fls(int x)
+{
+	return 32 - __builtin_kvx_clzw(x);
+}
+
+static inline int fls64(__u64 x)
+{
+	return 64 - __builtin_kvx_clzd(x);
+}
+
+/**
+ * __ffs - find first set bit in word
+ * @word: The word to search
+ *
+ * Undefined if no set bit exists, so code should check against 0 first.
+ */
+static inline unsigned long __ffs(unsigned long word)
+{
+	if (!word)
+		return 0;
+
+	return __builtin_kvx_ctzd(word);
+}
+
+/**
+ * __fls - find last set bit in word
+ * @word: The word to search
+ *
+ * Undefined if no set bit exists, so code should check against 0 first.
+ */
+static inline unsigned long __fls(unsigned long word)
+{
+	return 63 - __builtin_kvx_clzd(word);
+}
+
+
+/**
+ * ffs - find first set bit in word
+ * @x: the word to search
+ *
+ * This is defined the same way as the libc and compiler builtin ffs
+ * routines, therefore differs in spirit from the other bitops.
+ *
+ * ffs(value) returns 0 if value is 0 or the position of the first
+ * set bit if value is nonzero. The first (least significant) bit
+ * is at position 1.
+ */
+static inline int ffs(int x)
+{
+	if (!x)
+		return 0;
+	return __builtin_kvx_ctzw(x) + 1;
+}
+
+static inline unsigned int __arch_hweight32(unsigned int w)
+{
+	unsigned int count;
+
+	asm volatile ("cbsw %0 = %1\n\t;;"
+		      : "=r" (count)
+		      : "r" (w));
+
+	return count;
+}
+
+static inline unsigned int __arch_hweight64(__u64 w)
+{
+	unsigned int count;
+
+	asm volatile ("cbsd %0 = %1\n\t;;"
+		      : "=r" (count)
+		      : "r" (w));
+
+	return count;
+}
+
+static inline unsigned int __arch_hweight16(unsigned int w)
+{
+	return __arch_hweight32(w & 0xffff);
+}
+
+static inline unsigned int __arch_hweight8(unsigned int w)
+{
+	return __arch_hweight32(w & 0xff);
+}
+
+#include
+
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#endif
+
+#endif
diff --git a/arch/kvx/include/asm/bitrev.h b/arch/kvx/include/asm/bitrev.h
new file mode 100644
index 0000000000000..79865081905a6
--- /dev/null
+++ b/arch/kvx/include/asm/bitrev.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_BITREV_H
+#define _ASM_KVX_BITREV_H
+
+#include
+
+/* Bit reversal constant for matrix multiply */
+#define BIT_REVERSE 0x0102040810204080ULL
+
+static __always_inline __attribute_const__ u32 __arch_bitrev32(u32 x)
+{
+	/* Reverse all bits in each byte and then byte-reverse the 32 LSB */
+	return swab32(__builtin_kvx_sbmm8(BIT_REVERSE, x));
+}
+
+static __always_inline __attribute_const__ u16 __arch_bitrev16(u16 x)
+{
+	/* Reverse all bits in each byte and then byte-reverse the 16 LSB */
+	return swab16(__builtin_kvx_sbmm8(BIT_REVERSE, x));
+}
+
+static __always_inline __attribute_const__ u8 __arch_bitrev8(u8 x)
+{
+	return __builtin_kvx_sbmm8(BIT_REVERSE, x);
+}
+
+#endif
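bitrev relies on the same sbmm8 builtin as swab.h: multiplying by
BIT_REVERSE reverses the bit order inside each byte, and the following
swab*() reverses the byte order, which nets out to a full bit reversal.
A few worked values for the helpers in these two files (illustrative,
not part of the patch):

    u8  b = __arch_bitrev8(0x01);  /* 0x80: bit 0 moves to bit 7 */
    u32 w = __arch_bitrev32(0x1);  /* 0x80000000 */
    /* and from bitops.h: ffs(0x10) == 5, __ffs(0x10) == 4, fls(0x10) == 5 */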
diff --git a/arch/kvx/include/asm/cmpxchg.h b/arch/kvx/include/asm/cmpxchg.h
new file mode 100644
index 0000000000000..041a3e7797103
--- /dev/null
+++ b/arch/kvx/include/asm/cmpxchg.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Yann Sionneau
+ *            Jules Maselbas
+ */
+
+#ifndef _ASM_KVX_CMPXCHG_H
+#define _ASM_KVX_CMPXCHG_H
+
+#include
+#include
+#include
+#include
+
+/*
+ * On kvx, we have a boolean compare-and-swap: the instruction only
+ * reports whether the operation succeeded. If it succeeded, this is
+ * simple: we just return the "old" value the caller provided. If it
+ * failed, we need to load the current value so we can return it to the
+ * caller; as long as the loaded value differs from the provided "old",
+ * returning it is enough to signal failure. However, if for some reason
+ * the value we read back is equal to the "old" provided by the caller,
+ * we cannot simply return it or the caller would think the operation
+ * succeeded. So in that case we retry until we either succeed or fail
+ * with a value different from the provided one.
+ */
+
+static inline unsigned int __cmpxchg_u32(unsigned int old, unsigned int new,
+					 volatile unsigned int *ptr)
+{
+	unsigned int exp = old;
+
+	__builtin_kvx_fence();
+	while (exp == old) {
+		if (__builtin_kvx_acswapw((void *)ptr, new, exp))
+			break; /* acswap succeeded */
+		exp = *ptr;
+	}
+
+	return exp;
+}
+
+static inline unsigned long __cmpxchg_u64(unsigned long old, unsigned long new,
+					  volatile unsigned long *ptr)
+{
+	unsigned long exp = old;
+
+	__builtin_kvx_fence();
+	while (exp == old) {
+		if (__builtin_kvx_acswapd((void *)ptr, new, exp))
+			break; /* acswap succeeded */
+		exp = *ptr;
+	}
+
+	return exp;
+}
+
+extern unsigned long __cmpxchg_called_with_bad_pointer(void)
+	__compiletime_error("Bad argument size for cmpxchg");
+
+static __always_inline unsigned long __cmpxchg(unsigned long old,
+					       unsigned long new,
+					       volatile void *ptr, int size)
+{
+	switch (size) {
+	case 4:
+		return __cmpxchg_u32(old, new, ptr);
+	case 8:
+		return __cmpxchg_u64(old, new, ptr);
+	default:
+		return __cmpxchg_called_with_bad_pointer();
+	}
+}
+
+#define arch_cmpxchg(ptr, old, new) \
+	((__typeof__(*(ptr))) __cmpxchg( \
+		(unsigned long)(old), (unsigned long)(new), \
+		(ptr), sizeof(*(ptr))))
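In other words, arch_cmpxchg() keeps the usual Linux contract despite the
boolean instruction underneath: it returns the value that was in memory,
and the caller detects success by comparing that return with "old".
A sketch:

    unsigned int val = 5;
    unsigned int prev = arch_cmpxchg(&val, 5, 7);
    /* prev == 5: the swap happened and val is now 7; had another CPU
     * changed val first, prev would hold that other value and val
     * would be left untouched */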
+/*
+ * In order to optimize xchg for 16 bits, we can use insf/extfz if we know
+ * the bounds. This way, we only take one more bundle than a standard xchg.
+ * We simply do a read-modify-acswap on a 32-bit word.
+ */
+
+#define __kvx_insf(org, val, start, stop) __asm__ __volatile__( \
+	"insf %[_org] = %[_val], %[_stop], %[_start]\n\t;;" \
+	: [_org]"+r"(org) \
+	: [_val]"r"(val), [_stop]"i"(stop), [_start]"i"(start))
+
+#define __kvx_extfz(out, val, start, stop) __asm__ __volatile__( \
+	"extfz %[_out] = %[_val], %[_stop], %[_start]\n\t;;" \
+	: [_out]"=r"(out) \
+	: [_val]"r"(val), [_stop]"i"(stop), [_start]"i"(start))
+
+/* Needed for generic qspinlock implementation */
+static inline unsigned int __xchg_u16(unsigned int old, unsigned int new,
+				      volatile unsigned int *ptr)
+{
+	unsigned int off = ((unsigned long)ptr) % sizeof(unsigned int);
+	unsigned int val;
+
+	ptr = PTR_ALIGN_DOWN(ptr, sizeof(unsigned int));
+	__builtin_kvx_fence();
+	do {
+		old = *ptr;
+		val = old;
+		if (off == 0)
+			__kvx_insf(val, new, 0, 15);
+		else
+			__kvx_insf(val, new, 16, 31);
+	} while (!__builtin_kvx_acswapw((void *)ptr, val, old));
+
+	if (off == 0)
+		__kvx_extfz(old, old, 0, 15);
+	else
+		__kvx_extfz(old, old, 16, 31);
+
+	return old;
+}
+
+static inline unsigned int __xchg_u32(unsigned int old, unsigned int new,
+				      volatile unsigned int *ptr)
+{
+	__builtin_kvx_fence();
+	do
+		old = *ptr;
+	while (!__builtin_kvx_acswapw((void *)ptr, new, old));
+
+	return old;
+}
+
+static inline unsigned long __xchg_u64(unsigned long old, unsigned long new,
+				       volatile unsigned long *ptr)
+{
+	__builtin_kvx_fence();
+	do
+		old = *ptr;
+	while (!__builtin_kvx_acswapd((void *)ptr, new, old));
+
+	return old;
+}
+
+extern unsigned long __xchg_called_with_bad_pointer(void)
+	__compiletime_error("Bad argument size for xchg");
+
+static __always_inline unsigned long __xchg(unsigned long val,
+					    volatile void *ptr, int size)
+{
+	switch (size) {
+	case 2:
+		return __xchg_u16(0, val, ptr);
+	case 4:
+		return __xchg_u32(0, val, ptr);
+	case 8:
+		return __xchg_u64(0, val, ptr);
+	default:
+		return __xchg_called_with_bad_pointer();
+	}
+}
+
+#define arch_xchg(ptr, val) \
+	((__typeof__(*(ptr))) __xchg( \
+		(unsigned long)(val), \
+		(ptr), sizeof(*(ptr))))
+
+#endif
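For a halfword at offset 2 inside its aligned 32-bit word (kvx is
little-endian, so that halfword occupies bits 16-31), the insf/extfz pair
above is equivalent to the following plain-C splice, retried under
acswapw (illustrative sketch, values made up):

    unsigned int word = 0xaaaabbbbU, new16 = 0x1234U;
    unsigned int updated = (word & 0x0000ffffU) | (new16 << 16);
    unsigned int old16 = word >> 16; /* previous halfword: 0xaaaa */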
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long , Boqun Feng
Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Clement Leger , Vincent Chardon
Subject: [RFC PATCH v3 16/37] kvx: Add other common headers
Date: Mon, 22 Jul 2024 11:41:27 +0200
Message-ID: <20240722094226.21602-17-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add some other common headers for basic kvx support.

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Vincent Chardon
Signed-off-by: Vincent Chardon
Signed-off-by: Yann Sionneau
---

Notes:
    V2 -> V3: remove __rm_firmware_regs_start declaration.
---
 arch/kvx/include/asm/asm-prototypes.h   | 14 ++++++
 arch/kvx/include/asm/clocksource.h      | 17 +++++++
 arch/kvx/include/asm/linkage.h          | 13 +++++
 arch/kvx/include/asm/pci.h              | 36 +++++++++++
 arch/kvx/include/asm/sections.h         | 17 +++++
 arch/kvx/include/asm/spinlock.h         | 16 +++++
 arch/kvx/include/asm/spinlock_types.h   | 17 +++++
 arch/kvx/include/asm/stackprotector.h   | 47 +++++++++++
 arch/kvx/include/asm/timex.h            | 20 +++++
 arch/kvx/include/asm/types.h            | 12 +++++
 arch/kvx/include/uapi/asm/bitsperlong.h | 14 ++++
 arch/kvx/include/uapi/asm/byteorder.h   | 12 +++++
 tools/include/uapi/asm/bitsperlong.h    |  2 ++
 13 files changed, 237 insertions(+)
 create mode 100644 arch/kvx/include/asm/asm-prototypes.h
 create mode 100644 arch/kvx/include/asm/clocksource.h
 create mode 100644 arch/kvx/include/asm/linkage.h
 create mode 100644 arch/kvx/include/asm/pci.h
 create mode 100644 arch/kvx/include/asm/sections.h
 create mode 100644 arch/kvx/include/asm/spinlock.h
 create mode 100644 arch/kvx/include/asm/spinlock_types.h
 create mode 100644 arch/kvx/include/asm/stackprotector.h
 create mode 100644 arch/kvx/include/asm/timex.h
 create mode 100644 arch/kvx/include/asm/types.h
 create mode 100644 arch/kvx/include/uapi/asm/bitsperlong.h
 create mode 100644 arch/kvx/include/uapi/asm/byteorder.h

diff --git a/arch/kvx/include/asm/asm-prototypes.h b/arch/kvx/include/asm/asm-prototypes.h
new file mode 100644
index 0000000000000..af032508e30cf
--- /dev/null
+++ b/arch/kvx/include/asm/asm-prototypes.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_ASM_PROTOTYPES_H
+#define _ASM_KVX_ASM_PROTOTYPES_H
+
+#include
+
+#include
+
+#endif /* _ASM_KVX_ASM_PROTOTYPES_H */
diff --git a/arch/kvx/include/asm/clocksource.h b/arch/kvx/include/asm/clocksource.h
new file mode 100644
index 0000000000000..4df7c66ffbb53
--- /dev/null
+++ b/arch/kvx/include/asm/clocksource.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Yann Sionneau
+ *            Clement Leger
+ */
+
+#ifndef _ASM_KVX_CLOCKSOURCE_H
+#define _ASM_KVX_CLOCKSOURCE_H
+
+#include
+
+struct arch_clocksource_data {
+	void __iomem *regs;
+};
+
+#endif /* _ASM_KVX_CLOCKSOURCE_H */
diff --git a/arch/kvx/include/asm/linkage.h b/arch/kvx/include/asm/linkage.h
new file mode 100644
index 0000000000000..84e1cacf67c21
--- /dev/null
+++ b/arch/kvx/include/asm/linkage.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Yann Sionneau
+ */
+
+#ifndef __ASM_KVX_LINKAGE_H
+#define __ASM_KVX_LINKAGE_H
+
+#define __ALIGN		.align 4
+#define __ALIGN_STR	".align 4"
+
+#endif
diff --git a/arch/kvx/include/asm/pci.h b/arch/kvx/include/asm/pci.h
new file mode 100644
index 0000000000000..d5bbaaf041b53
--- /dev/null
+++ b/arch/kvx/include/asm/pci.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Vincent Chardon
+ *            Clement Leger
+ */
+
+#ifndef __ASM_KVX_PCI_H_
+#define __ASM_KVX_PCI_H_
+
+#include
+#include
+#include
+
+#define ARCH_GENERIC_PCI_MMAP_RESOURCE	1
+#define HAVE_PCI_MMAP			1
+
+extern int isa_dma_bridge_buggy;
+
+/* Can be used to override the logic in pci_scan_bus for skipping
+ * already-configured bus numbers - to be used for buggy BIOSes
+ * or architectures with incomplete PCI setup by the loader.
+ */
+#define pcibios_assign_all_busses() 0
+
+#define PCIBIOS_MIN_IO		0UL
+#define PCIBIOS_MIN_MEM		0UL
+
+#ifdef CONFIG_PCI_DOMAINS
+static inline int pci_proc_domain(struct pci_bus *bus)
+{
+	return pci_domain_nr(bus);
+}
+#endif /* CONFIG_PCI_DOMAINS */
+
+#endif /* _ASM_KVX_PCI_H */
diff --git a/arch/kvx/include/asm/sections.h b/arch/kvx/include/asm/sections.h
new file mode 100644
index 0000000000000..60e63fc530802
--- /dev/null
+++ b/arch/kvx/include/asm/sections.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_SECTIONS_H
+#define _ASM_KVX_SECTIONS_H
+
+#include
+
+extern char __rodata_start[], __rodata_end[];
+extern char __initdata_start[], __initdata_end[];
+extern char __inittext_start[], __inittext_end[];
+extern char __exception_start[], __exception_end[];
+
+#endif
diff --git a/arch/kvx/include/asm/spinlock.h b/arch/kvx/include/asm/spinlock.h
new file mode 100644
index 0000000000000..ed32fdba1e19d
--- /dev/null
+++ b/arch/kvx/include/asm/spinlock.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_SPINLOCK_H
+#define _ASM_KVX_SPINLOCK_H
+
+#include
+#include
+
+/* See include/linux/spinlock.h */
+#define smp_mb__after_spinlock()	smp_mb()
+
+#endif
diff --git a/arch/kvx/include/asm/spinlock_types.h b/arch/kvx/include/asm/spinlock_types.h
new file mode 100644
index 0000000000000..929a7df16ef39
--- /dev/null
+++ b/arch/kvx/include/asm/spinlock_types.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_SPINLOCK_TYPES_H
+#define _ASM_KVX_SPINLOCK_TYPES_H
+
+#if !defined(__LINUX_SPINLOCK_TYPES_RAW_H) && !defined(__ASM_SPINLOCK_H)
+# error "please don't include this file directly"
+#endif
+
+#include
+#include
+
+#endif /* _ASM_KVX_SPINLOCK_TYPES_H */
diff --git a/arch/kvx/include/asm/stackprotector.h b/arch/kvx/include/asm/stackprotector.h
new file mode 100644
index 0000000000000..2c190bbb5efc8
--- /dev/null
+++ b/arch/kvx/include/asm/stackprotector.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * derived from arch/mips/include/asm/stackprotector.h
+ *
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+/*
+ * GCC stack protector support.
+ *
+ * Stack protector works by putting a predefined pattern at the start of
+ * the stack frame and verifying that it hasn't been overwritten when
+ * returning from the function. The pattern is called the stack canary,
+ * and gcc expects it to be defined by a global variable called
+ * "__stack_chk_guard" on KVX. This unfortunately means that on SMP
+ * we cannot have a different canary value per task.
+ */
+
+#ifndef __ASM_STACKPROTECTOR_H
+#define __ASM_STACKPROTECTOR_H
+
+#include
+#include
+
+extern unsigned long __stack_chk_guard;
+
+/*
+ * Initialize the stackprotector canary value.
+ *
+ * NOTE: this must only be called from functions that never return,
+ * and it must always be inlined.
+ */
+static __always_inline void boot_init_stack_canary(void)
+{
+	unsigned long canary;
+
+	/* Try to get a semi random initial value. */
+	get_random_bytes(&canary, sizeof(canary));
+	canary ^= LINUX_VERSION_CODE;
+	canary &= CANARY_MASK;
+
+	current->stack_canary = canary;
+	__stack_chk_guard = current->stack_canary;
+}
+
+#endif /* _ASM_STACKPROTECTOR_H */
diff --git a/arch/kvx/include/asm/timex.h b/arch/kvx/include/asm/timex.h
new file mode 100644
index 0000000000000..51e346faa887f
--- /dev/null
+++ b/arch/kvx/include/asm/timex.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_TIMEX_H
+#define _ASM_KVX_TIMEX_H
+
+#define get_cycles get_cycles
+
+#include
+#include
+
+static inline cycles_t get_cycles(void)
+{
+	return kvx_sfr_get(PM0);
+}
+
+#endif /* _ASM_KVX_TIMEX_H */
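get_cycles() reads the 64-bit PM0 performance monitor, used here as a
cycle counter, so cycle deltas can be taken directly. A usage sketch
(do_something() is a placeholder for the code being measured):

    cycles_t t0 = get_cycles();
    do_something();
    cycles_t delta = get_cycles() - t0; /* elapsed cycles on this core */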
diff --git a/arch/kvx/include/asm/types.h b/arch/kvx/include/asm/types.h
new file mode 100644
index 0000000000000..1e6c024ee8923
--- /dev/null
+++ b/arch/kvx/include/asm/types.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_TYPES_H
+#define _ASM_KVX_TYPES_H
+
+#include
+
+#endif /* _ASM_KVX_TYPES_H */
diff --git a/arch/kvx/include/uapi/asm/bitsperlong.h b/arch/kvx/include/uapi/asm/bitsperlong.h
new file mode 100644
index 0000000000000..02a91596d5670
--- /dev/null
+++ b/arch/kvx/include/uapi/asm/bitsperlong.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _UAPI_ASM_KVX_BITSPERLONG_H
+#define _UAPI_ASM_KVX_BITSPERLONG_H
+
+#define __BITS_PER_LONG 64
+
+#include
+
+#endif /* _UAPI_ASM_KVX_BITSPERLONG_H */
diff --git a/arch/kvx/include/uapi/asm/byteorder.h b/arch/kvx/include/uapi/asm/byteorder.h
new file mode 100644
index 0000000000000..b7d827daec737
--- /dev/null
+++ b/arch/kvx/include/uapi/asm/byteorder.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_BYTEORDER_H
+#define _ASM_KVX_BYTEORDER_H
+
+#include
+
+#endif /* _ASM_KVX_BYTEORDER_H */
diff --git a/tools/include/uapi/asm/bitsperlong.h b/tools/include/uapi/asm/bitsperlong.h
index c65267afc3415..aada3dfc4d61f 100644
--- a/tools/include/uapi/asm/bitsperlong.h
+++ b/tools/include/uapi/asm/bitsperlong.h
@@ -13,6 +13,8 @@
 #include "../../../arch/ia64/include/uapi/asm/bitsperlong.h"
 #elif defined(__alpha__)
 #include "../../../arch/alpha/include/uapi/asm/bitsperlong.h"
+#elif defined(__kvx__)
+#include "../../../arch/kvx/include/uapi/asm/bitsperlong.h"
 #else
 #include
 #endif
-- 
2.45.2
Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Alex Michon , Clement Leger , Guillaume Missonnier , Guillaume Thouvenin , Jules Maselbas , Julien Hascoet , Julien Villette , Marc =?utf-8?b?UG91bGhpw6hz?= , Luc Michel , Marius Gligor Subject: [RFC PATCH v3 17/37] kvx: Add boot and setup routines Date: Mon, 22 Jul 2024 11:41:28 +0200 Message-ID: <20240722094226.21602-18-ysionneau@kalrayinc.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com> References: <20240722094226.21602-1-ysionneau@kalrayinc.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-ALTERMIMEV2_out: done From: Yann Sionneau Add basic boot, setup and reset routines for kvx. Co-developed-by: Alex Michon Signed-off-by: Alex Michon Co-developed-by: Clement Leger Signed-off-by: Clement Leger Co-developed-by: Guillaume Missonnier Signed-off-by: Guillaume Missonnier Co-developed-by: Guillaume Thouvenin Signed-off-by: Guillaume Thouvenin Co-developed-by: Jules Maselbas Signed-off-by: Jules Maselbas Co-developed-by: Julian Vetter Signed-off-by: Julian Vetter Co-developed-by: Julien Hascoet Signed-off-by: Julien Hascoet Co-developed-by: Julien Villette Signed-off-by: Julien Villette Co-developed-by: Marc Poulhi=C3=A8s Signed-off-by: Marc Poulhi=C3=A8s Co-developed-by: Luc Michel Signed-off-by: Luc Michel Co-developed-by: Marius Gligor Signed-off-by: Marius Gligor Signed-off-by: Yann Sionneau --- Notes: V1 -> V2: removed L2 cache firmware starting code V2 -> V3: - increase watchdog timeout from 30s to 120s - memory mapping: - add SMEM to direct map at VA 0xFFFFFF8000000000 - DDR starts at VA 0xFFFFFF8100000000 (prev. 0xFFFFFF8000000000) - add a 2 MB mapping for smem direct map - smem identity mapping now use global addr. (prev. cluster local) - assembly syntax changes - switch from ENTRY/ENDPROC macros to SYM_FUNC_{START,END} - macro renaming - replace strtobool with kstrtobool - add CONFIG_SMP guarding around smp_init_cpus() - add missing static qualifier --- arch/kvx/include/asm/setup.h | 34 ++ arch/kvx/kernel/common.c | 11 + arch/kvx/kernel/head.S | 580 +++++++++++++++++++++++++++++++++++ arch/kvx/kernel/prom.c | 26 ++ arch/kvx/kernel/reset.c | 37 +++ arch/kvx/kernel/setup.c | 181 +++++++++++ arch/kvx/kernel/time.c | 242 +++++++++++++++ 7 files changed, 1111 insertions(+) create mode 100644 arch/kvx/include/asm/setup.h create mode 100644 arch/kvx/kernel/common.c create mode 100644 arch/kvx/kernel/head.S create mode 100644 arch/kvx/kernel/prom.c create mode 100644 arch/kvx/kernel/reset.c create mode 100644 arch/kvx/kernel/setup.c create mode 100644 arch/kvx/kernel/time.c diff --git a/arch/kvx/include/asm/setup.h b/arch/kvx/include/asm/setup.h new file mode 100644 index 0000000000000..c522ce2fe8adf --- /dev/null +++ b/arch/kvx/include/asm/setup.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_SETUP_H +#define _ASM_KVX_SETUP_H + +#include + +#include + +/* Magic is found in r0 when some parameters are given to kernel */ +#define LINUX_BOOT_PARAM_MAGIC ULL(0x31564752414E494C) + +#ifndef __ASSEMBLY__ + +void early_fixmap_init(void); + +void setup_device_tree(void); + +void setup_arch_memory(void); + +void kvx_init_mmu(void); + +void setup_processor(void); + +asmlinkage __visible void __init arch_low_level_start(unsigned long r0, + void *dtb_ptr); + +#endif + +#endif /* _ASM_KVX_SETUP_H */ diff --git a/arch/kvx/kernel/common.c b/arch/kvx/kernel/common.c new file mode 100644 index 0000000000000..322498f034fd9 --- /dev/null +++ b/arch/kvx/kernel/common.c @@ -0,0 +1,11 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#include +#include + +DEFINE_PER_CPU(int, __preempt_count) =3D INIT_PREEMPT_COUNT; +EXPORT_PER_CPU_SYMBOL(__preempt_count); diff --git a/arch/kvx/kernel/head.S b/arch/kvx/kernel/head.S new file mode 100644 index 0000000000000..cd6b990fcb01e --- /dev/null +++ b/arch/kvx/kernel/head.S @@ -0,0 +1,580 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Guillaume Thouvenin + * Marius Gligor + * Julian Vetter + * Julien Hascoet + * Yann Sionneau + * Marc Poulhi=C3=A8s + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#ifdef CONFIG_SMP +#define SECONDARY_START_ADDR smp_secondary_start +#else +#define SECONDARY_START_ADDR proc_power_off +#endif + +#define PS_VAL_WFXL(__field, __val) \ + SFR_SET_VAL_WFXL(PS, __field, __val) + +#define PS_WFXL_VALUE PS_VAL_WFXL(HLE, 1) | \ + PS_VAL_WFXL(USE, 1) | \ + PS_VAL_WFXL(DCE, 1) | \ + PS_VAL_WFXL(ICE, 1) | \ + PS_VAL_WFXL(MME, 1) | \ + PS_VAL_WFXL(MMUP, 1) | \ + PS_VAL_WFXL(ET, 0) | \ + PS_VAL_WFXL(HTD, 0) | \ + PS_VAL_WFXL(V64, 1) | \ + PS_VAL_WFXL(PMJ, KVX_SUPPORTED_PSIZE) + +#define PCR_VAL_WFXM(__field, __val) \ + SFR_SET_VAL_WFXM(PCR, __field, __val) + +#define PCR_WFXM_VALUE PCR_VAL_WFXM(L1CE, 1) + +/* 120 sec for primary watchdog timeout */ +#define PRIMARY_WATCHDOG_VALUE (120000000000UL) + +#define TCR_WFXL_VALUE SFR_SET_VAL_WFXL(TCR, WUI, 1) | \ + SFR_SET_VAL_WFXL(TCR, WCE, 1) + +/* Enable STOP in WS */ +#define WS_ENABLE_WU2 (KVX_SFR_WS_WU2_MASK) +/* We only want to clear bits in ws */ +#define WS_WFXL_VALUE (WS_ENABLE_WU2) + +/* SMP stuff */ +#define KVX_RM_ID 16 +#define RM_PID_MASK ((KVX_RM_ID) << KVX_SFR_PCR_PID_SHIFT) + +/* Clean error and selected buffer */ +#define MMC_CLEAR_ERROR (KVX_SFR_MMC_E_MASK) + +#define TEH_VIRTUAL_MEMORY \ + TLB_MK_TEH_ENTRY(DDR_VIRT_OFFSET, 0, TLB_G_GLOBAL, 0) + +#define TEL_VIRTUAL_MEMORY \ + TLB_MK_TEL_ENTRY(DDR_PHYS_OFFSET, TLB_PS_512M, TLB_ES_A_MODIFIED,\ + TLB_CP_W_C, TLB_PA_NA_RWX) + +/* (TEH|TEL)_SHARED_MEMORY are mapping 0x0 to 0x0 */ +#define TEH_SHARED_MEMORY \ + TLB_MK_TEH_ENTRY(0x1000000, 0, TLB_G_GLOBAL, 0) + +#define TEL_SHARED_MEMORY \ + TLB_MK_TEL_ENTRY(0x1000000, TLB_PS_2M, TLB_ES_A_MODIFIED,\ + TLB_CP_W_C, TLB_PA_NA_RWX) + +#define TEH_VIRTUAL_SMEM \ + TLB_MK_TEH_ENTRY(0 + PAGE_OFFSET, 0, TLB_G_GLOBAL, 0) + +#define TEL_VIRTUAL_SMEM \ + TLB_MK_TEL_ENTRY(0x1000000, TLB_PS_2M, TLB_ES_A_MODIFIED,\ + TLB_CP_W_C, TLB_PA_NA_RWX) + +#define TEH_GDB_PAGE_MEMORY \ + TLB_MK_TEH_ENTRY(0, 0, TLB_G_GLOBAL, 0) + +#define TEL_GDB_PAGE_MEMORY \ + TLB_MK_TEL_ENTRY(0, TLB_PS_4K, TLB_ES_A_MODIFIED,\ + TLB_CP_U_U, 
TLB_PA_RWX_RWX)
+
+/**
+ * Macros
+ */
+.altmacro
+
+/* To select the JTLB we clear SB from MMC */
+.macro select_jtlb scratch_reg
+	make \scratch_reg = KVX_SFR_MMC_SB_MASK
+	;;
+	wfxl $mmc, \scratch_reg
+.endm
+
+/* To select the LTLB we set SB from MMC */
+.macro select_ltlb scratch_reg
+	make \scratch_reg = KVX_SFR_MMC_SB_MASK << 32
+	;;
+	wfxl $mmc, \scratch_reg
+.endm
+
+/* Set SW of the MMC with the number found in the reg register */
+.macro select_way_from_register reg scratch1 scratch2
+	slld \scratch1 = \reg, KVX_SFR_MMC_SW_SHIFT
+	make \scratch2 = KVX_SFR_MMC_SW_MASK
+	;;
+	slld \scratch1 = \scratch1, 32
+	;;
+	ord \scratch1 = \scratch1, \scratch2
+	;;
+	wfxl $mmc, \scratch1
+.endm
+
+/* Set SW of the MMC with the immediate */
+.macro select_way_from_immediate imm scratch1 scratch2
+	make \scratch1 = (\imm << KVX_SFR_MMC_SW_SHIFT) << 32
+	make \scratch2 = KVX_SFR_MMC_SW_MASK
+	;;
+	ord \scratch1 = \scratch1, \scratch2
+	;;
+	wfxl $mmc, \scratch1
+.endm
+
+/* Write a tlb entry after setting the teh and tel registers */
+.macro write_tlb_entry teh tel
+	set $teh = \teh
+	;;
+	set $tel = \tel
+	;;
+	tlbwrite
+.endm
+
+/* Boot args */
+#define BOOT_ARGS_COUNT 2
+.align 16
+.section .boot.data, "aw", @progbits
+rm_boot_args:
+.skip BOOT_ARGS_COUNT * 8
+
+/*
+ * This is our entry point. When entering from the bootloader,
+ * the following registers are set:
+ * $r0 is a magic (LINUX_BOOT_PARAM_MAGIC)
+ * $r1 device tree pointer
+ *
+ * WARNING WARNING WARNING
+ * ! DO NOT CLOBBER THEM !
+ * WARNING WARNING WARNING
+ *
+ * Try to use registers above $r20 to ease parameter adding in the future
+ */
+
+__HEAD
+
+.align 8
+.section .boot.startup, "ax", @progbits
+SYM_FUNC_START(kvx_start)
+	/* Set up 64-bit mode really early to avoid bugs */
+	make $r21 = PS_VAL_WFXL(V64, 1)
+	;;
+	wfxl $ps, $r21
+	;;
+	call asm_init_pl
+	;;
+	get $r20 = $pcr
+	;;
+	andd $r21 = $r20, RM_PID_MASK
+	;;
+	cb.dnez $r21 ? asm_rm_cfg_pwr_ctrl
+	;;
+init_core:
+	call asm_init_mmu
+	;;
+	/* Setup default processor status */
+	make $r25 = PS_WFXL_VALUE
+	make $r26 = PCR_WFXM_VALUE
+	;;
+	/**
+	 * There is not much we can do if we take an early trap since the
+	 * kernel is not yet ready to handle them.
+	 * Register this as the early exception handler to at least avoid
+	 * going into a black hole.
+	 */
+	make $r27 = __early_exception_start
+	;;
+	set $ev = $r27
+	make $r27 = gdb_mmu_enabled
+	;;
+	wfxm $pcr, $r26
+	;;
+	wfxl $sps, $r25
+	;;
+	set $spc = $r27
+	;;
+	fence /* Ensure there is no outstanding load/store before dcache enable */
+	;;
+	rfe /* return into a virtual memory world */
+	;;
+
+	/* gdb_mmu_enabled is a label to easily install a breakpoint once the
+	 * mmu is enabled and virtual addresses of kernel text can be accessed
+	 * by gdb. See Documentation/arch/kvx/kvx.rst for more details. */
+gdb_mmu_enabled:
+	i1inval
+	;;
+	barrier
+	;;
+	d1inval
+	;;
+	/* Extract processor identifier */
+	get $r24 = $pcr
+	;;
+	extfz $r24 = $r24, KVX_SFR_END(PCR_PID), KVX_SFR_START(PCR_PID)
+	;;
+	/* If proc 0, then go to clear bss and do normal boot */
+	cb.deqz $r24? clear_bss
+	make $r25 = SECONDARY_START_ADDR
+	;;
+	icall $r25
+	;;
+clear_bss:
+	/* Copy bootloader arguments before clobbering them */
+	copyd $r20 = $r0
+	copyd $r21 = $r1
+	;;
+	/* Clear BSS */
+	make $r0 = __bss_start
+	make $r1 = __bss_stop
+	call asm_memzero
+	;;
+	/* Setup stack */
+	make $r40 = init_thread_union
+	make $r41 = init_task
+	;;
+	set $sr = $r41
+	copyd $r0 = $r20
+	copyd $r1 = $r21
+	;;
+	addd $sp = $r40, THREAD_SIZE
+	/* Clear frame pointer */
+	make $fp = 0x0
+	/* Setup the exception handler */
+	make $r27 = __exception_start
+	;;
+	set $ev = $r27
+	/* Here we go! Start the C stuff */
+	make $r20 = arch_low_level_start
+	;;
+	icall $r20
+	;;
+	make $r20 = proc_power_off
+	;;
+	igoto $r20
+	;;
+SYM_FUNC_END(kvx_start)
+
+/**
+ * When PE 0 is started from the RM, arguments from the bootloader are
+ * copied into rm_boot_args. This allows passing parameters from the RM
+ * to the PEs.
+ * Note that the 4K alignment is required by the reset pc register...
+ */
+.align (4 * 1024)
+SYM_FUNC_START(pe_start_wrapper)
+	make $r0 = rm_boot_args
+	make $r27 = PWR_CTRL_ADDR
+	make $r28 = kvx_start
+	;;
+	lq $r0r1 = 0[$r0]
+	;;
+	/* Set reset PC back to original value for SMP start */
+	sd KVX_PWR_CTRL_RESET_PC_OFFSET[$r27] = $r28
+	;;
+	fence
+	goto kvx_start
+	;;
+SYM_FUNC_END(pe_start_wrapper)
+
+/**
+ * asm_memzero - Clear a memory zone with zeroes
+ * $r0 is the start of the memory zone (must be aligned on a 32-byte boundary)
+ * $r1 is the end of the memory zone (must be aligned on a 32-byte boundary)
+ */
+SYM_FUNC_START(asm_memzero)
+	sbfd $r32 = $r0, $r1
+	make $r36 = 0
+	make $r37 = 0
+	;;
+	make $r38 = 0
+	make $r39 = 0
+	/* Divide by 32 for hardware loop */
+	srld $r32 = $r32, 5
+	;;
+	/* Clear memory with hardware loop */
+	loopdo $r32, clear_mem_done
+	;;
+	so 0[$r0] = $r36r37r38r39
+	addd $r0 = $r0, 32
+	;;
+	clear_mem_done:
+	ret
+	;;
+SYM_FUNC_END(asm_memzero)
+
+/**
+ * Configure the power controller to be accessible by PEs
+ */
+SYM_FUNC_START(asm_rm_cfg_pwr_ctrl)
+	/* Enable hwloop for memzero */
+	make $r32 = PS_VAL_WFXL(HLE, 1)
+	;;
+	wfxl $ps, $r32
+	;;
+	make $r26 = PWR_CTRL_ADDR
+	make $r27 = PWR_CTRL_GLOBAL_CONFIG_PE_EN
+	;;
+	/* Set PE enable in power controller */
+	sd PWR_CTRL_GLOBAL_CONFIG_SET_OFFSET[$r26] = $r27
+	make $r28 = rm_boot_args
+	;;
+	/* Store parameters (r0:magic, r1:DTB) for PE0 */
+	sq 0[$r28] = $r0r1
+	make $r29 = pe_start_wrapper
+	;;
+	/* Set PE reset PC to arguments wrapper */
+	sd KVX_PWR_CTRL_RESET_PC_OFFSET[$r26] = $r29
+	;;
+	/* make sure parameters are visible by PE 0 */
+	fence
+	;;
+	/* Start PE 0 (1 << cpu) */
+	make $r27 = 1
+	make $r26 = PWR_CTRL_ADDR
+	;;
+	/* Wake up PE0 */
+	sd PWR_CTRL_WUP_SET_OFFSET[$r26] = $r27
+	;;
+	/* And clear wakeup to allow PE0 to sleep */
+	sd PWR_CTRL_WUP_CLEAR_OFFSET[$r26] = $r27
+	;;
+	/* Nothing left to do on RM here */
+	goto proc_power_off
+	;;
+SYM_FUNC_END(asm_rm_cfg_pwr_ctrl)
+
+#define request_ownership(__pl) ;\
+	make $r21 = SYO_WFXL_VALUE_##__pl ;\
+	;; ;\
+	wfxl $syow, $r21 ;\
+	;; ;\
+	make $r21 = HTO_WFXL_VALUE_##__pl ;\
+	;; ;\
+	wfxl $htow, $r21 ;\
+	;; ;\
+	make $r21 = MO_WFXL_VALUE_##__pl ;\
+	make $r22 = MO_WFXM_VALUE_##__pl ;\
+	;; ;\
+	wfxl $mow, $r21 ;\
+	;; ;\
+	wfxm $mow, $r22 ;\
+	;; ;\
+	make $r21 = ITO_WFXL_VALUE_##__pl ;\
+	make $r22 = ITO_WFXM_VALUE_##__pl ;\
+	;; ;\
+	wfxl $itow, $r21 ;\
+	;; ;\
+	wfxm $itow, $r22 ;\
+	;; ;\
+	make $r21 = PSO_WFXL_VALUE_##__pl ;\
+	make $r22 =
PSO_WFXM_VALUE_##__pl ;\ + ;; ;\ + wfxl $psow, $r21 ;\ + ;; ;\ + wfxm $psow, $r22 ;\ + ;; ;\ + make $r21 =3D DO_WFXL_VALUE_##__pl ;\ + ;; ;\ + wfxl $dow, $r21 ;\ + ;; + +/** + * Initialize privilege level for Kernel + */ +SYM_FUNC_START(asm_init_pl) + get $r21 =3D $ps + ;; + /* Extract privilege level from $ps to check if we need to + * lower our privilege level + */ + extfz $r20 =3D $r21, KVX_SFR_END(PS_PL), KVX_SFR_START(PS_PL) + ;; + /* If our privilege level is 0, then we need to lower in execution level + * to ring 1 in order to let the debug routines be inserted at runtime + * by the JTAG. In both case, we will request the resources we need for + * linux to run. + */ + cb.deqz $r20? delegate_pl + ;; + /* + * When someone is already above us, request the resources we need to + * run the kernel. No need to request double exception or ECC traps for + * instance. When doing so, the more privileged level will trap for + * permission and delegate us the required resources. + */ + request_ownership(PL_CUR) + ;; + ret + ;; +delegate_pl: + request_ownership(PL_CUR_PLUS_1) + ;; + /* Copy our $ps into $sps for 1:1 restoration */ + get $r22 =3D $ps + ;; + /* We will return to $ra after rfe */ + get $r21 =3D $ra + /* Set privilege level to +1 is $sps */ + addd $r22 =3D $r22, PL_CUR_PLUS_1 + ;; + set $spc =3D $r21 + ;; + set $sps =3D $r22 + ;; + rfe + ;; +SYM_FUNC_END(asm_init_pl) + +/** + * Reset and initialize minimal tlb entries + */ +SYM_FUNC_START(asm_init_mmu) + make $r20 =3D MMC_CLEAR_ERROR + ;; + wfxl $mmc, $r20 + ;; + /* Reset the JTLB */ + select_jtlb $r20 + ;; + make $r20 =3D (MMU_JTLB_SETS - 1) /* Used to select the set */ + make $r21 =3D 0 /* Used for shifting and as scratch register */ + ;; + set $tel =3D $r21 /* tel is always equal to 0 */ + ;; + clear_jtlb: + slld $r21 =3D $r20, KVX_SFR_TEH_PN_SHIFT + addd $r20 =3D $r20, -1 + ;; + set $teh =3D $r21 + ;; + make $r22 =3D (MMU_JTLB_WAYS - 1) /* Used to select the way */ + ;; + loop_jtlb_way: + select_way_from_register $r22 $r23 $r24 + ;; + tlbwrite + ;; + addd $r22 =3D $r22, -1 + ;; + cb.dgez $r22? loop_jtlb_way + ;; + /* loop_jtlb_way done */ + cb.dgez $r20? clear_jtlb + ;; + clear_jtlb_done: + /* Reset the LTLB */ + select_ltlb $r20 + ;; + clear_ltlb: + /* There is only one set that is 0 so we can reuse the same + values for TEH and TEL. */ + make $r20 =3D (MMU_LTLB_WAYS - 1) + ;; + loop_ltlb_way: + select_way_from_register $r20, $r21, $r22 + ;; + tlbwrite + ;; + addd $r20 =3D $r20, -1 + ;; + cb.dgez $r20? 
loop_ltlb_way + ;; + clear_ltlb_done: + + /* See Documentation/kvx/kvx.txt for details about the settings of + the LTLB */ + select_way_from_immediate LTLB_ENTRY_KERNEL_TEXT, $r20, $r21 + ;; + make $r20 =3D TEH_VIRTUAL_MEMORY + make $r21 =3D TEL_VIRTUAL_MEMORY + ;; + write_tlb_entry $r20, $r21 + ;; + select_way_from_immediate LTLB_ENTRY_EARLY_SMEM, $r20, $r21 + ;; + make $r20 =3D TEH_SHARED_MEMORY + make $r21 =3D TEL_SHARED_MEMORY + ;; + write_tlb_entry $r20, $r21 + ;; + select_way_from_immediate LTLB_ENTRY_VIRTUAL_SMEM, $r20, $r21 + ;; + make $r20 =3D TEH_VIRTUAL_SMEM + make $r21 =3D TEL_VIRTUAL_SMEM + ;; + write_tlb_entry $r20, $r21 + ;; + select_way_from_immediate LTLB_ENTRY_GDB_PAGE, $r20, $r21 + ;; + make $r20 =3D _debug_start_lma + make $r21 =3D _debug_start + ;; + andd $r20 =3D $r20, KVX_SFR_TEH_PN_MASK + andd $r21 =3D $r21, KVX_SFR_TEL_FN_MASK + ;; + addd $r20 =3D $r20, TEH_GDB_PAGE_MEMORY + addd $r21 =3D $r21, TEL_GDB_PAGE_MEMORY + ;; + write_tlb_entry $r20, $r21 + ;; + ret + ;; +SYM_FUNC_END(asm_init_mmu) + +/** + * Entry point for secondary processors + * $r24 has been set in caller and is the proc id + */ +SYM_FUNC_START(smp_secondary_start) +#ifdef CONFIG_SMP + d1inval + ;; + i1inval + ;; + barrier + ;; + make $r25 =3D __cpu_up_task_pointer + make $r26 =3D __cpu_up_stack_pointer + ;; + ld.xs $sp =3D $r24[$r26] + /* Clear frame pointer */ + make $fp =3D 0x0 + ;; + ld.xs $r25 =3D $r24[$r25] + ;; + set $sr =3D $r25 + make $r27 =3D __exception_start + ;; + set $ev =3D $r27 + make $r26 =3D start_kernel_secondary + ;; + icall $r26 + ;; +#endif +SYM_FUNC_END(smp_secondary_start) + +SYM_FUNC_START(proc_power_off) + make $r1 =3D WS_WFXL_VALUE + ;; + /* Enable STOP */ + wfxl $ws, $r1 + ;; +1: stop + ;; + goto 1b + ;; +SYM_FUNC_END(proc_power_off) diff --git a/arch/kvx/kernel/prom.c b/arch/kvx/kernel/prom.c new file mode 100644 index 0000000000000..ec142565aef8e --- /dev/null +++ b/arch/kvx/kernel/prom.c @@ -0,0 +1,26 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#include +#include +#include +#include + +#include + +void __init setup_device_tree(void) +{ + const char *name; + + name =3D of_flat_dt_get_machine_name(); + if (!name) + return; + + pr_info("Machine model: %s\n", name); + dump_stack_set_arch_desc("%s (DT)", name); + + unflatten_device_tree(); +} diff --git a/arch/kvx/kernel/reset.c b/arch/kvx/kernel/reset.c new file mode 100644 index 0000000000000..afa0ceb9d7e9e --- /dev/null +++ b/arch/kvx/kernel/reset.c @@ -0,0 +1,37 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#include +#include + +#include + +static void kvx_default_power_off(void) +{ + smp_send_stop(); + local_cpu_stop(); +} + +void (*pm_power_off)(void) =3D kvx_default_power_off; +EXPORT_SYMBOL(pm_power_off); + +void machine_restart(char *cmd) +{ + smp_send_stop(); + do_kernel_restart(cmd); + pr_err("Reboot failed -- System halted\n"); + local_cpu_stop(); +} + +void machine_halt(void) +{ + pm_power_off(); +} + +void machine_power_off(void) +{ + pm_power_off(); +} diff --git a/arch/kvx/kernel/setup.c b/arch/kvx/kernel/setup.c new file mode 100644 index 0000000000000..670e860665672 --- /dev/null +++ b/arch/kvx/kernel/setup.c @@ -0,0 +1,181 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger
+ */
+
+#define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct screen_info screen_info;
+
+unsigned long memory_start;
+EXPORT_SYMBOL(memory_start);
+unsigned long memory_end;
+EXPORT_SYMBOL(memory_end);
+
+DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_kvx, cpu_info);
+EXPORT_PER_CPU_SYMBOL(cpu_info);
+
+static bool use_streaming = true;
+static int __init parse_kvx_streaming(char *arg)
+{
+	int ret;
+
+	ret = kstrtobool(arg, &use_streaming);
+
+	if (!use_streaming) {
+		pr_info("disabling streaming\n");
+		kvx_sfr_set_field(PS, USE, 0);
+	}
+
+	return ret;
+}
+early_param("kvx.streaming", parse_kvx_streaming);
+
+static void __init setup_user_privilege(void)
+{
+	/*
+	 * We want to let the user control various fields of ps:
+	 * - hardware loop
+	 * - instruction cache enable
+	 * - streaming enable
+	 */
+	uint64_t mask = KVX_SFR_PSOW_HLE_MASK |
+			KVX_SFR_PSOW_ICE_MASK |
+			KVX_SFR_PSOW_USE_MASK;
+
+	uint64_t value = (1 << KVX_SFR_PSOW_HLE_SHIFT) |
+			 (1 << KVX_SFR_PSOW_ICE_SHIFT) |
+			 (1 << KVX_SFR_PSOW_USE_SHIFT);
+
+	kvx_sfr_set_mask(PSOW, mask, value);
+}
+
+void __init setup_cpuinfo(void)
+{
+	struct cpuinfo_kvx *n = this_cpu_ptr(&cpu_info);
+	u64 pcr = kvx_sfr_get(PCR);
+
+	n->copro_enable = kvx_sfr_field_val(pcr, PCR, COE);
+	n->arch_rev = kvx_sfr_field_val(pcr, PCR, CAR);
+	n->uarch_rev = kvx_sfr_field_val(pcr, PCR, CMA);
+}
+
+/*
+ * Everything that needs to be set up per CPU should be put here.
+ * This function is called by the per-cpu setup routine.
+ */
+void __init setup_processor(void)
+{
+	/* Clear performance monitor 0 */
+	kvx_sfr_set_field(PMC, PM0C, 0);
+
+#ifdef CONFIG_ENABLE_TCA
+	/* Enable TCA (COE = Coprocessor Enable) */
+	kvx_sfr_set_field(PCR, COE, 1);
+#else
+	kvx_sfr_set_field(PCR, COE, 0);
+#endif
+
+	/*
+	 * On kvx, speculative accesses differ from normal accesses in that
+	 * their trapping policy is controlled by mmc.sne (speculative
+	 * no-mapping enable) and mmc.spe (speculative protection enable).
+	 * To handle these accesses properly, we disable all traps on
+	 * speculative accesses in both kernel and user mode (sne & spe)
+	 * so that speculatively fetched data is silently discarded.
+	 * This makes prefetching effective.
+	 */
+	kvx_sfr_set_field(MMC, SNE, 0);
+	kvx_sfr_set_field(MMC, SPE, 0);
+
+	if (!use_streaming)
+		kvx_sfr_set_field(PS, USE, 0);
+
+	kvx_init_core_irq();
+
+	setup_user_privilege();
+
+	setup_cpuinfo();
+}
+
+static char builtin_cmdline[COMMAND_LINE_SIZE] __initdata = CONFIG_CMDLINE;
+
+void __init setup_arch(char **cmdline_p)
+{
+	if (builtin_cmdline[0]) {
+		/* append boot loader cmdline to builtin */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strscpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+
+	*cmdline_p = boot_command_line;
+
+	setup_processor();
+
+	/* Jump labels need the fixmap to be set up for text modifications */
+	early_fixmap_init();
+
+	/* Parameters might set static keys */
+	jump_label_init();
+	/*
+	 * Parse early params. We need the fixmap for earlycon, and the
+	 * fixmap needs to do memory allocation (fixed_range_init).
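+	 * Note: boot_command_line has already been finalized above by
+	 * merging the built-in and bootloader command lines, so early
+	 * parameters from both sources are visible here.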
+ */ + parse_early_param(); + + setup_arch_memory(); + + paging_init(); + + setup_device_tree(); + +#ifdef CONFIG_SMP + smp_init_cpus(); +#endif + +#ifdef CONFIG_VT + conswitchp =3D &dummy_con; +#endif +} + +asmlinkage __visible void __init arch_low_level_start(unsigned long r0, + void *dtb_ptr) +{ + void *dt =3D __dtb_start; + + kvx_mmu_early_setup(); + + if (r0 =3D=3D LINUX_BOOT_PARAM_MAGIC) + dt =3D __va(dtb_ptr); + + if (!early_init_dt_scan(dt)) + panic("Missing device tree\n"); + + start_kernel(); +} diff --git a/arch/kvx/kernel/time.c b/arch/kvx/kernel/time.c new file mode 100644 index 0000000000000..e10c507d09849 --- /dev/null +++ b/arch/kvx/kernel/time.c @@ -0,0 +1,242 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Yann Sionneau + * Guillaume Thouvenin + * Luc Michel + * Julian Vetter + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#define KVX_TIMER_MIN_DELTA 1 +#define KVX_TIMER_MAX_DELTA 0xFFFFFFFFFFFFFFFFULL +#define KVX_TIMER_MAX_VALUE 0xFFFFFFFFFFFFFFFFULL + +/* + * Clockevent + */ +static unsigned int kvx_timer_frequency; +static unsigned int kvx_periodic_timer_value; +static unsigned int kvx_timer_irq; + +static void kvx_timer_set_value(unsigned long value, unsigned long reload_= value) +{ + kvx_sfr_set(T0R, reload_value); + kvx_sfr_set(T0V, value); + /* Enable timer */ + kvx_sfr_set_field(TCR, T0CE, 1); +} + +static int kvx_clkevent_set_next_event(unsigned long cycles, + struct clock_event_device *dev) +{ + /* + * Hardware does not support oneshot mode. + * In order to support it, set a really high reload value. + * Then, during the interrupt handler, disable the timer if + * in oneshot mode + */ + kvx_timer_set_value(cycles - 1, KVX_TIMER_MAX_VALUE); + + return 0; +} + +/* + * Configure the rtc to periodically tick HZ times per second + */ +static int kvx_clkevent_set_state_periodic(struct clock_event_device *dev) +{ + kvx_timer_set_value(kvx_periodic_timer_value, + kvx_periodic_timer_value); + + return 0; +} + +static int kvx_clkevent_set_state_oneshot(struct clock_event_device *dev) +{ + /* Same as for kvx_clkevent_set_next_event */ + kvx_clkevent_set_next_event(kvx_periodic_timer_value, dev); + + return 0; +} + +static int kvx_clkevent_set_state_shutdown(struct clock_event_device *dev) +{ + kvx_sfr_set_field(TCR, T0CE, 0); + + return 0; +} + +static DEFINE_PER_CPU(struct clock_event_device, kvx_clockevent_device) = =3D { + .name =3D "kvx-timer-0", + .features =3D CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_PERIODIC, + /* arbitrary rating for this clockevent */ + .rating =3D 300, + .set_next_event =3D kvx_clkevent_set_next_event, + .set_state_periodic =3D kvx_clkevent_set_state_periodic, + .set_state_oneshot =3D kvx_clkevent_set_state_oneshot, + .set_state_shutdown =3D kvx_clkevent_set_state_shutdown, +}; + +static irqreturn_t kvx_timer_irq_handler(int irq, void *dev_id) +{ + struct clock_event_device *evt =3D this_cpu_ptr(&kvx_clockevent_device); + + /* Disable timer if in oneshot mode before reloading */ + if (likely(clockevent_state_oneshot(evt))) + kvx_sfr_set_field(TCR, T0CE, 0); + + evt->event_handler(evt); + + return IRQ_HANDLED; +} + +static int kvx_timer_starting_cpu(unsigned int cpu) +{ + struct clock_event_device *evt =3D this_cpu_ptr(&kvx_clockevent_device); + + evt->cpumask =3D cpumask_of(cpu); + evt->irq =3D kvx_timer_irq; + + clockevents_config_and_register(evt, kvx_timer_frequency, + 
KVX_TIMER_MIN_DELTA,
+					KVX_TIMER_MAX_DELTA);
+
+	/* Enable timer interrupt */
+	kvx_sfr_set_field(TCR, T0IE, 1);
+
+	enable_percpu_irq(kvx_timer_irq, IRQ_TYPE_NONE);
+
+	return 0;
+}
+
+static int kvx_timer_dying_cpu(unsigned int cpu)
+{
+	disable_percpu_irq(kvx_timer_irq);
+
+	return 0;
+}
+
+static int __init kvx_setup_core_timer(struct device_node *np)
+{
+	struct clock_event_device *evt = this_cpu_ptr(&kvx_clockevent_device);
+	struct clk *clk;
+	int err;
+
+	clk = of_clk_get(np, 0);
+	if (IS_ERR(clk)) {
+		pr_err("kvx_core_timer: Failed to get CPU clock: %ld\n",
+		       PTR_ERR(clk));
+		return PTR_ERR(clk);
+	}
+
+	kvx_timer_frequency = clk_get_rate(clk);
+	clk_put(clk);
+	kvx_periodic_timer_value = kvx_timer_frequency / HZ;
+
+	kvx_timer_irq = irq_of_parse_and_map(np, 0);
+	if (!kvx_timer_irq) {
+		pr_err("kvx_core_timer: Failed to parse irq: %d\n",
+		       kvx_timer_irq);
+		return -EINVAL;
+	}
+
+	err = request_percpu_irq(kvx_timer_irq, kvx_timer_irq_handler,
+				 "kvx_core_timer", evt);
+	if (err) {
+		pr_err("kvx_core_timer: can't register interrupt %d (%d)\n",
+		       kvx_timer_irq, err);
+		return err;
+	}
+
+	err = cpuhp_setup_state(CPUHP_AP_KVX_TIMER_STARTING,
+				"kvx/time:online",
+				kvx_timer_starting_cpu,
+				kvx_timer_dying_cpu);
+	if (err < 0) {
+		pr_err("kvx_core_timer: Failed to setup hotplug state\n");
+		return err;
+	}
+
+	return 0;
+}
+
+TIMER_OF_DECLARE(kvx_core_timer, "kalray,kv3-1-timer",
+		 kvx_setup_core_timer);
+
+/*
+ * Clocksource
+ */
+static u64 kvx_dsu_clocksource_read(struct clocksource *cs)
+{
+	return readq(cs->archdata.regs);
+}
+
+static struct clocksource kvx_dsu_clocksource = {
+	.name = "kvx-dsu-clock",
+	.rating = 400,
+	.read = kvx_dsu_clocksource_read,
+	.mask = CLOCKSOURCE_MASK(64),
+	.flags = CLOCK_SOURCE_IS_CONTINUOUS,
+};
+
+static u64 notrace kvx_dsu_sched_read(void)
+{
+	return readq_relaxed(kvx_dsu_clocksource.archdata.regs);
+}
+
+static int __init kvx_setup_dsu_clock(struct device_node *np)
+{
+	int ret;
+	struct clk *clk;
+	unsigned long kvx_dsu_frequency;
+
+	kvx_dsu_clocksource.archdata.regs = of_iomap(np, 0);
+
+	WARN_ON(!kvx_dsu_clocksource.archdata.regs);
+	if (!kvx_dsu_clocksource.archdata.regs)
+		return -ENXIO;
+
+	clk = of_clk_get(np, 0);
+	if (IS_ERR(clk)) {
+		pr_err("Failed to get CPU clock: %ld\n", PTR_ERR(clk));
+		return PTR_ERR(clk);
+	}
+
+	kvx_dsu_frequency = clk_get_rate(clk);
+	clk_put(clk);
+
+	ret = clocksource_register_hz(&kvx_dsu_clocksource,
+				      kvx_dsu_frequency);
+	if (ret) {
+		pr_err("failed to register dsu clocksource\n");
+		return ret;
+	}
+
+	sched_clock_register(kvx_dsu_sched_read, 64, kvx_dsu_frequency);
+	return 0;
+}
+
+TIMER_OF_DECLARE(kvx_dsu_clock, "kalray,coolidge-dsu-clock",
+		 kvx_setup_dsu_clock);
+
+void __init time_init(void)
+{
+	of_clk_init(NULL);
+
+	timer_probe();
+}
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org
Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Clement Leger , Guillaume Thouvenin , Luc Michel , Marius Gligor
Subject: [RFC PATCH v3 18/37] kvx: Add exception/interrupt handling
Date: Mon, 22 Jul 2024 11:41:29 +0200
Message-ID: <20240722094226.21602-19-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add the exception and interrupt handling mechanism for basic kvx
support.

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Guillaume Thouvenin
Signed-off-by: Guillaume Thouvenin
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Luc Michel
Signed-off-by: Luc Michel
Co-developed-by: Marius Gligor
Signed-off-by: Marius Gligor
Signed-off-by: Yann Sionneau

---

Notes:
    V1 -> V2: removed ipi.h headers and driver

    V2 -> V3:
    - Use generic entry/exit for exception routines:
      - adapt struct pt_regs arguments order for gen.
entry - arch_enter_from_user_mode/arch_exit_to_user_mode_prepare: - use generic entry hooks for dame irq checks - introduce do_debug, do_syscall, do_hwtrap - remove do_IRQ and use generic_handle_arch_irq instead - add cpu.c containing kvx_of_parent_cpuid() - needed by core_intc driver - typos --- arch/kvx/include/asm/break_hook.h | 69 +++++++ arch/kvx/include/asm/bug.h | 67 +++++++ arch/kvx/include/asm/entry-common.h | 52 ++++++ arch/kvx/include/asm/hw_irq.h | 14 ++ arch/kvx/include/asm/irqflags.h | 58 ++++++ arch/kvx/include/asm/stacktrace.h | 46 +++++ arch/kvx/include/asm/traps.h | 79 ++++++++ arch/kvx/kernel/cpu.c | 24 +++ arch/kvx/kernel/dame_handler.c | 113 +++++++++++ arch/kvx/kernel/irq.c | 52 ++++++ arch/kvx/kernel/traps.c | 278 ++++++++++++++++++++++++++++ 11 files changed, 852 insertions(+) create mode 100644 arch/kvx/include/asm/break_hook.h create mode 100644 arch/kvx/include/asm/bug.h create mode 100644 arch/kvx/include/asm/entry-common.h create mode 100644 arch/kvx/include/asm/hw_irq.h create mode 100644 arch/kvx/include/asm/irqflags.h create mode 100644 arch/kvx/include/asm/stacktrace.h create mode 100644 arch/kvx/include/asm/traps.h create mode 100644 arch/kvx/kernel/cpu.c create mode 100644 arch/kvx/kernel/dame_handler.c create mode 100644 arch/kvx/kernel/irq.c create mode 100644 arch/kvx/kernel/traps.c diff --git a/arch/kvx/include/asm/break_hook.h b/arch/kvx/include/asm/break= _hook.h new file mode 100644 index 0000000000000..011441b0076a0 --- /dev/null +++ b/arch/kvx/include/asm/break_hook.h @@ -0,0 +1,69 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef __ASM_KVX_BREAK_HOOK_H_ +#define __ASM_KVX_BREAK_HOOK_H_ + +#include + +#include +#include + +/* + * The following macros define the different causes of break: + * We use the `set $vsfr0 =3D $rXX` instruction which will raise a trap in= to the + * debugger. The trapping instruction is read and decoded to extract the s= ource + * register number. The source register number is used to differentiate the + * trap cause. + */ +#define BREAK_CAUSE_BUG KVX_REG_R1 +#define BREAK_CAUSE_KGDB_DYN KVX_REG_R2 +#define BREAK_CAUSE_KGDB_COMP KVX_REG_R3 +#define BREAK_CAUSE_BKPT KVX_REG_R63 + +/** + * enum break_ret - Break return value + * @BREAK_HOOK_HANDLED: Hook handled successfully + * @BREAK_HOOK_ERROR: Hook was not handled + */ +enum break_ret { + BREAK_HOOK_HANDLED =3D 0, + BREAK_HOOK_ERROR =3D 1, +}; + +/* + * The following macro assembles a `set` instruction targeting $vsfr0 + * using the source register whose number is __id. 
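+ * For example, KVX_BREAK_INSN(BREAK_CAUSE_BUG) encodes `set $vsfr0 = $r1`,
+ * which is the trapping instruction embedded by the BUG() macro in
+ * asm/bug.h.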
+ */ +#define KVX_BREAK_INSN(__id) \ + KVX_INSN_SET_SYLLABLE_0(KVX_INSN_PARALLEL_EOB, KVX_SFR_VSFR0, __id) + +#define KVX_BREAK_INSN_SIZE (KVX_INSN_SET_SIZE * KVX_INSN_SYLLABLE_WIDTH) + +struct pt_regs; + +/** + * struct break_hook - Break hook description + * @node: List node + * @handler: handler called when break matches this hook + * @imm: Immediate value expected for break insn + * @mode: Hook mode (user/kernel) + */ +struct break_hook { + struct list_head node; + int (*handler)(struct pt_regs *regs, struct break_hook *brk_hook); + u8 id; + u8 mode; +}; + +void kvx_skip_break_insn(struct pt_regs *regs); + +void break_hook_register(struct break_hook *brk_hook); +void break_hook_unregister(struct break_hook *brk_hook); + +int break_hook_handler(struct pt_regs *regs, u64 es); + +#endif /* __ASM_KVX_BREAK_HOOK_H_ */ diff --git a/arch/kvx/include/asm/bug.h b/arch/kvx/include/asm/bug.h new file mode 100644 index 0000000000000..62f556b00d5a2 --- /dev/null +++ b/arch/kvx/include/asm/bug.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_BUG_H +#define _ASM_KVX_BUG_H + +#include +#include +#include + +#include + +#ifdef CONFIG_GENERIC_BUG + +#define BUG_INSN KVX_BREAK_INSN(BREAK_CAUSE_BUG) + +#define __BUG_ENTRY_ADDR ".dword 1b" + +#ifdef CONFIG_DEBUG_BUGVERBOSE +#define __BUG_ENTRY_LAST_MEMBER flags +#define __BUG_ENTRY \ + __BUG_ENTRY_ADDR "\n\t" \ + ".dword %0\n\t" \ + ".short %1\n\t" +#else +#define __BUG_ENTRY_LAST_MEMBER file +#define __BUG_ENTRY \ + __BUG_ENTRY_ADDR "\n\t" +#endif + +#define BUG() \ +do { \ + __asm__ __volatile__ ( \ + "1:\n\t" \ + ".word " __stringify(BUG_INSN) "\n" \ + ".pushsection __bug_table,\"a\"\n\t" \ + "2:\n\t" \ + __BUG_ENTRY \ + ".fill 1, %2, 0\n\t" \ + ".popsection" \ + : \ + : "i" (__FILE__), "i" (__LINE__), \ + "i" (sizeof(struct bug_entry) - \ + offsetof(struct bug_entry, __BUG_ENTRY_LAST_MEMBER))); \ + unreachable(); \ +} while (0) + +#else /* CONFIG_GENERIC_BUG */ +#define BUG() \ +do { \ + __asm__ __volatile__ (".word " __stringify(BUG_INSN) "\n"); \ + unreachable(); \ +} while (0) +#endif /* CONFIG_GENERIC_BUG */ + +#define HAVE_ARCH_BUG + +struct pt_regs; + +void die(struct pt_regs *regs, unsigned long ea, const char *str); + +#include + +#endif /* _ASM_KVX_BUG_H */ diff --git a/arch/kvx/include/asm/entry-common.h b/arch/kvx/include/asm/ent= ry-common.h new file mode 100644 index 0000000000000..8c4b6d3241391 --- /dev/null +++ b/arch/kvx/include/asm/entry-common.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Julian Vetter + */ +#ifndef _ASM_KVX_ENTRY_COMMON_H +#define _ASM_KVX_ENTRY_COMMON_H + +#include +#include + +static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs) +{ + unsigned long ilr; + + /* + * Make sure DAMEs trigged by user space are reflected in $ILR + * (interrupts pending) bits + */ + __builtin_kvx_barrier(); + + ilr =3D kvx_sfr_get(ILR); + + if (ilr & KVX_SFR_ILR_IT16_MASK) { + if (user_mode(regs)) + force_sig_fault(SIGBUS, BUS_ADRERR, + (void __user *) NULL); + else + panic("DAME error encountered while in kernel!\n"); + } +} + +#define arch_enter_from_user_mode arch_enter_from_user_mode + +static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, + unsigned long ti_work) +{ + unsigned long ilr; + + __builtin_kvx_barrier(); + + ilr =3D kvx_sfr_get(ILR); + + if (ilr & KVX_SFR_ILR_IT16_MASK) + panic("DAME error encountered while in kernel!\n"); +} + +#define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare + +#define on_thread_stack() (on_task_stack(current, current_stack_pointer)) + +#endif /* _ASM_KVX_ENTRY_COMMON_H */ diff --git a/arch/kvx/include/asm/hw_irq.h b/arch/kvx/include/asm/hw_irq.h new file mode 100644 index 0000000000000..f073dba3b1c54 --- /dev/null +++ b/arch/kvx/include/asm/hw_irq.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * derived from arch/mips/include/asm/ide.h + * + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_HW_IRQ_H +#define _ASM_KVX_HW_IRQ_H + +void kvx_init_core_irq(void); + +#endif /* _ASM_KVX_HW_IRQ_H */ diff --git a/arch/kvx/include/asm/irqflags.h b/arch/kvx/include/asm/irqflag= s.h new file mode 100644 index 0000000000000..681c890b3fcdf --- /dev/null +++ b/arch/kvx/include/asm/irqflags.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_IRQFLAGS_H +#define _ASM_KVX_IRQFLAGS_H + +#include + +#include + +static inline notrace unsigned long arch_local_save_flags(void) +{ + return kvx_sfr_get(PS) & (1 << KVX_SFR_PS_IE_SHIFT); +} + +static inline notrace unsigned long arch_local_irq_save(void) +{ + unsigned long flags =3D arch_local_save_flags(); + + kvx_sfr_set_field(PS, IE, 0); + + return flags; +} + +static inline notrace void arch_local_irq_restore(unsigned long flags) +{ + /* If flags are set, interrupt are enabled), set the IE bit */ + if (flags) + kvx_sfr_set_field(PS, IE, 1); + else + kvx_sfr_set_field(PS, IE, 0); +} + +static inline notrace void arch_local_irq_enable(void) +{ + kvx_sfr_set_field(PS, IE, 1); +} + +static inline notrace void arch_local_irq_disable(void) +{ + kvx_sfr_set_field(PS, IE, 0); +} + +static inline notrace bool arch_irqs_disabled_flags(unsigned long flags) +{ + return (flags & (1 << KVX_SFR_PS_IE_SHIFT)) =3D=3D 0; +} + +static inline notrace bool arch_irqs_disabled(void) +{ + return arch_irqs_disabled_flags(kvx_sfr_get(PS)); +} + + +#endif /* _ASM_KVX_IRQFLAGS_H */ diff --git a/arch/kvx/include/asm/stacktrace.h b/arch/kvx/include/asm/stack= trace.h new file mode 100644 index 0000000000000..75112e733ebd4 --- /dev/null +++ b/arch/kvx/include/asm/stacktrace.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_STACKTRACE_H +#define _ASM_KVX_STACKTRACE_H + +#include + +/** + * Structure of a frame on the stack + */ +struct stackframe { + unsigned long fp; /* Next frame pointer */ + unsigned long ra; /* Return address */ +}; + +static inline bool on_task_stack(struct task_struct *tsk, unsigned long sp) +{ + unsigned long low =3D (unsigned long) task_stack_page(tsk); + unsigned long high =3D low + THREAD_SIZE; + + if (sp < low || sp >=3D high) + return false; + + return true; +} + +void show_stacktrace(struct task_struct *task, struct pt_regs *regs, + const char *loglvl); + + +void walk_stackframe(struct task_struct *task, struct stackframe *frame, + bool (*fn)(unsigned long, void *, const char *), void *arg, + const char *loglvl); + +static inline void start_stackframe(struct stackframe *frame, + unsigned long fp, + unsigned long pc) +{ + frame->fp =3D fp; + frame->ra =3D pc; +} +#endif /* _ASM_KVX_STACKTRACE_H */ diff --git a/arch/kvx/include/asm/traps.h b/arch/kvx/include/asm/traps.h new file mode 100644 index 0000000000000..7d59bade058f2 --- /dev/null +++ b/arch/kvx/include/asm/traps.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Guillaume Thouvenin + * Marius Gligor + */ + +#ifndef _ASM_KVX_TRAPS_H +#define _ASM_KVX_TRAPS_H + +#include + +#define KVX_TRAP_RESET 0x0 +#define KVX_TRAP_OPCODE 0x1 +#define KVX_TRAP_PRIVILEGE 0x2 +#define KVX_TRAP_DMISALIGN 0x3 +#define KVX_TRAP_PSYSERROR 0x4 +#define KVX_TRAP_DSYSERROR 0x5 +#define KVX_TRAP_PDECCERROR 0x6 +#define KVX_TRAP_DDECCERROR 0x7 +#define KVX_TRAP_PPARERROR 0x8 +#define KVX_TRAP_DPARERROR 0x9 +#define KVX_TRAP_PSECERROR 0xA +#define KVX_TRAP_DSECERROR 0xB +#define KVX_TRAP_NOMAPPING 0xC +#define KVX_TRAP_PROTECTION 0xD +#define KVX_TRAP_WRITETOCLEAN 0xE +#define KVX_TRAP_ATOMICTOCLEAN 0xF +#define KVX_TRAP_TPAR 0x10 +#define KVX_TRAP_DOUBLE_ECC 0x11 +#define KVX_TRAP_VSFR 0x12 +#define KVX_TRAP_PL_OVERFLOW 0x13 + +#define KVX_TRAP_COUNT 0x14 + +#define KVX_TRAP_SFRI_NOT_BCU 0 +#define KVX_TRAP_SFRI_GET 1 +#define KVX_TRAP_SFRI_IGET 2 +#define KVX_TRAP_SFRI_SET 4 +#define KVX_TRAP_SFRI_WFXL 5 +#define KVX_TRAP_SFRI_WFXM 6 +#define KVX_TRAP_SFRI_RSWAP 7 + +/* Access type on memory trap */ +#define KVX_TRAP_RWX_FETCH 1 +#define KVX_TRAP_RWX_WRITE 2 +#define KVX_TRAP_RWX_READ 4 +#define KVX_TRAP_RWX_ATOMIC 6 + +#ifndef __ASSEMBLY__ + +typedef void (*trap_handler_func) (struct pt_regs *regs, uint64_t es, + uint64_t ea); + +#define trap_cause(__es) kvx_sfr_field_val(__es, ES, HTC) + +#define trap_sfri(__es) \ + kvx_sfr_field_val((__es), ES, SFRI) + +#define trap_gprp(__es) \ + kvx_sfr_field_val((__es), ES, GPRP) + +#define trap_sfrp(__es) \ + kvx_sfr_field_val((__es), ES, SFRP) + +#ifdef CONFIG_MMU +extern void do_page_fault(struct pt_regs *regs, uint64_t es, uint64_t ea); +extern void do_writetoclean(struct pt_regs *regs, uint64_t es, uint64_t ea= ); +#endif + +void user_do_sig(struct pt_regs *regs, int signo, int code, unsigned long = addr); +void do_debug(struct pt_regs *regs, u64 ea); +void do_hwtrap(struct pt_regs *regs, uint64_t es, uint64_t ea); +void do_syscall(struct pt_regs *regs); + +#endif /* __ASSEMBLY__ */ + +#endif diff --git a/arch/kvx/kernel/cpu.c b/arch/kvx/kernel/cpu.c new file mode 100644 index 0000000000000..b8412be1e4134 --- /dev/null +++ b/arch/kvx/kernel/cpu.c @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2024 Kalray Inc. 
+ * Author(s): Yann Sionneau + */ + +#include +#include + +int kvx_of_parent_cpuid(struct device_node *node, unsigned long *cpuid) +{ + for (; node; node =3D node->parent) { + if (of_device_is_compatible(node, "kalray,kv3-pe")) { + *cpuid =3D (unsigned long)of_get_cpu_hwid(node, 0); + if (*cpuid =3D=3D ~0UL) { + pr_warn("Found CPU without CPU ID\n"); + return -ENODEV; + } + return 0; + } + } + + return -1; +} diff --git a/arch/kvx/kernel/dame_handler.c b/arch/kvx/kernel/dame_handler.c new file mode 100644 index 0000000000000..ce190bee82113 --- /dev/null +++ b/arch/kvx/kernel/dame_handler.c @@ -0,0 +1,113 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static unsigned int kvx_dame_irq; + +static const char *error_str[KVX_SFR_ES_ITI_WIDTH] =3D { + "PSE", + "PILSY", + "PILDE", + "PILPA", + "DSE", + "DILSY", + "DILDE", + "DILPA", + "DDEE", + "DSYE" +}; + +static irqreturn_t dame_irq_handler(int irq, void *dev_id) +{ + int bit; + struct pt_regs *regs =3D get_irq_regs(); + unsigned long error_status =3D kvx_sfr_field_val(regs->es, ES, ITI); + + if (error_status) { + pr_err("Memory Error:\n"); + for_each_set_bit(bit, &error_status, KVX_SFR_ES_ITI_WIDTH) + pr_err("- %s\n", error_str[bit]); + } + + /* + * If the DAME happened in user mode, we can handle it properly + * by killing the user process. + * Otherwise, if we are in kernel, we are fried... + */ + if (user_mode(regs)) + force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *) NULL); + else + die(regs, 0, "DAME error encountered while in kernel !!!!\n"); + + return IRQ_HANDLED; +} + +static int kvx_dame_starting_cpu(unsigned int cpu) +{ + enable_percpu_irq(kvx_dame_irq, IRQ_TYPE_NONE); + + return 0; +} + +static int kvx_dame_dying_cpu(unsigned int cpu) +{ + disable_percpu_irq(kvx_dame_irq); + + return 0; +} + +static int __init dame_handler_init(void) +{ + struct device_node *dame_node; + int ret; + + dame_node =3D of_find_compatible_node(NULL, NULL, + "kalray,kvx-dame-handler"); + if (!dame_node) { + pr_err("Failed to find dame handler device tree node\n"); + return -ENODEV; + } + + kvx_dame_irq =3D irq_of_parse_and_map(dame_node, 0); + of_node_put(dame_node); + + if (!kvx_dame_irq) { + pr_err("Failed to parse dame irq\n"); + return -ENODEV; + } + + ret =3D request_percpu_irq(kvx_dame_irq, dame_irq_handler, "dame", + &kvx_dame_irq); + if (ret) { + pr_err("Failed to request dame irq\n"); + return -ENODEV; + } + + ret =3D cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, + "kvx/dame_handler:online", + kvx_dame_starting_cpu, + kvx_dame_dying_cpu); + if (ret <=3D 0) { + pr_err("Failed to setup cpuhp\n"); + return ret; + } + + pr_info("DAME handler registered\n"); + + return 0; +} + +core_initcall(dame_handler_init); diff --git a/arch/kvx/kernel/irq.c b/arch/kvx/kernel/irq.c new file mode 100644 index 0000000000000..f0f243eb20130 --- /dev/null +++ b/arch/kvx/kernel/irq.c @@ -0,0 +1,52 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger + */ + +#include +#include +#include +#include +#include + +#define IT_MASK(__it) (KVX_SFR_ILL_ ## __it ## _MASK) +#define IT_LEVEL(__it, __level) \ + (__level##ULL << KVX_SFR_ILL_ ## __it ## _SHIFT) + +/* + * Early Hardware specific Interrupt setup + * - Called very early (start_kernel -> setup_arch -> setup_processor) + * - Needed for each CPU + */ +void kvx_init_core_irq(void) +{ + /* + * On KVX, the kernel only cares about the following ITs: + * - IT0: Timer 0 + * - IT2: Watchdog + * - IT4: APIC IT 1 + * - IT24: IPI + */ + uint64_t mask =3D IT_MASK(IT0) | IT_MASK(IT2) | IT_MASK(IT4) | + IT_MASK(IT24); + + /* + * Specific priorities for ITs: + * - Watchdog has the highest priority: 3 + * - Timer has priority: 2 + * - APIC entries have the lowest priority: 1 + */ + uint64_t value =3D IT_LEVEL(IT0, 0x2) | IT_LEVEL(IT2, 0x3) | + IT_LEVEL(IT4, 0x1) | IT_LEVEL(IT24, 0x1); + + kvx_sfr_set_mask(ILL, mask, value); + + /* Set core level to 0 */ + kvx_sfr_set_field(PS, IL, 0); +} + +void __init init_IRQ(void) +{ + irqchip_init(); +} diff --git a/arch/kvx/kernel/traps.c b/arch/kvx/kernel/traps.c new file mode 100644 index 0000000000000..d8f5732f5367b --- /dev/null +++ b/arch/kvx/kernel/traps.c @@ -0,0 +1,278 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Guillaume Thouvenin + * Marius Gligor + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +int show_unhandled_signals =3D 1; + +static DEFINE_SPINLOCK(die_lock); + +static trap_handler_func trap_handler_table[KVX_TRAP_COUNT] =3D { NULL }; + +/* Trap names associated to the trap numbers */ +static const char * const trap_name[] =3D { + "RESET", + "OPCODE", + "PRIVILEGE", + "DMISALIGN", + "PSYSERROR", + "DSYSERROR", + "PDECCERROR", + "DDECCERROR", + "PPARERROR", + "DPARERROR", + "PSECERROR", + "DSECERROR", + /* MMU related traps */ + "NOMAPPING", + "PROTECTION", + "WRITETOCLEAN", + "ATOMICTOCLEAN", + "TPAR", + "DOUBLE_ECC", + "VSFR", + "PL_OVERFLOW" +}; + +void die(struct pt_regs *regs, unsigned long ea, const char *str) +{ + static int die_counter; + int ret; + + oops_enter(); + + spin_lock_irq(&die_lock); + console_verbose(); + bust_spinlocks(1); + + pr_emerg("%s [#%d]\n", str, ++die_counter); + print_modules(); + show_regs(regs); + + if (!user_mode(regs)) + show_stacktrace(NULL, regs, KERN_EMERG); + + ret =3D notify_die(DIE_OOPS, str, regs, ea, 0, SIGSEGV); + + bust_spinlocks(0); + add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE); + spin_unlock_irq(&die_lock); + oops_exit(); + + if (in_interrupt()) + panic("Fatal exception in interrupt"); + if (panic_on_oops) + panic("Fatal exception"); + if (ret !=3D NOTIFY_STOP) + make_task_dead(SIGSEGV); +} + +void user_do_sig(struct pt_regs *regs, int signo, int code, unsigned long = addr) +{ + struct task_struct *tsk =3D current; + + if (show_unhandled_signals && unhandled_signal(tsk, signo) + && printk_ratelimit()) { + pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x%lx", + tsk->comm, task_pid_nr(tsk), signo, code, addr); + print_vma_addr(KERN_CONT " in ", instruction_pointer(regs)); + pr_cont("\n"); + show_regs(regs); + } + if (signo =3D=3D SIGKILL) { + force_sig(signo); + return; + } + force_sig_fault(signo, code, (void __user *) addr); +} + +static void do_trap_error(struct pt_regs *regs, int signo, int code, + unsigned long addr, const char *str) +{ + if 
(user_mode(regs)) {
+		user_do_sig(regs, signo, code, addr);
+	} else {
+		if (!fixup_exception(regs))
+			die(regs, addr, str);
+	}
+}
+
+static void panic_or_kill(uint64_t es, uint64_t ea, struct pt_regs *regs,
+			  int signo, int sigcode)
+{
+	if (user_mode(regs)) {
+		user_do_sig(regs, signo, sigcode, ea);
+		return;
+	}
+
+	pr_alert(CUT_HERE "ERROR: TRAP %s received at 0x%.16llx\n",
+		 trap_name[trap_cause(es)], regs->spc);
+	die(regs, ea, "Oops");
+	make_task_dead(SIGKILL);
+}
+
+int is_valid_bugaddr(unsigned long pc)
+{
+	/*
+	 * Since the bug was reported, this means that the break hook handling
+	 * already checked the faulting instruction, so there is no need for
+	 * an additional check here. This is a BUG for sure.
+	 */
+	return 1;
+}
+
+static int bug_break_handler(struct pt_regs *regs, struct break_hook *brk_hook)
+{
+	enum bug_trap_type type;
+
+	type = report_bug(regs->spc, regs);
+	switch (type) {
+	case BUG_TRAP_TYPE_NONE:
+		return BREAK_HOOK_ERROR;
+	case BUG_TRAP_TYPE_WARN:
+		break;
+	case BUG_TRAP_TYPE_BUG:
+		die(regs, regs->spc, "Kernel BUG");
+		break;
+	}
+
+	/* Skip over break insn if we survived! */
+	kvx_skip_break_insn(regs);
+
+	return BREAK_HOOK_HANDLED;
+}
+
+static struct break_hook bug_break_hook = {
+	.handler = bug_break_handler,
+	.id = BREAK_CAUSE_BUG,
+	.mode = MODE_KERNEL,
+};
+
+#define GEN_TRAP_HANDLER(__name, __sig, __code) \
+static void __name ## _trap_handler(struct pt_regs *regs, uint64_t es, \
+				    uint64_t ea) \
+{ \
+	panic_or_kill(es, ea, regs, __sig, __code); \
+}
+
+GEN_TRAP_HANDLER(default, SIGKILL, SI_KERNEL);
+GEN_TRAP_HANDLER(privilege, SIGILL, ILL_PRVREG);
+GEN_TRAP_HANDLER(dmisalign, SIGBUS, BUS_ADRALN);
+GEN_TRAP_HANDLER(syserror, SIGBUS, BUS_ADRERR);
+GEN_TRAP_HANDLER(opcode, SIGILL, ILL_ILLOPC);
+
+static void register_trap_handler(unsigned int trap_nb, trap_handler_func fn)
+{
+	if (trap_nb >= KVX_TRAP_COUNT || fn == NULL)
+		panic("Failed to register handler #%d\n", trap_nb);
+
+	trap_handler_table[trap_nb] = fn;
+}
+
+static void do_vsfr_fault(struct pt_regs *regs, uint64_t es, uint64_t ea)
+{
+	if (break_hook_handler(regs, es) == BREAK_HOOK_HANDLED)
+		return;
+
+	panic_or_kill(es, ea, regs, SIGILL, ILL_PRVREG);
+}
+
+void __init trap_init(void)
+{
+	int i;
+
+	break_hook_register(&bug_break_hook);
+
+	for (i = 0; i < KVX_TRAP_COUNT; i++)
+		register_trap_handler(i, default_trap_handler);
+#ifdef CONFIG_MMU
+	register_trap_handler(KVX_TRAP_NOMAPPING, do_page_fault);
+	register_trap_handler(KVX_TRAP_PROTECTION, do_page_fault);
+	register_trap_handler(KVX_TRAP_WRITETOCLEAN, do_writetoclean);
+#endif
+
+	register_trap_handler(KVX_TRAP_PSYSERROR, syserror_trap_handler);
+	register_trap_handler(KVX_TRAP_DSYSERROR, syserror_trap_handler);
+	register_trap_handler(KVX_TRAP_PRIVILEGE, privilege_trap_handler);
+	register_trap_handler(KVX_TRAP_OPCODE, opcode_trap_handler);
+	register_trap_handler(KVX_TRAP_DMISALIGN, dmisalign_trap_handler);
+	register_trap_handler(KVX_TRAP_VSFR, do_vsfr_fault);
+}
+
+void do_debug(struct pt_regs *regs, u64 ea)
+{
+	irqentry_state_t state = irqentry_enter(regs);
+
+	debug_handler(regs, ea);
+
+	irqentry_exit(regs, state);
+}
+
+void do_hwtrap(struct pt_regs *regs, uint64_t es, uint64_t ea)
+{
+	irqentry_state_t state = irqentry_enter(regs);
+	trap_handler_func trap_handler;
+	int htc = trap_cause(es);
+
+	/* Valid trap causes are below KVX_TRAP_COUNT */
+	if (unlikely(htc >= KVX_TRAP_COUNT)) {
+		pr_err("Invalid trap %d!\n", htc);
+		goto done;
+	}
+
+	trap_handler = trap_handler_table[htc];
+
+	/* If irqs were
enabled in the preempted context, reenable them */ + if (!regs_irqs_disabled(regs)) + local_irq_enable(); + + trap_handler(regs, es, ea); + + local_irq_disable(); + +done: + irqentry_exit(regs, state); +} + +void do_syscall(struct pt_regs *regs) +{ + if (user_mode(regs)) { + long syscall =3D (regs->es & KVX_SFR_ES_SN_MASK) >> KVX_SFR_ES_SN_SHIFT; + + syscall =3D syscall_enter_from_user_mode(regs, syscall); + + syscall_handler(regs, (ulong)syscall); + + syscall_exit_to_user_mode(regs); + } else { + irqentry_state_t state =3D irqentry_nmi_enter(regs); + + do_trap_error(regs, SIGILL, ILL_ILLTRP, regs->spc, + "Oops - scall from PL2"); + + irqentry_nmi_exit(regs, state); + } +} --=20 2.45.2 From nobody Sun Sep 14 14:33:40 2025 Received: from smtpout143.security-mail.net (smtpout143.security-mail.net [85.31.212.143]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D893816D31D for ; Mon, 22 Jul 2024 09:45:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=85.31.212.143 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721641511; cv=none; b=lJTcV/O3W5cGuY5nDm24kSOh+cpBGg59q0Y2ColgAww/Z5AD3aD04QtFG0vmDSG2RVRpEe/trYQzSgj/BHJR2o9aLRbqKEiiOwmUV6wSXMEEr2uNG/VwdEkS7ZpU5OffQixOq7q+raF089XjZKku8Xqf3m2LYx47RZBYhptXar8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721641511; c=relaxed/simple; bh=wtUIEjp9nl8GBvzI2dMgN5okl7BiS/K6FDMX592+yNo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=sdsBC4NkCRUDT82yWNt9T++kTJmlO79+7IvtQv8HvEXcML7AJ7mzQhioOJSefDBmKbRn/keqnd8AxEeJ/apMmp/CizYRdv7D8aDwgvCSvkFjGmVhWPs2jpZ1mQH8/jYmKoar4g9VQkBEizPf9bgGyhUz32ZucG0L9jnxM1gEzb8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=kalrayinc.com; spf=pass smtp.mailfrom=kalrayinc.com; dkim=pass (1024-bit key) header.d=kalrayinc.com header.i=@kalrayinc.com header.b=TuR8JI1L; arc=none smtp.client-ip=85.31.212.143 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=kalrayinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=kalrayinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=kalrayinc.com header.i=@kalrayinc.com header.b="TuR8JI1L" Received: from localhost (localhost [127.0.0.1]) by fx403.security-mail.net (Postfix) with ESMTP id CEA7948EDE9 for ; Mon, 22 Jul 2024 11:43:33 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kalrayinc.com; s=sec-sig-email; t=1721641413; bh=wtUIEjp9nl8GBvzI2dMgN5okl7BiS/K6FDMX592+yNo=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=TuR8JI1L4ZodMtuazz+okKzLYIoXD8FD2q4TqYRHxJ+X/pm6X+HBdEWhApGiyYh2C 99X/jKAdbwWwn7AhjQQhwA6St2TXGWRPyWpBEOcoZXGT3S6utvKn/0EMNCmSQerD4o ummccDvPbx8L/b8T2HPI9Ikq3MLfB1hjnfBMyHSg= Received: from fx403 (localhost [127.0.0.1]) by fx403.security-mail.net (Postfix) with ESMTP id 8943848EE82; Mon, 22 Jul 2024 11:43:33 +0200 (CEST) Received: from srvsmtp.lin.mbt.kalray.eu (unknown [217.181.231.53]) by fx403.security-mail.net (Postfix) with ESMTPS id ADA9948E46E; Mon, 22 Jul 2024 11:43:32 +0200 (CEST) Received: from junon.lan.kalrayinc.com (unknown [192.168.37.161]) by srvsmtp.lin.mbt.kalray.eu (Postfix) with ESMTPS id 75FD040317; Mon, 22 Jul 2024 11:43:32 +0200 (CEST) X-Secumail-id: From: ysionneau@kalrayinc.com To: linux-kernel@vger.kernel.org, Thomas 
Gleixner
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger,
 Vincent Chardon
Subject: [RFC PATCH v3 19/37] irqchip: Add irq-kvx-apic-gic driver
Date: Mon, 22 Jul 2024 11:41:30 +0200
Message-ID: <20240722094226.21602-20-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Each Cluster of the Coolidge SoC includes an Advanced Programmable
Interrupt Controller (APIC) and a Generic Interrupt Controller (GIC).
The APIC GIC acts as an intermediary interrupt controller,
muxing/routing incoming interrupts to cores in the cluster.

The 139 possible input interrupt lines are organized as follows:
 - 128 from the mailbox controller (one interrupt per mailbox)
 - 1 from the NoC router
 - 5 from the IOMMUs
 - 1 from the L2 cache DMA job FIFO
 - 1 from the cluster watchdog
 - 2 for SECC, DECC
 - 1 from the Data NoC

The 72 possible output interrupt lines:
 - 68: 4 interrupts per core (17 cores)
 - 1 for the L2 cache controller
 - 3 extra lines used for padding

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Vincent Chardon
Signed-off-by: Vincent Chardon
Signed-off-by: Yann Sionneau
---

Notes:
    V1 -> V2:
     - removed irq-kvx-itgen driver (moved in its own patch)
     - removed irq-kvx-apic-mailbox driver (moved in its own patch)
     - removed irq-kvx-core-intc driver (moved in its own patch)
     - removed print on probe success
    V2 -> V3: update compatible

 drivers/irqchip/Kconfig            |   6 +
 drivers/irqchip/Makefile           |   1 +
 drivers/irqchip/irq-kvx-apic-gic.c | 356 +++++++++++++++++++++++++++++
 3 files changed, 363 insertions(+)
 create mode 100644 drivers/irqchip/irq-kvx-apic-gic.c

diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index 14464716bacbb..566425731b757 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -332,6 +332,12 @@ config MIPS_GIC
 	select IRQ_DOMAIN_HIERARCHY
 	select MIPS_CM
 
+config KVX_APIC_GIC
+	bool
+	depends on KVX
+	select IRQ_DOMAIN
+	select IRQ_DOMAIN_HIERARCHY
+
 config INGENIC_IRQ
 	bool
 	depends on MACH_INGENIC
diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
index d9dc3d99aaa86..f59255947c8ed 100644
--- a/drivers/irqchip/Makefile
+++ b/drivers/irqchip/Makefile
@@ -68,6 +68,7 @@ obj-$(CONFIG_BCM7120_L2_IRQ)	+= irq-bcm7120-l2.o
 obj-$(CONFIG_BRCMSTB_L2_IRQ)	+= irq-brcmstb-l2.o
 obj-$(CONFIG_KEYSTONE_IRQ)	+= irq-keystone.o
 obj-$(CONFIG_MIPS_GIC)		+= irq-mips-gic.o
+obj-$(CONFIG_KVX_APIC_GIC)	+= irq-kvx-apic-gic.o
 obj-$(CONFIG_ARCH_MEDIATEK)	+= irq-mtk-sysirq.o irq-mtk-cirq.o
 obj-$(CONFIG_ARCH_DIGICOLOR)	+= irq-digicolor.o
 obj-$(CONFIG_ARCH_SA1100)	+= irq-sa11x0.o
diff --git a/drivers/irqchip/irq-kvx-apic-gic.c b/drivers/irqchip/irq-kvx-apic-gic.c
new file mode 100644
index 0000000000000..95196bd7ef058
--- /dev/null
+++ b/drivers/irqchip/irq-kvx-apic-gic.c
@@ -0,0 +1,356 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2017 - 2022 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Julian Vetter
+ */
+
+#define pr_fmt(fmt)	"kvx_apic_gic: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* The APIC is organized in 18 groups of 4 output lines.
+ * However, the two upper lines are for the Secure RM and the DMA engine,
+ * thus we do not have to use them.
+ */
+#define KVX_GIC_PER_CPU_IT_COUNT	4
+#define KVX_GIC_INPUT_IT_COUNT		0x9D
+#define KVX_GIC_OUTPUT_IT_COUNT		0x10
+
+/* GIC enable register definitions */
+#define KVX_GIC_ENABLE_OFFSET		0x0
+#define KVX_GIC_ENABLE_ELEM_SIZE	0x1
+#define KVX_GIC_ELEM_SIZE		0x400
+
+/* GIC status lac register definitions */
+#define KVX_GIC_STATUS_LAC_OFFSET	0x120
+#define KVX_GIC_STATUS_LAC_ELEM_SIZE	0x8
+#define KVX_GIC_STATUS_LAC_ARRAY_SIZE	0x3
+
+/**
+ * For each CPU, there are 4 output lines coming from the APIC GIC.
+ * We only use 1 line and this structure represents this line.
+ * @base	Output line base address
+ * @cpu	CPU associated to this line
+ */
+struct gic_out_irq_line {
+	void __iomem *base;
+	unsigned int cpu;
+};
+
+/**
+ * Input irq line.
+ * This structure is used to store the status of the input line and the
+ * associated output line.
+ * @enabled	Boolean for line status
+ * @cpu	CPU currently receiving this interrupt
+ * @it_num	Interrupt number
+ */
+struct gic_in_irq_line {
+	bool enabled;
+	struct gic_out_irq_line *out_line;
+	unsigned int it_num;
+};
+
+/**
+ * struct kvx_apic_gic - kvx apic gic
+ * @base: Base address of the controller
+ * @domain	Domain for this controller
+ * @input_nr_irqs: maximum number of supported input interrupts
+ * @cpus: Per cpu interrupt configuration
+ * @output_irq: Array of output irq lines
+ * @input_irq: Array of input irq lines
+ */
+struct kvx_apic_gic {
+	raw_spinlock_t lock;
+	void __iomem *base;
+	struct irq_domain *domain;
+	uint32_t input_nr_irqs;
+	/* For each cpu, there is an output IT line */
+	struct gic_out_irq_line output_irq[KVX_GIC_OUTPUT_IT_COUNT];
+	/* Input interrupt status */
+	struct gic_in_irq_line input_irq[KVX_GIC_INPUT_IT_COUNT];
+};
+
+static int gic_parent_irq;
+
+/**
+ * Enable/Disable an output irq line.
+ * This function is used by both mask/unmask to disable/enable the line.
+ */
+static void irq_line_set_enable(struct gic_out_irq_line *irq_line,
+				struct gic_in_irq_line *in_irq_line,
+				int enable)
+{
+	void __iomem *enable_line_addr = irq_line->base +
+		KVX_GIC_ENABLE_OFFSET +
+		in_irq_line->it_num * KVX_GIC_ENABLE_ELEM_SIZE;
+
+	writeb((uint8_t) enable ? 1 : 0, enable_line_addr);
+	in_irq_line->enabled = enable;
+}
+
+static void kvx_apic_gic_set_line(struct irq_data *data, int enable)
+{
+	struct kvx_apic_gic *gic = irq_data_get_irq_chip_data(data);
+	unsigned int in_irq = irqd_to_hwirq(data);
+	struct gic_in_irq_line *in_line = &gic->input_irq[in_irq];
+	struct gic_out_irq_line *out_line = in_line->out_line;
+
+	raw_spin_lock(&gic->lock);
+	/* Set line enable on currently assigned cpu */
+	irq_line_set_enable(out_line, in_line, enable);
+	raw_spin_unlock(&gic->lock);
+}
+
+static void kvx_apic_gic_mask(struct irq_data *data)
+{
+	kvx_apic_gic_set_line(data, 0);
+}
+
+static void kvx_apic_gic_unmask(struct irq_data *data)
+{
+	kvx_apic_gic_set_line(data, 1);
+}
+
+#ifdef CONFIG_SMP
+
+static int kvx_apic_gic_set_affinity(struct irq_data *d,
+				     const struct cpumask *cpumask,
+				     bool force)
+{
+	struct kvx_apic_gic *gic = irq_data_get_irq_chip_data(d);
+	unsigned int new_cpu;
+	unsigned int hw_irq = irqd_to_hwirq(d);
+	struct gic_in_irq_line *input_line = &gic->input_irq[hw_irq];
+	struct gic_out_irq_line *new_out_line;
+
+	/* We assume there is only one cpu in the mask */
+	new_cpu = cpumask_first(cpumask);
+	new_out_line = &gic->output_irq[new_cpu];
+
+	raw_spin_lock(&gic->lock);
+
+	/* Nothing to do, line is the same */
+	if (new_out_line == input_line->out_line)
+		goto out;
+
+	/* If the old line was enabled, enable the new one before disabling
+	 * the old one
+	 */
+	if (input_line->enabled)
+		irq_line_set_enable(new_out_line, input_line, 1);
+
+	/* Disable it on the old line */
+	irq_line_set_enable(input_line->out_line, input_line, 0);
+
+	/* Assign new output line to input IRQ */
+	input_line->out_line = new_out_line;
+
+out:
+	raw_spin_unlock(&gic->lock);
+
+	irq_data_update_effective_affinity(d, cpumask_of(new_cpu));
+
+	return IRQ_SET_MASK_OK;
+}
+#endif
+
+static struct irq_chip kvx_apic_gic_chip = {
+	.name = "kvx apic gic",
+	.irq_mask = kvx_apic_gic_mask,
+	.irq_unmask = kvx_apic_gic_unmask,
+#ifdef CONFIG_SMP
+	.irq_set_affinity = kvx_apic_gic_set_affinity,
+#endif
+};
+
+static int kvx_apic_gic_alloc(struct irq_domain *domain, unsigned int virq,
+			      unsigned int nr_irqs, void *args)
+{
+	int i;
+	struct irq_fwspec *fwspec = args;
+	int hwirq = fwspec->param[0];
+
+	for (i = 0; i < nr_irqs; i++) {
+		irq_domain_set_info(domain, virq + i, hwirq + i,
+				    &kvx_apic_gic_chip,
+				    domain->host_data, handle_simple_irq,
+				    NULL, NULL);
+	}
+
+	return 0;
+}
+
+static const struct irq_domain_ops kvx_apic_gic_domain_ops = {
+	.alloc = kvx_apic_gic_alloc,
+	.free = irq_domain_free_irqs_common,
+};
+
+static void irq_line_get_status_lac(struct gic_out_irq_line *out_irq_line,
+				    uint64_t status[KVX_GIC_STATUS_LAC_ARRAY_SIZE])
+{
+	int i;
+
+	for (i = 0; i < KVX_GIC_STATUS_LAC_ARRAY_SIZE; i++) {
+		status[i] = readq(out_irq_line->base +
+				  KVX_GIC_STATUS_LAC_OFFSET +
+				  i * KVX_GIC_STATUS_LAC_ELEM_SIZE);
+	}
+}
+
+static void kvx_apic_gic_handle_irq(struct irq_desc *desc)
+{
+	struct kvx_apic_gic *gic_data = irq_desc_get_handler_data(desc);
+	struct gic_out_irq_line *out_line;
+	uint64_t status[KVX_GIC_STATUS_LAC_ARRAY_SIZE];
+	unsigned long irqn, cascade_irq;
+	unsigned long cpu = smp_processor_id();
+
+	out_line = &gic_data->output_irq[cpu];
+
+	irq_line_get_status_lac(out_line, status);
+
+	for_each_set_bit(irqn, (unsigned long *) status,
+			 KVX_GIC_STATUS_LAC_ARRAY_SIZE * BITS_PER_LONG) {
+
+		cascade_irq = irq_find_mapping(gic_data->domain, irqn);
+
+		generic_handle_irq(cascade_irq);
+	}
+}
+
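To make the register layout above concrete: apic_gic_init() below points each output line at a window of KVX_GIC_ELEM_SIZE * KVX_GIC_PER_CPU_IT_COUNT bytes per CPU, and irq_line_set_enable() then addresses one enable byte per input line inside that window. A minimal sketch, using only the constants defined earlier in this file, of what enabling input line 5 on CPU 2's output line boils down to:

	/* out_base: CPU 2's window; one byte per input line controls enable */
	void __iomem *out_base = gic->base +
		2 * (KVX_GIC_ELEM_SIZE * KVX_GIC_PER_CPU_IT_COUNT);

	writeb(1, out_base + KVX_GIC_ENABLE_OFFSET + 5 * KVX_GIC_ENABLE_ELEM_SIZE);
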
+static void __init apic_gic_init(struct kvx_apic_gic *gic)
+{
+	unsigned int cpu, line;
+	struct gic_in_irq_line *input_irq_line;
+	struct gic_out_irq_line *output_irq_line;
+	uint64_t status[KVX_GIC_STATUS_LAC_ARRAY_SIZE];
+
+	/* Initialize all input lines (device -> ) */
+	for (line = 0; line < KVX_GIC_INPUT_IT_COUNT; line++) {
+		input_irq_line = &gic->input_irq[line];
+		input_irq_line->enabled = false;
+		/* All input lines map on output 0 */
+		input_irq_line->out_line = &gic->output_irq[0];
+		input_irq_line->it_num = line;
+	}
+
+	/* Clear all output lines (-> cpus) */
+	for (cpu = 0; cpu < KVX_GIC_OUTPUT_IT_COUNT; cpu++) {
+		output_irq_line = &gic->output_irq[cpu];
+		output_irq_line->cpu = cpu;
+		output_irq_line->base = gic->base +
+			cpu * (KVX_GIC_ELEM_SIZE * KVX_GIC_PER_CPU_IT_COUNT);
+
+		/* Disable all external lines on this core */
+		for (line = 0; line < KVX_GIC_INPUT_IT_COUNT; line++)
+			irq_line_set_enable(output_irq_line,
+					    &gic->input_irq[line], 0x0);
+
+		irq_line_get_status_lac(output_irq_line, status);
+	}
+}
+
+static int kvx_gic_starting_cpu(unsigned int cpu)
+{
+	enable_percpu_irq(gic_parent_irq, IRQ_TYPE_NONE);
+
+	return 0;
+}
+
+static int kvx_gic_dying_cpu(unsigned int cpu)
+{
+	disable_percpu_irq(gic_parent_irq);
+
+	return 0;
+}
+
+static int __init kvx_init_apic_gic(struct device_node *node,
+				    struct device_node *parent)
+{
+	struct kvx_apic_gic *gic;
+	int ret;
+	unsigned int irq;
+
+	if (!parent) {
+		pr_err("kvx apic gic does not have parent\n");
+		return -EINVAL;
+	}
+
+	gic = kzalloc(sizeof(*gic), GFP_KERNEL);
+	if (!gic)
+		return -ENOMEM;
+
+	if (of_property_read_u32(node, "kalray,intc-nr-irqs",
+				 &gic->input_nr_irqs))
+		gic->input_nr_irqs = KVX_GIC_INPUT_IT_COUNT;
+
+	if (WARN_ON(gic->input_nr_irqs > KVX_GIC_INPUT_IT_COUNT)) {
+		ret = -EINVAL;
+		goto err_kfree;
+	}
+
+	gic->base = of_io_request_and_map(node, 0, node->name);
+	if (!gic->base) {
+		ret = -EINVAL;
+		goto err_kfree;
+	}
+
+	raw_spin_lock_init(&gic->lock);
+	apic_gic_init(gic);
+
+	gic->domain = irq_domain_add_linear(node,
+					    gic->input_nr_irqs,
+					    &kvx_apic_gic_domain_ops,
+					    gic);
+	if (!gic->domain) {
+		pr_err("Failed to add IRQ domain\n");
+		ret = -EINVAL;
+		goto err_iounmap;
+	}
+
+	irq = irq_of_parse_and_map(node, 0);
+	if (irq <= 0) {
+		pr_err("unable to parse irq\n");
+		ret = -EINVAL;
+		goto err_irq_domain_remove;
+	}
+
+	irq_set_chained_handler_and_data(irq, kvx_apic_gic_handle_irq,
+					 gic);
+
+	gic_parent_irq = irq;
+	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
+				"kvx/gic:online",
+				kvx_gic_starting_cpu,
+				kvx_gic_dying_cpu);
+	if (ret < 0) {
+		pr_err("Failed to setup hotplug state");
+		goto err_irq_unmap;
+	}
+
+	return 0;
+
+err_irq_unmap:
+	irq_dispose_mapping(irq);
+err_irq_domain_remove:
+	irq_domain_remove(gic->domain);
+err_iounmap:
+	iounmap(gic->base);
+err_kfree:
+	kfree(gic);
+
+	return ret;
+}
+
+IRQCHIP_DECLARE(kvx_apic_gic, "kalray,coolidge-apic-gic", kvx_init_apic_gic);
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger,
 Vincent Chardon
Subject: [RFC PATCH v3 20/37] irqchip: Add irq-kvx-itgen driver
Date: Mon, 22 Jul 2024 11:41:31 +0200
Message-ID: <20240722094226.21602-21-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

The Kalray Coolidge SoC contains several interrupt generators (itgen).
The itgen is both an interrupt-controller and an MSI client.

Peripheral controllers such as PCIe, I2C, SPI, GPIO, etc. need to send
interrupts to the Compute Clusters. The purpose of this module is to
forward interrupts to the compute clusters through the AXI interconnect.

Features:
- up to 256 inputs
- supports level interrupt inputs
- five interrupt outputs (one per cluster)
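From a consumer's point of view the itgen is an ordinary interrupt-controller parent, so a peripheral driver behind it requests its line through the standard platform IRQ API; a minimal sketch (the foo_* names are illustrative, not part of this series):

	static irqreturn_t foo_isr(int irq, void *dev_id)
	{
		/* quiesce the device, then: */
		return IRQ_HANDLED;
	}

	/* in probe(): the virq is resolved through the itgen IRQ domain */
	int irq = platform_get_irq(pdev, 0);

	if (irq < 0)
		return irq;
	return devm_request_irq(&pdev->dev, irq, foo_isr, 0, "foo", pdev);
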
Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Vincent Chardon
Signed-off-by: Vincent Chardon
Signed-off-by: Yann Sionneau
---

Notes:
    V1 -> V2: new patch
     - removed header include/linux/irqchip/irq-kvx-apic-gic.h
     - header moved to drivers/irqchip/ but in another patch
     - removed print on probe success
    V2 -> V3:
     - replace core intc by itgen in commit msg
     - use dev_err_probe()
     - typos
     - add static qualifier
     - update compatible

 drivers/irqchip/Kconfig         |   8 ++
 drivers/irqchip/Makefile        |   1 +
 drivers/irqchip/irq-kvx-itgen.c | 234 ++++++++++++++++++++++++++++++++
 3 files changed, 243 insertions(+)
 create mode 100644 drivers/irqchip/irq-kvx-itgen.c

diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index 566425731b757..bf06506d611d5 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -338,6 +338,14 @@ config KVX_APIC_GIC
 	select IRQ_DOMAIN
 	select IRQ_DOMAIN_HIERARCHY
 
+config KVX_ITGEN
+	bool
+	depends on KVX
+	select GENERIC_IRQ_IPI if SMP
+	select GENERIC_MSI_IRQ_DOMAIN
+	select IRQ_DOMAIN
+	select IRQ_DOMAIN_HIERARCHY
+
 config INGENIC_IRQ
 	bool
 	depends on MACH_INGENIC
diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
index f59255947c8ed..b2c514faf9bf7 100644
--- a/drivers/irqchip/Makefile
+++ b/drivers/irqchip/Makefile
@@ -69,6 +69,7 @@ obj-$(CONFIG_BRCMSTB_L2_IRQ)	+= irq-brcmstb-l2.o
 obj-$(CONFIG_KEYSTONE_IRQ)	+= irq-keystone.o
 obj-$(CONFIG_MIPS_GIC)		+= irq-mips-gic.o
 obj-$(CONFIG_KVX_APIC_GIC)	+= irq-kvx-apic-gic.o
+obj-$(CONFIG_KVX_ITGEN)		+= irq-kvx-itgen.o
 obj-$(CONFIG_ARCH_MEDIATEK)	+= irq-mtk-sysirq.o irq-mtk-cirq.o
 obj-$(CONFIG_ARCH_DIGICOLOR)	+= irq-digicolor.o
 obj-$(CONFIG_ARCH_SA1100)	+= irq-sa11x0.o
diff --git a/drivers/irqchip/irq-kvx-itgen.c b/drivers/irqchip/irq-kvx-itgen.c
new file mode 100644
index 0000000000000..67d76ab528ad6
--- /dev/null
+++ b/drivers/irqchip/irq-kvx-itgen.c
@@ -0,0 +1,234 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Julian Vetter
+ *            Vincent Chardon
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* Parameters */
+#define KVX_ITGEN_PARAM_OFFSET			0x1100
+#define KVX_ITGEN_PARAM_IT_NUM_OFFSET		0x0
+
+/* Target configuration */
+#define KVX_ITGEN_CFG_ENABLE_OFFSET		0x8
+#define KVX_ITGEN_CFG_ELEM_SIZE			0x10
+#define KVX_ITGEN_CFG_TARGET_OFFSET		0x0
+#define KVX_ITGEN_CFG_TARGET_MAILBOX_SHIFT	0x0
+#define KVX_ITGEN_CFG_TARGET_MAILBOX_MASK	0x7FUL
+#define KVX_ITGEN_CFG_TARGET_CLUSTER_SHIFT	0x8
+#define KVX_ITGEN_CFG_TARGET_CLUSTER_MASK	0x700UL
+#define KVX_ITGEN_CFG_TARGET_SELECT_BIT_SHIFT	0x18
+#define KVX_ITGEN_CFG_TARGET_SELECT_BIT_MASK	0x3F000000UL
+
+#define MB_ADDR_CLUSTER_SHIFT	24
+#define MB_ADDR_MAILBOX_SHIFT	9
+
+/**
+ * struct kvx_itgen - kvx interrupt generator (MSI client)
+ * @base: base address of the itgen controller
+ * @domain: IRQ domain of the controller
+ * @pdev: Platform device associated to the controller
+ */
+struct kvx_itgen {
+	void __iomem *base;
+	struct irq_domain *domain;
+	struct platform_device *pdev;
+};
+
+static void __iomem *get_itgen_cfg_offset(struct kvx_itgen *itgen,
+					  irq_hw_number_t hwirq)
+{
+	return itgen->base + KVX_ITGEN_CFG_TARGET_OFFSET +
+	       hwirq * KVX_ITGEN_CFG_ELEM_SIZE;
+}
+
+static void __iomem *get_itgen_param_offset(struct kvx_itgen *itgen)
+{
+	return itgen->base + KVX_ITGEN_PARAM_OFFSET;
+}
+
+static void kvx_itgen_enable(struct irq_data *data, u32 value)
+{
+	struct kvx_itgen *itgen = irq_data_get_irq_chip_data(data);
+	void __iomem *enable_reg =
+		get_itgen_cfg_offset(itgen, irqd_to_hwirq(data)) +
+		KVX_ITGEN_CFG_ENABLE_OFFSET;
+
+	dev_dbg(&itgen->pdev->dev, "%sabling hwirq %d, addr %p\n",
+		value ? "En" : "Dis",
+		(int) irqd_to_hwirq(data),
+		enable_reg);
+	writel(value, enable_reg);
+}
+
+static void kvx_itgen_mask(struct irq_data *data)
+{
+	kvx_itgen_enable(data, 0x0);
+	irq_chip_mask_parent(data);
+}
+
+static void kvx_itgen_unmask(struct irq_data *data)
+{
+	kvx_itgen_enable(data, 0x1);
+	irq_chip_unmask_parent(data);
+}
+
+#ifdef CONFIG_SMP
+static int kvx_itgen_irq_set_affinity(struct irq_data *data,
+				      const struct cpumask *dest, bool force)
+{
+	return -ENOSYS;
+}
+#endif
+
+static struct irq_chip itgen_irq_chip = {
+	.name = "kvx-itgen",
+	.irq_mask = kvx_itgen_mask,
+	.irq_unmask = kvx_itgen_unmask,
+#ifdef CONFIG_SMP
+	.irq_set_affinity = kvx_itgen_irq_set_affinity,
+#endif
+};
+
+#define ITGEN_UNSUPPORTED_TYPES (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_EDGE_FALLING)
+
+static int kvx_itgen_domain_alloc(struct irq_domain *domain, unsigned int virq,
+				  unsigned int nr_irqs, void *args)
+{
+	int i, err;
+	struct irq_fwspec *fwspec = args;
+	int hwirq = fwspec->param[0];
+	int type = IRQ_TYPE_NONE;
+	struct kvx_itgen *itgen;
+
+	if (fwspec->param_count >= 2)
+		type = fwspec->param[1];
+
+	WARN_ON(type & ITGEN_UNSUPPORTED_TYPES);
+
+	err = platform_msi_device_domain_alloc(domain, virq, nr_irqs);
+	if (err)
+		return err;
+
+	itgen = platform_msi_get_host_data(domain);
+
+	for (i = 0; i < nr_irqs; i++) {
+		irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i,
+					      &itgen_irq_chip, itgen);
+		if (type == IRQ_TYPE_LEVEL_HIGH)
+			irq_set_handler(virq + i, handle_level_irq);
+	}
+
+	return 0;
+}
+
+static const struct irq_domain_ops itgen_domain_ops = {
+	.alloc = kvx_itgen_domain_alloc,
+	.free = irq_domain_free_irqs_common,
+};
+
+static void kvx_itgen_write_msg(struct msi_desc *desc, struct msi_msg *msg)
+{
+	struct irq_data *d = irq_get_irq_data(desc->irq);
+	struct kvx_itgen *itgen = irq_data_get_irq_chip_data(d);
+	uint32_t cfg_val = 0;
+	uintptr_t dest_addr = ((uint64_t) msg->address_hi << 32) |
+			      msg->address_lo;
+	void __iomem *cfg = get_itgen_cfg_offset(itgen, irqd_to_hwirq(d));
+
+	/*
+	 * The address passed in the msi data is the address of the target
+	 * mailbox. The itgen however writes to the mailbox based on the mppa
+	 * id, cluster id and mailbox id instead of an address. So, extract
+	 * this information from the mailbox address.
+	 */
+
+	cfg_val |= (((kvx_sfr_get(PCR) & KVX_SFR_PCR_CID_MASK) >>
+		     KVX_SFR_PCR_CID_SHIFT)
+		    << KVX_ITGEN_CFG_TARGET_CLUSTER_SHIFT);
+	cfg_val |= ((dest_addr >> MB_ADDR_MAILBOX_SHIFT) &
+		    KVX_ITGEN_CFG_TARGET_MAILBOX_MASK)
+		   << KVX_ITGEN_CFG_TARGET_MAILBOX_SHIFT;
+
+	/*
+	 * msg->data contains the bit number to be written and is included in
+	 * the itgen config
+	 */
+	cfg_val |= ((msg->data << KVX_ITGEN_CFG_TARGET_SELECT_BIT_SHIFT)
+		    & KVX_ITGEN_CFG_TARGET_SELECT_BIT_MASK);
+
+	dev_dbg(&itgen->pdev->dev,
+		"Writing dest_addr %lx, value %x to cfg %p\n",
+		dest_addr, cfg_val, cfg);
+
+	writel(cfg_val, cfg);
+}
+
+static int
+kvx_itgen_device_probe(struct platform_device *pdev)
+{
+	struct kvx_itgen *itgen;
+	u32 it_count;
+	struct resource *mem;
+
+	itgen = devm_kzalloc(&pdev->dev, sizeof(*itgen), GFP_KERNEL);
+	if (!itgen)
+		return -ENOMEM;
+
+	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	itgen->base = devm_ioremap_resource(&pdev->dev, mem);
+	if (IS_ERR(itgen->base))
+		return dev_err_probe(&pdev->dev, PTR_ERR(itgen->base),
+				     "Failed to ioremap itgen\n");
+
+	itgen->pdev = pdev;
+	it_count = readl(get_itgen_param_offset(itgen) +
+			 KVX_ITGEN_PARAM_IT_NUM_OFFSET);
+
+	itgen->domain = platform_msi_create_device_domain(&pdev->dev,
+							  it_count,
+							  kvx_itgen_write_msg,
+							  &itgen_domain_ops,
+							  itgen);
+	if (!itgen->domain) {
+		dev_err(&pdev->dev, "Failed to create device domain\n");
+		return -ENOMEM;
+	}
+
+	platform_set_drvdata(pdev, itgen);
+
+	return 0;
+}
+
+static const struct of_device_id itgen_of_match[] = {
+	{ .compatible = "kalray,coolidge-itgen" },
+	{ /* END */ }
+};
+MODULE_DEVICE_TABLE(of, itgen_of_match);
+
+static struct platform_driver itgen_platform_driver = {
+	.driver = {
+		.name = "kvx-itgen",
+		.of_match_table = itgen_of_match,
+	},
+	.probe = kvx_itgen_device_probe,
+};
+
+static int __init kvx_itgen_init(void)
+{
+	return platform_driver_register(&itgen_platform_driver);
+}
+
+arch_initcall(kvx_itgen_init);
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger,
 Jules Maselbas, Luc Michel
Subject: [RFC PATCH v3 21/37] irqchip: Add irq-kvx-apic-mailbox driver
Date: Mon, 22 Jul 2024 11:41:32 +0200
Message-ID: <20240722094226.21602-22-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

The APIC includes a mailbox controller, containing 128 mailboxes.
Each mailbox is an 8-byte word. Each mailbox can be independently
configured with a trigger condition and an input function.

After a write to a mailbox, if the trigger condition is met, an
interrupt will be generated.

Since this hardware block generates IRQs based on writes to some memory
locations, it is both an interrupt-controller and an MSI controller.
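The MSI message composed for a client encodes the target mailbox physical address in address_hi/address_lo and the allocated bit index in data (see kvx_mailbox_msi_compose_msg() in the driver below), so raising the interrupt is a single 64-bit store. A sketch of the equivalent CPU-side doorbell, assuming mb_phys_addr and bit were taken from such a message:

	/* The mailbox is configured in OR mode with a doorbell trigger:
	 * the store sets the bit and latches the interrupt, leaving any
	 * already-set bits intact.
	 */
	void __iomem *mb = ioremap(mb_phys_addr, 8);

	writeq(BIT_ULL(bit), mb + KVX_MAILBOX_VALUE_OFFSET);
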
Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Luc Michel
Signed-off-by: Luc Michel
Signed-off-by: Yann Sionneau
---

Notes:
    V1 -> V2:
     - new patch
     - removed print on probe success
    V2 -> V3: update compatible

 drivers/irqchip/Kconfig                |   8 +
 drivers/irqchip/Makefile               |   1 +
 drivers/irqchip/irq-kvx-apic-mailbox.c | 480 +++++++++++++++++++++++++
 3 files changed, 489 insertions(+)
 create mode 100644 drivers/irqchip/irq-kvx-apic-mailbox.c

diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index bf06506d611d5..dbac5b4400e02 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -346,6 +346,14 @@ config KVX_ITGEN
 	select IRQ_DOMAIN
 	select IRQ_DOMAIN_HIERARCHY
 
+config KVX_APIC_MAILBOX
+	bool
+	depends on KVX
+	select GENERIC_IRQ_IPI if SMP
+	select GENERIC_MSI_IRQ_DOMAIN
+	select IRQ_DOMAIN
+	select IRQ_DOMAIN_HIERARCHY
+
 config INGENIC_IRQ
 	bool
 	depends on MACH_INGENIC
diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
index b2c514faf9bf7..16a2f666b788d 100644
--- a/drivers/irqchip/Makefile
+++ b/drivers/irqchip/Makefile
@@ -70,6 +70,7 @@ obj-$(CONFIG_KEYSTONE_IRQ)	+= irq-keystone.o
 obj-$(CONFIG_MIPS_GIC)		+= irq-mips-gic.o
 obj-$(CONFIG_KVX_APIC_GIC)	+= irq-kvx-apic-gic.o
 obj-$(CONFIG_KVX_ITGEN)		+= irq-kvx-itgen.o
+obj-$(CONFIG_KVX_APIC_MAILBOX)	+= irq-kvx-apic-mailbox.o
 obj-$(CONFIG_ARCH_MEDIATEK)	+= irq-mtk-sysirq.o irq-mtk-cirq.o
 obj-$(CONFIG_ARCH_DIGICOLOR)	+= irq-digicolor.o
 obj-$(CONFIG_ARCH_SA1100)	+= irq-sa11x0.o
diff --git a/drivers/irqchip/irq-kvx-apic-mailbox.c b/drivers/irqchip/irq-kvx-apic-mailbox.c
new file mode 100644
index 0000000000000..6f0e9134db2ff
--- /dev/null
+++ b/drivers/irqchip/irq-kvx-apic-mailbox.c
@@ -0,0 +1,480 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Jules Maselbas
+ */
+
+#define pr_fmt(fmt)	"kvx_apic_mailbox: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define KVX_MAILBOX_MODE_WRITE	0x0
+#define KVX_MAILBOX_MODE_OR	0x1
+#define KVX_MAILBOX_MODE_ADD	0x2
+
+#define KVX_MAILBOX_TRIG_NO_TRIG	0x0
+#define KVX_MAILBOX_TRIG_DOORBELL	0x1
+#define KVX_MAILBOX_TRIG_MATCH		0x2
+#define KVX_MAILBOX_TRIG_BARRIER	0x3
+#define KVX_MAILBOX_TRIG_THRESHOLD	0x4
+
+#define KVX_MAILBOX_OFFSET		0x0
+#define KVX_MAILBOX_ELEM_SIZE		0x200
+#define KVX_MAILBOX_MASK_OFFSET		0x10
+#define KVX_MAILBOX_FUNCT_OFFSET	0x18
+#define KVX_MAILBOX_LAC_OFFSET		0x8
+#define KVX_MAILBOX_VALUE_OFFSET	0x0
+#define KVX_MAILBOX_FUNCT_MODE_SHIFT	0x0
+#define KVX_MAILBOX_FUNCT_TRIG_SHIFT	0x8
+
+#define MAILBOXES_MAX_COUNT	128
+
+/* Mailboxes are 64 bits wide */
+#define MAILBOXES_BIT_SIZE	64
+
+/* Maximum number of mailboxes available */
+#define MAILBOXES_MAX_BIT_COUNT	(MAILBOXES_MAX_COUNT * MAILBOXES_BIT_SIZE)
+
+/* Mailboxes are grouped by 8 in a single page */
+#define MAILBOXES_BITS_PER_PAGE	(8 * MAILBOXES_BIT_SIZE)
+
+/**
+ * struct mb_data - per mailbox data
+ * @cpu: CPU on which the mailbox is routed
+ * @parent_irq: Parent IRQ on the GIC
+ */
+struct mb_data {
+	unsigned int cpu;
+	unsigned int parent_irq;
+};
+
+/**
+ * struct kvx_apic_mailbox - kvx apic mailbox
+ * @base: base address of the controller
+ * @device_domain: IRQ device domain for mailboxes
+ * @msi_domain: platform MSI domain for MSI interface
+ * @domain_info: Domain information needed for the MSI domain
+ * @mb_count: Count of mailboxes we are handling
+ * @available: bitmap of available bits in mailboxes
+ * @mailboxes_lock: lock for irq migration
+ * @mask_lock: lock for irq masking
+ * @mb_data: data associated to each mailbox
+ */
+struct kvx_apic_mailbox {
+	void __iomem *base;
+	phys_addr_t phys_base;
+	struct irq_domain *device_domain;
+	struct irq_domain *msi_domain;
+	struct msi_domain_info domain_info;
+	/* Start and count of device mailboxes */
+	unsigned int mb_count;
+	/* Bitmap of allocated bits in mailboxes */
+	DECLARE_BITMAP(available, MAILBOXES_MAX_BIT_COUNT);
+	spinlock_t mailboxes_lock;
+	raw_spinlock_t mask_lock;
+	struct mb_data mb_data[MAILBOXES_MAX_COUNT];
+};
+
+/**
+ * struct kvx_irq_data - per irq data
+ * @mb: Mailbox structure
+ */
+struct kvx_irq_data {
+	struct kvx_apic_mailbox *mb;
+};
+
+static void kvx_mailbox_get_from_hwirq(unsigned int hw_irq,
+				       unsigned int *mailbox_num,
+				       unsigned int *mailbox_bit)
+{
+	*mailbox_num = hw_irq / MAILBOXES_BIT_SIZE;
+	*mailbox_bit = hw_irq % MAILBOXES_BIT_SIZE;
+}
+
+static void __iomem *kvx_mailbox_get_addr(struct kvx_apic_mailbox *mb,
+					  unsigned int num)
+{
+	return mb->base + (num * KVX_MAILBOX_ELEM_SIZE);
+}
+
+static phys_addr_t kvx_mailbox_get_phys_addr(struct kvx_apic_mailbox *mb,
+					     unsigned int num)
+{
+	return mb->phys_base + (num * KVX_MAILBOX_ELEM_SIZE);
+}
+
+static void kvx_mailbox_msi_compose_msg(struct irq_data *data,
+					struct msi_msg *msg)
+{
+	struct kvx_irq_data *kd = irq_data_get_irq_chip_data(data);
+	struct kvx_apic_mailbox *mb = kd->mb;
+	unsigned int mb_num, mb_bit;
+	phys_addr_t mb_addr;
+
+	kvx_mailbox_get_from_hwirq(irqd_to_hwirq(data), &mb_num, &mb_bit);
+	mb_addr = kvx_mailbox_get_phys_addr(mb, mb_num);
+
+	msg->address_hi = upper_32_bits(mb_addr);
+	msg->address_lo = lower_32_bits(mb_addr);
+	msg->data = mb_bit;
+
+	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), msg);
+}
+
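The hwirq space is flat across all mailbox bits; kvx_mailbox_get_from_hwirq() above splits it back into a (mailbox, bit) pair. A worked example with the constants of this file:

	unsigned int num, bit;

	/* hwirq 70 == 1 * MAILBOXES_BIT_SIZE + 6 -> mailbox 1, bit 6 */
	kvx_mailbox_get_from_hwirq(70, &num, &bit);
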
+static void kvx_mailbox_set_irq_enable(struct irq_data *data,
+				       bool enabled)
+{
+	struct kvx_irq_data *kd = irq_data_get_irq_chip_data(data);
+	struct kvx_apic_mailbox *mb = kd->mb;
+	unsigned int mb_num, mb_bit;
+	void __iomem *mb_addr;
+	u64 mask_value, mb_value;
+
+	kvx_mailbox_get_from_hwirq(irqd_to_hwirq(data), &mb_num, &mb_bit);
+	mb_addr = kvx_mailbox_get_addr(mb, mb_num);
+
+	raw_spin_lock(&mb->mask_lock);
+	mask_value = readq(mb_addr + KVX_MAILBOX_MASK_OFFSET);
+	if (enabled)
+		mask_value |= BIT_ULL(mb_bit);
+	else
+		mask_value &= ~BIT_ULL(mb_bit);
+
+	writeq(mask_value, mb_addr + KVX_MAILBOX_MASK_OFFSET);
+
+	raw_spin_unlock(&mb->mask_lock);
+
+	/**
+	 * Since interrupts on mailboxes are edge triggered and are only
+	 * triggered when writing the value, we need to trigger it manually
+	 * after updating the mask if enabled. If the interrupt was triggered by
+	 * the device just after the mask write, we can trigger a spurious
+	 * interrupt but that is still better than missing one...
+	 * Moreover, the mailbox is configured in OR mode which means that even
+	 * if we write a single bit, all other bits will be kept intact.
+	 */
+	if (enabled) {
+		mb_value = readq(mb_addr + KVX_MAILBOX_VALUE_OFFSET);
+		if (mb_value & BIT_ULL(mb_bit))
+			writeq(BIT_ULL(mb_bit),
+			       mb_addr + KVX_MAILBOX_VALUE_OFFSET);
+	}
+}
+
+static void kvx_mailbox_mask(struct irq_data *data)
+{
+	kvx_mailbox_set_irq_enable(data, false);
+}
+
+static void kvx_mailbox_unmask(struct irq_data *data)
+{
+	kvx_mailbox_set_irq_enable(data, true);
+}
+
+static void kvx_mailbox_set_cpu(struct kvx_apic_mailbox *mb, int mb_id,
+				int new_cpu)
+{
+	irq_set_affinity(mb->mb_data[mb_id].parent_irq, cpumask_of(new_cpu));
+	mb->mb_data[mb_id].cpu = new_cpu;
+}
+
+static void kvx_mailbox_free_bit(struct kvx_apic_mailbox *mb, int hw_irq)
+{
+	unsigned int mb_num, mb_bit;
+
+	kvx_mailbox_get_from_hwirq(hw_irq, &mb_num, &mb_bit);
+	bitmap_clear(mb->available, hw_irq, 1);
+
+	/* If there is no more IRQ on this mailbox, reset it to CPU 0 */
+	if (mb->available[mb_num] == 0)
+		kvx_mailbox_set_cpu(mb, mb_num, 0);
+}
+
+struct irq_chip kvx_apic_mailbox_irq_chip = {
+	.name = "kvx apic mailbox",
+	.irq_compose_msi_msg = kvx_mailbox_msi_compose_msg,
+	.irq_mask = kvx_mailbox_mask,
+	.irq_unmask = kvx_mailbox_unmask,
+};
+
+static int kvx_mailbox_allocate_bits(struct kvx_apic_mailbox *mb, int num_req)
+{
+	int first, align_mask = 0;
+
+	/* This must be a power of 2 for bitmap_find_next_zero_area to work */
+	BUILD_BUG_ON((MAILBOXES_BITS_PER_PAGE & (MAILBOXES_BITS_PER_PAGE - 1)));
+
+	/*
+	 * If the user requested more than 1 mailbox, we must make sure it will
+	 * be aligned on a page size for iommu_dma_prepare_msi to be correctly
+	 * mapped in a single page.
+	 */
+	if (num_req > 1)
+		align_mask = (MAILBOXES_BITS_PER_PAGE - 1);
+
+	spin_lock(&mb->mailboxes_lock);
+
+	first = bitmap_find_next_zero_area(mb->available,
+					   mb->mb_count * MAILBOXES_BIT_SIZE, 0,
+					   num_req, align_mask);
+	if (first >= MAILBOXES_MAX_BIT_COUNT) {
+		spin_unlock(&mb->mailboxes_lock);
+		return -ENOSPC;
+	}
+
+	bitmap_set(mb->available, first, num_req);
+
+	spin_unlock(&mb->mailboxes_lock);
+
+	return first;
+}
+
+static int kvx_apic_mailbox_msi_alloc(struct irq_domain *domain,
+				      unsigned int virq,
+				      unsigned int nr_irqs, void *args)
+{
+	int i, err;
+	int hwirq = 0;
+	u64 mb_addr;
+	struct irq_data *d;
+	struct kvx_irq_data *kd;
+	struct kvx_apic_mailbox *mb = domain->host_data;
+	struct msi_alloc_info *msi_info = (struct msi_alloc_info *)args;
+	struct msi_desc *desc = msi_info->desc;
+	unsigned int mb_num, mb_bit;
+
+	/* We will not be able to guarantee page alignment ! */
+	if (nr_irqs > MAILBOXES_BITS_PER_PAGE)
+		return -EINVAL;
+
+	hwirq = kvx_mailbox_allocate_bits(mb, nr_irqs);
+	if (hwirq < 0)
+		return hwirq;
+
+	kvx_mailbox_get_from_hwirq(hwirq, &mb_num, &mb_bit);
+	mb_addr = (u64) kvx_mailbox_get_phys_addr(mb, mb_num);
+	err = iommu_dma_prepare_msi(desc, mb_addr);
+	if (err)
+		goto free_mb_bits;
+
+	for (i = 0; i < nr_irqs; i++) {
+		kd = kmalloc(sizeof(*kd), GFP_KERNEL);
+		if (!kd) {
+			err = -ENOMEM;
+			goto free_irq_data;
+		}
+
+		kd->mb = mb;
+		irq_domain_set_info(domain, virq + i, hwirq + i,
+				    &kvx_apic_mailbox_irq_chip,
+				    kd, handle_simple_irq,
+				    NULL, NULL);
+	}
+
+	return 0;
+
+free_irq_data:
+	for (i--; i >= 0; i--) {
+		d = irq_domain_get_irq_data(domain, virq + i);
+		kd = irq_data_get_irq_chip_data(d);
+		kfree(kd);
+	}
+
+free_mb_bits:
+	spin_lock(&mb->mailboxes_lock);
+	bitmap_clear(mb->available, hwirq, nr_irqs);
+	spin_unlock(&mb->mailboxes_lock);
+
+	return err;
+}
+
+static void kvx_apic_mailbox_msi_free(struct irq_domain *domain,
+				      unsigned int virq,
+				      unsigned int nr_irqs)
+{
+	int i;
+	struct irq_data *d;
+	struct kvx_irq_data *kd;
+	struct kvx_apic_mailbox *mb = domain->host_data;
+
+	spin_lock(&mb->mailboxes_lock);
+
+	for (i = 0; i < nr_irqs; i++) {
+		d = irq_domain_get_irq_data(domain, virq + i);
+		kd = irq_data_get_irq_chip_data(d);
+		kfree(kd);
+		kvx_mailbox_free_bit(mb, d->hwirq);
+	}
+
+	spin_unlock(&mb->mailboxes_lock);
+}
+
+static const struct irq_domain_ops kvx_apic_mailbox_domain_ops = {
+	.alloc = kvx_apic_mailbox_msi_alloc,
+	.free = kvx_apic_mailbox_msi_free
+};
+
+static struct irq_chip kvx_msi_irq_chip = {
+	.name = "KVX MSI",
+};
+
+static void kvx_apic_mailbox_handle_irq(struct irq_desc *desc)
+{
+	struct irq_data *data = irq_desc_get_irq_data(desc);
+	struct kvx_apic_mailbox *mb = irq_desc_get_handler_data(desc);
+	void __iomem *mb_addr = kvx_mailbox_get_addr(mb, irqd_to_hwirq(data));
+	unsigned int irqn, cascade_irq, bit;
+	u64 mask_value, masked_its;
+	u64 mb_value;
+	/* Since we allocate 64 interrupts for each mailbox, the scheme
+	 * to find the hwirq associated to a mailbox irq is the
+	 * following:
+	 * hw_irq = mb_num * MAILBOXES_BIT_SIZE + bit
+	 */
+	unsigned int mb_hwirq = irqd_to_hwirq(data) * MAILBOXES_BIT_SIZE;
+
+	mb_value = readq(mb_addr + KVX_MAILBOX_LAC_OFFSET);
+	mask_value = readq(mb_addr + KVX_MAILBOX_MASK_OFFSET);
+	/* Mask any disabled interrupts */
+	mb_value &= mask_value;
+
+	/**
+	 * Write all pending ITs that are masked to process them later.
+	 * Since the mailbox is in OR mode, these bits will be merged with any
+	 * already set bits and thus avoid losing any interrupts.
+	 */
+	masked_its = (~mask_value) & mb_value;
+	if (masked_its)
+		writeq(masked_its, mb_addr + KVX_MAILBOX_LAC_OFFSET);
+
+	for_each_set_bit(bit, (unsigned long *) &mb_value, BITS_PER_LONG) {
+		irqn = bit + mb_hwirq;
+		cascade_irq = irq_find_mapping(mb->device_domain, irqn);
+		generic_handle_irq(cascade_irq);
+	}
+}
+
+static void __init
+apic_mailbox_reset(struct kvx_apic_mailbox *mb)
+{
+	unsigned int i;
+	unsigned int mb_end = mb->mb_count;
+	void __iomem *mb_addr;
+	u64 funct_val = (KVX_MAILBOX_MODE_OR << KVX_MAILBOX_FUNCT_MODE_SHIFT) |
+		(KVX_MAILBOX_TRIG_DOORBELL << KVX_MAILBOX_FUNCT_TRIG_SHIFT);
+
+	for (i = 0; i < mb_end; i++) {
+		mb_addr = kvx_mailbox_get_addr(mb, i);
+		/* Disable all interrupts */
+		writeq(0ULL, mb_addr + KVX_MAILBOX_MASK_OFFSET);
+		/* Set mailbox to OR mode + trigger */
+		writeq(funct_val, mb_addr + KVX_MAILBOX_FUNCT_OFFSET);
+		/* Load & Clear mailbox value */
+		readq(mb_addr + KVX_MAILBOX_LAC_OFFSET);
+	}
+}
+
+static struct msi_domain_ops kvx_msi_domain_ops = {
+};
+
+static struct msi_domain_info kvx_msi_domain_info = {
+	.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS),
+	.ops = &kvx_msi_domain_ops,
+	.chip = &kvx_msi_irq_chip,
+};
+
+static int __init
+kvx_init_apic_mailbox(struct device_node *node,
+		      struct device_node *parent)
+{
+	struct kvx_apic_mailbox *mb;
+	unsigned int parent_irq, irq_count;
+	struct resource res;
+	int ret, i;
+
+	mb = kzalloc(sizeof(*mb), GFP_KERNEL);
+	if (!mb)
+		return -ENOMEM;
+
+	ret = of_address_to_resource(node, 0, &res);
+	if (ret)
+		return -EINVAL;
+
+	mb->phys_base = res.start;
+	mb->base = of_io_request_and_map(node, 0, node->name);
+	if (!mb->base) {
+		ret = -EINVAL;
+		goto err_kfree;
+	}
+
+	spin_lock_init(&mb->mailboxes_lock);
+	raw_spin_lock_init(&mb->mask_lock);
+
+	irq_count = of_irq_count(node);
+	if (irq_count == 0 || irq_count > MAILBOXES_MAX_COUNT) {
+		ret = -EINVAL;
+		goto err_kfree;
+	}
+	mb->mb_count = irq_count;
+
+	apic_mailbox_reset(mb);
+
+	mb->device_domain = irq_domain_add_tree(node,
+						&kvx_apic_mailbox_domain_ops,
+						mb);
+	if (!mb->device_domain) {
+		pr_err("Failed to setup device domain\n");
+		ret = -EINVAL;
+		goto err_iounmap;
+	}
+
+	mb->msi_domain = platform_msi_create_irq_domain(of_node_to_fwnode(node),
+							&kvx_msi_domain_info,
+							mb->device_domain);
+	if (!mb->msi_domain) {
+		ret = -EINVAL;
+		goto err_irq_domain_add_tree;
+	}
+
+	/* Chain all interrupts from gic to mailbox */
+	for (i = 0; i < irq_count; i++) {
+		parent_irq = irq_of_parse_and_map(node, i);
+		if (parent_irq == 0) {
+			pr_err("unable to parse irq\n");
+			ret = -EINVAL;
+			goto err_irq_domain_msi_create;
+		}
+		mb->mb_data[i].parent_irq = parent_irq;
+
+		irq_set_chained_handler_and_data(parent_irq,
+						 kvx_apic_mailbox_handle_irq,
+						 mb);
+	}
+
+	return 0;
+
+err_irq_domain_msi_create:
+	irq_domain_remove(mb->msi_domain);
+err_irq_domain_add_tree:
+	irq_domain_remove(mb->device_domain);
+err_iounmap:
+	iounmap(mb->base);
+err_kfree:
+	kfree(mb);
+
+	return ret;
+}
+
+IRQCHIP_DECLARE(kvx_apic_mailbox, "kalray,coolidge-apic-mailbox",
+		kvx_init_apic_mailbox);
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger,
 Vincent Chardon, Jules Maselbas
Subject: [RFC PATCH v3 22/37] irqchip: Add kvx-core-intc core interrupt
 controller driver
Date: Mon, 22 Jul 2024 11:41:33 +0200
Message-ID: <20240722094226.21602-23-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Each kvx core includes a hardware interrupt controller (core INTC)
with the following features:
- 32 independent interrupt sources
- 4-bit priority level
- Individual interrupt enable bit
- Interrupt status bit displaying the pending interrupts
- Priority management between the 32 interrupts

Among those 32 interrupt sources, sources 0-17 and 24-31 are hard-wired
to hardware sources. The remaining interrupt sources can be triggered
via software by directly writing to the ILR SFR.

The hard-wired interrupt sources are the following:
 0: Timer 0
 1: Timer 1
 2: Watchdog
 3: Performance Monitors
 4: APIC GIC line 0
 5: APIC GIC line 1
 6: APIC GIC line 2
 7: APIC GIC line 3
 12: SECC error from memory system
 13: Arithmetic exception (carry and IEEE 754 flags)
 16: Data Asynchronous Memory Error (DAME), raised for DECC/DSYS errors
 17: CLI (Cache Line Invalidation) for L1D or L1I following
     DECC/DSYS/Parity errors
 24-31: IPIs

The APIC GIC lines will be used to route interrupts coming from SoC
peripherals from outside the Cluster to the kvx core. Those peripherals
include the USB host controller, eMMC/SD host controller, i2c, spi,
PCIe, IOMMUs etc...

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Vincent Chardon
Signed-off-by: Vincent Chardon
Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Signed-off-by: Yann Sionneau
---

Notes:
    V1 -> V2: new patch
     - removed print on probe success
    V2 -> V3:
     - now use generic irq multi-handler
     - correctly set hwirq using irq_domain_set_info()
     - move core_intc DT node as CPU subnode

 drivers/irqchip/Kconfig             |   5 ++
 drivers/irqchip/Makefile            |   1 +
 drivers/irqchip/irq-kvx-core-intc.c | 117 ++++++++++++++++++++++++++++
 3 files changed, 123 insertions(+)
 create mode 100644 drivers/irqchip/irq-kvx-core-intc.c

diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index dbac5b4400e02..da1dbd79dab54 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -332,6 +332,11 @@ config MIPS_GIC
 	select IRQ_DOMAIN_HIERARCHY
 	select MIPS_CM
 
+config KVX_CORE_INTC
+	bool
+	depends on KVX
+	select IRQ_DOMAIN
+
 config KVX_APIC_GIC
 	bool
 	depends on KVX
diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
index 16a2f666b788d..30b69db8789f7 100644
--- a/drivers/irqchip/Makefile
+++ b/drivers/irqchip/Makefile
@@ -68,6 +68,7 @@ obj-$(CONFIG_BCM7120_L2_IRQ)	+= irq-bcm7120-l2.o
 obj-$(CONFIG_BRCMSTB_L2_IRQ)	+= irq-brcmstb-l2.o
 obj-$(CONFIG_KEYSTONE_IRQ)	+= irq-keystone.o
 obj-$(CONFIG_MIPS_GIC)		+= irq-mips-gic.o
+obj-$(CONFIG_KVX_CORE_INTC)	+= irq-kvx-core-intc.o
 obj-$(CONFIG_KVX_APIC_GIC)	+= irq-kvx-apic-gic.o
 obj-$(CONFIG_KVX_ITGEN)		+= irq-kvx-itgen.o
 obj-$(CONFIG_KVX_APIC_MAILBOX)	+= irq-kvx-apic-mailbox.o
diff --git a/drivers/irqchip/irq-kvx-core-intc.c b/drivers/irqchip/irq-kvx-core-intc.c
new file mode 100644
index 0000000000000..2f5be973d2b07
--- /dev/null
+++ b/drivers/irqchip/irq-kvx-core-intc.c
@@ -0,0 +1,117 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#define pr_fmt(fmt)	"kvx_core_intc: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#define KVX_CORE_INTC_IRQ	32
+
+static struct irq_domain *root_domain;
+
+static void handle_kvx_irq(struct pt_regs *regs)
+{
+	u32 ilr, ile, cause, hwirq_mask;
+	u8 es_itn, hwirq;
+	unsigned long es;
+
+	ilr = kvx_sfr_get(ILR);
+	ile = kvx_sfr_get(ILE);
+	es = regs->es;
+
+	es_itn = (es & KVX_SFR_ES_ITN_MASK) >> KVX_SFR_ES_ITN_SHIFT;
+	cause = (1 << es_itn);
+
+	hwirq_mask = (ilr & ile) | cause;
+	kvx_sfr_clear_bit(ILR, hwirq_mask);
+
+	while (hwirq_mask) {
+		hwirq = __ffs(hwirq_mask);
+		generic_handle_domain_irq(root_domain, hwirq);
+		hwirq_mask &= ~BIT_ULL(hwirq);
+	}
+
+	kvx_sfr_set_field(PS, IL, 0);
+}
+
+static void kvx_irq_mask(struct irq_data *data)
+{
+	kvx_sfr_clear_bit(ILE, data->hwirq);
+}
+
+static void kvx_irq_unmask(struct irq_data *data)
+{
+	kvx_sfr_set_bit(ILE, data->hwirq);
+}
+
+static struct irq_chip kvx_irq_chip = {
+	.name = "kvx core Intc",
+	.irq_mask = kvx_irq_mask,
+	.irq_unmask = kvx_irq_unmask,
+};
+
+static int kvx_irq_map(struct irq_domain *d, unsigned int irq,
+		       irq_hw_number_t hwirq)
+{
+	/* All interrupts for core are per cpu */
+	irq_set_percpu_devid(irq);
+	irq_domain_set_info(d, irq, hwirq, &kvx_irq_chip, d->host_data,
+			    handle_percpu_devid_irq, NULL, NULL);
+
+	return 0;
+}
+
+static const struct irq_domain_ops kvx_irq_ops = {
+	.xlate = irq_domain_xlate_onecell,
+	.map = kvx_irq_map,
+};
+
+static int __init
+kvx_init_core_intc(struct device_node *intc, struct device_node *parent)
+{
+	uint32_t core_nr_irqs;
+	unsigned long cpuid;
+	int ret;
+
+	ret = kvx_of_parent_cpuid(intc, &cpuid);
+	if (ret)
+		panic("core intc has no CPU parent\n");
+
+	if (smp_processor_id() != cpuid) {
+		fwnode_dev_initialized(of_fwnode_handle(intc), true);
+		return 0;
+	}
+
+	if (of_property_read_u32(intc, "kalray,intc-nr-irqs", &core_nr_irqs))
+		core_nr_irqs = KVX_CORE_INTC_IRQ;
+
+	/* We only have up to 32 interrupts,
+	 * linear is likely to be the best choice
+	 */
+	root_domain = irq_domain_add_linear(intc, core_nr_irqs,
+					    &kvx_irq_ops, NULL);
+	if (!root_domain)
+		panic("root irq domain not available\n");
+
+	/*
+	 * Needed for primary domain lookup to succeed.
+	 * This is a primary irqchip, and can never have a parent.
+	 */
+	irq_set_default_host(root_domain);
+	set_handle_irq(handle_kvx_irq);
+
+	return 0;
+}
+
+IRQCHIP_DECLARE(kvx_core_intc, "kalray,kv3-1-intc", kvx_init_core_intc);
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Oleg Nesterov, Christian Brauner,
 Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger,
 Guillaume Thouvenin, Marius Gligor, Vincent Chardon,
 linux-riscv@lists.infradead.org
Subject: [RFC PATCH v3 23/37] kvx: Add process management
Date: Mon, 22 Jul 2024 11:41:34 +0200
Message-ID: <20240722094226.21602-24-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add process management support for kvx, including: thread info
definition, context switch and process tracing.

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Guillaume Thouvenin
Signed-off-by: Guillaume Thouvenin
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Marius Gligor
Signed-off-by: Marius Gligor
Co-developed-by: Vincent Chardon
Signed-off-by: Vincent Chardon
Signed-off-by: Yann Sionneau
---

Notes:
    V1 -> V2: no change
    V2 -> V3: Updated to use generic entry/exit, syscall_work flag, etc.
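The generic entry/exit conversion means each user-mode crossing is bracketed by the generic-entry helpers, as the kvx syscall path earlier in this series already shows; the resulting shape is roughly the following sketch, where nr_from_es stands for the syscall number extracted from $es:

	/* sketch of the generic-entry syscall bracket used by do_syscall() */
	long nr = syscall_enter_from_user_mode(regs, nr_from_es);

	syscall_handler(regs, (ulong)nr);
	syscall_exit_to_user_mode(regs);
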
--- arch/kvx/include/asm/current.h | 22 +++ arch/kvx/include/asm/ptrace.h | 217 +++++++++++++++++++++++++++ arch/kvx/include/asm/switch_to.h | 18 +++ arch/kvx/include/asm/thread_info.h | 72 +++++++++ arch/kvx/include/uapi/asm/ptrace.h | 114 ++++++++++++++ arch/kvx/kernel/process.c | 195 ++++++++++++++++++++++++ arch/kvx/kernel/ptrace.c | 233 +++++++++++++++++++++++++++++ arch/kvx/kernel/stacktrace.c | 177 ++++++++++++++++++++++ 8 files changed, 1048 insertions(+) create mode 100644 arch/kvx/include/asm/current.h create mode 100644 arch/kvx/include/asm/ptrace.h create mode 100644 arch/kvx/include/asm/switch_to.h create mode 100644 arch/kvx/include/asm/thread_info.h create mode 100644 arch/kvx/include/uapi/asm/ptrace.h create mode 100644 arch/kvx/kernel/process.c create mode 100644 arch/kvx/kernel/ptrace.c create mode 100644 arch/kvx/kernel/stacktrace.c diff --git a/arch/kvx/include/asm/current.h b/arch/kvx/include/asm/current.h new file mode 100644 index 0000000000000..b5fd0f076ec99 --- /dev/null +++ b/arch/kvx/include/asm/current.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_CURRENT_H +#define _ASM_KVX_CURRENT_H + +#include +#include + +struct task_struct; + +static __always_inline struct task_struct *get_current(void) +{ + return (struct task_struct *) kvx_sfr_get(SR); +} + +#define current get_current() + +#endif /* _ASM_KVX_CURRENT_H */ diff --git a/arch/kvx/include/asm/ptrace.h b/arch/kvx/include/asm/ptrace.h new file mode 100644 index 0000000000000..87254045ff54f --- /dev/null +++ b/arch/kvx/include/asm/ptrace.h @@ -0,0 +1,217 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Marius Gligor + * Yann Sionneau + */ + +#ifndef _ASM_KVX_PTRACE_H +#define _ASM_KVX_PTRACE_H + +#include +#include +#include + +#define GPR_COUNT 64 +#define SFR_COUNT 9 +#define VIRT_COUNT 1 + +#define ES_SYSCALL 0x3 + +#define KVX_HW_BREAKPOINT_COUNT 2 +#define KVX_HW_WATCHPOINT_COUNT 1 + +#define REG_SIZE sizeof(u64) + +/** + * When updating pt_regs structure, this size must be updated. + * This is the expected size of the pt_regs struct. + * It ensures the structure layout from gcc is the same as the one we + * expect in order to do packed load (load/store octuple) in assembly. + * Conclusion: never put sizeof(pt_regs) in here or we lose this check + * (build time check done in asm-offsets.c via BUILD_BUG_ON) + */ +#define PT_REGS_STRUCT_EXPECTED_SIZE \ + ((GPR_COUNT + SFR_COUNT + VIRT_COUNT) * REG_SIZE + \ + 2 * REG_SIZE) /* Padding for stack alignment */ + +/** + * Saved register structure. Note that we should save only the necessary + * registers. + * When you modify it, please read carefully the comment above. 
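+ * For reference, with GPR_COUNT = 64, SFR_COUNT = 9 and VIRT_COUNT = 1,
+ * the expected size works out to (64 + 9 + 1) * 8 + 2 * 8 = 608 bytes.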
+ * Moreover, you will need to modify user_pt_regs to match the beginning + * of this struct 1:1 + */ +struct pt_regs { + /* GPR */ + uint64_t r0; + uint64_t r1; + uint64_t r2; + uint64_t r3; + uint64_t r4; + uint64_t r5; + uint64_t r6; + uint64_t r7; + uint64_t r8; + uint64_t r9; + uint64_t r10; + uint64_t r11; + union { + uint64_t r12; + uint64_t sp; + }; + union { + uint64_t r13; + uint64_t tp; + }; + union { + uint64_t r14; + uint64_t fp; + }; + uint64_t r15; + uint64_t r16; + uint64_t r17; + uint64_t r18; + uint64_t r19; + uint64_t r20; + uint64_t r21; + uint64_t r22; + uint64_t r23; + uint64_t r24; + uint64_t r25; + uint64_t r26; + uint64_t r27; + uint64_t r28; + uint64_t r29; + uint64_t r30; + uint64_t r31; + uint64_t r32; + uint64_t r33; + uint64_t r34; + uint64_t r35; + uint64_t r36; + uint64_t r37; + uint64_t r38; + uint64_t r39; + uint64_t r40; + uint64_t r41; + uint64_t r42; + uint64_t r43; + uint64_t r44; + uint64_t r45; + uint64_t r46; + uint64_t r47; + uint64_t r48; + uint64_t r49; + uint64_t r50; + uint64_t r51; + uint64_t r52; + uint64_t r53; + uint64_t r54; + uint64_t r55; + uint64_t r56; + uint64_t r57; + uint64_t r58; + uint64_t r59; + uint64_t r60; + uint64_t r61; + uint64_t r62; + uint64_t r63; + + /* SFR */ + uint64_t lc; + uint64_t le; + uint64_t ls; + uint64_t ra; + + uint64_t cs; + uint64_t spc; + + uint64_t sps; + uint64_t es; + + uint64_t ilr; + + /* "Virtual" registers */ + uint64_t orig_r0; + + /* Padding for stack alignment (see STACK_ALIGN) */ + uint64_t padding[2]; + + /** + * If you add some fields, please read carefully the comment for + * PT_REGS_STRUCT_EXPECTED_SIZE. + */ +}; + +#define pl(__reg) kvx_sfr_field_val(__reg, PS, PL) + +#define MODE_KERNEL 0 +#define MODE_USER 1 + +/* Privilege level is relative in $sps, so 1 indicates current PL + 1 */ +#define user_mode(regs) (pl((regs)->sps) =3D=3D MODE_USER) +#define es_ec(regs) kvx_sfr_field_val(regs->es, ES, EC) +#define es_sysno(regs) kvx_sfr_field_val(regs->es, ES, SN) + +#define debug_dc(es) kvx_sfr_field_val((es), ES, DC) + +/* ptrace */ +#define PTRACE_GET_HW_PT_REGS 20 +#define PTRACE_SET_HW_PT_REGS 21 +#define PTRACE_SYSEMU 22 +#define PTRACE_SYSEMU_SINGLESTEP 23 +#define arch_has_single_step() 1 + +#define DEBUG_CAUSE_BREAKPOINT 0 +#define DEBUG_CAUSE_WATCHPOINT 1 +#define DEBUG_CAUSE_STEPI 2 +#define DEBUG_CAUSE_DSU_BREAK 3 + +static inline void enable_single_step(struct pt_regs *regs) +{ + regs->sps |=3D KVX_SFR_PS_SME_MASK; +} + +static inline void disable_single_step(struct pt_regs *regs) +{ + regs->sps &=3D ~KVX_SFR_PS_SME_MASK; +} + +static inline bool in_syscall(struct pt_regs const *regs) +{ + return es_ec(regs) =3D=3D ES_SYSCALL; +} + +static inline unsigned long get_current_sp(void) +{ + register const unsigned long current_sp __asm__ ("$r12"); + + return current_sp; +} + +extern char *user_scall_rt_sigreturn_end; +extern char *user_scall_rt_sigreturn; + +static inline unsigned long instruction_pointer(struct pt_regs *regs) +{ + return regs->spc; +} + +static inline long regs_return_value(struct pt_regs *regs) +{ + return regs->r0; +} + +static inline unsigned long user_stack_pointer(struct pt_regs *regs) +{ + return regs->sp; +} + +static inline int regs_irqs_disabled(struct pt_regs *regs) +{ + return !(regs->sps & KVX_SFR_PS_IE_MASK); +} + +#endif /* _ASM_KVX_PTRACE_H */ diff --git a/arch/kvx/include/asm/switch_to.h b/arch/kvx/include/asm/switch= _to.h new file mode 100644 index 0000000000000..b08521741ea78 --- /dev/null +++ b/arch/kvx/include/asm/switch_to.h @@ -0,0 +1,18 @@ +/* 
SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_SWITCH_TO_H +#define _ASM_KVX_SWITCH_TO_H + +struct task_struct; + +/* context switching is now performed out-of-line in switch_to.S */ +extern struct task_struct *__switch_to(struct task_struct *prev, + struct task_struct *next); + +#define switch_to(prev, next, last) ((last) =3D __switch_to((prev), (next)= )) + +#endif /* _ASM_KVX_SWITCH_TO_H */ diff --git a/arch/kvx/include/asm/thread_info.h b/arch/kvx/include/asm/thre= ad_info.h new file mode 100644 index 0000000000000..4ecee40ef84c8 --- /dev/null +++ b/arch/kvx/include/asm/thread_info.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Guillaume Thouvenin + */ + +#ifndef _ASM_KVX_THREAD_INFO_H +#define _ASM_KVX_THREAD_INFO_H + + +#include + +/* + * Size of the kernel stack for each process. + */ +#define THREAD_SIZE_ORDER 2 +#define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER) + +/* + * Thread information flags + * these are process state flags that various assembly files may need to + * access + * - pending work-to-be-done flags are in LSW + * - other flags in MSW + */ +#define TIF_NOTIFY_RESUME 1 /* resumption notification requested */ +#define TIF_SIGPENDING 2 /* signal pending */ +#define TIF_NEED_RESCHED 3 /* rescheduling necessary */ +#define TIF_SINGLESTEP 4 /* restore singlestep on return to user mode */ +#define TIF_UPROBE 5 /* uprobe breakpoint or singlestep */ +#define TIF_RESTORE_SIGMASK 9 +#define TIF_NOTIFY_SIGNAL 10 /* signal notifications exist */ +#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_R= ESCHED */ +#define TIF_MEMDIE 17 /* is terminating due to OOM killer */ + +#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) +#define _TIF_SIGPENDING (1 << TIF_SIGPENDING) +#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED) +#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL) +#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) + +#ifndef __ASSEMBLY__ +/* + * We are using THREAD_INFO_IN_TASK so this struct is almost useless + * please prefer adding fields in thread_struct (processor.h) rather + * than here. + * This struct is merely a remnant of distant times where it was placed + * on the stack to avoid large task_struct. + * + * cf https://lwn.net/Articles/700615/ + */ +struct thread_info { + unsigned long flags; /* low level flags */ + int preempt_count; +#ifdef CONFIG_SMP + u32 cpu; /* current CPU */ +#endif + unsigned long syscall_work; /* SYSCALL_WORK_ flags */ +}; + +#define INIT_THREAD_INFO(tsk) \ +{ \ + .flags =3D 0, \ + .preempt_count =3D INIT_PREEMPT_COUNT, \ +} + +/* Get the current stack pointer from C */ +register unsigned long current_stack_pointer asm("$sp") __used; + +#endif /* __ASSEMBLY__*/ +#endif /* _ASM_KVX_THREAD_INFO_H */ diff --git a/arch/kvx/include/uapi/asm/ptrace.h b/arch/kvx/include/uapi/asm= /ptrace.h new file mode 100644 index 0000000000000..50db5310704f0 --- /dev/null +++ b/arch/kvx/include/uapi/asm/ptrace.h @@ -0,0 +1,114 @@ +/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Yann Sionneau + */ + +#ifndef _UAPI_ASM_KVX_PTRACE_H +#define _UAPI_ASM_KVX_PTRACE_H + +#include +/* + * User-mode register state for core dumps, ptrace, sigcontext + * + * This decouples struct pt_regs from the userspace ABI. 
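+ * New kernel-internal fields can thus be appended to struct pt_regs
+ * without breaking the core-dump/ptrace layout, as long as the prefix
+ * requirement below is respected.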
+ * The struct user_regs_struct must form a prefix of struct pt_regs. + */ +struct user_regs_struct { + /* GPR */ + unsigned long r0; + unsigned long r1; + unsigned long r2; + unsigned long r3; + unsigned long r4; + unsigned long r5; + unsigned long r6; + unsigned long r7; + unsigned long r8; + unsigned long r9; + unsigned long r10; + unsigned long r11; + union { + unsigned long r12; + unsigned long sp; + }; + union { + unsigned long r13; + unsigned long tp; + }; + union { + unsigned long r14; + unsigned long fp; + }; + unsigned long r15; + unsigned long r16; + unsigned long r17; + unsigned long r18; + unsigned long r19; + unsigned long r20; + unsigned long r21; + unsigned long r22; + unsigned long r23; + unsigned long r24; + unsigned long r25; + unsigned long r26; + unsigned long r27; + unsigned long r28; + unsigned long r29; + unsigned long r30; + unsigned long r31; + unsigned long r32; + unsigned long r33; + unsigned long r34; + unsigned long r35; + unsigned long r36; + unsigned long r37; + unsigned long r38; + unsigned long r39; + unsigned long r40; + unsigned long r41; + unsigned long r42; + unsigned long r43; + unsigned long r44; + unsigned long r45; + unsigned long r46; + unsigned long r47; + unsigned long r48; + unsigned long r49; + unsigned long r50; + unsigned long r51; + unsigned long r52; + unsigned long r53; + unsigned long r54; + unsigned long r55; + unsigned long r56; + unsigned long r57; + unsigned long r58; + unsigned long r59; + unsigned long r60; + unsigned long r61; + unsigned long r62; + unsigned long r63; + + /* SFR */ + unsigned long lc; + unsigned long le; + unsigned long ls; + unsigned long ra; + + unsigned long cs; + unsigned long spc; +}; + +/* TCA registers structure exposed to user */ +struct user_tca_regs { + struct { + __u64 x; + __u64 y; + __u64 z; + __u64 t; + } regs[48]; +}; + +#endif /* _UAPI_ASM_KVX_PTRACE_H */ diff --git a/arch/kvx/kernel/process.c b/arch/kvx/kernel/process.c new file mode 100644 index 0000000000000..181e45cfe7480 --- /dev/null +++ b/arch/kvx/kernel/process.c @@ -0,0 +1,195 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Guillaume Thouvenin + * Marius Gligor + * Yann Sionneau + */ + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#if defined(CONFIG_STACKPROTECTOR) +#include +unsigned long __stack_chk_guard __read_mostly; +EXPORT_SYMBOL(__stack_chk_guard); +#endif + +#define SCALL_NUM_EXIT "0xfff" + +void arch_cpu_idle(void) +{ + wait_for_interrupt(); + local_irq_enable(); +} + +void show_regs(struct pt_regs *regs) +{ + + int in_kernel =3D 1; + unsigned short i, reg_offset; + void *ptr; + + show_regs_print_info(KERN_DEFAULT); + + if (user_mode(regs)) + in_kernel =3D 0; + + pr_info("\nmode: %s\n" + " PC: %016llx PS: %016llx\n" + " CS: %016llx RA: %016llx\n" + " LS: %016llx LE: %016llx\n" + " LC: %016llx\n\n", + in_kernel ? "kernel" : "user", + regs->spc, regs->sps, + regs->cs, regs->ra, regs->ls, regs->le, regs->lc); + + /* GPR */ + ptr =3D regs; + ptr +=3D offsetof(struct pt_regs, r0); + reg_offset =3D offsetof(struct pt_regs, r1) - + offsetof(struct pt_regs, r0); + + /** + * Display all the 64 GPRs assuming they are ordered correctly + * in the pt_regs struct... 
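+ * (r0 is used as the base pointer and reg_offset is the distance
+ * between two consecutive GPR fields, so each iteration prints a pair)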
+ */ + for (i =3D 0; i < GPR_COUNT; i +=3D 2) { + pr_info(" R%d: %016llx R%d: %016llx\n", + i, *(uint64_t *)ptr, + i + 1, *(uint64_t *)(ptr + reg_offset)); + ptr +=3D reg_offset * 2; + } + + pr_info("\n\n"); +} + +/** + * Prepare a thread to return to userspace + */ +void start_thread(struct pt_regs *regs, + unsigned long pc, unsigned long sp) +{ + /* Remove MMUP bit (user is not privilege in current virtual space) */ + u64 clear_bit =3D KVX_SFR_PS_MMUP_MASK | KVX_SFR_PS_SME_MASK | + KVX_SFR_PS_SMR_MASK; + regs->spc =3D pc; + regs->sp =3D sp; + regs->sps =3D kvx_sfr_get(PS); + + regs->sps &=3D ~clear_bit; + + /* Set privilege level to +1 (relative) */ + regs->sps &=3D ~KVX_SFR_PS_PL_MASK; + regs->sps |=3D (1 << KVX_SFR_PS_PL_SHIFT); +} + +int copy_thread(struct task_struct *p, const struct kernel_clone_args *arg= s) +{ + struct pt_regs *regs, *childregs =3D task_pt_regs(p); + unsigned long clone_flags =3D args->flags; + unsigned long usp =3D args->stack; + unsigned long tls =3D args->tls; + + /* p->thread holds context to be restored by __switch_to() */ + if (unlikely(args->fn)) { + /* Kernel thread */ + memset(childregs, 0, sizeof(struct pt_regs)); + + p->thread.ctx_switch.r20 =3D (uint64_t)args->fn; + p->thread.ctx_switch.r21 =3D (uint64_t)args->fn_arg; + p->thread.ctx_switch.ra =3D + (unsigned long) ret_from_kernel_thread; + } else { + regs =3D current_pt_regs(); + + /* Copy current process registers */ + *childregs =3D *regs; + + /* Store tracing status in r20 to avoid computing it + * in assembly + */ + p->thread.ctx_switch.r20 =3D 0; + + + childregs->r0 =3D 0; /* Return value of fork() */ + /* Set stack pointer if any */ + if (usp) + childregs->sp =3D usp; + + /* Set a new TLS ? */ + if (clone_flags & CLONE_SETTLS) + childregs->r13 =3D tls; + } + + p->thread.ctx_switch.ra =3D (unsigned long) ret_from_fork; + p->thread.kernel_sp =3D + (unsigned long) (task_stack_page(p) + THREAD_SIZE); + p->thread.ctx_switch.sp =3D (unsigned long) childregs; + + return 0; +} + +void release_thread(struct task_struct *dead_task) +{ +} + +void flush_thread(void) +{ + +} + +static bool find_wchan(unsigned long pc, void *arg, const char *loglvl) +{ + unsigned long *p =3D arg; + + /* + * If the pc is in a scheduler function (waiting), then, this is the + * address where the process is currently stuck. Note that scheduler + * functions also include lock functions. This functions are + * materialized using annotations to put them in special text sections. + */ + if (!in_sched_functions(pc)) { + *p =3D pc; + return true; + } + + return false; +} + +/* + * __get_wchan is called to obtain "schedule()" caller function address. + */ +unsigned long __get_wchan(struct task_struct *p) +{ + unsigned long pc =3D 0; + struct stackframe frame; + + /* + * We need to obtain the task stack since we don't want the stack to + * move under our feet. + */ + if (!try_get_task_stack(p)) + return 0; + + start_stackframe(&frame, thread_saved_reg(p, fp), + thread_saved_reg(p, ra)); + walk_stackframe(p, &frame, find_wchan, &pc, KERN_DEFAULT); + + put_task_stack(p); + + return pc; +} + diff --git a/arch/kvx/kernel/ptrace.c b/arch/kvx/kernel/ptrace.c new file mode 100644 index 0000000000000..48167235ac642 --- /dev/null +++ b/arch/kvx/kernel/ptrace.c @@ -0,0 +1,233 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * derived from arch/riscv/kernel/ptrace.c + * + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Marius Gligor + * Clement Leger + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#define CREATE_TRACE_POINTS +#include + +#define HW_PT_CMD_GET_CAPS 0 +#define HW_PT_CMD_GET_PT 1 +#define HW_PT_CMD_SET_RESERVE 0 +#define HW_PT_CMD_SET_ENABLE 1 + +#define FROM_GDB_CMD_MASK 3 +#define FROM_GDB_HP_TYPE_SHIFT 2 +#define FROM_GDB_HP_TYPE_MASK 4 +#define FROM_GDB_WP_TYPE_SHIFT 3 +#define FROM_GDB_WP_TYPE_MASK 0x18 +#define FROM_GDB_HP_IDX_SHIFT 5 + +#define hw_pt_cmd(addr) ((addr) & FROM_GDB_CMD_MASK) +#define hw_pt_is_bkp(addr) ((((addr) & FROM_GDB_HP_TYPE_MASK) >> \ + FROM_GDB_HP_TYPE_SHIFT) =3D=3D KVX_HW_BREAKPOINT_TYPE) +#define get_hw_pt_wp_type(addr) ((((addr) & FROM_GDB_WP_TYPE_MASK)) >> \ + FROM_GDB_WP_TYPE_SHIFT) +#define get_hw_pt_idx(addr) ((addr) >> FROM_GDB_HP_IDX_SHIFT) +#define get_hw_pt_addr(data) ((data)[0]) +#define get_hw_pt_len(data) ((data)[1] >> 1) +#define hw_pt_is_enabled(data) ((data)[1] & 1) + +enum kvx_regset { + REGSET_GPR, +#ifdef CONFIG_ENABLE_TCA + REGSET_TCA, +#endif +}; + +void ptrace_disable(struct task_struct *child) +{ + /* nothing to do */ +} + +static int kvx_gpr_get(struct task_struct *target, + const struct user_regset *regset, + struct membuf to) +{ + return membuf_write(&to, task_pt_regs(target), + sizeof(struct user_regs_struct)); +} + +static int kvx_gpr_set(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf) +{ + struct pt_regs *regs =3D task_pt_regs(target); + + return user_regset_copyin(&pos, &count, &kbuf, &ubuf, regs, 0, -1); +} + +#ifdef CONFIG_ENABLE_TCA +static int kvx_tca_reg_get(struct task_struct *target, + const struct user_regset *regset, + struct membuf to) +{ + struct ctx_switch_regs *ctx_regs =3D &target->thread.ctx_switch; + struct tca_reg *regs =3D ctx_regs->tca_regs; + int ret; + + if (!ctx_regs->tca_regs_saved) + ret =3D membuf_zero(&to, sizeof(*regs)); + else + ret =3D membuf_write(&to, regs, sizeof(*regs)); + + return ret; +} + +static int kvx_tca_reg_set(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf) +{ + struct ctx_switch_regs *ctx_regs =3D &target->thread.ctx_switch; + struct tca_reg *regs =3D ctx_regs->tca_regs; + int ret =3D 0; + + if (!ctx_regs->tca_regs_saved) + user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf, + 0, -1); + else + ret =3D user_regset_copyin(&pos, &count, &kbuf, &ubuf, regs, + 0, -1); + + return ret; +} +#endif + +static const struct user_regset kvx_user_regset[] =3D { + [REGSET_GPR] =3D { + .core_note_type =3D NT_PRSTATUS, + .n =3D ELF_NGREG, + .size =3D sizeof(elf_greg_t), + .align =3D sizeof(elf_greg_t), + .regset_get =3D &kvx_gpr_get, + .set =3D &kvx_gpr_set, + }, +#ifdef CONFIG_ENABLE_TCA + [REGSET_TCA] =3D { + .core_note_type =3D NT_KVX_TCA, + .n =3D TCA_REG_COUNT, + .size =3D sizeof(struct tca_reg), + .align =3D sizeof(struct tca_reg), + .regset_get =3D &kvx_tca_reg_get, + .set =3D &kvx_tca_reg_set, + }, +#endif +}; + +static const struct user_regset_view user_kvx_view =3D { + .name =3D "kvx", + .e_machine =3D EM_KVX, + .regsets =3D kvx_user_regset, + .n =3D ARRAY_SIZE(kvx_user_regset) +}; + +const struct user_regset_view *task_user_regset_view(struct task_struct *t= ask) +{ + return &user_kvx_view; +} + +long arch_ptrace(struct task_struct *child, long request, + 
unsigned long addr, unsigned long data) +{ + return ptrace_request(child, request, addr, data); +} + +static int kvx_bkpt_handler(struct pt_regs *regs, struct break_hook *brk_h= ook) +{ + /* Unexpected breakpoint */ + if (!(current->ptrace & PT_PTRACED)) + return BREAK_HOOK_ERROR; + + /* deliver the signal to userspace */ + force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *) regs->spc); + + return BREAK_HOOK_HANDLED; +} + +static void kvx_stepi(struct pt_regs *regs) +{ + /* deliver the signal to userspace */ + force_sig_fault(SIGTRAP, TRAP_TRACE, (void __user *) regs->spc); +} + +void user_enable_single_step(struct task_struct *child) +{ + struct pt_regs *regs =3D task_pt_regs(child); + + enable_single_step(regs); +} + +void user_disable_single_step(struct task_struct *child) +{ + struct pt_regs *regs =3D task_pt_regs(child); + + disable_single_step(regs); +} + +/** + * Main debug handler called by the _debug_handler routine in entry.S + * This handler will perform the required action + * @es: Exception Syndrome register value + * @ea: Exception Address register + * @regs: pointer to registers saved when enter debug + */ +static int ptrace_debug_handler(struct pt_regs *regs, u64 ea) +{ + + int debug_cause =3D debug_dc(regs->es); + + switch (debug_cause) { + case DEBUG_CAUSE_STEPI: + kvx_stepi(regs); + break; + default: + break; + } + + return DEBUG_HOOK_HANDLED; +} + +static struct debug_hook ptrace_debug_hook =3D { + .handler =3D ptrace_debug_handler, + .mode =3D MODE_USER, +}; + +static struct break_hook bkpt_break_hook =3D { + .id =3D BREAK_CAUSE_BKPT, + .handler =3D kvx_bkpt_handler, + .mode =3D MODE_USER, +}; + +static int __init arch_init_breakpoint(void) +{ + break_hook_register(&bkpt_break_hook); + debug_hook_register(&ptrace_debug_hook); + + return 0; +} + +postcore_initcall(arch_init_breakpoint); diff --git a/arch/kvx/kernel/stacktrace.c b/arch/kvx/kernel/stacktrace.c new file mode 100644 index 0000000000000..4a6952157b19b --- /dev/null +++ b/arch/kvx/kernel/stacktrace.c @@ -0,0 +1,177 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger + * Vincent Chardon + */ + +#include +#include +#include +#include +#include + +#include +#include + +#define STACK_SLOT_PER_LINE 4 +#define STACK_MAX_SLOT_PRINT (STACK_SLOT_PER_LINE * 8) + +static int notrace unwind_frame(struct task_struct *task, + struct stackframe *frame) +{ + unsigned long fp =3D frame->fp; + + /* Frame pointer must be aligned on 8 bytes */ + if (fp & 0x7) + return -EINVAL; + + if (!task) + task =3D current; + + if (!on_task_stack(task, fp)) + return -EINVAL; + + frame->fp =3D READ_ONCE_NOCHECK(*(unsigned long *)(fp)); + frame->ra =3D READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8)); + + /* + * When starting, we set the frame pointer to 0, hence end of + * frame linked list is signal by that + */ + if (!frame->fp) + return -EINVAL; + + return 0; +} + +void notrace walk_stackframe(struct task_struct *task, struct stackframe *= frame, + bool (*fn)(unsigned long, void *, const char *), + void *arg, const char *loglvl) +{ + unsigned long addr; + int ret; + + while (1) { + addr =3D frame->ra; + + if (fn(addr, arg, loglvl)) + break; + + ret =3D unwind_frame(task, frame); + if (ret) + break; + } +} + +#ifdef CONFIG_STACKTRACE +static bool append_stack_addr(unsigned long pc, void *arg, const char *log= lvl) +{ + struct stack_trace *trace; + + trace =3D (struct stack_trace *)arg; + if (trace->skip =3D=3D 0) { + trace->entries[trace->nr_entries++] =3D pc; + if (trace->nr_entries =3D=3D trace->max_entries) + return true; + } else { + trace->skip--; + } + + return false; +} + +/* + * Save stack-backtrace addresses into a stack_trace buffer. + */ +void save_stack_trace(struct stack_trace *trace) +{ + struct stackframe frame; + + trace->nr_entries =3D 0; + /* We want to skip this function and the caller */ + trace->skip +=3D 2; + + start_stackframe(&frame, (unsigned long) __builtin_frame_address(0), + (unsigned long) save_stack_trace); + walk_stackframe(current, &frame, append_stack_addr, trace, + KERN_DEFAULT); +} +EXPORT_SYMBOL(save_stack_trace); +#endif /* CONFIG_STACKTRACE */ + +static bool print_pc(unsigned long pc, void *arg, const char *loglvl) +{ + unsigned long *skip =3D arg; + + if (*skip =3D=3D 0) + print_ip_sym(loglvl, pc); + else + (*skip)--; + + return false; +} + +void show_stacktrace(struct task_struct *task, struct pt_regs *regs, + const char *loglvl) +{ + struct stackframe frame; + unsigned long skip =3D 0; + + /* Obviously, we can't backtrace on usermode ! */ + if (regs && user_mode(regs)) + return; + + if (!task) + task =3D current; + + if (!try_get_task_stack(task)) + return; + + if (regs) { + start_stackframe(&frame, regs->fp, regs->spc); + } else if (task =3D=3D current) { + /* Skip current function and caller */ + skip =3D 2; + start_stackframe(&frame, + (unsigned long) __builtin_frame_address(0), + (unsigned long) show_stacktrace); + } else { + /* task blocked in __switch_to */ + start_stackframe(&frame, + thread_saved_reg(task, fp), + thread_saved_reg(task, ra)); + } + + pr_cont("%sCall Trace:\n", loglvl); + walk_stackframe(task, &frame, print_pc, &skip, loglvl); + + put_task_stack(task); +} + +/* + * If show_stack is called with a non-null task, then the task will have b= een + * claimed with try_get_task_stack by the caller. If task is NULL or curre= nt + * then there is no need to get task stack since it's our current stack... 
+ */
+void show_stack(struct task_struct *task, unsigned long *sp, const char *loglvl)
+{
+	int i = 0;
+
+	if (!sp)
+		sp = (unsigned long *) get_current_sp();
+
+	printk("%sStack dump (@%p):\n", loglvl, sp);
+	for (i = 0; i < STACK_MAX_SLOT_PRINT; i++) {
+		if (kstack_end(sp))
+			break;
+
+		if (i && (i % STACK_SLOT_PER_LINE) == 0)
+			pr_cont("\n\t");
+
+		pr_cont("%016lx ", *sp++);
+	}
+	pr_cont("\n");
+
+	show_stacktrace(task, NULL, loglvl);
+}
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Will Deacon, "Aneesh Kumar K.V",
 Andrew Morton, Nick Piggin, Peter Zijlstra, Paul Walmsley,
 Palmer Dabbelt, Albert Ou
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger,
 Guillaume Thouvenin, Jean-Christophe Pince, Jules Maselbas,
 Julien Hascoet, Louis Morhet, Marc Poulhiès, Marius Gligor,
 Vincent Chardon, linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-riscv@lists.infradead.org
Subject: [RFC PATCH v3 24/37] kvx: Add memory management
Date: Mon, 22 Jul 2024 11:41:35 +0200
Message-ID: <20240722094226.21602-25-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add memory management support for kvx, including: cache and tlb
management, page fault handling, ioremap/mmap and streaming dma support.

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Guillaume Thouvenin
Signed-off-by: Guillaume Thouvenin
Co-developed-by: Jean-Christophe Pince
Signed-off-by: Jean-Christophe Pince
Co-developed-by: Jules Maselbas
Signed-off-by: Jules Maselbas
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Julien Hascoet
Signed-off-by: Julien Hascoet
Co-developed-by: Louis Morhet
Signed-off-by: Louis Morhet
Co-developed-by: Marc Poulhiès
Signed-off-by: Marc Poulhiès
Co-developed-by: Marius Gligor
Signed-off-by: Marius Gligor
Co-developed-by: Vincent Chardon
Signed-off-by: Vincent Chardon
Signed-off-by: Yann Sionneau
---

Notes:
    V1 -> V2: removed L2 cache management

    V2 -> V3:
    - replace KVX_PFN by KVX_PTE
    - fix dcache inval when virt range spans several VMAs
    - replace BUGS with WARN_ON_ONCE
    - setup_bootmem: memblock_reserve() all smem at boot
---
 arch/kvx/include/asm/cache.h        |  44 +++
 arch/kvx/include/asm/cacheflush.h   | 158 ++++++++++
 arch/kvx/include/asm/fixmap.h       |  47 +++
 arch/kvx/include/asm/hugetlb.h      |  36 +++
 arch/kvx/include/asm/mem_map.h      |  44 +++
 arch/kvx/include/asm/mmu.h          | 289 +++++++++++++++++
 arch/kvx/include/asm/mmu_context.h  | 156 ++++++++++
 arch/kvx/include/asm/mmu_stats.h    |  38 +++
 arch/kvx/include/asm/page.h         | 172 ++++++++++
 arch/kvx/include/asm/page_size.h    |  29 ++
 arch/kvx/include/asm/pgalloc.h      |  70 +++++
 arch/kvx/include/asm/pgtable-bits.h | 105 +++++++
 arch/kvx/include/asm/pgtable.h      | 468 ++++++++++++++++++++++++++++
 arch/kvx/include/asm/sparsemem.h    |  15 +
 arch/kvx/include/asm/symbols.h      |  16 +
 arch/kvx/include/asm/tlb.h          |  24 ++
 arch/kvx/include/asm/tlb_defs.h     | 132 ++++++++
 arch/kvx/include/asm/tlbflush.h     |  60 ++++
 arch/kvx/include/asm/vmalloc.h      |  10 +
 arch/kvx/mm/cacheflush.c            | 158 ++++++++++
 arch/kvx/mm/dma-mapping.c           |  83 +++++
 arch/kvx/mm/extable.c               |  24 ++
 arch/kvx/mm/fault.c                 | 270 ++++++++++++++++
 arch/kvx/mm/init.c                  | 297 ++++++++++++++++++
 arch/kvx/mm/mmap.c                  |  31 ++
 arch/kvx/mm/mmu.c                   | 202 ++++++++++++
 arch/kvx/mm/mmu_stats.c             |  94 ++++++
 arch/kvx/mm/tlb.c                   | 445 ++++++++++++++++++++++++++
 28 files changed, 3517 insertions(+)
 create mode 100644 arch/kvx/include/asm/cache.h
 create mode 100644 arch/kvx/include/asm/cacheflush.h
 create mode 100644 arch/kvx/include/asm/fixmap.h
 create mode 100644 arch/kvx/include/asm/hugetlb.h
 create mode 100644 arch/kvx/include/asm/mem_map.h
 create mode 100644 arch/kvx/include/asm/mmu.h
 create mode 100644 arch/kvx/include/asm/mmu_context.h
 create mode 100644 arch/kvx/include/asm/mmu_stats.h
 create mode 100644 arch/kvx/include/asm/page.h
 create mode 100644
arch/kvx/include/asm/page_size.h create mode 100644 arch/kvx/include/asm/pgalloc.h create mode 100644 arch/kvx/include/asm/pgtable-bits.h create mode 100644 arch/kvx/include/asm/pgtable.h create mode 100644 arch/kvx/include/asm/sparsemem.h create mode 100644 arch/kvx/include/asm/symbols.h create mode 100644 arch/kvx/include/asm/tlb.h create mode 100644 arch/kvx/include/asm/tlb_defs.h create mode 100644 arch/kvx/include/asm/tlbflush.h create mode 100644 arch/kvx/include/asm/vmalloc.h create mode 100644 arch/kvx/mm/cacheflush.c create mode 100644 arch/kvx/mm/dma-mapping.c create mode 100644 arch/kvx/mm/extable.c create mode 100644 arch/kvx/mm/fault.c create mode 100644 arch/kvx/mm/init.c create mode 100644 arch/kvx/mm/mmap.c create mode 100644 arch/kvx/mm/mmu.c create mode 100644 arch/kvx/mm/mmu_stats.c create mode 100644 arch/kvx/mm/tlb.c diff --git a/arch/kvx/include/asm/cache.h b/arch/kvx/include/asm/cache.h new file mode 100644 index 0000000000000..f95792883622b --- /dev/null +++ b/arch/kvx/include/asm/cache.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Julian Vetter + */ + +#ifndef _ASM_KVX_CACHE_H +#define _ASM_KVX_CACHE_H + +/** + * On KVX I$ and D$ have the same size (16KB). + * Caches are 16KB big, 4-way set associative, true lru, with 64-byte + * lines. The D$ is also write-through. + */ +#define KVX_ICACHE_WAY_COUNT 4 +#define KVX_ICACHE_SET_COUNT 64 +#define KVX_ICACHE_LINE_SHIFT 6 +#define KVX_ICACHE_LINE_SIZE (1 << KVX_ICACHE_LINE_SHIFT) +#define KVX_ICACHE_SIZE \ + (KVX_ICACHE_WAY_COUNT * KVX_ICACHE_SET_COUNT * KVX_ICACHE_LINE_SIZE) + +/** + * Invalidate the whole I-cache if the size to flush is more than this val= ue + */ +#define KVX_ICACHE_INVAL_SIZE (KVX_ICACHE_SIZE) + +/* D-Cache */ +#define KVX_DCACHE_WAY_COUNT 4 +#define KVX_DCACHE_SET_COUNT 64 +#define KVX_DCACHE_LINE_SHIFT 6 +#define KVX_DCACHE_LINE_SIZE (1 << KVX_DCACHE_LINE_SHIFT) +#define KVX_DCACHE_SIZE \ + (KVX_DCACHE_WAY_COUNT * KVX_DCACHE_SET_COUNT * KVX_DCACHE_LINE_SIZE) + +/** + * Same for I-cache + */ +#define KVX_DCACHE_INVAL_SIZE (KVX_DCACHE_SIZE) + +#define L1_CACHE_SHIFT KVX_DCACHE_LINE_SHIFT +#define L1_CACHE_BYTES KVX_DCACHE_LINE_SIZE + +#endif /* _ASM_KVX_CACHE_H */ diff --git a/arch/kvx/include/asm/cacheflush.h b/arch/kvx/include/asm/cache= flush.h new file mode 100644 index 0000000000000..256a201e423a7 --- /dev/null +++ b/arch/kvx/include/asm/cacheflush.h @@ -0,0 +1,158 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger + * Yann Sionneau + * Guillaume Thouvenin + * Marius Gligor + */ + +#ifndef _ASM_KVX_CACHEFLUSH_H +#define _ASM_KVX_CACHEFLUSH_H + +#include +#include + +#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0 + +#define flush_cache_mm(mm) do { } while (0) +#define flush_cache_range(vma, start, end) do { } while (0) +#define flush_cache_dup_mm(mm) do { } while (0) +#define flush_cache_page(vma, vmaddr, pfn) do { } while (0) + +#define flush_cache_vmap(start, end) do { } while (0) +#define flush_cache_vunmap(start, end) do { } while (0) + +#define flush_dcache_page(page) do { } while (0) + +#define flush_dcache_mmap_lock(mapping) do { } while (0) +#define flush_dcache_mmap_unlock(mapping) do { } while (0) + +#define l1_inval_dcache_all __builtin_kvx_dinval +#define kvx_fence __builtin_kvx_fence +#define l1_inval_icache_all __builtin_kvx_iinval + +int dcache_wb_inval_virt_range(unsigned long vaddr, unsigned long len, boo= l wb, + bool inval); +void dcache_wb_inval_phys_range(phys_addr_t addr, unsigned long len, bool = wb, + bool inval); + +/* + * The L1 is indexed by virtual addresses and as such, invalidation takes + * virtual addresses as arguments. + */ +static inline +void l1_inval_dcache_range(unsigned long vaddr, unsigned long size) +{ + unsigned long end =3D vaddr + size; + + /* Then inval L1 */ + if (size >=3D KVX_DCACHE_INVAL_SIZE) { + __builtin_kvx_dinval(); + return; + } + + vaddr =3D ALIGN_DOWN(vaddr, KVX_DCACHE_LINE_SIZE); + for (; vaddr < end; vaddr +=3D KVX_DCACHE_LINE_SIZE) + __builtin_kvx_dinvall((void *) vaddr); +} + +static inline +void inval_dcache_range(phys_addr_t paddr, unsigned long size) +{ + l1_inval_dcache_range((unsigned long) phys_to_virt(paddr), size); +} + +static inline +void wb_dcache_range(phys_addr_t paddr, unsigned long size) +{ + /* Fence to ensure all write are committed */ + kvx_fence(); +} + +static inline +void wbinval_dcache_range(phys_addr_t paddr, unsigned long size) +{ + /* Fence to ensure all write are committed */ + kvx_fence(); + + l1_inval_dcache_range((unsigned long) phys_to_virt(paddr), size); +} + +static inline +void l1_inval_icache_range(unsigned long start, unsigned long end) +{ + unsigned long addr; + unsigned long size =3D end - start; + + if (size >=3D KVX_ICACHE_INVAL_SIZE) { + __builtin_kvx_iinval(); + __builtin_kvx_barrier(); + return; + } + + start =3D ALIGN_DOWN(start, KVX_ICACHE_LINE_SIZE); + for (addr =3D start; addr < end; addr +=3D KVX_ICACHE_LINE_SIZE) + __builtin_kvx_iinvals((void *) addr); + + __builtin_kvx_barrier(); +} + +static inline +void wbinval_icache_range(phys_addr_t paddr, unsigned long size) +{ + unsigned long vaddr =3D (unsigned long) phys_to_virt(paddr); + + /* Fence to ensure all write are committed */ + kvx_fence(); + + l1_inval_icache_range(vaddr, vaddr + size); +} + +static inline +void sync_dcache_icache(unsigned long start, unsigned long end) +{ + /* Fence to ensure all write are committed */ + kvx_fence(); + /* Then invalidate the L1 icache */ + l1_inval_icache_range(start, end); +} + +static inline +void local_flush_icache_range(unsigned long start, unsigned long end) +{ + sync_dcache_icache(start, end); +} + +#ifdef CONFIG_SMP +void flush_icache_range(unsigned long start, unsigned long end); +#else +#define flush_icache_range local_flush_icache_range +#endif + +static inline +void flush_icache_page(struct vm_area_struct *vma, struct page *page) +{ + unsigned long start =3D (unsigned long) page_address(page); + unsigned long end =3D start + PAGE_SIZE; + + sync_dcache_icache(start, 
end); +} + +static inline +void flush_icache_user_range(struct vm_area_struct *vma, struct page *page, + unsigned long vaddr, int len) +{ + sync_dcache_icache(vaddr, vaddr + len); +} + +#define copy_to_user_page(vma, page, vaddr, dst, src, len) \ + do { \ + memcpy(dst, src, len); \ + if (vma->vm_flags & VM_EXEC) \ + flush_icache_user_range(vma, page, vaddr, len); \ + } while (0) +#define copy_from_user_page(vma, page, vaddr, dst, src, len) \ + memcpy(dst, src, len) + +#endif /* _ASM_KVX_CACHEFLUSH_H */ diff --git a/arch/kvx/include/asm/fixmap.h b/arch/kvx/include/asm/fixmap.h new file mode 100644 index 0000000000000..3863e410d71de --- /dev/null +++ b/arch/kvx/include/asm/fixmap.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Marius Gligor + */ + +#ifndef _ASM_KVX_FIXMAP_H +#define _ASM_KVX_FIXMAP_H + +/** + * Use the latest available kernel address minus one page. + * This is needed since __fix_to_virt returns + * (FIXADDR_TOP - ((x) << PAGE_SHIFT)) + * Due to that, first member will be shifted by 0 and will be equal to + * FIXADDR_TOP. + * Some other architectures simply add a FIX_HOLE at the beginning of + * the fixed_addresses enum (I think ?). + */ +#define FIXADDR_TOP (-PAGE_SIZE) + +#define ASM_FIX_TO_VIRT(IDX) \ + (FIXADDR_TOP - ((IDX) << PAGE_SHIFT)) + +#ifndef __ASSEMBLY__ +#include +#include + +enum fixed_addresses { + FIX_EARLYCON_MEM_BASE, + FIX_GDB_BARE_DISPLACED_MEM_BASE, + /* Used to access text early in RW mode (jump label) */ + FIX_TEXT_PATCH, + __end_of_fixed_addresses +}; + +#define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT) +#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE) +#define FIXMAP_PAGE_IO (PAGE_KERNEL_DEVICE) + +void __set_fixmap(enum fixed_addresses idx, + phys_addr_t phys, pgprot_t prot); + +#include +#endif /* __ASSEMBLY__ */ + +#endif diff --git a/arch/kvx/include/asm/hugetlb.h b/arch/kvx/include/asm/hugetlb.h new file mode 100644 index 0000000000000..a5984e8ede7eb --- /dev/null +++ b/arch/kvx/include/asm/hugetlb.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Guillaume Thouvenin + * Clement Leger + */ + +#ifndef _ASM_KVX_HUGETLB_H +#define _ASM_KVX_HUGETLB_H + +#include + +#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT +extern void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte); + +#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR +extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm, + unsigned long addr, pte_t *ptep); + +#define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS +extern int huge_ptep_set_access_flags(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, + pte_t pte, int dirty); + +#define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT +extern void huge_ptep_set_wrprotect(struct mm_struct *mm, + unsigned long addr, pte_t *ptep); + +#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH +extern pte_t huge_ptep_clear_flush(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep); + +#include + +#endif /* _ASM_KVX_HUGETLB_H */ diff --git a/arch/kvx/include/asm/mem_map.h b/arch/kvx/include/asm/mem_map.h new file mode 100644 index 0000000000000..ea90144209ce1 --- /dev/null +++ b/arch/kvx/include/asm/mem_map.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger + * Guillaume Thouvenin + */ + +#ifndef _ASM_KVX_MEM_MAP_H +#define _ASM_KVX_MEM_MAP_H + +#include +#include + +#include +#include + +/** + * kvx memory mapping defines + * For more information on memory mapping, please see + * Documentation/kvx/kvx-mmu.txt + * + * All _BASE defines are relative to PAGE_OFFSET + */ + +/* Guard between various memory map zones */ +#define MAP_GUARD_SIZE SZ_1G + +/** + * Kernel direct memory mapping + */ +#define KERNEL_DIRECT_MEMORY_MAP_BASE PAGE_OFFSET +#define KERNEL_DIRECT_MEMORY_MAP_SIZE UL(0x1000000000) +#define KERNEL_DIRECT_MEMORY_MAP_END \ + (KERNEL_DIRECT_MEMORY_MAP_BASE + KERNEL_DIRECT_MEMORY_MAP_SIZE) + +/** + * Vmalloc mapping (goes from kernel direct memory map up to fixmap start - + * guard size) + */ +#define KERNEL_VMALLOC_MAP_BASE (KERNEL_DIRECT_MEMORY_MAP_END + MAP_GUARD_= SIZE) +#define KERNEL_VMALLOC_MAP_SIZE \ + (FIXADDR_START - KERNEL_VMALLOC_MAP_BASE - MAP_GUARD_SIZE) + +#endif diff --git a/arch/kvx/include/asm/mmu.h b/arch/kvx/include/asm/mmu.h new file mode 100644 index 0000000000000..a9fecb560fe5b --- /dev/null +++ b/arch/kvx/include/asm/mmu.h @@ -0,0 +1,289 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Guillaume Thouvenin + * Clement Leger + * Marc Poulhi=C3=A8s + */ + +#ifndef _ASM_KVX_MMU_H +#define _ASM_KVX_MMU_H + +#include +#include +#include + +#include +#include +#include +#include +#include + +/* Virtual addresses can use at most 41 bits */ +#define MMU_VIRT_BITS 41 + +/* + * See Documentation/kvx/kvx-mmu.rst for details about the division of the + * virtual memory space. + */ +#if defined(CONFIG_KVX_4K_PAGES) +#define MMU_USR_ADDR_BITS 39 +#else +#error "Only 4Ko page size is supported at this time" +#endif + +typedef struct mm_context { + unsigned long end_brk; + unsigned long asn[NR_CPUS]; + unsigned long sigpage; +} mm_context_t; + +struct __packed tlb_entry_low { + unsigned int es:2; /* Entry Status */ + unsigned int cp:2; /* Cache Policy */ + unsigned int pa:4; /* Protection Attributes */ + unsigned int r:2; /* Reserved */ + unsigned int ps:2; /* Page Size */ + unsigned int fn:28; /* Frame Number */ +}; + +struct __packed tlb_entry_high { + unsigned int asn:9; /* Address Space Number */ + unsigned int g:1; /* Global Indicator */ + unsigned int vs:2; /* Virtual Space */ + unsigned int pn:29; /* Page Number */ +}; + +struct kvx_tlb_format { + union { + struct tlb_entry_low tel; + uint64_t tel_val; + }; + union { + struct tlb_entry_high teh; + uint64_t teh_val; + }; +}; + +#define KVX_EMPTY_TLB_ENTRY { .tel_val =3D 0x0, .teh_val =3D 0x0 } + +/* Bit [0:39] of the TLB format corresponds to TLB Entry low */ +/* Bit [40:80] of the TLB format corresponds to the TLB Entry high */ +#define kvx_mmu_set_tlb_entry(tlbf) do { \ + kvx_sfr_set(TEL, (uint64_t) tlbf.tel_val); \ + kvx_sfr_set(TEH, (uint64_t) tlbf.teh_val); \ +} while (0) + +#define kvx_mmu_get_tlb_entry(tlbf) do { \ + tlbf.tel_val =3D kvx_sfr_get(TEL); \ + tlbf.teh_val =3D kvx_sfr_get(TEH); \ +} while (0) + +/* Use kvx_mmc_ to read a field from MMC value passed as parameter */ +#define __kvx_mmc(mmc_reg, field) \ + kvx_sfr_field_val(mmc_reg, MMC, field) + +#define kvx_mmc_error(mmc) __kvx_mmc(mmc, E) +#define kvx_mmc_parity(mmc) __kvx_mmc(mmc, PAR) +#define kvx_mmc_sb(mmc) __kvx_mmc(mmc, SB) +#define kvx_mmc_ss(mmc) __kvx_mmc(mmc, SS) +#define kvx_mmc_sw(mmc) __kvx_mmc(mmc, SW) +#define kvx_mmc_asn(mmc) __kvx_mmc(mmc, ASN) + +#define KVX_TLB_ACCESS_READ 0 +#define 
KVX_TLB_ACCESS_WRITE 1 +#define KVX_TLB_ACCESS_PROBE 2 + +#ifdef CONFIG_KVX_DEBUG_TLB_ACCESS + +#define KVX_TLB_ACCESS_SIZE (1 << CONFIG_KVX_DEBUG_TLB_ACCESS_BITS) +#define KVX_TLB_ACCESS_MASK GENMASK((CONFIG_KVX_DEBUG_TLB_ACCESS_BITS - 1)= , 0) +#define KVX_TLB_ACCESS_GET_IDX(idx) (idx & KVX_TLB_ACCESS_MASK) + +/* This structure is used to make decoding of MMC easier in gdb */ +struct mmc_t { + unsigned int asn:9; + unsigned int s: 1; + unsigned int r1: 4; + unsigned int sne: 1; + unsigned int spe: 1; + unsigned int ptc: 2; + unsigned int sw: 4; + unsigned int ss: 6; + unsigned int sb: 1; + unsigned int r2: 1; + unsigned int par: 1; + unsigned int e: 1; +}; + +struct __packed kvx_tlb_access_t { + struct kvx_tlb_format entry; /* 128 bits */ + union { + struct mmc_t mmc; + uint32_t mmc_val; + }; + uint32_t type; +}; + +extern void kvx_update_tlb_access(int type); + +#else +#define kvx_update_tlb_access(type) do {} while (0) +#endif + +static inline void kvx_mmu_readtlb(void) +{ + kvx_update_tlb_access(KVX_TLB_ACCESS_READ); + asm volatile ("tlbread\n;;"); +} + +static inline void kvx_mmu_writetlb(void) +{ + kvx_update_tlb_access(KVX_TLB_ACCESS_WRITE); + asm volatile ("tlbwrite\n;;"); +} + +static inline void kvx_mmu_probetlb(void) +{ + kvx_update_tlb_access(KVX_TLB_ACCESS_PROBE); + asm volatile ("tlbprobe\n;;"); +} + +#define kvx_mmu_add_entry(buffer, way, entry) do { \ + kvx_sfr_set_field(MMC, SB, buffer); \ + kvx_sfr_set_field(MMC, SW, way); \ + kvx_mmu_set_tlb_entry(entry); \ + kvx_mmu_writetlb(); \ +} while (0) + +#define kvx_mmu_remove_ltlb_entry(way) do { \ + struct kvx_tlb_format __invalid_entry =3D KVX_EMPTY_TLB_ENTRY; \ + kvx_mmu_add_entry(MMC_SB_LTLB, way, __invalid_entry); \ +} while (0) + +static inline int get_page_size_shift(int ps) +{ + /* + * Use the same assembly trick using sbmm to directly get the page size + * shift using a constant which encodes all page size shifts + */ + return __builtin_kvx_sbmm8(KVX_PS_SHIFT_MATRIX, + KVX_SBMM_BYTE_SEL << ps); +} + +/* + * 4 bits are used to index the kvx access permissions. Bits are used as + * follow: + * + * +---------------+------------+-------------+------------+ + * | Bit 3 | Bit 2 | Bit 1 | Bit 0 | + * |---------------+------------+-------------+------------| + * | _PAGE_GLOBAL | _PAGE_EXEC | _PAGE_WRITE | _PAGE_READ | + * +---------------+------------+-------------+------------+ + * + * If _PAGE_GLOBAL is set then the page belongs to the kernel. Otherwise it + * belongs to the user. When the page belongs to user-space then give the + * same rights to the kernel-space. + * In order to quickly compute policy from this value, we use sbmm instruc= tion. + * The main interest is to avoid an additionnal load and specifically in t= he + * assembly refill handler. 
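+ * For example, a user read/write page encodes as 0b0011: _PAGE_READ is
+ * set, the shift by one gives index 0b001, and sbmm selects byte 1 of
+ * KVX_PAGE_PA_MATRIX, i.e. TLB_PA_RW_RW. A global (kernel) read/write
+ * page encodes as 0b1011, giving index 0b101 and thus TLB_PA_NA_RW.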
+ */ +static inline u8 get_page_access_perms(u8 policy) +{ + /* If PAGE_READ is unset, there is no permission for this page */ + if (!(policy & (_PAGE_READ >> _PAGE_PERMS_SHIFT))) + return TLB_PA_NA_NA; + + /* Discard the _PAGE_READ bit to get a linear number in [0,7] */ + policy >>=3D 1; + + /* Use sbmm to directly get the page perms */ + return __builtin_kvx_sbmm8(KVX_PAGE_PA_MATRIX, + KVX_SBMM_BYTE_SEL << policy); +} + +static inline struct kvx_tlb_format tlb_mk_entry( + void *paddr, + void *vaddr, + unsigned int ps, + unsigned int global, + unsigned int pa, + unsigned int cp, + unsigned int asn, + unsigned int es) +{ + struct kvx_tlb_format entry; + u64 mask =3D ULONG_MAX << get_page_size_shift(ps); + + BUG_ON(ps >=3D (1 << KVX_SFR_TEL_PS_WIDTH)); + + /* + * 0 matches the virtual space: + * - either we are virtualized and the hypervisor will set it + * for us when using writetlb + * - Or we are native and the virtual space is 0 + */ + entry.teh_val =3D TLB_MK_TEH_ENTRY((uintptr_t)vaddr & mask, 0, global, + asn); + entry.tel_val =3D TLB_MK_TEL_ENTRY((uintptr_t)paddr, ps, es, cp, pa); + + return entry; +} + +static inline unsigned long tlb_entry_phys(struct kvx_tlb_format tlbe) +{ + return ((unsigned long) tlbe.tel.fn << KVX_SFR_TEL_FN_SHIFT); +} + +static inline unsigned long tlb_entry_virt(struct kvx_tlb_format tlbe) +{ + return ((unsigned long) tlbe.teh.pn << KVX_SFR_TEH_PN_SHIFT); +} + +static inline unsigned long tlb_entry_size(struct kvx_tlb_format tlbe) +{ + return BIT(get_page_size_shift(tlbe.tel.ps)); +} + +static inline int tlb_entry_overlaps(struct kvx_tlb_format tlbe1, + struct kvx_tlb_format tlbe2) +{ + unsigned long start1, end1; + unsigned long start2, end2; + + start1 =3D tlb_entry_virt(tlbe1); + end1 =3D start1 + tlb_entry_size(tlbe1); + + start2 =3D tlb_entry_virt(tlbe2); + end2 =3D start2 + tlb_entry_size(tlbe2); + + return start1 <=3D end2 && end1 >=3D start2; +} + +static inline int tlb_entry_match_addr(struct kvx_tlb_format tlbe, + unsigned long vaddr) +{ + /* + * TLB entries store up to 41 bits so the provided address must be + * truncated to match teh.pn. + */ + vaddr &=3D GENMASK(MMU_VIRT_BITS - 1, KVX_SFR_TEH_PN_SHIFT); + + return tlb_entry_virt(tlbe) =3D=3D vaddr; +} + +extern void kvx_mmu_early_setup(void); + +static inline void paging_init(void) {} + +void kvx_mmu_ltlb_remove_entry(unsigned long vaddr); +void kvx_mmu_ltlb_add_entry(unsigned long vaddr, phys_addr_t paddr, + pgprot_t flags, unsigned long page_shift); + +void kvx_mmu_jtlb_add_entry(unsigned long address, pte_t *ptep, + unsigned int asn); +extern void mmu_early_init(void); + +struct mm_struct; + +#endif /* _ASM_KVX_MMU_H */ diff --git a/arch/kvx/include/asm/mmu_context.h b/arch/kvx/include/asm/mmu_= context.h new file mode 100644 index 0000000000000..39fa92f1506b2 --- /dev/null +++ b/arch/kvx/include/asm/mmu_context.h @@ -0,0 +1,156 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Guillaume Thouvenin + */ + +#ifndef __ASM_KVX_MMU_CONTEXT_H +#define __ASM_KVX_MMU_CONTEXT_H + +/* + * Management of the Address Space Number: + * Coolidge architecture provides a 9-bit ASN to tag TLB entries. This can= be + * used to allow several entries with the same virtual address (so from + * different process) to be in the TLB at the same time. That means that w= on't + * necessarily flush the TLB when a context switch occurs and so it will + * improve performances. 
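+ * For example, entries tagged with ASN 3 (process A) and ASN 4 (process
+ * B) can coexist for the same virtual address: switching from A back to
+ * B only requires updating MMC.ASN. A full flush is only needed when
+ * the 9-bit ASN space wraps around (see get_new_mmu_context() below).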
+ */ +#include + +#include +#include +#include + +#include + +#define MM_CTXT_ASN_MASK GENMASK(KVX_SFR_MMC_ASN_WIDTH - 1, 0) +#define MM_CTXT_CYCLE_MASK (~MM_CTXT_ASN_MASK) +#define MM_CTXT_NO_ASN UL(0x0) +#define MM_CTXT_FIRST_CYCLE (MM_CTXT_ASN_MASK + 1) + +#define mm_asn(mm, cpu) ((mm)->context.asn[cpu]) + +DECLARE_PER_CPU(unsigned long, kvx_asn_cache); +#define cpu_asn_cache(cpu) per_cpu(kvx_asn_cache, cpu) + +static inline void get_new_mmu_context(struct mm_struct *mm, unsigned int = cpu) +{ + unsigned long asn =3D cpu_asn_cache(cpu); + + asn++; + /* Check if we need to start a new cycle */ + if ((asn & MM_CTXT_ASN_MASK) =3D=3D 0) { + pr_debug("%s: start new cycle, flush all tlb\n", __func__); + local_flush_tlb_all(); + + /* + * Above check for rollover of 9 bit ASN in 64 bit container. + * If the container itself wrapped around, set it to a non zero + * "generation" to distinguish from no context + */ + if (asn =3D=3D 0) + asn =3D MM_CTXT_FIRST_CYCLE; + } + + cpu_asn_cache(cpu) =3D asn; + mm_asn(mm, cpu) =3D asn; + + pr_debug("%s: mm =3D 0x%llx: cpu[%d], cycle: %lu, asn: %lu\n", + __func__, (unsigned long long)mm, cpu, + (asn & MM_CTXT_CYCLE_MASK) >> KVX_SFR_MMC_ASN_WIDTH, + asn & MM_CTXT_ASN_MASK); +} + +static inline void get_mmu_context(struct mm_struct *mm, unsigned int cpu) +{ + + unsigned long asn =3D mm_asn(mm, cpu); + + /* + * Move to new ASN if it was not from current alloc-cycle/generation. + * This is done by ensuring that the generation bits in both + * mm->context.asn and cpu_asn_cache counter are exactly same. + * + * NOTE: this also works for checking if mm has a context since the + * first alloc-cycle/generation is always '1'. MM_CTXT_NO_ASN value + * contains cycle '0', and thus it will match. + */ + if ((asn ^ cpu_asn_cache(cpu)) & MM_CTXT_CYCLE_MASK) + get_new_mmu_context(mm, cpu); +} + +static inline void activate_context(struct mm_struct *mm, unsigned int cpu) +{ + unsigned long flags; + + local_irq_save(flags); + + get_mmu_context(mm, cpu); + + kvx_sfr_set_field(MMC, ASN, mm_asn(mm, cpu) & MM_CTXT_ASN_MASK); + + local_irq_restore(flags); +} + +/** + * Redefining the generic hooks that are: + * - activate_mm + * - deactivate_mm + * - enter_lazy_tlb + * - init_new_context + * - destroy_context + * - switch_mm + */ + +#define activate_mm(prev, next) switch_mm((prev), (next), NULL) +#define deactivate_mm(tsk, mm) do { } while (0) +#define enter_lazy_tlb(mm, tsk) do { } while (0) + +static inline int init_new_context(struct task_struct *tsk, + struct mm_struct *mm) +{ + int cpu; + + for_each_possible_cpu(cpu) + mm_asn(mm, cpu) =3D MM_CTXT_NO_ASN; + + return 0; +} + +static inline void destroy_context(struct mm_struct *mm) +{ + int cpu =3D smp_processor_id(); + unsigned long flags; + + local_irq_save(flags); + mm_asn(mm, cpu) =3D MM_CTXT_NO_ASN; + local_irq_restore(flags); +} + +static inline void switch_mm(struct mm_struct *prev, struct mm_struct *nex= t, + struct task_struct *tsk) +{ + unsigned int cpu =3D smp_processor_id(); + + /** + * Comment taken from arc, but logic is the same for us: + * + * Note that the mm_cpumask is "aggregating" only, we don't clear it + * for the switched-out task, unlike some other arches. + * It is used to enlist cpus for sending TLB flush IPIs and not sending + * it to CPUs where a task once ran-on, could cause stale TLB entry + * re-use, specially for a multi-threaded task. + * e.g. T1 runs on C1, migrates to C3. T2 running on C2 munmaps. 
+ * For a non-aggregating mm_cpumask, IPI not sent C1, and if T1 + * were to re-migrate to C1, it could access the unmapped region + * via any existing stale TLB entries. + */ + cpumask_set_cpu(cpu, mm_cpumask(next)); + + if (prev !=3D next) + activate_context(next, cpu); +} + + +#endif /* __ASM_KVX_MMU_CONTEXT_H */ diff --git a/arch/kvx/include/asm/mmu_stats.h b/arch/kvx/include/asm/mmu_st= ats.h new file mode 100644 index 0000000000000..999352dbc1ce1 --- /dev/null +++ b/arch/kvx/include/asm/mmu_stats.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_MMU_STATS_H +#define _ASM_KVX_MMU_STATS_H + +#ifdef CONFIG_KVX_MMU_STATS +#include + +struct mmu_refill_stats { + unsigned long count; + unsigned long total; + unsigned long min; + unsigned long max; +}; + +enum mmu_refill_type { + MMU_REFILL_TYPE_USER, + MMU_REFILL_TYPE_KERNEL, + MMU_REFILL_TYPE_KERNEL_DIRECT, + MMU_REFILL_TYPE_COUNT, +}; + +struct mmu_stats { + struct mmu_refill_stats refill[MMU_REFILL_TYPE_COUNT]; + /* keep these fields ordered this way for assembly */ + unsigned long cycles_between_refill; + unsigned long last_refill; + unsigned long tlb_flush_all; +}; + +DECLARE_PER_CPU(struct mmu_stats, mmu_stats); +#endif + +#endif /* _ASM_KVX_MMU_STATS_H */ diff --git a/arch/kvx/include/asm/page.h b/arch/kvx/include/asm/page.h new file mode 100644 index 0000000000000..63a3bdd3bfdf6 --- /dev/null +++ b/arch/kvx/include/asm/page.h @@ -0,0 +1,172 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Guillaume Thouvenin + * Clement Leger + * Marius Gligor + */ + +#ifndef _ASM_KVX_PAGE_H +#define _ASM_KVX_PAGE_H + +#include + +#define PAGE_SHIFT CONFIG_KVX_PAGE_SHIFT +#define PAGE_SIZE _BITUL(PAGE_SHIFT) +#define PAGE_MASK (~(PAGE_SIZE - 1)) + +#define DDR_PHYS_OFFSET (0x100000000) +#define PHYS_OFFSET CONFIG_KVX_PHYS_OFFSET +#define PAGE_OFFSET CONFIG_KVX_PAGE_OFFSET +#define DDR_VIRT_OFFSET (DDR_PHYS_OFFSET + PAGE_OFFSET) + +#define VA_TO_PA_OFFSET (PHYS_OFFSET - PAGE_OFFSET) +#define PA_TO_VA_OFFSET (PAGE_OFFSET - PHYS_OFFSET) + +/* + * These macros are specifically written for assembly. They are useful for + * converting symbols above PAGE_OFFSET to their physical addresses. + */ +#define __PA(x) ((x) + VA_TO_PA_OFFSET) +#define __VA(x) ((x) + PA_TO_VA_OFFSET) + +#if defined(CONFIG_KVX_4K_PAGES) +/* Maximum usable bit using 4K pages and current page table layout */ +#define VA_MAX_BITS 40 +#define PGDIR_SHIFT 30 +#define PMD_SHIFT 21 +#else +#error "64K page not supported" +#endif + +/* + * Define _SHIFT, _SIZE and _MASK corresponding of the different page + * sizes supported by kvx. 
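+ * (namely 4 KB, 64 KB, 2 MB and 512 MB)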
+ */
+#define KVX_PAGE_4K_SHIFT	12
+#define KVX_PAGE_4K_SIZE	BIT(KVX_PAGE_4K_SHIFT)
+#define KVX_PAGE_4K_MASK	(~(KVX_PAGE_4K_SIZE - 1))
+
+#define KVX_PAGE_64K_SHIFT	16
+#define KVX_PAGE_64K_SIZE	BIT(KVX_PAGE_64K_SHIFT)
+#define KVX_PAGE_64K_MASK	(~(KVX_PAGE_64K_SIZE - 1))
+
+#define KVX_PAGE_2M_SHIFT	21
+#define KVX_PAGE_2M_SIZE	BIT(KVX_PAGE_2M_SHIFT)
+#define KVX_PAGE_2M_MASK	(~(KVX_PAGE_2M_SIZE - 1))
+
+#define KVX_PAGE_512M_SHIFT	29
+#define KVX_PAGE_512M_SIZE	BIT(KVX_PAGE_512M_SHIFT)
+#define KVX_PAGE_512M_MASK	(~(KVX_PAGE_512M_SIZE - 1))
+
+/* Encode all page shifts into one 32-bit constant for sbmm */
+#define KVX_PS_SHIFT_MATRIX	((KVX_PAGE_512M_SHIFT << 24) | \
+				 (KVX_PAGE_2M_SHIFT << 16) | \
+				 (KVX_PAGE_64K_SHIFT << 8) | \
+				 (KVX_PAGE_4K_SHIFT))
+
+/* Encode all page access policies into one 64-bit constant for sbmm */
+#define KVX_PAGE_PA_MATRIX	((UL(TLB_PA_NA_RWX) << 56) | \
+				 (UL(TLB_PA_NA_RX) << 48) | \
+				 (UL(TLB_PA_NA_RW) << 40) | \
+				 (UL(TLB_PA_NA_R) << 32) | \
+				 (UL(TLB_PA_RWX_RWX) << 24) | \
+				 (UL(TLB_PA_RX_RX) << 16) | \
+				 (UL(TLB_PA_RW_RW) << 8) | \
+				 (UL(TLB_PA_R_R)))
+
+/*
+ * Select a byte using sbmm8: shifting the selector left by one bit selects
+ * the next byte. With this default constant (0x01), sbmm yields the first
+ * byte of the double word. Shifted left by 1, the constant becomes
+ * 0x0000000000000002ULL and sbmm yields the second byte, and so on.
+ */
+#define KVX_SBMM_BYTE_SEL	0x01
+
+#ifndef __ASSEMBLY__
+
+#include
+
+/* Page Global Directory entry */
+typedef struct {
+	unsigned long pgd;
+} pgd_t;
+
+/* Page Middle Directory entry */
+typedef struct {
+	unsigned long pmd;
+} pmd_t;
+
+/* Page Table entry */
+typedef struct {
+	unsigned long pte;
+} pte_t;
+
+/* Protection bits */
+typedef struct {
+	unsigned long pgprot;
+} pgprot_t;
+
+typedef struct page *pgtable_t;
+
+/* Macros to access entry values */
+#define pgd_val(x)	((x).pgd)
+#define pmd_val(x)	((x).pmd)
+#define pte_val(x)	((x).pte)
+#define pgprot_val(x)	((x).pgprot)
+
+/* Macros to create an entry from a value */
+#define __pgd(x)	((pgd_t) { (x) })
+#define __pmd(x)	((pmd_t) { (x) })
+#define __pte(x)	((pte_t) { (x) })
+#define __pgprot(x)	((pgprot_t) { (x) })
+
+#define pte_pgprot(x)	__pgprot(pte_val(x) & ~PFN_PTE_MASK)
+
+#define __pa(x)	((unsigned long)(x) + VA_TO_PA_OFFSET)
+#define __va(x)	((void *)((unsigned long) (x) + PA_TO_VA_OFFSET))
+
+#define phys_to_pfn(phys)	(PFN_DOWN(phys))
+#define pfn_to_phys(pfn)	(PFN_PHYS(pfn))
+
+#define virt_to_pfn(vaddr)	(phys_to_pfn(__pa(vaddr)))
+#define pfn_to_virt(pfn)	(__va(pfn_to_phys(pfn)))
+
+#define virt_to_page(vaddr)	(pfn_to_page(virt_to_pfn(vaddr)))
+#define page_to_virt(page)	(pfn_to_virt(page_to_pfn(page)))
+
+#define page_to_phys(page)	virt_to_phys(page_to_virt(page))
+#define phys_to_page(phys)	(pfn_to_page(phys_to_pfn(phys)))
+
+#define virt_addr_valid(vaddr)	(pfn_valid(virt_to_pfn(vaddr)))
+
+extern void clear_page(void *to);
+extern void copy_page(void *to, void *from);
+
+static inline void clear_user_page(void *page, unsigned long vaddr,
+				   struct page *pg)
+{
+	clear_page(page);
+}
+
+static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
+				  struct page *topage)
+{
+	copy_page(to, from);
+}
+
+#define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
+				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+
+#include
+#include
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_KVX_PAGE_H */
diff --git a/arch/kvx/include/asm/page_size.h b/arch/kvx/include/asm/page_size.h
new file
mode 100644 index 0000000000000..2c2850205b50e --- /dev/null +++ b/arch/kvx/include/asm/page_size.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Guillaume Thouvenin + * Clement Leger + */ + +#ifndef _ASM_KVX_PAGE_SIZE_H +#define _ASM_KVX_PAGE_SIZE_H + +#include + +#if defined(CONFIG_HUGETLB_PAGE) +#define HUGE_PAGE_SIZE (MMC_PMJ_64K | MMC_PMJ_2M | MMC_PMJ_512M) +#else +#define HUGE_PAGE_SIZE (0) +#endif + +#if defined(CONFIG_KVX_4K_PAGES) +#define TLB_DEFAULT_PS TLB_PS_4K +#define KVX_SUPPORTED_PSIZE (MMC_PMJ_4K | HUGE_PAGE_SIZE) +#elif defined(CONFIG_KVX_64K_PAGES) +#define TLB_DEFAULT_PS TLB_PS_64K +#define KVX_SUPPORTED_PSIZE (MMC_PMJ_64K | HUGE_PAGE_SIZE) +#else +#error "Unsupported page size" +#endif + +#endif diff --git a/arch/kvx/include/asm/pgalloc.h b/arch/kvx/include/asm/pgalloc.h new file mode 100644 index 0000000000000..c199343600339 --- /dev/null +++ b/arch/kvx/include/asm/pgalloc.h @@ -0,0 +1,70 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Guillaume Thouvenin + * Clement Leger + */ + +#ifndef _ASM_KVX_PGALLOC_H +#define _ASM_KVX_PGALLOC_H + +#include +#include + +#define __HAVE_ARCH_PGD_FREE +#include + +static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) +{ + free_pages((unsigned long) pgd, PAGES_PER_PGD); +} + +static inline pgd_t *pgd_alloc(struct mm_struct *mm) +{ + pgd_t *pgd; + + pgd =3D (pgd_t *) __get_free_pages(GFP_KERNEL, PAGES_PER_PGD); + if (unlikely(pgd =3D=3D NULL)) + return NULL; + + memset(pgd, 0, USER_PTRS_PER_PGD * sizeof(pgd_t)); + + /* Copy kernel mappings */ + memcpy(pgd + USER_PTRS_PER_PGD, + init_mm.pgd + USER_PTRS_PER_PGD, + (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t)); + + return pgd; +} + +static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *p= md) +{ + unsigned long pfn =3D virt_to_pfn(pmd); + + set_pud(pud, __pud((unsigned long)pfn << PAGE_SHIFT)); +} + +static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, p= te_t *pte) +{ + unsigned long pfn =3D virt_to_pfn(pte); + + set_pmd(pmd, __pmd((unsigned long)pfn << PAGE_SHIFT)); +} + +static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_= t pte) +{ + unsigned long pfn =3D virt_to_pfn(page_address(pte)); + + set_pmd(pmd, __pmd((unsigned long)pfn << PAGE_SHIFT)); +} + +#if CONFIG_PGTABLE_LEVELS > 2 +#define __pmd_free_tlb(tlb, pmd, addr) pmd_free((tlb)->mm, pmd) +#endif + +#define __pte_free_tlb(tlb, pte, buf) do { \ + pagetable_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ + } while (0) + +#endif /* _ASM_KVX_PGALLOC_H */ diff --git a/arch/kvx/include/asm/pgtable-bits.h b/arch/kvx/include/asm/pgt= able-bits.h new file mode 100644 index 0000000000000..7e4f20cd1b39c --- /dev/null +++ b/arch/kvx/include/asm/pgtable-bits.h @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Guillaume Thouvenin + * Clement Leger + * Julian Vetter + */ + +#ifndef _ASM_KVX_PGTABLE_BITS_H +#define _ASM_KVX_PGTABLE_BITS_H + +/* + * Protection bit definition + * As we don't have any HW to handle page table walk, we can define + * our own PTE format. In order to make things easier, we are trying to ma= tch + * some parts of $tel and $teh. + * + * PageSZ must be on bit 10 and 11, because it matches the TEL.PS bits. By= doing + * this it is easier in assembly to set the TEL.PS to PageSZ. 
+ * In other words, KVX_PAGE_SZ_SHIFT == KVX_SFR_TEL_PS_SHIFT; this is
+ * checked with a BUILD_BUG_ON() in arch/kvx/mm/tlb.c.
+ *
+ * The Huge bit must be somewhere in the first 12 bits so that it can be
+ * detected when reading the PMD entry.
+ *
+ * KV3-1:
+ * +--------+--------+----+--------+---+---+---+---+---+---+------+---+---+
+ * | 63..23 | 22..13 | 12 | 11..10 | 9 | 8 | 7 | 6 | 5 | 4 | 3..2 | 1 | 0 |
+ * +--------+--------+----+--------+---+---+---+---+---+---+------+---+---+
+ *    PFN     Unused    S   PageSZ   H   G   X   W   R   D   CP     A   P
+ *
+ * Note: the PFN is 40 bits wide. We reserve 41 bits for it so that the
+ *       upper bit is always 0, which is required when shifting the PFN to
+ *       the right.
+ */
+
+/* The following shifts are used in ASM to easily extract bits */
+#define _PAGE_PERMS_SHIFT	5
+#define _PAGE_GLOBAL_SHIFT	8
+#define _PAGE_HUGE_SHIFT	9
+
+#define _PAGE_PRESENT	(1 << 0) /* Present */
+#define _PAGE_ACCESSED	(1 << 1) /* Set by tlb refill code on any access */
+/* Bits 2 - 3 are reserved for the cache policy */
+#define _PAGE_DIRTY	(1 << 4) /* Set by tlb refill code on any write */
+#define _PAGE_READ	(1 << _PAGE_PERMS_SHIFT)  /* Readable */
+#define _PAGE_WRITE	(1 << 6) /* Writable */
+#define _PAGE_EXEC	(1 << 7) /* Executable */
+#define _PAGE_GLOBAL	(1 << _PAGE_GLOBAL_SHIFT) /* Global */
+#define _PAGE_HUGE	(1 << _PAGE_HUGE_SHIFT)   /* Huge page */
+/* Bits 10 - 11 are reserved for the page size */
+#define _PAGE_SOFT	(1 << 12) /* Reserved for software */
+
+#define _PAGE_SZ_64K	(TLB_PS_64K << KVX_PAGE_SZ_SHIFT)
+#define _PAGE_SZ_2M	(TLB_PS_2M << KVX_PAGE_SZ_SHIFT)
+#define _PAGE_SZ_512M	(TLB_PS_512M << KVX_PAGE_SZ_SHIFT)
+
+#define _PAGE_SPECIAL	_PAGE_SOFT
+
+/*
+ * If _PAGE_PRESENT is cleared because the user mapped the page with
+ * PROT_NONE, pte_present() still returns true. Bit 15 of the PTE is used
+ * for this since it is unused in a PTE entry on kv3-1 (see above).
+ */
+#define _PAGE_NONE	(1 << 15)
+
+/* Used for swap PTEs only.
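+ * Bits 2..3 carry the cache policy only while a mapping is present, and a
+ * swap PTE is !pte_present() and never enters the TLB, so bit 2 can be
+ * reused here (this rationale is inferred from the layout above). For
+ * example:
+ *
+ *	pte = pte_swp_mkexclusive(pte);    sets bit 2
+ *	pte_swp_exclusive(pte)             now returns non-zero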
*/ +#define _PAGE_SWP_EXCLUSIVE (1 << 2) + +/* Note: mask used in assembly cannot be generated with GENMASK */ +#define PFN_PTE_SHIFT 23 +#define PFN_PTE_MASK (~(((1 << PFN_PTE_SHIFT) - 1))) + +#define KVX_PAGE_SZ_SHIFT 10 +#define KVX_PAGE_SZ_MASK KVX_SFR_TEL_PS_MASK + +/* Huge page of 64K are hold in PTE table */ +#define KVX_PAGE_64K_NR_CONT (1UL << (KVX_PAGE_64K_SHIFT - PAGE_SHIFT)) +/* Huge page of 512M are hold in PMD table */ +#define KVX_PAGE_512M_NR_CONT (1UL << (KVX_PAGE_512M_SHIFT - PMD_SHIFT)) + +#define KVX_PAGE_CP_SHIFT 2 +#define KVX_PAGE_CP_MASK KVX_SFR_TEL_CP_MASK + + +#define _PAGE_CACHED (TLB_CP_W_C << KVX_PAGE_CP_SHIFT) +#define _PAGE_UNCACHED (TLB_CP_U_U << KVX_PAGE_CP_SHIFT) +#define _PAGE_DEVICE (TLB_CP_D_U << KVX_PAGE_CP_SHIFT) + +#define KVX_ACCESS_PERMS_BITS 4 +#define KVX_ACCESS_PERMS_OFFSET _PAGE_PERMS_SHIFT +#define KVX_ACCESS_PERMS_SIZE (1 << KVX_ACCESS_PERMS_BITS) + +#define KVX_ACCESS_PERM_START_BIT KVX_ACCESS_PERMS_OFFSET +#define KVX_ACCESS_PERM_STOP_BIT \ + (KVX_ACCESS_PERMS_OFFSET + KVX_ACCESS_PERMS_BITS - 1) +#define KVX_ACCESS_PERMS_MASK \ + GENMASK(KVX_ACCESS_PERM_STOP_BIT, KVX_ACCESS_PERM_START_BIT) +#define KVX_ACCESS_PERMS_INDEX(x) \ + ((unsigned int)(x & KVX_ACCESS_PERMS_MASK) >> KVX_ACCESS_PERMS_OFFSET) + +/* Bits read, write, exec and global are not preserved across pte_modify()= */ +#define _PAGE_CHG_MASK (~(unsigned long)(_PAGE_READ | _PAGE_WRITE | \ + _PAGE_EXEC | _PAGE_GLOBAL)) + +#endif diff --git a/arch/kvx/include/asm/pgtable.h b/arch/kvx/include/asm/pgtable.h new file mode 100644 index 0000000000000..f33e81d7c5b1f --- /dev/null +++ b/arch/kvx/include/asm/pgtable.h @@ -0,0 +1,468 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Guillaume Thouvenin + * Clement Leger + * Marius Gligor + * Yann Sionneau + */ + +#ifndef _ASM_KVX_PGTABLE_H +#define _ASM_KVX_PGTABLE_H + +#include +#include + +#include +#include + +#include + +#include + +struct mm_struct; +struct vm_area_struct; + +/* + * Hugetlb definitions. All sizes are supported (64 KB, 2 MB and 512 MB). 
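+ *
+ * As the surrounding definitions suggest (HPAGE_SHIFT below, the
+ * *_NR_CONT constants in pgtable-bits.h), 2 MB pages are single
+ * PMD-level leaves, while 64 KB and 512 MB pages are represented as
+ * runs of contiguous entries. With a 4 KB base page, for instance:
+ *
+ *	KVX_PAGE_64K_NR_CONT  = 1 << (16 - 12) = 16  contiguous PTEs
+ *	KVX_PAGE_512M_NR_CONT = 1 << (29 - 21) = 256 contiguous PMDs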
+ */ +#if defined(CONFIG_KVX_4K_PAGES) +#define HUGE_MAX_HSTATE 3 +#elif defined(CONFIG_KVX_64K_PAGES) +#define HUGE_MAX_HSTATE 2 +#else +#error "Unsupported page size" +#endif + +#define HPAGE_SHIFT PMD_SHIFT +#define HPAGE_SIZE BIT(HPAGE_SHIFT) +#define HPAGE_MASK (~(HPAGE_SIZE - 1)) +#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT) + +extern pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, + vm_flags_t flags); +#define arch_make_huge_pte arch_make_huge_pte + +/* Vmalloc definitions */ +#define VMALLOC_START KERNEL_VMALLOC_MAP_BASE +#define VMALLOC_END (VMALLOC_START + KERNEL_VMALLOC_MAP_SIZE - 1) + +/* Also used by GDB script to go through the page table */ +#define PGDIR_BITS (VA_MAX_BITS - PGDIR_SHIFT) +#define PMD_BITS (PGDIR_SHIFT - PMD_SHIFT) +#define PTE_BITS (PMD_SHIFT - PAGE_SHIFT) + +/* Size of region mapped by a page global directory */ +#define PGDIR_SIZE BIT(PGDIR_SHIFT) +#define PGDIR_MASK (~(PGDIR_SIZE - 1)) + +/* Size of region mapped by a page middle directory */ +#define PMD_SIZE BIT(PMD_SHIFT) +#define PMD_MASK (~(PMD_SIZE - 1)) + +/* Number of entries in the page global directory */ +#define PAGES_PER_PGD 2 +#define PTRS_PER_PGD (PAGES_PER_PGD * PAGE_SIZE / sizeof(pgd_t)) + +/* Number of entries in the page middle directory */ +#define PTRS_PER_PMD (PAGE_SIZE / sizeof(pmd_t)) + +/* Number of entries in the page table */ +#define PTRS_PER_PTE (PAGE_SIZE / sizeof(pte_t)) + +#define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE) + +extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; + +/* Page protection bits */ +#define _PAGE_BASE (_PAGE_PRESENT | _PAGE_CACHED) +#define _PAGE_KERNEL (_PAGE_PRESENT | _PAGE_GLOBAL | \ + _PAGE_READ | _PAGE_WRITE) +#define _PAGE_KERNEL_EXEC (_PAGE_BASE | _PAGE_READ | _PAGE_EXEC | \ + _PAGE_GLOBAL | _PAGE_WRITE) +#define _PAGE_KERNEL_DEVICE (_PAGE_KERNEL | _PAGE_DEVICE) +#define _PAGE_KERNEL_NOCACHE (_PAGE_KERNEL | _PAGE_UNCACHED) + +#define PAGE_NONE __pgprot(_PAGE_NONE) +#define PAGE_READ __pgprot(_PAGE_BASE | _PAGE_READ) +#define PAGE_READ_WRITE __pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_WRITE) +#define PAGE_READ_EXEC __pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC) +#define PAGE_READ_WRITE_EXEC __pgprot(_PAGE_BASE | _PAGE_READ | \ + _PAGE_EXEC | _PAGE_WRITE) + +#define PAGE_KERNEL __pgprot(_PAGE_KERNEL | _PAGE_CACHED) +#define PAGE_KERNEL_EXEC __pgprot(_PAGE_KERNEL_EXEC) +#define PAGE_KERNEL_NOCACHE __pgprot(_PAGE_KERNEL | _PAGE_UNCACHED) +#define PAGE_KERNEL_DEVICE __pgprot(_PAGE_KERNEL_DEVICE) +#define PAGE_KERNEL_RO __pgprot((_PAGE_KERNEL | _PAGE_CACHED) & ~(_PAGE_W= RITE)) +#define PAGE_KERNEL_ROX __pgprot(_PAGE_KERNEL_EXEC & ~(_PAGE_WRITE)) + +#define pgprot_noncached(prot) (__pgprot((pgprot_val(prot) & ~KVX_PAGE_CP_= MASK) | _PAGE_UNCACHED)) + +/* + * ZERO_PAGE is a global shared page that is always zero: used + * for zero-mapped memory areas etc.. + */ +extern struct page *empty_zero_page; +#define ZERO_PAGE(vaddr) (empty_zero_page) + + +/* + * Encode and decode a swap entry. Swap PTEs are all PTEs that are !pte_no= ne() + * && !pte_present(). Swap entries are encoded in an arch dependent format: + * + * +--------+----+-------+------+---+---+---+ + * | 63..16 | 15 | 14..8 | 7..3 | 2 | 1 | 0 | + * +--------+----+-------+------+---+---+---+ + * offset 0 0 type E 0 0 + * + * This allows for up to 31 swap files and 1PB per swap file. 
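+ *
+ * A worked example of the packing (illustrative values only):
+ *
+ *	swp_entry_t e = __swp_entry(2, 0x100);
+ *	  e.val == (2 << 3) | (0x100 << 16) == 0x1000010
+ *	__swp_type(e)   == 2
+ *	__swp_offset(e) == 0x100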
+ */
+#define __SWP_TYPE_SHIFT	3
+#define __SWP_TYPE_BITS		5
+#define __SWP_TYPE_MASK		((1UL << __SWP_TYPE_BITS) - 1)
+#define __SWP_OFFSET_SHIFT	16
+
+#define MAX_SWAPFILES_CHECK() \
+	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
+
+#define __swp_type(x)	(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
+#define __swp_offset(x)	((x).val >> __SWP_OFFSET_SHIFT)
+#define __swp_entry(type, offset) ((swp_entry_t) \
+	{ ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
+
+#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
+#define __swp_entry_to_pte(x)	((pte_t) { (x).val })
+
+static inline int pte_swp_exclusive(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
+}
+
+static inline pte_t pte_swp_mkexclusive(pte_t pte)
+{
+	pte_val(pte) |= _PAGE_SWP_EXCLUSIVE;
+	return pte;
+}
+
+static inline pte_t pte_swp_clear_exclusive(pte_t pte)
+{
+	pte_val(pte) &= ~_PAGE_SWP_EXCLUSIVE;
+	return pte;
+}
+
+/*
+ * PGD definitions:
+ * - pgd_ERROR
+ */
+#define pgd_ERROR(e) \
+	pr_err("%s:%d: bad pgd %016lx.\n", __FILE__, __LINE__, pgd_val(e))
+
+/*
+ * PUD
+ *
+ * As we manage a three-level page table, set_pud() is what actually fills
+ * a PGD entry.
+ */
+static inline void set_pud(pud_t *pudp, pud_t pmd)
+{
+	*pudp = pmd;
+}
+
+static inline int pud_none(pud_t pud)
+{
+	return pud_val(pud) == 0;
+}
+
+static inline int pud_bad(pud_t pud)
+{
+	return pud_none(pud);
+}
+static inline int pud_present(pud_t pud)
+{
+	return pud_val(pud) != 0;
+}
+
+static inline void pud_clear(pud_t *pud)
+{
+	set_pud(pud, __pud(0));
+}
+
+/*
+ * PMD definitions:
+ * - set_pmd
+ * - pmd_present
+ * - pmd_none
+ * - pmd_bad
+ * - pmd_clear
+ * - pmd_page
+ */
+
+static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+	*pmdp = pmd;
+}
+
+/* Returns 1 if the entry is present */
+static inline int pmd_present(pmd_t pmd)
+{
+	return pmd_val(pmd) != 0;
+}
+
+/* Returns 1 if the corresponding entry has the value 0 */
+static inline int pmd_none(pmd_t pmd)
+{
+	return pmd_val(pmd) == 0;
+}
+
+/* Used to check that a page middle directory entry is valid */
+static inline int pmd_bad(pmd_t pmd)
+{
+	return pmd_none(pmd);
+}
+
+/* Clears the entry to prevent the process from using the linear address
+ * that it mapped.
+ */
+static inline void pmd_clear(pmd_t *pmdp)
+{
+	set_pmd(pmdp, __pmd(0));
+}
+
+/*
+ * Returns the address of the descriptor of the page table referred to by
+ * the PMD entry.
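+ * Two encodings are handled below (as the code makes apparent): a huge
+ * PMD leaf stores its PFN at PFN_PTE_SHIFT like a PTE, while a regular
+ * PMD entry stores the physical address of a PTE page, whose PFN
+ * therefore sits at PAGE_SHIFT. Illustratively, with 4K pages:
+ *
+ *	huge:    pfn = pmd_val >> 23   (PFN_PTE_SHIFT)
+ *	regular: pfn = pmd_val >> 12   (PAGE_SHIFT)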
+ */ +static inline struct page *pmd_page(pmd_t pmd) +{ + if (pmd_val(pmd) & _PAGE_HUGE) + return pfn_to_page( + (pmd_val(pmd) & PFN_PTE_MASK) >> PFN_PTE_SHIFT); + + return pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT); +} + +#define pmd_ERROR(e) \ + pr_err("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e)) + +static inline pmd_t *pud_pgtable(pud_t pud) +{ + return (pmd_t *)pfn_to_virt(pud_val(pud) >> PAGE_SHIFT); +} + +static inline struct page *pud_page(pud_t pud) +{ + return pfn_to_page(pud_val(pud) >> PAGE_SHIFT); +} + +/* + * PTE definitions: + * - set_pte + * - set_pte_at + * - pte_clear + * - pte_page + * - pte_pfn + * - pte_present + * - pte_none + * - pte_write + * - pte_dirty + * - pte_young + * - pte_special + * - pte_mkdirty + * - pte_mkwrite_novma + * - pte_mkclean + * - pte_mkyoung + * - pte_mkold + * - pte_mkspecial + * - pte_wrprotect + */ + +static inline void set_pte(pte_t *ptep, pte_t pteval) +{ + *ptep =3D pteval; +} + +static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval) +{ + set_pte(ptep, pteval); +} + +#define pte_clear(mm, addr, ptep) set_pte(ptep, __pte(0)) + +/* Constructs a page table entry */ +static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot) +{ + return __pte(((pfn << PFN_PTE_SHIFT) & PFN_PTE_MASK) | + pgprot_val(prot)); +} + +/* Builds a page table entry by combining a page descriptor and a group of + * access rights. + */ +#define mk_pte(page, prot) (pfn_pte(page_to_pfn(page), prot)) + +/* Modifies page access rights */ +static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) +{ + return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); +} + +#define pte_page(x) pfn_to_page(pte_pfn(x)) + +static inline unsigned long pmd_page_vaddr(pmd_t pmd) +{ + return (unsigned long)pfn_to_virt(pmd_val(pmd) >> PAGE_SHIFT); +} + +/* Yields the page frame number (PFN) of a page table entry */ +static inline unsigned long pte_pfn(pte_t pte) +{ + return ((pte_val(pte) & PFN_PTE_MASK) >> PFN_PTE_SHIFT); +} + +static inline int pte_present(pte_t pte) +{ + return (pte_val(pte) & (_PAGE_PRESENT | _PAGE_NONE)); +} + +static inline int pte_none(pte_t pte) +{ + return (pte_val(pte) =3D=3D 0); +} + +static inline int pte_write(pte_t pte) +{ + return pte_val(pte) & _PAGE_WRITE; +} + +static inline int pte_dirty(pte_t pte) +{ + return pte_val(pte) & _PAGE_DIRTY; +} + +static inline int pte_young(pte_t pte) +{ + return pte_val(pte) & _PAGE_ACCESSED; +} + +static inline int pte_special(pte_t pte) +{ + return pte_val(pte) & _PAGE_SPECIAL; +} + +static inline int pte_huge(pte_t pte) +{ + return pte_val(pte) & _PAGE_HUGE; +} + +static inline pte_t pte_mkdirty(pte_t pte) +{ + return __pte(pte_val(pte) | _PAGE_DIRTY); +} + +static inline pte_t pte_mkwrite_novma(pte_t pte) +{ + return __pte(pte_val(pte) | _PAGE_WRITE); +} + +static inline pte_t pte_mkclean(pte_t pte) +{ + return __pte(pte_val(pte) & ~(_PAGE_DIRTY)); +} + +static inline pte_t pte_mkyoung(pte_t pte) +{ + return __pte(pte_val(pte) | _PAGE_ACCESSED); +} + +static inline pte_t pte_mkold(pte_t pte) +{ + return __pte(pte_val(pte) & ~(_PAGE_ACCESSED)); +} + +static inline pte_t pte_mkspecial(pte_t pte) +{ + return __pte(pte_val(pte) | _PAGE_SPECIAL); +} + +static inline pte_t pte_wrprotect(pte_t pte) +{ + return __pte(pte_val(pte) & ~(_PAGE_WRITE)); +} + +static inline pte_t pte_mkhuge(pte_t pte) +{ + return __pte(pte_val(pte) | _PAGE_HUGE); +} + +static inline pte_t pte_of_pmd(pmd_t pmd) +{ + return __pte(pmd_val(pmd)); +} + +#define pmd_pfn(pmd) 
pte_pfn(pte_of_pmd(pmd)) + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + +#define pmdp_establish pmdp_establish +static inline pmd_t pmdp_establish(struct vm_area_struct *vma, + unsigned long address, pmd_t *pmdp, pmd_t pmd) +{ + return __pmd(xchg(&pmd_val(*pmdp), pmd_val(pmd))); +} + +static inline int pmd_trans_huge(pmd_t pmd) +{ + return !!(pmd_val(pmd) & _PAGE_HUGE); +} + +static inline pmd_t pmd_of_pte(pte_t pte) +{ + return __pmd(pte_val(pte)); +} + + +#define pmd_mkclean(pmd) pmd_of_pte(pte_mkclean(pte_of_pmd(pmd))) +#define pmd_mkdirty(pmd) pmd_of_pte(pte_mkdirty(pte_of_pmd(pmd))) +#define pmd_mkold(pmd) pmd_of_pte(pte_mkold(pte_of_pmd(pmd))) +#define pmd_mkwrite_novma(pmd) pmd_of_pte(pte_mkwrite_novma(pte_of_pm= d(pmd))) +#define pmd_mkyoung(pmd) pmd_of_pte(pte_mkyoung(pte_of_pmd(pmd))) +#define pmd_modify(pmd, prot) pmd_of_pte(pte_modify(pte_of_pmd(pmd), prot)) +#define pmd_wrprotect(pmd) pmd_of_pte(pte_wrprotect(pte_of_pmd(pmd))) + +static inline pmd_t pmd_mkhuge(pmd_t pmd) +{ + /* Create a huge page in PMD implies a size of 2 MB */ + return __pmd(pmd_val(pmd) | + _PAGE_HUGE | (TLB_PS_2M << KVX_PAGE_SZ_SHIFT)); +} + +static inline pmd_t pmd_mkinvalid(pmd_t pmd) +{ + pmd_val(pmd) &=3D ~(_PAGE_PRESENT); + + return pmd; +} + +#define pmd_dirty(pmd) pte_dirty(pte_of_pmd(pmd)) +#define pmd_write(pmd) pte_write(pte_of_pmd(pmd)) +#define pmd_young(pmd) pte_young(pte_of_pmd(pmd)) + +#define mk_pmd(page, prot) pmd_of_pte(mk_pte(page, prot)) + +static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot) +{ + return __pmd(((pfn << PFN_PTE_SHIFT) & PFN_PTE_MASK) | + pgprot_val(prot)); +} + +static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr, + pmd_t *pmdp, pmd_t pmd) +{ + *pmdp =3D pmd; +} + +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ + +#endif /* _ASM_KVX_PGTABLE_H */ diff --git a/arch/kvx/include/asm/sparsemem.h b/arch/kvx/include/asm/sparse= mem.h new file mode 100644 index 0000000000000..2f35743f20fb2 --- /dev/null +++ b/arch/kvx/include/asm/sparsemem.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_SPARSEMEM_H +#define _ASM_KVX_SPARSEMEM_H + +#ifdef CONFIG_SPARSEMEM +#define MAX_PHYSMEM_BITS 40 +#define SECTION_SIZE_BITS 30 +#endif /* CONFIG_SPARSEMEM */ + +#endif /* _ASM_KVX_SPARSEMEM_H */ diff --git a/arch/kvx/include/asm/symbols.h b/arch/kvx/include/asm/symbols.h new file mode 100644 index 0000000000000..a53c1607979fe --- /dev/null +++ b/arch/kvx/include/asm/symbols.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_SYMBOLS_H +#define _ASM_KVX_SYMBOLS_H + +/* Symbols to patch TLB refill handler */ +extern char kvx_perf_tlb_refill[], kvx_std_tlb_refill[]; + +/* Entry point of the ELF, used to start other PEs in SMP */ +extern int kvx_start[]; + +#endif diff --git a/arch/kvx/include/asm/tlb.h b/arch/kvx/include/asm/tlb.h new file mode 100644 index 0000000000000..190b682e18191 --- /dev/null +++ b/arch/kvx/include/asm/tlb.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Guillaume Thouvenin + * Clement Leger + */ + +#ifndef _ASM_KVX_TLB_H +#define _ASM_KVX_TLB_H + +struct mmu_gather; + +static void tlb_flush(struct mmu_gather *tlb); + +int clear_ltlb_entry(unsigned long vaddr); + +#include + +static inline unsigned int pgprot_cache_policy(unsigned long flags) +{ + return (flags & KVX_PAGE_CP_MASK) >> KVX_PAGE_CP_SHIFT; +} + +#endif /* _ASM_KVX_TLB_H */ diff --git a/arch/kvx/include/asm/tlb_defs.h b/arch/kvx/include/asm/tlb_def= s.h new file mode 100644 index 0000000000000..1c77f7387ecea --- /dev/null +++ b/arch/kvx/include/asm/tlb_defs.h @@ -0,0 +1,132 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Julian Vetter + * Guillaume Thouvenin + * Marius Gligor + */ + +#ifndef _ASM_KVX_TLB_DEFS_H +#define _ASM_KVX_TLB_DEFS_H + +#include + +#include + +/* Architecture specification */ +#define MMC_SB_JTLB 0 +#define MMC_SB_LTLB 1 + +#define MMU_LTLB_SETS 1 +#define MMU_LTLB_WAYS 16 + +#define MMU_JTLB_SETS 64 +#define MMU_JTLB_WAYS_SHIFT 2 +#define MMU_JTLB_WAYS (1 << MMU_JTLB_WAYS_SHIFT) + +#define MMU_JTLB_ENTRIES (MMU_JTLB_SETS << MMU_JTLB_WAYS_SHIFT) + +/* Set is determined using the 6 lsb of virtual page */ +#define MMU_JTLB_SET_MASK (MMU_JTLB_SETS - 1) +#define MMU_JTLB_WAY_MASK (MMU_JTLB_WAYS - 1) + +/* TLB: Entry Status */ +#define TLB_ES_INVALID 0 +#define TLB_ES_PRESENT 1 +#define TLB_ES_MODIFIED 2 +#define TLB_ES_A_MODIFIED 3 + +/* TLB: Cache Policy - First value is for data, the second is for instruct= ion + * Symbols are + * D: device + * U: uncached + * W: write through + * C: cache enabled + */ +#define TLB_CP_D_U 0 +#define TLB_CP_U_U 1 +#define TLB_CP_W_C 2 +#define TLB_CP_U_C 3 + +/* TLB: Protection Attributes: First value is when PM=3D0, second is when = PM=3D1 + * Symbols are: + * NA: no access + * R : read + * W : write + * X : execute + */ +#define TLB_PA_NA_NA 0 +#define TLB_PA_NA_R 1 +#define TLB_PA_NA_RW 2 +#define TLB_PA_NA_RX 3 +#define TLB_PA_NA_RWX 4 +#define TLB_PA_R_R 5 +#define TLB_PA_R_RW 6 +#define TLB_PA_R_RX 7 +#define TLB_PA_R_RWX 8 +#define TLB_PA_RW_RW 9 +#define TLB_PA_RW_RWX 10 +#define TLB_PA_RX_RX 11 +#define TLB_PA_RX_RWX 12 +#define TLB_PA_RWX_RWX 13 + +/* TLB: Page Size */ +#define TLB_PS_4K 0 +#define TLB_PS_64K 1 +#define TLB_PS_2M 2 +#define TLB_PS_512M 3 + +#define TLB_G_GLOBAL 1 +#define TLB_G_USE_ASN 0 + +#define TLB_MK_TEH_ENTRY(_vaddr, _vs, _global, _asn) \ + (((_vs) << KVX_SFR_TEH_VS_SHIFT) | \ + ((_global) << KVX_SFR_TEH_G_SHIFT) | \ + ((_asn) << KVX_SFR_TEH_ASN_SHIFT) | \ + (((_vaddr) >> KVX_SFR_TEH_PN_SHIFT) << KVX_SFR_TEH_PN_SHIFT)) + +#define TLB_MK_TEL_ENTRY(_paddr, _ps, _es, _cp, _pa) \ + (((_es) << KVX_SFR_TEL_ES_SHIFT) | \ + ((_ps) << KVX_SFR_TEL_PS_SHIFT) | \ + ((_cp) << KVX_SFR_TEL_CP_SHIFT) | \ + ((_pa) << KVX_SFR_TEL_PA_SHIFT) | \ + (((_paddr) >> KVX_SFR_TEL_FN_SHIFT) << KVX_SFR_TEL_FN_SHIFT)) + + +/* Refill routine related defines */ +#define REFILL_PERF_ENTRIES 4 +#define REFILL_PERF_PAGE_SIZE SZ_512M + +/* paddr will be inserted in assembly code */ +#define REFILL_PERF_TEL_VAL \ + TLB_MK_TEL_ENTRY(0, TLB_PS_512M, TLB_ES_A_MODIFIED, TLB_CP_W_C, \ + TLB_PA_NA_RWX) +/* vaddr will be inserted in assembly code */ +#define REFILL_PERF_TEH_VAL TLB_MK_TEH_ENTRY(0, 0, TLB_G_GLOBAL, 0) + +/* + * LTLB fixed entry index + */ +#define LTLB_ENTRY_KERNEL_TEXT 0 +#define LTLB_ENTRY_GDB_PAGE 1 +#define LTLB_ENTRY_VIRTUAL_SMEM 2 +/* Reserve entries for kernel pagination */ +#define LTLB_KERNEL_RESERVED 3 +/* This define 
should reflect the maximum number of fixed LTLB entries */ +#define LTLB_ENTRY_FIXED_COUNT (LTLB_KERNEL_RESERVED + REFILL_PERF_ENTRIES) +#define LTLB_ENTRY_EARLY_SMEM LTLB_ENTRY_FIXED_COUNT + +/* MMC: Protection Trap Cause */ +#define MMC_PTC_RESERVED 0 +#define MMC_PTC_READ 1 +#define MMC_PTC_WRITE 2 +#define MMC_PTC_EXECUTE 3 + +/* MMC: Page size Mask in JTLB */ +#define MMC_PMJ_4K 1 +#define MMC_PMJ_64K 2 +#define MMC_PMJ_2M 4 +#define MMC_PMJ_512M 8 + +#endif diff --git a/arch/kvx/include/asm/tlbflush.h b/arch/kvx/include/asm/tlbflus= h.h new file mode 100644 index 0000000000000..6c11fe7108078 --- /dev/null +++ b/arch/kvx/include/asm/tlbflush.h @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Guillaume Thouvenin + * Clement Leger + */ + +#ifndef _ASM_KVX_TLBFLUSH_H +#define _ASM_KVX_TLBFLUSH_H + +#include +#include +#include + +extern void local_flush_tlb_page(struct vm_area_struct *vma, + unsigned long addr); +extern void local_flush_tlb_all(void); +extern void local_flush_tlb_mm(struct mm_struct *mm); +extern void local_flush_tlb_range(struct vm_area_struct *vma, + unsigned long start, + unsigned long end); +extern void local_flush_tlb_kernel_range(unsigned long start, + unsigned long end); + +#ifdef CONFIG_SMP +extern void smp_flush_tlb_all(void); +extern void smp_flush_tlb_mm(struct mm_struct *mm); +extern void smp_flush_tlb_page(struct vm_area_struct *vma, unsigned long a= ddr); +extern void smp_flush_tlb_range(struct vm_area_struct *vma, unsigned long = start, + unsigned long end); +extern void smp_flush_tlb_kernel_range(unsigned long start, unsigned long = end); + +static inline void flush_tlb(void) +{ + smp_flush_tlb_mm(current->mm); +} + +#define flush_tlb_page smp_flush_tlb_page +#define flush_tlb_all smp_flush_tlb_all +#define flush_tlb_mm smp_flush_tlb_mm +#define flush_tlb_range smp_flush_tlb_range +#define flush_tlb_kernel_range smp_flush_tlb_kernel_range + +#else +#define flush_tlb_page local_flush_tlb_page +#define flush_tlb_all local_flush_tlb_all +#define flush_tlb_mm local_flush_tlb_mm +#define flush_tlb_range local_flush_tlb_range +#define flush_tlb_kernel_range local_flush_tlb_kernel_range +#endif + +void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, + pmd_t *pmd); + +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *v= ma, + unsigned long address, pte_t *ptep, unsigned int nr); +void update_mmu_cache(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep); + +#endif /* _ASM_KVX_TLBFLUSH_H */ diff --git a/arch/kvx/include/asm/vmalloc.h b/arch/kvx/include/asm/vmalloc.h new file mode 100644 index 0000000000000..a07692431108e --- /dev/null +++ b/arch/kvx/include/asm/vmalloc.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_VMALLOC_H +#define _ASM_KVX_VMALLOC_H + +#endif /* _ASM_KVX_VMALLOC_H */ diff --git a/arch/kvx/mm/cacheflush.c b/arch/kvx/mm/cacheflush.c new file mode 100644 index 0000000000000..62ec07266708e --- /dev/null +++ b/arch/kvx/mm/cacheflush.c @@ -0,0 +1,158 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger
+ */
+
+#include
+#include
+#include
+
+#include
+
+#ifdef CONFIG_SMP
+
+struct flush_data {
+	unsigned long start;
+	unsigned long end;
+};
+
+static inline void ipi_flush_icache_range(void *arg)
+{
+	struct flush_data *ta = arg;
+
+	local_flush_icache_range(ta->start, ta->end);
+}
+
+void flush_icache_range(unsigned long start, unsigned long end)
+{
+	struct flush_data data = {
+		.start = start,
+		.end = end
+	};
+
+	/* Invalidate the L1 icache on all cpus */
+	on_each_cpu(ipi_flush_icache_range, &data, 1);
+}
+EXPORT_SYMBOL(flush_icache_range);
+
+#endif /* CONFIG_SMP */
+
+void dcache_wb_inval_phys_range(phys_addr_t addr, unsigned long len, bool wb,
+				bool inval)
+{
+	if (wb && inval) {
+		wbinval_dcache_range(addr, len);
+	} else {
+		if (inval)
+			inval_dcache_range(addr, len);
+		if (wb)
+			wb_dcache_range(addr, len);
+	}
+}
+
+static inline pte_t *get_ptep(struct mm_struct *mm, unsigned long addr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset(mm, addr);
+	if (pgd_none(*pgd))
+		return NULL;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return NULL;
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return NULL;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return NULL;
+
+	/* Check for a huge page at pmd level */
+	if (pmd_leaf(*pmd)) {
+		pte = (pte_t *) pmd;
+		if (!pte_present(*pte))
+			return NULL;
+
+		return pte;
+	}
+
+	pte = pte_offset_map(pmd, addr);
+	if (!pte_present(*pte))
+		return NULL;
+
+	return pte;
+}
+
+static unsigned long dcache_wb_inval_virt_to_phys(struct vm_area_struct *vma,
+						  unsigned long vaddr,
+						  unsigned long len,
+						  bool wb, bool inval)
+{
+	unsigned long pfn, offset, pgsize;
+	pte_t *ptep;
+
+	ptep = get_ptep(vma->vm_mm, vaddr);
+	if (!ptep) {
+		/*
+		 * Since we did not find a matching pte, return the length
+		 * needed to reach the next page boundary.
+		 */
+		offset = (vaddr & (PAGE_SIZE - 1));
+		return PAGE_SIZE - offset;
+	}
+
+	/* Handle page sizes correctly */
+	pgsize = (pte_val(*ptep) & KVX_PAGE_SZ_MASK) >> KVX_PAGE_SZ_SHIFT;
+	pgsize = (1 << get_page_size_shift(pgsize));
+
+	offset = vaddr & (pgsize - 1);
+	len = min(pgsize - offset, len);
+	pfn = pte_pfn(*ptep);
+
+	dcache_wb_inval_phys_range(PFN_PHYS(pfn) + offset, len, wb, inval);
+
+	return len;
+}
+
+int dcache_wb_inval_virt_range(unsigned long vaddr, unsigned long len, bool wb,
+			       bool inval)
+{
+	unsigned long end = vaddr + len;
+	unsigned long vma_end;
+	struct vm_area_struct *vma;
+	unsigned long rlen;
+	struct mm_struct *mm = current->mm;
+
+	/* necessary for find_vma */
+	mmap_read_lock(mm);
+
+	/*
+	 * Verify that the specified address region actually belongs to this
+	 * process.
+	 */
+	while (vaddr < end) {
+		vma = find_vma(current->mm, vaddr);
+		if (vma == NULL || vaddr < vma->vm_start) {
+			mmap_read_unlock(mm);
+			return -EFAULT;
+		}
+		vma_end = min(end, vma->vm_end);
+		while (vaddr < vma_end) {
+			rlen = dcache_wb_inval_virt_to_phys(vma, vaddr, len, wb, inval);
+			len -= rlen;
+			vaddr += rlen;
+		}
+	}
+
+	mmap_read_unlock(mm);
+
+	return 0;
+}
diff --git a/arch/kvx/mm/dma-mapping.c b/arch/kvx/mm/dma-mapping.c
new file mode 100644
index 0000000000000..d73fbb1c35896
--- /dev/null
+++ b/arch/kvx/mm/dma-mapping.c
@@ -0,0 +1,83 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger + * Guillaume Thouvenin + * Jules Maselbas + * Julian Vetter + */ + +#include +#include +#include "../../../drivers/iommu/dma-iommu.h" + +#include + +void arch_dma_prep_coherent(struct page *page, size_t size) +{ + unsigned long addr =3D (unsigned long) page_to_phys(page); + + /* Flush pending data and invalidate pages */ + wbinval_dcache_range(addr, size); +} + +/** + * The implementation of arch should follow the following rules: + * map for_cpu for_device unmap + * TO_DEV writeback none writeback none + * FROM_DEV invalidate invalidate(*) invalidate invalidate(*) + * BIDIR writeback invalidate writeback invalidate + * + * (*) - only necessary if the CPU speculatively prefetches. + * + * (see https://lkml.org/lkml/2018/5/18/979) + */ +void arch_sync_dma_for_device(phys_addr_t paddr, size_t size, + enum dma_data_direction dir) +{ + switch (dir) { + case DMA_FROM_DEVICE: + inval_dcache_range(paddr, size); + break; + + case DMA_TO_DEVICE: + case DMA_BIDIRECTIONAL: + wb_dcache_range(paddr, size); + break; + + default: + WARN_ON_ONCE(true); + } +} + +void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, + enum dma_data_direction dir) +{ + switch (dir) { + case DMA_TO_DEVICE: + break; + case DMA_FROM_DEVICE: + break; + + case DMA_BIDIRECTIONAL: + inval_dcache_range(paddr, size); + break; + + default: + WARN_ON_ONCE(true); + } +} + +#ifdef CONFIG_IOMMU_DMA +void arch_teardown_dma_ops(struct device *dev) +{ + dev->dma_ops =3D NULL; +} +#endif /* CONFIG_IOMMU_DMA*/ + +void arch_setup_dma_ops(struct device *dev, bool coherent) +{ + dev->dma_coherent =3D coherent; + if (device_iommu_mapped(dev)) + iommu_setup_dma_ops(dev); +} diff --git a/arch/kvx/mm/extable.c b/arch/kvx/mm/extable.c new file mode 100644 index 0000000000000..8889a1a90a775 --- /dev/null +++ b/arch/kvx/mm/extable.c @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * derived from arch/riscv/mm/extable.c + * + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + + +#include +#include +#include + +int fixup_exception(struct pt_regs *regs) +{ + const struct exception_table_entry *fixup; + + fixup =3D search_exception_tables(regs->spc); + if (fixup) { + regs->spc =3D fixup->fixup; + return 1; + } + return 0; +} diff --git a/arch/kvx/mm/fault.c b/arch/kvx/mm/fault.c new file mode 100644 index 0000000000000..9d8513db57a8d --- /dev/null +++ b/arch/kvx/mm/fault.c @@ -0,0 +1,270 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Guillaume Thouvenin + * Clement Leger + * Yann Sionneau + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +static int handle_vmalloc_fault(uint64_t ea) +{ + /* + * Synchronize this task's top level page-table with + * the 'reference' page table. + * As we only have 2 or 3 level page table we don't need to + * deal with other levels. 
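+ *
+ * The rationale, as implied by pgd_alloc() in asm/pgalloc.h: each task's
+ * PGD receives a copy of the kernel entries when it is created, so a
+ * vmalloc mapping added afterwards exists only in init_mm.pgd. The first
+ * access from another context traps here and copies the missing top-level
+ * entry, essentially:
+ *
+ *	set_pgd(pgd_offset(current->active_mm, ea), *pgd_offset_k(ea));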
+ */
+	unsigned long addr = ea & PAGE_MASK;
+	pgd_t *pgd_k, *pgd;
+	p4d_t *p4d_k, *p4d;
+	pud_t *pud_k, *pud;
+	pmd_t *pmd_k, *pmd;
+	pte_t *pte_k;
+
+	pgd = pgd_offset(current->active_mm, ea);
+	pgd_k = pgd_offset_k(ea);
+	if (!pgd_present(*pgd_k))
+		return 1;
+	set_pgd(pgd, *pgd_k);
+
+	p4d = p4d_offset(pgd, ea);
+	p4d_k = p4d_offset(pgd_k, ea);
+	if (!p4d_present(*p4d_k))
+		return 1;
+
+	pud = pud_offset(p4d, ea);
+	pud_k = pud_offset(p4d_k, ea);
+	if (!pud_present(*pud_k))
+		return 1;
+
+	pmd = pmd_offset(pud, ea);
+	pmd_k = pmd_offset(pud_k, ea);
+	if (!pmd_present(*pmd_k))
+		return 1;
+
+	/* Some other architectures also set the pmd to synchronize them, but
+	 * since we just synchronized the pgd we do not see how the two could
+	 * differ. In case we are missing something, put a guard here.
+	 */
+	if (pmd_val(*pmd) != pmd_val(*pmd_k))
+		pr_err("%s: pmd not synchronized (0x%lx != 0x%lx)\n",
+		       __func__, pmd_val(*pmd), pmd_val(*pmd_k));
+
+	pte_k = pte_offset_kernel(pmd_k, ea);
+	if (!pte_present(*pte_k)) {
+		pr_err("%s: PTE not present for 0x%llx\n",
+		       __func__, ea);
+		return 1;
+	}
+
+	/* We refill the TLB now to avoid taking another NOMAPPING
+	 * trap.
+	 */
+	kvx_mmu_jtlb_add_entry(addr, pte_k, 0);
+
+	return 0;
+}
+
+void do_page_fault(struct pt_regs *regs, uint64_t es, uint64_t ea)
+{
+	struct mm_struct *mm;
+	struct vm_area_struct *vma;
+	unsigned long flags, cause, vma_mask;
+	int code = SEGV_MAPERR;
+	vm_fault_t fault;
+
+	cause = kvx_sfr_field_val(es, ES, RWX);
+
+	/* We fault-in kernel-space virtual memory on-demand. The
+	 * 'reference' page table is init_mm.pgd.
+	 */
+	if (is_vmalloc_addr((void *) ea) && !user_mode(regs)) {
+		if (handle_vmalloc_fault(ea))
+			goto no_context;
+		return;
+	}
+
+	mm = current->mm;
+
+	/*
+	 * If we're in an interrupt or have no user
+	 * context, we must not take the fault..
+	 */
+	if (unlikely(faulthandler_disabled() || !mm))
+		goto no_context;
+
+	/* By default we retry and the faulting task can be killed */
+	flags = FAULT_FLAG_DEFAULT;
+
+	if (user_mode(regs))
+		flags |= FAULT_FLAG_USER;
+
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, ea);
+
+retry:
+	mmap_read_lock(mm);
+
+	vma = find_vma(mm, ea);
+	if (!vma)
+		goto bad_area;
+	if (likely(vma->vm_start <= ea))
+		goto good_area;
+	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN)))
+		goto bad_area;
+	vma = expand_stack(mm, ea);
+	if (unlikely(!vma))
+		goto bad_area_nosemaphore;
+
+good_area:
+	/* Handle the access type */
+	switch (cause) {
+	case KVX_TRAP_RWX_FETCH:
+		vma_mask = VM_EXEC;
+		break;
+	case KVX_TRAP_RWX_READ:
+		vma_mask = VM_READ;
+		break;
+	case KVX_TRAP_RWX_WRITE:
+		vma_mask = VM_WRITE;
+		flags |= FAULT_FLAG_WRITE;
+		break;
+	/* Atomics are both read/write */
+	case KVX_TRAP_RWX_ATOMIC:
+		vma_mask = VM_WRITE | VM_READ;
+		flags |= FAULT_FLAG_WRITE;
+		break;
+	default:
+		panic("%s: unhandled cause %lu", __func__, cause);
+	}
+
+	if ((vma->vm_flags & vma_mask) != vma_mask) {
+		code = SEGV_ACCERR;
+		goto bad_area;
+	}
+
+	/*
+	 * If for any reason we cannot handle the fault, make sure we exit
+	 * gracefully rather than retrying endlessly with the same result.
+	 */
+	fault = handle_mm_fault(vma, ea, flags, regs);
+
+	/*
+	 * If we need to retry but a fatal signal is pending, handle the
+	 * signal first. We do not need to release the mmap_sem because it
+	 * would already be released in __lock_page_or_retry in mm/filemap.c.
+ */
+	if (fault_signal_pending(fault, regs))
+		return;
+
+	/* The fault is fully completed (including releasing the mmap lock) */
+	if (fault & VM_FAULT_COMPLETED)
+		return;
+
+	if (unlikely(fault & VM_FAULT_ERROR)) {
+		if (fault & VM_FAULT_OOM)
+			goto out_of_memory;
+		else if (fault & VM_FAULT_SIGSEGV)
+			goto bad_area;
+		else if (fault & VM_FAULT_SIGBUS)
+			goto do_sigbus;
+		BUG();
+	}
+
+	if (unlikely((fault & VM_FAULT_RETRY) && (flags & FAULT_FLAG_ALLOW_RETRY))) {
+		flags |= FAULT_FLAG_TRIED;
+		/* No need to up_read(&mm->mmap_sem) as we would
+		 * have already released it in __lock_page_or_retry().
+		 * Look in mm/filemap.c for explanations.
+		 */
+		goto retry;
+	}
+
+	/* Fault errors and the retry case have been handled nicely */
+	mmap_read_unlock(mm);
+	return;
+
+bad_area:
+	mmap_read_unlock(mm);
+bad_area_nosemaphore:
+
+	if (user_mode(regs)) {
+		user_do_sig(regs, SIGSEGV, code, ea);
+		return;
+	}
+
+no_context:
+	/* Are we prepared to handle this kernel fault?
+	 *
+	 * (The kernel has valid exception-points in the source
+	 * when it accesses user-memory. When it fails in one
+	 * of those points, we find it in a table and do a jump
+	 * to some fixup code that loads an appropriate error
+	 * code)
+	 */
+	if (fixup_exception(regs))
+		return;
+
+	/*
+	 * Oops. The kernel tried to access some bad page. We'll have to
+	 * terminate things with extreme prejudice.
+	 */
+	bust_spinlocks(1);
+	if (kvx_sfr_field_val(es, ES, HTC) == KVX_TRAP_PROTECTION)
+		pr_alert(CUT_HERE "Kernel protection trap at virtual address %016llx\n",
+			 ea);
+	else {
+		pr_alert(CUT_HERE "Unable to handle kernel %s at virtual address %016llx\n",
+			 (ea < PAGE_SIZE) ? "NULL pointer dereference" :
+			 "paging request", ea);
+	}
+	die(regs, ea, "Oops");
+	bust_spinlocks(0);
+	make_task_dead(SIGKILL);
+
+out_of_memory:
+	/*
+	 * We ran out of memory; call the OOM killer and return to userspace
+	 * (which will retry the fault, or kill us if we got oom-killed).
+	 */
+	mmap_read_unlock(mm);
+	if (!user_mode(regs))
+		goto no_context;
+	pagefault_out_of_memory();
+	return;
+
+do_sigbus:
+	mmap_read_unlock(mm);
+	/* Kernel mode? Handle exceptions or die */
+	if (!user_mode(regs))
+		goto no_context;
+
+	user_do_sig(regs, SIGBUS, BUS_ADRERR, ea);
+
+	return;
+
+}
+
+void do_writetoclean(struct pt_regs *regs, uint64_t es, uint64_t ea)
+{
+	panic("%s not implemented", __func__);
+}
diff --git a/arch/kvx/mm/init.c b/arch/kvx/mm/init.c
new file mode 100644
index 0000000000000..c68468b8957ea
--- /dev/null
+++ b/arch/kvx/mm/init.c
@@ -0,0 +1,297 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Guillaume Thouvenin
+ */
+
+/* The memblock header depends on types.h but does not include it! */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * On kvx, the memory map aliases the first 2G of DDR.
+ * The full contiguous DDR is located at [4G - 68G].
+ * However, to make this DDR accessible in 32-bit mode, its first 2G are
+ * mirrored from 4G down to 2G.
+ * These first 2G are accessible from all DMAs (including 32-bit ones).
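+ *
+ * For example (a sketch only; the exact cell layout depends on the bus
+ * node's #address-cells/#size-cells): a 32-bit DMA master can reach the
+ * DDR bytes at physical 0x100000000 + X through the alias address
+ * 0x80000000 + X, which a device tree could express as:
+ *
+ *	dma-ranges = <0x80000000  0x1 0x00000000  0x80000000>;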
+ *
+ * Hence, the memory map is the following:
+ *
+ * (68G) 0x1100000000-> +-------------+
+ *                      |             |
+ *                  66G |(ZONE_NORMAL)|
+ *                      |             |
+ * (6G)  0x180000000->  +-------------+
+ *                      |             |
+ *                   2G |(ZONE_DMA32) |
+ *                      |             |
+ * (4G)  0x100000000->  +-------------+ +--+
+ *                      |             |    |
+ *                   2G |   (Alias)   |    | 2G Alias
+ *                      |             |    |
+ * (2G)  0x80000000->   +-------------+ <--+
+ *
+ * The 64-bit -> 32-bit translation can then be done using the dma-ranges
+ * property in device trees.
+ */
+
+#define DDR_64BIT_START		(4ULL * SZ_1G)
+#define DDR_32BIT_ALIAS_SIZE	(2ULL * SZ_1G)
+
+#define MAX_DMA32_PFN	PHYS_PFN(DDR_64BIT_START + DDR_32BIT_ALIAS_SIZE)
+
+#define SMEM_GLOBAL_START	(16 * 1024 * 1024)
+#define SMEM_GLOBAL_END		(20 * 1024 * 1024)
+#define MAX_SMEM_MEMBLOCKS	(32)
+
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+
+/*
+ * empty_zero_page is a special page that is used for zero-initialized data
+ * and COW.
+ */
+struct page *empty_zero_page;
+EXPORT_SYMBOL(empty_zero_page);
+
+extern char _start[];
+extern char __kernel_smem_code_start[];
+extern char __kernel_smem_code_end[];
+
+static void __init zone_sizes_init(void)
+{
+	unsigned long zones_size[MAX_NR_ZONES];
+
+	memset(zones_size, 0, sizeof(zones_size));
+
+	zones_size[ZONE_DMA32] = min(MAX_DMA32_PFN, max_low_pfn);
+	zones_size[ZONE_NORMAL] = max_low_pfn;
+
+	free_area_init(zones_size);
+}
+
+#ifdef CONFIG_BLK_DEV_INITRD
+static void __init setup_initrd(void)
+{
+	u64 base = phys_initrd_start;
+	u64 end = phys_initrd_start + phys_initrd_size;
+
+	if (phys_initrd_size == 0) {
+		pr_info("initrd not found or empty");
+		return;
+	}
+
+	if (base < memblock_start_of_DRAM() || end > memblock_end_of_DRAM()) {
+		pr_err("initrd not in accessible memory, disabling it");
+		phys_initrd_size = 0;
+		return;
+	}
+
+	pr_info("initrd: 0x%llx - 0x%llx\n", base, end);
+
+	memblock_reserve(phys_initrd_start, phys_initrd_size);
+
+	/* the generic initrd code expects virtual addresses */
+	initrd_start = (unsigned long) __va(base);
+	initrd_end = initrd_start + phys_initrd_size;
+}
+#endif
+
+static phys_addr_t memory_limit = PHYS_ADDR_MAX;
+
+static int __init early_mem(char *p)
+{
+	if (!p)
+		return 1;
+
+	memory_limit = memparse(p, &p) & PAGE_MASK;
+	pr_notice("Memory limited to %lldMB\n", memory_limit >> 20);
+
+	return 0;
+}
+early_param("mem", early_mem);
+
+static void __init setup_bootmem(void)
+{
+	phys_addr_t start, end = 0;
+	u64 i;
+	struct memblock_region *region = NULL;
+	struct memblock_region to_be_reserved[MAX_SMEM_MEMBLOCKS] = {};
+	struct memblock_region *reserve = to_be_reserved;
+
+	init_mm.start_code = (unsigned long)_stext;
+	init_mm.end_code = (unsigned long)_etext;
+	init_mm.end_data = (unsigned long)_edata;
+	init_mm.brk = (unsigned long)_end;
+
+	memblock_reserve(__pa(_start), __pa(_end) - __pa(_start));
+
+	for_each_mem_range(i, &start, &end) {
+		pr_info("%15s: memory : 0x%lx - 0x%lx\n", __func__,
+			(unsigned long)start,
+			(unsigned long)end);
+	}
+
+	/* min_low_pfn is the lowest PFN available in the system */
+	min_low_pfn = PFN_UP(memblock_start_of_DRAM());
+
+	/* max_low_pfn indicates the end of the NORMAL zone */
+	max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());
+
+	/* Set the maximum number of pages in the system */
+	set_max_mapnr(max_low_pfn - min_low_pfn);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+	setup_initrd();
+#endif
+
+	if (memory_limit != PHYS_ADDR_MAX)
+		memblock_mem_limit_remove_map(memory_limit);
+
+	/* Don't reserve the device tree if it's built-in */
+	if (!is_kernel_rodata((unsigned long) initial_boot_params))
+		early_init_fdt_reserve_self();
+	early_init_fdt_scan_reserved_mem();
+
+	start = SMEM_GLOBAL_START;
+	/*
+	 * Make sure all of the smem is reserved.
+	 * It seems that it is not OK to call memblock_reserve() from within
+	 * the for_each_reserved_mem_region() loop, so store the "to be
+	 * reserved" regions in the to_be_reserved array first.
+	 */
+	for_each_reserved_mem_region(region) {
+		if (region->base >= SMEM_GLOBAL_END)
+			break; /* we have finished scanning the smem area */
+
+		if (region->base > SMEM_GLOBAL_START && region->base != start) {
+			if (reserve > &to_be_reserved[MAX_SMEM_MEMBLOCKS - 1]) {
+				pr_crit("Impossible to mark the whole smem as reserved memory.\n");
+				pr_crit("It would take more than the maximum of %d memblocks.\n",
+					MAX_SMEM_MEMBLOCKS);
+				break;
+			}
+			reserve->base = start;
+			reserve->size = region->base - start;
+			reserve++;
+		}
+
+		start = region->base + region->size;
+	}
+
+	if (start < SMEM_GLOBAL_END)
+		memblock_reserve(start, SMEM_GLOBAL_END - start);
+
+	/* Now reserve the smem memblocks for real */
+	for (region = to_be_reserved; region->size != 0; region++)
+		memblock_reserve(region->base, region->size);
+
+	memblock_allow_resize();
+	memblock_dump_all();
+}
+
+static pmd_t fixmap_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
+static pte_t fixmap_pte[PTRS_PER_PTE] __page_aligned_bss __maybe_unused;
+
+void __init early_fixmap_init(void)
+{
+	unsigned long vaddr;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	/*
+	 * Fixed mappings:
+	 */
+	vaddr = __fix_to_virt(__end_of_fixed_addresses - 1);
+	pgd = pgd_offset_pgd(swapper_pg_dir, vaddr);
+	set_pgd(pgd, __pgd(__pa_symbol(fixmap_pmd)));
+
+	p4d = p4d_offset(pgd, vaddr);
+	pud = pud_offset(p4d, vaddr);
+	pmd = pmd_offset(pud, vaddr);
+	set_pmd(pmd, __pmd(__pa_symbol(fixmap_pte)));
+}
+
+void __init setup_arch_memory(void)
+{
+	setup_bootmem();
+	sparse_init();
+	zone_sizes_init();
+}
+
+void __init mem_init(void)
+{
+	memblock_free_all();
+
+	/* allocate the zero page */
+	empty_zero_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!empty_zero_page)
+		panic("Failed to allocate the empty_zero_page");
+}
+
+void free_initmem(void)
+{
+#ifdef CONFIG_POISON_INITMEM
+	free_initmem_default(0x0);
+#else
+	free_initmem_default(-1);
+#endif
+}
+
+void __set_fixmap(enum fixed_addresses idx,
+		  phys_addr_t phys, pgprot_t flags)
+{
+	unsigned long addr = __fix_to_virt(idx);
+	pte_t *pte;
+
+	BUG_ON(idx >= __end_of_fixed_addresses);
+
+	pte = &fixmap_pte[pte_index(addr)];
+
+	if (pgprot_val(flags)) {
+		set_pte(pte, pfn_pte(phys_to_pfn(phys), flags));
+	} else {
+		/* Remove the fixmap */
+		pte_clear(&init_mm, addr, pte);
+	}
+	local_flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
}
+
+static const pgprot_t protection_map[16] = {
+	[VM_NONE]					= PAGE_NONE,
+	[VM_READ]					= PAGE_READ,
+	[VM_WRITE]					= PAGE_READ,
+	[VM_WRITE | VM_READ]				= PAGE_READ,
+	[VM_EXEC]					= PAGE_READ_EXEC,
+	[VM_EXEC | VM_READ]				= PAGE_READ_EXEC,
+	[VM_EXEC | VM_WRITE]				= PAGE_READ_EXEC,
+	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_READ_EXEC,
+	[VM_SHARED]					= PAGE_NONE,
+	[VM_SHARED | VM_READ]				= PAGE_READ,
+	[VM_SHARED | VM_WRITE]				= PAGE_READ_WRITE,
+	[VM_SHARED | VM_WRITE | VM_READ]		= PAGE_READ_WRITE,
+	[VM_SHARED | VM_EXEC]				= PAGE_READ_EXEC,
+	[VM_SHARED | VM_EXEC | VM_READ]			= PAGE_READ_EXEC,
+	[VM_SHARED | VM_EXEC | VM_WRITE]		= PAGE_READ_WRITE_EXEC,
+	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= PAGE_READ_WRITE_EXEC
+};
+DECLARE_VM_GET_PAGE_PROT
diff --git a/arch/kvx/mm/mmap.c b/arch/kvx/mm/mmap.c
new file mode 100644
index 0000000000000..a2225db644384
--- /dev/null
+++ b/arch/kvx/mm/mmap.c
@@ -0,0 +1,31 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * derived from arch/arm64/mm/mmap.c
+ *
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifdef CONFIG_STRICT_DEVMEM
+
+#include
+#include
+
+#include
+
+/*
+ * devmem_is_allowed() checks to see if /dev/mem access to a certain address
+ * is valid. The argument is a physical page number. We mimic x86 here by
+ * disallowing access to system RAM as well as device-exclusive MMIO regions.
+ * This effectively disables read()/write() on /dev/mem.
+ */
+int devmem_is_allowed(unsigned long pfn)
+{
+	if (iomem_is_exclusive(pfn << PAGE_SHIFT))
+		return 0;
+	if (!page_is_ram(pfn))
+		return 1;
+	return 0;
+}
+
+#endif
diff --git a/arch/kvx/mm/mmu.c b/arch/kvx/mm/mmu.c
new file mode 100644
index 0000000000000..9cb11bd2dfdf7
--- /dev/null
+++ b/arch/kvx/mm/mmu.c
@@ -0,0 +1,202 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Guillaume Thouvenin
+ *            Vincent Chardon
+ *            Jules Maselbas
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#define DUMP_LTLB 0
+#define DUMP_JTLB 1
+
+DEFINE_PER_CPU_ALIGNED(uint8_t[MMU_JTLB_SETS], jtlb_current_set_way);
+static struct kvx_tlb_format ltlb_entries[MMU_LTLB_WAYS];
+static unsigned long ltlb_entries_bmp;
+
+static int kvx_mmu_ltlb_overlaps(struct kvx_tlb_format tlbe)
+{
+	int bit = LTLB_ENTRY_FIXED_COUNT;
+
+	for_each_set_bit_from(bit, &ltlb_entries_bmp, MMU_LTLB_WAYS) {
+		if (tlb_entry_overlaps(tlbe, ltlb_entries[bit]))
+			return 1;
+	}
+
+	return 0;
+}
+
+/**
+ * kvx_mmu_ltlb_add_entry - Add a kernel entry in the LTLB
+ *
+ * In order to lock some entries in the LTLB so that they are always
+ * mapped, this function can be called with a physical address, a virtual
+ * address and protection attributes to add an entry into the LTLB. This is
+ * mainly for performance, since there will not be any NOMAPPING traps for
+ * these pages.
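+ *
+ * A typical call could look like (illustrative values only):
+ *
+ *	kvx_mmu_ltlb_add_entry(vaddr, paddr, PAGE_KERNEL_DEVICE, TLB_PS_2M);
+ *
+ * with both addresses 2M-aligned, as enforced by the BUG_ON() below.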
+ *
+ * @vaddr: Virtual address for the entry (must be aligned according to tlb_ps)
+ * @paddr: Physical address for the entry (must be aligned according to tlb_ps)
+ * @flags: Protection attributes
+ * @tlb_ps: Page size attribute for TLB (TLB_PS_*)
+ */
+void kvx_mmu_ltlb_add_entry(unsigned long vaddr, phys_addr_t paddr,
+			    pgprot_t flags, unsigned long tlb_ps)
+{
+	unsigned int cp;
+	unsigned int idx;
+	unsigned long irqflags;
+	struct kvx_tlb_format tlbe;
+	u64 page_size = BIT(get_page_size_shift(tlb_ps));
+
+	BUG_ON(!IS_ALIGNED(vaddr, page_size) || !IS_ALIGNED(paddr, page_size));
+
+	cp = pgprot_cache_policy(pgprot_val(flags));
+
+	tlbe = tlb_mk_entry(
+		(void *) paddr,
+		(void *) vaddr,
+		tlb_ps,
+		TLB_G_GLOBAL,
+		TLB_PA_NA_RW,
+		cp,
+		0,
+		TLB_ES_A_MODIFIED);
+
+	local_irq_save(irqflags);
+
+	if (IS_ENABLED(CONFIG_KVX_DEBUG_TLB_WRITE) &&
+	    kvx_mmu_ltlb_overlaps(tlbe))
+		panic("VA %lx overlaps with an existing LTLB mapping", vaddr);
+
+	idx = find_next_zero_bit(&ltlb_entries_bmp, MMU_LTLB_WAYS,
+				 LTLB_ENTRY_FIXED_COUNT);
+	/* This should never happen */
+	BUG_ON(idx >= MMU_LTLB_WAYS);
+	__set_bit(idx, &ltlb_entries_bmp);
+	ltlb_entries[idx] = tlbe;
+	kvx_mmu_add_entry(MMC_SB_LTLB, idx, tlbe);
+
+	if (kvx_mmc_error(kvx_sfr_get(MMC)))
+		panic("Failed to write entry to the LTLB");
+
+	local_irq_restore(irqflags);
+}
+
+void kvx_mmu_ltlb_remove_entry(unsigned long vaddr)
+{
+	int ret, bit = LTLB_ENTRY_FIXED_COUNT;
+	struct kvx_tlb_format tlbe;
+
+	for_each_set_bit_from(bit, &ltlb_entries_bmp, MMU_LTLB_WAYS) {
+		tlbe = ltlb_entries[bit];
+		if (tlb_entry_match_addr(tlbe, vaddr)) {
+			__clear_bit(bit, &ltlb_entries_bmp);
+			break;
+		}
+	}
+
+	if (bit == MMU_LTLB_WAYS)
+		panic("Trying to remove non-existent LTLB entry for addr %lx\n",
+		      vaddr);
+
+	ret = clear_ltlb_entry(vaddr);
+	if (ret)
+		panic("Failed to remove LTLB entry for addr %lx\n", vaddr);
+}
+
+/**
+ * kvx_mmu_jtlb_add_entry - Add an entry into the JTLB
+ *
+ * The JTLB is used both for kernel and user entries.
+ *
+ * @address: Virtual address for the entry (must be aligned according to tlb_ps)
+ * @ptep: pte entry pointer
+ * @asn: ASN (if the pte entry is not global)
+ */
+void kvx_mmu_jtlb_add_entry(unsigned long address, pte_t *ptep,
+			    unsigned int asn)
+{
+	unsigned int shifted_addr, set, way;
+	unsigned long flags, pte_val;
+	struct kvx_tlb_format tlbe;
+	unsigned int pa, cp, ps;
+	phys_addr_t pfn;
+
+	pte_val = pte_val(*ptep);
+
+	pfn = (phys_addr_t)pte_pfn(*ptep);
+
+	asn &= MM_CTXT_ASN_MASK;
+
+	/* Set the page as accessed */
+	pte_val(*ptep) |= _PAGE_ACCESSED;
+
+	BUILD_BUG_ON(KVX_PAGE_SZ_SHIFT != KVX_SFR_TEL_PS_SHIFT);
+
+	ps = (pte_val & KVX_PAGE_SZ_MASK) >> KVX_PAGE_SZ_SHIFT;
+	pa = get_page_access_perms(KVX_ACCESS_PERMS_INDEX(pte_val));
+	cp = pgprot_cache_policy(pte_val);
+
+	tlbe = tlb_mk_entry(
+		(void *)pfn_to_phys(pfn),
+		(void *)address,
+		ps,
+		(pte_val & _PAGE_GLOBAL) ? TLB_G_GLOBAL : TLB_G_USE_ASN,
+		pa,
+		cp,
+		asn,
+		TLB_ES_A_MODIFIED);
+
+	shifted_addr = address >> get_page_size_shift(ps);
+	set = shifted_addr & MMU_JTLB_SET_MASK;
+
+	local_irq_save(flags);
+
+	if (IS_ENABLED(CONFIG_KVX_DEBUG_TLB_WRITE) &&
+	    kvx_mmu_ltlb_overlaps(tlbe))
+		panic("VA %lx overlaps with an existing LTLB mapping", address);
+
+	way = get_cpu_var(jtlb_current_set_way)[set]++;
+	put_cpu_var(jtlb_current_set_way);
+
+	way &= MMU_JTLB_WAY_MASK;
+
+	kvx_mmu_add_entry(MMC_SB_JTLB, way, tlbe);
+
+	if (IS_ENABLED(CONFIG_KVX_DEBUG_TLB_WRITE) &&
+	    kvx_mmc_error(kvx_sfr_get(MMC)))
+		panic("Failed to write entry to the JTLB (in update_mmu_cache)");
+
+	local_irq_restore(flags);
+}
+
+void __init kvx_mmu_early_setup(void)
+{
+	int bit;
+	struct kvx_tlb_format tlbe;
+
+	kvx_mmu_remove_ltlb_entry(LTLB_ENTRY_EARLY_SMEM);
+
+	if (raw_smp_processor_id() != 0) {
+		/* Apply existing ltlb entries starting from the first free one */
+		bit = LTLB_ENTRY_FIXED_COUNT;
+		for_each_set_bit_from(bit, &ltlb_entries_bmp, MMU_LTLB_WAYS) {
+			tlbe = ltlb_entries[bit];
+			kvx_mmu_add_entry(MMC_SB_LTLB, bit, tlbe);
+		}
+	}
+}
diff --git a/arch/kvx/mm/mmu_stats.c b/arch/kvx/mm/mmu_stats.c
new file mode 100644
index 0000000000000..e70b7e2dcab1a
--- /dev/null
+++ b/arch/kvx/mm/mmu_stats.c
@@ -0,0 +1,94 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#include
+#include
+
+#include
+
+static struct dentry *mmu_stats_debugfs;
+
+static const char *mmu_refill_types_name[MMU_REFILL_TYPE_COUNT] = {
+	[MMU_REFILL_TYPE_USER] = "User",
+	[MMU_REFILL_TYPE_KERNEL] = "Kernel",
+	[MMU_REFILL_TYPE_KERNEL_DIRECT] = "Kernel Direct"
+};
+
+DEFINE_PER_CPU(struct mmu_stats, mmu_stats);
+
+static int mmu_stats_show(struct seq_file *m, void *v)
+{
+	int cpu, type;
+	unsigned long avg = 0, total_refill, efficiency, total_cycles;
+	struct mmu_stats *stats;
+	struct mmu_refill_stats *ref_stat;
+
+	total_cycles = get_cycles();
+	for_each_present_cpu(cpu) {
+		stats = &per_cpu(mmu_stats, cpu);
+		total_refill = 0;
+
+		seq_printf(m, " - CPU %d\n", cpu);
+		for (type = 0; type < MMU_REFILL_TYPE_COUNT; type++) {
+			ref_stat = &stats->refill[type];
+			total_refill += ref_stat->count;
+			if (ref_stat->count)
+				avg = ref_stat->total / ref_stat->count;
+			else
+				avg = 0;
+
+			seq_printf(m,
+				   "  - %s refill stats:\n"
+				   "    - count: %lu\n"
+				   "    - min: %lu\n"
+				   "    - avg: %lu\n"
+				   "    - max: %lu\n",
+				   mmu_refill_types_name[type],
+				   ref_stat->count,
+				   ref_stat->min,
+				   avg,
+				   ref_stat->max
+				   );
+		}
+
+		if (total_refill)
+			avg = stats->cycles_between_refill / total_refill;
+		else
+			avg = 0;
+
+		seq_printf(m, "  - Average cycles between refill: %lu\n", avg);
+		seq_printf(m, "  - tlb_flush_all calls: %lu\n",
+			   stats->tlb_flush_all);
+		efficiency = stats->cycles_between_refill * 100 /
+			     stats->last_refill;
+		seq_printf(m, "  - Efficiency: %lu%%\n", efficiency);
+	}
+
+	return 0;
+}
+
+static int mmu_stats_open(struct inode *inode, struct file *filp)
+{
+	return single_open(filp, mmu_stats_show, NULL);
+}
+
+static const struct file_operations mmu_stats_fops = {
+	.open = mmu_stats_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+static int __init mmu_stats_debugfs_init(void)
+{
+	mmu_stats_debugfs = debugfs_create_dir("kvx_mmu_debug", NULL);
+
+	debugfs_create_file("mmu_stats", 0444, mmu_stats_debugfs, NULL,
+			    &mmu_stats_fops);
+
+	return 0;
+}
+subsys_initcall(mmu_stats_debugfs_init);
diff --git a/arch/kvx/mm/tlb.c b/arch/kvx/mm/tlb.c
a/arch/kvx/mm/tlb.c b/arch/kvx/mm/tlb.c
new file mode 100644
index 0000000000000..a606dc89bba4e
--- /dev/null
+++ b/arch/kvx/mm/tlb.c
@@ -0,0 +1,445 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Guillaume Thouvenin
+ */
+
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * When in kernel, use dummy ASN 42 to be able to catch any problem easily if
+ * the ASN is not restored properly.
+ */
+#define KERNEL_DUMMY_ASN 42
+
+/* Threshold of page count above which we will regenerate a new ASN */
+#define ASN_FLUSH_PAGE_THRESHOLD (MMU_JTLB_ENTRIES)
+
+/* Threshold of page count above which we will flush the whole JTLB */
+#define FLUSH_ALL_PAGE_THRESHOLD (MMU_JTLB_ENTRIES)
+
+DEFINE_PER_CPU(unsigned long, kvx_asn_cache) = MM_CTXT_FIRST_CYCLE;
+
+#ifdef CONFIG_KVX_DEBUG_TLB_ACCESS
+
+static DEFINE_PER_CPU_ALIGNED(struct kvx_tlb_access_t[KVX_TLB_ACCESS_SIZE],
+			      kvx_tlb_access_rb);
+/* Lower bits hold the index and upper ones hold the number of wraps */
+static DEFINE_PER_CPU(unsigned int, kvx_tlb_access_idx);
+
+void kvx_update_tlb_access(int type)
+{
+	unsigned int *idx_ptr = &get_cpu_var(kvx_tlb_access_idx);
+	unsigned int idx;
+	struct kvx_tlb_access_t *tab = get_cpu_var(kvx_tlb_access_rb);
+
+	idx = KVX_TLB_ACCESS_GET_IDX(*idx_ptr);
+
+	kvx_mmu_get_tlb_entry(tab[idx].entry);
+	tab[idx].mmc_val = kvx_sfr_get(MMC);
+	tab[idx].type = type;
+
+	(*idx_ptr)++;
+	put_cpu_var(kvx_tlb_access_rb);
+	put_cpu_var(kvx_tlb_access_idx);
+}
+
+#endif
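
The counter layout described in that comment (slot index in the low bits,
wrap count above them) can be checked with a self-contained sketch; the
KVX_TLB_ACCESS_SIZE value of 64 and the exact GET_IDX/GET_WRAPS definitions
are assumptions here, only the power-of-two layout comes from the patch:

#include <stdio.h>

#define KVX_TLB_ACCESS_SIZE 64	/* assumed power-of-two ring size */

/* Lower bits: slot index. Upper bits: how many times the ring wrapped. */
#define KVX_TLB_ACCESS_GET_IDX(v)   ((v) & (KVX_TLB_ACCESS_SIZE - 1))
#define KVX_TLB_ACCESS_GET_WRAPS(v) ((v) / KVX_TLB_ACCESS_SIZE)

int main(void)
{
	unsigned int counter = 0;
	int i;

	/* Simulate 150 logged TLB accesses */
	for (i = 0; i < 150; i++)
		counter++;

	printf("slot=%u wraps=%u\n",
	       KVX_TLB_ACCESS_GET_IDX(counter),
	       KVX_TLB_ACCESS_GET_WRAPS(counter));
	/* prints: slot=22 wraps=2 */
	return 0;
}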
+
+/**
+ * clear_tlb_entry() - clear an entry in the TLB if it exists
+ * @addr: the address used to set TEH.PN
+ * @global: is the page global or not
+ * @asn: ASN used if the page is not global
+ * @tlb_type: tlb type (MMC_SB_LTLB or MMC_SB_JTLB)
+ *
+ * Preemption must be disabled when calling this function. There is no need to
+ * invalidate the micro TLB because it is invalidated when we write the TLB.
+ *
+ * Return: 0 if the TLB entry was found and deleted properly, -ENOENT if not
+ * found, -EINVAL if found but in the wrong TLB.
+ */
+static int clear_tlb_entry(unsigned long addr,
+			   unsigned int global,
+			   unsigned int asn,
+			   unsigned int tlb_type)
+{
+	struct kvx_tlb_format entry;
+	unsigned long mmc_val;
+	int saved_asn, ret = 0;
+
+	/* Sanitize ASN */
+	asn &= MM_CTXT_ASN_MASK;
+
+	/* Before probing we need to save the current ASN */
+	mmc_val = kvx_sfr_get(MMC);
+	saved_asn = kvx_sfr_field_val(mmc_val, MMC, ASN);
+	kvx_sfr_set_field(MMC, ASN, asn);
+
+	/* Probe is based on PN and ASN. So ES can be anything */
+	entry = tlb_mk_entry(0, (void *)addr, 0, global, 0, 0, 0,
+			     TLB_ES_INVALID);
+	kvx_mmu_set_tlb_entry(entry);
+
+	kvx_mmu_probetlb();
+
+	mmc_val = kvx_sfr_get(MMC);
+
+	if (kvx_mmc_error(mmc_val)) {
+		if (kvx_mmc_parity(mmc_val)) {
+			/*
+			 * This should never happen unless you are bombarded
+			 * by streams of charged particles. If it happens,
+			 * just flush the JTLB and continue (but check your
+			 * environment: you are probably not in a safe place).
+			 */
+			WARN(1, "%s: parity error during lookup (addr 0x%lx, asn %u). JTLB will be flushed\n",
+			     __func__, addr, asn);
+			kvx_sfr_set_field(MMC, PAR, 0);
+			local_flush_tlb_all();
+		}
+
+		/*
+		 * else there is no matching entry, so just clear the error
+		 * and restore the ASN before returning.
+		 */
+		kvx_sfr_set_field(MMC, E, 0);
+		ret = -ENOENT;
+		goto restore_asn;
+	}
+
+	/* We surely don't want to flush another TLB type or we are fried */
+	if (kvx_mmc_sb(mmc_val) != tlb_type) {
+		ret = -EINVAL;
+		goto restore_asn;
+	}
+
+	/*
+	 * At this point the probe found an entry. TEL and TEH have correct
+	 * values so we just need to set the entry status to invalid to clear
+	 * the entry.
+	 */
+	kvx_sfr_set_field(TEL, ES, TLB_ES_INVALID);
+
+	kvx_mmu_writetlb();
+
+	/* Need to read the MMC SFR again */
+	mmc_val = kvx_sfr_get(MMC);
+	if (kvx_mmc_error(mmc_val))
+		panic("%s: Failed to clear entry (addr 0x%lx, asn %u)",
+		      __func__, addr, asn);
+	else
+		pr_debug("%s: Entry (addr 0x%lx, asn %u) cleared\n",
+			 __func__, addr, asn);
+
+restore_asn:
+	kvx_sfr_set_field(MMC, ASN, saved_asn);
+
+	return ret;
+}
+
+static void clear_jtlb_entry(unsigned long addr,
+			     unsigned int global,
+			     unsigned int asn)
+{
+	clear_tlb_entry(addr, global, asn, MMC_SB_JTLB);
+}
+
+/**
+ * clear_ltlb_entry() - Remove an LTLB entry
+ * @vaddr: Virtual address to be matched against LTLB entries
+ *
+ * Return: Same value as clear_tlb_entry
+ */
+int clear_ltlb_entry(unsigned long vaddr)
+{
+	return clear_tlb_entry(vaddr, TLB_G_GLOBAL, KERNEL_DUMMY_ASN,
+			       MMC_SB_LTLB);
+}
+
+/*
+ * If mm is the current one, we just need to assign the current task a new
+ * ASN. By doing this, all previous TLB entries tagged with the former ASN
+ * are invalidated at once. If mm is not the current one, we invalidate its
+ * context; a new one will be generated when that mm is next swapped in.
+ */
+void local_flush_tlb_mm(struct mm_struct *mm)
+{
+	int cpu = smp_processor_id();
+
+	destroy_context(mm);
+	if (current->active_mm == mm)
+		activate_context(mm, cpu);
+}
+
+void local_flush_tlb_page(struct vm_area_struct *vma,
+			  unsigned long addr)
+{
+	int cpu = smp_processor_id();
+	unsigned int current_asn;
+	struct mm_struct *mm;
+	unsigned long flags;
+
+	local_irq_save(flags);
+
+	mm = vma->vm_mm;
+	current_asn = mm_asn(mm, cpu);
+
+	/* If mm has no context there is nothing to do */
+	if (current_asn != MM_CTXT_NO_ASN)
+		clear_jtlb_entry(addr, TLB_G_USE_ASN, current_asn);
+
+	local_irq_restore(flags);
+}
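
Note that local_flush_tlb_mm() never touches the JTLB directly; it relies
entirely on ASN tagging. A deliberately simplified model of the idea (all
names here are made up for illustration; the real logic lives in the
destroy_context()/activate_context() pair it calls):

/*
 * Schematic model of ASN-based flushing: entries tagged with an old ASN
 * become unreachable once the mm is handed a fresh ASN, so nothing needs
 * to be erased from the TLB itself.
 */
struct toy_mm {
	unsigned long asn;
};

static unsigned long toy_asn_cache = 1;	/* per-CPU in the real port */

static void toy_flush_mm(struct toy_mm *mm)
{
	/* Stale entries still sit in the JTLB but will never match again */
	mm->asn = ++toy_asn_cache;
}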
+
+void local_flush_tlb_all(void)
+{
+	struct kvx_tlb_format tlbe = KVX_EMPTY_TLB_ENTRY;
+	int set, way;
+	unsigned long flags;
+#ifdef CONFIG_KVX_MMU_STATS
+	struct mmu_stats *stats = &per_cpu(mmu_stats, smp_processor_id());
+
+	stats->tlb_flush_all++;
+#endif
+
+	local_irq_save(flags);
+
+	/* Select the JTLB and prepare TEL (constant) */
+	kvx_sfr_set(TEL, (uint64_t) tlbe.tel_val);
+	kvx_sfr_set_field(MMC, SB, MMC_SB_JTLB);
+
+	for (set = 0; set < MMU_JTLB_SETS; set++) {
+		tlbe.teh.pn = set;
+		for (way = 0; way < MMU_JTLB_WAYS; way++) {
+			/*
+			 * The set is selected automatically according to
+			 * the virtual address. With 4K pages the set is
+			 * the value of the 6 least significant bits of the
+			 * page number.
+			 */
+			kvx_sfr_set(TEH, (uint64_t) tlbe.teh_val);
+			kvx_sfr_set_field(MMC, SW, way);
+			kvx_mmu_writetlb();
+
+			if (kvx_mmc_error(kvx_sfr_get(MMC)))
+				panic("Failed to initialize JTLB[s:%02d w:%d]",
+				      set, way);
+		}
+	}
+
+	local_irq_restore(flags);
+}
+
+void local_flush_tlb_range(struct vm_area_struct *vma,
+			   unsigned long start,
+			   unsigned long end)
+{
+	const unsigned int cpu = smp_processor_id();
+	unsigned long flags;
+	unsigned int current_asn;
+	unsigned long pages = (end - start) >> PAGE_SHIFT;
+
+	if (pages > ASN_FLUSH_PAGE_THRESHOLD) {
+		local_flush_tlb_mm(vma->vm_mm);
+		return;
+	}
+
+	start &= PAGE_MASK;
+
+	local_irq_save(flags);
+
+	current_asn = mm_asn(vma->vm_mm, cpu);
+	if (current_asn != MM_CTXT_NO_ASN) {
+		while (start < end) {
+			clear_jtlb_entry(start, TLB_G_USE_ASN, current_asn);
+			start += PAGE_SIZE;
+		}
+	}
+
+	local_irq_restore(flags);
+}
+
+/**
+ * local_flush_tlb_kernel_range() - flush kernel TLB entries
+ * @start: start kernel virtual address
+ * @end: end kernel virtual address
+ *
+ * Return: nothing
+ */
+void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	unsigned long flags;
+	unsigned long pages = (end - start) >> PAGE_SHIFT;
+
+	if (pages > FLUSH_ALL_PAGE_THRESHOLD) {
+		local_flush_tlb_all();
+		return;
+	}
+
+	start &= PAGE_MASK;
+
+	local_irq_save(flags);
+
+	while (start < end) {
+		clear_jtlb_entry(start, TLB_G_GLOBAL, KERNEL_DUMMY_ASN);
+		start += PAGE_SIZE;
+	}
+
+	local_irq_restore(flags);
+}
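
To make the set-selection comment in local_flush_tlb_all() concrete, here is
a stand-alone computation of the JTLB set for a 4K page; the 128-set, 4-way
geometry is taken from the refill comment in entry.S later in this series:

#include <stdio.h>

#define TOY_PAGE_SHIFT    12	/* 4K pages */
#define TOY_JTLB_SETS     128	/* "128 sets, 4-way" per the refill comment */
#define TOY_JTLB_SET_MASK (TOY_JTLB_SETS - 1)

/* Set index = low bits of the page number, as in kvx_mmu_jtlb_add_entry() */
static unsigned int jtlb_set(unsigned long vaddr)
{
	return (vaddr >> TOY_PAGE_SHIFT) & TOY_JTLB_SET_MASK;
}

int main(void)
{
	/* Page number 0x7f0001233 -> set 51 (0x33) */
	printf("%u\n", jtlb_set(0x7f0001233000UL));
	return 0;
}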
+
+void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+			  pmd_t *pmd)
+{
+	pte_t pte = __pte(pmd_val(*pmd));
+
+	/* THP PMD accessors are implemented in terms of PTE */
+	update_mmu_cache(vma, addr, &pte);
+}
+
+void update_mmu_cache(struct vm_area_struct *vma,
+		      unsigned long address, pte_t *ptep)
+{
+	struct mm_struct *mm;
+	unsigned long asn;
+	int cpu = smp_processor_id();
+
+	if (unlikely(ptep == NULL))
+		panic("pte should not be NULL\n");
+
+	/* Flush any previous TLB entries */
+	flush_tlb_page(vma, address);
+
+	/*
+	 * No need to add the TLB entry until the process that owns the
+	 * memory is running.
+	 */
+	mm = current->active_mm;
+	if (vma && (mm != vma->vm_mm))
+		return;
+
+	/*
+	 * Get the ASN:
+	 * the ASN can be absent if the address belongs to kernel space. In
+	 * fact, since kernel pages are global, the ASN is ignored there and
+	 * can be equal to any value.
+	 */
+	asn = mm_asn(mm, cpu);
+
+#ifdef CONFIG_KVX_DEBUG_ASN
+	/*
+	 * For addresses that belong to user space, the value of the ASN
+	 * must match mmc.asn and be non-zero.
+	 */
+	if (address < PAGE_OFFSET) {
+		unsigned int mmc_asn = kvx_mmc_asn(kvx_sfr_get(MMC));
+
+		if (asn == MM_CTXT_NO_ASN)
+			panic("%s: ASN [%lu] is not properly set for address 0x%lx on CPU %d\n",
+			      __func__, asn, address, cpu);
+
+		if ((asn & MM_CTXT_ASN_MASK) != mmc_asn)
+			panic("%s: ASN not synchronized with MMC: asn:%lu != mmc.asn:%u\n",
+			      __func__, (asn & MM_CTXT_ASN_MASK), mmc_asn);
+	}
+#endif
+
+	kvx_mmu_jtlb_add_entry(address, ptep, asn);
+}
+
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+			    unsigned long addr, pte_t *ptep, unsigned int nr)
+{
+	for (;;) {
+		update_mmu_cache(vma, addr, ptep);
+		if (--nr == 0)
+			break;
+		addr += PAGE_SIZE;
+		ptep++;
+	}
+}
+
+#ifdef CONFIG_SMP
+
+struct tlb_args {
+	struct vm_area_struct *ta_vma;
+	unsigned long ta_start;
+	unsigned long ta_end;
+};
+
+static inline void ipi_flush_tlb_page(void *arg)
+{
+	struct tlb_args *ta = arg;
+
+	local_flush_tlb_page(ta->ta_vma, ta->ta_start);
+}
+
+void
+smp_flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+{
+	struct tlb_args ta = {
+		.ta_vma = vma,
+		.ta_start = addr
+	};
+
+	on_each_cpu_mask(mm_cpumask(vma->vm_mm), ipi_flush_tlb_page, &ta, 1);
+}
+EXPORT_SYMBOL(smp_flush_tlb_page);
+
+void
+smp_flush_tlb_mm(struct mm_struct *mm)
+{
+	on_each_cpu_mask(mm_cpumask(mm), (smp_call_func_t)local_flush_tlb_mm,
+			 mm, 1);
+}
+EXPORT_SYMBOL(smp_flush_tlb_mm);
+
+static inline void ipi_flush_tlb_range(void *arg)
+{
+	struct tlb_args *ta = arg;
+
+	local_flush_tlb_range(ta->ta_vma, ta->ta_start, ta->ta_end);
+}
+
+void
+smp_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+		    unsigned long end)
+{
+	struct tlb_args ta = {
+		.ta_vma = vma,
+		.ta_start = start,
+		.ta_end = end
+	};
+
+	on_each_cpu_mask(mm_cpumask(vma->vm_mm), ipi_flush_tlb_range, &ta, 1);
+}
+EXPORT_SYMBOL(smp_flush_tlb_range);
+
+static inline void ipi_flush_tlb_kernel_range(void *arg)
+{
+	struct tlb_args *ta = arg;
+
+	local_flush_tlb_kernel_range(ta->ta_start, ta->ta_end);
+}
+
+void
+smp_flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	struct tlb_args ta = {
+		.ta_start = start,
+		.ta_end = end
+	};
+
+	on_each_cpu(ipi_flush_tlb_kernel_range, &ta, 1);
+}
+EXPORT_SYMBOL(smp_flush_tlb_kernel_range);
+
+#endif
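
Ordinary POSIX code is enough to exercise smp_flush_tlb_range() above: change
protections on memory that more than one CPU has touched. A sketch (the path
from mprotect() down to flush_tlb_range() is generic mm behaviour, not
something this patch adds, so treat the claim as illustrative):

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static void *region;
static const size_t len = 1 << 20;	/* 256 x 4K pages */

static void *toucher(void *arg)
{
	/* Fault the pages in so the JTLB holds entries for them */
	memset(region, 0xab, len);
	return NULL;
}

int main(void)
{
	pthread_t t;

	region = mmap(NULL, len, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (region == MAP_FAILED)
		return 1;

	if (pthread_create(&t, NULL, toucher, NULL))
		return 1;
	pthread_join(&t, NULL);

	/* Dropping write permission must invalidate stale TLB entries on
	 * every CPU this process ran on, hence an IPI-based range flush. */
	if (mprotect(region, len, PROT_READ))
		perror("mprotect");

	return 0;
}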
-- 
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Christian Brauner, Paul Walmsley,
 Palmer Dabbelt, Albert Ou
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger,
 Guillaume Thouvenin, Julien Villette, Marius Gligor,
 linux-riscv@lists.infradead.org
Subject: [RFC PATCH v3 25/37] kvx: Add system call support
Date: Mon, 22 Jul 2024 11:41:36 +0200
Message-ID: <20240722094226.21602-26-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add system call support and related uaccess.h for kvx.
Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Guillaume Thouvenin
Signed-off-by: Guillaume Thouvenin
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Julien Villette
Signed-off-by: Julien Villette
Co-developed-by: Marius Gligor
Signed-off-by: Marius Gligor
Signed-off-by: Yann Sionneau
---

Notes:
    V1 -> V2:
     - typo fixes (clober* -> clobber*)
     - use generic __access_ok

    V2 -> V3:
     - Updated exception handling code to use generic entry/exit
     - Assembly syntax update
     - Remove legacy syscalls:
       - __ARCH_WANT_RENAMEAT
       - __ARCH_WANT_NEW_STAT
       - __ARCH_WANT_SET_GET_RLIMIT
     - Remove useless debug

 arch/kvx/include/asm/seccomp.h       |   14 +
 arch/kvx/include/asm/syscall.h       |  112 +++
 arch/kvx/include/asm/syscalls.h      |   19 +
 arch/kvx/include/asm/uaccess.h       |  317 +++++++
 arch/kvx/include/asm/unistd.h        |   11 +
 arch/kvx/include/uapi/asm/cachectl.h |   25 +
 arch/kvx/include/uapi/asm/unistd.h   |   13 +
 arch/kvx/kernel/entry.S              | 1208 ++++++++++++++++++++++++++
 arch/kvx/kernel/sys_kvx.c            |   58 ++
 arch/kvx/kernel/syscall_table.c      |   19 +
 10 files changed, 1796 insertions(+)
 create mode 100644 arch/kvx/include/asm/seccomp.h
 create mode 100644 arch/kvx/include/asm/syscall.h
 create mode 100644 arch/kvx/include/asm/syscalls.h
 create mode 100644 arch/kvx/include/asm/uaccess.h
 create mode 100644 arch/kvx/include/asm/unistd.h
 create mode 100644 arch/kvx/include/uapi/asm/cachectl.h
 create mode 100644 arch/kvx/include/uapi/asm/unistd.h
 create mode 100644 arch/kvx/kernel/entry.S
 create mode 100644 arch/kvx/kernel/sys_kvx.c
 create mode 100644 arch/kvx/kernel/syscall_table.c

diff --git a/arch/kvx/include/asm/seccomp.h b/arch/kvx/include/asm/seccomp.h
new file mode 100644
index 0000000000000..566945f84648a
--- /dev/null
+++ b/arch/kvx/include/asm/seccomp.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_SECCOMP_H
+#define _ASM_SECCOMP_H
+
+#include
+
+#include
+
+#define SECCOMP_ARCH_NATIVE AUDIT_ARCH_KVX
+#define SECCOMP_ARCH_NATIVE_NR NR_syscalls
+#define SECCOMP_ARCH_NATIVE_NAME "kvx"
+
+#endif /* _ASM_SECCOMP_H */
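
With SECCOMP_ARCH_NATIVE defined, classic seccomp BPF filters can be written
against the kvx arch value. A hypothetical userspace sketch follows;
AUDIT_ARCH_KVX and __NR_cachectl come from this series' uapi headers and only
resolve with kvx-aware headers, so both are assumptions as far as this hunk
is concerned:

#include <errno.h>
#include <stddef.h>
#include <linux/audit.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <sys/prctl.h>

static struct sock_filter filter[] = {
	/* Refuse to run if the reported arch is not kvx */
	BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
		 offsetof(struct seccomp_data, arch)),
	BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_KVX, 1, 0),
	BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
	/* Deny the kvx-specific cachectl syscall, allow everything else */
	BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
		 offsetof(struct seccomp_data, nr)),
	BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_cachectl, 0, 1),
	BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EPERM),
	BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
};

static struct sock_fprog prog = {
	.len = sizeof(filter) / sizeof(filter[0]),
	.filter = filter,
};

int install_filter(void)
{
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
		return -1;
	return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}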

diff --git a/arch/kvx/include/asm/syscall.h b/arch/kvx/include/asm/syscall.h
new file mode 100644
index 0000000000000..a70e26a37cc0b
--- /dev/null
+++ b/arch/kvx/include/asm/syscall.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_SYSCALL_H
+#define _ASM_KVX_SYSCALL_H
+
+#include
+#include
+#include
+
+#include
+
+/* The array of function pointers for syscalls. */
+extern void *sys_call_table[];
+
+void scall_machine_exit(unsigned char value);
+
+/**
+ * syscall_get_nr - find which system call a task is executing
+ * @task: task of interest, must be blocked
+ * @regs: task_pt_regs() of @task
+ *
+ * If @task is executing a system call, it returns the system call number.
+ * If @task is not executing a system call, i.e. it's blocked inside the
+ * kernel for a fault or a signal, it returns -1.
+ *
+ * This returns an int even on 64-bit machines. Only 32 bits of the system
+ * call number are meaningful. If the actual arch value is 64 bits, this
+ * truncates it to 32 bits so 0xffffffff means -1.
+ *
+ * It's only valid to call this when @task is known to be in a blocked state.
+ */
+static inline int syscall_get_nr(struct task_struct *task, struct pt_regs *regs)
+{
+	if (!in_syscall(regs))
+		return -1;
+
+	return es_sysno(regs);
+}
+
+static inline void syscall_rollback(struct task_struct *task,
+				    struct pt_regs *regs)
+{
+	regs->r0 = regs->orig_r0;
+}
+
+static inline long syscall_get_error(struct task_struct *task,
+				     struct pt_regs *regs)
+{
+	/* 0 if the syscall succeeded, otherwise -Errorcode */
+	return IS_ERR_VALUE(regs->r0) ? regs->r0 : 0;
+}
+
+static inline long syscall_get_return_value(struct task_struct *task,
+					    struct pt_regs *regs)
+{
+	return regs->r0;
+}
+
+static inline void syscall_set_return_value(struct task_struct *task,
+					    struct pt_regs *regs,
+					    int error, long val)
+{
+	if (error)
+		val = error;
+
+	regs->r0 = val;
+}
+
+static inline int syscall_get_arch(struct task_struct *task)
+{
+	return AUDIT_ARCH_KVX;
+}
+
+static inline void syscall_get_arguments(struct task_struct *task,
+					 struct pt_regs *regs,
+					 unsigned long *args)
+{
+	args[0] = regs->orig_r0;
+	args++;
+	memcpy(args, &regs->r1, 5 * sizeof(args[0]));
+}
+
+typedef long (*syscall_fn)(ulong, ulong, ulong, ulong, ulong, ulong, ulong);
+static inline void syscall_handler(struct pt_regs *regs, ulong syscall)
+{
+	syscall_fn fn;
+
+	regs->orig_r0 = regs->r0;
+
+	if (syscall >= NR_syscalls) {
+		regs->r0 = sys_ni_syscall();
+		return;
+	}
+
+	fn = sys_call_table[syscall];
+
+	regs->r0 = fn(regs->orig_r0, regs->r1, regs->r2,
+		      regs->r3, regs->r4, regs->r5, regs->r6);
+}
+
+int __init setup_syscall_sigreturn_page(void *sigpage_addr);
+
+static inline bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
+{
+	return false;
+}
+
+#endif
diff --git a/arch/kvx/include/asm/syscalls.h b/arch/kvx/include/asm/syscalls.h
new file mode 100644
index 0000000000000..755c9674694b8
--- /dev/null
+++ b/arch/kvx/include/asm/syscalls.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_SYSCALLS_H
+#define _ASM_KVX_SYSCALLS_H
+
+#define sys_rt_sigreturn sys_rt_sigreturn
+
+#include
+
+long sys_cachectl(unsigned long addr, unsigned long len, unsigned long cache,
+		  unsigned long flags);
+
+long sys_rt_sigreturn(void);
+
+#endif
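
A hypothetical userspace caller of the sys_cachectl() prototype above (flag
values copied from the uapi header added later in this patch; __NR_cachectl
is expected to resolve via <asm/unistd.h> on a kvx toolchain, so this only
builds there):

#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* From arch/kvx/include/uapi/asm/cachectl.h in this patch */
#define CACHECTL_CACHE_DCACHE	(1 << 0)
#define CACHECTL_FLAG_OP_WB	(1 << 1)

static char buf[4096];

int main(void)
{
	memset(buf, 0x5a, sizeof(buf));

	/* Write back the D-cache lines covering buf */
	if (syscall(__NR_cachectl, (unsigned long)buf, sizeof(buf),
		    CACHECTL_CACHE_DCACHE, CACHECTL_FLAG_OP_WB) != 0) {
		perror("cachectl");
		return 1;
	}
	return 0;
}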
diff --git a/arch/kvx/include/asm/uaccess.h b/arch/kvx/include/asm/uaccess.h
new file mode 100644
index 0000000000000..4ab3eab60de06
--- /dev/null
+++ b/arch/kvx/include/asm/uaccess.h
@@ -0,0 +1,317 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * derived from arch/riscv/include/asm/uaccess.h
+ *
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Guillaume Thouvenin
+ */
+
+#ifndef _ASM_KVX_UACCESS_H
+#define _ASM_KVX_UACCESS_H
+
+#include
+#include
+
+/**
+ * access_ok: - Checks if a user space pointer is valid
+ * @addr: User space pointer to start of block to check
+ * @size: Size of block to check
+ *
+ * Context: User context only. This function may sleep.
+ *
+ * Checks if a pointer to a block of memory in user space is valid.
+ *
+ * Returns true (nonzero) if the memory block may be valid, false (zero)
+ * if it is definitely invalid.
+ *
+ * Note that, depending on architecture, this function probably just
+ * checks that the pointer is in the user space range - after calling
+ * this function, memory access functions may still return -EFAULT.
+ */
+#define access_ok(addr, size) ({				\
+	__chk_user_ptr(addr);					\
+	likely(__access_ok((addr), (size)));			\
+})
+
+#include
+
+/*
+ * The exception table consists of pairs of addresses: the first is the
+ * address of an instruction that is allowed to fault, and the second is
+ * the address at which the program should continue. No registers are
+ * modified, so it is entirely up to the continuation code to figure out
+ * what to do.
+ *
+ * All the routines below use bits of fixup code that are out of line
+ * with the main instruction path. This means when everything is well,
+ * we don't even have to jump over them. Further, they do not intrude
+ * on our cache or TLB entries.
+ */
+
+struct exception_table_entry {
+	unsigned long insn, fixup;
+};
+
+extern int fixup_exception(struct pt_regs *regs);
+
+/*
+ * Assembly defined functions (usercopy.S)
+ */
+extern unsigned long
+raw_copy_from_user(void *to, const void __user *from, unsigned long n);
+
+extern unsigned long
+raw_copy_to_user(void __user *to, const void *from, unsigned long n);
+
+extern unsigned long
+asm_clear_user(void __user *to, unsigned long n);
+
+#define __clear_user asm_clear_user
+
+static inline __must_check unsigned long
+clear_user(void __user *to, unsigned long n)
+{
+	might_fault();
+	if (!access_ok(to, n))
+		return n;
+
+	return asm_clear_user(to, n);
+}
+
+extern __must_check long strnlen_user(const char __user *str, long n);
+extern long strncpy_from_user(char *dest, const char __user *src, long count);
+
+#define __enable_user_access()
+#define __disable_user_access()
+
+/**
+ * get_user: - Get a simple variable from user space.
+ * @x: Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only. This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space. It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define get_user(x, ptr)					\
+({								\
+	long __e = -EFAULT;					\
+	const __typeof__(*(ptr)) __user *__p = (ptr);		\
+	might_fault();						\
+	if (likely(access_ok(__p, sizeof(*__p)))) {		\
+		__e = __get_user(x, __p);			\
+	} else {						\
+		x = 0;						\
+	}							\
+	__e;							\
+})
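
A short sketch of how the checked accessor is meant to be used by in-kernel
code (hypothetical helper, not from this patch):

/*
 * Copy one u32 from a user pointer; the -EFAULT path is handled by the
 * exception-table fixup machinery declared above.
 */
static long toy_read_user_u32(const u32 __user *uptr, u32 *out)
{
	u32 val;

	if (get_user(val, uptr))
		return -EFAULT;	/* faulting load was patched up for us */

	*out = val;
	return 0;
}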
+
+/**
+ * __get_user: - Get a simple variable from user space, with less checking.
+ * @x: Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only. This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space. It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+#define __get_user(x, ptr)					\
+({								\
+	unsigned long __err = 0;				\
+	__chk_user_ptr(ptr);					\
+								\
+	__enable_user_access();					\
+	__get_user_nocheck(x, ptr, __err);			\
+	__disable_user_access();				\
+								\
+	__err;							\
+})
+
+#define __get_user_nocheck(x, ptr, err)				\
+do {								\
+	unsigned long __gu_addr = (unsigned long)(ptr);		\
+	unsigned long __gu_val;					\
+	switch (sizeof(*(ptr))) {				\
+	case 1:							\
+		__get_user_asm("lbz", __gu_val, __gu_addr, err); \
+		break;						\
+	case 2:							\
+		__get_user_asm("lhz", __gu_val, __gu_addr, err); \
+		break;						\
+	case 4:							\
+		__get_user_asm("lwz", __gu_val, __gu_addr, err); \
+		break;						\
+	case 8:							\
+		__get_user_asm("ld", __gu_val, __gu_addr, err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+	(x) = (__typeof__(*(ptr)))__gu_val;			\
+} while (0)
+
+#define __get_user_asm(op, x, addr, err)			\
+({								\
+	__asm__ __volatile__(					\
+		"1:	"op" %1 = 0[%2]\n"			\
+		"	;;\n"					\
+		"2:\n"						\
+		".section .fixup,\"ax\"\n"			\
+		"3:	make %0 = 2b\n"				\
+		"	make %1 = 0\n"				\
+		"	;;\n"					\
+		"	make %0 = %3\n"				\
+		"	igoto %0\n"				\
+		"	;;\n"					\
+		".previous\n"					\
+		".section __ex_table,\"a\"\n"			\
+		"	.align 8\n"				\
+		"	.dword 1b,3b\n"				\
+		".previous"					\
+		: "=r"(err), "=r"(x)				\
+		: "r"(addr), "i"(-EFAULT), "0"(err));		\
+})
+
+/**
+ * put_user: - Write a simple value into user space.
+ * @x: Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only. This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space. It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define put_user(x, ptr)					\
+({								\
+	long __e = -EFAULT;					\
+	__typeof__(*(ptr)) __user *__p = (ptr);			\
+	might_fault();						\
+	if (likely(access_ok(__p, sizeof(*__p)))) {		\
+		__e = __put_user(x, __p);			\
+	}							\
+	__e;							\
+})
+
+/**
+ * __put_user: - Write a simple value into user space, with less checking.
+ * @x: Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only. This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space. It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define __put_user(x, ptr)					\
+({								\
+	unsigned long __err = 0;				\
+	__chk_user_ptr(ptr);					\
+								\
+	__enable_user_access();					\
+	__put_user_nocheck(x, ptr, __err);			\
+	__disable_user_access();				\
+								\
+	__err;							\
+})
+
+#define __put_user_nocheck(x, ptr, err)				\
+do {								\
+	unsigned long __pu_addr = (unsigned long)(ptr);		\
+	__typeof__(*(ptr)) __pu_val = (x);			\
+	switch (sizeof(*(ptr))) {				\
+	case 1:							\
+		__put_user_asm("sb", __pu_val, __pu_addr, err);	\
+		break;						\
+	case 2:							\
+		__put_user_asm("sh", __pu_val, __pu_addr, err);	\
+		break;						\
+	case 4:							\
+		__put_user_asm("sw", __pu_val, __pu_addr, err);	\
+		break;						\
+	case 8:							\
+		__put_user_asm("sd", __pu_val, __pu_addr, err);	\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+	}							\
+} while (0)
+
+#define __put_user_asm(op, x, addr, err)			\
+({								\
+	__asm__ __volatile__(					\
+		"1:	"op" 0[%2] = %1\n"			\
+		"	;;\n"					\
+		"2:\n"						\
+		".section .fixup,\"ax\"\n"			\
+		"3:	make %0 = 2b\n"				\
+		"	;;\n"					\
+		"	make %0 = %3\n"				\
+		"	igoto %0\n"				\
+		"	;;\n"					\
+		".previous\n"					\
+		".section __ex_table,\"a\"\n"			\
+		"	.align 8\n"				\
+		"	.dword 1b,3b\n"				\
+		".previous"					\
+		: "=r"(err)					\
+		: "r"(x), "r"(addr), "i"(-EFAULT), "0"(err));	\
+})
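
One subtlety in both fixup stubs above: "make %0 = %3" and "igoto %0" sit in
the same bundle, and, assuming the usual kvx VLIW semantics where a bundle
reads its source registers before any write in that bundle lands, the igoto
targets the 2b address loaded into %0 by the previous bundle while %0 ends up
holding -EFAULT afterwards. In C terms the stub amounts to the following
illustrative model (the "load" stand-in is made up):

#include <errno.h>

/* Pretend load that may "fault" (stand-in for the 1: instruction) */
static int model_load(const unsigned long *p, unsigned long *x)
{
	*x = *p;
	return 1;	/* 0 would mean the access faulted */
}

/* What the 3: stub achieves: resume at 2: with err set and x zeroed */
static long model_get_user(const unsigned long *uaddr, unsigned long *x)
{
	long err = 0;

	if (!model_load(uaddr, x)) {
		*x = 0;			/* "make %1 = 0" */
		err = -EFAULT;		/* "make %0 = %3" */
	}
	return err;			/* "2:" continues here either way */
}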
+
+#define HAVE_GET_KERNEL_NOFAULT
+
+#define __get_kernel_nofault(dst, src, type, err_label)		\
+do {								\
+	long __kr_err;						\
+								\
+	__get_user_nocheck(*((type *)(dst)), (type *)(src), __kr_err); \
+	if (unlikely(__kr_err))					\
+		goto err_label;					\
+} while (0)
+
+#define __put_kernel_nofault(dst, src, type, err_label)		\
+do {								\
+	long __kr_err;						\
+								\
+	__put_user_nocheck(*((type *)(src)), (type *)(dst), __kr_err); \
+	if (unlikely(__kr_err))					\
+		goto err_label;					\
+} while (0)
+
+
+#endif	/* _ASM_KVX_UACCESS_H */
diff --git a/arch/kvx/include/asm/unistd.h b/arch/kvx/include/asm/unistd.h
new file mode 100644
index 0000000000000..6cd093dbf2d80
--- /dev/null
+++ b/arch/kvx/include/asm/unistd.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#define __ARCH_WANT_SYS_CLONE
+
+#include
+
+#define NR_syscalls (__NR_syscalls)
diff --git a/arch/kvx/include/uapi/asm/cachectl.h b/arch/kvx/include/uapi/asm/cachectl.h
new file mode 100644
index 0000000000000..be0a1aa23cf64
--- /dev/null
+++ b/arch/kvx/include/uapi/asm/cachectl.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _UAPI_ASM_KVX_CACHECTL_H
+#define _UAPI_ASM_KVX_CACHECTL_H
+
+/*
+ * Cache type for the cachectl system call
+ */
+#define CACHECTL_CACHE_DCACHE	(1 << 0)
+
+/*
+ * Flags for the cachectl system call
+ */
+#define CACHECTL_FLAG_OP_INVAL	(1 << 0)
+#define CACHECTL_FLAG_OP_WB	(1 << 1)
+#define CACHECTL_FLAG_OP_MASK	(CACHECTL_FLAG_OP_INVAL | \
+				 CACHECTL_FLAG_OP_WB)
+
+#define CACHECTL_FLAG_ADDR_PHYS	(1 << 2)
+
+#endif /* _UAPI_ASM_KVX_CACHECTL_H */
diff --git a/arch/kvx/include/uapi/asm/unistd.h b/arch/kvx/include/uapi/asm/unistd.h
new file mode 100644
index 0000000000000..fe02b89abcec0
--- /dev/null
+++ b/arch/kvx/include/uapi/asm/unistd.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger + */ + +#define __ARCH_WANT_SYS_CLONE3 + +#include + +/* Additional KVX specific syscalls */ +#define __NR_cachectl (__NR_arch_specific_syscall) +__SYSCALL(__NR_cachectl, sys_cachectl) diff --git a/arch/kvx/kernel/entry.S b/arch/kvx/kernel/entry.S new file mode 100644 index 0000000000000..9686b76fbe0c9 --- /dev/null +++ b/arch/kvx/kernel/entry.S @@ -0,0 +1,1208 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Guillaume Thouvenin + * Marius Gligor + * Yann Sionneau + * Julien Villette + */ + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define MMU_SET_MASK ((1 << KVX_SFR_MMC_SS_WIDTH) - 1) + +/* Mask to replicate data from 32bits LSB to MSBs */ +#define REPLICATE_32_MASK 0x0804020108040201 + +#define PS_HLE_EN_ET_CLEAR_DAUS_DIS (SFR_SET_VAL_WFXL(PS, HLE, 1) | \ + SFR_SET_VAL_WFXL(PS, ET, 0) | \ + SFR_SET_VAL_WFXL(PS, DAUS, 0)) + +#define MMC_CLEAR_TLB_CLEAR_WAY (SFR_CLEAR_WFXL(MMC, SB) | \ + SFR_CLEAR_WFXL(MMC, SW)) + +#define MMC_SEL_TLB_CLEAR_WAY(__tlb) (SFR_SET_WFXL(MMC, SB, __tlb) | \ + MMC_CLEAR_TLB_CLEAR_WAY) +#define MMC_SEL_JTLB_CLEAR_WAY MMC_SEL_TLB_CLEAR_WAY(MMC_SB_JTLB) +#define MMC_SEL_LTLB_CLEAR_WAY MMC_SEL_TLB_CLEAR_WAY(MMC_SB_LTLB) + +#define MME_WFXL(_enable) SFR_SET_VAL_WFXL(PS, MME, _enable) + +/* Temporary scract system register for trap handling */ +#define TMP_SCRATCH_SR $sr_pl3 + +.altmacro + +#define TEL_DEFAULT_VALUE (TLB_ES_A_MODIFIED << KVX_SFR_TEL_ES_SHIFT) + +#define TASK_THREAD_SAVE_AREA_QUAD(__q) \ + (TASK_THREAD_SAVE_AREA + (__q) * QUAD_REG_SIZE) + +#ifdef CONFIG_KVX_DEBUG_TLB_WRITE +.section .rodata +mmc_error_panic_str_label: + .string "Failed to write entry to the JTLB (in assembly refill)" +#endif + +#ifdef CONFIG_KVX_DEBUG_ASN +.section .rodata +asn_error_panic_str_label: + .string "ASN mismatch !" +#endif + +/** + * prepare_kernel_entry: Simply clear the framepointer, reset the $le to 0= and + * place the pointer to pt_regs struct in $r0, to have it in the right pla= ce + * for the calls to 'do_'. + */ +.macro prepare_kernel_entry + make $r1 =3D 0x0 + /* Clear frame pointer for the kernel */ + make $fp =3D 0 + ;; + /* Clear $le on entry */ + set $le =3D $r1 + /* pt_regs argument for 'do_(struct pt_regs *)' */ + copyd $r0 =3D $sp + /* Reenable hwloop, MMU and clear exception taken */ + make $r2 =3D PS_HLE_EN_ET_CLEAR_DAUS_DIS + ;; + wfxl $ps, $r2 + ;; +.endm + +/** + * save_r0r1r2r3: Save registers $r0 - $r3 in the temporary thread area wi= thout + * modifiying any other register. $r4 is only used temporarily and will be + * restored to its original value. After the call, $r0 - $r3 are saved in + * task_struct.thread.save_area. + */ +.macro save_r0r1r2r3 + rswap $r4 =3D $sr + ;; + /* Save registers in save_area */ + so __PA(TASK_THREAD_SAVE_AREA_QUAD(0))[$r4] =3D $r0r1r2r3 + /* Restore register */ + rswap $r4 =3D $sr + ;; +.endm + +/** + * This macro checks whether the entry was from user space ($sps.pl =3D 1)= or + * from kernel space ($sps.pl =3D 0). If the entry was from user space it = will + * switch the stack to the kernel stack and save the original $sp. 
+ * After this macro the registers $r0 to $r2 have the following content: + * $r0 -> pt_regs + * $r1 -> original $sp (when entering into the kernel) + * $r2 -> task_struct ($sr) + * $r12 -> kernel stack ($sp) + */ +.macro switch_stack + get $r2 =3D $sr + ;; + get $r1 =3D $sps + ;; + /* Check if $sps.pl bit is 0 (ie in kernel mode) + * if so, then we dont have to switch stack */ + cb.even $r1? 1f + /* Copy original $sp in a scratch reg */ + copyd $r1 =3D $sp + ;; + /* restore sp from kernel stack pointer */ + ld $sp =3D __PA(TASK_THREAD_KERNEL_SP)[$r2] + ;; +1: +.endm + +/** + * Disable MMU when entering the kernel + * This macro does not require any register because the used register ($r0) + * is restored after use. This macro can safely be used directly after ent= ering + * exceptions. + * ps_val: Additional $ps modification while disabling the MMU + */ +.macro disable_mmu ps_val=3D0 + set TMP_SCRATCH_SR =3D $r0 + ;; + /* Disable MME in $sps */ + make $r0 =3D MME_WFXL(0) + ;; + wfxl $sps, $r0 + ;; + /* Enable DAUS in $ps */ + make $r0 =3D SFR_SET_VAL_WFXL(PS, DAUS, 1) | \ps_val + ;; + wfxl $ps, $r0 + /* We are now accessing data with MMU disabled */ + ;; + get $r0 =3D TMP_SCRATCH_SR + ;; +.endm + +/** + * Save all general purpose registers and some special purpose registers f= or + * entry into kernel space. This macro expects to be called immediately af= ter + * the switch_stack macro, because it expects the following register + * assignments: + * $r1 -> original $sp (when entering into the kernel) + * $r2 -> task_struct ($sr) + */ +.macro save_regs_for_exception + /* make some room on stack to save registers */ + addd $sp =3D $sp, -(PT_SIZE_ON_STACK) + so __PA(PT_Q4-PT_SIZE_ON_STACK)[$sp] =3D $r4r5r6r7 + /* Compute physical address of stack to avoid using 64bits immediate */ + addd $r6 =3D $sp, VA_TO_PA_OFFSET - (PT_SIZE_ON_STACK) + ;; + so PT_Q60[$r6] =3D $r60r61r62r63 + /* Now that $r60r61r62r63 is saved, we can use it for saving + * original stack stored in scratch_reg. Note that we can not + * use the r12r13r14r15 quad to do that because it would + * modify the current $r12/sp ! This is if course not what we + * want and hence we use the freshly saved quad $r60r61r62r63. + * + * Note that we must use scratch_reg before reloading the saved + * quad since the scratch reg is contained in it, so reloading + * it before copying it would overwrite it. 
+ */ + copyd $r60 =3D $r1 + ;; + /* Reload the saved quad registers to save correct values + * Since we use the scratch reg before that */ + lo $r0r1r2r3 =3D __PA(TASK_THREAD_SAVE_AREA_QUAD(0))[$r2] + ;; + so PT_Q8[$r6] =3D $r8r9r10r11 + ;; + so PT_Q0[$r6] =3D $r0r1r2r3 + copyd $r61 =3D $r13 + ;; + so PT_Q16[$r6] =3D $r16r17r18r19 + copyd $r62 =3D $r14 + ;; + so PT_Q20[$r6] =3D $r20r21r22r23 + copyd $r63 =3D $r15 + ;; + so PT_Q24[$r6] =3D $r24r25r26r27 + get $r8 =3D $cs + ;; + so PT_Q28[$r6] =3D $r28r29r30r31 + get $r9 =3D $spc + ;; + so PT_Q32[$r6] =3D $r32r33r34r35 + get $r10 =3D $sps + ;; + so PT_Q36[$r6] =3D $r36r37r38r39 + get $r11 =3D $es + ;; + so PT_Q40[$r6] =3D $r40r41r42r43 + get $r16 =3D $lc + ;; + so PT_Q44[$r6] =3D $r44r45r46r47 + get $r17 =3D $le + ;; + so PT_Q48[$r6] =3D $r48r49r50r51 + get $r18 =3D $ls + ;; + so PT_Q52[$r6] =3D $r52r53r54r55 + get $r19 =3D $ra + ;; + so PT_Q56[$r6] =3D $r56r57r58r59 + ;; + /* Store the original $sp in the pt_regs */ + so PT_Q12[$r6] =3D $r60r61r62r63 + ;; + /* Store special registers */ + so PT_CS_SPC_SPS_ES[$r6] =3D $r8r9r10r11 + ;; + so PT_LC_LE_LS_RA[$r6] =3D $r16r17r18r19 + /* Clear frame pointer */ + make $fp =3D 0 + ;; +.endm + +.section .exception.text, "ax", @progbits ;\ +SYM_FUNC_START(return_from_exception) + disable_mmu KVX_SFR_PS_HLE_MASK + ;; + lo $r0r1r2r3 =3D __PA(PT_CS_SPC_SPS_ES)[$sp] + /* Compute physical address of stack to avoid using 64bits immediate */ + addd $r14 =3D $sp, VA_TO_PA_OFFSET + ;; + lo $r4r5r6r7 =3D PT_LC_LE_LS_RA[$r14] + ;; + lo $r60r61r62r63 =3D PT_Q60[$r14] + ;; + lo $r56r57r58r59 =3D PT_Q56[$r14] + ;; + lo $r52r53r54r55 =3D PT_Q52[$r14] + get $r11 =3D $sps + ;; + lo $r48r49r50r51 =3D PT_Q48[$r14] + /* Generate a mask of ones at each bit where the current $sps + * differs from the $sps to be restored + */ + xord $r11 =3D $r2, $r11 + /* prepare wfxl clear mask on LSBs */ + notd $r15 =3D $r2 + /* prepare wfxl set mask on MSBs */ + slld $r13 =3D $r2, 32 + ;; + lo $r44r45r46r47 =3D PT_Q44[$r14] + /* Replicate mask of ones on the 32 MSBs */ + sbmm8 $r11 =3D $r11, REPLICATE_32_MASK + /* Combine the set and clear mask for wfxl */ + insf $r13 =3D $r15, 31, 0 + ;; + lo $r40r41r42r43 =3D PT_Q40[$r14] + set $lc =3D $r4 + /* Mask to drop identical bits in order to avoid useless + * privilege traps + */ + andd $r13 =3D $r13, $r11 + ;; + lo $r16r17r18r19 =3D PT_Q16[$r14] + ;; + lo $r20r21r22r23 =3D PT_Q20[$r14] + ;; + lo $r24r25r26r27 =3D PT_Q24[$r14] + ;; + lo $r28r29r30r31 =3D PT_Q28[$r14] + set $le =3D $r5 + ;; + lo $r32r33r34r35 =3D PT_Q32[$r14] + set $ls =3D $r6 + ;; + lo $r36r37r38r39 =3D PT_Q36[$r14] + set $ra =3D $r7 + ;; + lo $r8r9r10r11 =3D PT_Q8[$r14] + set $cs =3D $r0 + /* MME was disabled by disable_mmu, reenable it before leaving */ + ord $r13 =3D $r13, SFR_SET_VAL_WFXL(SPS, MME, 1) + ;; + lo $r4r5r6r7 =3D PT_Q4[$r14] + set $spc =3D $r1 + ;; + lo $r0r1r2r3 =3D PT_Q0[$r14] + /* Store $sps wfxl value in scratch system register */ + set TMP_SCRATCH_SR =3D $r13 + ;; + lo $r12r13r14r15 =3D PT_Q12[$r14] + rswap $r0 =3D TMP_SCRATCH_SR + ;; + wfxl $sps, $r0 + ;; + /* Finally, restore $r0 value */ + rswap $r0 =3D TMP_SCRATCH_SR + ;; + barrier + ;; + rfe + ;; +SYM_FUNC_END(return_from_exception) + +.macro debug_prologue + barrier + ;; +.endm + +.macro debug_function + get $r1 =3D $ea + ;; + /* do_debug($r0: pt_regs, $r1: ea) */ + call do_debug + ;; +.endm + +/* These labels will be used for instruction patching */ +.global kvx_perf_tlb_refill, kvx_std_tlb_refill +.macro trap_prologue + disable_mmu + ;; + /* Save 
r3 in a temporary system register to check if the trap is a + * nomapping or not */ + set TMP_SCRATCH_SR =3D $r3 + ;; + get $r3 =3D $es + ;; + /* Hardware trap cause */ + extfz $r3 =3D $r3, KVX_SFR_END(ES_HTC), KVX_SFR_START(ES_HTC) + ;; + /* Is this a nomapping trap ? */ + compd.eq $r3 =3D $r3, KVX_TRAP_NOMAPPING + ;; + /* if nomapping trap, try fast_refill */ + cb.even $r3? trap_slow_path + ;; + /* + * Fast TLB refill routine + * + * On kvx, we do not have hardware page walking, hence, TLB refill is + * done via software upon no-mapping traps. + * This routine must be as fast as possible to avoid wasting CPU time. + * For that purpose, it is done in assembly allowing us to save only 12 + * registers ($r0 -> $r11) instead of a full context save. + * Remark: this exception handler's code is always mapped in LTLB, it wil= l never cause + * nomappings while fetching. The only possible cause of nomapping would + * be data accesses to the page table. Therefore, inside this refill + * handler, we disable MMU only for DATA accesses (lsu) using $ps.DAUS. + * + * The JTLB is 128 sets, 4-way set-associative. + * + * Refill flow: + * 1 - Disable MMU only for DATA accesses using DAUS + * 2 - Save necessary registers + * 3 - Walk the page table to find the TLB entry to add (virt to phys) + * 4 - Compute the TLB entry to be written (convert PTE to TLB entry) + * 5 - Compute the target set (0 -> 127) for the new TLB entry. This + * is done by extracting the 6 lsb of page number + * 6 - Get the current way, which is selected using round robin + * 7 - Mark PTE entry as _PAGE_ACCESSED (and optionally PAGE_DIRTY) + * 8 - Commit the new TLB entry + * 9 - Enable back MMU for DATA accesses by disabling DAUS + */ + /* Get current task */ + rswap $r63 =3D $sr + ;; +#ifdef CONFIG_KVX_MMU_STATS + get $r3 =3D $pm0 + ;; + sd __PA(TASK_THREAD_ENTRY_TS)[$r63] =3D $r3 + ;; +#endif + /* Restore $r3 from temporary system scratch register */ + get $r3 =3D TMP_SCRATCH_SR + /* Save registers to save $tel, $teh and $mmc */ + so __PA(TASK_THREAD_SAVE_AREA_QUAD(2))[$r63] =3D $r8r9r10r11 + ;; + get $r8 =3D $tel + /* Save registers for refill handler */ + so __PA(TASK_THREAD_SAVE_AREA_QUAD(1))[$r63] =3D $r4r5r6r7 + ;; + /* Get exception address */ + get $r0 =3D $ea + /* Save more registers to be comfy */ + so __PA(TASK_THREAD_SAVE_AREA_QUAD(0))[$r63] =3D $r0r1r2r3 + ;; + get $r9 =3D $teh + ;; + get $r10 =3D $mmc + ;; + /* Restore $r63 value */ + rswap $r63 =3D $sr + /* Check kernel address range */ + addd $r4 =3D $r0, -KERNEL_DIRECT_MEMORY_MAP_BASE + /* Load task active mm to access pgd used in kvx_std_tlb_refill */ + ld $r1 =3D __PA(TASK_ACTIVE_MM)[$r63] + ;; +kvx_perf_tlb_refill: + /* Check if the address is in the kernel direct memory mapping */ + compd.ltu $r3 =3D $r4, KERNEL_DIRECT_MEMORY_MAP_SIZE + /* Clear low bits of virtual address to align on page size */ + andd $r5 =3D $r0, ~(REFILL_PERF_PAGE_SIZE - 1) + /* Create corresponding physical address */ + addd $r2 =3D $r4, PHYS_OFFSET + ;; + /* If address is not a kernel one, take the standard path */ + cb.deqz $r3? 
kvx_std_tlb_refill + /* Prepare $teh value with virtual address and kernel value */ + ord $r7 =3D $r5, REFILL_PERF_TEH_VAL + ;; + /* Get $pm0 value as a pseudo random value for LTLB way to use */ + get $r4 =3D $pm0 + /* Clear low bits of physical address to align on page size */ + andd $r2 =3D $r2, ~(REFILL_PERF_PAGE_SIZE - 1) + /* Prepare value for $mmc wfxl to select LTLB and correct way */ + make $r5 =3D MMC_SEL_LTLB_CLEAR_WAY + ;; + /* Keep low bits of timer value */ + andw $r4 =3D $r4, (REFILL_PERF_ENTRIES - 1) + /* Get current task pointer for register restoration */ + get $r11 =3D $sr + ;; + /* Add LTLB base way number for kernel refill way */ + addw $r4 =3D $r4, LTLB_KERNEL_RESERVED + /* Prepare $tel value with physical address and kernel value */ + ord $r6 =3D $r2, REFILL_PERF_TEL_VAL + set $teh =3D $r7 + ;; + /* insert way in $mmc wfxl value */ + insf $r5 =3D $r4, KVX_SFR_END(MMC_SW) + 32, KVX_SFR_START(MMC_SW) + 32 + set $tel =3D $r6 + ;; + wfxl $mmc, $r5 + ;; + goto do_tlb_write + ;; +kvx_std_tlb_refill: + /* extract PGD offset */ + extfz $r3 =3D $r0, (ASM_PGDIR_SHIFT + ASM_PGDIR_BITS - 1), ASM_PGDIR_SHIFT + /* is mm NULL ? if so, use init_mm */ + cmoved.deqz $r1? $r1 =3D init_mm + ;; + get $r7 =3D $pcr + /* Load PGD base address into $r1 */ + ld $r1 =3D __PA(MM_PGD)[$r1] + ;; + /* Add offset for physical address */ + addd $r1 =3D $r1, VA_TO_PA_OFFSET + /* Extract processor ID to compute cpu_offset */ + extfz $r7 =3D $r7, KVX_SFR_END(PCR_PID), KVX_SFR_START(PCR_PID) + ;; + /* Load PGD entry offset */ + ld.xs $r1 =3D $r3[$r1] + /* Load per_cpu_offset */ +#if defined(CONFIG_SMP) + make $r5 =3D __PA(__per_cpu_offset) +#endif + ;; + /* extract PMD offset*/ + extfz $r3 =3D $r0, (ASM_PMD_SHIFT + ASM_PMD_BITS - 1), ASM_PMD_SHIFT + /* If PGD entry is null -> out */ + cb.deqz $r1? refill_err_out +#if defined(CONFIG_SMP) + /* Load cpu offset */ + ld.xs $r7 =3D $r7[$r5] +#else + /* Force cpu offset to 0 */ + make $r7 =3D 0 +#endif + ;; + /* Load PMD entry offset and keep pointer to the entry for huge page */ + addx8d $r2 =3D $r3, $r1 + ld.xs $r1 =3D $r3[$r1] + ;; + /* Check if it is a huge page (2Mb or 512Mb in PMD table)*/ + andd $r6 =3D $r1, _PAGE_HUGE + /* If PMD entry is null -> out */ + cb.deqz $r1? refill_err_out + /* extract PTE offset */ + extfz $r3 =3D $r0, (PAGE_SHIFT + 8), PAGE_SHIFT + ;; + /* + * If the page is a huge one we already have set the PTE and the + * pointer to the PTE. + */ + cb.dnez $r6? is_huge_page + ;; + /* Load PTE entry */ + ld.xs $r1 =3D $r3[$r1] + addx8d $r2 =3D $r3, $r1 + ;; + /* Check if it is a huge page (64Kb in PTE table) */ + andd $r6 =3D $r1, _PAGE_HUGE + ;; + /* Check if PTE entry is for a huge page */ + cb.dnez $r6? 
is_huge_page + ;; + /* 4K: Extract set value */ + extfz $r0 =3D $r1, (PAGE_SHIFT + KVX_SFR_MMC_SS_WIDTH - 1), PAGE_SHIFT + /* 4K: Extract virt page from ea */ + andd $r4 =3D $r0, PAGE_MASK + ;; +/* + * This path expects the following: + * - $r0 =3D set index + * - $r1 =3D PTE entry + * - $r2 =3D PTE entry address + * - $r4 =3D virtual page address + */ +pte_prepared: + /* Compute per_cpu_offset + current way of set address */ + addd $r5 =3D $r0, $r7 + /* Get exception cause for access type handling (page dirtying) */ + get $r7 =3D $es + /* Clear way and select JTLB */ + make $r6 =3D MMC_SEL_JTLB_CLEAR_WAY + ;; + /* Load current way to use for current set */ + lbz $r0 =3D __PA(jtlb_current_set_way)[$r5] + /* Check if the access was a "write" access */ + andd $r7 =3D $r7, (KVX_TRAP_RWX_WRITE << KVX_SFR_ES_RWX_SHIFT) + ;; + /* If bit PRESENT of pte entry is 0, then entry is not present */ + cb.even $r1? refill_err_out + /* + * Set the JTLB way in $mmc value, add 32 bits to be in the set part. + * Since we are refilling JTLB, we must make sure we insert only + * relevant bits (ie 2 bits for ways) to avoid using nonexistent ways. + */ + insf $r6 =3D $r0, KVX_SFR_START(MMC_SW) + 32 + (MMU_JTLB_WAYS_SHIFT - 1),\ + KVX_SFR_START(MMC_SW) + 32 + /* Extract page global bit */ + extfz $r3 =3D $r1, _PAGE_GLOBAL_SHIFT, _PAGE_GLOBAL_SHIFT + /* Increment way value, note that we do not care about overflow since + * we only use the two lower byte */ + addd $r0 =3D $r0, 1 + ;; + /* Prepare MMC */ + wfxl $mmc, $r6 + ;; + /* Insert global bit (if any) to its position into $teh value */ + insf $r4 =3D $r3, KVX_SFR_TEH_G_SHIFT, KVX_SFR_TEH_G_SHIFT + /* If "write" access ($r7 !=3D 0), then set it as dirty */ + cmoved.dnez $r7? $r7 =3D _PAGE_DIRTY + /* Clear bits not related to FN in the pte entry for TEL writing */ + andd $r6 =3D $r1, PFN_PTE_MASK + /* Store new way */ + sb __PA(jtlb_current_set_way)[$r5] =3D $r0 + ;; + /* Extract access perms from pte entry (discard PAGE_READ bit +1) */ + extfz $r3 =3D $r1, KVX_ACCESS_PERM_STOP_BIT, KVX_ACCESS_PERM_START_BIT + 1 + /* Move FN bits to their place */ + srld $r6 =3D $r6, PFN_PTE_SHIFT - PAGE_SHIFT + /* Extract the page size + cache policy */ + andd $r0 =3D $r1, (KVX_PAGE_SZ_MASK | KVX_PAGE_CP_MASK) + /* Prepare SBMM value */ + make $r5 =3D KVX_SBMM_BYTE_SEL + ;; + /* Add page size + cache policy to $tel value */ + ord $r6 =3D $r6, $r0 + /* Get $mmc to get current ASN */ + get $r0 =3D $mmc + /* Add _PAGE_ACCESSED bit to PTE entry for writeback */ + ord $r7 =3D $r7, _PAGE_ACCESSED + ;; + /* Or PTE value with accessed/dirty flags */ + ord $r1 =3D $r1, $r7 + /* Generate the byte selection for sbmm */ + slld $r5 =3D $r5, $r3 + /* Compute the mask to extract set and mask exception address */ + make $r7 =3D KVX_PAGE_PA_MATRIX + ;; + ord $r0 =3D $r6, TEL_DEFAULT_VALUE + /* Add ASN from mmc into future $teh value */ + insf $r4 =3D $r0, KVX_SFR_END(MMC_ASN), KVX_SFR_START(MMC_ASN) + /* Get the page permission value */ + sbmm8 $r6 =3D $r7, $r5 + /* Check PAGE_READ bit in PTE entry */ + andd $r3 =3D $r1, _PAGE_READ + ;; + /* If PAGE_READ bit is not set, set policy as NA_NA */ + cmoved.deqz $r3? 
$r6 =3D TLB_PA_NA_NA + ;; + /* Shift PA to correct position */ + slld $r6 =3D $r6, KVX_SFR_TEL_PA_SHIFT + set $teh =3D $r4 + ;; + /* Store updated PTE entry */ + sd 0[$r2] =3D $r1 + /* Prepare $tel */ + ord $r6 =3D $r0, $r6 + /* Get current task pointer for register restoration */ + get $r11 =3D $sr + ;; + set $tel =3D $r6 + ;; +do_tlb_write: + tlbwrite + ;; +#ifdef CONFIG_KVX_DEBUG_TLB_WRITE + goto mmc_error_check + ;; +mmc_error_check_ok: +#endif +#ifdef CONFIG_KVX_DEBUG_ASN + goto asn_check + ;; +asn_check_ok: +#endif + set $tel =3D $r8 + /* Restore registers */ + lo $r4r5r6r7 =3D __PA(TASK_THREAD_SAVE_AREA_QUAD(1))[$r11] + ;; + set $teh =3D $r9 + lo $r0r1r2r3 =3D __PA(TASK_THREAD_SAVE_AREA_QUAD(0))[$r11] + ;; + set $mmc =3D $r10 + ;; + lo $r8r9r10r11 =3D __PA(TASK_THREAD_SAVE_AREA_QUAD(2))[$r11] + ;; +#ifdef CONFIG_KVX_MMU_STATS + /* + * Fence to simulate a direct data dependency after returning from trap + * nomapping handling. This is the worst case that can happen and the + * processor will be stalled waiting for previous loads to complete. + */ + fence + ;; + get $r4 =3D $pm0 + ;; + get $r0 =3D $sr + ;; + /* Get cycles measured on trap entry */ + ld $r1 =3D __PA(TASK_THREAD_ENTRY_TS)[$r0] + ;; + /* Compute refill time */ + sbfd $r0 =3D $r1, $r4 + ;; +#ifdef CONFIG_SMP + get $r1 =3D $pcr + ;; + /* Extract processor ID to compute cpu_offset */ + extfz $r1 =3D $r1, KVX_SFR_END(PCR_PID), KVX_SFR_START(PCR_PID) + make $r2 =3D __PA(__per_cpu_offset) + ;; + /* Load cpu offset */ + ld.xs $r1 =3D $r1[$r2] + ;; + addd $r1 =3D $r1, __PA(mmu_stats) + ;; +#else + make $r1 =3D __PA(mmu_stats) + ;; +#endif + /* Load time between refill + last refill cycle */ + lq $r2r3 =3D MMU_STATS_CYCLES_BETWEEN_REFILL_OFF[$r1] + ;; + /* Set last update time to current if 0 */ + cmoved.deqz $r3? $r3 =3D $r4 + /* remove current refill time to current cycle */ + sbfd $r4 =3D $r0, $r4 + ;; + /* Compute time between last refill and current refill */ + sbfd $r5 =3D $r3, $r4 + /* Update last cycle time */ + copyd $r3 =3D $r4 + ;; + /* Increment total time between refill */ + addd $r2 =3D $r2, $r5 + ;; + sq MMU_STATS_CYCLES_BETWEEN_REFILL_OFF[$r1] =3D $r2r3 + /* Get exception address */ + get $r4 =3D $ea + ;; + /* $r2 holds refill type (user/kernel/kernel_direct) */ + make $r2 =3D MMU_STATS_REFILL_KERNEL_OFF + /* Check if address is a kernel direct mapping one */ + compd.ltu $r3 =3D $r4, (KERNEL_DIRECT_MEMORY_MAP_BASE + \ + KERNEL_DIRECT_MEMORY_MAP_SIZE) + ;; + cmoved.dnez $r3? $r2 =3D MMU_STATS_REFILL_KERNEL_DIRECT_OFF + /* Check if address is a user (ie below kernel) */ + compd.ltu $r3 =3D $r4, KERNEL_DIRECT_MEMORY_MAP_BASE + ;; + cmoved.dnez $r3? $r2 =3D MMU_STATS_REFILL_USER_OFF + ;; + /* Compute refill struct addr into one reg */ + addd $r1 =3D $r2, $r1 + /* Load refill_struct values */ + lo $r4r5r6r7 =3D $r2[$r1] + ;; + /* Increment count */ + addd $r4 =3D $r4, 1 + /* Increment total cycles count */ + addd $r5 =3D $r5, $r0 + ;; + /* Set min to ~0 if 0 */ + cmoved.deqz $r6? $r6 =3D ~0 + ;; + /* Compare min and max */ + compd.ltu $r2 =3D $r0, $r6 + compd.gtu $r3 =3D $r0, $r7 + ;; + /* Update min and max*/ + cmoved.dnez $r2? $r6 =3D $r0 + cmoved.dnez $r3? 
$r7 =3D $r0 + ;; + /* store back all values */ + so 0[$r1] =3D $r4r5r6r7 + ;; + get $r0 =3D $sr + ;; + /* Restore clobberred registers */ + lo $r4r5r6r7 =3D __PA(TASK_THREAD_SAVE_AREA_QUAD(1))[$r0] + ;; + lo $r0r1r2r3 =3D __PA(TASK_THREAD_SAVE_AREA_QUAD(0))[$r0] + ;; +#endif + /* Save $r4 for reenabling mmu and data cache in sps */ + set TMP_SCRATCH_SR =3D $r4 + /* Enable MME in $sps */ + make $r4 =3D MME_WFXL(1) + ;; + wfxl $sps, $r4 + ;; + get $r4 =3D TMP_SCRATCH_SR + ;; + barrier + ;; + rfe + ;; + +is_huge_page: + /* + * When entering this path: + * - $r0 =3D $ea + * - $r1 =3D PTE entry + * - $r7 =3D cpu offset for tlb_current_set_way + * + * From now on, we have the PTE value in $r1 so we can extract the page + * size. This value is stored as it is expected by the MMU (ie between + * 0 and 3). + * Note that the page size value is located at the same place as in + * $tel and this is checked at build time so we can use TEL_PS defines. + * In this codepath, we extract the set and mask exception address and + * align virt and phys address with what the hardware expects. The MMU + * expects the lsb of the virt and phys addresses to be 0 according to + * the page size, i.e., for 4K pages, the 12 lsb must be 0, for 64K + * pages, the 16 lsb must be 0 and so on. + */ + extfz $r5 =3D $r1, KVX_SFR_END(TEL_PS), KVX_SFR_START(TEL_PS) + /* Compute the mask to extract set and mask exception address */ + make $r4 =3D KVX_PS_SHIFT_MATRIX + make $r6 =3D KVX_SBMM_BYTE_SEL + ;; + /* Generate the byte selection for sbmm */ + slld $r6 =3D $r6, $r5 + ;; + /* Get the shift value */ + sbmm8 $r5 =3D $r4, $r6 + make $r4 =3D 0xFFFFFFFFFFFFFFFF + ;; + /* extract TLB set from $ea (6 lsb of virt page) */ + srld $r5 =3D $r0, $r5 + /* Generate ea masking according to page shift */ + slld $r4 =3D $r4, $r5 + ;; + /* Mask to get the set value */ + andd $r0 =3D $r5, MMU_SET_MASK + /* Extract virt page from ea */ + andd $r4 =3D $r0, $r4 + ;; + /* Return to fast path */ + goto pte_prepared + ;; + +#ifdef CONFIG_KVX_DEBUG_TLB_WRITE +mmc_error_check: + get $r1 =3D $mmc + ;; + andd $r1 =3D $r1, KVX_SFR_MMC_E_MASK + ;; + cb.deqz $r1? mmc_error_check_ok + ;; + make $r0 =3D mmc_error_panic_str_label + goto asm_panic + ;; +#endif +#ifdef CONFIG_KVX_DEBUG_ASN +/* + * When entering this path $r11 =3D $sr. + * WARNING: Do not clobber it here if you don't want to mess up the regist= er + * restoration above. + */ +asn_check: + get $r1 =3D $ea + ;; + /* Check if kernel address, if so, there is no ASN */ + compd.geu $r2 =3D $r1, PAGE_OFFSET + ;; + cb.dnez $r2? asn_check_ok + ;; + get $r2 =3D $pcr + /* Load active mm addr */ + ld $r3 =3D __PA(TASK_ACTIVE_MM)[$r11] + ;; + get $r5 =3D $mmc + /* Extract processor ID to compute cpu_offset */ + extfz $r2 =3D $r2, KVX_SFR_END(PCR_PID), KVX_SFR_START(PCR_PID) + addd $r3 =3D $r3, MM_CTXT_ASN + ;; + extfz $r4 =3D $r5, KVX_SFR_END(MMC_ASN), KVX_SFR_START(MMC_ASN) + addd $r3 =3D $r3, VA_TO_PA_OFFSET + ;; + /* Load current ASN from active_mm */ + ld.xs $r3 =3D $r2[$r3] + ;; + /* Error if ASN is not set */ + cb.deqz $r3? asn_check_err + /* Mask $r3 ASN cycle part */ + andd $r5 =3D $r3, ((1 << KVX_SFR_MMC_ASN_WIDTH) - 1) + ;; + /* Compare the ASN in $mmc and the ASN in current task mm */ + compd.eq $r3 =3D $r5, $r4 + ;; + cb.dnez $r3? 
asn_check_ok + ;; +asn_check_err: + /* We are fried, die peacefully */ + make $r0 =3D asn_error_panic_str_label + goto asm_panic + ;; +#endif + +#if defined(CONFIG_KVX_DEBUG_ASN) || defined(CONFIG_KVX_DEBUG_TLB_WRITE) + +/** + * This routine calls panic from assembly after setting appropriate things + * $r0 =3D panic string + */ +asm_panic: + /* + * Reenable hardware loop and traps (for nomapping) since some functions + * might need it. Moreover, disable DAUS to reenable MMU accesses. + */ + make $r32 =3D PS_HLE_EN_ET_CLEAR_DAUS_DIS + make $r33 =3D 0 + get $r34 =3D $sr + ;; + /* Clear hw loop exit to disable current loop */ + set $le =3D $r33 + ;; + wfxl $ps, $r32 + ;; + /* Restore kernel stack */ + ld $r12 =3D TASK_THREAD_KERNEL_SP[$r34] + ;; + call panic + ;; +#endif + +/* Error path for TLB refill */ +refill_err_out: + get $r2 =3D $sr + ;; + /* Restore clobbered registers */ + lo $r8r9r10r11 =3D __PA(TASK_THREAD_SAVE_AREA_QUAD(2))[$r2] + ;; + lo $r4r5r6r7 =3D __PA(TASK_THREAD_SAVE_AREA_QUAD(1))[$r2] + goto trap_switch_stack + ;; + +/* This path is entered only when the trap is NOT a NOMAPPING */ +trap_slow_path: + /* Restore $r3 from temporary scratch system register */ + get $r3 =3D TMP_SCRATCH_SR + ;; +.endm + +.macro trap_function + /* + * In the fast path we don't care about the DAME, because we won't + * check it but if we enter the slow path we need to propagate it + * because we will check it later in the C code + */ + barrier + ;; + get $r2 =3D $ea + ;; + get $r1 =3D $es + ;; + /* do_hwtrap($r0: pt_regs, $r1: es, $r2: ea) */ + call do_hwtrap + ;; +.endm + +.macro interrupt_prologue + barrier + ;; +.endm + +.macro interrupt_function + /* generic_handle_arch_irq($r0: pt_regs) */ + call generic_handle_arch_irq + ;; +.endm + +.macro syscall_prologue + barrier + ;; +.endm + +.macro syscall_function + call do_syscall + ;; +.endm + + +.macro entry_handler type +.section .exception.text, "ax", @progbits +SYM_FUNC_START(kvx_\()\type\()_handler) + \type\()_prologue + disable_mmu + save_r0r1r2r3 +\type\()_switch_stack: + switch_stack + save_regs_for_exception + prepare_kernel_entry + \type\()_function + goto return_from_exception + ;; +SYM_FUNC_END(kvx_\()\type\()_handler) + +/* + * Exception vectors stored in $ev + */ +.section .exception.\()\type, "ax", @progbits +SYM_FUNC_START(kvx_\()\type\()_handler_trampoline): + goto kvx_\()\type\()_handler + ;; +SYM_FUNC_END(kvx_\()\type\()_handler_trampoline) + +/* + * Early Exception vectors stored in $ev + */ +.section .early_exception.\()\type, "ax", @progbits +SYM_FUNC_START(kvx_\()\type\()_early_handler): +1: nop + ;; + goto 1b + ;; +SYM_FUNC_END(kvx_\()\type\()_early_handler) +.endm + +/* + * Exception handlers + */ +entry_handler debug +entry_handler trap +entry_handler interrupt +entry_handler syscall + +#define SAVE_TCA_REG(__reg_num, __task_reg, __zero_reg1, __zero_reg2) \ + xso (CTX_SWITCH_TCA_REGS + (__reg_num * TCA_REG_SIZE))[__task_reg] =3D \ + $a##__reg_num ;\ + xmovetq $a##__reg_num##.lo =3D __zero_reg1, __zero_reg2 ;\ + xmovetq $a##__reg_num##.hi =3D __zero_reg1, __zero_reg2 ;\ + ;; + +.macro save_tca_regs task_reg zero_reg1 zero_reg2 + SAVE_TCA_REG(0, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(1, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(2, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(3, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(4, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(5, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(6, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(7, \task_reg, 
\zero_reg1, \zero_reg2) + SAVE_TCA_REG(8, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(9, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(10, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(11, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(12, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(13, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(14, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(15, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(16, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(17, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(18, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(19, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(20, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(21, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(22, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(23, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(24, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(25, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(26, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(27, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(28, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(29, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(30, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(31, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(32, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(33, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(34, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(35, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(36, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(37, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(38, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(39, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(40, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(41, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(42, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(43, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(44, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(45, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(46, \task_reg, \zero_reg1, \zero_reg2) + SAVE_TCA_REG(47, \task_reg, \zero_reg1, \zero_reg2) +.endm + +#define RESTORE_TCA_REG(__reg_num, __task_reg) \ + xlo.u $a##__reg_num =3D (CTX_SWITCH_TCA_REGS + (__reg_num * TCA_REG_SIZE)= ) \ + [__task_reg] ;\ + ;; + +.macro restore_tca_regs task_reg + RESTORE_TCA_REG(0, \task_reg) + RESTORE_TCA_REG(1, \task_reg) + RESTORE_TCA_REG(2, \task_reg) + RESTORE_TCA_REG(3, \task_reg) + RESTORE_TCA_REG(4, \task_reg) + RESTORE_TCA_REG(5, \task_reg) + RESTORE_TCA_REG(6, \task_reg) + RESTORE_TCA_REG(7, \task_reg) + RESTORE_TCA_REG(8, \task_reg) + RESTORE_TCA_REG(9, \task_reg) + RESTORE_TCA_REG(10, \task_reg) + RESTORE_TCA_REG(11, \task_reg) + RESTORE_TCA_REG(12, \task_reg) + RESTORE_TCA_REG(13, \task_reg) + RESTORE_TCA_REG(14, \task_reg) + RESTORE_TCA_REG(15, \task_reg) + RESTORE_TCA_REG(16, \task_reg) + RESTORE_TCA_REG(17, \task_reg) + RESTORE_TCA_REG(18, \task_reg) + RESTORE_TCA_REG(19, \task_reg) + RESTORE_TCA_REG(20, \task_reg) + RESTORE_TCA_REG(21, \task_reg) + RESTORE_TCA_REG(22, \task_reg) + RESTORE_TCA_REG(23, \task_reg) + RESTORE_TCA_REG(24, \task_reg) + RESTORE_TCA_REG(25, \task_reg) + RESTORE_TCA_REG(26, \task_reg) + RESTORE_TCA_REG(27, \task_reg) + RESTORE_TCA_REG(28, \task_reg) + RESTORE_TCA_REG(29, \task_reg) + RESTORE_TCA_REG(30, \task_reg) + RESTORE_TCA_REG(31, \task_reg) + RESTORE_TCA_REG(32, \task_reg) + RESTORE_TCA_REG(33, \task_reg) + RESTORE_TCA_REG(34, \task_reg) + RESTORE_TCA_REG(35, 
\task_reg) + RESTORE_TCA_REG(36, \task_reg) + RESTORE_TCA_REG(37, \task_reg) + RESTORE_TCA_REG(38, \task_reg) + RESTORE_TCA_REG(39, \task_reg) + RESTORE_TCA_REG(40, \task_reg) + RESTORE_TCA_REG(41, \task_reg) + RESTORE_TCA_REG(42, \task_reg) + RESTORE_TCA_REG(43, \task_reg) + RESTORE_TCA_REG(44, \task_reg) + RESTORE_TCA_REG(45, \task_reg) + RESTORE_TCA_REG(46, \task_reg) + RESTORE_TCA_REG(47, \task_reg) +.endm + +.text +/** + * Return from fork. + * start_thread will set return stack and pc. Then copy_thread will + * take care of the copying logic. + * $r20 will then contains 0 if tracing disabled (set by copy_thread) + * The mechanism is almost the same as for ret_from_kernel_thread. + */ +SYM_FUNC_START(ret_from_fork) + call schedule_tail + ;; + cb.deqz $r20? 1f + ;; + /* Call fn(arg) */ + copyd $r0 =3D $r21 + ;; + icall $r20 + ;; +1: + copyd $r0 =3D $sp /* pt_regs */ + ;; + call syscall_exit_to_user_mode + ;; + goto return_from_exception + ;; +SYM_FUNC_END(ret_from_fork) + +/* + * The callee-saved registers must be saved and restored. + * When entering: + * - r0 =3D previous task struct + * - r1 =3D next task struct + * Moreover, the parameters for function call (given by copy_thread) + * are stored in: + * - r20 =3D Func to call + * - r21 =3D Argument for function + */ +SYM_FUNC_START(__switch_to) + sd CTX_SWITCH_FP[$r0] =3D $fp + ;; + /* Save previous task context */ + so CTX_SWITCH_Q20[$r0] =3D $r20r21r22r23 + ;; + so CTX_SWITCH_Q24[$r0] =3D $r24r25r26r27 + get $r16 =3D $ra + ;; + so CTX_SWITCH_Q28[$r0] =3D $r28r29r30r31 + copyd $r17 =3D $sp + get $r2 =3D $cs + ;; + so CTX_SWITCH_RA_SP_R18_R19[$r0] =3D $r16r17r18r19 + /* Extract XMF bit which means coprocessor was used by user */ + andd $r3 =3D $r2, KVX_SFR_CS_XMF_MASK + ;; +#ifdef CONFIG_ENABLE_TCA + make $r4 =3D 0 + make $r5 =3D 0 + make $r6 =3D 1 + cb.dnez $r3? save_tca_registers + /* Check if next task needs TCA registers to be restored */ + ld $r7 =3D CTX_SWITCH_TCA_REGS_SAVED[$r1] + ;; +check_restore_tca: + cb.dnez $r7? 
restore_tca_registers + ;; +restore_fast_path: +#endif + /* Restore next task context */ + lo $r16r17r18r19 =3D CTX_SWITCH_RA_SP_R18_R19[$r1] + ;; + lo $r20r21r22r23 =3D CTX_SWITCH_Q20[$r1] + ;; + lo $r24r25r26r27 =3D CTX_SWITCH_Q24[$r1] + copyd $sp =3D $r17 + set $ra =3D $r16 + ;; + lo $r28r29r30r31 =3D CTX_SWITCH_Q28[$r1] + set $sr =3D $r1 + ;; + ld $fp =3D CTX_SWITCH_FP[$r1] + ;; + ret + ;; +#ifdef CONFIG_ENABLE_TCA +save_tca_registers: + save_tca_regs $r0 $r4 $r5 + ;; + /* Indicates that we saved the TCA context */ + sb CTX_SWITCH_TCA_REGS_SAVED[$r0] =3D $r6 + goto check_restore_tca + ;; +restore_tca_registers: + restore_tca_regs $r1 + ;; + /* Clear TCA registers saved hint */ + sb CTX_SWITCH_TCA_REGS_SAVED[$r1] =3D $r5 + goto restore_fast_path + ;; +#endif +SYM_FUNC_END(__switch_to) + +/*********************************************************************** +* Misc functions +***********************************************************************/ + +/** + * Avoid hardcoding trampoline for rt_sigreturn by using this code and + * copying it on user trampoline + */ +.pushsection .text +.global user_scall_rt_sigreturn_end +SYM_FUNC_START(user_scall_rt_sigreturn) + make $r6 =3D __NR_rt_sigreturn + ;; + scall $r6 + ;; +user_scall_rt_sigreturn_end: +SYM_FUNC_END(user_scall_rt_sigreturn) +.popsection diff --git a/arch/kvx/kernel/sys_kvx.c b/arch/kvx/kernel/sys_kvx.c new file mode 100644 index 0000000000000..b32a821d76c9d --- /dev/null +++ b/arch/kvx/kernel/sys_kvx.c @@ -0,0 +1,58 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Guillaume Thouvenin + */ + +#include + +#include +#include + +SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len, + unsigned long, prot, unsigned long, flags, + unsigned long, fd, off_t, off) +{ + /* offset must be a multiple of the page size */ + if (unlikely(offset_in_page(off) !=3D 0)) + return -EINVAL; + + /* Unlike mmap2 where the offset is in PAGE_SIZE-byte units, here it + * is in bytes. So we need to use PAGE_SHIFT. + */ + return ksys_mmap_pgoff(addr, len, prot, flags, fd, off >> PAGE_SHIFT); +} + +SYSCALL_DEFINE4(cachectl, unsigned long, addr, unsigned long, len, + unsigned long, cache, unsigned long, flags) +{ + bool wb =3D !!(flags & CACHECTL_FLAG_OP_WB); + bool inval =3D !!(flags & CACHECTL_FLAG_OP_INVAL); + + if (len =3D=3D 0) + return 0; + + /* Check for overflow */ + if (addr + len < addr) + return -EFAULT; + + if (cache !=3D CACHECTL_CACHE_DCACHE) + return -EINVAL; + + if ((flags & CACHECTL_FLAG_OP_MASK) =3D=3D 0) + return -EINVAL; + + if (flags & CACHECTL_FLAG_ADDR_PHYS) { + if (!IS_ENABLED(CONFIG_CACHECTL_UNSAFE_PHYS_OPERATIONS)) + return -EINVAL; + + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + + dcache_wb_inval_phys_range(addr, len, wb, inval); + return 0; + } + + return dcache_wb_inval_virt_range(addr, len, wb, inval); +} diff --git a/arch/kvx/kernel/syscall_table.c b/arch/kvx/kernel/syscall_tabl= e.c new file mode 100644 index 0000000000000..55a8ba9d79435 --- /dev/null +++ b/arch/kvx/kernel/syscall_table.c @@ -0,0 +1,19 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * derived from arch/riscv/kernel/syscall_table.c + * + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#include + +#include + +#undef __SYSCALL +#define __SYSCALL(nr, call)[nr] =3D (call), + +void *sys_call_table[__NR_syscalls] =3D { + [0 ... 
__NR_syscalls - 1] = sys_ni_syscall,
+#include
+};
--
2.45.2

From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org
Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Clement Leger , Jules Maselbas
Subject: [RFC PATCH v3 26/37] kvx: Add signal handling support
Date: Mon, 22 Jul 2024 11:41:37 +0200
Message-ID: <20240722094226.21602-27-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
X-ALTERMIMEV2_out: done Content-Type: text/plain; charset="utf-8" From: Yann Sionneau Add sigcontext definition and signal handling support for kvx. Co-developed-by: Clement Leger Signed-off-by: Clement Leger Co-developed-by: Jules Maselbas Signed-off-by: Jules Maselbas Co-developed-by: Julian Vetter Signed-off-by: Julian Vetter Signed-off-by: Yann Sionneau --- Notes: V1 -> V2: Use read_thread_flags() as suggested by Mark Rutland V2 -> V3: - Updated to use generic functions (e.g., arch_do_signal_or_restart) - Remove do_work_pending (now done by generic exit_to_user_mode_loop) --- arch/kvx/include/uapi/asm/sigcontext.h | 16 ++ arch/kvx/kernel/signal.c | 252 +++++++++++++++++++++++++ arch/kvx/kernel/vdso.c | 87 +++++++++ 3 files changed, 355 insertions(+) create mode 100644 arch/kvx/include/uapi/asm/sigcontext.h create mode 100644 arch/kvx/kernel/signal.c create mode 100644 arch/kvx/kernel/vdso.c diff --git a/arch/kvx/include/uapi/asm/sigcontext.h b/arch/kvx/include/uapi= /asm/sigcontext.h new file mode 100644 index 0000000000000..a26688fe8571f --- /dev/null +++ b/arch/kvx/include/uapi/asm/sigcontext.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _UAPI_ASM_KVX_SIGCONTEXT_H +#define _UAPI_ASM_KVX_SIGCONTEXT_H + +#include + +struct sigcontext { + struct user_regs_struct sc_regs; +}; + +#endif /* _UAPI_ASM_KVX_SIGCONTEXT_H */ diff --git a/arch/kvx/kernel/signal.c b/arch/kvx/kernel/signal.c new file mode 100644 index 0000000000000..f49c66aef2df6 --- /dev/null +++ b/arch/kvx/kernel/signal.c @@ -0,0 +1,252 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +struct rt_sigframe { + struct siginfo info; + struct ucontext uc; +}; + +int __init setup_syscall_sigreturn_page(void *sigpage_addr) +{ + unsigned int frame_size =3D (uintptr_t) &user_scall_rt_sigreturn_end - + (uintptr_t) &user_scall_rt_sigreturn; + + /* Copy the sigreturn scall implementation */ + memcpy(sigpage_addr, &user_scall_rt_sigreturn, frame_size); + + flush_icache_range((unsigned long) sigpage_addr, + (unsigned long) sigpage_addr + frame_size); + + return 0; +} + +static long restore_sigcontext(struct pt_regs *regs, + struct sigcontext __user *sc) +{ + long err; + + /* sc_regs is structured the same as the start of pt_regs */ + err =3D __copy_from_user(regs, &sc->sc_regs, sizeof(sc->sc_regs)); + + return err; +} + +long sys_rt_sigreturn(void) +{ + struct pt_regs *regs =3D current_pt_regs(); + struct rt_sigframe __user *frame; + struct task_struct *task; + sigset_t set; + + current->restart_block.fn =3D do_no_restart_syscall; + + frame =3D (struct rt_sigframe __user *) user_stack_pointer(regs); + + /* + * Stack is not aligned but should be ! + * User probably did some malicious things. 
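+ * (setup_rt_frame() always pushes a 16-byte aligned frame via
+ * get_sigframe(), so a misaligned $sp cannot point at a frame we
+ * built ourselves and is treated as corrupt.)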
+ */ + if (user_stack_pointer(regs) & STACK_ALIGN_MASK) + goto badframe; + + if (!access_ok(frame, sizeof(*frame))) + goto badframe; + + /* Restore sigmask */ + if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set))) + goto badframe; + + set_current_blocked(&set); + + if (restore_sigcontext(regs, &frame->uc.uc_mcontext)) + goto badframe; + + if (restore_altstack(&frame->uc.uc_stack)) + goto badframe; + + return regs->r0; + +badframe: + task =3D current; + if (show_unhandled_signals) { + pr_info_ratelimited( + "%s[%d]: bad frame in %s: frame=3D%p pc=3D%p sp=3D%p\n", + task->comm, task_pid_nr(task), __func__, + frame, (void *) instruction_pointer(regs), + (void *) user_stack_pointer(regs)); + } + force_sig(SIGSEGV); + return 0; +} + + +static long setup_sigcontext(struct rt_sigframe __user *frame, + struct pt_regs *regs) +{ + struct sigcontext __user *sc =3D &frame->uc.uc_mcontext; + long err; + + /* sc_regs is structured the same as the start of pt_regs */ + err =3D __copy_to_user(&sc->sc_regs, regs, sizeof(sc->sc_regs)); + + return err; +} + +static inline void __user *get_sigframe(struct ksignal *ksig, + struct pt_regs *regs, size_t framesize) +{ + unsigned long sp; + /* Default to using normal stack */ + sp =3D regs->sp; + + /* + * If we are on the alternate signal stack and would overflow it, don't. + * Return an always-bogus address instead so we will die with SIGSEGV. + */ + if (on_sig_stack(sp) && !likely(on_sig_stack(sp - framesize))) + return (void __user __force *)(-1UL); + + /* This is the X/Open sanctioned signal stack switching. */ + sp =3D sigsp(sp, ksig) - framesize; + + /* Align the stack frame on 16bytes */ + sp &=3D ~STACK_ALIGN_MASK; + + return (void __user *)sp; +} + +/* TODO: Use VDSO when ready ! */ +static int setup_rt_frame(struct ksignal *ksig, sigset_t *set, + struct pt_regs *regs) +{ + unsigned long sigpage =3D current->mm->context.sigpage; + struct rt_sigframe __user *frame; + long err =3D 0; + + frame =3D get_sigframe(ksig, regs, sizeof(*frame)); + if (!access_ok(frame, sizeof(*frame))) + return -EFAULT; + + err |=3D copy_siginfo_to_user(&frame->info, &ksig->info); + + /* Create the ucontext. */ + err |=3D __put_user(0, &frame->uc.uc_flags); + err |=3D __put_user(NULL, &frame->uc.uc_link); + err |=3D __save_altstack(&frame->uc.uc_stack, user_stack_pointer(regs)); + err |=3D setup_sigcontext(frame, regs); + err |=3D __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)); + if (err) + return -EFAULT; + + /* + * When returning from the handler, we want to jump to the + * sigpage which will execute the sigreturn scall. + */ + regs->ra =3D sigpage; + /* Return to signal handler */ + regs->spc =3D (unsigned long)ksig->ka.sa.sa_handler; + regs->sp =3D (unsigned long) frame; + + /* Parameters for signal handler */ + regs->r0 =3D ksig->sig; /* r0: signal number */ + regs->r1 =3D (unsigned long)(&frame->info); /* r1: siginfo pointer */ + regs->r2 =3D (unsigned long)(&frame->uc); /* r2: ucontext pointer */ + + return 0; +} + +static void handle_signal(struct ksignal *ksig, struct pt_regs *regs) +{ + bool stepping; + sigset_t *oldset =3D sigmask_to_save(); + int ret; + + /* Are we in a system call? */ + if (in_syscall(regs)) { + /* If so, check system call restarting.. 
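+ * The interrupted scall left its -ERESTART* error code in $r0; depending
+ * on that code and on SA_RESTART we either hand -EINTR to userspace or
+ * rewind $spc by 4 bytes so that the scall instruction is re-executed
+ * once the handler returns, e.g.:
+ *   read() -> -ERESTARTSYS, handler has SA_RESTART -> re-issue read()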
*/
+ switch (regs->r0) {
+ case -ERESTART_RESTARTBLOCK:
+ case -ERESTARTNOHAND:
+ regs->r0 = -EINTR;
+ break;
+ case -ERESTARTSYS:
+ if (!(ksig->ka.sa.sa_flags & SA_RESTART)) {
+ regs->r0 = -EINTR;
+ break;
+ }
+ fallthrough;
+ case -ERESTARTNOINTR:
+ regs->r0 = regs->orig_r0;
+ regs->spc -= 0x4;
+ break;
+ }
+ }
+
+ /*
+ * If single-stepping was enabled by a debugger, disable it now so
+ * that the register information in the sigcontext is correct, and
+ * notify the tracer before entering the signal handler.
+ */
+ stepping = test_thread_flag(TIF_SINGLESTEP);
+ if (stepping)
+ user_disable_single_step(current);
+
+ ret = setup_rt_frame(ksig, oldset, regs);
+
+ signal_setup_done(ret, ksig, stepping);
+}
+
+void arch_do_signal_or_restart(struct pt_regs *regs)
+{
+ struct ksignal ksig;
+
+ if (get_signal(&ksig)) {
+ handle_signal(&ksig, regs);
+ return;
+ }
+
+ /* Did we come from a system call? */
+ if (in_syscall(regs)) {
+ /*
+ * If we are here, this means there is no handler
+ * present and we must restart the syscall.
+ */
+ switch (regs->r0) {
+ case -ERESTART_RESTARTBLOCK:
+ /* Modify the syscall number in order to restart it */
+ regs->r6 = __NR_restart_syscall;
+ fallthrough;
+ case -ERESTARTNOHAND:
+ case -ERESTARTSYS:
+ case -ERESTARTNOINTR:
+ /* We are restarting the syscall */
+ regs->r0 = regs->orig_r0;
+ /*
+ * The scall instruction isn't bundled with anything else,
+ * so we can simply rewind the spc to restart the syscall.
+ */
+ regs->spc -= 0x4;
+ break;
+ }
+ }
+
+ /*
+ * If there's no signal to deliver, we just put the saved sigmask
+ * back.
+ */
+ restore_saved_sigmask();
+}
diff --git a/arch/kvx/kernel/vdso.c b/arch/kvx/kernel/vdso.c
new file mode 100644
index 0000000000000..1515de15eb31b
--- /dev/null
+++ b/arch/kvx/kernel/vdso.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#include
+#include
+#include
+
+#include
+
+static struct page *signal_page;
+
+static int __init init_sigreturn(void)
+{
+ struct page *sigpage;
+ void *mapped_sigpage;
+ int err;
+
+ sigpage = alloc_page(GFP_KERNEL);
+ if (!sigpage)
+ panic("Cannot allocate sigreturn page");
+
+ mapped_sigpage = vmap(&sigpage, 1, 0, PAGE_KERNEL);
+ if (!mapped_sigpage)
+ panic("Cannot map sigreturn page");
+
+ clear_page(mapped_sigpage);
+
+ err = setup_syscall_sigreturn_page(mapped_sigpage);
+ if (err)
+ panic("Cannot set signal return syscall, err: %x.", err);
+
+ vunmap(mapped_sigpage);
+
+ signal_page = sigpage;
+
+ return 0;
+}
+arch_initcall(init_sigreturn);
+
+static int sigpage_mremap(const struct vm_special_mapping *sm,
+ struct vm_area_struct *new_vma)
+{
+ current->mm->context.sigpage = new_vma->vm_start;
+ return 0;
+}
+
+static const struct vm_special_mapping sigpage_mapping = {
+ .name = "[sigpage]",
+ .pages = &signal_page,
+ .mremap = sigpage_mremap,
+};
+
+int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+{
+ int ret = 0;
+ unsigned long addr;
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma;
+
+ mmap_write_lock(mm);
+
+ addr = get_unmapped_area(NULL, STACK_TOP, PAGE_SIZE, 0, 0);
+ if (IS_ERR_VALUE(addr)) {
+ ret = addr;
+ goto up_fail;
+ }
+
+ vma = _install_special_mapping(
+ mm,
+ addr,
+ PAGE_SIZE,
+ VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
+ &sigpage_mapping);
+ if (IS_ERR(vma)) {
+ ret = PTR_ERR(vma);
+ goto up_fail;
+ }
+
+ mm->context.sigpage = addr;
+
+up_fail:
+ mmap_write_unlock(mm);
+ return ret;
+}
--
2.45.2

Received: from localhost (localhost [127.0.0.1]) by fx403.security-mail.net (Postfix) with ESMTP id D263B48ECED for ; Mon, 22 Jul 2024
11:43:46 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kalrayinc.com; s=sec-sig-email; t=1721641426; bh=7YR6j+1R9mz+vgiTv2FAgB1g9CVaQI/buI0GwB5Mols=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=V9NvfVC8Gh9NdexNuysa6QNcEWw0Xx9VfDUMfGCL5Lg46w7TCOcrJpXcC+NOhg55O u0K0vGPc/HyDKH19HFD5aYiqEHxyb8epmKXkO+ymxfwcvt8Ukpl3ygoUW94gpWqs7L LeFBABjcrGt358/5N953KH9Q0WuBTlb/8IuG53qo= Received: from fx403 (localhost [127.0.0.1]) by fx403.security-mail.net (Postfix) with ESMTP id 7B50248F114; Mon, 22 Jul 2024 11:43:46 +0200 (CEST) Received: from srvsmtp.lin.mbt.kalray.eu (unknown [217.181.231.53]) by fx403.security-mail.net (Postfix) with ESMTPS id 81A1548E6F0; Mon, 22 Jul 2024 11:43:45 +0200 (CEST) Received: from junon.lan.kalrayinc.com (unknown [192.168.37.161]) by srvsmtp.lin.mbt.kalray.eu (Postfix) with ESMTPS id 4C4CE4032F; Mon, 22 Jul 2024 11:43:45 +0200 (CEST) X-Secumail-id: From: ysionneau@kalrayinc.com To: linux-kernel@vger.kernel.org, Eric Biederman , Kees Cook Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Clement Leger , Marius Gligor , linux-mm@kvack.org Subject: [RFC PATCH v3 27/37] kvx: Add ELF relocations and module support Date: Mon, 22 Jul 2024 11:41:38 +0200 Message-ID: <20240722094226.21602-28-ysionneau@kalrayinc.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com> References: <20240722094226.21602-1-ysionneau@kalrayinc.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ALTERMIMEV2_out: done Content-Type: text/plain; charset="utf-8" From: Yann Sionneau Add ELF-related definition and module relocation code for basic kvx support. Co-developed-by: Clement Leger Signed-off-by: Clement Leger Co-developed-by: Julian Vetter Signed-off-by: Julian Vetter Co-developed-by: Marius Gligor Signed-off-by: Marius Gligor Signed-off-by: Yann Sionneau --- Notes: V1 -> V2: no changes V2 -> V3: define CORE_DUMP_USE_REGSET --- arch/kvx/include/asm/elf.h | 156 ++++++++++++++++++++++++++++++++ arch/kvx/include/asm/vermagic.h | 12 +++ arch/kvx/kernel/module.c | 148 ++++++++++++++++++++++++++++++ 3 files changed, 316 insertions(+) create mode 100644 arch/kvx/include/asm/elf.h create mode 100644 arch/kvx/include/asm/vermagic.h create mode 100644 arch/kvx/kernel/module.c diff --git a/arch/kvx/include/asm/elf.h b/arch/kvx/include/asm/elf.h new file mode 100644 index 0000000000000..7b6804a939e5e --- /dev/null +++ b/arch/kvx/include/asm/elf.h @@ -0,0 +1,156 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Yann Sionneau + * Clement Leger + * Marius Gligor + * Guillaume Thouvenin + */ + +#ifndef _ASM_KVX_ELF_H +#define _ASM_KVX_ELF_H + +#include + +#include + +/* + * These are used to set parameters in the core dumps. + */ +#define ELF_CLASS ELFCLASS64 +#define ELF_DATA ELFDATA2LSB +#define ELF_ARCH EM_KVX + +typedef uint64_t elf_greg_t; +typedef uint64_t elf_fpregset_t; + +#define ELF_NGREG (sizeof(struct user_regs_struct) / sizeof(elf_greg_t)) +typedef elf_greg_t elf_gregset_t[ELF_NGREG]; + +#define ELF_CORE_COPY_REGS(dest, regs) \ + *(struct user_regs_struct *)&(dest) =3D \ + *(struct user_regs_struct *)regs; \ + +/* + * This is used to ensure we don't load something for the wrong architectu= re. 
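+ * (Both execve() and the module loader apply this check, so only
+ * EM_KVX binaries are ever accepted.)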
+ */ +#define elf_check_arch(x) ((x)->e_machine =3D=3D EM_KVX) + +#define ELF_CORE_EFLAGS 0x1308 + +#define CORE_DUMP_USE_REGSET +#define ELF_EXEC_PAGESIZE (PAGE_SIZE) + +/* + * This is the location that an ET_DYN program is loaded if exec'ed. Typi= cal + * use of this is to invoke "./ld.so someprog" to test out a new version of + * the loader. We need to make sure that it is out of the way of the prog= ram + * that it will "exec", and that there is sufficient room for the brk. + */ +#define ELF_ET_DYN_BASE ((TASK_SIZE / 3) * 2) + +/* + * This yields a mask that user programs can use to figure out what + * instruction set this CPU supports. This could be done in user space, + * but it's not easy, and we've already done it here. + */ +#define ELF_HWCAP (elf_hwcap) +extern unsigned long elf_hwcap; + +/* + * This yields a string that ld.so will use to load implementation + * specific libraries for optimization. This is more specific in + * intent than poking at uname or /proc/cpuinfo. + */ +#define ELF_PLATFORM (NULL) + +#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1 +struct linux_binprm; +extern int arch_setup_additional_pages(struct linux_binprm *bprm, + int uses_interp); + +/* KVX relocs */ +#define R_KVX_NONE 0 +#define R_KVX_16 1 +#define R_KVX_32 2 +#define R_KVX_64 3 +#define R_KVX_S16_PCREL 4 +#define R_KVX_PCREL17 5 +#define R_KVX_PCREL27 6 +#define R_KVX_32_PCREL 7 +#define R_KVX_S37_PCREL_LO10 8 +#define R_KVX_S37_PCREL_UP27 9 +#define R_KVX_S43_PCREL_LO10 10 +#define R_KVX_S43_PCREL_UP27 11 +#define R_KVX_S43_PCREL_EX6 12 +#define R_KVX_S64_PCREL_LO10 13 +#define R_KVX_S64_PCREL_UP27 14 +#define R_KVX_S64_PCREL_EX27 15 +#define R_KVX_64_PCREL 16 +#define R_KVX_S16 17 +#define R_KVX_S32_LO5 18 +#define R_KVX_S32_UP27 19 +#define R_KVX_S37_LO10 20 +#define R_KVX_S37_UP27 21 +#define R_KVX_S37_GOTOFF_LO10 22 +#define R_KVX_S37_GOTOFF_UP27 23 +#define R_KVX_S43_GOTOFF_LO10 24 +#define R_KVX_S43_GOTOFF_UP27 25 +#define R_KVX_S43_GOTOFF_EX6 26 +#define R_KVX_32_GOTOFF 27 +#define R_KVX_64_GOTOFF 28 +#define R_KVX_32_GOT 29 +#define R_KVX_S37_GOT_LO10 30 +#define R_KVX_S37_GOT_UP27 31 +#define R_KVX_S43_GOT_LO10 32 +#define R_KVX_S43_GOT_UP27 33 +#define R_KVX_S43_GOT_EX6 34 +#define R_KVX_64_GOT 35 +#define R_KVX_GLOB_DAT 36 +#define R_KVX_COPY 37 +#define R_KVX_JMP_SLOT 38 +#define R_KVX_RELATIVE 39 +#define R_KVX_S43_LO10 40 +#define R_KVX_S43_UP27 41 +#define R_KVX_S43_EX6 42 +#define R_KVX_S64_LO10 43 +#define R_KVX_S64_UP27 44 +#define R_KVX_S64_EX27 45 +#define R_KVX_S37_GOTADDR_LO10 46 +#define R_KVX_S37_GOTADDR_UP27 47 +#define R_KVX_S43_GOTADDR_LO10 48 +#define R_KVX_S43_GOTADDR_UP27 49 +#define R_KVX_S43_GOTADDR_EX6 50 +#define R_KVX_S64_GOTADDR_LO10 51 +#define R_KVX_S64_GOTADDR_UP27 52 +#define R_KVX_S64_GOTADDR_EX27 53 +#define R_KVX_64_DTPMOD 54 +#define R_KVX_64_DTPOFF 55 +#define R_KVX_S37_TLS_DTPOFF_LO10 56 +#define R_KVX_S37_TLS_DTPOFF_UP27 57 +#define R_KVX_S43_TLS_DTPOFF_LO10 58 +#define R_KVX_S43_TLS_DTPOFF_UP27 59 +#define R_KVX_S43_TLS_DTPOFF_EX6 60 +#define R_KVX_S37_TLS_GD_LO10 61 +#define R_KVX_S37_TLS_GD_UP27 62 +#define R_KVX_S43_TLS_GD_LO10 63 +#define R_KVX_S43_TLS_GD_UP27 64 +#define R_KVX_S43_TLS_GD_EX6 65 +#define R_KVX_S37_TLS_LD_LO10 66 +#define R_KVX_S37_TLS_LD_UP27 67 +#define R_KVX_S43_TLS_LD_LO10 68 +#define R_KVX_S43_TLS_LD_UP27 69 +#define R_KVX_S43_TLS_LD_EX6 70 +#define R_KVX_64_TPOFF 71 +#define R_KVX_S37_TLS_IE_LO10 72 +#define R_KVX_S37_TLS_IE_UP27 73 +#define R_KVX_S43_TLS_IE_LO10 74 +#define R_KVX_S43_TLS_IE_UP27 75 +#define 
R_KVX_S43_TLS_IE_EX6 76 +#define R_KVX_S37_TLS_LE_LO10 77 +#define R_KVX_S37_TLS_LE_UP27 78 +#define R_KVX_S43_TLS_LE_LO10 79 +#define R_KVX_S43_TLS_LE_UP27 80 +#define R_KVX_S43_TLS_LE_EX6 81 + +#endif /* _ASM_KVX_ELF_H */ diff --git a/arch/kvx/include/asm/vermagic.h b/arch/kvx/include/asm/vermagi= c.h new file mode 100644 index 0000000000000..fef9a33065df9 --- /dev/null +++ b/arch/kvx/include/asm/vermagic.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_VERMAGIC_H +#define _ASM_KVX_VERMAGIC_H + +#define MODULE_ARCH_VERMAGIC "kvx" + +#endif /* _ASM_KVX_VERMAGIC_H */ diff --git a/arch/kvx/kernel/module.c b/arch/kvx/kernel/module.c new file mode 100644 index 0000000000000..b9383792ae456 --- /dev/null +++ b/arch/kvx/kernel/module.c @@ -0,0 +1,148 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Yann Sionneau + * Clement Leger + */ + +#include +#include +#include + + +static int apply_rela_bits(Elf64_Addr loc, Elf64_Addr val, + int sign, int immsize, int bits, int rshift, + int lshift, unsigned int relocnum, + struct module *me) +{ + unsigned long long umax; + long long min, max; + unsigned long long mask =3D GENMASK_ULL(bits + lshift - 1, lshift); + + if (sign) { + min =3D -(1ULL << (immsize - 1)); + max =3D (1ULL << (immsize - 1)) - 1; + if ((long long) val < min || (long long) val > max) + goto too_big; + val =3D (Elf64_Addr)(((long) val) >> rshift); + } else { + if (immsize < 64) + umax =3D (1ULL << immsize) - 1; + else + umax =3D -1ULL; + if ((unsigned long long) val > umax) + goto too_big; + val >>=3D rshift; + } + + val <<=3D lshift; + val &=3D mask; + if (bits <=3D 32) + *(u32 *) loc =3D (*(u32 *)loc & ~mask) | val; + else + *(u64 *) loc =3D (*(u64 *)loc & ~mask) | val; + + return 0; +too_big: + pr_err("%s: value %llx does not fit in %d bits for reloc %u", + me->name, val, bits, relocnum); + return -ENOEXEC; +} + +int apply_relocate_add(Elf64_Shdr *sechdrs, + const char *strtab, + unsigned int symindex, + unsigned int relsec, + struct module *me) +{ + unsigned int i; + Elf64_Addr loc; + u64 val; + s64 sval; + Elf64_Sym *sym; + Elf64_Rela *rel =3D (void *)sechdrs[relsec].sh_addr; + int ret =3D 0; + + pr_debug("Applying relocate section %u to %u\n", + relsec, sechdrs[relsec].sh_info); + + for (i =3D 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) { + /* This is where to make the change */ + loc =3D (Elf64_Addr)sechdrs[sechdrs[relsec].sh_info].sh_addr + + rel[i].r_offset; + /* This is the symbol it is referring to. Note that all + * undefined symbols have been resolved. 
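+ * Most kvx immediates do not fit in a single 32-bit syllable, so one
+ * symbol value is applied through several relocations: e.g. a 43-bit
+ * immediate is split into LO10 (bits 9..0), UP27 (bits 36..10) and
+ * EX6 (bits 42..37), and apply_rela_bits() places each slice with the
+ * matching rshift/lshift pair.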
+ */
+ sym = (Elf64_Sym *)sechdrs[symindex].sh_addr
+ + ELF64_R_SYM(rel[i].r_info);
+
+ pr_debug("type %d st_value %llx r_addend %llx loc %llx offset %llx\n",
+ (int)ELF64_R_TYPE(rel[i].r_info),
+ sym->st_value, rel[i].r_addend, (uint64_t)loc,
+ rel[i].r_offset);
+
+ val = sym->st_value + rel[i].r_addend;
+ switch (ELF64_R_TYPE(rel[i].r_info)) {
+ case R_KVX_NONE:
+ break;
+ case R_KVX_32:
+ ret = apply_rela_bits(loc, val, 0, 32, 32, 0, 0,
+ ELF64_R_TYPE(rel[i].r_info),
+ me);
+ break;
+ case R_KVX_64:
+ ret = apply_rela_bits(loc, val, 0, 64, 64, 0, 0,
+ ELF64_R_TYPE(rel[i].r_info),
+ me);
+ break;
+ case R_KVX_S43_LO10:
+ ret = apply_rela_bits(loc, val, 1, 43, 10, 0, 6,
+ ELF64_R_TYPE(rel[i].r_info),
+ me);
+ break;
+ case R_KVX_S64_LO10:
+ ret = apply_rela_bits(loc, val, 1, 64, 10, 0, 6,
+ ELF64_R_TYPE(rel[i].r_info),
+ me);
+ break;
+ case R_KVX_S43_UP27:
+ ret = apply_rela_bits(loc, val, 1, 43, 27, 10, 0,
+ ELF64_R_TYPE(rel[i].r_info),
+ me);
+ break;
+ case R_KVX_S64_UP27:
+ ret = apply_rela_bits(loc, val, 1, 64, 27, 10, 0,
+ ELF64_R_TYPE(rel[i].r_info),
+ me);
+ break;
+ case R_KVX_S43_EX6:
+ ret = apply_rela_bits(loc, val, 1, 43, 6, 37, 0,
+ ELF64_R_TYPE(rel[i].r_info),
+ me);
+ break;
+ case R_KVX_S64_EX27:
+ ret = apply_rela_bits(loc, val, 1, 64, 27, 37, 0,
+ ELF64_R_TYPE(rel[i].r_info),
+ me);
+ break;
+ case R_KVX_PCREL27:
+ if (__builtin_sub_overflow(val, loc, &sval)) {
+ pr_err("%s: Signed integer overflow, this should not happen\n",
+ me->name);
+ return -ENOEXEC;
+ }
+ sval >>= 2;
+ ret = apply_rela_bits(loc, (Elf64_Addr)sval, 1, 27, 27,
+ 0, 0,
+ ELF64_R_TYPE(rel[i].r_info),
+ me);
+ break;
+ default:
+ pr_err("%s: Unknown relocation: %llu\n",
+ me->name, ELF64_R_TYPE(rel[i].r_info));
+ ret = -ENOEXEC;
+ }
+ }
+ return ret;
+}
+
--
2.45.2

Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=kalrayinc.com header.i=@kalrayinc.com
header.b="mXjs2XZ9" Received: from localhost (fx405.security-mail.net [127.0.0.1]) by fx405.security-mail.net (Postfix) with ESMTP id 80668336046 for ; Mon, 22 Jul 2024 11:43:47 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kalrayinc.com; s=sec-sig-email; t=1721641427; bh=ueplDH8FmOwuHw0oGeEXZslFcpVWycsk0pYHbIwg814=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=mXjs2XZ94YMHurnc8iVfjCUbI7aPDTc6dSTgFpMumAl++V8rCvIKkbzYMs1MLGC3r RDGrJjFPG8jjynLHCYbMhc5N2MEsgt2yTKn5ul/Xg82typ44j6hRFMrt8bv9OiLKS0 k6M+KIUUQs5gTzBR4y6KYd583riKBKMHar4pike4= Received: from fx405 (fx405.security-mail.net [127.0.0.1]) by fx405.security-mail.net (Postfix) with ESMTP id 477FE336006; Mon, 22 Jul 2024 11:43:47 +0200 (CEST) Received: from srvsmtp.lin.mbt.kalray.eu (unknown [217.181.231.53]) by fx405.security-mail.net (Postfix) with ESMTPS id 74871335FF6; Mon, 22 Jul 2024 11:43:46 +0200 (CEST) Received: from junon.lan.kalrayinc.com (unknown [192.168.37.161]) by srvsmtp.lin.mbt.kalray.eu (Postfix) with ESMTPS id 3802240317; Mon, 22 Jul 2024 11:43:46 +0200 (CEST) X-Secumail-id: <15aa3.669e29d2.7154c.0> From: ysionneau@kalrayinc.com To: linux-kernel@vger.kernel.org Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Clement Leger , Guillaume Thouvenin , Julien Villette Subject: [RFC PATCH v3 28/37] kvx: Add misc common routines Date: Mon, 22 Jul 2024 11:41:39 +0200 Message-ID: <20240722094226.21602-29-ysionneau@kalrayinc.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com> References: <20240722094226.21602-1-ysionneau@kalrayinc.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ALTERMIMEV2_out: done Content-Type: text/plain; charset="utf-8" From: Yann Sionneau Add some misc common routines for kvx, including: asm-offsets routines, futex functions, i/o memory access functions. Co-developed-by: Clement Leger Signed-off-by: Clement Leger Co-developed-by: Guillaume Thouvenin Signed-off-by: Guillaume Thouvenin Co-developed-by: Jonathan Borne Signed-off-by: Jonathan Borne Co-developed-by: Julian Vetter Signed-off-by: Julian Vetter Co-developed-by: Julien Villette Signed-off-by: Julien Villette Signed-off-by: Yann Sionneau --- Notes: V1 -> V2: no changes V2 -> V3: typo, indent, function renamed. --- arch/kvx/include/asm/futex.h | 140 ++++++++++++++++++++++++++++++ arch/kvx/include/asm/io.h | 34 ++++++++ arch/kvx/kernel/asm-offsets.c | 155 ++++++++++++++++++++++++++++++++++ arch/kvx/kernel/io.c | 96 +++++++++++++++++++++ 4 files changed, 425 insertions(+) create mode 100644 arch/kvx/include/asm/futex.h create mode 100644 arch/kvx/include/asm/io.h create mode 100644 arch/kvx/kernel/asm-offsets.c create mode 100644 arch/kvx/kernel/io.c diff --git a/arch/kvx/include/asm/futex.h b/arch/kvx/include/asm/futex.h new file mode 100644 index 0000000000000..15e2a74fa7b43 --- /dev/null +++ b/arch/kvx/include/asm/futex.h @@ -0,0 +1,140 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2018-2023 Kalray Inc. 
+ * Authors: + * Clement Leger + * Yann Sionneau + * Jonathan Borne + * + * Part of code is taken from RiscV port + */ + +#ifndef _ASM_KVX_FUTEX_H +#define _ASM_KVX_FUTEX_H + +#ifdef __KERNEL__ + +#include +#include + +#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg) \ +{ \ + __enable_user_access(); \ + __asm__ __volatile__ ( \ + " fence\n" \ + " ;;\n" \ + "1: lwz $r63 =3D 0[%[u]]\n" \ + " ;;\n" \ + " " insn "\n" \ + " ;;\n" \ + " acswapw 0[%[u]] =3D $r62r63\n" \ + " ;;\n" \ + " cb.deqz $r62? 1b\n" \ + " ;;\n" \ + " copyd %[ov] =3D $r63\n" \ + " ;;\n" \ + "2:\n" \ + " .section .fixup,\"ax\"\n" \ + "3: make %[r] =3D 2b\n" \ + " ;;\n" \ + " make %[r] =3D %[e]\n" \ + " igoto %[r]\n" \ + " ;;\n" \ + " .previous\n" \ + " .section __ex_table,\"a\"\n" \ + " .align 8\n" \ + " .dword 1b,3b\n" \ + " .dword 2b,3b\n" \ + " .previous\n" \ + : [r] "+r" (ret), [ov] "+r" (oldval) \ + : [u] "r" (uaddr), \ + [op] "r" (oparg), [e] "i" (-EFAULT) \ + : "r62", "r63", "memory"); \ + __disable_user_access(); \ +} + +static inline int +arch_futex_atomic_op_inuser(int op, u32 oparg, int *oval, u32 __user *uadd= r) +{ + int oldval =3D 0, ret =3D 0; + + if (!access_ok(uaddr, sizeof(u32))) + return -EFAULT; + switch (op) { + case FUTEX_OP_SET: /* *(int *)UADDR =3D OPARG; */ + __futex_atomic_op("copyd $r62 =3D %[op]", + ret, oldval, uaddr, oparg); + break; + case FUTEX_OP_ADD: /* *(int *)UADDR +=3D OPARG; */ + __futex_atomic_op("addw $r62 =3D $r63, %[op]", + ret, oldval, uaddr, oparg); + break; + case FUTEX_OP_OR: /* *(int *)UADDR |=3D OPARG; */ + __futex_atomic_op("orw $r62 =3D $r63, %[op]", + ret, oldval, uaddr, oparg); + break; + case FUTEX_OP_ANDN: /* *(int *)UADDR &=3D ~OPARG; */ + __futex_atomic_op("andnw $r62 =3D %[op], $r63", + ret, oldval, uaddr, oparg); + break; + case FUTEX_OP_XOR: + __futex_atomic_op("xorw $r62 =3D $r63, %[op]", + ret, oldval, uaddr, oparg); + break; + default: + ret =3D -ENOSYS; + } + + if (!ret) + *oval =3D oldval; + + return ret; +} + +static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uad= dr, + u32 oldval, u32 newval) +{ + int ret =3D 0; + + if (!access_ok(uaddr, sizeof(u32))) + return -EFAULT; + __enable_user_access(); + __asm__ __volatile__ ( + " fence\n" /* commit previous stores */ + " copyd $r63 =3D %[ov]\n" /* init "expect" with ov */ + " copyd $r62 =3D %[nv]\n" /* init "update" with nv */ + " ;;\n" + "1: acswapw 0[%[u]] =3D $r62r63\n" + " ;;\n" + " cb.dnez $r62? 3f\n" /* if acswap ok -> return */ + " ;;\n" + "2: lws $r63 =3D 0[%[u]]\n" /* fail -> load old value */ + " ;;\n" + " compw.ne $r62 =3D $r63, %[ov]\n" /* check if equal to "old" */ + " ;;\n" + " cb.deqz $r62? 1b\n" /* if not equal, try again */ + " ;;\n" + "3:\n" + " .section .fixup,\"ax\"\n" + "4: make %[r] =3D 3b\n" + " ;;\n" + " make %[r] =3D %[e]\n" + " igoto %[r]\n" /* goto 3b */ + " ;;\n" + " .previous\n" + " .section __ex_table,\"a\"\n" + " .align 8\n" + " .dword 1b,4b\n" + " .dword 2b,4b\n" + ".previous\n" + : [r] "+r" (ret) + : [ov] "r" (oldval), [nv] "r" (newval), + [e] "i" (-EFAULT), [u] "r" (uaddr) + : "r62", "r63", "memory"); + __disable_user_access(); + *uval =3D oldval; + return ret; +} + +#endif +#endif /* _ASM_KVX_FUTEX_H */ diff --git a/arch/kvx/include/asm/io.h b/arch/kvx/include/asm/io.h new file mode 100644 index 0000000000000..c5e458c59bbb2 --- /dev/null +++ b/arch/kvx/include/asm/io.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_IO_H +#define _ASM_KVX_IO_H + +#include + +#include +#include + +#define _PAGE_IOREMAP _PAGE_KERNEL_DEVICE + +/* + * String version of I/O memory access operations. + */ +extern void __memcpy_fromio(void *to, const volatile void __iomem *from, + size_t count); +extern void __memcpy_toio(volatile void __iomem *to, const void *from, + size_t count); +extern void __memset_io(volatile void __iomem *dst, int c, size_t count); + +#define memset_io(c, v, l) __memset_io((c), (v), (l)) +#define memcpy_fromio(a, c, l) __memcpy_fromio((a), (c), (l)) +#define memcpy_toio(c, a, l) __memcpy_toio((c), (a), (l)) + +#include + +extern int devmem_is_allowed(unsigned long pfn); + +#endif /* _ASM_KVX_IO_H */ diff --git a/arch/kvx/kernel/asm-offsets.c b/arch/kvx/kernel/asm-offsets.c new file mode 100644 index 0000000000000..39312f9e64270 --- /dev/null +++ b/arch/kvx/kernel/asm-offsets.c @@ -0,0 +1,155 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Guillaume Thouvenin + * Yann Sionneau + */ + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +int asm_offsets(void); + +int asm_offsets(void) +{ + BUILD_BUG_ON(sizeof(struct pt_regs) !=3D PT_REGS_STRUCT_EXPECTED_SIZE); + /* + * For stack alignment purposes we must make sure the pt_regs size is + * a mutliple of stack_align + */ + BUILD_BUG_ON(!IS_ALIGNED(sizeof(struct pt_regs), STACK_ALIGNMENT)); + + /* Check that user_pt_regs size matches the beginning of pt_regs */ + BUILD_BUG_ON((offsetof(struct user_regs_struct, spc) + sizeof(uint64_t)) = !=3D + sizeof(struct user_regs_struct)); + + DEFINE(FIX_GDB_MEM_BASE_IDX, FIX_GDB_BARE_DISPLACED_MEM_BASE); + + /* + * We allocate a pt_regs on the stack when entering the kernel. This + * ensures the alignment is sane. + */ + DEFINE(PT_SIZE_ON_STACK, sizeof(struct pt_regs)); + DEFINE(TI_FLAGS_SIZE, sizeof(unsigned long)); + DEFINE(QUAD_REG_SIZE, 4 * sizeof(uint64_t)); + + /* + * When restoring registers, we do not want to restore r12 + * right now since this is our stack pointer. Allow to save + * only $r13 by using this offset. 
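+ * ($r12 is the stack pointer in the kvx ABI, hence the special casing.)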
+ */ + OFFSET(PT_R12, pt_regs, r12); + OFFSET(PT_R13, pt_regs, r13); + OFFSET(PT_TP, pt_regs, tp); + OFFSET(PT_R14R15, pt_regs, r14); + OFFSET(PT_R16R17, pt_regs, r16); + OFFSET(PT_R18R19, pt_regs, r18); + OFFSET(PT_FP, pt_regs, fp); + OFFSET(PT_SPS, pt_regs, sps); + + /* Quad description */ + OFFSET(PT_Q0, pt_regs, r0); + OFFSET(PT_Q4, pt_regs, r4); + OFFSET(PT_Q8, pt_regs, r8); + OFFSET(PT_Q12, pt_regs, r12); + OFFSET(PT_Q16, pt_regs, r16); + OFFSET(PT_Q20, pt_regs, r20); + OFFSET(PT_Q24, pt_regs, r24); + OFFSET(PT_Q28, pt_regs, r28); + OFFSET(PT_Q32, pt_regs, r32); + OFFSET(PT_Q36, pt_regs, r36); + OFFSET(PT_R38, pt_regs, r38); + OFFSET(PT_Q40, pt_regs, r40); + OFFSET(PT_Q44, pt_regs, r44); + OFFSET(PT_Q48, pt_regs, r48); + OFFSET(PT_Q52, pt_regs, r52); + OFFSET(PT_Q56, pt_regs, r56); + OFFSET(PT_Q60, pt_regs, r60); + OFFSET(PT_CS_SPC_SPS_ES, pt_regs, cs); + OFFSET(PT_LC_LE_LS_RA, pt_regs, lc); + OFFSET(PT_ILR, pt_regs, ilr); + OFFSET(PT_ORIG_R0, pt_regs, orig_r0); + + /* + * Flags in thread info + */ + OFFSET(TASK_TI_FLAGS, task_struct, thread_info.flags); + + /* + * Stack pointer + */ + OFFSET(TASK_THREAD_KERNEL_SP, task_struct, thread.kernel_sp); + + /* + * Offsets to save registers in switch_to using quads + */ + OFFSET(CTX_SWITCH_RA_SP_R18_R19, task_struct, thread.ctx_switch.ra); + OFFSET(CTX_SWITCH_Q20, task_struct, thread.ctx_switch.r20); + OFFSET(CTX_SWITCH_Q24, task_struct, thread.ctx_switch.r24); + OFFSET(CTX_SWITCH_Q28, task_struct, thread.ctx_switch.r28); + OFFSET(CTX_SWITCH_FP, task_struct, thread.ctx_switch.fp); + +#ifdef CONFIG_ENABLE_TCA + OFFSET(CTX_SWITCH_TCA_REGS, task_struct, thread.ctx_switch.tca_regs[0]); + OFFSET(CTX_SWITCH_TCA_REGS_SAVED, task_struct, + thread.ctx_switch.tca_regs_saved); + DEFINE(TCA_REG_SIZE, sizeof(struct tca_reg)); +#endif + + /* Save area offset */ + OFFSET(TASK_THREAD_SAVE_AREA, task_struct, thread.save_area); + + /* Fast tlb refill defines */ + OFFSET(TASK_ACTIVE_MM, task_struct, active_mm); + OFFSET(MM_PGD, mm_struct, pgd); +#ifdef CONFIG_KVX_DEBUG_ASN + OFFSET(MM_CTXT_ASN, mm_struct, context.asn); +#endif + +#ifdef CONFIG_KVX_MMU_STATS + DEFINE(MMU_REFILL_SIZE, sizeof(struct mmu_refill_stats)); + + OFFSET(MMU_STATS_REFILL_USER_OFF, mmu_stats, + refill[MMU_REFILL_TYPE_USER]); + OFFSET(MMU_STATS_REFILL_KERNEL_OFF, mmu_stats, + refill[MMU_REFILL_TYPE_KERNEL]); + OFFSET(MMU_STATS_REFILL_KERNEL_DIRECT_OFF, mmu_stats, + refill[MMU_REFILL_TYPE_KERNEL_DIRECT]); + OFFSET(MMU_STATS_CYCLES_BETWEEN_REFILL_OFF, mmu_stats, + cycles_between_refill); + OFFSET(MMU_STATS_LAST_REFILL, mmu_stats, last_refill); + + OFFSET(TASK_THREAD_ENTRY_TS, task_struct, thread.trap_entry_ts); +#endif + + DEFINE(ASM_PGDIR_SHIFT, PGDIR_SHIFT); + DEFINE(ASM_PMD_SHIFT, PMD_SHIFT); + + DEFINE(ASM_PGDIR_BITS, PGDIR_BITS); + DEFINE(ASM_PMD_BITS, PMD_BITS); + DEFINE(ASM_PTE_BITS, PTE_BITS); + + DEFINE(ASM_PTRS_PER_PGD, PTRS_PER_PGD); + DEFINE(ASM_PTRS_PER_PMD, PTRS_PER_PMD); + DEFINE(ASM_PTRS_PER_PTE, PTRS_PER_PTE); + + DEFINE(ASM_TLB_PS, TLB_DEFAULT_PS); + + return 0; +} diff --git a/arch/kvx/kernel/io.c b/arch/kvx/kernel/io.c new file mode 100644 index 0000000000000..0922c1d6d0f7b --- /dev/null +++ b/arch/kvx/kernel/io.c @@ -0,0 +1,96 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * derived from arch/arm/kernel/io.c + * + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#include +#include +#include + +#define REPLICATE_BYTE_MASK 0x0101010101010101 + +/* + * Copy data from IO memory space to "real" memory space. 
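+ * Bytes are copied one at a time until the I/O pointer is 8-byte
+ * aligned, then 64-bit words are moved, and the remaining tail is
+ * again copied byte by byte. The same head/bulk/tail scheme is used
+ * by the two routines below.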
+ */
+void __memcpy_fromio(void *to, const volatile void __iomem *from, size_t count)
+{
+ while (count && !IS_ALIGNED((unsigned long)from, 8)) {
+ *(u8 *)to = __raw_readb(from);
+ from++;
+ to++;
+ count--;
+ }
+
+ while (count >= 8) {
+ *(u64 *)to = __raw_readq(from);
+ from += 8;
+ to += 8;
+ count -= 8;
+ }
+
+ while (count) {
+ *(u8 *)to = __raw_readb(from);
+ from++;
+ to++;
+ count--;
+ }
+}
+EXPORT_SYMBOL(__memcpy_fromio);
+
+/*
+ * Copy data from "real" memory space to IO memory space.
+ */
+void __memcpy_toio(volatile void __iomem *to, const void *from, size_t count)
+{
+ while (count && !IS_ALIGNED((unsigned long)to, 8)) {
+ __raw_writeb(*(u8 *)from, to);
+ from++;
+ to++;
+ count--;
+ }
+
+ while (count >= 8) {
+ __raw_writeq(*(u64 *)from, to);
+ from += 8;
+ to += 8;
+ count -= 8;
+ }
+
+ while (count) {
+ __raw_writeb(*(u8 *)from, to);
+ from++;
+ to++;
+ count--;
+ }
+}
+EXPORT_SYMBOL(__memcpy_toio);
+
+/*
+ * "memset" on IO memory space.
+ */
+void __memset_io(volatile void __iomem *dst, int c, size_t count)
+{
+ u64 qc = __builtin_kvx_sbmm8(c, REPLICATE_BYTE_MASK);
+
+ while (count && !IS_ALIGNED((unsigned long)dst, 8)) {
+ __raw_writeb(c, dst);
+ dst++;
+ count--;
+ }
+
+ while (count >= 8) {
+ __raw_writeq(qc, dst);
+ dst += 8;
+ count -= 8;
+ }
+
+ while (count) {
+ __raw_writeb(c, dst);
+ dst++;
+ count--;
+ }
+}
+EXPORT_SYMBOL(__memset_io);
--
2.45.2

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kalrayinc.com; s=sec-sig-email; t=1721641429; bh=j+26YT+s2R1JHU4KrZw0Mbr2esOXJjuYIhfVdzOJL+E=; h=From:To:Cc:Subject:Date:In-Reply-To:References;
b=JaS7HPB4df8cA2Wmkgpg32Ba99mmjroGEXdpXdyxQrwIBe4bC/SBKiZ4OvQiap5dI x9YfcTmWdBBPwak65JrsJz72UrlO30XTMtNuKV74yRCKuR+qPIkA04NLVSyR6iovy+ hoNqaGTaYZ/zIIMhrfIkuSJkOzWZEa4BMmF4WwG4= Received: from fx302 (localhost [127.0.0.1]) by fx302.security-mail.net (Postfix) with ESMTP id E2E9E80B826; Mon, 22 Jul 2024 11:43:48 +0200 (CEST) Received: from srvsmtp.lin.mbt.kalray.eu (unknown [217.181.231.53]) by fx302.security-mail.net (Postfix) with ESMTPS id 5483380B938; Mon, 22 Jul 2024 11:43:48 +0200 (CEST) Received: from junon.lan.kalrayinc.com (unknown [192.168.37.161]) by srvsmtp.lin.mbt.kalray.eu (Postfix) with ESMTPS id F32D940317; Mon, 22 Jul 2024 11:43:47 +0200 (CEST) X-Secumail-id: From: ysionneau@kalrayinc.com To: linux-kernel@vger.kernel.org Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Clement Leger , Jules Maselbas , Marius Gligor Subject: [RFC PATCH v3 29/37] kvx: Add some library functions Date: Mon, 22 Jul 2024 11:41:40 +0200 Message-ID: <20240722094226.21602-30-ysionneau@kalrayinc.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com> References: <20240722094226.21602-1-ysionneau@kalrayinc.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ALTERMIMEV2_out: done Content-Type: text/plain; charset="utf-8" From: Yann Sionneau Add some library functions for kvx, including: delay, memset, memcpy, strlen, clear_page, copy_page, raw_copy_from/to_user, asm_clear_user and libgcc functions. Co-developed-by: Clement Leger Signed-off-by: Clement Leger Co-developed-by: Jules Maselbas Signed-off-by: Jules Maselbas Co-developed-by: Julian Vetter Signed-off-by: Julian Vetter Co-developed-by: Marius Gligor Signed-off-by: Marius Gligor Signed-off-by: Yann Sionneau --- Notes: V1 -> V2: no changes V2 -> V3: - Add code to remove dependency on libgcc (lib/div.c) - simplify memset assembly code: remove special handling of memset '\0' - typos - use SYM_FUNC_{START/END} instead of ENTRY/ENDPROC --- arch/kvx/include/asm/string.h | 20 ++++ arch/kvx/lib/clear_page.S | 40 +++++++ arch/kvx/lib/copy_page.S | 90 ++++++++++++++ arch/kvx/lib/delay.c | 40 +++++++ arch/kvx/lib/div.c | 198 +++++++++++++++++++++++++++++++ arch/kvx/lib/libgcc.h | 25 ++++ arch/kvx/lib/memcpy.c | 72 ++++++++++++ arch/kvx/lib/memset.S | 216 ++++++++++++++++++++++++++++++++++ arch/kvx/lib/strlen.S | 122 +++++++++++++++++++ arch/kvx/lib/usercopy.S | 90 ++++++++++++++ 10 files changed, 913 insertions(+) create mode 100644 arch/kvx/include/asm/string.h create mode 100644 arch/kvx/lib/clear_page.S create mode 100644 arch/kvx/lib/copy_page.S create mode 100644 arch/kvx/lib/delay.c create mode 100644 arch/kvx/lib/div.c create mode 100644 arch/kvx/lib/libgcc.h create mode 100644 arch/kvx/lib/memcpy.c create mode 100644 arch/kvx/lib/memset.S create mode 100644 arch/kvx/lib/strlen.S create mode 100644 arch/kvx/lib/usercopy.S diff --git a/arch/kvx/include/asm/string.h b/arch/kvx/include/asm/string.h new file mode 100644 index 0000000000000..677c1393a5cdb --- /dev/null +++ b/arch/kvx/include/asm/string.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger + * Jules Maselbas + */ + +#ifndef _ASM_KVX_STRING_H +#define _ASM_KVX_STRING_H + +#define __HAVE_ARCH_MEMSET +extern void *memset(void *s, int c, size_t n); + +#define __HAVE_ARCH_MEMCPY +extern void *memcpy(void *dest, const void *src, size_t n); + +#define __HAVE_ARCH_STRLEN +extern size_t strlen(const char *s); + +#endif /* _ASM_KVX_STRING_H */ diff --git a/arch/kvx/lib/clear_page.S b/arch/kvx/lib/clear_page.S new file mode 100644 index 0000000000000..8c97b2ae84bb3 --- /dev/null +++ b/arch/kvx/lib/clear_page.S @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Marius Gligor + * Clement Leger + */ + +#include +#include +#include + +#include +#include + +#define CLEAR_PAGE_LOOP_COUNT (PAGE_SIZE / 32) + +/* + * Clear page @dest. + * + * Parameters: + * r0 - dest page + */ +SYM_FUNC_START(clear_page) + make $r1 =3D CLEAR_PAGE_LOOP_COUNT + ;; + make $r4 =3D 0 + make $r5 =3D 0 + make $r6 =3D 0 + make $r7 =3D 0 + ;; + + loopdo $r1, clear_page_done + ;; + so 0[$r0] =3D $r4r5r6r7 + addd $r0 =3D $r0, 32 + ;; + clear_page_done: + ret + ;; +SYM_FUNC_END(clear_page) diff --git a/arch/kvx/lib/copy_page.S b/arch/kvx/lib/copy_page.S new file mode 100644 index 0000000000000..8bffc56ef9752 --- /dev/null +++ b/arch/kvx/lib/copy_page.S @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#include +#include + +#include + +/* We have 8 load/store octuple (32 bytes) per hardware loop */ +#define COPY_SIZE_PER_LOOP (32 * 8) +#define COPY_PAGE_LOOP_COUNT (PAGE_SIZE / COPY_SIZE_PER_LOOP) + +/* + * Copy a page from src to dest (both are page aligned) + * In order to recover from smem latency, unroll the loop to trigger multi= ple + * onfly loads and avoid waiting too much for them to return. 
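+ * Each trip through the hardware loop therefore moves 8 octuples, i.e.
+ * COPY_SIZE_PER_LOOP = 256 bytes, so a page takes PAGE_SIZE / 256
+ * iterations.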
+ * We use 8 * 32 load even though we could use more (up to 10 loads) to si= mplify + * the handling using a single hardware loop + * + * Parameters: + * r0 - dest + * r1 - src + */ +SYM_FUNC_START(copy_page) + make $r2 =3D COPY_PAGE_LOOP_COUNT + make $r3 =3D 0 + ;; + loopdo $r2, copy_page_done + ;; + /* + * Load 8 * 32 bytes using uncached access to avoid hitting + * the cache + */ + lo.xs $r32r33r34r35 =3D $r3[$r1] + /* Copy current copy index for store */ + copyd $r2 =3D $r3 + addd $r3 =3D $r3, 1 + ;; + lo.xs $r36r37r38r39 =3D $r3[$r1] + addd $r3 =3D $r3, 1 + ;; + lo.xs $r40r41r42r43 =3D $r3[$r1] + addd $r3 =3D $r3, 1 + ;; + lo.xs $r44r45r46r47 =3D $r3[$r1] + addd $r3 =3D $r3, 1 + ;; + lo.xs $r48r49r50r51 =3D $r3[$r1] + addd $r3 =3D $r3, 1 + ;; + lo.xs $r52r53r54r55 =3D $r3[$r1] + addd $r3 =3D $r3, 1 + ;; + lo.xs $r56r57r58r59 =3D $r3[$r1] + addd $r3 =3D $r3, 1 + ;; + lo.xs $r60r61r62r63 =3D $r3[$r1] + addd $r3 =3D $r3, 1 + ;; + /* And then store all of them */ + so.xs $r2[$r0] =3D $r32r33r34r35 + addd $r2 =3D $r2, 1 + ;; + so.xs $r2[$r0] =3D $r36r37r38r39 + addd $r2 =3D $r2, 1 + ;; + so.xs $r2[$r0] =3D $r40r41r42r43 + addd $r2 =3D $r2, 1 + ;; + so.xs $r2[$r0] =3D $r44r45r46r47 + addd $r2 =3D $r2, 1 + ;; + so.xs $r2[$r0] =3D $r48r49r50r51 + addd $r2 =3D $r2, 1 + ;; + so.xs $r2[$r0] =3D $r52r53r54r55 + addd $r2 =3D $r2, 1 + ;; + so.xs $r2[$r0] =3D $r56r57r58r59 + addd $r2 =3D $r2, 1 + ;; + so.xs $r2[$r0] =3D $r60r61r62r63 + ;; + copy_page_done: + ret + ;; +SYM_FUNC_END(copy_page) diff --git a/arch/kvx/lib/delay.c b/arch/kvx/lib/delay.c new file mode 100644 index 0000000000000..96e59114b47dd --- /dev/null +++ b/arch/kvx/lib/delay.c @@ -0,0 +1,40 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#include +#include + +#include +#include + +void __delay(unsigned long loops) +{ + cycles_t target_cycle =3D get_cycles() + loops; + + while (get_cycles() < target_cycle) + ; +} +EXPORT_SYMBOL(__delay); + +inline void __const_udelay(unsigned long xloops) +{ + u64 loops =3D (u64)xloops * (u64)loops_per_jiffy * HZ; + + __delay(loops >> 32); +} +EXPORT_SYMBOL(__const_udelay); + +void __udelay(unsigned long usecs) +{ + __const_udelay(usecs * 0x10C7UL); /* 2**32 / 1000000 (rounded up) */ +} +EXPORT_SYMBOL(__udelay); + +void __ndelay(unsigned long nsecs) +{ + __const_udelay(nsecs * 0x5UL); /* 2**32 / 1000000000 (rounded up) */ +} +EXPORT_SYMBOL(__ndelay); diff --git a/arch/kvx/lib/div.c b/arch/kvx/lib/div.c new file mode 100644 index 0000000000000..1e107732e1a23 --- /dev/null +++ b/arch/kvx/lib/div.c @@ -0,0 +1,198 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Benoit Dinechin + */ +#include +#include "libgcc.h" + +static inline uint64x4_t uint64x2_divmod(uint64x2_t a, uint64x2_t b) +{ + float64x2_t double1 =3D 1.0 - (float64x2_t){}; + int64x2_t bbig =3D (int64x2_t)b < 0; + int64x2_t bin01 =3D (uint64x2_t)b <=3D 1; + int64x2_t special =3D bbig | bin01; + // uint64x2_t q =3D bbig ? a >=3D b : a; + uint64x2_t q =3D __builtin_kvx_selectdp(-(a >=3D b), a, bbig, ".nez"); + // uint64x2_t r =3D bbig ? 
a - (b&-q) : 0; + uint64x2_t r =3D __builtin_kvx_selectdp(a - (b & -q), 0 - (uint64x2_t){},= bbig, ".nez"); + float64x2_t doublea =3D __builtin_kvx_floatudp(a, 0, ".rn.s"); + float64x2_t doubleb =3D __builtin_kvx_floatudp(b, 0, ".rn.s"); + float floatb_0 =3D __builtin_kvx_fnarrowdw(doubleb[0], ".rn.s"); + float floatb_1 =3D __builtin_kvx_fnarrowdw(doubleb[1], ".rn.s"); + float floatrec_0 =3D __builtin_kvx_frecw(floatb_0, ".rn.s"); + float floatrec_1 =3D __builtin_kvx_frecw(floatb_1, ".rn.s"); + + if (__builtin_kvx_anydp(b, ".eqz")) + goto div0; + + float64x2_t doublerec =3D {__builtin_kvx_fwidenwd(floatrec_0, ".s"), + __builtin_kvx_fwidenwd(floatrec_1, ".s")}; + float64x2_t doubleq0 =3D __builtin_kvx_fmuldp(doublea, doublerec, ".rn.s"= ); + uint64x2_t q0 =3D __builtin_kvx_fixedudp(doubleq0, 0, ".rn.s"); + int64x2_t a1 =3D (int64x2_t)(a - q0 * b); + float64x2_t alpha =3D __builtin_kvx_ffmsdp(doubleb, doublerec, double1, "= .rn.s"); + float64x2_t beta =3D __builtin_kvx_ffmadp(alpha, doublerec, doublerec, ".= rn.s"); + float64x2_t doublea1 =3D __builtin_kvx_floatdp(a1, 0, ".rn.s"); + float64x2_t gamma =3D __builtin_kvx_fmuldp(beta, doublea1, ".rn.s"); + int64x2_t q1 =3D __builtin_kvx_fixeddp(gamma, 0, ".rn.s"); + int64x2_t rem =3D a1 - q1 * b; + uint64x2_t quo =3D q0 + q1; + uint64x2_t cond =3D (uint64x2_t)(rem >> 63); + + // q =3D !special ? quo + cond : q; + q =3D __builtin_kvx_selectdp(quo + cond, q, special, ".eqz"); + // r =3D !special ? rem + (b & cond) : r; + r =3D __builtin_kvx_selectdp(rem + (b & cond), r, special, ".eqz"); + return __builtin_kvx_cat256(q, r); + +div0: + __builtin_trap(); +} + +static inline uint32x4_t uint32x2_divmod(uint32x2_t a, uint32x2_t b) +{ + int i; + + uint64x2_t acc =3D __builtin_kvx_widenwdp(a, ".z"); + uint64x2_t src =3D __builtin_kvx_widenwdp(b, ".z") << (32 - 1); + uint64x2_t wb =3D __builtin_kvx_widenwdp(b, ".z"); + uint32x2_t q, r; + + if (__builtin_kvx_anywp(b, ".eqz")) + goto div0; + // As `src =3D=3D b << (32 -1)` adding src yields `src =3D=3D b << 32`. 
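+ // What follows is, per 64-bit lane, a restoring ("shift then
+ // subtract") division. A scalar model of one step, assuming (this is
+ // an assumption, not a documented contract) that stsudp performs the
+ // usual shift-then-subtract step on a remainder:quotient pair:
+ //
+ //	acc <<= 1;		// shift remainder:quotient pair left
+ //	if (acc >= src) {	// does the divisor fit?
+ //		acc -= src;	// subtract it from the remainder part
+ //		acc |= 1;	// and record a 1 quotient bit
+ //	}
+ //
+ // After the 32 steps below, the low 32 bits of each lane hold the
+ // quotient and the high 32 bits the remainder, which matches how q
+ // and r are extracted with narrowdwp afterwards.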
+ src +=3D src & (wb > acc); + + for (i =3D 0; i < 32; i++) + acc =3D __builtin_kvx_stsudp(src, acc); + + q =3D __builtin_kvx_narrowdwp(acc, ""); + r =3D __builtin_kvx_narrowdwp(acc >> 32, ""); + return __builtin_kvx_cat128(q, r); +div0: + __builtin_trap(); +} + + +int32x2_t __divv2si3(int32x2_t a, int32x2_t b) +{ + uint32x2_t absa =3D __builtin_kvx_abswp(a, ""); + uint32x2_t absb =3D __builtin_kvx_abswp(b, ""); + uint32x4_t divmod =3D uint32x2_divmod(absa, absb); + int32x2_t result =3D __builtin_kvx_low64(divmod); + + return __builtin_kvx_selectwp(-result, result, a ^ b, ".ltz"); +} + + +uint64x2_t __udivv2di3(uint64x2_t a, uint64x2_t b) +{ + uint64x4_t divmod =3D uint64x2_divmod(a, b); + + return __builtin_kvx_low128(divmod); +} + +uint64x2_t __umodv2di3(uint64x2_t a, uint64x2_t b) +{ + uint64x4_t divmod =3D uint64x2_divmod(a, b); + + return __builtin_kvx_high128(divmod); +} + +int64x2_t __modv2di3(int64x2_t a, int64x2_t b) +{ + uint64x2_t absa =3D __builtin_kvx_absdp(a, ""); + uint64x2_t absb =3D __builtin_kvx_absdp(b, ""); + uint64x4_t divmod =3D uint64x2_divmod(absa, absb); + int64x2_t result =3D __builtin_kvx_high128(divmod); + + return __builtin_kvx_selectdp(-result, result, a, ".ltz"); +} + +uint64_t __udivdi3(uint64_t a, uint64_t b) +{ + uint64x2_t udivv2di3 =3D __udivv2di3(a - (uint64x2_t){}, b - (uint64x2_t)= {}); + + return (uint64_t)udivv2di3[1]; +} + +static inline uint64x2_t uint64_divmod(uint64_t a, uint64_t b) +{ + double double1 =3D 1.0; + int64_t bbig =3D (int64_t)b < 0; + int64_t bin01 =3D (uint64_t)b <=3D 1; + int64_t special =3D bbig | bin01; + // uint64_t q =3D bbig ? a >=3D b : a; + uint64_t q =3D __builtin_kvx_selectd(a >=3D b, a, bbig, ".dnez"); + // uint64_t r =3D bbig ? a - (b&-q) : 0; + uint64_t r =3D __builtin_kvx_selectd(a - (b & -q), 0, bbig, ".dnez"); + double doublea =3D __builtin_kvx_floatud(a, 0, ".rn.s"); + double doubleb =3D __builtin_kvx_floatud(b, 0, ".rn.s"); + float floatb =3D __builtin_kvx_fnarrowdw(doubleb, ".rn.s"); + float floatrec =3D __builtin_kvx_frecw(floatb, ".rn.s"); + + if (b =3D=3D 0) + goto div0; + + double doublerec =3D __builtin_kvx_fwidenwd(floatrec, ".s"); + double doubleq0 =3D __builtin_kvx_fmuld(doublea, doublerec, ".rn.s"); + uint64_t q0 =3D __builtin_kvx_fixedud(doubleq0, 0, ".rn.s"); + int64_t a1 =3D a - q0 * b; + double alpha =3D __builtin_kvx_ffmsd(doubleb, doublerec, double1, ".rn.s"= ); + double beta =3D __builtin_kvx_ffmad(alpha, doublerec, doublerec, ".rn.s"); + double doublea1 =3D __builtin_kvx_floatd(a1, 0, ".rn.s"); + double gamma =3D __builtin_kvx_fmuld(beta, doublea1, ".rn.s"); + int64_t q1 =3D __builtin_kvx_fixedd(gamma, 0, ".rn.s"); + int64_t rem =3D a1 - q1 * b; + uint64_t quo =3D q0 + q1; + uint64_t cond =3D rem >> 63; + + // q =3D !special ? quo + cond : q; + q =3D __builtin_kvx_selectd(quo + cond, q, special, ".deqz"); + // r =3D !special ? 
rem + (b & cond) : r; + r =3D __builtin_kvx_selectd(rem + (b & cond), r, special, ".deqz"); + + return (uint64x2_t){q, r}; + +div0: + __builtin_trap(); +} + + +int64_t __divdi3(int64_t a, int64_t b) +{ + uint64_t absa =3D __builtin_kvx_absd(a, ""); + uint64_t absb =3D __builtin_kvx_absd(b, ""); + uint64x2_t divmod =3D uint64_divmod(absa, absb); + + if ((a ^ b) < 0) + divmod[0] =3D -divmod[0]; + + return divmod[0]; +} + + +uint64_t __umoddi3(uint64_t a, uint64_t b) +{ + uint64x2_t umodv2di3 =3D __umodv2di3(a - (uint64x2_t){}, b - (uint64x2_t)= {}); + + return (uint64_t)umodv2di3[1]; +} + +int64_t __moddi3(int64_t a, int64_t b) +{ + int64x2_t modv2di3 =3D __modv2di3(a - (int64x2_t){}, b - (int64x2_t){}); + + return (int64_t)modv2di3[1]; +} + +int64x2_t __divv2di3(int64x2_t a, int64x2_t b) +{ + uint64x2_t absa =3D __builtin_kvx_absdp(a, ""); + uint64x2_t absb =3D __builtin_kvx_absdp(b, ""); + uint64x4_t divmod =3D uint64x2_divmod(absa, absb); + int64x2_t result =3D __builtin_kvx_low128(divmod); + + return __builtin_kvx_selectdp(-result, result, a ^ b, ".ltz"); +} diff --git a/arch/kvx/lib/libgcc.h b/arch/kvx/lib/libgcc.h new file mode 100644 index 0000000000000..cbbe33762ecc4 --- /dev/null +++ b/arch/kvx/lib/libgcc.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Benoit Dinechin + */ + +typedef uint32_t uint32x2_t __attribute((vector_size(2 * sizeof(uint32_t))= )); +typedef uint32_t uint32x4_t __attribute((vector_size(4 * sizeof(uint32_t))= )); +typedef int32_t int32x2_t __attribute((vector_size(2 * sizeof(int32_t)))); +typedef int64_t int64x2_t __attribute((vector_size(2 * sizeof(int64_t)))); +typedef uint64_t uint64x2_t __attribute((vector_size(2 * sizeof(uint64_t))= )); +typedef uint64_t uint64x4_t __attribute((vector_size(4 * sizeof(uint64_t))= )); + +typedef double float64_t; +typedef float64_t float64x2_t __attribute((vector_size(2 * sizeof(float64_= t)))); + +int32x2_t __divv2si3(int32x2_t a, int32x2_t b); +uint64x2_t __udivv2di3(uint64x2_t a, uint64x2_t b); +uint64x2_t __umodv2di3(uint64x2_t a, uint64x2_t b); +int64x2_t __modv2di3(int64x2_t a, int64x2_t b); +uint64_t __udivdi3(uint64_t a, uint64_t b); +int64_t __divdi3(int64_t a, int64_t b); +uint64_t __umoddi3(uint64_t a, uint64_t b); +int64_t __moddi3(int64_t a, int64_t b); +int64x2_t __divv2di3(int64x2_t a, int64x2_t b); diff --git a/arch/kvx/lib/memcpy.c b/arch/kvx/lib/memcpy.c new file mode 100644 index 0000000000000..17343537a4346 --- /dev/null +++ b/arch/kvx/lib/memcpy.c @@ -0,0 +1,72 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger
+ *            Yann Sionneau
+ */
+
+#include
+#include
+
+#include
+
+void *memcpy(void *dest, const void *src, size_t n)
+{
+	__uint128_t *tmp128_d = dest;
+	const __uint128_t *tmp128_s = src;
+	uint64_t *tmp64_d;
+	const uint64_t *tmp64_s;
+	uint32_t *tmp32_d;
+	const uint32_t *tmp32_s;
+	uint16_t *tmp16_d;
+	const uint16_t *tmp16_s;
+	uint8_t *tmp8_d;
+	const uint8_t *tmp8_s;
+
+	/* Copy 16 bytes at a time while possible, then 8/4/2/1 bytes */
+	while (n >= 16) {
+		*tmp128_d = *tmp128_s;
+		tmp128_d++;
+		tmp128_s++;
+		n -= 16;
+	}
+
+	tmp64_d = (uint64_t *) tmp128_d;
+	tmp64_s = (uint64_t *) tmp128_s;
+	while (n >= 8) {
+		*tmp64_d = *tmp64_s;
+		tmp64_d++;
+		tmp64_s++;
+		n -= 8;
+	}
+
+	tmp32_d = (uint32_t *) tmp64_d;
+	tmp32_s = (uint32_t *) tmp64_s;
+	while (n >= 4) {
+		*tmp32_d = *tmp32_s;
+		tmp32_d++;
+		tmp32_s++;
+		n -= 4;
+	}
+
+	tmp16_d = (uint16_t *) tmp32_d;
+	tmp16_s = (uint16_t *) tmp32_s;
+	while (n >= 2) {
+		*tmp16_d = *tmp16_s;
+		tmp16_d++;
+		tmp16_s++;
+		n -= 2;
+	}
+
+	tmp8_d = (uint8_t *) tmp16_d;
+	tmp8_s = (uint8_t *) tmp16_s;
+	while (n >= 1) {
+		*tmp8_d = *tmp8_s;
+		tmp8_d++;
+		tmp8_s++;
+		n--;
+	}
+
+	return dest;
+}
+EXPORT_SYMBOL(memcpy);
diff --git a/arch/kvx/lib/memset.S b/arch/kvx/lib/memset.S
new file mode 100644
index 0000000000000..bd2eec89ae1a3
--- /dev/null
+++ b/arch/kvx/lib/memset.S
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Marius Gligor
+ *            Jules Maselbas
+ */
+
+#include
+
+#include
+
+#define REPLICATE_BYTE_MASK	0x0101010101010101
+#define MIN_SIZE_FOR_ALIGN	128
+
+/*
+ * Optimized memset for the kvx architecture
+ *
+ * In order to optimize memset on kvx, we can use several features:
+ * - conditional stores, which avoid branch penalties
+ * - store half/word/double/quad/octuple, to store up to 32 bytes at a time
+ * - hardware loops for the steady-state case.
+ *
+ * First, we check whether the size is below a minimum size. If so, we
+ * skip the alignment part. The kvx architecture supports misaligned
+ * accesses, and the penalty for a misaligned access is lower than the
+ * cost of realigning, so for small sizes we do not bother realigning.
+ * The sbmm8 instruction is used to replicate the pattern byte on all
+ * bytes of a register in one call.
+ * Once alignment has been reached, we can use the hardware loop to
+ * optimize throughput. Care must be taken to align hardware loops on at
+ * least 8 bytes for better performance.
+ * Once the main loop is done, we finish the job by checking the
+ * remaining length and issuing the necessary stores for the last bytes.
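+ * For example, memset(p, 0xAB, n) first builds the 64-bit pattern
+ * 0xABABABABABABABAB via sbmm8, then widens it to 128 and 256 bits so
+ * that a single store can emit 32 identical bytes.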
+ *
+ * Pseudo code:
+ *
+ * void *memset(void *dest, char pattern, long length)
+ * {
+ *	long dest_align = -((long) dest);
+ *	long copy;
+ *	void *orig_dest = dest;
+ *
+ *	uint64_t pattern = sbmm8(pattern, 0x0101010101010101);
+ *	uint128_t pattern128 = pattern << 64 | pattern;
+ *	uint256_t pattern256 = pattern128 << 128 | pattern128;
+ *
+ *	// Keep only low bits
+ *	dest_align &= 0x1F;
+ *	length -= dest_align;
+ *
+ *	// Byte align
+ *	copy = dest_align & (1 << 0);
+ *	if (copy)
+ *		*((u8 *) dest) = pattern;
+ *	dest += copy;
+ *	// Half align
+ *	copy = dest_align & (1 << 1);
+ *	if (copy)
+ *		*((u16 *) dest) = pattern;
+ *	dest += copy;
+ *	// Word align
+ *	copy = dest_align & (1 << 2);
+ *	if (copy)
+ *		*((u32 *) dest) = pattern;
+ *	dest += copy;
+ *	// Double align
+ *	copy = dest_align & (1 << 3);
+ *	if (copy)
+ *		*((u64 *) dest) = pattern;
+ *	dest += copy;
+ *	// Quad align
+ *	copy = dest_align & (1 << 4);
+ *	if (copy)
+ *		*((u128 *) dest) = pattern128;
+ *	dest += copy;
+ *
+ *	// We are now aligned on 256 bits
+ *	loop_octuple_count = length >> 5;
+ *	for (i = 0; i < loop_octuple_count; i++) {
+ *		*((u256 *) dest) = pattern256;
+ *		dest += 32;
+ *	}
+ *
+ *	if (length == 0)
+ *		return orig_dest;
+ *
+ *	// Store the remaining part
+ *	remain = length & (1 << 4);
+ *	if (remain)
+ *		*((u128 *) dest) = pattern128;
+ *	dest += remain;
+ *	remain = length & (1 << 3);
+ *	if (remain)
+ *		*((u64 *) dest) = pattern;
+ *	dest += remain;
+ *	remain = length & (1 << 2);
+ *	if (remain)
+ *		*((u32 *) dest) = pattern;
+ *	dest += remain;
+ *	remain = length & (1 << 1);
+ *	if (remain)
+ *		*((u16 *) dest) = pattern;
+ *	dest += remain;
+ *	remain = length & (1 << 0);
+ *	if (remain)
+ *		*((u8 *) dest) = pattern;
+ *	dest += remain;
+ *
+ *	return orig_dest;
+ * }
+ */
+
+.text
+.align 16
+SYM_FUNC_START(memset)
+	/* Preserve return value */
+	copyd $r3 = $r0
+	/* Replicate the first pattern byte on all bytes */
+	sbmm8 $r32 = $r1, REPLICATE_BYTE_MASK
+	/* Check if length < MIN_SIZE_FOR_ALIGN */
+	compd.geu $r7 = $r2, MIN_SIZE_FOR_ALIGN
+	/* Invert address to compute size to copy to be aligned on 32 bytes */
+	negd $r5 = $r0
+	;;
+	/* Copy second part of pattern for sq */
+	copyd $r33 = $r32
+	/* Compute the size to be copied to align on a 32-byte boundary */
+	andw $r5 = $r5, 0x1F
+	;;
+	/*
+	 * If size < MIN_SIZE_FOR_ALIGN bytes, directly go to so; it will be
+	 * done unaligned, but that is still better than what we can do with
+	 * sb
+	 */
+	cb.deqz $r7? .Laligned_32
+	;;
+	/* If we are already aligned on 32 bytes, jump to the main "so" loop */
+	cb.deqz $r5? .Laligned_32
+	/* Remove unaligned part from length */
+	sbfd $r2 = $r5, $r2
+	/* Check if we need to copy 1 byte */
+	andw $r4 = $r5, (1 << 0)
+	;;
+	/* If we are not aligned, store byte */
+	sb.dnez $r4? [$r0] = $r32
+	addd $r0 = $r0, $r4
+	/* Check if we need to copy 2 bytes */
+	andw $r4 = $r5, (1 << 1)
+	;;
+	sh.dnez $r4? [$r0] = $r32
+	addd $r0 = $r0, $r4
+	/* Check if we need to copy 4 bytes */
+	andw $r4 = $r5, (1 << 2)
+	;;
+	sw.dnez $r4? [$r0] = $r32
+	addd $r0 = $r0, $r4
+	/* Check if we need to copy 8 bytes */
+	andw $r4 = $r5, (1 << 3)
+	;;
+	sd.dnez $r4? [$r0] = $r32
+	addd $r0 = $r0, $r4
+	/* Check if we need to copy 16 bytes */
+	andw $r4 = $r5, (1 << 4)
+	;;
+	sq.dnez $r4? [$r0] = $r32r33
	addd $r0 = $r0, $r4
+	;;
+.Laligned_32:
+	/* Prepare amount of data for 32 bytes store */
+	srld $r10 = $r2, 5
+	;;
+	copyq $r34r35 = $r32, $r33
+	/* Remaining bytes for 16 bytes store */
+	andw $r8 = $r2, (1 << 4)
+	make $r11 = 32
+	/* Check if there are enough data for 32 bytes store */
+	cb.deqz $r10? .Laligned_32_done
+	;;
+	loopdo $r10, .Laligned_32_done
+		;;
+		so 0[$r0] = $r32r33r34r35
+		addd $r0 = $r0, $r11
+		;;
+.Laligned_32_done:
+	/*
+	 * Now that we have handled all the aligned bytes using 'so', we can
+	 * handle the remainder of the length using stores of decreasing size.
+	 * We also exploit the fact that we are aligned to simply check the
+	 * remaining size.
+	 */
+	sq.dnez $r8? [$r0] = $r32r33
+	addd $r0 = $r0, $r8
+	/* Remaining bytes for 8 bytes store */
+	andw $r8 = $r2, (1 << 3)
+	cb.deqz $r2? .Lmemset_done
+	;;
+	sd.dnez $r8? [$r0] = $r32
+	addd $r0 = $r0, $r8
+	/* Remaining bytes for 4 bytes store */
+	andw $r8 = $r2, (1 << 2)
+	;;
+	sw.dnez $r8? [$r0] = $r32
+	addd $r0 = $r0, $r8
+	/* Remaining bytes for 2 bytes store */
+	andw $r8 = $r2, (1 << 1)
+	;;
+	sh.dnez $r8? [$r0] = $r32
+	addd $r0 = $r0, $r8
+	;;
+	sb.odd $r2? [$r0] = $r32
+	;;
+.Lmemset_done:
+	/* Restore original value */
+	copyd $r0 = $r3
+	ret
+	;;
+SYM_FUNC_END(memset)
diff --git a/arch/kvx/lib/strlen.S b/arch/kvx/lib/strlen.S
new file mode 100644
index 0000000000000..c4bffb56cfa64
--- /dev/null
+++ b/arch/kvx/lib/strlen.S
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Jules Maselbas
+ */
+#include
+#include
+
+/*
+ * kvx optimized strlen
+ *
+ * This implementation of strlen only performs aligned memory accesses.
+ * Since we do not know the total length in advance, the idea is to do
+ * aligned double-word loads and stop on the first null byte found. It
+ * is always safe to over-read up to the next 8-byte boundary: an
+ * aligned double-word load never crosses a page boundary, so it cannot
+ * fault beyond the end of the string's mapping.
+ *
+ * This implementation of strlen uses a trick to detect if a double
+ * word contains a null byte [1]:
+ *
+ * > #define haszero(v) (((v) - 0x01010101UL) & ~(v) & 0x80808080UL)
+ * > The sub-expression (v - 0x01010101UL), evaluates to a high bit set
+ * > in any byte whenever the corresponding byte in v is zero or greater
+ * > than 0x80. The sub-expression ~v & 0x80808080UL evaluates to high
+ * > bits set in bytes where the byte of v doesn't have its high bit set
+ * > (so the byte was less than 0x80). Finally, by ANDing these two sub-
+ * > expressions the result is the high bits set where the bytes in v
+ * > were zero, since the high bits set due to a value greater than 0x80
+ * > in the first sub-expression are masked off by the second.
+ *
+ * [1] http://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord
+ *
+ * A second trick is used to get the exact number of characters before
+ * the first null byte in a double word:
+ *
+ *   clz(sbmmt(zero, 0x0102040810204080))
+ *
+ * This trick uses the haszero result, which maps a null byte to 0x80
+ * and any other value to 0x00. The idea is to count the number of
+ * consecutive null bytes in the double word (counting from the least
+ * significant byte to the most significant byte). To do so, the bit
+ * matrix transpose "packs" all the high bits (0x80) into the most
+ * significant byte (MSB). It is not possible to count the trailing
+ * zeros in this MSB; however, if a byte swap is done before the bit
+ * matrix transpose, we still have all the information in the MSB but
+ * can now count the leading zeros.
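+ * For example, if the only NUL of the loaded double word is in byte 3,
+ * zero is 0x0000000080000000; after the byte swap and bit matrix
+ * transpose, clz yields 3, the number of characters before the NUL in
+ * that double word.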
+ * The instruction sbmmt with the matrix 0x0102040810204080 does exactly
+ * what we need: a byte swap followed by a bit matrix transpose.
+ *
+ * A last trick is used to handle the misalignment of the first double
+ * word. This is done by masking off the N lower bytes (the excess
+ * read), with N between 0 and 7. The mask is applied to the haszero
+ * result and forces the N lower bytes to be considered not null.
+ *
+ * This is a C implementation of the algorithm described above:
+ *
+ * size_t strlen(char *s) {
+ *	uint64_t *p = (uint64_t *)((uintptr_t)s & ~0x7);
+ *	uint64_t rem = ((uintptr_t)s) % 8;
+ *	uint64_t low = -0x0101010101010101;
+ *	uint64_t high = 0x8080808080808080;
+ *	uint64_t dword, zero;
+ *	uint64_t msk, len;
+ *
+ *	dword = *p++;
+ *	zero = (dword + low) & ~dword & high;
+ *	msk = 0xffffffffffffffff << (rem * 8);
+ *	zero &= msk;
+ *
+ *	while (!zero) {
+ *		dword = *p++;
+ *		zero = (dword + low) & ~dword & high;
+ *	}
+ *
+ *	zero = __builtin_kvx_sbmmt8(zero, 0x0102040810204080);
+ *	len = ((void *)p - (void *)s) - 8;
+ *	len += __builtin_kvx_clzd(zero);
+ *
+ *	return len;
+ * }
+ */
+
+.text
+.align 16
+SYM_FUNC_START(strlen)
+	andd $r1 = $r0, ~0x7
+	andd $r2 = $r0, 0x7
+	make $r10 = -0x0101010101010101
+	make $r11 = 0x8080808080808080
+	;;
+	ld $r4 = 0[$r1]
+	sllw $r2 = $r2, 3
+	make $r3 = 0xffffffffffffffff
+	;;
+	slld $r2 = $r3, $r2
+	addd $r5 = $r4, $r10
+	andnd $r6 = $r4, $r11
+	;;
+	andd $r6 = $r6, $r2
+	make $r3 = 0
+	;;
+.loop:
+	andd $r4 = $r5, $r6
+	addd $r1 = $r1, 0x8
+	;;
+	cb.dnez $r4? .end
+	ld.deqz $r4? $r4 = [$r1]
+	;;
+	addd $r5 = $r4, $r10
+	andnd $r6 = $r4, $r11
+	goto .loop
+	;;
+.end:
+	addd $r1 = $r1, -0x8
+	sbmmt8 $r4 = $r4, 0x0102040810204080
+	;;
+	clzd $r4 = $r4
+	sbfd $r1 = $r0, $r1
+	;;
+	addd $r0 = $r4, $r1
+	ret
+	;;
+SYM_FUNC_END(strlen)
+EXPORT_SYMBOL(strlen)
diff --git a/arch/kvx/lib/usercopy.S b/arch/kvx/lib/usercopy.S
new file mode 100644
index 0000000000000..950df1e88479d
--- /dev/null
+++ b/arch/kvx/lib/usercopy.S
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+#include
+
+/**
+ * Copy from/to a user buffer
+ *	r0 = to buffer
+ *	r1 = from buffer
+ *	r2 = size to copy
+ * This function can trap when hitting a non-mapped page.
+ * It then triggers a NOMAPPING trap, and the trap handler interprets
+ * it and checks whether the instruction pointer is inside __ex_table.
+ * The next steps are described below, next to the exception table.
+ */
+.text
+SYM_FUNC_START(raw_copy_from_user)
+SYM_FUNC_START(raw_copy_to_user)
+	/**
+	 * naive implementation, byte per byte
+	 */
+	make $r33 = 0x0;
+	/* If size == 0, exit directly */
+	cb.deqz $r2? copy_exit
+	;;
+	loopdo $r2, copy_exit
+		;;
+0:		lbz $r34 = $r33[$r1]
+		;;
+1:		sb $r33[$r0] = $r34
+		addd $r33 = $r33, 1 /* Ptr increment */
+		addd $r2 = $r2, -1 /* Remaining bytes to copy */
+		;;
+	copy_exit:
+	copyd $r0 = $r2
+	ret
+	;;
+SYM_FUNC_END(raw_copy_to_user)
+SYM_FUNC_END(raw_copy_from_user)
+
+/**
+ * Exception table
+ * Each entry corresponds to the following layout:
+ *	.dword trapping_addr, restore_addr
+ *
+ * On trap, the handler checks whether $spc matches a trapping
+ * address in the exception table.
+ * If so, the restore address is written into the trap handler's
+ * return address, allowing the copy to finish properly and return
+ * only the number of bytes actually copied/cleared.
+ */
+.pushsection __ex_table,"a"
+.balign 8
+.dword 0b, copy_exit
+.dword 1b, copy_exit
+.popsection
+
+/**
+ * Clear a user buffer
+ *	r0 = buffer to clear
+ *	r1 = size to clear
+ */
+.text
+SYM_FUNC_START(asm_clear_user)
+	/**
+	 * naive implementation, byte per byte
+	 */
+	make $r33 = 0x0;
+	make $r34 = 0x0;
+	/* If size == 0, exit directly */
+	cb.deqz $r1? clear_exit
+	;;
+	loopdo $r1, clear_exit
+		;;
+40:		sb $r33[$r0] = $r34
+		addd $r33 = $r33, 1 /* Ptr increment */
+		addd $r1 = $r1, -1 /* Remaining bytes to clear */
+		;;
+	clear_exit:
+	copyd $r0 = $r1
+	ret
+	;;
+SYM_FUNC_END(asm_clear_user)
+
+.pushsection __ex_table,"a"
+.balign 8
+.dword 40b, clear_exit
+.popsection
--
2.45.2

From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Thomas Gleixner , Peter Zijlstra
Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Clement Leger , Julien Hascoet , Louis Morhet , Luc Michel , Marius Gligor , bpf@vger.kernel.org
Subject: [RFC PATCH v3 30/37] kvx: Add multi-processor (SMP) support
Date: Mon, 22 Jul 2024 11:41:41 +0200
Message-ID: <20240722094226.21602-31-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

The Coolidge v1 SoC has 5 clusters of 17 kvx cores each:
- 16 application cores, aka PEs
- 1 privileged core, the Resource Manager, aka RM.

Linux runs in SMP configuration on the 16 application cores of a
cluster. Memory coherency between all cores is guaranteed by the L2
cache.

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Julien Hascoet
Signed-off-by: Julien Hascoet
Co-developed-by: Louis Morhet
Signed-off-by: Louis Morhet
Co-developed-by: Luc Michel
Signed-off-by: Luc Michel
Co-developed-by: Marius Gligor
Signed-off-by: Marius Gligor
Signed-off-by: Yann Sionneau
---

Notes:
    V1 -> V2:
    - removed L2 cache driver
    - removed ipi and pwr-ctrl driver (split into their own patch)

    V2 -> V3:
    - Refactored smp_init_cpus function to use `of_get_cpu_hwid`
      and `set_cpu_possible`
    - boot secondary CPUs via "smp_ops" / DT enable-method
    - put most IPI code in its own driver in drivers/irqchip which fills
      a smp_cross_call func pointer. See remarks in there:
      - https://lore.kernel.org/bpf/Y8qlOpYgDefMPqWH@zx2c4.com/T/#m5786c05e937e99a4c7e5353a4394f870240853d8
    - don't hardcode probing of pwr-ctrl in smpboot.c; instead let a driver
      in drivers/soc/kvx probe and register smp_ops. See remarks in there:
      - https://lore.kernel.org/bpf/Y8qlOpYgDefMPqWH@zx2c4.com/T/#mf43bfb87d7a8f03ec98fb15e66f0bec19e85839c
      - https://lore.kernel.org/bpf/Y8qlOpYgDefMPqWH@zx2c4.com/T/#m1da9ac16c5ed93f895a82687b3e53ba9cdb26578

 arch/kvx/include/asm/smp.h |  63 ++++++++++++++++
 arch/kvx/kernel/smp.c      |  83 +++++++++++++++++++++
 arch/kvx/kernel/smpboot.c  | 146 +++++++++++++++++++++++++++++++++++++
 include/linux/cpuhotplug.h |   2 +
 4 files changed, 294 insertions(+)
 create mode 100644 arch/kvx/include/asm/smp.h
 create mode 100644 arch/kvx/kernel/smp.c
 create mode 100644 arch/kvx/kernel/smpboot.c

diff --git a/arch/kvx/include/asm/smp.h b/arch/kvx/include/asm/smp.h
new file mode 100644
index 0000000000000..7a8dab6b1300e
--- /dev/null
+++ b/arch/kvx/include/asm/smp.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2024 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Jonathan Borne
+ *            Yann Sionneau
+ */
+
+#ifndef _ASM_KVX_SMP_H
+#define _ASM_KVX_SMP_H
+
+#include
+#include
+
+#include
+
+void smp_init_cpus(void);
+
+#ifdef CONFIG_SMP
+
+extern void set_smp_cross_call(void (*)(const struct cpumask *, unsigned int));
+asmlinkage void __init start_kernel_secondary(void);
+
+/* Hook for the generic smp_call_function_many() routine.
*/ +void arch_send_call_function_ipi_mask(struct cpumask *mask); + +/* Hook for the generic smp_call_function_single() routine. */ +void arch_send_call_function_single_ipi(int cpu); + +void __init setup_processor(void); +int __init setup_smp(void); + +#define raw_smp_processor_id() ((int) \ + ((kvx_sfr_get(PCR) & KVX_SFR_PCR_PID_MASK) \ + >> KVX_SFR_PCR_PID_SHIFT)) + +#define flush_cache_vmap(start, end) do { } while (0) +#define flush_cache_vunmap(start, end) do { } while (0) +extern void handle_IPI(unsigned long ops); + +struct smp_operations { + int (*smp_boot_secondary)(unsigned int cpu); +}; + +struct of_cpu_method { + const char *method; + const struct smp_operations *ops; +}; + +#define CPU_METHOD_OF_DECLARE(name, _method, _ops) \ + static const struct of_cpu_method __cpu_method_of_table_##name \ + __used __section("__cpu_method_of_table") \ + =3D { .method =3D _method, .ops =3D _ops } + +extern void smp_set_ops(const struct smp_operations *ops); + +#else + +void smp_init_cpus(void) {} + +#endif /* CONFIG_SMP */ + +#endif diff --git a/arch/kvx/kernel/smp.c b/arch/kvx/kernel/smp.c new file mode 100644 index 0000000000000..c2cb96797c90b --- /dev/null +++ b/arch/kvx/kernel/smp.c @@ -0,0 +1,83 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2024 Kalray Inc. + * Author(s): Clement Leger + * Jonathan Borne + * Yann Sionneau + */ + +#include +#include + +static void (*smp_cross_call)(const struct cpumask *, unsigned int); + +enum ipi_message_type { + IPI_RESCHEDULE, + IPI_CALL_FUNC, + IPI_IRQ_WORK, + IPI_MAX +}; + +void __init set_smp_cross_call(void (*fn)(const struct cpumask *, unsigned= int)) +{ + smp_cross_call =3D fn; +} + +static void send_ipi_message(const struct cpumask *mask, + enum ipi_message_type operation) +{ + if (!smp_cross_call) + panic("ipi controller init failed\n"); + smp_cross_call(mask, (unsigned int)operation); +} + +void arch_send_call_function_ipi_mask(struct cpumask *mask) +{ + send_ipi_message(mask, IPI_CALL_FUNC); +} + +void arch_send_call_function_single_ipi(int cpu) +{ + send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC); +} + +#ifdef CONFIG_IRQ_WORK +void arch_irq_work_raise(void) +{ + send_ipi_message(cpumask_of(smp_processor_id()), IPI_IRQ_WORK); +} +#endif + +static void ipi_stop(void *unused) +{ + local_cpu_stop(); +} + +void smp_send_stop(void) +{ + struct cpumask targets; + + cpumask_copy(&targets, cpu_online_mask); + cpumask_clear_cpu(smp_processor_id(), &targets); + + smp_call_function_many(&targets, ipi_stop, NULL, 0); +} + +void arch_smp_send_reschedule(int cpu) +{ + send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE); +} + +void handle_IPI(unsigned long ops) +{ + if (ops & (1 << IPI_RESCHEDULE)) + scheduler_ipi(); + + if (ops & (1 << IPI_CALL_FUNC)) + generic_smp_call_function_interrupt(); + + if (ops & (1 << IPI_IRQ_WORK)) + irq_work_run(); + + WARN_ON_ONCE((ops >> IPI_MAX) !=3D 0); +} diff --git a/arch/kvx/kernel/smpboot.c b/arch/kvx/kernel/smpboot.c new file mode 100644 index 0000000000000..ab7f29708fed2 --- /dev/null +++ b/arch/kvx/kernel/smpboot.c @@ -0,0 +1,146 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2017-2024 Kalray Inc. 
+ * Author(s): Clement Leger + * Julian Vetter + * Yann Sionneau + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +void *__cpu_up_stack_pointer[NR_CPUS]; +void *__cpu_up_task_pointer[NR_CPUS]; +static struct smp_operations smp_ops __ro_after_init; +extern struct of_cpu_method __cpu_method_of_table[]; + +void __init smp_prepare_boot_cpu(void) +{ +} + +void __init smp_set_ops(const struct smp_operations *ops) +{ + if (ops) + smp_ops =3D *ops; +}; + +int __cpu_up(unsigned int cpu, struct task_struct *tidle) +{ + int ret; + + __cpu_up_stack_pointer[cpu] =3D task_stack_page(tidle) + THREAD_SIZE; + __cpu_up_task_pointer[cpu] =3D tidle; + /* We need to be sure writes are committed */ + smp_mb(); + + if (!smp_ops.smp_boot_secondary) { + pr_err_once("No smp_ops registered: could not bring up secondary CPUs\n"= ); + return -ENOSYS; + } + + ret =3D smp_ops.smp_boot_secondary(cpu); + if (ret =3D=3D 0) { + /* CPU was successfully started */ + while (!cpu_online(cpu)) + cpu_relax(); + } else { + pr_err("CPU%u: failed to boot: %d\n", cpu, ret); + } + + return ret; +} + + + +static int __init set_smp_ops_by_method(struct device_node *node) +{ + const char *method; + struct of_cpu_method *m =3D __cpu_method_of_table; + + if (of_property_read_string(node, "enable-method", &method)) + return 0; + + for (; m->method; m++) + if (!strcmp(m->method, method)) { + smp_set_ops(m->ops); + return 1; + } + + return 0; +} + +void __init smp_cpus_done(unsigned int max_cpus) +{ +} + +void __init smp_init_cpus(void) +{ + struct device_node *cpu, *cpus; + u32 cpu_id; + unsigned int nr_cpus =3D 0; + int found_method =3D 0; + + cpus =3D of_find_node_by_path("/cpus"); + for_each_of_cpu_node(cpu) { + if (!of_device_is_available(cpu)) + continue; + + cpu_id =3D of_get_cpu_hwid(cpu, 0); + if ((cpu_id < NR_CPUS) && (nr_cpus < nr_cpu_ids)) { + nr_cpus++; + set_cpu_possible(cpu_id, true); + if (!found_method) + found_method =3D set_smp_ops_by_method(cpu); + } + } + + if (!found_method) + set_smp_ops_by_method(cpus); + + pr_info("%d possible cpus\n", nr_cpus); +} + +void __init smp_prepare_cpus(unsigned int max_cpus) +{ + if (num_present_cpus() <=3D 1) + init_cpu_present(cpu_possible_mask); +} + +/* + * C entry point for a secondary processor. + */ +asmlinkage void __init start_kernel_secondary(void) +{ + struct mm_struct *mm =3D &init_mm; + unsigned int cpu =3D smp_processor_id(); + + setup_processor(); + kvx_mmu_early_setup(); + + /* All kernel threads share the same mm context. 
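+	 * The secondary CPU therefore borrows init_mm via mmgrab() below
+	 * instead of setting up an address space of its own.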
*/ + mmgrab(mm); + current->active_mm =3D mm; + cpumask_set_cpu(cpu, mm_cpumask(mm)); + + notify_cpu_starting(cpu); + set_cpu_online(cpu, true); + trace_hardirqs_off(); + + local_flush_tlb_all(); + + local_irq_enable(); + cpu_startup_entry(CPUHP_AP_ONLINE_IDLE); +} diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h index 7a5785f405b62..aa35c19dbd99a 100644 --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -147,6 +147,7 @@ enum cpuhp_state { CPUHP_AP_IRQ_LOONGARCH_STARTING, CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING, CPUHP_AP_IRQ_RISCV_IMSIC_STARTING, + CPUHP_AP_IRQ_KVX_STARTING, CPUHP_AP_ARM_MVEBU_COHERENCY, CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING, CPUHP_AP_PERF_X86_STARTING, @@ -176,6 +177,7 @@ enum cpuhp_state { CPUHP_AP_CSKY_TIMER_STARTING, CPUHP_AP_TI_GP_TIMER_STARTING, CPUHP_AP_HYPERV_TIMER_STARTING, + CPUHP_AP_KVX_TIMER_STARTING, /* Must be the last timer callback */ CPUHP_AP_DUMMY_TIMER_STARTING, CPUHP_AP_ARM_XEN_STARTING, --=20 2.45.2 From nobody Sun Sep 14 14:33:40 2025 Received: from smtpout149.security-mail.net (smtpout149.security-mail.net [85.31.212.149]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 462CD16C847 for ; Mon, 22 Jul 2024 09:43:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=85.31.212.149 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721641434; cv=none; b=oY2i+2UjeoPyZjkrfb4TT+Qu23+h9djnY9J3RjC6HxA9HzGYgEALdbncTfqMv0U6h0yBIndLOZtGaM6yUdAb1R0K5AbAvcYizB7Xh/Q1GevQbY7fi3hWb7icn5sZ8dAkxMDr5ZejczPRMz4FZYVRN5hbGhqnDE5yTsidItbzqWA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721641434; c=relaxed/simple; bh=MWUsgTr5EuOubB9gzKhFGSdMC/v2HrUmZunk2jYGDNE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=AWn060bojLF/zaEriUJnxQkarrQx28j4j831hdMt52/iYw+2INsfdkV6KAshBh4fdgdbIVHXa70ATTxbe8GdoLFs3+LZK0QMnaeOiV5YZ0IKZdfttnDyUQB6v4utxJ5XemBt1Lc0IswACs5B8TVKBGmuQCmdM49H8NxMmi5xhwk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=kalrayinc.com; spf=pass smtp.mailfrom=kalrayinc.com; dkim=pass (1024-bit key) header.d=kalrayinc.com header.i=@kalrayinc.com header.b=FbS9DoRP; arc=none smtp.client-ip=85.31.212.149 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=kalrayinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=kalrayinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=kalrayinc.com header.i=@kalrayinc.com header.b="FbS9DoRP" Received: from localhost (fx409.security-mail.net [127.0.0.1]) by fx409.security-mail.net (Postfix) with ESMTP id DF42534971F for ; Mon, 22 Jul 2024 11:43:50 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kalrayinc.com; s=sec-sig-email; t=1721641430; bh=MWUsgTr5EuOubB9gzKhFGSdMC/v2HrUmZunk2jYGDNE=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=FbS9DoRPlyPcACDgyYXadIgMZDJ3zelKPoHC/1nZtPbYGKc2z7BPTRFqu5Z7cTP+Q +HVMky8cZD946R2UEE+4vRAHOMcAOmJ5lTJehMbKoesQnIJgyWaUOhXHa8zxE6f4ul mEarlDlGH8AdAg+m59C/xS0/sRg9c2qtI3ZRFjKg= Received: from fx409 (fx409.security-mail.net [127.0.0.1]) by fx409.security-mail.net (Postfix) with ESMTP id 9E309349696; Mon, 22 Jul 2024 11:43:50 +0200 (CEST) Received: from srvsmtp.lin.mbt.kalray.eu (unknown [217.181.231.53]) by fx409.security-mail.net (Postfix) with 
ESMTPS id A90A4349619; Mon, 22 Jul 2024 11:43:49 +0200 (CEST) Received: from junon.lan.kalrayinc.com (unknown [192.168.37.161]) by srvsmtp.lin.mbt.kalray.eu (Postfix) with ESMTPS id 7AEB640317; Mon, 22 Jul 2024 11:43:49 +0200 (CEST) X-Secumail-id: <53b3.669e29d5.a537a.0> From: ysionneau@kalrayinc.com To: linux-kernel@vger.kernel.org Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Ashley Lesdalons , Benjamin Mugnier , Clement Leger , Guillaume Thouvenin , Jules Maselbas , Samuel Jones , Thomas Costis , Vincent Chardon Subject: [RFC PATCH v3 31/37] kvx: Add kvx default config file Date: Mon, 22 Jul 2024 11:41:42 +0200 Message-ID: <20240722094226.21602-32-ysionneau@kalrayinc.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com> References: <20240722094226.21602-1-ysionneau@kalrayinc.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ALTERMIMEV2_out: done Content-Type: text/plain; charset="utf-8" From: Yann Sionneau Add a default config file for kvx based Coolidge SoC. Co-developed-by: Ashley Lesdalons Signed-off-by: Ashley Lesdalons Co-developed-by: Benjamin Mugnier Signed-off-by: Benjamin Mugnier Co-developed-by: Clement Leger Signed-off-by: Clement Leger Co-developed-by: Guillaume Thouvenin Signed-off-by: Guillaume Thouvenin Co-developed-by: Jules Maselbas Signed-off-by: Jules Maselbas Co-developed-by: Julian Vetter Signed-off-by: Julian Vetter Co-developed-by: Samuel Jones Signed-off-by: Samuel Jones Co-developed-by: Thomas Costis Signed-off-by: Thomas Costis Co-developed-by: Vincent Chardon Signed-off-by: Vincent Chardon Signed-off-by: Yann Sionneau --- Notes: V1 -> V2: default_defconfig renamed to defconfig V2 -> V3: ext4 now builtin, add virtio-blk and devtmpfs --- arch/kvx/configs/defconfig | 130 +++++++++++++++++++++++++++++++++++++ 1 file changed, 130 insertions(+) create mode 100644 arch/kvx/configs/defconfig diff --git a/arch/kvx/configs/defconfig b/arch/kvx/configs/defconfig new file mode 100644 index 0000000000000..9d7a37801c028 --- /dev/null +++ b/arch/kvx/configs/defconfig @@ -0,0 +1,130 @@ +CONFIG_DEFAULT_HOSTNAME=3D"KVXlinux" +CONFIG_SERIAL_KVX_SCALL_COMM=3Dy +CONFIG_CONFIGFS_FS=3Dy +CONFIG_DEBUG_KERNEL=3Dy +CONFIG_DEBUG_INFO=3Dy +CONFIG_DEBUG_INFO_DWARF4=3Dy +CONFIG_PRINTK_TIME=3Dy +CONFIG_CONSOLE_LOGLEVEL_DEFAULT=3D15 +CONFIG_MESSAGE_LOGLEVEL_DEFAULT=3D7 +CONFIG_PANIC_TIMEOUT=3D-1 +CONFIG_BLK_DEV_INITRD=3Dy +CONFIG_GDB_SCRIPTS=3Dy +CONFIG_FRAME_POINTER=3Dy +CONFIG_HZ_100=3Dy +CONFIG_SERIAL_EARLYCON=3Dy +CONFIG_HOTPLUG_PCI_PCIE=3Dy +CONFIG_PCIEAER=3Dy +CONFIG_PCIE_DPC=3Dy +CONFIG_HOTPLUG_PCI=3Dy +CONFIG_SERIAL_8250=3Dy +CONFIG_SERIAL_8250_CONSOLE=3Dy +CONFIG_SERIAL_8250_DW=3Dy +CONFIG_SERIAL_8250_NR_UARTS=3D8 +CONFIG_SERIAL_8250_RUNTIME_UARTS=3D8 +CONFIG_PINCTRL=3Dy +CONFIG_PINCTRL_SINGLE=3Dy +CONFIG_POWER_RESET_KVX_SCALL_POWEROFF=3Dy +CONFIG_PCI=3Dy +CONFIG_PCI_MSI=3Dy +CONFIG_PCIE_KVX_NWL=3Dy +CONFIG_PCIEPORTBUS=3Dy +# CONFIG_PCIEASPM is not set +CONFIG_PCIEAER_INJECT=3Dy +CONFIG_TMPFS=3Dy +CONFIG_DMADEVICES=3Dy +CONFIG_KVX_DMA_NOC=3Dm +CONFIG_KVX_IOMMU=3Dy +CONFIG_KVX_OTP_NV=3Dy +CONFIG_PACKET=3Dy +CONFIG_NET=3Dy +# CONFIG_WLAN is not set +CONFIG_INET=3Dy +CONFIG_IPV6=3Dy +CONFIG_NETDEVICES=3Dy +CONFIG_NET_CORE=3Dy +CONFIG_E1000E=3Dy +CONFIG_BLK_DEV_NVME=3Dy +CONFIG_VFAT_FS=3Dy +CONFIG_NLS_DEFAULT=3D"iso8859-1" +CONFIG_NLS_CODEPAGE_437=3Dy +CONFIG_NLS_ISO8859_1=3Dy +CONFIG_WATCHDOG=3Dy 
+CONFIG_KVX_WATCHDOG=3Dy +CONFIG_HUGETLBFS=3Dy +CONFIG_MAILBOX=3Dy +CONFIG_KVX_MBOX=3Dy +CONFIG_REMOTEPROC=3Dy +CONFIG_KVX_REMOTEPROC=3Dy +CONFIG_VIRTIO_NET=3Dy +CONFIG_VIRTIO_MMIO=3Dy +CONFIG_RPMSG_VIRTIO=3Dy +CONFIG_RPMSG_CHAR=3Dy +CONFIG_MODULES=3Dy +CONFIG_MODULE_UNLOAD=3Dy +CONFIG_MODVERSIONS=3Dy +CONFIG_MODULE_SRCVERSION_ALL=3Dy +CONFIG_BLK_DEV=3Dy +CONFIG_BLK_DEV_LOOP=3Dm +CONFIG_BLK_DEV_LOOP_MIN_COUNT=3D8 +CONFIG_EXT4_FS=3Dy +CONFIG_EXT4_USE_FOR_EXT2=3Dy +CONFIG_SYSVIPC=3Dy +CONFIG_UNIX=3Dy +CONFIG_NET_VENDOR_KALRAY=3Dy +CONFIG_NET_KVX_SOC=3Dm +CONFIG_STACKPROTECTOR=3Dy +CONFIG_GPIO_DWAPB=3Dy +CONFIG_I2C=3Dy +CONFIG_I2C_SLAVE=3Dy +CONFIG_I2C_CHARDEV=3Dy +CONFIG_I2C_DESIGNWARE_PLATFORM=3Dy +CONFIG_I2C_DESIGNWARE_CORE=3Dy +CONFIG_I2C_DESIGNWARE_SLAVE=3Dy +CONFIG_I2C_SLAVE_USPACE=3Dy +CONFIG_POWER_RESET=3Dy +CONFIG_POWER_RESET_SYSCON=3Dy +CONFIG_SPI=3Dy +CONFIG_SPI_DESIGNWARE=3Dy +CONFIG_SPI_DW_MMIO=3Dy +CONFIG_SPI_DW_KVX=3Dy +CONFIG_MTD=3Dy +CONFIG_MTD_SPI_NOR=3Dy +# CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set +CONFIG_SQUASHFS=3Dm +CONFIG_USB=3Dy +CONFIG_USB_CONFIGFS=3Dm +CONFIG_USB_CONFIGFS_ACM=3Dy +CONFIG_USB_CONFIGFS_ECM=3Dy +CONFIG_USB_DWC2=3Dy +CONFIG_USB_DWC2_DUAL_ROLE=3Dy +CONFIG_USB_GADGET=3Dy +CONFIG_U_SERIAL_CONSOLE=3Dy +CONFIG_USB_USBNET=3Dm +CONFIG_USB_NET_SMSC95XX=3Dm +# CONFIG_NOP_USB_XCEIV is not set +CONFIG_USB_PHY=3Dy +CONFIG_GENERIC_PHY=3Dy +CONFIG_GENERIC_PHY_USB=3Dy +CONFIG_MMC=3Dy +CONFIG_MMC_SDHCI=3Dy +CONFIG_MMC_SDHCI_PLTFM=3Dy +CONFIG_MMC_SDHCI_OF_DWCMSHC=3Dy +CONFIG_MDIO_BITBANG=3Dm +CONFIG_MDIO_GPIO=3Dm +CONFIG_MARVELL_PHY=3Dm +CONFIG_GPIO_PCA953X=3Dy +CONFIG_NETFILTER=3Dy +CONFIG_NF_CONNTRACK=3Dm +CONFIG_NF_NAT=3Dm +CONFIG_IP_NF_IPTABLES=3Dm +CONFIG_IP_NF_NAT=3Dm +CONFIG_LEDS_GPIO=3Dy +CONFIG_LEDS_CLASS=3Dy +CONFIG_LEDS_TRIGGERS=3Dy +CONFIG_LEDS_TRIGGER_NETDEV=3Dy +CONFIG_LEDS_TRIGGER_PATTERN=3Dy +CONFIG_DCB=3Dy +CONFIG_VIRTIO_BLK=3Dy +CONFIG_DEVTMPFS=3Dy +CONFIG_DEVTMPFS_MOUNT=3Dy --=20 2.45.2 From nobody Sun Sep 14 14:33:40 2025 Received: from smtpout42.security-mail.net (smtpout42.security-mail.net [85.31.212.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4AC2D16F0D6 for ; Mon, 22 Jul 2024 09:43:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=85.31.212.42 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721641436; cv=none; b=bfYAs1sxAur2BMOvEqIF8HksLTolmhOFmV+wkXxyNFra725aQRgrxK7lHOWBG+jTFaPQLut0XCCu00mbO6A7LrQXLUSFTZazRI1zuNNTbzGAFbmnhAIc4uosLI/33ELy976PtDFaqsw5bA9R3fT4OavGJbVokYMDXbmbfbCrB2w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721641436; c=relaxed/simple; bh=FBe5Uuw5SfUjNBt/sxJT44FNRpJhs/lZU+FiDDZ0mgc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=FqKk8xzgo0Km7w8ki5POtDgcpb+0eBn42YjXiT6pbzaRffPMnAN6ltbGczLnRDnPOeZpepQ4vaoBSjrqGrTo4Y/R+WyXAgorzu2XQvdbbHP56h/7ZGDUnMzJcG4wDrmB8fDKW/Hr673WtY4+//b5XnVfAEazFmLg7HLxb2F4kfI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=kalrayinc.com; spf=pass smtp.mailfrom=kalrayinc.com; dkim=pass (1024-bit key) header.d=kalrayinc.com header.i=@kalrayinc.com header.b=lM23Y7oQ; arc=none smtp.client-ip=85.31.212.42 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=kalrayinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass 
smtp.mailfrom=kalrayinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=kalrayinc.com header.i=@kalrayinc.com header.b="lM23Y7oQ" Received: from localhost (localhost [127.0.0.1]) by fx302.security-mail.net (Postfix) with ESMTP id A745A80BD8A for ; Mon, 22 Jul 2024 11:43:51 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kalrayinc.com; s=sec-sig-email; t=1721641431; bh=FBe5Uuw5SfUjNBt/sxJT44FNRpJhs/lZU+FiDDZ0mgc=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=lM23Y7oQK+AeQyzvhjikYoAxQUJe9hzXRS0bPHNxe6lexxT5UWen1+t4GhUhzWJEG 6q6uGoWcemzckOcfA5rUkmp64EGwSdHziyoECg8SDuyWZkN56wNI6uTF9oQYyrmnfP jOuuZytoN/YNfslPtxOzZwD1wiZRBRNnV0vZWBM8= Received: from fx302 (localhost [127.0.0.1]) by fx302.security-mail.net (Postfix) with ESMTP id 7A74E80B55D; Mon, 22 Jul 2024 11:43:51 +0200 (CEST) Received: from srvsmtp.lin.mbt.kalray.eu (unknown [217.181.231.53]) by fx302.security-mail.net (Postfix) with ESMTPS id E499F80B27D; Mon, 22 Jul 2024 11:43:50 +0200 (CEST) Received: from junon.lan.kalrayinc.com (unknown [192.168.37.161]) by srvsmtp.lin.mbt.kalray.eu (Postfix) with ESMTPS id ACC4C40317; Mon, 22 Jul 2024 11:43:50 +0200 (CEST) X-Secumail-id: From: ysionneau@kalrayinc.com To: linux-kernel@vger.kernel.org Cc: Jonathan Borne , Julian Vetter , Yann Sionneau , Clement Leger , Guillaume Thouvenin , Marius Gligor Subject: [RFC PATCH v3 32/37] kvx: Add debugging related support Date: Mon, 22 Jul 2024 11:41:43 +0200 Message-ID: <20240722094226.21602-33-ysionneau@kalrayinc.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com> References: <20240722094226.21602-1-ysionneau@kalrayinc.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ALTERMIMEV2_out: done Content-Type: text/plain; charset="utf-8" From: Yann Sionneau Add kvx support for debugging Co-developed-by: Clement Leger Signed-off-by: Clement Leger Co-developed-by: Guillaume Thouvenin Signed-off-by: Guillaume Thouvenin Co-developed-by: Julian Vetter Signed-off-by: Julian Vetter Co-developed-by: Marius Gligor Signed-off-by: Marius Gligor Signed-off-by: Yann Sionneau --- Notes: V1 -> V2: No changes V2 -> V3: - remove useless do { } while (0) for one-line macro - change function arg order related to struct pt_regs because of 'generic entry' usage - improved comments for patch_insns_percpu --- arch/kvx/include/asm/debug.h | 36 ++++++ arch/kvx/include/asm/insns.h | 16 +++ arch/kvx/include/asm/insns_defs.h | 185 ++++++++++++++++++++++++++++++ arch/kvx/kernel/break_hook.c | 76 ++++++++++++ arch/kvx/kernel/debug.c | 54 +++++++++ arch/kvx/kernel/insns.c | 166 +++++++++++++++++++++++++++ 6 files changed, 533 insertions(+) create mode 100644 arch/kvx/include/asm/debug.h create mode 100644 arch/kvx/include/asm/insns.h create mode 100644 arch/kvx/include/asm/insns_defs.h create mode 100644 arch/kvx/kernel/break_hook.c create mode 100644 arch/kvx/kernel/debug.c create mode 100644 arch/kvx/kernel/insns.c diff --git a/arch/kvx/include/asm/debug.h b/arch/kvx/include/asm/debug.h new file mode 100644 index 0000000000000..6eea3fafde5e8 --- /dev/null +++ b/arch/kvx/include/asm/debug.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. 
+ * Author(s): Clement Leger + */ + +#ifndef __ASM_KVX_DEBUG_HOOK_H_ +#define __ASM_KVX_DEBUG_HOOK_H_ + +/** + * enum debug_ret - Break return value + * @DEBUG_HOOK_HANDLED: Hook handled successfully + * @DEBUG_HOOK_IGNORED: Hook call has been ignored + */ +enum debug_ret { + DEBUG_HOOK_HANDLED =3D 0, + DEBUG_HOOK_IGNORED =3D 1, +}; + +/** + * struct debug_hook - Debug hook description + * @node: List node + * @handler: handler called on debug entry + * @mode: Hook mode (user/kernel) + */ +struct debug_hook { + struct list_head node; + int (*handler)(struct pt_regs *regs, u64 ea); + u8 mode; +}; + +void debug_handler(struct pt_regs *regs, u64 ea); +void debug_hook_register(struct debug_hook *dbg_hook); +void debug_hook_unregister(struct debug_hook *dbg_hook); + +#endif /* __ASM_KVX_DEBUG_HOOK_H_ */ diff --git a/arch/kvx/include/asm/insns.h b/arch/kvx/include/asm/insns.h new file mode 100644 index 0000000000000..36a9e8335ce88 --- /dev/null +++ b/arch/kvx/include/asm/insns.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + */ + +#ifndef _ASM_KVX_INSNS_H +#define _ASM_KVX_INSNS_H + +int kvx_insns_write_nostop(u32 *insns, u8 insns_len, u32 *insn_addr); + +int kvx_insns_write(u32 *insns, unsigned long insns_len, u32 *addr); + +int kvx_insns_read(u32 *insns, unsigned long insns_len, u32 *addr); + +#endif diff --git a/arch/kvx/include/asm/insns_defs.h b/arch/kvx/include/asm/insns= _defs.h new file mode 100644 index 0000000000000..806fb10b62a9c --- /dev/null +++ b/arch/kvx/include/asm/insns_defs.h @@ -0,0 +1,185 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2017-2023 Kalray Inc. + * Author(s): Clement Leger + * Guillaume Thouvenin + */ + +#ifndef __ASM_KVX_INSNS_DEFS_H_ +#define __ASM_KVX_INSNS_DEFS_H_ + +#include + +#ifndef __ASSEMBLY__ +static inline int check_signed_imm(long long imm, int bits) +{ + long long min, max; + + min =3D -BIT_ULL(bits - 1); + max =3D BIT_ULL(bits - 1) - 1; + if (imm < min || imm > max) + return 1; + + return 0; +} +#endif /* __ASSEMBLY__ */ + +#define BITMASK(bits) (BIT_ULL(bits) - 1) + +#define KVX_INSN_SYLLABLE_WIDTH 4 + +#define IS_INSN(__insn, __mnemo) \ + ((__insn & KVX_INSN_ ## __mnemo ## _MASK_0) =3D=3D \ + KVX_INSN_ ## __mnemo ## _OPCODE_0) + +#define INSN_SIZE(__insn) \ + (KVX_INSN_ ## __insn ## _SIZE * KVX_INSN_SYLLABLE_WIDTH) + +/* Values for general registers */ +#define KVX_REG_R0 0 +#define KVX_REG_R1 1 +#define KVX_REG_R2 2 +#define KVX_REG_R3 3 +#define KVX_REG_R4 4 +#define KVX_REG_R5 5 +#define KVX_REG_R6 6 +#define KVX_REG_R7 7 +#define KVX_REG_R8 8 +#define KVX_REG_R9 9 +#define KVX_REG_R10 10 +#define KVX_REG_R11 11 +#define KVX_REG_R12 12 +#define KVX_REG_SP 12 +#define KVX_REG_R13 13 +#define KVX_REG_TP 13 +#define KVX_REG_R14 14 +#define KVX_REG_FP 14 +#define KVX_REG_R15 15 +#define KVX_REG_R16 16 +#define KVX_REG_R17 17 +#define KVX_REG_R18 18 +#define KVX_REG_R19 19 +#define KVX_REG_R20 20 +#define KVX_REG_R21 21 +#define KVX_REG_R22 22 +#define KVX_REG_R23 23 +#define KVX_REG_R24 24 +#define KVX_REG_R25 25 +#define KVX_REG_R26 26 +#define KVX_REG_R27 27 +#define KVX_REG_R28 28 +#define KVX_REG_R29 29 +#define KVX_REG_R30 30 +#define KVX_REG_R31 31 +#define KVX_REG_R32 32 +#define KVX_REG_R33 33 +#define KVX_REG_R34 34 +#define KVX_REG_R35 35 +#define KVX_REG_R36 36 +#define KVX_REG_R37 37 +#define KVX_REG_R38 38 +#define KVX_REG_R39 39 +#define KVX_REG_R40 40 +#define KVX_REG_R41 41 +#define KVX_REG_R42 42 +#define KVX_REG_R43 43 
+#define KVX_REG_R44 44 +#define KVX_REG_R45 45 +#define KVX_REG_R46 46 +#define KVX_REG_R47 47 +#define KVX_REG_R48 48 +#define KVX_REG_R49 49 +#define KVX_REG_R50 50 +#define KVX_REG_R51 51 +#define KVX_REG_R52 52 +#define KVX_REG_R53 53 +#define KVX_REG_R54 54 +#define KVX_REG_R55 55 +#define KVX_REG_R56 56 +#define KVX_REG_R57 57 +#define KVX_REG_R58 58 +#define KVX_REG_R59 59 +#define KVX_REG_R60 60 +#define KVX_REG_R61 61 +#define KVX_REG_R62 62 +#define KVX_REG_R63 63 + +/* Value for bitfield parallel */ +#define KVX_INSN_PARALLEL_EOB 0x0 +#define KVX_INSN_PARALLEL_NONE 0x1 + +#define KVX_INSN_PARALLEL(__insn) (((__insn) >> 31) & 0x1) + +#define KVX_INSN_MAKE_IMM64_SIZE 3 +#define KVX_INSN_MAKE_IMM64_W64_CHECK(__val) \ + (check_signed_imm(__val, 64)) +#define KVX_INSN_MAKE_IMM64_MASK_0 0xff030000 +#define KVX_INSN_MAKE_IMM64_OPCODE_0 0xe0000000 +#define KVX_INSN_MAKE_IMM64_SYLLABLE_0(__rw, __w64) \ + (KVX_INSN_MAKE_IMM64_OPCODE_0 | (((__rw) & 0x3f) << 18) | (((__w64) & 0x3= ff) << 6)) +#define KVX_INSN_MAKE_IMM64_MASK_1 0xe0000000 +#define KVX_INSN_MAKE_IMM64_OPCODE_1 0x80000000 +#define KVX_INSN_MAKE_IMM64_SYLLABLE_1(__w64) \ + (KVX_INSN_MAKE_IMM64_OPCODE_1 | (((__w64) >> 10) & 0x7ffffff)) +#define KVX_INSN_MAKE_IMM64_SYLLABLE_2(__p, __w64) \ + (((__p) << 31) | (((__w64) >> 37) & 0x7ffffff)) +#define KVX_INSN_MAKE_IMM64(__buf, __p, __rw, __w64) \ +do { \ + (__buf)[0] =3D KVX_INSN_MAKE_IMM64_SYLLABLE_0(__rw, __w64); \ + (__buf)[1] =3D KVX_INSN_MAKE_IMM64_SYLLABLE_1(__w64); \ + (__buf)[2] =3D KVX_INSN_MAKE_IMM64_SYLLABLE_2(__p, __w64); \ +} while (0) + +#define KVX_INSN_ICALL_SIZE 1 +#define KVX_INSN_ICALL_MASK_0 0x7ffc0000 +#define KVX_INSN_ICALL_OPCODE_0 0xfdc0000 +#define KVX_INSN_ICALL_SYLLABLE_0(__p, __rz) \ + (KVX_INSN_ICALL_OPCODE_0 | ((__p) << 31) | ((__rz) & 0x3f)) +#define KVX_INSN_ICALL(__buf, __p, __rz) \ + (__buf)[0] =3D KVX_INSN_ICALL_SYLLABLE_0(__p, __rz); + +#define KVX_INSN_IGOTO_SIZE 1 +#define KVX_INSN_IGOTO_MASK_0 0x7ffc0000 +#define KVX_INSN_IGOTO_OPCODE_0 0xfd80000 +#define KVX_INSN_IGOTO_SYLLABLE_0(__p, __rz) \ + (KVX_INSN_IGOTO_OPCODE_0 | ((__p) << 31) | ((__rz) & 0x3f)) +#define KVX_INSN_IGOTO(__buf, __p, __rz) \ + (__buf)[0] =3D KVX_INSN_IGOTO_SYLLABLE_0(__p, __rz); + +#define KVX_INSN_CALL_SIZE 1 +#define KVX_INSN_CALL_PCREL27_CHECK(__val) \ + (((__val) & BITMASK(2)) || check_signed_imm((__val) >> 2, 27)) +#define KVX_INSN_CALL_MASK_0 0x78000000 +#define KVX_INSN_CALL_OPCODE_0 0x18000000 +#define KVX_INSN_CALL_SYLLABLE_0(__p, __pcrel27) \ + (KVX_INSN_CALL_OPCODE_0 | ((__p) << 31) | (((__pcrel27) >> 2) & 0x7ffffff= )) +#define KVX_INSN_CALL(__buf, __p, __pcrel27) \ + (__buf)[0] =3D KVX_INSN_CALL_SYLLABLE_0(__p, __pcrel27); + +#define KVX_INSN_GOTO_SIZE 1 +#define KVX_INSN_GOTO_PCREL27_CHECK(__val) \ + (((__val) & BITMASK(2)) || check_signed_imm((__val) >> 2, 27)) +#define KVX_INSN_GOTO_MASK_0 0x78000000 +#define KVX_INSN_GOTO_OPCODE_0 0x10000000 +#define KVX_INSN_GOTO_SYLLABLE_0(__p, __pcrel27) \ + (KVX_INSN_GOTO_OPCODE_0 | ((__p) << 31) | (((__pcrel27) >> 2) & 0x7ffffff= )) +#define KVX_INSN_GOTO(__buf, __p, __pcrel27) \ + (__buf)[0] =3D KVX_INSN_GOTO_SYLLABLE_0(__p, __pcrel27); + +#define KVX_INSN_NOP_SIZE 1 +#define KVX_INSN_NOP_MASK_0 0x7f03f000 +#define KVX_INSN_NOP_OPCODE_0 0x7f03f000 +#define KVX_INSN_NOP_SYLLABLE_0(__p) \ + (KVX_INSN_NOP_OPCODE_0 | ((__p) << 31)) +#define KVX_INSN_NOP(__buf, __p) \ + (__buf)[0] =3D KVX_INSN_NOP_SYLLABLE_0(__p); + +#define KVX_INSN_SET_SIZE 1 +#define KVX_INSN_SET_MASK_0 0x7ffc0000 +#define KVX_INSN_SET_OPCODE_0 
+#define KVX_INSN_SET_SYLLABLE_0(__p, __systemT3, __rz) \
+	(KVX_INSN_SET_OPCODE_0 | ((__p) << 31) | (((__systemT3) & 0x1ff) << 6) | ((__rz) & 0x3f))
+#define KVX_INSN_SET(__buf, __p, __systemT3, __rz) \
+	(__buf)[0] = KVX_INSN_SET_SYLLABLE_0(__p, __systemT3, __rz);
+
+#endif /* __ASM_KVX_INSNS_DEFS_H_ */
diff --git a/arch/kvx/kernel/break_hook.c b/arch/kvx/kernel/break_hook.c
new file mode 100644
index 0000000000000..a517b40804419
--- /dev/null
+++ b/arch/kvx/kernel/break_hook.c
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+static DEFINE_SPINLOCK(debug_hook_lock);
+static LIST_HEAD(user_break_hook);
+static LIST_HEAD(kernel_break_hook);
+
+void kvx_skip_break_insn(struct pt_regs *regs)
+{
+	regs->spc += KVX_BREAK_INSN_SIZE;
+}
+
+int break_hook_handler(struct pt_regs *regs, uint64_t es)
+{
+	int (*fn)(struct pt_regs *regs, struct break_hook *brk_hook) = NULL;
+	struct break_hook *tmp_hook, *hook = NULL;
+	struct list_head *list;
+	unsigned long flags;
+	u32 idx;
+
+	if (trap_sfri(es) != KVX_TRAP_SFRI_SET ||
+	    trap_sfrp(es) != KVX_SFR_VSFR0)
+		return BREAK_HOOK_ERROR;
+
+	idx = trap_gprp(es);
+	list = user_mode(regs) ? &user_break_hook : &kernel_break_hook;
+
+	local_irq_save(flags);
+	list_for_each_entry_rcu(tmp_hook, list, node) {
+		if (idx == tmp_hook->id) {
+			hook = tmp_hook;
+			break;
+		}
+	}
+	local_irq_restore(flags);
+
+	if (!hook)
+		return BREAK_HOOK_ERROR;
+
+	fn = hook->handler;
+	return fn(regs, hook);
+}
+
+void break_hook_register(struct break_hook *brk_hook)
+{
+	struct list_head *list;
+
+	if (brk_hook->mode == MODE_USER)
+		list = &user_break_hook;
+	else
+		list = &kernel_break_hook;
+
+	spin_lock(&debug_hook_lock);
+	list_add_rcu(&brk_hook->node, list);
+	spin_unlock(&debug_hook_lock);
+}
+
+void break_hook_unregister(struct break_hook *brk_hook)
+{
+	spin_lock(&debug_hook_lock);
+	list_del_rcu(&brk_hook->node);
+	spin_unlock(&debug_hook_lock);
+	synchronize_rcu();
+}
diff --git a/arch/kvx/kernel/debug.c b/arch/kvx/kernel/debug.c
new file mode 100644
index 0000000000000..cdd9286684a15
--- /dev/null
+++ b/arch/kvx/kernel/debug.c
@@ -0,0 +1,54 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+static DEFINE_SPINLOCK(debug_hook_lock);
+static LIST_HEAD(user_debug_hook);
+static LIST_HEAD(kernel_debug_hook);
+
+static struct list_head *debug_hook_list(bool user_mode)
+{
+	return user_mode ? &user_debug_hook : &kernel_debug_hook;
+}
+
+void debug_handler(struct pt_regs *regs, u64 ea)
+{
+	int ret;
+	struct debug_hook *hook;
+	struct list_head *list = debug_hook_list(user_mode(regs));
+
+	list_for_each_entry_rcu(hook, list, node) {
+		ret = hook->handler(regs, ea);
+		if (ret == DEBUG_HOOK_HANDLED)
+			return;
+	}
+
+	panic("Entered debug but no requester!");
+}
+
+void debug_hook_register(struct debug_hook *dbg_hook)
+{
+	struct list_head *list = debug_hook_list(dbg_hook->mode == MODE_USER);
+
+	spin_lock(&debug_hook_lock);
+	list_add_rcu(&dbg_hook->node, list);
+	spin_unlock(&debug_hook_lock);
+}
+
+void debug_hook_unregister(struct debug_hook *dbg_hook)
+{
+	spin_lock(&debug_hook_lock);
+	list_del_rcu(&dbg_hook->node);
+	spin_unlock(&debug_hook_lock);
+	synchronize_rcu();
+}
diff --git a/arch/kvx/kernel/insns.c b/arch/kvx/kernel/insns.c
new file mode 100644
index 0000000000000..dfdf33dfbce3a
--- /dev/null
+++ b/arch/kvx/kernel/insns.c
@@ -0,0 +1,166 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Yann Sionneau
+ *            Marius Gligor
+ *            Guillaume Thouvenin
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+struct insns_patch {
+	atomic_t cpu_count;
+	u32 *addr;
+	u32 *insns;
+	unsigned long insns_len;
+};
+
+static void *insn_patch_map(void *addr)
+{
+	unsigned long uintaddr = (uintptr_t) addr;
+	bool module = !core_kernel_text(uintaddr);
+	struct page *page;
+
+	if (module && IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
+		page = vmalloc_to_page(addr);
+	else
+		return addr;
+
+	BUG_ON(!page);
+	return (void *)set_fixmap_offset(FIX_TEXT_PATCH, page_to_phys(page) +
+					 (uintaddr & ~PAGE_MASK));
+}
+
+static void insn_patch_unmap(void)
+{
+	clear_fixmap(FIX_TEXT_PATCH);
+}
+
+int kvx_insns_write_nostop(u32 *insns, u8 insns_len, u32 *insn_addr)
+{
+	unsigned long current_insn_addr = (unsigned long) insn_addr;
+	unsigned long len_remain = insns_len;
+	unsigned long next_insn_page, patch_len;
+	void *map_patch_addr;
+	int ret = 0;
+
+	do {
+		/* Compute next upper page boundary */
+		next_insn_page = (current_insn_addr + PAGE_SIZE) & PAGE_MASK;
+
+		patch_len = min(next_insn_page - current_insn_addr, len_remain);
+		len_remain -= patch_len;
+
+		/* Map & patch insns */
+		map_patch_addr = insn_patch_map((void *) current_insn_addr);
+		ret = copy_to_kernel_nofault(map_patch_addr, insns, patch_len);
+		if (ret)
+			break;
+
+		insns = (void *) insns + patch_len;
+		current_insn_addr = next_insn_page;
+
+	} while (len_remain);
+
+	insn_patch_unmap();
+
+	/*
+	 * Flush & invalidate L2 + L1 icache to reload instructions from
+	 * memory. The L2 wbinval is necessary because we write through the
+	 * DEVICE cache policy mapping, which is uncached, so L2 is bypassed.
+	 */
+	wbinval_icache_range(virt_to_phys(insn_addr), insns_len);
+
+	return ret;
+}
+
+/* This function is called by all CPUs in parallel */
+static int patch_insns_percpu(void *data)
+{
+	struct insns_patch *ip = data;
+	unsigned long insn_addr = (unsigned long) ip->addr;
+	int ret;
+
+	/* Only the first CPU writes the instruction(s) */
+	if (atomic_inc_return(&ip->cpu_count) == 1) {
+
+		/*
+		 * This will write the instruction(s) through the uncached
+		 * mapping and then invalidate all PEs' L1 D-cache lines.
+		 * It will then invalidate the current PE's L1 I-cache lines.
+		 * Cache coherency is therefore handled for the calling CPU.
+		 */
+		ret = kvx_insns_write_nostop(ip->insns, ip->insns_len,
+					     ip->addr);
+		/*
+		 * Increment once more to signal that the instructions have
+		 * been written to memory.
+		 */
+		atomic_inc(&ip->cpu_count);
+
+		return ret;
+	}
+
+	/*
+	 * Synchronization point: the remaining CPUs need to wait for the
+	 * first CPU to complete patching the instruction(s). Completion is
+	 * signaled when the counter reaches num_online_cpus() + 1, meaning
+	 * that every CPU has entered this function and the first CPU has
+	 * completed its operations.
+	 */
+	while (atomic_read(&ip->cpu_count) <= num_online_cpus())
+		cpu_relax();
+
+	/*
+	 * The other CPUs simply invalidate their L1 I-cache to reload from
+	 * memory. Their L1 D-caches have already been invalidated by the
+	 * kvx_insns_write_nostop() call.
+	 */
+	l1_inval_icache_range(insn_addr, insn_addr + ip->insns_len);
+	return 0;
+}
+
+/**
+ * kvx_insns_write() - Patch instructions at a specified address
+ * @insns: Instructions to be written at @addr
+ * @insns_len: Size of instructions to patch
+ * @addr: Address of the first instruction to patch
+ */
+int kvx_insns_write(u32 *insns, unsigned long insns_len, u32 *addr)
+{
+	struct insns_patch ip = {
+		.cpu_count = ATOMIC_INIT(0),
+		.addr = addr,
+		.insns = insns,
+		.insns_len = insns_len
+	};
+
+	if (!insns_len)
+		return -EINVAL;
+
+	if (!IS_ALIGNED((unsigned long) addr, KVX_INSN_SYLLABLE_WIDTH))
+		return -EINVAL;
+
+	/*
+	 * The function name is a bit misleading: despite being called
+	 * stop_machine(), it does not stop the machine per se but executes
+	 * the provided function on all CPUs in a safe state.
+	 */
+	return stop_machine(patch_insns_percpu, &ip, cpu_online_mask);
+}
+
+int kvx_insns_read(u32 *insns, unsigned long insns_len, u32 *addr)
+{
+	l1_inval_dcache_range((unsigned long)addr, insns_len);
+	return copy_from_kernel_nofault(insns, addr, insns_len);
+}
-- 
2.45.2
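As a usage illustration of the patching API above (a minimal sketch, not
part of the patch; patch_nop_example() is a hypothetical caller that only
relies on the helpers from insns.h and insns_defs.h), overwriting one
syllable with a NOP would look like:

	#include <asm/insns.h>
	#include <asm/insns_defs.h>

	static int patch_nop_example(u32 *addr)
	{
		u32 insn[KVX_INSN_NOP_SIZE];

		/* Encode a NOP; EOB clears the parallel bit (end of bundle). */
		KVX_INSN_NOP(insn, KVX_INSN_PARALLEL_EOB);

		/*
		 * INSN_SIZE(NOP) is the length in bytes (one syllable is
		 * 4 bytes); kvx_insns_write() patches it on all CPUs via
		 * stop_machine().
		 */
		return kvx_insns_write(insn, INSN_SIZE(NOP), addr);
	}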
From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger,
    Guillaume Thouvenin, Jules Maselbas
Subject: [RFC PATCH v3 33/37] kvx: Add support for cpuinfo
Date: Mon, 22 Jul 2024 11:41:44 +0200
Message-ID: <20240722094226.21602-34-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add support for cpuinfo on kvx arch.

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Guillaume Thouvenin
Signed-off-by: Guillaume Thouvenin
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Signed-off-by: Jules Maselbas
Signed-off-by: Yann Sionneau
---
Notes:
    V1 -> V2: No changes
    V2 -> V3:
     - add missing function declaration for setup_cpuinfo()
     - replace printk(KERN_WARNING... with pr_warn(...
---
 arch/kvx/include/asm/cpuinfo.h |  7 +++
 arch/kvx/kernel/cpuinfo.c      | 95 ++++++++++++++++++++++++++++++++++
 2 files changed, 102 insertions(+)
 create mode 100644 arch/kvx/include/asm/cpuinfo.h
 create mode 100644 arch/kvx/kernel/cpuinfo.c

diff --git a/arch/kvx/include/asm/cpuinfo.h b/arch/kvx/include/asm/cpuinfo.h
new file mode 100644
index 0000000000000..ace9b85fbafaf
--- /dev/null
+++ b/arch/kvx/include/asm/cpuinfo.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __ASM_KVX_CPUINFO_H
+#define __ASM_KVX_CPUINFO_H
+
+extern void setup_cpuinfo(void);
+
+#endif /* __ASM_KVX_CPUINFO_H */
diff --git a/arch/kvx/kernel/cpuinfo.c b/arch/kvx/kernel/cpuinfo.c
new file mode 100644
index 0000000000000..6d46fd2a1bd93
--- /dev/null
+++ b/arch/kvx/kernel/cpuinfo.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Guillaume Thouvenin
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+unsigned long elf_hwcap __read_mostly;
+
+static int show_cpuinfo(struct seq_file *m, void *v)
+{
+	int cpu_num = *(unsigned int *)v;
+	struct cpuinfo_kvx *n = per_cpu_ptr(&cpu_info, cpu_num);
+
+	seq_printf(m, "processor\t: %d\nvendor_id\t: Kalray\n", cpu_num);
+
+	seq_printf(m,
+		   "copro enabled\t: %s\n"
+		   "arch revision\t: %d\n"
+		   "uarch revision\t: %d\n",
+		   n->copro_enable ? "yes" : "no",
+		   n->arch_rev,
+		   n->uarch_rev);
+
+	seq_printf(m,
+		   "bogomips\t: %lu.%02lu\n"
+		   "cpu MHz\t\t: %llu.%03llu\n\n",
+		   (loops_per_jiffy * HZ) / 500000,
+		   ((loops_per_jiffy * HZ) / 5000) % 100,
+		   n->freq / 1000000, (n->freq / 1000) % 1000);
+
+	return 0;
+}
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+	if (*pos == 0)
+		*pos = cpumask_first(cpu_online_mask);
+	if (*pos >= num_online_cpus())
+		return NULL;
+
+	return pos;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	*pos = cpumask_next(*pos, cpu_online_mask);
+
+	return c_start(m, pos);
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+const struct seq_operations cpuinfo_op = {
+	.start	= c_start,
+	.next	= c_next,
+	.stop	= c_stop,
+	.show	= show_cpuinfo,
+};
+
+static int __init setup_cpuinfo(void)
+{
+	int cpu;
+	struct clk *clk;
+	unsigned long cpu_freq = 1000000000;
+	struct device_node *node = of_get_cpu_node(0, NULL);
+
+	clk = of_clk_get(node, 0);
+	if (IS_ERR(clk)) {
+		pr_warn("Device tree missing CPU 'clock' parameter. Assuming frequency is 1 GHz\n");
+		goto setup_cpu_freq;
+	}
+
+	cpu_freq = clk_get_rate(clk);
+
+	clk_put(clk);
+
+setup_cpu_freq:
+	of_node_put(node);
+
+	for_each_possible_cpu(cpu)
+		per_cpu_ptr(&cpu_info, cpu)->freq = cpu_freq;
+
+	return 0;
+}
+
+late_initcall(setup_cpuinfo);
-- 
2.45.2
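For reference, with the seq_printf() format strings above, each online CPU
renders in /proc/cpuinfo roughly as follows (the values shown here are
illustrative placeholders, not measured output):

	processor	: 0
	vendor_id	: Kalray
	copro enabled	: no
	arch revision	: 1
	uarch revision	: 0
	bogomips	: 2000.00
	cpu MHz		: 1000.000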
From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger,
    Louis Morhet, Marius Gligor, Jules Maselbas, bpf@vger.kernel.org
Subject: [RFC PATCH v3 34/37] kvx: Add power controller driver
Date: Mon, 22 Jul 2024 11:41:45 +0200
Message-ID: <20240722094226.21602-35-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

The Power Controller (pwr-ctrl) controls the cores' reset and wake-up
procedure.

Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Louis Morhet
Signed-off-by: Louis Morhet
Co-developed-by: Marius Gligor
Signed-off-by: Marius Gligor
Signed-off-by: Jules Maselbas
Signed-off-by: Yann Sionneau
---
Notes:
    V1 -> V2: new patch
    V2 -> V3:
     - Moved driver from arch/kvx/platform to drivers/soc/kvx/, see
       discussions there:
       https://lore.kernel.org/bpf/Y8qlOpYgDefMPqWH@zx2c4.com/T/#m722d8f7c7501615251e4f97705198f5485865ce2
     - indent
     - add missing static qualifier
     - driver now registers a cpu_method/smp_op via CPU_METHOD_OF_DECLARE
       like arm and sh: it puts a struct into a __cpu_method_of_table ELF
       section. The smp_ops is used by smpboot.c if its name matches the
       DT 'cpus' node enable-method property.
---
 arch/kvx/include/asm/pwr_ctrl.h     | 57 ++++++++++++++++++++
 drivers/soc/Kconfig                 |  1 +
 drivers/soc/Makefile                |  1 +
 drivers/soc/kvx/Kconfig             | 10 ++++
 drivers/soc/kvx/Makefile            |  2 +
 drivers/soc/kvx/coolidge_pwr_ctrl.c | 84 +++++++++++++++++++++++++++++
 6 files changed, 155 insertions(+)
 create mode 100644 arch/kvx/include/asm/pwr_ctrl.h
 create mode 100644 drivers/soc/kvx/Kconfig
 create mode 100644 drivers/soc/kvx/Makefile
 create mode 100644 drivers/soc/kvx/coolidge_pwr_ctrl.c

diff --git a/arch/kvx/include/asm/pwr_ctrl.h b/arch/kvx/include/asm/pwr_ctrl.h
new file mode 100644
index 0000000000000..715eddd45a88c
--- /dev/null
+++ b/arch/kvx/include/asm/pwr_ctrl.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2024 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Marius Gligor
+ *            Julian Vetter
+ *            Yann Sionneau
+ */
+
+#ifndef _ASM_KVX_PWR_CTRL_H
+#define _ASM_KVX_PWR_CTRL_H
+
+#ifndef __ASSEMBLY__
+
+static int kvx_pwr_ctrl_probe(void);
+
+int kvx_pwr_ctrl_cpu_poweron(unsigned int cpu);
+
+#endif
+
+#define PWR_CTRL_ADDR 0xA40000
+
+/* Power controller vector register definitions */
+#define KVX_PWR_CTRL_VEC_OFFSET 0x1000
+#define KVX_PWR_CTRL_VEC_WUP_SET_OFFSET 0x10
+#define KVX_PWR_CTRL_VEC_WUP_CLEAR_OFFSET 0x20
+
+/* Power controller PE reset PC register definitions */
+#define KVX_PWR_CTRL_RESET_PC_OFFSET 0x2000
+
+/* Power controller global register definitions */
+#define KVX_PWR_CTRL_GLOBAL_OFFSET 0x4040
+
+#define KVX_PWR_CTRL_GLOBAL_SET_OFFSET 0x10
+#define KVX_PWR_CTRL_GLOBAL_CLEAR_OFFSET 0x20
+#define KVX_PWR_CTRL_GLOBAL_SET_PE_EN_SHIFT 0x1
+
+#define PWR_CTRL_WUP_SET_OFFSET \
+	(KVX_PWR_CTRL_VEC_OFFSET + \
+	 KVX_PWR_CTRL_VEC_WUP_SET_OFFSET)
+
+#define PWR_CTRL_WUP_CLEAR_OFFSET \
+	(KVX_PWR_CTRL_VEC_OFFSET + \
+	 KVX_PWR_CTRL_VEC_WUP_CLEAR_OFFSET)
+
+#define PWR_CTRL_GLOBAL_CONFIG_SET_OFFSET \
+	(KVX_PWR_CTRL_GLOBAL_OFFSET + \
+	 KVX_PWR_CTRL_GLOBAL_SET_OFFSET)
+
+#define PWR_CTRL_GLOBAL_CONFIG_CLEAR_OFFSET \
+	(KVX_PWR_CTRL_GLOBAL_OFFSET + \
+	 KVX_PWR_CTRL_GLOBAL_CLEAR_OFFSET)
+
+#define PWR_CTRL_GLOBAL_CONFIG_PE_EN \
+	(1 << KVX_PWR_CTRL_GLOBAL_SET_PE_EN_SHIFT)
+
+#endif /* _ASM_KVX_PWR_CTRL_H */
diff --git a/drivers/soc/Kconfig b/drivers/soc/Kconfig
index 5d924e946507b..f28078620da14 100644
--- a/drivers/soc/Kconfig
+++ b/drivers/soc/Kconfig
@@ -12,6 +12,7 @@ source "drivers/soc/fujitsu/Kconfig"
 source "drivers/soc/hisilicon/Kconfig"
 source "drivers/soc/imx/Kconfig"
 source "drivers/soc/ixp4xx/Kconfig"
+source "drivers/soc/kvx/Kconfig"
 source "drivers/soc/litex/Kconfig"
 source "drivers/soc/loongson/Kconfig"
 source "drivers/soc/mediatek/Kconfig"
diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile
index fb2bd31387d07..240e148eaaff8 100644
--- a/drivers/soc/Makefile
+++ b/drivers/soc/Makefile
@@ -16,6 +16,7 @@ obj-$(CONFIG_ARCH_GEMINI)	+= gemini/
 obj-y				+= hisilicon/
 obj-y				+= imx/
 obj-y				+= ixp4xx/
+obj-$(CONFIG_KVX)		+= kvx/
 obj-$(CONFIG_SOC_XWAY)		+= lantiq/
 obj-$(CONFIG_LITEX_SOC_CONTROLLER) += litex/
 obj-y				+= loongson/
diff --git a/drivers/soc/kvx/Kconfig b/drivers/soc/kvx/Kconfig
new file mode 100644
index 0000000000000..96d05efe4bfb5
--- /dev/null
+++ b/drivers/soc/kvx/Kconfig
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: GPL-2.0
+
+config COOLIDGE_POWER_CONTROLLER
+	bool "Coolidge power controller"
+	default n
+	depends on KVX
+	help
+	  The Kalray Coolidge Power Controller is used to manage the power
+	  state of secondary CPU cores. Currently only powering up is
+	  supported.
diff --git a/drivers/soc/kvx/Makefile b/drivers/soc/kvx/Makefile
new file mode 100644
index 0000000000000..c7b0b3e99eabc
--- /dev/null
+++ b/drivers/soc/kvx/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_COOLIDGE_POWER_CONTROLLER) += coolidge_pwr_ctrl.o
diff --git a/drivers/soc/kvx/coolidge_pwr_ctrl.c b/drivers/soc/kvx/coolidge_pwr_ctrl.c
new file mode 100644
index 0000000000000..67af3e446d0e7
--- /dev/null
+++ b/drivers/soc/kvx/coolidge_pwr_ctrl.c
@@ -0,0 +1,84 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2024 Kalray Inc.
+ * Author(s): Clement Leger
+ *            Yann Sionneau
+ */
+
+#define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+struct kvx_pwr_ctrl {
+	void __iomem *regs;
+};
+
+static struct kvx_pwr_ctrl kvx_pwr_controller;
+
+static bool pwr_ctrl_not_initialized = true;
+
+/**
+ * kvx_pwr_ctrl_cpu_poweron() - Wake up a cpu
+ * @cpu: cpu to wake up
+ */
+int __init kvx_pwr_ctrl_cpu_poweron(unsigned int cpu)
+{
+	int ret = 0;
+
+	if (pwr_ctrl_not_initialized) {
+		pr_err("KVX power controller not initialized!\n");
+		return -ENODEV;
+	}
+
+	/* Set PE boot address */
+	writeq((unsigned long long)kvx_start,
+	       kvx_pwr_controller.regs + KVX_PWR_CTRL_RESET_PC_OFFSET);
+	/* Wake up the processor! */
+	writeq(1ULL << cpu,
+	       kvx_pwr_controller.regs + PWR_CTRL_WUP_SET_OFFSET);
+	/* Then clear wakeup to allow the processor to sleep */
+	writeq(1ULL << cpu,
+	       kvx_pwr_controller.regs + PWR_CTRL_WUP_CLEAR_OFFSET);
+
+	return ret;
+}
+
+static const struct smp_operations coolidge_smp_ops __initconst = {
+	.smp_boot_secondary = kvx_pwr_ctrl_cpu_poweron,
+};
+
+static int __init kvx_pwr_ctrl_probe(void)
+{
+	struct device_node *ctrl;
+
+	ctrl = of_find_compatible_node(NULL, NULL, "kalray,coolidge-pwr-ctrl");
+	if (!ctrl) {
+		pr_err("Failed to get power controller node\n");
+		return -EINVAL;
+	}
+
+	kvx_pwr_controller.regs = of_iomap(ctrl, 0);
+	if (!kvx_pwr_controller.regs) {
+		pr_err("Failed to ioremap power controller registers\n");
+		return -EINVAL;
+	}
+
+	pwr_ctrl_not_initialized = false;
+	pr_info("KVX power controller probed\n");
+
+	return 0;
+}
+
+CPU_METHOD_OF_DECLARE(coolidge_pwr_ctrl, "kalray,coolidge-pwr-ctrl",
		      &coolidge_smp_ops);
+
+early_initcall(kvx_pwr_ctrl_probe);
-- 
2.45.2
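As a quick sanity check of the register map above (a sketch for
illustration only, not part of the patch), the composed offsets from
pwr_ctrl.h evaluate as follows:

	#include <asm/pwr_ctrl.h>

	/* 0x1000 (vector block) + 0x10 (wake-up set)   == 0x1010 */
	_Static_assert(PWR_CTRL_WUP_SET_OFFSET == 0x1010, "wup set");
	/* 0x1000 (vector block) + 0x20 (wake-up clear) == 0x1020 */
	_Static_assert(PWR_CTRL_WUP_CLEAR_OFFSET == 0x1020, "wup clear");
	/* 0x4040 (global block) + 0x10 (set)           == 0x4050 */
	_Static_assert(PWR_CTRL_GLOBAL_CONFIG_SET_OFFSET == 0x4050, "global set");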
From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, Clement Leger,
    Guillaume Thouvenin, Luc Michel, Jules Maselbas, bpf@vger.kernel.org
Subject: [RFC PATCH v3 35/37] kvx: Add IPI driver
Date: Mon, 22 Jul 2024 11:41:46 +0200
Message-ID: <20240722094226.21602-36-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

The Inter-Processor Interrupt Controller (IPI) provides a fast
synchronization mechanism to the software. It exposes eight independent
sets of registers that can be used to notify each processor in the
cluster.
Co-developed-by: Clement Leger
Signed-off-by: Clement Leger
Co-developed-by: Guillaume Thouvenin
Signed-off-by: Guillaume Thouvenin
Co-developed-by: Julian Vetter
Signed-off-by: Julian Vetter
Co-developed-by: Luc Michel
Signed-off-by: Luc Michel
Signed-off-by: Jules Maselbas
Signed-off-by: Yann Sionneau
---
Notes:
    V1 -> V2: new patch
    V2 -> V3:
     - Restructured IPI code according to reviewer feedback
     - move from arch/kvx/platform to drivers/irqchip/
     - remove bogus comment
     - call set_smp_cross_call() to set smpboot.c's ipi function pointer
     - feedbacks: https://lore.kernel.org/bpf/Y8qlOpYgDefMPqWH@zx2c4.com/T/#mb02884ea498e627c2621973157330f2ea9977190
---
 arch/kvx/include/asm/ipi.h         |  16 ++++
 drivers/irqchip/Kconfig            |   4 +
 drivers/irqchip/Makefile           |   1 +
 drivers/irqchip/irq-kvx-ipi-ctrl.c | 143 +++++++++++++++++++++++++++++
 4 files changed, 164 insertions(+)
 create mode 100644 arch/kvx/include/asm/ipi.h
 create mode 100644 drivers/irqchip/irq-kvx-ipi-ctrl.c

diff --git a/arch/kvx/include/asm/ipi.h b/arch/kvx/include/asm/ipi.h
new file mode 100644
index 0000000000000..a23275d19d225
--- /dev/null
+++ b/arch/kvx/include/asm/ipi.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017-2023 Kalray Inc.
+ * Author(s): Clement Leger
+ */
+
+#ifndef _ASM_KVX_IPI_H
+#define _ASM_KVX_IPI_H
+
+#include
+
+int kvx_ipi_ctrl_init(struct device_node *node, struct device_node *parent);
+
+void kvx_ipi_send(const struct cpumask *mask, unsigned int operation);
+
+#endif /* _ASM_KVX_IPI_H */
diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index da1dbd79dab54..65db9990cf475 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -337,6 +337,10 @@ config KVX_CORE_INTC
 	depends on KVX
 	select IRQ_DOMAIN
 
+config KVX_IPI_CTRL
+	bool
+	depends on KVX
+
 config KVX_APIC_GIC
 	bool
 	depends on KVX
diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
index 30b69db8789f7..f8fa246df74d2 100644
--- a/drivers/irqchip/Makefile
+++ b/drivers/irqchip/Makefile
@@ -69,6 +69,7 @@ obj-$(CONFIG_BRCMSTB_L2_IRQ)	+= irq-brcmstb-l2.o
 obj-$(CONFIG_KEYSTONE_IRQ)	+= irq-keystone.o
 obj-$(CONFIG_MIPS_GIC)		+= irq-mips-gic.o
 obj-$(CONFIG_KVX_CORE_INTC)	+= irq-kvx-core-intc.o
+obj-$(CONFIG_KVX_IPI_CTRL)	+= irq-kvx-ipi-ctrl.o
 obj-$(CONFIG_KVX_APIC_GIC)	+= irq-kvx-apic-gic.o
 obj-$(CONFIG_KVX_ITGEN)		+= irq-kvx-itgen.o
 obj-$(CONFIG_KVX_APIC_MAILBOX)	+= irq-kvx-apic-mailbox.o
diff --git a/drivers/irqchip/irq-kvx-ipi-ctrl.c b/drivers/irqchip/irq-kvx-ipi-ctrl.c
new file mode 100644
index 0000000000000..09d955a5c109a
--- /dev/null
+++ b/drivers/irqchip/irq-kvx-ipi-ctrl.c
@@ -0,0 +1,143 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2017-2024 Kalray Inc.
+ *
+ * Author(s): Clement Leger
+ *            Jonathan Borne
+ *            Luc Michel
+ */
+
+#define pr_fmt(fmt)	"kvx_ipi_ctrl: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#define IPI_INTERRUPT_OFFSET	0x0
+#define IPI_MASK_OFFSET		0x20
+
+/*
+ * The IPI controller can signal RM and PE0 -> 15.
+ * In order to restrict that to the PEs, write the corresponding mask.
+ */
+#define KVX_IPI_CPU_MASK (~0xFFFF)
+
+/*
+ * A collection of single bit ipi messages.
+ */
+static DEFINE_PER_CPU_ALIGNED(unsigned long, ipi_data);
+
+struct kvx_ipi_ctrl {
+	void __iomem *regs;
+	unsigned int ipi_irq;
+};
+
+static struct kvx_ipi_ctrl kvx_ipi_controller;
+
+void kvx_ipi_send(const struct cpumask *mask, unsigned int operation)
+{
+	const unsigned long *maskb = cpumask_bits(mask);
+	unsigned long flags;
+	int cpu;
+
+	/* Set operation that must be done by receiver */
+	for_each_cpu(cpu, mask)
+		set_bit(operation, &per_cpu(ipi_data, cpu));
+
+	/* Commit the write before sending IPI */
+	smp_wmb();
+
+	local_irq_save(flags);
+
+	WARN_ON(*maskb & KVX_IPI_CPU_MASK);
+	writel(*maskb, kvx_ipi_controller.regs + IPI_INTERRUPT_OFFSET);
+
+	local_irq_restore(flags);
+}
+
+static int kvx_ipi_starting_cpu(unsigned int cpu)
+{
+	enable_percpu_irq(kvx_ipi_controller.ipi_irq, IRQ_TYPE_NONE);
+
+	return 0;
+}
+
+static int kvx_ipi_dying_cpu(unsigned int cpu)
+{
+	disable_percpu_irq(kvx_ipi_controller.ipi_irq);
+
+	return 0;
+}
+
+static irqreturn_t ipi_irq_handler(int irq, void *dev_id)
+{
+	unsigned long *pending_ipis = &per_cpu(ipi_data, smp_processor_id());
+
+	while (true) {
+		unsigned long ops = xchg(pending_ipis, 0);
+
+		if (!ops)
+			return IRQ_HANDLED;
+
+		handle_IPI(ops);
+	}
+
+	return IRQ_HANDLED;
+}
+
+int __init kvx_ipi_ctrl_init(struct device_node *node,
+			     struct device_node *parent)
+{
+	int ret;
+	unsigned int ipi_irq;
+	void __iomem *ipi_base;
+
+	BUG_ON(!node);
+
+	ipi_base = of_iomap(node, 0);
+	BUG_ON(!ipi_base);
+
+	kvx_ipi_controller.regs = ipi_base;
+
+	/* Init mask for interrupts to PE0 -> PE15 */
+	writel(KVX_IPI_CPU_MASK, kvx_ipi_controller.regs + IPI_MASK_OFFSET);
+
+	ipi_irq = irq_of_parse_and_map(node, 0);
+	if (!ipi_irq) {
+		pr_err("Failed to parse irq: %d\n", ipi_irq);
+		return -EINVAL;
+	}
+
+	ret = request_percpu_irq(ipi_irq, ipi_irq_handler,
+				 "kvx_ipi", &kvx_ipi_controller);
+	if (ret) {
+		pr_err("can't register interrupt %d (%d)\n",
+		       ipi_irq, ret);
+		return ret;
+	}
+	kvx_ipi_controller.ipi_irq = ipi_irq;
+
+	ret = cpuhp_setup_state(CPUHP_AP_IRQ_KVX_STARTING,
+				"kvx/ipi:online",
+				kvx_ipi_starting_cpu,
+				kvx_ipi_dying_cpu);
+	if (ret < 0) {
+		pr_err("Failed to setup hotplug state\n");
+		return ret;
+	}
+
+	set_smp_cross_call(kvx_ipi_send);
+	pr_info("controller probed\n");
+
+	return 0;
+}
IRQCHIP_DECLARE(kvx_ipi_ctrl, "kalray,coolidge-ipi-ctrl", kvx_ipi_ctrl_init);
-- 
2.45.2
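To make the send/receive handshake concrete, here is a minimal caller
sketch (illustrative only; IPI_EXAMPLE_RESCHEDULE and kick_cpu_example()
are hypothetical placeholders, since the real operation values live in the
kvx SMP code rather than in this patch):

	#include <asm/ipi.h>

	/* Hypothetical operation bit understood by handle_IPI(). */
	#define IPI_EXAMPLE_RESCHEDULE 0

	static void kick_cpu_example(unsigned int cpu)
	{
		/*
		 * kvx_ipi_send() sets the operation bit in the target's
		 * per-cpu ipi_data word, then writes the cpu mask to the
		 * controller's interrupt register. The target's
		 * ipi_irq_handler() xchg()s the word back to zero and
		 * feeds the pending operations to handle_IPI().
		 */
		kvx_ipi_send(cpumask_of(cpu), IPI_EXAMPLE_RESCHEDULE);
	}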
From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Rob Herring, Krzysztof Kozlowski,
    Conor Dooley
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, devicetree@vger.kernel.org
Subject: [RFC PATCH v3 36/37] kvx: dts: DeviceTree for qemu emulated
 Coolidge SoC
Date: Mon, 22 Jul 2024 11:41:47 +0200
Message-ID: <20240722094226.21602-37-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Yann Sionneau

Add device tree for QEMU that emulates a Coolidge V1 SoC.

Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3: New patch
---
 arch/kvx/boot/dts/Makefile          |   1 +
 arch/kvx/boot/dts/coolidge-qemu.dts | 444 ++++++++++++++++++++++++++++
 2 files changed, 445 insertions(+)
 create mode 100644 arch/kvx/boot/dts/Makefile
 create mode 100644 arch/kvx/boot/dts/coolidge-qemu.dts

diff --git a/arch/kvx/boot/dts/Makefile b/arch/kvx/boot/dts/Makefile
new file mode 100644
index 0000000000000..cd27ceb7a6cce
--- /dev/null
+++ b/arch/kvx/boot/dts/Makefile
@@ -0,0 +1 @@
+dtb-y += coolidge-qemu.dtb
diff --git a/arch/kvx/boot/dts/coolidge-qemu.dts b/arch/kvx/boot/dts/coolidge-qemu.dts
new file mode 100644
index 0000000000000..1d5af0d2e687d
--- /dev/null
+++ b/arch/kvx/boot/dts/coolidge-qemu.dts
@@ -0,0 +1,444 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/dts-v1/;
+/*
+ * Copyright (C) 2024, Kalray Inc.
+ */
+
+/ {
+	model = "Kalray Coolidge processor (QEMU)";
+	compatible = "kalray,coolidge-qemu";
+	#address-cells = <0x02>;
+	#size-cells = <0x02>;
+
+	chosen {
+		stdout-path = "/axi/serial@20210000";
+	};
+
+	memory@100000000 {
+		phandle = <0x40>;
+		reg = <0x01 0x00 0x00 0x8000000>;
+		device_type = "memory";
+	};
+
+	axi {
+		compatible = "simple-bus";
+		#address-cells = <0x02>;
+		#size-cells = <0x02>;
+		ranges;
+
+		virtio-mmio@30003c00 {
+			compatible = "virtio,mmio";
+			reg = <0x00 0x30003c00 0x00 0x200>;
+			interrupt-parent = <&itgen0>;
+			interrupts = <0x9e 0x04>;
+		};
+
+		virtio-mmio@30003e00 {
+			compatible = "virtio,mmio";
+			reg = <0x00 0x30003e00 0x00 0x200>;
+			interrupt-parent = <&itgen0>;
+			interrupts = <0x9f 0x04>;
+		};
+
+		itgen0: itgen_soc_periph0@27000000 {
+			compatible = "kalray,coolidge-itgen";
+			reg = <0x00 0x27000000 0x00 0x1104>;
+			msi-parent = <&apic_mailbox>;
+			#interrupt-cells = <0x02>;
+			interrupt-controller;
+		};
+
+		serial@20210000 {
+			reg-shift = <0x02>;
+			reg-io-width = <0x04>;
+			clocks = <&ref_clk>;
+			interrupts = <0x29 0x04>;
+			interrupt-parent = <&itgen0>;
+			reg = <0x00 0x20210000 0x00 0x100>;
+			compatible = "snps,dw-apb-uart";
+		};
+
+		serial@20211000 {
+			reg-shift = <0x02>;
+			reg-io-width = <0x04>;
+			phandle = <0x3c>;
+			clocks = <&ref_clk>;
+			interrupts = <0x2a 0x04>;
+			interrupt-parent = <&itgen0>;
+			reg = <0x00 0x20211000 0x00 0x100>;
+			compatible = "snps,dw-apb-uart";
+		};
+
+		serial@20212000 {
+			reg-shift = <0x02>;
+			reg-io-width = <0x04>;
+			phandle = <0x3b>;
+			clocks = <&ref_clk>;
+			interrupts = <0x2b 0x04>;
+			interrupt-parent = <&itgen0>;
+			reg = <0x00 0x20212000 0x00 0x100>;
+			compatible = "snps,dw-apb-uart";
+		};
+
+		serial@20213000 {
+			reg-shift = <0x02>;
+			reg-io-width = <0x04>;
+			phandle = <0x3a>;
+			clocks = <&ref_clk>;
+			interrupts = <0x2c 0x04>;
+			interrupt-parent = <&itgen0>;
+			reg = <0x00 0x20213000 0x00 0x100>;
+			compatible = "snps,dw-apb-uart";
+		};
+
+		serial@20214000 {
+			reg-shift = <0x02>;
+			reg-io-width = <0x04>;
+			phandle = <0x39>;
+			clocks = <&ref_clk>;
+			interrupts = <0x2d 0x04>;
+			interrupt-parent = <&itgen0>;
+			reg = <0x00 0x20214000 0x00 0x100>;
+			compatible = "snps,dw-apb-uart";
+		};
+
+		serial@20215000 {
+			reg-shift = <0x02>;
+			reg-io-width = <0x04>;
+			phandle = <0x38>;
+			clocks = <&ref_clk>;
+			interrupts = <0x2e 0x04>;
+			interrupt-parent = <&itgen0>;
+			reg = <0x00 0x20215000 0x00 0x100>;
+			compatible = "snps,dw-apb-uart";
+		};
+	};
+
+	memory@0 {
+		device_type = "memory";
+		reg = <0x00 0x00 0x00 0x400000>;
+	};
+
+	apic_mailbox: apic_mailbox@a00000 {
+		compatible = "kalray,coolidge-apic-mailbox";
+		reg = <0x00 0xa00000 0x00 0xea00>;
+		#interrupt-cells = <0x00>;
+		#address-cells = <0>;
+		interrupt-parent = <&apic_gic>;
+		interrupts = <0x00>, <0x01>, <0x02>, <0x03>, <0x04>, <0x05>,
+			     <0x06>, <0x07>, <0x08>, <0x09>, <0x0a>, <0x0b>,
+			     <0x0c>, <0x0d>, <0x0e>, <0x0f>, <0x10>, <0x11>,
+			     <0x12>, <0x13>, <0x14>, <0x15>, <0x16>, <0x17>,
+			     <0x18>, <0x19>, <0x1a>, <0x1b>, <0x1c>, <0x1d>,
+			     <0x1e>, <0x1f>, <0x20>, <0x21>, <0x22>, <0x23>,
+			     <0x24>, <0x25>, <0x26>, <0x27>, <0x28>, <0x29>,
+			     <0x2a>, <0x2b>, <0x2c>, <0x2d>, <0x2e>, <0x2f>,
+			     <0x30>, <0x31>, <0x32>, <0x33>, <0x34>, <0x35>,
+			     <0x36>, <0x37>, <0x38>, <0x39>, <0x3a>, <0x3b>,
+			     <0x3c>, <0x3d>, <0x3e>, <0x3f>, <0x40>, <0x41>,
+			     <0x42>, <0x43>, <0x44>, <0x45>, <0x46>, <0x47>,
+			     <0x48>, <0x49>, <0x4a>, <0x4b>, <0x4c>, <0x4d>,
+			     <0x4e>,
+			     <0x4f>, <0x50>, <0x51>, <0x52>, <0x53>,
+			     <0x54>, <0x55>, <0x56>, <0x57>, <0x58>, <0x59>,
+			     <0x5a>, <0x5b>, <0x5c>, <0x5d>, <0x5e>, <0x5f>,
+			     <0x60>, <0x61>, <0x62>, <0x63>, <0x64>, <0x65>,
+			     <0x66>, <0x67>, <0x68>, <0x69>, <0x6a>, <0x6b>,
+			     <0x6c>, <0x6d>, <0x6e>, <0x6f>, <0x70>, <0x71>,
+			     <0x72>, <0x73>, <0x74>;
+		interrupt-controller;
+		msi-controller;
+	};
+
+	apic_gic: apic_gic@a20000 {
+		compatible = "kalray,coolidge-apic-gic";
+		reg = <0x00 0xa20000 0x00 0x12000>;
+		#interrupt-cells = <0x01>;
+		interrupts-extended = <&core_intc0 0x4>,
+				      <&core_intc1 0x4>,
+				      <&core_intc2 0x4>,
+				      <&core_intc3 0x4>,
+				      <&core_intc4 0x4>,
+				      <&core_intc5 0x4>,
+				      <&core_intc6 0x4>,
+				      <&core_intc7 0x4>,
+				      <&core_intc8 0x4>,
+				      <&core_intc9 0x4>,
+				      <&core_intc10 0x4>,
+				      <&core_intc11 0x4>,
+				      <&core_intc12 0x4>,
+				      <&core_intc13 0x4>,
+				      <&core_intc14 0x4>,
+				      <&core_intc15 0x4>;
+		interrupt-controller;
+	};
+
+	pwr_ctrl: pwr_ctrl@a40000 {
+		compatible = "kalray,coolidge-pwr-ctrl";
+		reg = <0x00 0xa40000 0x00 0x4188>;
+	};
+
+	dsu_clock@a44180 {
+		compatible = "kalray,coolidge-dsu-clock";
+		reg = <0x00 0xa44180 0x00 0x08>;
+		clocks = <&core_clk>;
+	};
+
+	ipi_ctrl@ad0000 {
+		compatible = "kalray,coolidge-ipi-ctrl";
+		reg = <0x00 0xad0000 0x00 0x1000>;
+		#interrupt-cells = <0>;
+		interrupt-controller;
+		interrupts-extended = <&core_intc0 0x18>,
+				      <&core_intc1 0x18>,
+				      <&core_intc2 0x18>,
+				      <&core_intc3 0x18>,
+				      <&core_intc4 0x18>,
+				      <&core_intc5 0x18>,
+				      <&core_intc6 0x18>,
+				      <&core_intc7 0x18>,
+				      <&core_intc8 0x18>,
+				      <&core_intc9 0x18>,
+				      <&core_intc10 0x18>,
+				      <&core_intc11 0x18>,
+				      <&core_intc12 0x18>,
+				      <&core_intc13 0x18>,
+				      <&core_intc14 0x18>,
+				      <&core_intc15 0x18>;
+	};
+
+	core_timer {
+		compatible = "kalray,kv3-1-timer";
+		clocks = <&core_clk>;
+		interrupts-extended = <&core_intc0 0>,
+				      <&core_intc1 0>,
+				      <&core_intc2 0>,
+				      <&core_intc3 0>,
+				      <&core_intc4 0>,
+				      <&core_intc5 0>,
+				      <&core_intc6 0>,
+				      <&core_intc7 0>,
+				      <&core_intc8 0>,
+				      <&core_intc9 0>,
+				      <&core_intc10 0>,
+				      <&core_intc11 0>,
+				      <&core_intc12 0>,
+				      <&core_intc13 0>,
+				      <&core_intc14 0>,
+				      <&core_intc15 0>;
+	};
+
+	clocks {
+
+		core_clk: core_clk {
+			compatible = "fixed-clock";
+			clock-frequency = <0x3b9aca00>;
+			#clock-cells = <0x00>;
+		};
+
+		ref_clk: ref_clk {
+			clock-frequency = <0x5f5e100>;
+			#clock-cells = <0x00>;
+			compatible = "fixed-clock";
+		};
+	};
+
+	cpus {
+		#address-cells = <0x01>;
+		#size-cells = <0x00>;
+		enable-method = "kalray,coolidge-pwr-ctrl";
+
+		cpu@0 {
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			device_type = "cpu";
+			reg = <0x00>;
+			clocks = <&core_clk>;
+			core_intc0: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@1 {
+			device_type = "cpu";
+			reg = <0x01>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc1: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@2 {
+			device_type = "cpu";
+			reg = <0x02>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc2: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@3 {
+			device_type = "cpu";
+			reg = <0x03>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc3: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@4 {
+			device_type = "cpu";
+			reg = <0x04>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc4: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@5 {
+			device_type = "cpu";
+			reg = <0x05>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc5: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@6 {
+			device_type = "cpu";
+			reg = <0x06>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc6: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@7 {
+			device_type = "cpu";
+			reg = <0x07>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc7: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@8 {
+			device_type = "cpu";
+			reg = <0x08>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc8: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@9 {
+			device_type = "cpu";
+			reg = <0x09>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc9: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@10 {
+			device_type = "cpu";
+			reg = <0x0a>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc10: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@11 {
+			device_type = "cpu";
+			reg = <0x0b>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc11: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@12 {
+			device_type = "cpu";
+			reg = <0x0c>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc12: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@13 {
+			device_type = "cpu";
+			reg = <0x0d>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc13: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@14 {
+			device_type = "cpu";
+			reg = <0x0e>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc14: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+		cpu@15 {
+			device_type = "cpu";
+			reg = <0x0f>;
+			compatible = "kalray,kv3-1-pe","kalray,kv3-pe";
+			core_intc15: interrupt-controller {
+				compatible = "kalray,kv3-1-intc";
+				#interrupt-cells = <0x01>;
+				#address-cells = <0x0>;
+				interrupt-controller;
+			};
+		};
+
+	};
+};
-- 
2.45.2
From nobody Sun Sep 14 14:33:40 2025
From: ysionneau@kalrayinc.com
To: linux-kernel@vger.kernel.org, Rob Herring, Krzysztof Kozlowski,
    Conor Dooley
Cc: Jonathan Borne, Julian Vetter, Yann Sionneau, devicetree@vger.kernel.org
Subject: [RFC PATCH v3 37/37] Add Kalray Inc. to the list of
 vendor-prefixes.yaml
Date: Mon, 22 Jul 2024 11:41:48 +0200
Message-ID: <20240722094226.21602-38-ysionneau@kalrayinc.com>
In-Reply-To: <20240722094226.21602-1-ysionneau@kalrayinc.com>
References: <20240722094226.21602-1-ysionneau@kalrayinc.com>

From: Julian Vetter

Signed-off-by: Julian Vetter
Signed-off-by: Yann Sionneau
---
Notes:
    V2 -> V3: New patch
---
 Documentation/devicetree/bindings/vendor-prefixes.yaml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Documentation/devicetree/bindings/vendor-prefixes.yaml b/Documentation/devicetree/bindings/vendor-prefixes.yaml
index fbf47f0bacf1a..65954a1077ed7 100644
--- a/Documentation/devicetree/bindings/vendor-prefixes.yaml
+++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml
@@ -754,6 +754,8 @@ patternProperties:
     description: Jide Tech
   "^joz,.*":
     description: JOZ BV
+  "^kalray,.*":
+    description: Kalray Inc.
   "^kam,.*":
     description: Kamstrup A/S
   "^karo,.*":
-- 
2.45.2