From nobody Tue Feb 10 19:15:22 2026
From: Tianyang Zhang <zhangtianyang@loongson.cn>
To: chenhuacai@kernel.org, kernel@xen0n.name, corbet@lwn.net,
	alexs@kernel.org, si.yanteng@linux.dev, tglx@linutronix.de,
	jiaxun.yang@flygoat.com, maobibo@loongson.cn
Cc: loongarch@lists.linux.dev, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, Tianyang Zhang, Liupu Wang
Subject: [PATCH v9 4/4] irqchip/irq-loongarch-ir: Add Redirect irqchip support
Date: Fri, 30 Jan 2026 10:59:39 +0800
Message-ID: <20260130025941.2140582-5-zhangtianyang@loongson.cn>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20260130025941.2140582-1-zhangtianyang@loongson.cn>
References: <20260130025941.2140582-1-zhangtianyang@loongson.cn>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The main function of the redirect interrupt controller is to manage the
redirected-interrupt table, which consists of many redirect entries. When
an MSI interrupt is requested, the driver creates a corresponding redirect
entry that describes the target CPU, the vector number and the operating
mode of the interrupt. The redirect module has an independent cache, and
during interrupt routing it preferentially uses entries that hit in this
cache. The driver invalidates the cached copy of an entry through a
command queue.

Co-developed-by: Liupu Wang
Signed-off-by: Liupu Wang
Signed-off-by: Tianyang Zhang
---
 drivers/irqchip/Makefile             |   2 +-
 drivers/irqchip/irq-loongarch-avec.c |   6 +-
 drivers/irqchip/irq-loongarch-ir.c   | 534 +++++++++++++++++++++++++++
 3 files changed, 540 insertions(+), 2 deletions(-)
 create mode 100644 drivers/irqchip/irq-loongarch-ir.c

diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
index 6a229443efe0..2fc15a8e6ada 100644
--- a/drivers/irqchip/Makefile
+++ b/drivers/irqchip/Makefile
@@ -118,7 +118,7 @@ obj-$(CONFIG_LS1X_IRQ)			+= irq-ls1x.o
 obj-$(CONFIG_TI_SCI_INTR_IRQCHIP)	+= irq-ti-sci-intr.o
 obj-$(CONFIG_TI_SCI_INTA_IRQCHIP)	+= irq-ti-sci-inta.o
 obj-$(CONFIG_TI_PRUSS_INTC)		+= irq-pruss-intc.o
-obj-$(CONFIG_IRQ_LOONGARCH_CPU)		+= irq-loongarch-cpu.o irq-loongarch-avec.o
+obj-$(CONFIG_IRQ_LOONGARCH_CPU)		+= irq-loongarch-cpu.o irq-loongarch-avec.o irq-loongarch-ir.o
 obj-$(CONFIG_LOONGSON_LIOINTC)		+= irq-loongson-liointc.o
 obj-$(CONFIG_LOONGSON_EIOINTC)		+= irq-loongson-eiointc.o
 obj-$(CONFIG_LOONGSON_HTPIC)		+= irq-loongson-htpic.o
diff --git a/drivers/irqchip/irq-loongarch-avec.c b/drivers/irqchip/irq-loongarch-avec.c
index 7bdf3f678db4..972203182079 100644
--- a/drivers/irqchip/irq-loongarch-avec.c
+++ b/drivers/irqchip/irq-loongarch-avec.c
@@ -111,7 +111,8 @@ static int avecintc_set_affinity(struct irq_data *data, const struct cpumask *de
 		adata->cpu = cpu;
 		adata->vec = vector;
 		per_cpu_ptr(irq_map, adata->cpu)[adata->vec] = irq_data_to_desc(data);
-		avecintc_sync(adata);
+		if (!cpu_has_redirectint)
+			avecintc_sync(adata);
 	}
 
 	irq_data_update_effective_affinity(data, cpumask_of(cpu));
@@ -403,6 +404,9 @@ static int __init pch_msi_parse_madt(union acpi_subtable_headers *header,
 
 static inline int __init acpi_cascade_irqdomain_init(void)
 {
+	if (cpu_has_redirectint)
+		return redirect_acpi_init(loongarch_avec.domain);
+
 	return acpi_table_parse_madt(ACPI_MADT_TYPE_MSI_PIC, pch_msi_parse_madt, 1);
 }
 
diff --git a/drivers/irqchip/irq-loongarch-ir.c b/drivers/irqchip/irq-loongarch-ir.c
new file mode 100644
index 000000000000..0ee7c37487f7
--- /dev/null
+++ b/drivers/irqchip/irq-loongarch-ir.c
@@ -0,0 +1,534 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Loongson Technologies, Inc.
+ */
+
+#include <linux/acpi.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqchip.h>
+#include <linux/irqchip/irq-msi-lib.h>
+#include <linux/irqdomain.h>
+#include <linux/kernel.h>
+#include <linux/msi.h>
+#include <linux/slab.h>
+
+#include <asm/cpu-features.h>
+#include <asm/irq.h>
+#include <asm/loongarch.h>
+#include <asm/numa.h>
+#include <asm/page.h>
+#include <asm/setup.h>
+
+#include "irq-loongson.h"
+
+#define LOONGARCH_IOCSR_REDIRECT_CFG	0x15e0
+#define LOONGARCH_IOCSR_REDIRECT_TBR	0x15e8	/* IRT BASE REG */
+#define LOONGARCH_IOCSR_REDIRECT_CQB	0x15f0	/* IRT CACHE QUEUE BASE */
+#define LOONGARCH_IOCSR_REDIRECT_CQH	0x15f8	/* IRT CACHE QUEUE HEAD, 32bit */
+#define LOONGARCH_IOCSR_REDIRECT_CQT	0x15fc	/* IRT CACHE QUEUE TAIL, 32bit */
+
+#define GPID_ADDR_MASK			GENMASK(47, 6)
+#define GPID_ADDR_SHIFT			6
+
+#define CQB_SIZE_MASK			0xf
+#define CQB_ADDR_MASK			GENMASK(47, 12)
+
+#define CFG_DISABLE_IDLE		2
+#define INVALID_INDEX			0
+
+#define MAX_IR_ENGINES			16
+
+struct redirect_entry {
+	struct {
+		u64	valid	: 1,
+			res1	: 5,
+			gpid	: 42,
+			res2	: 8,
+			vector	: 8;
+	} lo;
+	u64 hi;
+};
+#define IRD_ENTRY_SIZE		sizeof(struct redirect_entry)
+#define IRD_ENTRIES		65536
+#define IRD_TABLE_PAGE_ORDER	get_order(IRD_ENTRIES * IRD_ENTRY_SIZE)
+
+struct redirect_gpid {
+	u64	pir[4];		/* Pending interrupt requests */
+	u8	en	: 1,	/* Doorbell enable */
+		res0	: 7;
+	u8	irqnum;
+	u16	res1;
+	u32	dst;
+	u32	rsvd[6];
+};
+
+struct irde_inv_cmd {
+	union {
+		u64	cmd_info;
+		struct {
+			u64	res1		: 4,
+				type		: 1,
+				need_notice	: 1,
+				pad		: 2,
+				index		: 16,
+				pad2		: 40;
+		} index;
+	};
+	u64	notice_addr;
+};
+#define IRD_CMD_SIZE		sizeof(struct irde_inv_cmd)
+#define INVALID_QUEUE_SIZE	4096
+#define INV_QUEUE_PAGE_ORDER	get_order(INVALID_QUEUE_SIZE * IRD_CMD_SIZE)
+
+struct redirect_table {
+	struct redirect_entry	*table;
+	unsigned long		*bitmap;
+	raw_spinlock_t		lock;
+};
+
+struct redirect_queue {
+	struct irde_inv_cmd	*cmd_base;
+	int			head;
+	int			tail;
+	raw_spinlock_t		lock;
+};
+
+struct irde_desc {
+	struct redirect_table	ird_table;
+	struct redirect_queue	inv_queue;
+	int			node;
+};
+
+struct redirect_item {
+	int			index;
+	struct irde_desc	*irde;
+	struct redirect_gpid	*gpid;
+};
+
+static struct irq_domain *redirect_domain;
+static struct irde_desc irde_descs[MAX_IR_ENGINES];
+static phys_addr_t msi_base_addr;
+static phys_addr_t redirect_reg_base = LOONGSON_REG_BASE;
+
+#define REDIRECT_REG(reg, node) \
+	((void __iomem *)(IO_BASE | redirect_reg_base | (u64)(node) << NODE_ADDRSPACE_SHIFT | (reg)))
+
+static inline void redirect_write_reg64(u32 node, u64 val, u32 reg)
+{
+	writeq(val, REDIRECT_REG(reg, node));
+}
+
+static inline void redirect_write_reg32(u32 node, u32 val, u32 reg)
+{
+	writel(val, REDIRECT_REG(reg, node));
+}
+
+static inline u32 redirect_read_reg32(u32 node, u32 reg)
+{
+	return readl(REDIRECT_REG(reg, node));
+}
+
+static inline struct redirect_entry *item_get_entry(struct redirect_item *item)
+{
+	return item->irde->ird_table.table + item->index;
+}
+
+static inline bool invalid_queue_is_full(int node, u32 *tail)
+{
+	u32 head = redirect_read_reg32(node, LOONGARCH_IOCSR_REDIRECT_CQH);
+
+	*tail = redirect_read_reg32(node, LOONGARCH_IOCSR_REDIRECT_CQT);
+
+	return head == ((*tail + 1) % INVALID_QUEUE_SIZE);
+}
+
+static void invalid_enqueue(struct redirect_item *item, struct irde_inv_cmd *cmd)
+{
+	struct redirect_queue *inv_queue = &item->irde->inv_queue;
+	u32 tail;
+
+	guard(raw_spinlock_irqsave)(&inv_queue->lock);
+
+	while (invalid_queue_is_full(item->irde->node, &tail))
+		cpu_relax();
+
+	memcpy(&inv_queue->cmd_base[tail], cmd, sizeof(*cmd));
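+
+	/*
+	 * The invalidation queue is a producer/consumer ring: the driver
+	 * produces commands at CQT and the hardware consumes them from
+	 * CQH, so advancing the tail register below hands this slot over
+	 * to the invalidation engine.
+	 */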
+	redirect_write_reg32(item->irde->node, (tail + 1) % INVALID_QUEUE_SIZE,
+			     LOONGARCH_IOCSR_REDIRECT_CQT);
+}
+
+static void irde_invalid_entry(struct redirect_item *item)
+{
+	struct irde_inv_cmd cmd;
+	u64 raddr = 0;
+
+	cmd.cmd_info = 0;
+	cmd.index.type = INVALID_INDEX;
+	cmd.index.need_notice = 1;
+	cmd.index.index = item->index;
+	cmd.notice_addr = (u64)(__pa(&raddr));
+
+	invalid_enqueue(item, &cmd);
+
+	/*
+	 * The CPU needs to wait here for the command to complete: the
+	 * hardware signals completion by writing a non-zero value to
+	 * cmd.notice_addr. READ_ONCE() forces a fresh read of raddr on
+	 * every iteration of the poll loop.
+	 */
+	while (!READ_ONCE(raddr))
+		cpu_relax();
+}
+
+static inline struct avecintc_data *irq_data_get_avec_data(struct irq_data *data)
+{
+	return data->parent_data->chip_data;
+}
+
+static int redirect_table_alloc(int node, u32 nr_irqs)
+{
+	struct redirect_table *ird_table = &irde_descs[node].ird_table;
+	unsigned int order = 0;
+	int index;
+
+	if (nr_irqs > 1) {
+		nr_irqs = __roundup_pow_of_two(nr_irqs);
+		order = ilog2(nr_irqs);
+	}
+
+	guard(raw_spinlock_irqsave)(&ird_table->lock);
+
+	index = bitmap_find_free_region(ird_table->bitmap, IRD_ENTRIES, order);
+	if (index < 0) {
+		pr_err("No redirect entry to use\n");
+		return -ENOMEM;
+	}
+
+	return index;
+}
+
+static void redirect_table_free(struct redirect_item *item)
+{
+	struct redirect_table *ird_table = &item->irde->ird_table;
+	struct redirect_entry *entry = item_get_entry(item);
+
+	memset(entry, 0, sizeof(*entry));
+
+	scoped_guard(raw_spinlock_irq, &ird_table->lock)
+		clear_bit(item->index, ird_table->bitmap);
+
+	/* Invalidate the cached copy before the GPID memory is reused */
+	irde_invalid_entry(item);
+
+	kfree(item->gpid);
+}
+
+static inline void redirect_domain_prepare_entry(struct redirect_item *item,
+						 struct avecintc_data *adata)
+{
+	struct redirect_entry *entry = item_get_entry(item);
+
+	item->gpid->en = 1;
+	item->gpid->irqnum = adata->vec;
+	item->gpid->dst = adata->cpu;
+
+	entry->lo.valid = 1;
+	entry->lo.gpid = ((u64)item->gpid & GPID_ADDR_MASK) >> GPID_ADDR_SHIFT;
+	entry->lo.vector = 0xff;
+}
+
+static int redirect_set_affinity(struct irq_data *data, const struct cpumask *dest, bool force)
+{
+	struct redirect_item *item = data->chip_data;
+	struct avecintc_data *adata;
+	int ret;
+
+	ret = irq_chip_set_affinity_parent(data, dest, force);
+	if (ret == IRQ_SET_MASK_OK_DONE) {
+		return ret;
+	} else if (ret) {
+		pr_err("IRDE: set_affinity error %d\n", ret);
+		return ret;
+	}
+
+	adata = irq_data_get_avec_data(data);
+
+	redirect_domain_prepare_entry(item, adata);
+	irde_invalid_entry(item);
+	avecintc_sync(adata);
+
+	return IRQ_SET_MASK_OK;
+}
+
+static void redirect_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
+{
+	struct redirect_item *item = irq_data_get_irq_chip_data(d);
+
+	msg->address_lo = msi_base_addr | 1 << 2;
+	msg->address_hi = 0x0;
+	msg->data = item->index;
+}
+
+static struct irq_chip loongarch_redirect_chip = {
+	.name			= "REDIRECT",
+	.irq_ack		= irq_chip_ack_parent,
+	.irq_mask		= irq_chip_mask_parent,
+	.irq_unmask		= irq_chip_unmask_parent,
+	.irq_set_affinity	= redirect_set_affinity,
+	.irq_compose_msi_msg	= redirect_compose_msi_msg,
+};
+
+static void redirect_free_resources(struct irq_domain *domain, unsigned int virq,
+				    unsigned int nr_irqs)
+{
+	for (int i = 0; i < nr_irqs; i++) {
+		struct irq_data *irq_data = irq_domain_get_irq_data(domain, virq + i);
+
+		if (irq_data && irq_data->chip_data) {
+			struct redirect_item *item = irq_data->chip_data;
+
+			redirect_table_free(item);
+			kfree(item);
+		}
+	}
+}
+
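+/*
+ * Each virq is backed by one redirect table entry on the device's NUMA
+ * node: the parent AVEC interrupt is allocated first, then a naturally
+ * aligned range of table entries is reserved, and each entry gets a
+ * 64-byte GPID descriptor through which the hardware posts and routes
+ * the interrupt.
+ */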
+static int redirect_domain_alloc(struct irq_domain *domain, unsigned int virq,
+				 unsigned int nr_irqs, void *arg)
+{
+	msi_alloc_info_t *info = arg;
+	int ret, i, node, index;
+
+	node = dev_to_node(info->desc->dev);
+
+	ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, arg);
+	if (ret < 0)
+		return ret;
+
+	index = redirect_table_alloc(node, nr_irqs);
+	if (index < 0) {
+		pr_err("Alloc redirect table entry failed\n");
+		goto out_free_resources;
+	}
+
+	for (i = 0; i < nr_irqs; i++) {
+		struct irq_data *irq_data = irq_domain_get_irq_data(domain, virq + i);
+		struct redirect_item *item;
+
+		item = kzalloc(sizeof(*item), GFP_KERNEL);
+		if (!item) {
+			pr_err("Alloc redirect descriptor failed\n");
+			goto out_free_resources;
+		}
+		item->irde = &irde_descs[node];
+
+		/*
+		 * Only bits 47:6 of the GPID address are passed to the
+		 * controller, so 64-byte alignment must be guaranteed;
+		 * kzalloc aligns an allocation to its size, and the
+		 * static_assert ensures that size is exactly 64 bytes.
+		 */
+		static_assert(sizeof(*item->gpid) == 64);
+		item->gpid = kzalloc_node(sizeof(*item->gpid), GFP_KERNEL, node);
+		if (!item->gpid) {
+			pr_err("Alloc redirect GPID failed\n");
+			goto out_free_resources;
+		}
+		item->index = index + i;
+
+		irq_data->chip_data = item;
+		irq_data->chip = &loongarch_redirect_chip;
+
+		redirect_domain_prepare_entry(item, irq_data_get_avec_data(irq_data));
+	}
+	return 0;
+
+out_free_resources:
+	redirect_free_resources(domain, virq, nr_irqs);
+	irq_domain_free_irqs_common(domain, virq, nr_irqs);
+
+	return -ENOMEM;
+}
+
+static void redirect_domain_free(struct irq_domain *domain, unsigned int virq, unsigned int nr_irqs)
+{
+	redirect_free_resources(domain, virq, nr_irqs);
+	irq_domain_free_irqs_common(domain, virq, nr_irqs);
+}
+
+static const struct irq_domain_ops redirect_domain_ops = {
+	.alloc	= redirect_domain_alloc,
+	.free	= redirect_domain_free,
+	.select	= msi_lib_irq_domain_select,
+};
+
+static int redirect_queue_init(struct irde_desc *irde)
+{
+	struct redirect_queue *inv_queue = &irde->inv_queue;
+	struct folio *folio;
+
+	folio = __folio_alloc_node(GFP_KERNEL | __GFP_ZERO, INV_QUEUE_PAGE_ORDER, irde->node);
+	if (!folio) {
+		pr_err("Node [%d] invalid queue alloc pages failed!\n", irde->node);
+		return -ENOMEM;
+	}
+
+	inv_queue->cmd_base = folio_address(folio);
+	inv_queue->head = 0;
+	inv_queue->tail = 0;
+	raw_spin_lock_init(&inv_queue->lock);
+
+	return 0;
+}
+
+static int redirect_table_init(struct irde_desc *irde)
+{
+	struct redirect_table *ird_table = &irde->ird_table;
+	unsigned long *bitmap;
+	struct folio *folio;
+	int node = irde->node;
+
+	folio = __folio_alloc_node(GFP_KERNEL | __GFP_ZERO, IRD_TABLE_PAGE_ORDER, node);
+	if (!folio) {
+		pr_err("Node [%d] redirect table alloc pages failed!\n", node);
+		return -ENOMEM;
+	}
+	ird_table->table = folio_address(folio);
+
+	bitmap = bitmap_zalloc(IRD_ENTRIES, GFP_KERNEL);
+	if (!bitmap) {
+		pr_err("Node [%d] redirect table bitmap alloc failed!\n", node);
+		folio_put(folio);
+		ird_table->table = NULL;
+		return -ENOMEM;
+	}
+
+	ird_table->bitmap = bitmap;
+
+	raw_spin_lock_init(&ird_table->lock);
+
+	return 0;
+}
+
+static void redirect_irde_cfg(struct irde_desc *irde)
+{
+	redirect_write_reg64(irde->node, CFG_DISABLE_IDLE, LOONGARCH_IOCSR_REDIRECT_CFG);
+	redirect_write_reg64(irde->node, __pa(irde->ird_table.table), LOONGARCH_IOCSR_REDIRECT_TBR);
+	redirect_write_reg32(irde->node, 0, LOONGARCH_IOCSR_REDIRECT_CQH);
+	redirect_write_reg32(irde->node, 0, LOONGARCH_IOCSR_REDIRECT_CQT);
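+	/*
+	 * CQB carries both the base address of the invalidation queue
+	 * (bits 47:12, per CQB_ADDR_MASK) and the queue size code in its
+	 * low nibble (CQB_SIZE_MASK, written as 0xf here).
+	 */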
+	redirect_write_reg64(irde->node, ((u64)irde->inv_queue.cmd_base & CQB_ADDR_MASK) |
+			     CQB_SIZE_MASK, LOONGARCH_IOCSR_REDIRECT_CQB);
+}
+
+static void __redirect_irde_fini(struct irde_desc *irde)
+{
+	struct redirect_table *ird_table = &irde->ird_table;
+	struct redirect_queue *inv_queue = &irde->inv_queue;
+
+	if (ird_table->table) {
+		folio_put(virt_to_folio(ird_table->table));
+		ird_table->table = NULL;
+	}
+
+	if (ird_table->bitmap) {
+		bitmap_free(ird_table->bitmap);
+		ird_table->bitmap = NULL;
+	}
+
+	if (inv_queue->cmd_base) {
+		folio_put(virt_to_folio(inv_queue->cmd_base));
+		inv_queue->cmd_base = NULL;
+	}
+}
+
+static inline void redirect_irde_fini(int node)
+{
+	__redirect_irde_fini(&irde_descs[node]);
+}
+
+static int redirect_irde_init(int node)
+{
+	struct irde_desc *irde = &irde_descs[node];
+	int ret;
+
+	irde->node = node;
+
+	ret = redirect_table_init(irde);
+	if (ret)
+		return ret;
+
+	ret = redirect_queue_init(irde);
+	if (ret) {
+		__redirect_irde_fini(irde);
+		return ret;
+	}
+
+	redirect_irde_cfg(irde);
+
+	return 0;
+}
+
+static int __init pch_msi_parse_madt(union acpi_subtable_headers *header, const unsigned long end)
+{
+	struct acpi_madt_msi_pic *pchmsi_entry = (struct acpi_madt_msi_pic *)header;
+
+	msi_base_addr = pchmsi_entry->msg_address - AVEC_MSG_OFFSET;
+
+	return pch_msi_acpi_init_avec(redirect_domain);
+}
+
+static int __init acpi_cascade_irqdomain_init(void)
+{
+	return acpi_table_parse_madt(ACPI_MADT_TYPE_MSI_PIC, pch_msi_parse_madt, 1);
+}
+
+int __init redirect_acpi_init(struct irq_domain *parent)
+{
+	struct fwnode_handle *fwnode;
+	int ret = -EINVAL, node;
+
+	fwnode = irq_domain_alloc_named_fwnode("redirect");
+	if (!fwnode) {
+		pr_err("Unable to alloc redirect domain handle\n");
+		goto fail;
+	}
+
+	redirect_domain = irq_domain_create_hierarchy(parent, 0, IRD_ENTRIES, fwnode,
+						      &redirect_domain_ops, irde_descs);
+	if (!redirect_domain) {
+		pr_err("Unable to alloc redirect domain\n");
+		goto out_free_fwnode;
+	}
+
+	for_each_node_mask(node, node_possible_map) {
+		ret = redirect_irde_init(node);
+		if (ret)
+			goto out_clear_irde;
+	}
+
+	ret = acpi_cascade_irqdomain_init();
+	if (ret < 0) {
+		pr_err("Failed to cascade IRQ domain, ret=%d\n", ret);
+		goto out_clear_irde;
+	}
+
+	pr_info("LoongArch redirect interrupt controller initialized\n");
+	return 0;
+
+out_clear_irde:
+	for_each_node_mask(node, node_possible_map)
+		redirect_irde_fini(node);
+
+	irq_domain_remove(redirect_domain);
+out_free_fwnode:
+	irq_domain_free_fwnode(fwnode);
+fail:
+	return ret;
+}
-- 
2.41.0