From nobody Thu Nov 28 12:54:01 2024
From: Vikram Garhwal
To: xen-devel@lists.xenproject.org
Cc: , , , Bertrand Marquis, Volodymyr Babchuk
Subject: [XEN][RFC PATCH v4 01/16] xen/arm/device: Remove __init from function type
Date: Tue, 6 Dec 2022 22:15:22 -0800
Message-ID: <20221207061537.7266-2-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20221207061537.7266-1-vikram.garhwal@amd.com>
References: <20221207061537.7266-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Remove __init from the following functions so that they can also be called
at runtime:
 1. map_irq_to_domain()
 2. handle_device_interrupts()
 3. map_range_to_domain()
 4. unflatten_dt_node()
 5. unflatten_device_tree()

Move map_irq_to_domain(), handle_device_interrupts() and
map_range_to_domain() to device.c.

unflatten_device_tree(): add handling of memory allocation failure.

These changes are needed to support dynamic programming of device tree
nodes, where an overlay node is added to the fdt, the unflattened node is
added to dt_host, and IRQ and MMIO mappings are set up for the added node.
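For illustration only (not part of this patch): a runtime caller of the now
non-__init unflatten_device_tree() is expected to check the new return value.
A minimal sketch, with a hypothetical overlay-handler name:

    /* Hypothetical sketch: the function name and surrounding flow are illustrative. */
    static int example_unflatten_overlay(const void *overlay_fdt)
    {
        struct dt_device_node *overlay_nodes = NULL;
        int rc;

        /* unflatten_device_tree() now reports allocation failure. */
        rc = unflatten_device_tree(overlay_fdt, &overlay_nodes);
        if ( rc )
            return rc;  /* e.g. -ENOMEM when _xmalloc() fails */

        /* ... attach overlay_nodes to dt_host and map IRQs/MMIO ... */
        return 0;
    }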
Signed-off-by: Vikram Garhwal
---
 xen/arch/arm/device.c            | 145 +++++++++++++++++++++++++++++++
 xen/arch/arm/domain_build.c      | 142 ------------------------------
 xen/arch/arm/include/asm/setup.h |   3 +
 xen/common/device_tree.c         |  27 +++---
 xen/include/xen/device_tree.h    |   5 ++
 5 files changed, 170 insertions(+), 152 deletions(-)

diff --git a/xen/arch/arm/device.c b/xen/arch/arm/device.c
index 70cd6c1a19..d299c04e62 100644
--- a/xen/arch/arm/device.c
+++ b/xen/arch/arm/device.c
@@ -21,6 +21,9 @@
 #include
 #include
 #include
+#include
+#include
+#include

 extern const struct device_desc _sdevice[], _edevice[];
 extern const struct acpi_device_desc _asdevice[], _aedevice[];
@@ -84,6 +87,148 @@ enum device_class device_get_class(const struct dt_device_node *dev)
     return DEVICE_UNKNOWN;
 }

+int map_irq_to_domain(struct domain *d, unsigned int irq,
+                      bool need_mapping, const char *devname)
+{
+    int res;
+
+    res = irq_permit_access(d, irq);
+    if ( res )
+    {
+        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
+               d->domain_id, irq);
+        return res;
+    }
+
+    if ( need_mapping )
+    {
+        /*
+         * Checking the return of vgic_reserve_virq is not
+         * necessary. It should not fail except when we try to map
+         * the IRQ twice. This can legitimately happen if the IRQ is shared
+         */
+        vgic_reserve_virq(d, irq);
+
+        res = route_irq_to_guest(d, irq, irq, devname);
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
+                   irq, d->domain_id);
+            return res;
+        }
+    }
+
+    dt_dprintk("  - IRQ: %u\n", irq);
+    return 0;
+}
+
+int map_range_to_domain(const struct dt_device_node *dev,
+                        u64 addr, u64 len, void *data)
+{
+    struct map_range_data *mr_data = data;
+    struct domain *d = mr_data->d;
+    int res;
+
+    /*
+     * reserved-memory regions are RAM carved out for a special purpose.
+     * They are not MMIO and therefore a domain should not be able to
+     * manage them via the IOMEM interface.
+     */
+    if ( strncasecmp(dt_node_full_name(dev), "/reserved-memory/",
+                     strlen("/reserved-memory/")) != 0 )
+    {
+        res = iomem_permit_access(d, paddr_to_pfn(addr),
+                                  paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
+        if ( res )
+        {
+            printk(XENLOG_ERR "Unable to permit to dom%d access to"
+                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                   d->domain_id,
+                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
+            return res;
+        }
+    }
+
+    if ( !mr_data->skip_mapping )
+    {
+        res = map_regions_p2mt(d,
+                               gaddr_to_gfn(addr),
+                               PFN_UP(len),
+                               maddr_to_mfn(addr),
+                               mr_data->p2mt);
+
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to map 0x%"PRIx64
+                   " - 0x%"PRIx64" in domain %d\n",
+                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
+                   d->domain_id);
+            return res;
+        }
+    }
+
+    dt_dprintk("  - MMIO: %010"PRIx64" - %010"PRIx64" P2MType=%x\n",
+               addr, addr + len, mr_data->p2mt);
+
+    return 0;
+}
+
+/*
+ * handle_device_interrupts retrieves the interrupts configuration from
+ * a device tree node and maps those interrupts to the target domain.
+ *
+ * Returns:
+ *  < 0 error
+ *    0 success
+ */
+int handle_device_interrupts(struct domain *d,
+                             struct dt_device_node *dev,
+                             bool need_mapping)
+{
+    unsigned int i, nirq;
+    int res;
+    struct dt_raw_irq rirq;
+
+    nirq = dt_number_of_irq(dev);
+
+    /* Give permission and map IRQs */
+    for ( i = 0; i < nirq; i++ )
+    {
+        res = dt_device_get_raw_irq(dev, i, &rirq);
+        if ( res )
+        {
+            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
+                   i, dt_node_full_name(dev));
+            return res;
+        }
+
+        /*
+         * Don't map IRQ that have no physical meaning
+         * ie: IRQ whose controller is not the GIC
+         */
+        if ( rirq.controller != dt_interrupt_controller )
+        {
+            dt_dprintk("irq %u not connected to primary controller. Connected to %s\n",
+                       i, dt_node_full_name(rirq.controller));
+            continue;
+        }
+
+        res = platform_get_irq(dev, i);
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to get irq %u for %s\n",
+                   i, dt_node_full_name(dev));
+            return res;
+        }
+
+        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
+        if ( res )
+            return res;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 4fb5c20b13..acde8e714e 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2229,41 +2229,6 @@ int __init make_chosen_node(const struct kernel_info *kinfo)
     return res;
 }

-int __init map_irq_to_domain(struct domain *d, unsigned int irq,
-                             bool need_mapping, const char *devname)
-{
-    int res;
-
-    res = irq_permit_access(d, irq);
-    if ( res )
-    {
-        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
-               d->domain_id, irq);
-        return res;
-    }
-
-    if ( need_mapping )
-    {
-        /*
-         * Checking the return of vgic_reserve_virq is not
-         * necessary. It should not fail except when we try to map
-         * the IRQ twice. This can legitimately happen if the IRQ is shared
-         */
-        vgic_reserve_virq(d, irq);
-
-        res = route_irq_to_guest(d, irq, irq, devname);
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
-                   irq, d->domain_id);
-            return res;
-        }
-    }
-
-    dt_dprintk("  - IRQ: %u\n", irq);
-    return 0;
-}
-
 static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
                                        const struct dt_irq *dt_irq,
                                        void *data)
@@ -2295,57 +2260,6 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
     return 0;
 }

-int __init map_range_to_domain(const struct dt_device_node *dev,
-                               u64 addr, u64 len, void *data)
-{
-    struct map_range_data *mr_data = data;
-    struct domain *d = mr_data->d;
-    int res;
-
-    /*
-     * reserved-memory regions are RAM carved out for a special purpose.
-     * They are not MMIO and therefore a domain should not be able to
-     * manage them via the IOMEM interface.
- */
-    if ( strncasecmp(dt_node_full_name(dev), "/reserved-memory/",
-                     strlen("/reserved-memory/")) != 0 )
-    {
-        res = iomem_permit_access(d, paddr_to_pfn(addr),
-                                  paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
-        if ( res )
-        {
-            printk(XENLOG_ERR "Unable to permit to dom%d access to"
-                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
-                   d->domain_id,
-                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
-            return res;
-        }
-    }
-
-    if ( !mr_data->skip_mapping )
-    {
-        res = map_regions_p2mt(d,
-                               gaddr_to_gfn(addr),
-                               PFN_UP(len),
-                               maddr_to_mfn(addr),
-                               mr_data->p2mt);
-
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to map 0x%"PRIx64
-                   " - 0x%"PRIx64" in domain %d\n",
-                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
-                   d->domain_id);
-            return res;
-        }
-    }
-
-    dt_dprintk("  - MMIO: %010"PRIx64" - %010"PRIx64" P2MType=%x\n",
-               addr, addr + len, mr_data->p2mt);
-
-    return 0;
-}
-
 /*
  * For a node which describes a discoverable bus (such as a PCI bus)
  * then we may need to perform additional mappings in order to make
@@ -2373,62 +2287,6 @@ static int __init map_device_children(const struct dt_device_node *dev,
     return 0;
 }

-/*
- * handle_device_interrupts retrieves the interrupts configuration from
- * a device tree node and maps those interrupts to the target domain.
- *
- * Returns:
- *  < 0 error
- *    0 success
- */
-static int __init handle_device_interrupts(struct domain *d,
-                                           struct dt_device_node *dev,
-                                           bool need_mapping)
-{
-    unsigned int i, nirq;
-    int res;
-    struct dt_raw_irq rirq;
-
-    nirq = dt_number_of_irq(dev);
-
-    /* Give permission and map IRQs */
-    for ( i = 0; i < nirq; i++ )
-    {
-        res = dt_device_get_raw_irq(dev, i, &rirq);
-        if ( res )
-        {
-            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
-                   i, dt_node_full_name(dev));
-            return res;
-        }
-
-        /*
-         * Don't map IRQ that have no physical meaning
-         * ie: IRQ whose controller is not the GIC
-         */
-        if ( rirq.controller != dt_interrupt_controller )
-        {
-            dt_dprintk("irq %u not connected to primary controller. Connected to %s\n",
-                       i, dt_node_full_name(rirq.controller));
-            continue;
-        }
-
-        res = platform_get_irq(dev, i);
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to get irq %u for %s\n",
-                   i, dt_node_full_name(dev));
-            return res;
-        }
-
-        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
-        if ( res )
-            return res;
-    }
-
-    return 0;
-}
-
 /*
  * For a given device node:
  * - Give permission to the guest to manage IRQ and MMIO range
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index fdbf68aadc..ec050848aa 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -163,6 +163,9 @@ void device_tree_get_reg(const __be32 **cell, u32 address_cells,
 u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);

+int handle_device_interrupts(struct domain *d, struct dt_device_node *dev,
+                             bool need_mapping);
+
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 6c9712ab7b..6518eff9b0 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1811,12 +1811,12 @@ int dt_count_phandle_with_args(const struct dt_device_node *np,
  * @allnextpp: pointer to ->allnext from last allocated device_node
  * @fpsize: Size of the node path up at the current depth.
  */
-static unsigned long __init unflatten_dt_node(const void *fdt,
-                                              unsigned long mem,
-                                              unsigned long *p,
-                                              struct dt_device_node *dad,
-                                              struct dt_device_node ***allnextpp,
-                                              unsigned long fpsize)
+static unsigned long unflatten_dt_node(const void *fdt,
+                                       unsigned long mem,
+                                       unsigned long *p,
+                                       struct dt_device_node *dad,
+                                       struct dt_device_node ***allnextpp,
+                                       unsigned long fpsize)
 {
     struct dt_device_node *np;
     struct dt_property *pp, **prev_pp = NULL;
@@ -2047,7 +2047,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
 }

 /**
- * __unflatten_device_tree - create tree of device_nodes from flat blob
+ * unflatten_device_tree - create tree of device_nodes from flat blob
  *
  * unflattens a device-tree, creating the
  * tree of struct device_node. It also fills the "name" and "type"
@@ -2056,8 +2056,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
  * @fdt: The fdt to expand
  * @mynodes: The device_node tree created by the call
  */
-static void __init __unflatten_device_tree(const void *fdt,
-                                           struct dt_device_node **mynodes)
+int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
 {
     unsigned long start, mem, size;
     struct dt_device_node **allnextp = mynodes;
@@ -2079,6 +2078,12 @@ static void __init __unflatten_device_tree(const void *fdt,
     /* Allocate memory for the expanded device tree */
     mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct dt_device_node));

+    if ( mem == 0 )
+    {
+        printk(XENLOG_ERR "Cannot allocate memory for unflatten device tree\n");
+        return -ENOMEM;
+    }
+
     ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);

     dt_dprintk("  unflattening %lx...\n", mem);
@@ -2095,6 +2100,8 @@ static void __init __unflatten_device_tree(const void *fdt,
     *allnextp = NULL;

     dt_dprintk(" <- unflatten_device_tree()\n");
+
+    return 0;
 }

 static void dt_alias_add(struct dt_alias_prop *ap,
@@ -2179,7 +2186,7 @@ dt_find_interrupt_controller(const struct dt_device_match *matches)

 void __init dt_unflatten_host_device_tree(void)
 {
-    __unflatten_device_tree(device_tree_flattened, &dt_host);
+    unflatten_device_tree(device_tree_flattened, &dt_host);
     dt_alias_scan();
 }

diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index a28937d12a..bde46d7120 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -181,6 +181,11 @@ int device_tree_for_each_node(const void *fdt, int node,
  */
 void dt_unflatten_host_device_tree(void);

+/**
+ * unflatten any device tree.
+ */
+int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes);
+
 /**
  * IRQ translation callback
  * TODO: For the moment we assume that we only have ONE
--
2.17.1