From: Song Shuai
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, robh+dt@kernel.org, frowand.list@gmail.com, ajones@ventanamicro.com, alexghiti@rivosinc.com, mpe@ellerman.id.au, arnd@arndb.de, songshuaishuai@tinylab.org, rppt@kernel.org, samuel@sholland.org, panqinglin2020@iscas.ac.cn, conor.dooley@microchip.com, anup@brainfault.org, xianting.tian@linux.alibaba.com, anshuman.khandual@arm.com, heiko@sntech.de
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH V1 1/3] Revert "RISC-V: mark hibernation as nonportable"
Date: Sun, 25 Jun 2023 22:09:29 +0800
Message-Id: <20230625140931.1266216-2-songshuaishuai@tinylab.org>
In-Reply-To: <20230625140931.1266216-1-songshuaishuai@tinylab.org>
References: <20230625140931.1266216-1-songshuaishuai@tinylab.org>

This reverts commit ed309ce522185583b163bd0c74f0d9f299fe1826.

With commit 3335068f8721 ("riscv: Use PUD/P4D/PGD pages for the linear
mapping") reverted, MIN_MEMBLOCK_ADDR points to the kernel load address,
which is placed at a PMD boundary. And firmware always correctly marks its
resident memory, or memory protected with PMP, as reserved per the
devicetree specification and/or the UEFI specification, so those regions
are never put in the linear mapping and memory can be safely saved and
restored by hibernation.
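As a rough illustration of the hibernation argument, here is a minimal
user-space sketch, not kernel code: the region layout and the "no-map"
flag are made-up example values, and best_map_size() mirrors the
post-revert helper from patch 3. Memory that firmware marks "no-map"
simply never enters the linear-mapping loop, so hibernation never saves
or restores it.

/*
 * Minimal user-space sketch (not kernel code) of the behaviour described
 * above.  The region layout, sizes and the "no-map" flag are made-up
 * examples; best_map_size() mirrors the post-revert helper from patch 3.
 */
#include <stddef.h>
#include <stdbool.h>
#include <stdio.h>

#define PMD_SIZE  (2UL * 1024 * 1024)   /* 2 MiB mapping granule on rv64 */
#define PAGE_SIZE (4UL * 1024)

struct region {
        unsigned long base;
        unsigned long size;
        bool nomap;                     /* e.g. PMP-protected firmware memory */
};

/* Pick PMD_SIZE when the range allows it, otherwise fall back to 4K pages. */
static unsigned long best_map_size(unsigned long base, unsigned long size)
{
        if (!(base & (PMD_SIZE - 1)) && size >= PMD_SIZE)
                return PMD_SIZE;
        return PAGE_SIZE;
}

int main(void)
{
        const struct region mem[] = {
                { 0x80000000UL, 0x00200000UL, true  },  /* firmware/PMP, "no-map" */
                { 0x80200000UL, 0x7fe00000UL, false },  /* kernel + usable RAM    */
        };

        for (size_t i = 0; i < sizeof(mem) / sizeof(mem[0]); i++) {
                if (mem[i].nomap) {
                        /* never linearly mapped, so never touched by hibernation */
                        printf("skip %#lx-%#lx (no-map)\n", mem[i].base,
                               mem[i].base + mem[i].size);
                        continue;
                }
                printf("map  %#lx-%#lx at %#lx granularity\n", mem[i].base,
                       mem[i].base + mem[i].size,
                       best_map_size(mem[i].base, mem[i].size));
        }
        return 0;
}

Compiling and running it reports the firmware region as skipped and the
remaining RAM as mapped with 2 MiB granularity.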
Signed-off-by: Song Shuai
---
 arch/riscv/Kconfig | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 5966ad97c30c..17b5fc7f54d4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -800,11 +800,8 @@ menu "Power management options"
 
 source "kernel/power/Kconfig"
 
-# Hibernation is only possible on systems where the SBI implementation has
-# marked its reserved memory as not accessible from, or does not run
-# from the same memory as, Linux
 config ARCH_HIBERNATION_POSSIBLE
-        def_bool NONPORTABLE
+        def_bool y
 
 config ARCH_HIBERNATION_HEADER
         def_bool HIBERNATION
-- 
2.20.1

From: Song Shuai
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, robh+dt@kernel.org, frowand.list@gmail.com, ajones@ventanamicro.com, alexghiti@rivosinc.com, mpe@ellerman.id.au, arnd@arndb.de, songshuaishuai@tinylab.org, rppt@kernel.org, samuel@sholland.org, panqinglin2020@iscas.ac.cn, conor.dooley@microchip.com, anup@brainfault.org, xianting.tian@linux.alibaba.com, anshuman.khandual@arm.com, heiko@sntech.de
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH V1 2/3] Revert "riscv: Check the virtual alignment before choosing a map size"
Date: Sun, 25 Jun 2023 22:09:30 +0800
Message-Id: <20230625140931.1266216-3-songshuaishuai@tinylab.org>
In-Reply-To: <20230625140931.1266216-1-songshuaishuai@tinylab.org>
References: <20230625140931.1266216-1-songshuaishuai@tinylab.org>

This reverts commit 49a0a3731596fc004db6eec3fc674d92a09ef383.
With commit 3335068f8721 ("riscv: Use PUD/P4D/PGD pages for the linear
mapping") reverted, best_map_size() only uses PMD_SIZE or PAGE_SIZE for
the linear mapping, and phys_ram_base, which va_pa_offset is based on,
points to the kernel load address, which is 2M-aligned for rv64. So there
is no need to check the virtual alignment before choosing a map size.

Signed-off-by: Song Shuai
---
 arch/riscv/mm/init.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 4fa420faa780..38c4b4d6b64f 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -660,19 +660,18 @@ void __init create_pgd_mapping(pgd_t *pgdp,
         create_pgd_next_mapping(nextp, va, pa, sz, prot);
 }
 
-static uintptr_t __init best_map_size(phys_addr_t pa, uintptr_t va,
-                                      phys_addr_t size)
+static uintptr_t __init best_map_size(phys_addr_t base, phys_addr_t size)
 {
-        if (!(pa & (PGDIR_SIZE - 1)) && !(va & (PGDIR_SIZE - 1)) && size >= PGDIR_SIZE)
+        if (!(base & (PGDIR_SIZE - 1)) && size >= PGDIR_SIZE)
                 return PGDIR_SIZE;
 
-        if (!(pa & (P4D_SIZE - 1)) && !(va & (P4D_SIZE - 1)) && size >= P4D_SIZE)
+        if (!(base & (P4D_SIZE - 1)) && size >= P4D_SIZE)
                 return P4D_SIZE;
 
-        if (!(pa & (PUD_SIZE - 1)) && !(va & (PUD_SIZE - 1)) && size >= PUD_SIZE)
+        if (!(base & (PUD_SIZE - 1)) && size >= PUD_SIZE)
                 return PUD_SIZE;
 
-        if (!(pa & (PMD_SIZE - 1)) && !(va & (PMD_SIZE - 1)) && size >= PMD_SIZE)
+        if (!(base & (PMD_SIZE - 1)) && size >= PMD_SIZE)
                 return PMD_SIZE;
 
         return PAGE_SIZE;
@@ -1178,7 +1177,7 @@ static void __init create_linear_mapping_range(phys_addr_t start,
         for (pa = start; pa < end; pa += map_size) {
                 va = (uintptr_t)__va(pa);
                 map_size = fixed_map_size ? fixed_map_size :
-                                            best_map_size(pa, va, end - pa);
+                                            best_map_size(pa, end - pa);
 
                 create_pgd_mapping(swapper_pg_dir, va, pa, map_size,
                                    pgprot_from_va(va));
-- 
2.20.1

From: Song Shuai
To: songshuaishuai@tinylab.org
Cc: ajones@ventanamicro.com, alexghiti@rivosinc.com, anshuman.khandual@arm.com, anup@brainfault.org, aou@eecs.berkeley.edu, arnd@arndb.de, conor.dooley@microchip.com, devicetree@vger.kernel.org, frowand.list@gmail.com, heiko@sntech.de, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, mpe@ellerman.id.au, palmer@dabbelt.com, panqinglin2020@iscas.ac.cn, paul.walmsley@sifive.com, robh+dt@kernel.org, rppt@kernel.org, samuel@sholland.org, xianting.tian@linux.alibaba.com
Subject: [PATCH V1 3/3] Revert "riscv: Use PUD/P4D/PGD pages for the linear mapping"
Date: Sun, 25 Jun 2023 23:28:41 +0800
Message-Id: <20230625152841.1280937-1-suagrfillet@gmail.com>
In-Reply-To: <20230625140931.1266216-1-songshuaishuai@tinylab.org>
References: <20230625140931.1266216-1-songshuaishuai@tinylab.org>

From: Song Shuai

This reverts commit 3335068f87217ea59d08f462187dc856652eea15.

The reverted commit maps the PMP regions from some versions of OpenSBI in
the linear mapping, which leads to an access fault when doing hibernation
[1] or on some speculative accesses. Its best_map_size() also doesn't
check the virtual alignment before choosing a map size, which causes a
page fault [2].

We can let best_map_size() take the VA into consideration via commit
49a0a3731596 ("riscv: Check the virtual alignment before choosing a map
size"), but that commit slows down the boot and consumes some system
memory when booting via UEFI. Besides, a recent talk [3] from Mike
Rapoport suggests that the performance improvement from using
PUD/P4D/PGD pages for the linear mapping is marginal, and OpenSBI has
already marked all the PMP-protected regions as "no-map" [4] in line with
that talk.

For all those reasons, let's revert this commit.
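The alignment hazard behind [2] comes down to a few lines of arithmetic.
The stand-alone sketch below uses made-up addresses rather than the
kernel's real layout: when the VA-to-PA offset is only 2M-aligned, a
PUD-aligned physical address maps to a virtual address that is not
PUD-aligned, so a map size chosen from the PA alone would install a
misaligned large mapping. Checking both sides, as commit 49a0a3731596
does, forces the fallback; keeping PA and VA congruent modulo 2M, as this
series does, makes the extra check unnecessary.

/*
 * Stand-alone sketch of the VA/PA alignment hazard mentioned in [2].
 * Addresses and the offset are made-up values, not the kernel's layout.
 */
#include <stdio.h>

#define PUD_SIZE (1UL << 30)    /* 1 GiB */
#define PMD_SIZE (1UL << 21)    /* 2 MiB */

static int is_aligned(unsigned long x, unsigned long sz)
{
        return !(x & (sz - 1));
}

int main(void)
{
        unsigned long pa = 0x80000000UL;        /* PUD-aligned physical base */
        /* offset that is only PMD-aligned, as after a 2M-aligned kernel load */
        unsigned long va_pa_offset = 0xffffffd800200000UL - 0x80000000UL;
        unsigned long va = pa + va_pa_offset;

        printf("pa %#lx PUD-aligned: %d, va %#lx PUD-aligned: %d\n",
               pa, is_aligned(pa, PUD_SIZE), va, is_aligned(va, PUD_SIZE));

        /* checking both, as commit 49a0a3731596 does, falls back to PMD/4K */
        if (is_aligned(pa, PUD_SIZE) && is_aligned(va, PUD_SIZE))
                printf("PUD mapping would be safe\n");
        else
                printf("must fall back to a smaller mapping size\n");
        return 0;
}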
[1] https://lore.kernel.org/linux-riscv/CAAYs2=gQvkhTeioMmqRDVGjdtNF_vhB+vm_1dHJxPNi75YDQ_Q@mail.gmail.com/
[2] https://lore.kernel.org/linux-riscv/tencent_7C3B580B47C1B17C16488EC1@qq.com/
[3] https://lwn.net/Articles/931406/
[4] https://github.com/riscv-software-src/opensbi/commit/8153b2622b08802cc542f30a1fcba407a5667ab9

Signed-off-by: Song Shuai
---
 arch/riscv/include/asm/page.h | 16 ---------------
 arch/riscv/mm/init.c          | 38 ++++++-----------------------------
 arch/riscv/mm/physaddr.c      | 16 ---------------
 drivers/of/fdt.c              | 11 +++++-----
 4 files changed, 11 insertions(+), 70 deletions(-)

diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
index b55ba20903ec..21b346ab81c2 100644
--- a/arch/riscv/include/asm/page.h
+++ b/arch/riscv/include/asm/page.h
@@ -89,14 +89,6 @@ typedef struct page *pgtable_t;
 #define PTE_FMT "%08lx"
 #endif
 
-#ifdef CONFIG_64BIT
-/*
- * We override this value as its generic definition uses __pa too early in
- * the boot process (before kernel_map.va_pa_offset is set).
- */
-#define MIN_MEMBLOCK_ADDR      0
-#endif
-
 #ifdef CONFIG_MMU
 #define ARCH_PFN_OFFSET         (PFN_DOWN((unsigned long)phys_ram_base))
 #else
@@ -128,11 +120,7 @@ extern phys_addr_t phys_ram_base;
 #define is_linear_mapping(x)    \
         ((x) >= PAGE_OFFSET && (!IS_ENABLED(CONFIG_64BIT) || (x) < PAGE_OFFSET + KERN_VIRT_SIZE))
 
-#ifndef CONFIG_DEBUG_VIRTUAL
 #define linear_mapping_pa_to_va(x)      ((void *)((unsigned long)(x) + kernel_map.va_pa_offset))
-#else
-void *linear_mapping_pa_to_va(unsigned long x);
-#endif
 #define kernel_mapping_pa_to_va(y)      ({ \
         unsigned long _y = (unsigned long)(y); \
         (IS_ENABLED(CONFIG_XIP_KERNEL) && _y < phys_ram_base) ? \
@@ -141,11 +129,7 @@ void *linear_mapping_pa_to_va(unsigned long x);
         })
 #define __pa_to_va_nodebug(x)           linear_mapping_pa_to_va(x)
 
-#ifndef CONFIG_DEBUG_VIRTUAL
 #define linear_mapping_va_to_pa(x)      ((unsigned long)(x) - kernel_map.va_pa_offset)
-#else
-phys_addr_t linear_mapping_va_to_pa(unsigned long x);
-#endif
 #define kernel_mapping_va_to_pa(y)      ({ \
         unsigned long _y = (unsigned long)(y); \
         (IS_ENABLED(CONFIG_XIP_KERNEL) && _y < kernel_map.virt_addr + XIP_OFFSET) ? \
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 38c4b4d6b64f..4561781bcf60 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -216,14 +216,6 @@ static void __init setup_bootmem(void)
         phys_ram_end = memblock_end_of_DRAM();
         if (!IS_ENABLED(CONFIG_XIP_KERNEL))
                 phys_ram_base = memblock_start_of_DRAM();
-
-        /*
-         * In 64-bit, any use of __va/__pa before this point is wrong as we
-         * did not know the start of DRAM before.
-         */
-        if (IS_ENABLED(CONFIG_64BIT))
-                kernel_map.va_pa_offset = PAGE_OFFSET - phys_ram_base;
-
         /*
          * memblock allocator is not aware of the fact that last 4K bytes of
          * the addressable memory can not be mapped because of IS_ERR_VALUE
@@ -662,16 +654,9 @@ void __init create_pgd_mapping(pgd_t *pgdp,
 
 static uintptr_t __init best_map_size(phys_addr_t base, phys_addr_t size)
 {
-        if (!(base & (PGDIR_SIZE - 1)) && size >= PGDIR_SIZE)
-                return PGDIR_SIZE;
-
-        if (!(base & (P4D_SIZE - 1)) && size >= P4D_SIZE)
-                return P4D_SIZE;
-
-        if (!(base & (PUD_SIZE - 1)) && size >= PUD_SIZE)
-                return PUD_SIZE;
-
-        if (!(base & (PMD_SIZE - 1)) && size >= PMD_SIZE)
+        /* Upgrade to PMD_SIZE mappings whenever possible */
+        base &= PMD_SIZE - 1;
+        if (!base && size >= PMD_SIZE)
                 return PMD_SIZE;
 
         return PAGE_SIZE;
@@ -1037,22 +1022,11 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
         set_satp_mode(dtb_pa);
 #endif
 
-        /*
-         * In 64-bit, we defer the setup of va_pa_offset to setup_bootmem,
-         * where we have the system memory layout: this allows us to align
-         * the physical and virtual mappings and then make use of PUD/P4D/PGD
-         * for the linear mapping. This is only possible because the kernel
-         * mapping lies outside the linear mapping.
-         * In 32-bit however, as the kernel resides in the linear mapping,
-         * setup_vm_final can not change the mapping established here,
-         * otherwise the same kernel addresses would get mapped to different
-         * physical addresses (if the start of dram is different from the
-         * kernel physical address start).
-         */
-        kernel_map.va_pa_offset = IS_ENABLED(CONFIG_64BIT) ?
-                                  0UL : PAGE_OFFSET - kernel_map.phys_addr;
+        kernel_map.va_pa_offset = PAGE_OFFSET - kernel_map.phys_addr;
         kernel_map.va_kernel_pa_offset = kernel_map.virt_addr - kernel_map.phys_addr;
 
+        phys_ram_base = kernel_map.phys_addr;
+
         /*
          * The default maximal physical memory size is KERN_VIRT_SIZE for 32-bit
          * kernel, whereas for 64-bit kernel, the end of the virtual address
diff --git a/arch/riscv/mm/physaddr.c b/arch/riscv/mm/physaddr.c
index 18706f457da7..9b18bda74154 100644
--- a/arch/riscv/mm/physaddr.c
+++ b/arch/riscv/mm/physaddr.c
@@ -33,19 +33,3 @@ phys_addr_t __phys_addr_symbol(unsigned long x)
         return __va_to_pa_nodebug(x);
 }
 EXPORT_SYMBOL(__phys_addr_symbol);
-
-phys_addr_t linear_mapping_va_to_pa(unsigned long x)
-{
-        BUG_ON(!kernel_map.va_pa_offset);
-
-        return ((unsigned long)(x) - kernel_map.va_pa_offset);
-}
-EXPORT_SYMBOL(linear_mapping_va_to_pa);
-
-void *linear_mapping_pa_to_va(unsigned long x)
-{
-        BUG_ON(!kernel_map.va_pa_offset);
-
-        return ((void *)((unsigned long)(x) + kernel_map.va_pa_offset));
-}
-EXPORT_SYMBOL(linear_mapping_pa_to_va);
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index bf502ba8da95..c28aedd7ae1f 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -888,13 +888,12 @@ const void * __init of_flat_dt_match_machine(const void *default_match,
 static void __early_init_dt_declare_initrd(unsigned long start,
                                            unsigned long end)
 {
-        /*
-         * __va() is not yet available this early on some platforms. In that
-         * case, the platform uses phys_initrd_start/phys_initrd_size instead
-         * and does the VA conversion itself.
+        /* ARM64 would cause a BUG to occur here when CONFIG_DEBUG_VM is
+         * enabled since __va() is called too early. ARM64 does make use
+         * of phys_initrd_start/phys_initrd_size so we can skip this
+         * conversion.
          */
-        if (!IS_ENABLED(CONFIG_ARM64) &&
-            !(IS_ENABLED(CONFIG_RISCV) && IS_ENABLED(CONFIG_64BIT))) {
+        if (!IS_ENABLED(CONFIG_ARM64)) {
                 initrd_start = (unsigned long)__va(start);
                 initrd_end = (unsigned long)__va(end);
                 initrd_below_start_ok = 1;
-- 
2.20.1
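A note on the drivers/of/fdt.c hunk: a __va()-style conversion is only
meaningful once the architecture has set its VA/PA offset. With this
revert, riscv initialises kernel_map.va_pa_offset in setup_vm() again (see
the init.c hunk above), before the flattened devicetree is parsed, so the
riscv exclusion can go away. The toy model below uses made-up addresses
and is not the kernel's implementation; it only shows why calling such a
helper too early yields a bogus pointer.

/*
 * Toy model of why __early_init_dt_declare_initrd() must not call a
 * __va()-style helper before the VA/PA offset is known.  Not kernel code;
 * the addresses are made-up.
 */
#include <stdio.h>

static unsigned long va_pa_offset;      /* zero until the arch sets it up */

static void *pa_to_va(unsigned long pa)
{
        /* with va_pa_offset still zero this just echoes the PA back */
        return (void *)(pa + va_pa_offset);
}

int main(void)
{
        unsigned long initrd_pa = 0x84000000UL;

        printf("too early: %p\n", pa_to_va(initrd_pa)); /* bogus, offset unset */

        /* after the revert, riscv sets the offset early in setup_vm() */
        va_pa_offset = 0xffffffd800000000UL - 0x80000000UL;
        printf("after setup_vm(): %p\n", pa_to_va(initrd_pa));
        return 0;
}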