From: Xu Lu
To: paul.walmsley@sifive.com, palmer@dabbelt.com, alexghiti@rivosinc.com, bjorn@rivosinc.com
Cc: lihangjing@bytedance.com, xieyongji@bytedance.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Xu Lu
Subject: [PATCH RESEND v4] riscv: mm: Fix the out of bound issue of vmemmap address
Date: Fri, 3 Jan 2025 17:20:23 +0800
Message-Id: <20250103092023.37083-1-luxu.kernel@bytedance.com>

In the sparse vmemmap model, the vmemmap base address is calculated as:

  ((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT))

and the virtual address of a pfn's struct page is then obtained with an
offset: (vmemmap + pfn). However, when initializing struct pages, the
kernel actually starts from the first page of the memory section that
phys_ram_base belongs to. If that first page's pfn is lower than
(phys_ram_base >> PAGE_SHIFT), we get a VA below VMEMMAP_START when
calculating the VA of its struct page.

For example, if phys_ram_base starts at 0x82000000 with pfn 0x82000, the
first pfn in the same section is actually 0x80000. During
init_unavailable_range(), we will initialize the struct page for pfn
0x80000 at virtual address ((struct page *)VMEMMAP_START - 0x2000),
which is below VMEMMAP_START as well as PCI_IO_END.
This commit fixes this bug by introducing a new variable,
'vmemmap_start_pfn', which is aligned to the memory section size, and
using it instead of phys_ram_base to calculate the vmemmap address.

Fixes: a11dd49dcb93 ("riscv: Sparse-Memory/vmemmap out-of-bounds fix")
Tested-by: Björn Töpel
Reviewed-by: Björn Töpel
Reviewed-by: Alexandre Ghiti
Signed-off-by: Xu Lu
---
 arch/riscv/include/asm/page.h    |  1 +
 arch/riscv/include/asm/pgtable.h |  2 +-
 arch/riscv/mm/init.c             | 17 ++++++++++++++++-
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
index 71aabc5c6713..125f5ecd9565 100644
--- a/arch/riscv/include/asm/page.h
+++ b/arch/riscv/include/asm/page.h
@@ -122,6 +122,7 @@ struct kernel_mapping {
 
 extern struct kernel_mapping kernel_map;
 extern phys_addr_t phys_ram_base;
+extern unsigned long vmemmap_start_pfn;
 
 #define is_kernel_mapping(x)	\
 	((x) >= kernel_map.virt_addr && (x) < (kernel_map.virt_addr + kernel_map.size))

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index d4e99eef90ac..050fdc49b5ad 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -87,7 +87,7 @@
  * Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel
  * is configured with CONFIG_SPARSEMEM_VMEMMAP enabled.
  */
-#define vmemmap		((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT))
+#define vmemmap		((struct page *)VMEMMAP_START - vmemmap_start_pfn)
 
 #define PCI_IO_SIZE      SZ_16M
 #define PCI_IO_END       VMEMMAP_START

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index fc53ce748c80..8d167e09f1fe 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "../kernel/head.h"
@@ -62,6 +63,13 @@ EXPORT_SYMBOL(pgtable_l5_enabled);
 phys_addr_t phys_ram_base __ro_after_init;
 EXPORT_SYMBOL(phys_ram_base);
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+#define VMEMMAP_ADDR_ALIGN	(1ULL << SECTION_SIZE_BITS)
+
+unsigned long vmemmap_start_pfn __ro_after_init;
+EXPORT_SYMBOL(vmemmap_start_pfn);
+#endif
+
 unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
 							__page_aligned_bss;
 EXPORT_SYMBOL(empty_zero_page);
@@ -240,8 +248,12 @@ static void __init setup_bootmem(void)
 	 * Make sure we align the start of the memory on a PMD boundary so that
 	 * at worst, we map the linear mapping with PMD mappings.
 	 */
-	if (!IS_ENABLED(CONFIG_XIP_KERNEL))
+	if (!IS_ENABLED(CONFIG_XIP_KERNEL)) {
 		phys_ram_base = memblock_start_of_DRAM() & PMD_MASK;
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+		vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT;
+#endif
+	}
 
 	/*
 	 * In 64-bit, any use of __va/__pa before this point is wrong as we
@@ -1101,6 +1113,9 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
 	kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom);
 
 	phys_ram_base = CONFIG_PHYS_RAM_BASE;
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+	vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT;
+#endif
 	kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE;
 	kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_start);
 
-- 
2.20.1