From: Baoquan He <bhe@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, stephen.s.brennan@oracle.com, urezki@gmail.com,
	willy@infradead.org, akpm@linux-foundation.org, hch@infradead.org,
	Baoquan He <bhe@redhat.com>
Subject: [PATCH v1 3/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas
Date: Sun, 4 Dec 2022 09:30:42 +0800
Message-Id: <20221204013046.154960-4-bhe@redhat.com>
In-Reply-To: <20221204013046.154960-1-bhe@redhat.com>
References: <20221204013046.154960-1-bhe@redhat.com>

Currently, vread() can read out vmalloc areas that are associated with
a vm_struct. It cannot handle areas created through the vm_map_ram()
interface, because those have no associated vm_struct, so vread()
simply skips them.

Here, add a new function, vb_vread(), to read out areas managed by a
vmap_block specifically. Then recognize vm_map_ram areas via
vmap_area->flags and handle them accordingly.
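For illustration, a minimal sketch of the kind of mapping this lets
vread() see; the helper and its context are hypothetical, not part of
this series. vm_map_ram() installs pages into a vmap_area (backed by a
vmap_block for small requests) without allocating any vm_struct, which
is exactly why vread() used to treat such a range as a hole:

	#include <linux/mm.h>
	#include <linux/numa.h>
	#include <linux/vmalloc.h>

	/*
	 * Hypothetical caller: unlike vmalloc(), vm_map_ram() creates
	 * no vm_struct for the mapping, only a vmap_area with VMAP_RAM
	 * (and, for small requests, VMAP_BLOCK) set in its flags.
	 * Before this patch, vread() skipped such ranges entirely.
	 */
	static void *map_pages_example(struct page **pages,
				       unsigned int nr_pages)
	{
		return vm_map_ram(pages, nr_pages, NUMA_NO_NODE);
	}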
Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/vmalloc.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 54 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d6f376060d83..e6b46da3e044 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3519,6 +3519,46 @@ static int aligned_vread(char *buf, char *addr, unsigned long count)
 	return copied;
 }
 
+static void vb_vread(char *buf, char *addr, int count)
+{
+	char *start;
+	struct vmap_block *vb;
+	unsigned long offset;
+	unsigned int rs, re, n;
+
+	offset = ((unsigned long)addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
+	vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
+
+	spin_lock(&vb->lock);
+	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
+		spin_unlock(&vb->lock);
+		memset(buf, 0, count);
+		return;
+	}
+	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
+		if (!count)
+			break;
+		start = vmap_block_vaddr(vb->va->va_start, rs);
+		if (addr < start) {
+			if (count == 0)
+				break;
+			*buf = '\0';
+			buf++;
+			addr++;
+			count--;
+		}
+		n = (re - rs + 1) << PAGE_SHIFT;
+		if (n > count)
+			n = count;
+		aligned_vread(buf, start, n);
+
+		buf += n;
+		addr += n;
+		count -= n;
+	}
+	spin_unlock(&vb->lock);
+}
+
 /**
  * vread() - read vmalloc area in a safe way.
  * @buf: buffer for reading data
@@ -3549,7 +3589,7 @@ long vread(char *buf, char *addr, unsigned long count)
 	struct vm_struct *vm;
 	char *vaddr, *buf_start = buf;
 	unsigned long buflen = count;
-	unsigned long n;
+	unsigned long n, size, flags;
 
 	addr = kasan_reset_tag(addr);
 
@@ -3570,12 +3610,16 @@ long vread(char *buf, char *addr, unsigned long count)
 		if (!count)
 			break;
 
-		if (!va->vm)
+		vm = va->vm;
+		flags = va->flags & VMAP_FLAGS_MASK;
+
+		if (!vm && !flags)
 			continue;
 
-		vm = va->vm;
-		vaddr = (char *) vm->addr;
-		if (addr >= vaddr + get_vm_area_size(vm))
+		vaddr = (char *) va->va_start;
+		size = flags ? va_size(va) : get_vm_area_size(vm);
+
+		if (addr >= vaddr + size)
 			continue;
 		while (addr < vaddr) {
 			if (count == 0)
@@ -3585,10 +3629,13 @@ long vread(char *buf, char *addr, unsigned long count)
 			addr++;
 			count--;
 		}
-		n = vaddr + get_vm_area_size(vm) - addr;
+		n = vaddr + size - addr;
 		if (n > count)
 			n = count;
-		if (!(vm->flags & VM_IOREMAP))
+
+		if ((flags & (VMAP_RAM|VMAP_BLOCK)) == (VMAP_RAM|VMAP_BLOCK))
+			vb_vread(buf, addr, n);
+		else if ((flags & VMAP_RAM) || !(vm->flags & VM_IOREMAP))
 			aligned_vread(buf, addr, n);
 		else /* IOREMAP area is treated as memory hole */
 			memset(buf, 0, n);
-- 
2.34.1
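
As a usage sketch (hypothetical wrapper, not from this series): readers
in the style of fs/proc/kcore.c call vread() with a kernel buffer;
holes, IOREMAP areas, and unused vmap_block pages come back zero-filled,
and 0 is returned only when the requested range intersects no vmalloc or
vm_map_ram area at all:

	#include <linux/errno.h>
	#include <linux/vmalloc.h>

	/* Hypothetical consumer wrapper, loosely after fs/proc/kcore.c. */
	static ssize_t dump_vmalloc_range(char *kbuf, char *vaddr, size_t len)
	{
		long ret;

		/* vread() copies page by page and zero-fills any holes. */
		ret = vread(kbuf, vaddr, len);
		if (!ret)
			return -ENXIO;	/* no vmalloc/vm_map_ram area here */

		return ret;
	}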