From nobody Mon Sep 15 11:10:09 2025
From: Baoquan He <bhe@redhat.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, urezki@gmail.com, lstoakes@gmail.com,
	stephen.s.brennan@oracle.com, willy@infradead.org,
	akpm@linux-foundation.org, hch@infradead.org,
	Baoquan He <bhe@redhat.com>
Subject: [PATCH v3 3/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas
Date: Fri, 13 Jan 2023 11:19:17 +0800
Message-Id: <20230113031921.64716-4-bhe@redhat.com>
In-Reply-To: <20230113031921.64716-1-bhe@redhat.com>
References: <20230113031921.64716-1-bhe@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, vread() can read out vmalloc areas that are associated with a
vm_struct.  It cannot, however, handle areas created by the vm_map_ram()
interface, because those have no associated vm_struct, so vread() skips
them all.

Add a new function, vmap_ram_vread(), to read out vm_map_ram areas.  An
area created directly with the vm_map_ram() interface can be handled like
the other normal vmap areas with aligned_vread().  Areas that are further
subdivided and managed through a vmap_block, however, need their
page-aligned used regions read out carefully, with the holes between
regions zero-filled.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/vmalloc.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 73 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ab4825050b5c..13875bc41e27 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3544,6 +3544,65 @@ static int aligned_vread(char *buf, char *addr, unsigned long count)
 	return copied;
 }
 
+static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags)
+{
+	char *start;
+	struct vmap_block *vb;
+	unsigned long offset;
+	unsigned int rs, re, n;
+
+	/*
+	 * If it's area created by vm_map_ram() interface directly, but
+	 * not further subdividing and delegating management to vmap_block,
+	 * handle it here.
+	 */
+	if (!(flags & VMAP_BLOCK)) {
+		aligned_vread(buf, addr, count);
+		return;
+	}
+
+	/*
+	 * Area is split into regions and tracked with vmap_block, read out
+	 * each region and zero fill the hole between regions.
+	 */
+	vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
+
+	spin_lock(&vb->lock);
+	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
+		spin_unlock(&vb->lock);
+		memset(buf, 0, count);
+		return;
+	}
+	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
+		if (!count)
+			break;
+		start = vmap_block_vaddr(vb->va->va_start, rs);
+		while (addr < start) {
+			if (count == 0)
+				break;
+			*buf = '\0';
+			buf++;
+			addr++;
+			count--;
+		}
+		/* it could start reading from the middle of used region */
+		offset = offset_in_page(addr);
+		n = ((re - rs + 1) << PAGE_SHIFT) - offset;
+		if (n > count)
+			n = count;
+		aligned_vread(buf, start+offset, n);
+
+		buf += n;
+		addr += n;
+		count -= n;
+	}
+	spin_unlock(&vb->lock);
+
+	/* zero-fill the left dirty or free regions */
+	if (count)
+		memset(buf, 0, count);
+}
+
 /**
  * vread() - read vmalloc area in a safe way.
  * @buf: buffer for reading data
@@ -3574,7 +3633,7 @@ long vread(char *buf, char *addr, unsigned long count)
 	struct vm_struct *vm;
 	char *vaddr, *buf_start = buf;
 	unsigned long buflen = count;
-	unsigned long n;
+	unsigned long n, size, flags;
 
 	addr = kasan_reset_tag(addr);
 
@@ -3595,12 +3654,16 @@ long vread(char *buf, char *addr, unsigned long count)
 		if (!count)
 			break;
 
-		if (!va->vm)
+		vm = va->vm;
+		flags = va->flags & VMAP_FLAGS_MASK;
+
+		if (!vm && !flags)
 			continue;
 
-		vm = va->vm;
-		vaddr = (char *) vm->addr;
-		if (addr >= vaddr + get_vm_area_size(vm))
+		vaddr = (char *) va->va_start;
+		size = vm ? get_vm_area_size(vm) : va_size(va);
+
+		if (addr >= vaddr + size)
 			continue;
 		while (addr < vaddr) {
 			if (count == 0)
@@ -3610,10 +3673,13 @@ long vread(char *buf, char *addr, unsigned long count)
 			addr++;
 			count--;
 		}
-		n = vaddr + get_vm_area_size(vm) - addr;
+		n = vaddr + size - addr;
 		if (n > count)
 			n = count;
-		if (!(vm->flags & VM_IOREMAP))
+
+		if (flags & VMAP_RAM)
+			vmap_ram_vread(buf, addr, n, flags);
+		else if (!(vm->flags & VM_IOREMAP))
 			aligned_vread(buf, addr, n);
 		else /* IOREMAP area is treated as memory hole */
 			memset(buf, 0, n);
-- 
2.34.1