From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Andy Lutomirski, Peter Zijlstra
Cc: Ira Weiny, x86@kernel.org, Dave Hansen, Dan Williams, Fenghua Yu,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
	netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org, cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 19/58] fs/hfsplus: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:54 -0700
Message-Id: <20201009195033.3208459-20-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Signed-off-by: Ira Weiny
---
 fs/hfsplus/bitmap.c |  20 ++++-----
 fs/hfsplus/bnode.c  | 102 ++++++++++++++++++++++----------------------
 fs/hfsplus/btree.c  |  18 ++++----
 3 files changed, 70 insertions(+), 70 deletions(-)

diff --git a/fs/hfsplus/bitmap.c b/fs/hfsplus/bitmap.c
index cebce0cfe340..9ec7c1559a0c 100644
--- a/fs/hfsplus/bitmap.c
+++ b/fs/hfsplus/bitmap.c
@@ -39,7 +39,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 		start = size;
 		goto out;
 	}
-	pptr = kmap(page);
+	pptr = kmap_thread(page);
 	curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32;
 	i = offset % 32;
 	offset &= ~(PAGE_CACHE_BITS - 1);
@@ -74,7 +74,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 			}
 			curr++;
 		}
-		kunmap(page);
+		kunmap_thread(page);
 		offset += PAGE_CACHE_BITS;
 		if (offset >= size)
 			break;
@@ -84,7 +84,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 			start = size;
 			goto out;
 		}
-		curr = pptr = kmap(page);
+		curr = pptr = kmap_thread(page);
 		if ((size ^ offset) / PAGE_CACHE_BITS)
 			end = pptr + PAGE_CACHE_BITS / 32;
 		else
@@ -127,7 +127,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 		len -= 32;
 	}
 	set_page_dirty(page);
-	kunmap(page);
+	kunmap_thread(page);
 	offset += PAGE_CACHE_BITS;
 	page = read_mapping_page(mapping, offset / PAGE_CACHE_BITS,
 				 NULL);
@@ -135,7 +135,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 		start = size;
 		goto out;
 	}
-	pptr = kmap(page);
+	pptr = kmap_thread(page);
 	curr = pptr;
 	end = pptr + PAGE_CACHE_BITS / 32;
 }
@@ -151,7 +151,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
done:
	*curr = cpu_to_be32(n);
	set_page_dirty(page);
-	kunmap(page);
+	kunmap_thread(page);
	*max = offset + (curr - pptr) * 32 + i - start;
	sbi->free_blocks -= *max;
	hfsplus_mark_mdb_dirty(sb);
@@ -185,7 +185,7 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
	page = read_mapping_page(mapping, pnr, NULL);
	if (IS_ERR(page))
		goto kaboom;
-	pptr = kmap(page);
+	pptr = kmap_thread(page);
	curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32;
	end = pptr + PAGE_CACHE_BITS / 32;
	len = count;
@@ -215,11 +215,11 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
		if (!count)
			break;
		set_page_dirty(page);
-		kunmap(page);
+		kunmap_thread(page);
		page = read_mapping_page(mapping, ++pnr, NULL);
		if (IS_ERR(page))
			goto kaboom;
-		pptr = kmap(page);
+		pptr = kmap_thread(page);
		curr = pptr;
		end = pptr + PAGE_CACHE_BITS / 32;
	}
@@ -231,7 +231,7 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
	}
out:
	set_page_dirty(page);
-	kunmap(page);
+	kunmap_thread(page);
	sbi->free_blocks += len;
	hfsplus_mark_mdb_dirty(sb);
	mutex_unlock(&sbi->alloc_mutex);
diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
index 177fae4e6581..62757d92fbbd 100644
--- a/fs/hfsplus/bnode.c
+++ b/fs/hfsplus/bnode.c
@@ -29,14 +29,14 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
	off &= ~PAGE_MASK;
 
	l = min_t(int, len, PAGE_SIZE - off);
-	memcpy(buf, kmap(*pagep) + off, l);
-	kunmap(*pagep);
+	memcpy(buf, kmap_thread(*pagep) + off, l);
+	kunmap_thread(*pagep);
 
	while ((len -= l) != 0) {
		buf += l;
		l = min_t(int, len, PAGE_SIZE);
-		memcpy(buf, kmap(*++pagep), l);
-		kunmap(*pagep);
+		memcpy(buf, kmap_thread(*++pagep), l);
+		kunmap_thread(*pagep);
	}
 }
 
@@ -82,16 +82,16 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
	off &= ~PAGE_MASK;
 
	l = min_t(int, len, PAGE_SIZE - off);
-	memcpy(kmap(*pagep) + off, buf, l);
+	memcpy(kmap_thread(*pagep) + off, buf, l);
	set_page_dirty(*pagep);
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
 
	while ((len -= l) != 0) {
		buf += l;
		l = min_t(int, len, PAGE_SIZE);
-		memcpy(kmap(*++pagep), buf, l);
+		memcpy(kmap_thread(*++pagep), buf, l);
		set_page_dirty(*pagep);
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
	}
 }
 
@@ -112,15 +112,15 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
	off &= ~PAGE_MASK;
 
	l = min_t(int, len, PAGE_SIZE - off);
-	memset(kmap(*pagep) + off, 0, l);
+	memset(kmap_thread(*pagep) + off, 0, l);
	set_page_dirty(*pagep);
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
 
	while ((len -= l) != 0) {
		l = min_t(int, len, PAGE_SIZE);
-		memset(kmap(*++pagep), 0, l);
+		memset(kmap_thread(*++pagep), 0, l);
		set_page_dirty(*pagep);
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
	}
 }
 
@@ -142,24 +142,24 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
 
	if (src == dst) {
		l = min_t(int, len, PAGE_SIZE - src);
-		memcpy(kmap(*dst_page) + src, kmap(*src_page) + src, l);
-		kunmap(*src_page);
+		memcpy(kmap_thread(*dst_page) + src, kmap_thread(*src_page) + src, l);
+		kunmap_thread(*src_page);
		set_page_dirty(*dst_page);
-		kunmap(*dst_page);
+		kunmap_thread(*dst_page);
 
		while ((len -= l) != 0) {
			l = min_t(int, len, PAGE_SIZE);
-			memcpy(kmap(*++dst_page), kmap(*++src_page), l);
-			kunmap(*src_page);
+			memcpy(kmap_thread(*++dst_page), kmap_thread(*++src_page), l);
+			kunmap_thread(*src_page);
			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
		}
	} else {
		void *src_ptr, *dst_ptr;
 
		do {
-			src_ptr = kmap(*src_page) + src;
-			dst_ptr = kmap(*dst_page) + dst;
+			src_ptr = kmap_thread(*src_page) + src;
+			dst_ptr = kmap_thread(*dst_page) + dst;
			if (PAGE_SIZE - src < PAGE_SIZE - dst) {
				l = PAGE_SIZE - src;
				src = 0;
@@ -171,9 +171,9 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
			}
			l = min(len, l);
			memcpy(dst_ptr, src_ptr, l);
-			kunmap(*src_page);
+			kunmap_thread(*src_page);
			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
			if (!dst)
				dst_page++;
			else
@@ -202,27 +202,27 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 
		if (src == dst) {
			while (src < len) {
-				memmove(kmap(*dst_page), kmap(*src_page), src);
-				kunmap(*src_page);
+				memmove(kmap_thread(*dst_page), kmap_thread(*src_page), src);
+				kunmap_thread(*src_page);
				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
				len -= src;
				src = PAGE_SIZE;
				src_page--;
				dst_page--;
			}
			src -= len;
-			memmove(kmap(*dst_page) + src,
-				kmap(*src_page) + src, len);
-			kunmap(*src_page);
+			memmove(kmap_thread(*dst_page) + src,
+				kmap_thread(*src_page) + src, len);
+			kunmap_thread(*src_page);
			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
		} else {
			void *src_ptr, *dst_ptr;
 
			do {
-				src_ptr = kmap(*src_page) + src;
-				dst_ptr = kmap(*dst_page) + dst;
+				src_ptr = kmap_thread(*src_page) + src;
+				dst_ptr = kmap_thread(*dst_page) + dst;
				if (src < dst) {
					l = src;
					src = PAGE_SIZE;
@@ -234,9 +234,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
				}
				l = min(len, l);
				memmove(dst_ptr - l, src_ptr - l, l);
-				kunmap(*src_page);
+				kunmap_thread(*src_page);
				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
				if (dst == PAGE_SIZE)
					dst_page--;
				else
@@ -251,26 +251,26 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 
		if (src == dst) {
			l = min_t(int, len, PAGE_SIZE - src);
-			memmove(kmap(*dst_page) + src,
-				kmap(*src_page) + src, l);
-			kunmap(*src_page);
+			memmove(kmap_thread(*dst_page) + src,
+				kmap_thread(*src_page) + src, l);
+			kunmap_thread(*src_page);
			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
 
			while ((len -= l) != 0) {
				l = min_t(int, len, PAGE_SIZE);
-				memmove(kmap(*++dst_page),
-					kmap(*++src_page), l);
-				kunmap(*src_page);
+				memmove(kmap_thread(*++dst_page),
+					kmap_thread(*++src_page), l);
+				kunmap_thread(*src_page);
				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
			}
		} else {
			void *src_ptr, *dst_ptr;
 
			do {
-				src_ptr = kmap(*src_page) + src;
-				dst_ptr = kmap(*dst_page) + dst;
+				src_ptr = kmap_thread(*src_page) + src;
+				dst_ptr = kmap_thread(*dst_page) + dst;
				if (PAGE_SIZE - src <
						PAGE_SIZE - dst) {
					l = PAGE_SIZE - src;
@@ -283,9 +283,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
				}
				l = min(len, l);
				memmove(dst_ptr, src_ptr, l);
-				kunmap(*src_page);
+				kunmap_thread(*src_page);
				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
				if (!dst)
					dst_page++;
				else
@@ -502,14 +502,14 @@ struct hfs_bnode *hfs_bnode_find(struct hfs_btree *tree, u32 num)
		if (!test_bit(HFS_BNODE_NEW, &node->flags))
			return node;
 
-	desc = (struct hfs_bnode_desc *)(kmap(node->page[0]) +
+	desc = (struct hfs_bnode_desc *)(kmap_thread(node->page[0]) +
			node->page_offset);
	node->prev = be32_to_cpu(desc->prev);
	node->next = be32_to_cpu(desc->next);
	node->num_recs = be16_to_cpu(desc->num_recs);
	node->type = desc->type;
	node->height = desc->height;
-	kunmap(node->page[0]);
+	kunmap_thread(node->page[0]);
 
	switch (node->type) {
	case HFS_NODE_HEADER:
@@ -593,14 +593,14 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num)
	}
 
	pagep = node->page;
-	memset(kmap(*pagep) + node->page_offset, 0,
+	memset(kmap_thread(*pagep) + node->page_offset, 0,
	       min_t(int, PAGE_SIZE, tree->node_size));
	set_page_dirty(*pagep);
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
	for (i = 1; i < tree->pages_per_bnode; i++) {
-		memset(kmap(*++pagep), 0, PAGE_SIZE);
+		memset(kmap_thread(*++pagep), 0, PAGE_SIZE);
		set_page_dirty(*pagep);
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
	}
	clear_bit(HFS_BNODE_NEW, &node->flags);
	wake_up(&node->lock_wq);
diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c
index 66774f4cb4fd..74fcef3a1628 100644
--- a/fs/hfsplus/btree.c
+++ b/fs/hfsplus/btree.c
@@ -394,7 +394,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 
	off += node->page_offset;
	pagep = node->page + (off >> PAGE_SHIFT);
-	data = kmap(*pagep);
+	data = kmap_thread(*pagep);
	off &= ~PAGE_MASK;
	idx = 0;
 
@@ -407,7 +407,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
					idx += i;
					data[off] |= m;
					set_page_dirty(*pagep);
-					kunmap(*pagep);
+					kunmap_thread(*pagep);
					tree->free_nodes--;
					mark_inode_dirty(tree->inode);
					hfs_bnode_put(node);
@@ -417,14 +417,14 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
			}
		}
		if (++off >= PAGE_SIZE) {
-			kunmap(*pagep);
-			data = kmap(*++pagep);
+			kunmap_thread(*pagep);
+			data = kmap_thread(*++pagep);
			off = 0;
		}
		idx += 8;
		len--;
	}
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
	nidx = node->next;
	if (!nidx) {
		hfs_dbg(BNODE_MOD, "create new bmap node\n");
@@ -440,7 +440,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
		off = off16;
		off += node->page_offset;
		pagep = node->page + (off >> PAGE_SHIFT);
-		data = kmap(*pagep);
+		data = kmap_thread(*pagep);
		off &= ~PAGE_MASK;
	}
 }
@@ -490,7 +490,7 @@ void hfs_bmap_free(struct hfs_bnode *node)
		}
		off += node->page_offset + nidx / 8;
		page = node->page[off >> PAGE_SHIFT];
-		data = kmap(page);
+		data = kmap_thread(page);
		off &= ~PAGE_MASK;
		m = 1 << (~nidx & 7);
		byte = data[off];
@@ -498,13 +498,13 @@ void hfs_bmap_free(struct hfs_bnode *node)
			pr_crit("trying to free free bnode "
				"%u(%d)\n",
				node->this, node->type);
-			kunmap(page);
+			kunmap_thread(page);
			hfs_bnode_put(node);
			return;
		}
		data[off] = byte & ~m;
		set_page_dirty(page);
-		kunmap(page);
+		kunmap_thread(page);
		hfs_bnode_put(node);
		tree->free_nodes++;
		mark_inode_dirty(tree->inode);
-- 
2.28.0.rc0.12.gb6a658bd00c9