From: Namjae Jeon <linkinjeon@kernel.org>
To: viro@zeniv.linux.org.uk, brauner@kernel.org, hch@infradead.org,
	hch@lst.de, tytso@mit.edu, willy@infradead.org, jack@suse.cz,
	djwong@kernel.org, josef@toxicpanda.com, sandeen@sandeen.net,
	rgoldwyn@suse.com, xiang@kernel.org, dsterba@suse.com, pali@kernel.org,
	ebiggers@kernel.org, neil@brown.name, amir73il@gmail.com
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	iamjoonsoo.kim@lge.com, cheol.lee@lge.com, jay.sim@lge.com,
	gunho.lee@lge.com, Namjae Jeon, Hyunchul Lee
Subject: [PATCH v2 06/11] ntfsplus: add iomap and address space operations
Date: Thu, 27 Nov 2025 13:59:39 +0900
Message-Id: <20251127045944.26009-7-linkinjeon@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20251127045944.26009-1-linkinjeon@kernel.org>
References: <20251127045944.26009-1-linkinjeon@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This adds the implementation of iomap and address space operations
for ntfsplus.

Signed-off-by: Hyunchul Lee
Signed-off-by: Namjae Jeon
---
 fs/ntfsplus/aops.c       | 617 ++++++++++++++++++++++++++++++++++
 fs/ntfsplus/ntfs_iomap.c | 700 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 1317 insertions(+)
 create mode 100644 fs/ntfsplus/aops.c
 create mode 100644 fs/ntfsplus/ntfs_iomap.c

diff --git a/fs/ntfsplus/aops.c b/fs/ntfsplus/aops.c
new file mode 100644
index 000000000000..9a1b3b80a146
--- /dev/null
+++ b/fs/ntfsplus/aops.c
@@ -0,0 +1,617 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/**
+ * NTFS kernel address space operations and page cache handling.
+ *
+ * Copyright (c) 2001-2014 Anton Altaparmakov and Tuxera Inc.
+ * Copyright (c) 2002 Richard Russon
+ * Copyright (c) 2025 LG Electronics Co., Ltd.
+ */
+
+#include
+#include
+#include
+
+#include "aops.h"
+#include "attrib.h"
+#include "mft.h"
+#include "ntfs.h"
+#include "misc.h"
+#include "ntfs_iomap.h"
+
+static s64 ntfs_convert_page_index_into_lcn(struct ntfs_volume *vol, struct ntfs_inode *ni,
+		unsigned long page_index)
+{
+	sector_t iblock;
+	s64 vcn;
+	s64 lcn;
+	unsigned char blocksize_bits = vol->sb->s_blocksize_bits;
+
+	iblock = (s64)page_index << (PAGE_SHIFT - blocksize_bits);
+	vcn = (s64)iblock << blocksize_bits >> vol->cluster_size_bits;
+
+	down_read(&ni->runlist.lock);
+	lcn = ntfs_attr_vcn_to_lcn_nolock(ni, vcn, false);
+	up_read(&ni->runlist.lock);
+
+	return lcn;
+}
+
+struct bio *ntfs_setup_bio(struct ntfs_volume *vol, blk_opf_t opf, s64 lcn,
+		unsigned int pg_ofs)
+{
+	struct bio *bio;
+
+	bio = bio_alloc(vol->sb->s_bdev, 1, opf, GFP_NOIO);
+	if (!bio)
+		return NULL;
+	bio->bi_iter.bi_sector = ((lcn << vol->cluster_size_bits) + pg_ofs) >>
+			vol->sb->s_blocksize_bits;
+
+	return bio;
+}
+
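+/*
+ * For illustration only (values assumed, not taken from a real volume):
+ * with 512-byte device blocks (blocksize_bits == 9), 4 KiB pages and
+ * 4 KiB clusters (cluster_size_bits == 12), page_index 3 gives
+ * iblock = 3 << (12 - 9) = 24 and vcn = (24 << 9) >> 12 = 3; once the
+ * runlist maps vcn 3 to some lcn, the bio above starts at device sector
+ * ((lcn << 12) + pg_ofs) >> 9.
+ */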
+/**
+ * ntfs_read_folio - fill a @folio of a @file with data from the device
+ * @file: open file to which the folio @folio belongs or NULL
+ * @folio: page cache folio to fill with data
+ *
+ * For non-resident attributes, ntfs_read_folio() fills the @folio of the
+ * open file @file by calling iomap_read_folio(), which in turn creates and
+ * reads in the blocks associated with the folio asynchronously.
+ *
+ * For resident attributes, OTOH, ntfs_read_folio() fills @folio by copying
+ * the data from the mft record (which at this stage is most likely in
+ * memory) and fills the remainder with zeroes. Thus, in this case, I/O is
+ * synchronous, as even if the mft record is not cached at this point in
+ * time, we need to wait for it to be read in before we can do the copy.
+ *
+ * Return 0 on success and -errno on error.
+ */
+static int ntfs_read_folio(struct file *file, struct folio *folio)
+{
+	loff_t i_size;
+	struct inode *vi;
+	struct ntfs_inode *ni;
+
+	vi = folio->mapping->host;
+	i_size = i_size_read(vi);
+	/* Is the page fully outside i_size? (truncate in progress) */
+	if (unlikely(folio->index >= (i_size + PAGE_SIZE - 1) >>
+			PAGE_SHIFT)) {
+		folio_zero_segment(folio, 0, PAGE_SIZE);
+		ntfs_debug("Read outside i_size - truncated?");
+		folio_mark_uptodate(folio);
+		folio_unlock(folio);
+		return 0;
+	}
+	/*
+	 * This can potentially happen because we clear PageUptodate() during
+	 * ntfs_writepage() of MstProtected() attributes.
+	 */
+	if (folio_test_uptodate(folio)) {
+		folio_unlock(folio);
+		return 0;
+	}
+	ni = NTFS_I(vi);
+
+	/*
+	 * Only $DATA attributes can be encrypted and only unnamed $DATA
+	 * attributes can be compressed. Index root can have the flags set but
+	 * this means to create compressed/encrypted files, not that the
+	 * attribute is compressed/encrypted. Note we need to check for
+	 * AT_INDEX_ALLOCATION since this is the type of both directory and
+	 * index inodes.
+	 */
+	if (ni->type != AT_INDEX_ALLOCATION) {
+		/* If attribute is encrypted, deny access, just like NT4. */
+		if (NInoEncrypted(ni)) {
+			folio_unlock(folio);
+			return -EACCES;
+		}
+		/* Compressed data streams are handled in compress.c. */
+		if (NInoNonResident(ni) && NInoCompressed(ni))
+			return ntfs_read_compressed_block(folio);
+	}
+
+	return iomap_read_folio(folio, &ntfs_read_iomap_ops);
+}
+
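+/*
+ * Worked example for the record arithmetic below (sizes assumed, not read
+ * from a volume): with 1 KiB mft records (mft_record_size_bits == 10) and
+ * 4 KiB folios, each folio carries four records, so folio index 2 with
+ * mft_ofs 0x400 addresses record number ((2 << 12) + 0x400) >> 10 = 9.
+ */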
+static int ntfs_write_mft_block(struct ntfs_inode *ni, struct folio *folio,
+		struct writeback_control *wbc)
+{
+	struct inode *vi = VFS_I(ni);
+	struct ntfs_volume *vol = ni->vol;
+	u8 *kaddr;
+	struct ntfs_inode *locked_nis[PAGE_SIZE / NTFS_BLOCK_SIZE];
+	int nr_locked_nis = 0, err = 0, mft_ofs, prev_mft_ofs;
+	struct bio *bio = NULL;
+	unsigned long mft_no;
+	struct ntfs_inode *tni;
+	s64 lcn;
+	s64 vcn = (s64)folio->index << PAGE_SHIFT >> vol->cluster_size_bits;
+	s64 end_vcn = ni->allocated_size >> vol->cluster_size_bits;
+	unsigned int folio_sz;
+	struct runlist_element *rl;
+
+	ntfs_debug("Entering for inode 0x%lx, attribute type 0x%x, folio index 0x%lx.",
+			vi->i_ino, ni->type, folio->index);
+
+	lcn = ntfs_convert_page_index_into_lcn(vol, ni, folio->index);
+	if (lcn <= LCN_HOLE) {
+		folio_start_writeback(folio);
+		folio_unlock(folio);
+		folio_end_writeback(folio);
+		return -EIO;
+	}
+
+	/* Map folio so we can access its contents. */
+	kaddr = kmap_local_folio(folio, 0);
+	/* Clear the page uptodate flag whilst the mst fixups are applied. */
+	folio_clear_uptodate(folio);
+
+	for (mft_ofs = 0; mft_ofs < PAGE_SIZE && vcn < end_vcn;
+			mft_ofs += vol->mft_record_size) {
+		/* Get the mft record number. */
+		mft_no = (((s64)folio->index << PAGE_SHIFT) + mft_ofs) >>
+				vol->mft_record_size_bits;
+		vcn = mft_no << vol->mft_record_size_bits >> vol->cluster_size_bits;
+		/* Check whether to write this mft record. */
+		tni = NULL;
+		if (ntfs_may_write_mft_record(vol, mft_no,
+				(struct mft_record *)(kaddr + mft_ofs), &tni)) {
+			unsigned int mft_record_off = 0;
+			s64 vcn_off = vcn;
+
+			/*
+			 * The record should be written. If a locked ntfs
+			 * inode was returned, add it to the array of locked
+			 * ntfs inodes.
+			 */
+			if (tni)
+				locked_nis[nr_locked_nis++] = tni;
+
+			if (bio && (mft_ofs != prev_mft_ofs + vol->mft_record_size)) {
+flush_bio:
+				flush_dcache_folio(folio);
+				submit_bio_wait(bio);
+				bio_put(bio);
+				bio = NULL;
+			}
+
+			if (vol->cluster_size < folio_size(folio)) {
+				down_write(&ni->runlist.lock);
+				rl = ntfs_attr_vcn_to_rl(ni, vcn_off, &lcn);
+				up_write(&ni->runlist.lock);
+				if (IS_ERR(rl) || lcn < 0) {
+					err = -EIO;
+					goto unm_done;
+				}
+
+				if (bio &&
+				    (bio_end_sector(bio) >> (vol->cluster_size_bits - 9)) !=
+				    lcn) {
+					flush_dcache_folio(folio);
+					submit_bio_wait(bio);
+					bio_put(bio);
+					bio = NULL;
+				}
+			}
+
+			if (!bio) {
+				unsigned int off;
+
+				off = ((mft_no << vol->mft_record_size_bits) +
+						mft_record_off) & vol->cluster_size_mask;
+
+				bio = ntfs_setup_bio(vol, REQ_OP_WRITE, lcn, off);
+				if (!bio) {
+					err = -ENOMEM;
+					goto unm_done;
+				}
+			}
+
+			if (vol->cluster_size == NTFS_BLOCK_SIZE &&
+			    (mft_record_off ||
+			     rl->length - (vcn_off - rl->vcn) == 1 ||
+			     mft_ofs + NTFS_BLOCK_SIZE >= PAGE_SIZE))
+				folio_sz = NTFS_BLOCK_SIZE;
+			else
+				folio_sz = vol->mft_record_size;
+			if (!bio_add_folio(bio, folio, folio_sz,
+					mft_ofs + mft_record_off)) {
+				err = -EIO;
+				bio_put(bio);
+				goto unm_done;
+			}
+			mft_record_off += folio_sz;
+
+			if (mft_record_off != vol->mft_record_size) {
+				vcn_off++;
+				goto flush_bio;
+			}
+			prev_mft_ofs = mft_ofs;
+
+			if (mft_no < vol->mftmirr_size)
+				ntfs_sync_mft_mirror(vol, mft_no,
+						(struct mft_record *)(kaddr + mft_ofs));
+		}
+	}
+
+	if (bio) {
+		flush_dcache_folio(folio);
+		submit_bio_wait(bio);
+		bio_put(bio);
+	}
+	flush_dcache_folio(folio);
+unm_done:
+	folio_mark_uptodate(folio);
+	kunmap_local(kaddr);
+
+	folio_start_writeback(folio);
+	folio_unlock(folio);
+	folio_end_writeback(folio);
+
+	/* Unlock any locked inodes. */
+	while (nr_locked_nis-- > 0) {
+		struct ntfs_inode *base_tni;
+
+		tni = locked_nis[nr_locked_nis];
+		mutex_unlock(&tni->mrec_lock);
+
+		/* Get the base inode. */
+		mutex_lock(&tni->extent_lock);
+		if (tni->nr_extents >= 0)
+			base_tni = tni;
+		else
+			base_tni = tni->ext.base_ntfs_ino;
+		mutex_unlock(&tni->extent_lock);
+		ntfs_debug("Unlocking %s inode 0x%lx.",
+				tni == base_tni ? "base" : "extent",
+				tni->mft_no);
+		atomic_dec(&tni->count);
+		iput(VFS_I(base_tni));
+	}
+
+	if (unlikely(err && err != -ENOMEM))
+		NVolSetErrors(vol);
+	if (likely(!err))
+		ntfs_debug("Done.");
+	return err;
+}
+
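+/*
+ * Illustrative numbers for the conversion done in ntfs_bmap() below
+ * (assumed, not from a real volume): with 512-byte blocks and 4 KiB
+ * clusters, logical block 9 is byte offset 9 << 9 = 0x1200, i.e. vcn 1
+ * with delta 0x200 into the cluster, so the returned device block is
+ * ((lcn << 12) + 0x200) >> 9.
+ */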
+/**
+ * ntfs_bmap - map logical file block to physical device block
+ * @mapping: address space mapping to which the block to be mapped belongs
+ * @block: logical block to map to its physical device block
+ *
+ * For regular, non-resident files (i.e. not compressed and not encrypted),
+ * map the logical @block belonging to the file described by the address
+ * space mapping @mapping to its physical device block.
+ *
+ * The size of the block is equal to the @s_blocksize field of the super
+ * block of the mounted file system which is guaranteed to be smaller than
+ * or equal to the cluster size thus the block is guaranteed to fit entirely
+ * inside the cluster which means we do not need to care how many contiguous
+ * bytes are available after the beginning of the block.
+ *
+ * Return the physical device block if the mapping succeeded or 0 if the
+ * block is sparse or there was an error.
+ *
+ * Note: This is a problem if someone tries to run bmap() on $Boot system
+ * file as that really is in block zero but there is nothing we can do.
+ * bmap() is just broken in that respect (just like it cannot distinguish
+ * sparse from not available or error).
+ */
+static sector_t ntfs_bmap(struct address_space *mapping, sector_t block)
+{
+	s64 ofs, size;
+	loff_t i_size;
+	s64 lcn;
+	unsigned long blocksize, flags;
+	struct ntfs_inode *ni = NTFS_I(mapping->host);
+	struct ntfs_volume *vol = ni->vol;
+	unsigned int delta;
+	unsigned char blocksize_bits, cluster_size_shift;
+
+	ntfs_debug("Entering for mft_no 0x%lx, logical block 0x%llx.",
+			ni->mft_no, (unsigned long long)block);
+	if (ni->type != AT_DATA || !NInoNonResident(ni) || NInoEncrypted(ni)) {
+		ntfs_error(vol->sb, "BMAP does not make sense for %s attributes, returning 0.",
+				(ni->type != AT_DATA) ? "non-data" :
+				(!NInoNonResident(ni) ? "resident" :
+				"encrypted"));
+		return 0;
+	}
+	/* None of these can happen. */
+	blocksize = vol->sb->s_blocksize;
+	blocksize_bits = vol->sb->s_blocksize_bits;
+	ofs = (s64)block << blocksize_bits;
+	read_lock_irqsave(&ni->size_lock, flags);
+	size = ni->initialized_size;
+	i_size = i_size_read(VFS_I(ni));
+	read_unlock_irqrestore(&ni->size_lock, flags);
+	/*
+	 * If the offset is outside the initialized size or the block straddles
+	 * the initialized size then pretend it is a hole unless the
+	 * initialized size equals the file size.
+	 */
+	if (unlikely(ofs >= size || (ofs + blocksize > size && size < i_size)))
+		goto hole;
+	cluster_size_shift = vol->cluster_size_bits;
+	down_read(&ni->runlist.lock);
+	lcn = ntfs_attr_vcn_to_lcn_nolock(ni, ofs >> cluster_size_shift, false);
+	up_read(&ni->runlist.lock);
+	if (unlikely(lcn < LCN_HOLE)) {
+		/*
+		 * Step down to an integer to avoid gcc doing a long long
+		 * comparison in the switch when we know @lcn is between
+		 * LCN_HOLE and LCN_EIO (i.e. -1 to -5).
+		 *
+		 * Otherwise older gcc (at least on some architectures) will
+		 * try to use __cmpdi2() which is of course not available in
+		 * the kernel.
+		 */
+		switch ((int)lcn) {
+		case LCN_ENOENT:
+			/*
+			 * If the offset is out of bounds then pretend it is a
+			 * hole.
+			 */
+			goto hole;
+		case LCN_ENOMEM:
+			ntfs_error(vol->sb,
+					"Not enough memory to complete mapping for inode 0x%lx. Returning 0.",
+					ni->mft_no);
+			break;
+		default:
+			ntfs_error(vol->sb,
+					"Failed to complete mapping for inode 0x%lx. Run chkdsk. Returning 0.",
+					ni->mft_no);
+			break;
+		}
+		return 0;
+	}
+	if (lcn < 0) {
+		/* It is a hole. */
+hole:
+		ntfs_debug("Done (returning hole).");
+		return 0;
+	}
+	/*
+	 * The block is really allocated and fulfils all our criteria.
+	 * Convert the cluster to units of block size and return the result.
+	 */
+	delta = ofs & vol->cluster_size_mask;
+	if (unlikely(sizeof(block) < sizeof(lcn))) {
+		block = lcn = ((lcn << cluster_size_shift) + delta) >>
+				blocksize_bits;
+		/* If the block number was truncated return 0. */
+		if (unlikely(block != lcn)) {
+			ntfs_error(vol->sb,
+					"Physical block 0x%llx is too large to be returned, returning 0.",
+					(long long)lcn);
+			return 0;
+		}
+	} else
+		block = ((lcn << cluster_size_shift) + delta) >>
+				blocksize_bits;
+	ntfs_debug("Done (returning block 0x%llx).", (unsigned long long)lcn);
+	return block;
+}
+
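+/*
+ * ntfs_bmap() is what backs the (privileged, CAP_SYS_RAWIO) FIBMAP ioctl.
+ * A minimal userspace sketch, for illustration only:
+ *
+ *	int blk = 0;			// logical block on input
+ *	if (ioctl(fd, FIBMAP, &blk) == 0)
+ *		printf("physical block %d\n", blk);	// 0 == hole/error
+ */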
+static void ntfs_readahead(struct readahead_control *rac)
+{
+	struct address_space *mapping = rac->mapping;
+	struct inode *inode = mapping->host;
+	struct ntfs_inode *ni = NTFS_I(inode);
+
+	if (!NInoNonResident(ni) || NInoCompressed(ni)) {
+		/* No readahead for resident and compressed. */
+		return;
+	}
+
+	if (NInoMstProtected(ni) &&
+	    (ni->mft_no == FILE_MFT || ni->mft_no == FILE_MFTMirr))
+		return;
+
+	iomap_readahead(rac, &ntfs_read_iomap_ops);
+}
+
+static int ntfs_mft_writepage(struct folio *folio, struct writeback_control *wbc)
+{
+	struct address_space *mapping = folio->mapping;
+	struct inode *vi = mapping->host;
+	struct ntfs_inode *ni = NTFS_I(vi);
+	loff_t i_size;
+	int ret;
+
+	i_size = i_size_read(vi);
+
+	/* We have to zero every time due to mmap-at-end-of-file. */
+	if (folio->index >= (i_size >> PAGE_SHIFT)) {
+		/* The page straddles i_size. */
+		unsigned int ofs = i_size & ~PAGE_MASK;
+
+		folio_zero_segment(folio, ofs, PAGE_SIZE);
+	}
+
+	ret = ntfs_write_mft_block(ni, folio, wbc);
+	mapping_set_error(mapping, ret);
+	return ret;
+}
+
+static int ntfs_writepages(struct address_space *mapping,
+		struct writeback_control *wbc)
+{
+	struct inode *inode = mapping->host;
+	struct ntfs_inode *ni = NTFS_I(inode);
+	struct iomap_writepage_ctx wpc = {
+		.inode = mapping->host,
+		.wbc = wbc,
+		.ops = &ntfs_writeback_ops,
+	};
+
+	if (NVolShutdown(ni->vol))
+		return -EIO;
+
+	if (!NInoNonResident(ni))
+		return 0;
+
+	if (NInoMstProtected(ni) && ni->mft_no == FILE_MFT) {
+		struct folio *folio = NULL;
+		int error;
+
+		while ((folio = writeback_iter(mapping, wbc, folio, &error)))
+			error = ntfs_mft_writepage(folio, wbc);
+		return error;
+	}
+
+	/* If file is encrypted, deny access, just like NT4. */
+	if (NInoEncrypted(ni)) {
+		ntfs_debug("Denying write access to encrypted file.");
+		return -EACCES;
+	}
+
+	return iomap_writepages(&wpc);
+}
+
+static int ntfs_swap_activate(struct swap_info_struct *sis,
+		struct file *swap_file, sector_t *span)
+{
+	return iomap_swapfile_activate(sis, swap_file, span,
+			&ntfs_read_iomap_ops);
+}
+
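+/*
+ * The tables below are wired up at inode-setup time (outside this file);
+ * roughly, and only as a sketch of the intended selection:
+ *
+ *	if (NInoMstProtected(ni))
+ *		vi->i_mapping->a_ops = &ntfs_mst_aops;
+ *	else if (NInoCompressed(ni))
+ *		vi->i_mapping->a_ops = &ntfs_compressed_aops;
+ *	else
+ *		vi->i_mapping->a_ops = &ntfs_normal_aops;
+ */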
+/**
+ * ntfs_normal_aops - address space operations for normal inodes and attributes
+ *
+ * Note these are not used for compressed or mst protected inodes and
+ * attributes.
+ */
+const struct address_space_operations ntfs_normal_aops = {
+	.read_folio = ntfs_read_folio,
+	.readahead = ntfs_readahead,
+	.writepages = ntfs_writepages,
+	.direct_IO = noop_direct_IO,
+	.dirty_folio = iomap_dirty_folio,
+	.bmap = ntfs_bmap,
+	.migrate_folio = filemap_migrate_folio,
+	.is_partially_uptodate = iomap_is_partially_uptodate,
+	.error_remove_folio = generic_error_remove_folio,
+	.release_folio = iomap_release_folio,
+	.invalidate_folio = iomap_invalidate_folio,
+	.swap_activate = ntfs_swap_activate,
+};
+
+/**
+ * ntfs_compressed_aops - address space operations for compressed inodes
+ */
+const struct address_space_operations ntfs_compressed_aops = {
+	.read_folio = ntfs_read_folio,
+	.direct_IO = noop_direct_IO,
+	.writepages = ntfs_writepages,
+	.dirty_folio = iomap_dirty_folio,
+	.migrate_folio = filemap_migrate_folio,
+	.is_partially_uptodate = iomap_is_partially_uptodate,
+	.error_remove_folio = generic_error_remove_folio,
+	.release_folio = iomap_release_folio,
+	.invalidate_folio = iomap_invalidate_folio,
+};
+
+/**
+ * ntfs_mst_aops - general address space operations for mst protected inodes
+ * and attributes
+ */
+const struct address_space_operations ntfs_mst_aops = {
+	.read_folio = ntfs_read_folio,		/* Fill page with data. */
+	.readahead = ntfs_readahead,
+	.writepages = ntfs_writepages,		/* Write dirty page to disk. */
+	.dirty_folio = iomap_dirty_folio,
+	.migrate_folio = filemap_migrate_folio,
+	.is_partially_uptodate = iomap_is_partially_uptodate,
+	.error_remove_folio = generic_error_remove_folio,
+	.release_folio = iomap_release_folio,
+	.invalidate_folio = iomap_invalidate_folio,
+};
+
+void mark_ntfs_record_dirty(struct folio *folio)
+{
+	iomap_dirty_folio(folio->mapping, folio);
+}
+
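+/*
+ * Byte-granular helpers for reading/writing the block device mapping
+ * directly (the callers live elsewhere in this series). An illustrative
+ * use, with a hypothetical boot-sector buffer bs:
+ *
+ *	err = ntfs_dev_read(sb, &bs, 0, sizeof(bs));
+ */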
+int ntfs_dev_read(struct super_block *sb, void *buf, loff_t start, loff_t size)
+{
+	pgoff_t idx, idx_end;
+	loff_t offset, end = start + size;
+	u32 from, to, buf_off = 0;
+	struct folio *folio;
+	char *kaddr;
+
+	idx = start >> PAGE_SHIFT;
+	idx_end = end >> PAGE_SHIFT;
+	from = start & ~PAGE_MASK;
+
+	if (idx == idx_end)
+		idx_end++;
+
+	for (; idx < idx_end; idx++, from = 0) {
+		folio = ntfs_read_mapping_folio(sb->s_bdev->bd_mapping, idx);
+		if (IS_ERR(folio)) {
+			ntfs_error(sb, "Unable to read page %ld", idx);
+			return PTR_ERR(folio);
+		}
+
+		kaddr = kmap_local_folio(folio, 0);
+		offset = (loff_t)idx << PAGE_SHIFT;
+		to = min_t(u32, end - offset, PAGE_SIZE) - from;
+
+		memcpy(buf + buf_off, kaddr + from, to);
+		buf_off += to;
+		kunmap_local(kaddr);
+		folio_put(folio);
+	}
+
+	return 0;
+}
+
+int ntfs_dev_write(struct super_block *sb, void *buf, loff_t start,
+		loff_t size, bool wait)
+{
+	pgoff_t idx, idx_end;
+	loff_t offset, end = start + size;
+	u32 from, to, buf_off = 0;
+	struct folio *folio;
+	char *kaddr;
+
+	idx = start >> PAGE_SHIFT;
+	idx_end = end >> PAGE_SHIFT;
+	from = start & ~PAGE_MASK;
+
+	if (idx == idx_end)
+		idx_end++;
+
+	for (; idx < idx_end; idx++, from = 0) {
+		folio = ntfs_read_mapping_folio(sb->s_bdev->bd_mapping, idx);
+		if (IS_ERR(folio)) {
+			ntfs_error(sb, "Unable to read page %ld", idx);
+			return PTR_ERR(folio);
+		}
+
+		kaddr = kmap_local_folio(folio, 0);
+		offset = (loff_t)idx << PAGE_SHIFT;
+		to = min_t(u32, end - offset, PAGE_SIZE) - from;
+
+		memcpy(kaddr + from, buf + buf_off, to);
+		buf_off += to;
+		kunmap_local(kaddr);
+		folio_mark_uptodate(folio);
+		folio_mark_dirty(folio);
+		if (wait)
+			folio_wait_stable(folio);
+		folio_put(folio);
+	}
+
+	return 0;
+}
diff --git a/fs/ntfsplus/ntfs_iomap.c b/fs/ntfsplus/ntfs_iomap.c
new file mode 100644
index 000000000000..c9fd999820f4
--- /dev/null
+++ b/fs/ntfsplus/ntfs_iomap.c
@@ -0,0 +1,700 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/**
+ * iomap callback functions
+ *
+ * Copyright (c) 2025 LG Electronics Co., Ltd.
+ */
+
+#include
+#include
+#include
+
+#include "aops.h"
+#include "attrib.h"
+#include "mft.h"
+#include "ntfs.h"
+#include "misc.h"
+#include "ntfs_iomap.h"
+
+static void ntfs_iomap_put_folio(struct inode *inode, loff_t pos,
+		unsigned int len, struct folio *folio)
+{
+	struct ntfs_inode *ni = NTFS_I(inode);
+	unsigned long sector_size = 1UL << inode->i_blkbits;
+	loff_t start_down, end_up, init;
+
+	if (!NInoNonResident(ni))
+		goto out;
+
+	start_down = round_down(pos, sector_size);
+	end_up = (pos + len - 1) | (sector_size - 1);
+	init = ni->initialized_size;
+
+	if (init >= start_down && init <= end_up) {
+		if (init < pos) {
+			loff_t offset = offset_in_folio(folio, pos + len);
+
+			if (offset == 0)
+				offset = folio_size(folio);
+			folio_zero_segments(folio,
+					offset_in_folio(folio, init),
+					offset_in_folio(folio, pos),
+					offset,
+					folio_size(folio));
+
+		} else {
+			loff_t offset = max_t(loff_t, pos + len, init);
+
+			offset = offset_in_folio(folio, offset);
+			if (offset == 0)
+				offset = folio_size(folio);
+			folio_zero_segment(folio,
+					offset,
+					folio_size(folio));
+		}
+	} else if (init <= pos) {
+		loff_t offset = 0, offset2 = offset_in_folio(folio, pos + len);
+
+		if ((init >> folio_shift(folio)) == (pos >> folio_shift(folio)))
+			offset = offset_in_folio(folio, init);
+		if (offset2 == 0)
+			offset2 = folio_size(folio);
+		folio_zero_segments(folio,
+				offset,
+				offset_in_folio(folio, pos),
+				offset2,
+				folio_size(folio));
+	}
+
+out:
+	folio_unlock(folio);
+	folio_put(folio);
+}
+
+const struct iomap_write_ops ntfs_iomap_folio_ops = {
+	.put_folio = ntfs_iomap_put_folio,
+};
+
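+/*
+ * NTFS tracks an initialized_size below i_size: bytes in between must read
+ * back as zeroes. Worked example (numbers assumed): with i_size 10000 and
+ * initialized_size 4096, a read at byte offset 5000 must not touch the
+ * disk, so that range is reported below as IOMAP_UNWRITTEN rather than
+ * IOMAP_MAPPED.
+ */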
+static int ntfs_read_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+		unsigned int flags, struct iomap *iomap, struct iomap *srcmap)
+{
+	struct ntfs_inode *base_ni, *ni = NTFS_I(inode);
+	struct ntfs_attr_search_ctx *ctx;
+	loff_t i_size;
+	u32 attr_len;
+	int err = 0;
+	char *kattr;
+	struct page *ipage;
+
+	if (NInoNonResident(ni)) {
+		s64 vcn;
+		s64 lcn;
+		struct runlist_element *rl;
+		struct ntfs_volume *vol = ni->vol;
+		loff_t vcn_ofs;
+		loff_t rl_length;
+
+		vcn = offset >> vol->cluster_size_bits;
+		vcn_ofs = offset & vol->cluster_size_mask;
+
+		down_write(&ni->runlist.lock);
+		rl = ntfs_attr_vcn_to_rl(ni, vcn, &lcn);
+		if (IS_ERR(rl)) {
+			up_write(&ni->runlist.lock);
+			return PTR_ERR(rl);
+		}
+
+		if (flags & IOMAP_REPORT) {
+			if (lcn < LCN_HOLE) {
+				up_write(&ni->runlist.lock);
+				return -ENOENT;
+			}
+		} else if (lcn < LCN_ENOENT) {
+			up_write(&ni->runlist.lock);
+			return -EINVAL;
+		}
+
+		iomap->bdev = inode->i_sb->s_bdev;
+		iomap->offset = offset;
+
+		if (lcn <= LCN_DELALLOC) {
+			if (lcn == LCN_DELALLOC)
+				iomap->type = IOMAP_DELALLOC;
+			else
+				iomap->type = IOMAP_HOLE;
+			iomap->addr = IOMAP_NULL_ADDR;
+		} else {
+			if (!(flags & IOMAP_ZERO) && offset >= ni->initialized_size)
+				iomap->type = IOMAP_UNWRITTEN;
+			else
+				iomap->type = IOMAP_MAPPED;
+			iomap->addr = (lcn << vol->cluster_size_bits) + vcn_ofs;
+		}
+
+		rl_length = (rl->length - (vcn - rl->vcn)) << ni->vol->cluster_size_bits;
+
+		if (rl_length == 0 && rl->lcn > LCN_DELALLOC) {
+			ntfs_error(inode->i_sb,
+					"runlist(vcn : %lld, length : %lld, lcn : %lld) is corrupted\n",
+					rl->vcn, rl->length, rl->lcn);
+			up_write(&ni->runlist.lock);
+			return -EIO;
+		}
+
+		if (rl_length && length > rl_length - vcn_ofs)
+			iomap->length = rl_length - vcn_ofs;
+		else
+			iomap->length = length;
+		up_write(&ni->runlist.lock);
+
+		if (!(flags & IOMAP_ZERO) &&
+		    iomap->type == IOMAP_MAPPED &&
+		    iomap->offset < ni->initialized_size &&
+		    iomap->offset + iomap->length > ni->initialized_size) {
+			iomap->length = round_up(ni->initialized_size, 1 << inode->i_blkbits) -
+					iomap->offset;
+		}
+		iomap->flags |= IOMAP_F_MERGED;
+		return 0;
+	}
+
+	if (NInoAttr(ni))
+		base_ni = ni->ext.base_ntfs_ino;
+	else
+		base_ni = ni;
+
+	ctx = ntfs_attr_get_search_ctx(base_ni, NULL);
+	if (!ctx) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	err = ntfs_attr_lookup(ni->type, ni->name, ni->name_len,
+			CASE_SENSITIVE, 0, NULL, 0, ctx);
+	if (unlikely(err))
+		goto out;
+
+	attr_len = le32_to_cpu(ctx->attr->data.resident.value_length);
+	if (unlikely(attr_len > ni->initialized_size))
+		attr_len = ni->initialized_size;
+	i_size = i_size_read(inode);
+
+	if (unlikely(attr_len > i_size)) {
+		/* Race with shrinking truncate. */
+		attr_len = i_size;
+	}
+
+	if (offset >= attr_len) {
+		if (flags & IOMAP_REPORT)
+			err = -ENOENT;
+		else
+			err = -EFAULT;
+		goto out;
+	}
+
+	kattr = (u8 *)ctx->attr + le16_to_cpu(ctx->attr->data.resident.value_offset);
+
+	ipage = alloc_page(__GFP_NOWARN | __GFP_IO | __GFP_ZERO);
+	if (!ipage) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	memcpy(page_address(ipage), kattr, attr_len);
+	iomap->type = IOMAP_INLINE;
+	iomap->inline_data = page_address(ipage);
+	iomap->offset = 0;
+	iomap->length = min_t(loff_t, attr_len, PAGE_SIZE);
+	iomap->private = ipage;
+
+out:
+	if (ctx)
+		ntfs_attr_put_search_ctx(ctx);
+	return err;
+}
+
+static int ntfs_read_iomap_end(struct inode *inode, loff_t pos, loff_t length,
+		ssize_t written, unsigned int flags, struct iomap *iomap)
+{
+	if (iomap->type == IOMAP_INLINE) {
+		struct page *ipage = iomap->private;
+
+		put_page(ipage);
+	}
+	return written;
+}
+
+const struct iomap_ops ntfs_read_iomap_ops = {
+	.iomap_begin = ntfs_read_iomap_begin,
+	.iomap_end = ntfs_read_iomap_end,
+};
+
+static int ntfs_buffered_zeroed_clusters(struct inode *vi, s64 vcn)
+{
+	struct ntfs_inode *ni = NTFS_I(vi);
+	struct ntfs_volume *vol = ni->vol;
+	struct address_space *mapping = vi->i_mapping;
+	struct folio *folio;
+	pgoff_t idx, idx_end;
+	u32 from, to;
+
+	idx = (vcn << vol->cluster_size_bits) >> PAGE_SHIFT;
+	idx_end = ((vcn + 1) << vol->cluster_size_bits) >> PAGE_SHIFT;
+	from = (vcn << vol->cluster_size_bits) & ~PAGE_MASK;
+	if (idx == idx_end)
+		idx_end++;
+
+	to = min_t(u32, vol->cluster_size, PAGE_SIZE);
+	for (; idx < idx_end; idx++, from = 0) {
+		if (to != PAGE_SIZE) {
+			folio = ntfs_read_mapping_folio(mapping, idx);
+			if (IS_ERR(folio))
+				return PTR_ERR(folio);
+			folio_lock(folio);
+		} else {
+			folio = __filemap_get_folio(mapping, idx,
+					FGP_WRITEBEGIN | FGP_NOFS, mapping_gfp_mask(mapping));
+			if (IS_ERR(folio))
+				return PTR_ERR(folio);
+		}
+
+		if (folio_test_uptodate(folio) ||
+		    iomap_is_partially_uptodate(folio, from, to))
+			goto next_folio;
+
+		folio_zero_segment(folio, from, from + to);
+		folio_mark_uptodate(folio);
+
+next_folio:
+		iomap_dirty_folio(mapping, folio);
+		folio_unlock(folio);
+		folio_put(folio);
+		balance_dirty_pages_ratelimited(mapping);
+		cond_resched();
+	}
+
+	return 0;
+}
+
+int ntfs_zeroed_clusters(struct inode *vi, s64 lcn, s64 num)
+{
+	struct ntfs_inode *ni = NTFS_I(vi);
+	struct ntfs_volume *vol = ni->vol;
+	u32 to;
+	struct bio *bio = NULL;
+	s64 err = 0, zero_len = num << vol->cluster_size_bits;
+	s64 loc = lcn << vol->cluster_size_bits, curr = 0;
+
+	while (zero_len > 0) {
+setup_bio:
+		if (!bio) {
+			bio = bio_alloc(vol->sb->s_bdev,
+					bio_max_segs(DIV_ROUND_UP(zero_len, PAGE_SIZE)),
+					REQ_OP_WRITE | REQ_SYNC | REQ_IDLE, GFP_NOIO);
+			if (!bio)
+				return -ENOMEM;
+			bio->bi_iter.bi_sector = (loc + curr) >> vol->sb->s_blocksize_bits;
+		}
+
+		to = min_t(u32, zero_len, PAGE_SIZE);
+		if (!bio_add_page(bio, ZERO_PAGE(0), to, 0)) {
+			err = submit_bio_wait(bio);
+			bio_put(bio);
+			bio = NULL;
+			if (err)
+				break;
+			goto setup_bio;
+		}
+		zero_len -= to;
+		curr += to;
+	}
+
+	if (bio) {
+		err = submit_bio_wait(bio);
+		bio_put(bio);
+	}
+
+	return err;
+}
+
+static int __ntfs_write_iomap_begin(struct inode *inode, loff_t offset,
+		loff_t length, unsigned int flags,
+		struct iomap *iomap, bool da, bool mapped)
+{
+	struct ntfs_inode *ni = NTFS_I(inode);
+	struct ntfs_volume *vol = ni->vol;
+	struct attr_record *a;
+	struct ntfs_attr_search_ctx *ctx;
+	u32 attr_len;
+	int err = 0;
+	char *kattr;
+	struct page *ipage;
+
+	if (NVolShutdown(vol))
+		return -EIO;
+
+	mutex_lock(&ni->mrec_lock);
+	if (NInoNonResident(ni)) {
+		s64 vcn;
+		loff_t vcn_ofs;
+		loff_t rl_length;
+		s64 max_clu_count =
+			round_up(length, vol->cluster_size) >> vol->cluster_size_bits;
+
+		vcn = offset >> vol->cluster_size_bits;
+		vcn_ofs = offset & vol->cluster_size_mask;
+
+		if (da) {
+			bool balloc = false;
+			s64 start_lcn, lcn_count;
+			bool update_mp;
+
+			update_mp = (flags & IOMAP_DIRECT) || mapped ||
+					NInoAttr(ni) || ni->mft_no < FILE_first_user;
+			down_write(&ni->runlist.lock);
+			err = ntfs_attr_map_cluster(ni, vcn, &start_lcn, &lcn_count,
+					max_clu_count, &balloc, update_mp,
+					!(flags & IOMAP_DIRECT) && !mapped);
+			up_write(&ni->runlist.lock);
+			mutex_unlock(&ni->mrec_lock);
+			if (err) {
+				ni->i_dealloc_clusters = 0;
+				return err;
+			}
+
+			iomap->bdev = inode->i_sb->s_bdev;
+			iomap->offset = offset;
+
+			rl_length = lcn_count << ni->vol->cluster_size_bits;
+			if (length > rl_length - vcn_ofs)
+				iomap->length = rl_length - vcn_ofs;
+			else
+				iomap->length = length;
+
+			if (start_lcn == LCN_HOLE)
+				iomap->type = IOMAP_HOLE;
+			else
+				iomap->type = IOMAP_MAPPED;
+			if (balloc == true)
+				iomap->flags = IOMAP_F_NEW;
+
+			iomap->addr = (start_lcn << vol->cluster_size_bits) + vcn_ofs;
+
+			if (balloc == true) {
+				if (flags & IOMAP_DIRECT || mapped == true) {
+					loff_t end = offset + length;
+
+					if (vcn_ofs || ((vol->cluster_size > iomap->length) &&
+					    end < ni->initialized_size))
+						err = ntfs_zeroed_clusters(inode,
+								start_lcn, 1);
+					if (!err && lcn_count > 1 &&
+					    (iomap->length & vol->cluster_size_mask &&
+					     end < ni->initialized_size))
+						err = ntfs_zeroed_clusters(inode,
+								start_lcn + (lcn_count - 1), 1);
+				} else {
+					if (lcn_count > ni->i_dealloc_clusters)
+						ni->i_dealloc_clusters = 0;
+					else
+						ni->i_dealloc_clusters -= lcn_count;
+				}
+				if (err < 0)
+					return err;
+			}
+
+			if (mapped && iomap->offset + iomap->length >
+					ni->initialized_size) {
+				err = ntfs_attr_set_initialized_size(ni, iomap->offset +
+						iomap->length);
+				if (err)
+					return err;
+			}
+		} else {
+			struct runlist_element *rl, *rlc;
+			s64 lcn;
+			bool is_retry = false;
+
+			down_read(&ni->runlist.lock);
+			rl = ni->runlist.rl;
+			if (!rl) {
+				up_read(&ni->runlist.lock);
+				err = ntfs_map_runlist(ni, vcn);
+				if (err) {
+					mutex_unlock(&ni->mrec_lock);
+					return -ENOENT;
+				}
+				down_read(&ni->runlist.lock);
+				rl = ni->runlist.rl;
+			}
+			up_read(&ni->runlist.lock);
+
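+			/*
+			 * Under the runlist write lock, seek to the element
+			 * covering the target vcn; if the vcn falls in an
+			 * unmapped region, map it and retry the walk once
+			 * before treating the runlist as corrupt.
+			 */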
+			down_write(&ni->runlist.lock);
+remap_rl:
+			/* Seek to element containing target vcn. */
+			while (rl->length && rl[1].vcn <= vcn)
+				rl++;
+			lcn = ntfs_rl_vcn_to_lcn(rl, vcn);
+
+			if (lcn <= LCN_RL_NOT_MAPPED && is_retry == false) {
+				is_retry = true;
+				if (!ntfs_map_runlist_nolock(ni, vcn, NULL)) {
+					rl = ni->runlist.rl;
+					goto remap_rl;
+				}
+			}
+
+			max_clu_count = min(max_clu_count, rl->length - (vcn - rl->vcn));
+			if (max_clu_count == 0) {
+				ntfs_error(inode->i_sb,
+						"runlist(vcn : %lld, length : %lld) is corrupted\n",
+						rl->vcn, rl->length);
+				up_write(&ni->runlist.lock);
+				mutex_unlock(&ni->mrec_lock);
+				return -EIO;
+			}
+
+			iomap->bdev = inode->i_sb->s_bdev;
+			iomap->offset = offset;
+
+			if (lcn <= LCN_DELALLOC) {
+				if (lcn < LCN_DELALLOC) {
+					max_clu_count =
+						ntfs_available_clusters_count(vol, max_clu_count);
+					if (max_clu_count < 0) {
+						err = max_clu_count;
+						up_write(&ni->runlist.lock);
+						mutex_unlock(&ni->mrec_lock);
+						return err;
+					}
+				}
+
+				iomap->type = IOMAP_DELALLOC;
+				iomap->addr = IOMAP_NULL_ADDR;
+
+				if (lcn <= LCN_HOLE) {
+					size_t new_rl_count;
+
+					rlc = ntfs_malloc_nofs(sizeof(struct runlist_element) * 2);
+					if (!rlc) {
+						up_write(&ni->runlist.lock);
+						mutex_unlock(&ni->mrec_lock);
+						return -ENOMEM;
+					}
+
+					rlc->vcn = vcn;
+					rlc->lcn = LCN_DELALLOC;
+					rlc->length = max_clu_count;
+
+					rlc[1].vcn = vcn + max_clu_count;
+					rlc[1].lcn = LCN_RL_NOT_MAPPED;
+					rlc[1].length = 0;
+
+					rl = ntfs_runlists_merge(&ni->runlist, rlc, 0,
+							&new_rl_count);
+					if (IS_ERR(rl)) {
+						ntfs_error(vol->sb, "Failed to merge runlists");
+						up_write(&ni->runlist.lock);
+						mutex_unlock(&ni->mrec_lock);
+						ntfs_free(rlc);
+						return PTR_ERR(rl);
+					}
+
+					ni->runlist.rl = rl;
+					ni->runlist.count = new_rl_count;
+					ni->i_dealloc_clusters += max_clu_count;
+				}
+				up_write(&ni->runlist.lock);
+				mutex_unlock(&ni->mrec_lock);
+
+				if (lcn < LCN_DELALLOC)
+					ntfs_hold_dirty_clusters(vol, max_clu_count);
+
+				rl_length = max_clu_count << ni->vol->cluster_size_bits;
+				if (length > rl_length - vcn_ofs)
+					iomap->length = rl_length - vcn_ofs;
+				else
+					iomap->length = length;
+
+				iomap->flags = IOMAP_F_NEW;
+				if (lcn <= LCN_HOLE) {
+					loff_t end = offset + length;
+
+					if (vcn_ofs || ((vol->cluster_size > iomap->length) &&
+					    end < ni->initialized_size))
+						err = ntfs_buffered_zeroed_clusters(inode, vcn);
+					if (!err && max_clu_count > 1 &&
+					    (iomap->length & vol->cluster_size_mask &&
+					     end < ni->initialized_size))
+						err = ntfs_buffered_zeroed_clusters(inode,
+								vcn + (max_clu_count - 1));
+					if (err) {
+						ntfs_release_dirty_clusters(vol, max_clu_count);
+						return err;
+					}
+				}
+			} else {
+				up_write(&ni->runlist.lock);
+				mutex_unlock(&ni->mrec_lock);
+
+				iomap->type = IOMAP_MAPPED;
+				iomap->addr = (lcn << vol->cluster_size_bits) + vcn_ofs;
+
+				rl_length = max_clu_count << ni->vol->cluster_size_bits;
+				if (length > rl_length - vcn_ofs)
+					iomap->length = rl_length - vcn_ofs;
+				else
+					iomap->length = length;
+			}
+		}
+
+		return 0;
+	}
+
+	ctx = ntfs_attr_get_search_ctx(ni, NULL);
+	if (!ctx) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	err = ntfs_attr_lookup(ni->type, ni->name, ni->name_len,
+			CASE_SENSITIVE, 0, NULL, 0, ctx);
+	if (err) {
+		if (err == -ENOENT)
+			err = -EIO;
+		goto out;
+	}
+
+	a = ctx->attr;
+	/* The total length of the attribute value. */
+	attr_len = le32_to_cpu(a->data.resident.value_length);
+	kattr = (u8 *)a + le16_to_cpu(a->data.resident.value_offset);
+
+	ipage = alloc_page(__GFP_NOWARN | __GFP_IO | __GFP_ZERO);
+	if (!ipage) {
+		err = -ENOMEM;
+		goto out;
+	}
+	memcpy(page_address(ipage), kattr, attr_len);
+
+	iomap->type = IOMAP_INLINE;
+	iomap->inline_data = page_address(ipage);
+	iomap->offset = 0;
+	/* iomap requires there is only one INLINE_DATA extent */
+	iomap->length = attr_len;
+	iomap->private = ipage;
+
+out:
+	if (ctx)
+		ntfs_attr_put_search_ctx(ctx);
+	mutex_unlock(&ni->mrec_lock);
+
+	return err;
+}
+
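+/*
+ * The iomap_ops variants below differ only in the (da, mapped) arguments
+ * they pass to __ntfs_write_iomap_begin(): buffered writes use
+ * (false, false) and reserve clusters as delalloc, direct I/O uses
+ * (true, false), and page_mkwrite uses (true, true) so clusters are
+ * allocated (and zeroed where needed) up front.
+ */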
+static int ntfs_write_iomap_begin(struct inode *inode, loff_t offset,
+		loff_t length, unsigned int flags,
+		struct iomap *iomap, struct iomap *srcmap)
+{
+	return __ntfs_write_iomap_begin(inode, offset, length, flags, iomap,
+			false, false);
+}
+
+static int ntfs_write_iomap_end(struct inode *inode, loff_t pos, loff_t length,
+		ssize_t written, unsigned int flags, struct iomap *iomap)
+{
+	if (iomap->type == IOMAP_INLINE) {
+		struct page *ipage = iomap->private;
+		struct ntfs_inode *ni = NTFS_I(inode);
+		struct ntfs_attr_search_ctx *ctx;
+		u32 attr_len;
+		int err;
+		char *kattr;
+
+		mutex_lock(&ni->mrec_lock);
+		ctx = ntfs_attr_get_search_ctx(ni, NULL);
+		if (!ctx) {
+			written = -ENOMEM;
+			mutex_unlock(&ni->mrec_lock);
+			goto out;
+		}
+
+		err = ntfs_attr_lookup(ni->type, ni->name, ni->name_len,
+				CASE_SENSITIVE, 0, NULL, 0, ctx);
+		if (err) {
+			if (err == -ENOENT)
+				err = -EIO;
+			written = err;
+			goto err_out;
+		}
+
+		/* The total length of the attribute value. */
+		attr_len = le32_to_cpu(ctx->attr->data.resident.value_length);
+		if (pos >= attr_len || pos + written > attr_len)
+			goto err_out;
+
+		kattr = (u8 *)ctx->attr + le16_to_cpu(ctx->attr->data.resident.value_offset);
+		memcpy(kattr + pos, iomap_inline_data(iomap, pos), written);
+		mark_mft_record_dirty(ctx->ntfs_ino);
+err_out:
+		ntfs_attr_put_search_ctx(ctx);
+		put_page(ipage);
+		mutex_unlock(&ni->mrec_lock);
+	}
+
+out:
+	return written;
+}
+
+const struct iomap_ops ntfs_write_iomap_ops = {
+	.iomap_begin = ntfs_write_iomap_begin,
+	.iomap_end = ntfs_write_iomap_end,
+};
+
+static int ntfs_page_mkwrite_iomap_begin(struct inode *inode, loff_t offset,
+		loff_t length, unsigned int flags,
+		struct iomap *iomap, struct iomap *srcmap)
+{
+	return __ntfs_write_iomap_begin(inode, offset, length, flags, iomap,
+			true, true);
+}
+
+const struct iomap_ops ntfs_page_mkwrite_iomap_ops = {
+	.iomap_begin = ntfs_page_mkwrite_iomap_begin,
+	.iomap_end = ntfs_write_iomap_end,
+};
+
+static int ntfs_dio_iomap_begin(struct inode *inode, loff_t offset,
+		loff_t length, unsigned int flags,
+		struct iomap *iomap, struct iomap *srcmap)
+{
+	return __ntfs_write_iomap_begin(inode, offset, length, flags, iomap,
+			true, false);
+}
+
+const struct iomap_ops ntfs_dio_iomap_ops = {
+	.iomap_begin = ntfs_dio_iomap_begin,
+	.iomap_end = ntfs_write_iomap_end,
+};
+
+static ssize_t ntfs_writeback_range(struct iomap_writepage_ctx *wpc,
+		struct folio *folio, u64 offset, unsigned int len, u64 end_pos)
+{
+	if (offset < wpc->iomap.offset ||
+	    offset >= wpc->iomap.offset + wpc->iomap.length) {
+		int error;
+
+		error = __ntfs_write_iomap_begin(wpc->inode, offset,
+				NTFS_I(wpc->inode)->allocated_size - offset,
+				IOMAP_WRITE, &wpc->iomap, true, false);
+		if (error)
+			return error;
+	}
+
+	return iomap_add_to_ioend(wpc, folio, offset, end_pos, len);
+}
+
+const struct iomap_writeback_ops ntfs_writeback_ops = {
+	.writeback_range = ntfs_writeback_range,
+	.writeback_submit = iomap_ioend_writeback_submit,
+};
-- 
2.25.1