From nobody Mon Dec 1 22:35:44 2025
Received: from mail-pl1-f172.google.com (mail-pl1-f172.google.com [209.85.214.172])
	by smtp.subspace.kernel.org (Postfix) with ESMTPS;
	Thu, 27 Nov 2025 05:00:44 +0000 (UTC)
From: Namjae Jeon
To: viro@zeniv.linux.org.uk, brauner@kernel.org, hch@infradead.org,
	hch@lst.de, tytso@mit.edu, willy@infradead.org, jack@suse.cz,
	djwong@kernel.org, josef@toxicpanda.com, sandeen@sandeen.net,
	rgoldwyn@suse.com, xiang@kernel.org, dsterba@suse.com, pali@kernel.org,
	ebiggers@kernel.org, neil@brown.name, amir73il@gmail.com
Cc: linux-fsdevel@vger.kernel.org,
linux-kernel@vger.kernel.org, iamjoonsoo.kim@lge.com, cheol.lee@lge.com,
	jay.sim@lge.com, gunho.lee@lge.com, Namjae Jeon, Hyunchul Lee
Subject: [PATCH v2 05/11] ntfsplus: add file operations
Date: Thu, 27 Nov 2025 13:59:38 +0900
Message-Id: <20251127045944.26009-6-linkinjeon@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20251127045944.26009-1-linkinjeon@kernel.org>
References: <20251127045944.26009-1-linkinjeon@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This adds the implementation of file operations for ntfsplus.

Signed-off-by: Hyunchul Lee
Signed-off-by: Namjae Jeon
---
 fs/ntfsplus/file.c | 1142 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1142 insertions(+)
 create mode 100644 fs/ntfsplus/file.c

diff --git a/fs/ntfsplus/file.c b/fs/ntfsplus/file.c
new file mode 100644
index 000000000000..aebc2b48b0d5
--- /dev/null
+++ b/fs/ntfsplus/file.c
@@ -0,0 +1,1142 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * NTFS kernel file operations. Part of the Linux-NTFS project.
+ *
+ * Copyright (c) 2001-2015 Anton Altaparmakov and Tuxera Inc.
+ * Copyright (c) 2025 LG Electronics Co., Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "lcnalloc.h"
+#include "ntfs.h"
+#include "aops.h"
+#include "reparse.h"
+#include "ea.h"
+#include "ntfs_iomap.h"
+#include "misc.h"
+#include "bitmap.h"
+
+/**
+ * ntfs_file_open - called when an inode is about to be opened
+ * @vi:		inode to be opened
+ * @filp:	file structure describing the inode
+ *
+ * Limit file size to the page cache limit on architectures where unsigned long
+ * is 32-bits. This is the most we can do for now without overflowing the page
+ * cache page index.
+ * Doing it this way means we don't run into problems because of existing
+ * too large files. It would be better to allow the user to read the
+ * beginning of the file but I doubt very much anyone is going to hit this
+ * check on a 32-bit architecture, so there is no point in adding the extra
+ * complexity required to support this.
+ *
+ * On 64-bit architectures, the check is hopefully optimized away by the
+ * compiler.
+ *
+ * After the check passes, just call generic_file_open() to do its work.
+ */
+static int ntfs_file_open(struct inode *vi, struct file *filp)
+{
+	struct ntfs_inode *ni = NTFS_I(vi);
+
+	if (NVolShutdown(ni->vol))
+		return -EIO;
+
+	if (sizeof(unsigned long) < 8) {
+		if (i_size_read(vi) > MAX_LFS_FILESIZE)
+			return -EOVERFLOW;
+	}
+
+	if (filp->f_flags & O_TRUNC && NInoNonResident(ni)) {
+		int err;
+
+		mutex_lock(&ni->mrec_lock);
+		down_read(&ni->runlist.lock);
+		if (!ni->runlist.rl) {
+			err = ntfs_attr_map_whole_runlist(ni);
+			if (err) {
+				up_read(&ni->runlist.lock);
+				mutex_unlock(&ni->mrec_lock);
+				return err;
+			}
+		}
+		ni->lcn_seek_trunc = ni->runlist.rl->lcn;
+		up_read(&ni->runlist.lock);
+		mutex_unlock(&ni->mrec_lock);
+	}
+
+	filp->f_mode |= FMODE_NOWAIT;
+
+	return generic_file_open(vi, filp);
+}
+
+static int ntfs_file_release(struct inode *vi, struct file *filp)
+{
+	struct ntfs_inode *ni = NTFS_I(vi);
+	struct ntfs_volume *vol = ni->vol;
+	s64 aligned_data_size = round_up(ni->data_size, vol->cluster_size);
+
+	if (NInoCompressed(ni))
+		return 0;
+
+	inode_lock(vi);
+	mutex_lock(&ni->mrec_lock);
+	down_write(&ni->runlist.lock);
+	if (aligned_data_size < ni->allocated_size) {
+		int err;
+		s64 vcn_ds = aligned_data_size >> vol->cluster_size_bits;
+		s64 vcn_tr = -1;
+		struct runlist_element *rl = ni->runlist.rl;
+		ssize_t rc = ni->runlist.count - 2;
+
+		while (rc >= 0 && rl[rc].lcn == LCN_HOLE && vcn_ds <= rl[rc].vcn) {
+			vcn_tr = rl[rc].vcn;
+			rc--;
+		}
+
+		if (vcn_tr >= 0) {
+			err =
ntfs_rl_truncate_nolock(vol, &ni->runlist, vcn_tr);
+			if (err) {
+				ntfs_free(ni->runlist.rl);
+				ni->runlist.rl = NULL;
+				ntfs_error(vol->sb, "Preallocated block rollback failed");
+			} else {
+				ni->allocated_size = vcn_tr << vol->cluster_size_bits;
+				err = ntfs_attr_update_mapping_pairs(ni, 0);
+				if (err)
+					ntfs_error(vol->sb,
+						"Failed to rollback mapping pairs for prealloc");
+			}
+		}
+	}
+	up_write(&ni->runlist.lock);
+	mutex_unlock(&ni->mrec_lock);
+	inode_unlock(vi);
+
+	return 0;
+}
+
+/**
+ * ntfs_file_fsync - sync a file to disk
+ * @filp:	file to be synced
+ * @start:	start offset to be synced
+ * @end:	end offset to be synced
+ * @datasync:	if non-zero only flush user data and not metadata
+ *
+ * Data integrity sync of a file to disk. Used for fsync, fdatasync, and msync
+ * system calls. This function is inspired by fs/buffer.c::file_fsync().
+ *
+ * If @datasync is false, write the mft record and all associated extent mft
+ * records as well as the $DATA attribute and then sync the block device.
+ *
+ * If @datasync is true and the attribute is non-resident, we skip the writing
+ * of the mft record and all associated extent mft records (this might still
+ * happen due to the write_inode_now() call).
+ *
+ * Also, if @datasync is true, we do not wait on the inode to be written out
+ * but we always wait on the page cache pages to be written out.
+ */
+static int ntfs_file_fsync(struct file *filp, loff_t start, loff_t end,
+		int datasync)
+{
+	struct inode *vi = filp->f_mapping->host;
+	struct ntfs_inode *ni = NTFS_I(vi);
+	struct ntfs_volume *vol = ni->vol;
+	int err, ret = 0;
+	struct inode *parent_vi, *ia_vi;
+	struct ntfs_attr_search_ctx *ctx;
+
+	ntfs_debug("Entering for inode 0x%lx.", vi->i_ino);
+
+	if (NVolShutdown(vol))
+		return -EIO;
+
+	err = file_write_and_wait_range(filp, start, end);
+	if (err)
+		return err;
+
+	if (!datasync || !NInoNonResident(NTFS_I(vi)))
+		ret = __ntfs_write_inode(vi, 1);
+	write_inode_now(vi, !datasync);
+
+	ctx = ntfs_attr_get_search_ctx(ni, NULL);
+	if (!ctx)
+		return -ENOMEM;
+
+	mutex_lock_nested(&ni->mrec_lock, NTFS_INODE_MUTEX_NORMAL_2);
+	while (!(err = ntfs_attr_lookup(AT_UNUSED, NULL, 0, 0, 0, NULL, 0, ctx))) {
+		if (ctx->attr->type == AT_FILE_NAME) {
+			struct file_name_attr *fn = (struct file_name_attr *)((u8 *)ctx->attr +
+					le16_to_cpu(ctx->attr->data.resident.value_offset));
+
+			parent_vi = ntfs_iget(vi->i_sb, MREF_LE(fn->parent_directory));
+			if (IS_ERR(parent_vi))
+				continue;
+			mutex_lock_nested(&NTFS_I(parent_vi)->mrec_lock, NTFS_INODE_MUTEX_PARENT_2);
+			ia_vi = ntfs_index_iget(parent_vi, I30, 4);
+			mutex_unlock(&NTFS_I(parent_vi)->mrec_lock);
+			if (IS_ERR(ia_vi)) {
+				iput(parent_vi);
+				continue;
+			}
+			write_inode_now(ia_vi, 1);
+			iput(ia_vi);
+			write_inode_now(parent_vi, 1);
+			iput(parent_vi);
+		} else if (ctx->attr->non_resident) {
+			struct inode *attr_vi;
+			__le16 *name;
+
+			name = (__le16 *)((u8 *)ctx->attr + le16_to_cpu(ctx->attr->name_offset));
+			if (ctx->attr->type == AT_DATA && ctx->attr->name_length == 0)
+				continue;
+
+			attr_vi = ntfs_attr_iget(vi, ctx->attr->type,
+					name, ctx->attr->name_length);
+			if (IS_ERR(attr_vi))
+				continue;
+			spin_lock(&attr_vi->i_lock);
+			if (attr_vi->i_state & I_DIRTY_PAGES) {
+				spin_unlock(&attr_vi->i_lock);
+				filemap_write_and_wait(attr_vi->i_mapping);
+			} else
spin_unlock(&attr_vi->i_lock);
+			iput(attr_vi);
+		}
+	}
+	mutex_unlock(&ni->mrec_lock);
+	ntfs_attr_put_search_ctx(ctx);
+
+	write_inode_now(vol->mftbmp_ino, 1);
+	down_write(&vol->lcnbmp_lock);
+	write_inode_now(vol->lcnbmp_ino, 1);
+	up_write(&vol->lcnbmp_lock);
+	write_inode_now(vol->mft_ino, 1);
+
+	/*
+	 * NOTE: If we were to use mapping->private_list (see ext2 and
+	 * fs/buffer.c) for dirty blocks then we could optimize the below to be
+	 * sync_mapping_buffers(vi->i_mapping).
+	 */
+	err = sync_blockdev(vi->i_sb->s_bdev);
+	if (unlikely(err && !ret))
+		ret = err;
+	if (likely(!ret))
+		ntfs_debug("Done.");
+	else
+		ntfs_warning(vi->i_sb,
+			"Failed to f%ssync inode 0x%lx. Error %u.",
+			datasync ? "data" : "", vi->i_ino, -ret);
+	if (!ret)
+		blkdev_issue_flush(vi->i_sb->s_bdev);
+	return ret;
+}
+
+/**
+ * ntfsp_setattr - called from notify_change() when an attribute is being changed
+ * @idmap:	idmap of the mount the inode was found from
+ * @dentry:	dentry whose attributes to change
+ * @attr:	structure describing the attributes and the changes
+ *
+ * We have to trap VFS attempts to truncate the file described by @dentry as
+ * soon as possible, because we do not implement changes in i_size yet. So we
+ * abort all i_size changes here.
+ *
+ * We also abort all changes of user, group, and mode as we do not implement
+ * the NTFS ACLs yet.
+ */
+int ntfsp_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+		struct iattr *attr)
+{
+	struct inode *vi = d_inode(dentry);
+	int err;
+	unsigned int ia_valid = attr->ia_valid;
+	struct ntfs_inode *ni = NTFS_I(vi);
+	struct ntfs_volume *vol = ni->vol;
+
+	if (NVolShutdown(vol))
+		return -EIO;
+
+	err = setattr_prepare(idmap, dentry, attr);
+	if (err)
+		goto out;
+
+	if (!(vol->vol_flags & VOLUME_IS_DIRTY))
+		ntfs_set_volume_flags(vol, VOLUME_IS_DIRTY);
+
+	if (ia_valid & ATTR_SIZE) {
+		if (NInoCompressed(ni) || NInoEncrypted(ni)) {
+			ntfs_warning(vi->i_sb,
+				"Changes in inode size are not supported yet for %s files, ignoring.",
+				NInoCompressed(ni) ? "compressed" : "encrypted");
+			err = -EOPNOTSUPP;
+		} else {
+			loff_t old_size = vi->i_size;
+
+			err = inode_newsize_ok(vi, attr->ia_size);
+			if (err)
+				goto out;
+
+			inode_dio_wait(vi);
+			/* Serialize against page faults */
+			if (NInoNonResident(NTFS_I(vi)) &&
+			    attr->ia_size < old_size) {
+				err = iomap_truncate_page(vi, attr->ia_size, NULL,
+						&ntfs_read_iomap_ops,
+						&ntfs_iomap_folio_ops, NULL);
+				if (err)
+					goto out;
+			}
+
+			truncate_setsize(vi, attr->ia_size);
+			err = ntfs_truncate_vfs(vi, attr->ia_size, old_size);
+			if (err) {
+				i_size_write(vi, old_size);
+				goto out;
+			}
+
+			if (NInoNonResident(ni) && attr->ia_size > old_size &&
+			    old_size % PAGE_SIZE != 0) {
+				loff_t len = min_t(loff_t,
+						round_up(old_size, PAGE_SIZE) - old_size,
+						attr->ia_size - old_size);
+				err = iomap_zero_range(vi, old_size, len,
+						NULL, &ntfs_read_iomap_ops,
+						&ntfs_iomap_folio_ops, NULL);
+			}
+		}
+		if (ia_valid == ATTR_SIZE)
+			goto out;
+		ia_valid |= ATTR_MTIME | ATTR_CTIME;
+	}
+
+	setattr_copy(idmap, vi, attr);
+
+	if (vol->sb->s_flags & SB_POSIXACL && !S_ISLNK(vi->i_mode)) {
+		err = posix_acl_chmod(idmap, dentry, vi->i_mode);
+		if (err)
+			goto out;
+	}
+
+	if (0222 & vi->i_mode)
+		ni->flags &= ~FILE_ATTR_READONLY;
+	else
+		ni->flags |= FILE_ATTR_READONLY;
+
+	if (ia_valid & (ATTR_UID |
ATTR_GID | ATTR_MODE)) {
+		unsigned int flags = 0;
+
+		if (ia_valid & ATTR_UID)
+			flags |= NTFS_EA_UID;
+		if (ia_valid & ATTR_GID)
+			flags |= NTFS_EA_GID;
+		if (ia_valid & ATTR_MODE)
+			flags |= NTFS_EA_MODE;
+
+		if (S_ISDIR(vi->i_mode))
+			vi->i_mode &= ~vol->dmask;
+		else
+			vi->i_mode &= ~vol->fmask;
+
+		mutex_lock(&ni->mrec_lock);
+		ntfs_ea_set_wsl_inode(vi, 0, NULL, flags);
+		mutex_unlock(&ni->mrec_lock);
+	}
+
+	mark_inode_dirty(vi);
+out:
+	return err;
+}
+
+int ntfsp_getattr(struct mnt_idmap *idmap, const struct path *path,
+		struct kstat *stat, unsigned int request_mask,
+		unsigned int query_flags)
+{
+	struct inode *inode = d_backing_inode(path->dentry);
+
+	generic_fillattr(idmap, request_mask, inode, stat);
+
+	stat->blksize = NTFS_SB(inode->i_sb)->cluster_size;
+	stat->blocks = (((u64)NTFS_I(inode)->i_dealloc_clusters <<
+			NTFS_SB(inode->i_sb)->cluster_size_bits) >> 9) + inode->i_blocks;
+	stat->result_mask |= STATX_BTIME;
+	stat->btime = NTFS_I(inode)->i_crtime;
+
+	return 0;
+}
+
+static loff_t ntfs_file_llseek(struct file *file, loff_t offset, int whence)
+{
+	struct inode *vi = file->f_mapping->host;
+
+	if (whence == SEEK_DATA || whence == SEEK_HOLE) {
+		struct ntfs_inode *ni = NTFS_I(vi);
+		struct ntfs_volume *vol = ni->vol;
+		struct runlist_element *rl;
+		s64 vcn;
+		unsigned int vcn_off;
+		loff_t end_off;
+		unsigned long flags;
+		int i;
+
+		inode_lock_shared(vi);
+
+		if (NInoCompressed(ni) || NInoEncrypted(ni))
+			goto error;
+
+		read_lock_irqsave(&ni->size_lock, flags);
+		end_off = ni->data_size;
+		read_unlock_irqrestore(&ni->size_lock, flags);
+
+		if (offset < 0 || offset >= end_off)
+			goto error;
+
+		if (!NInoNonResident(ni)) {
+			if (whence == SEEK_HOLE)
+				offset = end_off;
+			goto found_no_runlist_lock;
+		}
+
+		vcn = offset >> vol->cluster_size_bits;
+		vcn_off = offset & vol->cluster_size_mask;
+
+		down_read(&ni->runlist.lock);
+		rl = ni->runlist.rl;
+		i = 0;
+
+#ifdef DEBUG
+		ntfs_debug("init:");
+		ntfs_debug_dump_runlist(rl);
+#endif
+		while (1) {
+			if (!rl || !NInoFullyMapped(ni) || rl[i].lcn == LCN_RL_NOT_MAPPED) {
+				int ret;
+
+				up_read(&ni->runlist.lock);
+				ret = ntfs_map_runlist(ni, rl ? rl[i].vcn : 0);
+				if (ret)
+					goto error;
+				down_read(&ni->runlist.lock);
+				rl = ni->runlist.rl;
+#ifdef DEBUG
+				ntfs_debug("mapped:");
+				ntfs_debug_dump_runlist(ni->runlist.rl);
+#endif
+				continue;
+			} else if (rl[i].lcn == LCN_ENOENT) {
+				if (whence == SEEK_DATA) {
+					up_read(&ni->runlist.lock);
+					goto error;
+				} else {
+					offset = end_off;
+					goto found;
+				}
+			} else if (rl[i + 1].vcn > vcn) {
+				if ((whence == SEEK_DATA && (rl[i].lcn >= 0 ||
+				     rl[i].lcn == LCN_DELALLOC)) ||
+				    (whence == SEEK_HOLE && rl[i].lcn == LCN_HOLE)) {
+					offset = (vcn << vol->cluster_size_bits) + vcn_off;
+					if (offset < ni->data_size)
+						goto found;
+				}
+				vcn = rl[i + 1].vcn;
+				vcn_off = 0;
+			}
+			i++;
+		}
+		up_read(&ni->runlist.lock);
+		inode_unlock_shared(vi);
+		return -EIO;
+found:
+		up_read(&ni->runlist.lock);
+found_no_runlist_lock:
+		inode_unlock_shared(vi);
+		return vfs_setpos(file, offset, vi->i_sb->s_maxbytes);
+error:
+		inode_unlock_shared(vi);
+		return -ENXIO;
+	} else {
+		return generic_file_llseek_size(file, offset, whence,
+				vi->i_sb->s_maxbytes,
+				i_size_read(vi));
+	}
+}
+
+static ssize_t ntfs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
+{
+	struct inode *vi = file_inode(iocb->ki_filp);
+	struct super_block *sb = vi->i_sb;
+	ssize_t ret;
+
+	if (NVolShutdown(NTFS_SB(sb)))
+		return -EIO;
+
+	if (NInoCompressed(NTFS_I(vi)) && iocb->ki_flags & IOCB_DIRECT)
+		return -EOPNOTSUPP;
+
+	inode_lock_shared(vi);
+
+	if (iocb->ki_flags & IOCB_DIRECT) {
+		size_t count = iov_iter_count(to);
+
+		if ((iocb->ki_pos | count) & (sb->s_blocksize - 1)) {
+			ret = -EINVAL;
+			goto inode_unlock;
+		}
+
+		file_accessed(iocb->ki_filp);
+		ret = iomap_dio_rw(iocb, to, &ntfs_read_iomap_ops, NULL, IOMAP_DIO_PARTIAL,
+				NULL, 0);
+	} else {
+		ret = generic_file_read_iter(iocb, to);
+	}
+
+inode_unlock:
+	inode_unlock_shared(vi);
+
+	return ret;
+}
+
+static int ntfs_file_write_dio_end_io(struct kiocb *iocb, ssize_t size,
+		int error, unsigned int flags)
+{
+	struct inode *inode = file_inode(iocb->ki_filp);
+
+	if (error)
+		return error;
+
+	if (size) {
+		if (i_size_read(inode) < iocb->ki_pos + size) {
+			i_size_write(inode, iocb->ki_pos + size);
+			mark_inode_dirty(inode);
+		}
+	}
+
+	return 0;
+}
+
+static const struct iomap_dio_ops ntfs_write_dio_ops = {
+	.end_io = ntfs_file_write_dio_end_io,
+};
+
+static ssize_t ntfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+{
+	struct file *file = iocb->ki_filp;
+	struct inode *vi = file->f_mapping->host;
+	struct ntfs_inode *ni = NTFS_I(vi);
+	struct ntfs_volume *vol = ni->vol;
+	ssize_t ret;
+	ssize_t count;
+	loff_t pos;
+	int err;
+	loff_t old_data_size, old_init_size;
+
+	if (NVolShutdown(vol))
+		return -EIO;
+
+	if (NInoEncrypted(ni)) {
+		ntfs_error(vi->i_sb, "Writing for %s files is not supported yet",
+			NInoCompressed(ni) ?
"Compressed" : "Encrypted");
+		return -EOPNOTSUPP;
+	}
+
+	if (NInoCompressed(ni) && iocb->ki_flags & IOCB_DIRECT)
+		return -EOPNOTSUPP;
+
+	if (iocb->ki_flags & IOCB_NOWAIT) {
+		if (!inode_trylock(vi))
+			return -EAGAIN;
+	} else
+		inode_lock(vi);
+
+	ret = generic_write_checks(iocb, from);
+	if (ret <= 0)
+		goto out_lock;
+
+	if (NInoNonResident(ni) && (iocb->ki_flags & IOCB_DIRECT) &&
+	    ((iocb->ki_pos | ret) & (vi->i_sb->s_blocksize - 1))) {
+		ret = -EINVAL;
+		goto out_lock;
+	}
+
+	err = file_modified(iocb->ki_filp);
+	if (err) {
+		ret = err;
+		goto out_lock;
+	}
+
+	if (!(vol->vol_flags & VOLUME_IS_DIRTY))
+		ntfs_set_volume_flags(vol, VOLUME_IS_DIRTY);
+
+	pos = iocb->ki_pos;
+	count = ret;
+
+	old_data_size = ni->data_size;
+	old_init_size = ni->initialized_size;
+	if (iocb->ki_pos + ret > old_data_size) {
+		mutex_lock(&ni->mrec_lock);
+		if (!NInoCompressed(ni) && iocb->ki_pos + ret > ni->allocated_size &&
+		    iocb->ki_pos + ret < ni->allocated_size + vol->preallocated_size)
+			ret = ntfs_attr_expand(ni, iocb->ki_pos + ret,
+					ni->allocated_size + vol->preallocated_size);
+		else if (NInoCompressed(ni) && iocb->ki_pos + ret > ni->allocated_size)
+			ret = ntfs_attr_expand(ni, iocb->ki_pos + ret,
+					round_up(iocb->ki_pos + ret, ni->itype.compressed.block_size));
+		else
+			ret = ntfs_attr_expand(ni, iocb->ki_pos + ret, 0);
+		mutex_unlock(&ni->mrec_lock);
+		if (ret < 0)
+			goto out;
+	}
+
+	if (NInoNonResident(ni) && iocb->ki_pos + count > old_init_size) {
+		ret = ntfs_extend_initialized_size(vi, iocb->ki_pos,
+				iocb->ki_pos + count);
+		if (ret < 0)
+			goto out;
+	}
+
+	if (NInoNonResident(ni) && NInoCompressed(ni)) {
+		ret = ntfs_compress_write(ni, pos, count, from);
+		if (ret > 0)
+			iocb->ki_pos += ret;
+		goto out;
+	}
+
+	if (NInoNonResident(ni) && iocb->ki_flags & IOCB_DIRECT) {
+		ret = iomap_dio_rw(iocb, from, &ntfs_dio_iomap_ops,
+				&ntfs_write_dio_ops, 0, NULL, 0);
+		if (ret == -ENOTBLK)
+			ret = 0;
+		else if (ret < 0)
+			goto out;
+
+		if (iov_iter_count(from)) {
+			loff_t offset, end;
+			ssize_t written;
+			int ret2;
+
+			offset = iocb->ki_pos;
+			iocb->ki_flags &= ~IOCB_DIRECT;
+			written = iomap_file_buffered_write(iocb, from,
+					&ntfs_write_iomap_ops, &ntfs_iomap_folio_ops,
+					NULL);
+			if (written < 0) {
+				err = written;
+				goto out;
+			}
+
+			ret += written;
+			end = iocb->ki_pos + written - 1;
+			ret2 = filemap_write_and_wait_range(iocb->ki_filp->f_mapping,
+					offset, end);
+			if (ret2)
+				goto out_err;
+			if (!ret2)
+				invalidate_mapping_pages(iocb->ki_filp->f_mapping,
+						offset >> PAGE_SHIFT,
+						end >> PAGE_SHIFT);
+		}
+	} else {
+		ret = iomap_file_buffered_write(iocb, from, &ntfs_write_iomap_ops,
+				&ntfs_iomap_folio_ops, NULL);
+	}
+out:
+	if (ret < 0 && ret != -EIOCBQUEUED) {
+out_err:
+		if (ni->initialized_size != old_init_size) {
+			mutex_lock(&ni->mrec_lock);
+			ntfs_attr_set_initialized_size(ni, old_init_size);
+			mutex_unlock(&ni->mrec_lock);
+		}
+		if (ni->data_size != old_data_size) {
+			truncate_setsize(vi, old_data_size);
+			ntfs_attr_truncate(ni, old_data_size);
+		}
+	}
+out_lock:
+	inode_unlock(vi);
+	if (ret > 0)
+		ret = generic_write_sync(iocb, ret);
+	return ret;
+}
+
+static vm_fault_t ntfs_filemap_page_mkwrite(struct vm_fault *vmf)
+{
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	vm_fault_t ret;
+
+	if (unlikely(IS_IMMUTABLE(inode)))
+		return VM_FAULT_SIGBUS;
+
+	sb_start_pagefault(inode->i_sb);
+	file_update_time(vmf->vma->vm_file);
+
+	ret = iomap_page_mkwrite(vmf, &ntfs_page_mkwrite_iomap_ops, NULL);
+	sb_end_pagefault(inode->i_sb);
+	return ret;
+}
+
+static const struct vm_operations_struct ntfs_file_vm_ops = {
+	.fault = filemap_fault,
+	.map_pages = filemap_map_pages,
+	.page_mkwrite = ntfs_filemap_page_mkwrite,
+};
+
+static int ntfs_file_mmap_prepare(struct vm_area_desc *desc)
+{
+	struct file *file = desc->file;
+	struct inode *inode = file_inode(file);
+
+	if (NVolShutdown(NTFS_SB(file->f_mapping->host->i_sb)))
+		return -EIO;
+
+	if (NInoCompressed(NTFS_I(inode)))
+		return -EOPNOTSUPP;
+
+	if (desc->vm_flags & VM_WRITE) {
+		struct inode *inode = file_inode(file);
+		loff_t from, to;
+		int err;
+
+		from = ((loff_t)desc->pgoff << PAGE_SHIFT);
+		to = min_t(loff_t, i_size_read(inode),
+				from + desc->end - desc->start);
+
+		if (NTFS_I(inode)->initialized_size < to) {
+			err = ntfs_extend_initialized_size(inode, to, to);
+			if (err)
+				return err;
+		}
+	}
+
+	file_accessed(file);
+	desc->vm_ops = &ntfs_file_vm_ops;
+	return 0;
+}
+
+static int ntfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+		u64 start, u64 len)
+{
+	return iomap_fiemap(inode, fieinfo, start, len, &ntfs_read_iomap_ops);
+}
+
+static const char *ntfs_get_link(struct dentry *dentry, struct inode *inode,
+		struct delayed_call *done)
+{
+	if (!NTFS_I(inode)->target)
+		return ERR_PTR(-EINVAL);
+
+	return NTFS_I(inode)->target;
+}
+
+static ssize_t ntfs_file_splice_read(struct file *in, loff_t *ppos,
+		struct pipe_inode_info *pipe, size_t len, unsigned int flags)
+{
+	if (NVolShutdown(NTFS_SB(in->f_mapping->host->i_sb)))
+		return -EIO;
+
+	return filemap_splice_read(in, ppos, pipe, len, flags);
+}
+
+static int ntfs_ioctl_shutdown(struct super_block *sb, unsigned long arg)
+{
+	u32 flags;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	if (get_user(flags, (__u32 __user *)arg))
+		return -EFAULT;
+
+	return ntfs_force_shutdown(sb, flags);
+}
+
+static int ntfs_ioctl_get_volume_label(struct file *filp, unsigned long arg)
+{
+	struct ntfs_volume *vol = NTFS_SB(file_inode(filp)->i_sb);
+	char __user *buf = (char __user *)arg;
+
+	if (!vol->volume_label) {
+		if (copy_to_user(buf, "", 1))
+			return -EFAULT;
+	} else if (copy_to_user(buf, vol->volume_label,
+			MIN(FSLABEL_MAX, strlen(vol->volume_label) + 1)))
+		return -EFAULT;
+	return 0;
+}
+
+static int ntfs_ioctl_set_volume_label(struct file *filp, unsigned long arg)
+{
+	struct ntfs_volume *vol = NTFS_SB(file_inode(filp)->i_sb);
+	char *label;
+	int ret;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	label = strndup_user((const char __user *)arg, FSLABEL_MAX);
+	if (IS_ERR(label))
+		return PTR_ERR(label);
+
+	ret = mnt_want_write_file(filp);
+	if (ret)
+		goto out;
+
+	ret = ntfs_write_volume_label(vol, label);
+	mnt_drop_write_file(filp);
+out:
+	kfree(label);
+	return ret;
+}
+
+static int ntfs_ioctl_fitrim(struct ntfs_volume *vol, unsigned long arg)
+{
+	struct fstrim_range __user *user_range;
+	struct fstrim_range range;
+	struct block_device *dev;
+	int err;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	dev = vol->sb->s_bdev;
+	if (!bdev_max_discard_sectors(dev))
+		return -EOPNOTSUPP;
+
+	user_range = (struct fstrim_range __user *)arg;
+	if (copy_from_user(&range, user_range, sizeof(range)))
+		return -EFAULT;
+
+	if (range.len == 0)
+		return -EINVAL;
+
+	if (range.len < vol->cluster_size)
+		return -EINVAL;
+
+	range.minlen = max_t(u32, range.minlen, bdev_discard_granularity(dev));
+
+	err = ntfsp_trim_fs(vol, &range);
+	if (err < 0)
+		return err;
+
+	if (copy_to_user(user_range, &range, sizeof(range)))
+		return -EFAULT;
+
+	return 0;
+}
+
+long ntfsp_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+	switch (cmd) {
+	case NTFS_IOC_SHUTDOWN:
+		return ntfs_ioctl_shutdown(file_inode(filp)->i_sb, arg);
+	case FS_IOC_GETFSLABEL:
+		return ntfs_ioctl_get_volume_label(filp, arg);
+	case FS_IOC_SETFSLABEL:
+		return ntfs_ioctl_set_volume_label(filp, arg);
+	case FITRIM:
+		return ntfs_ioctl_fitrim(NTFS_SB(file_inode(filp)->i_sb), arg);
+	default:
+		return -ENOTTY;
+	}
+}
+
+#ifdef CONFIG_COMPAT
+long ntfsp_compat_ioctl(struct file *filp, unsigned int cmd,
+		unsigned long arg)
+{
+	return ntfsp_ioctl(filp, cmd, (unsigned long)compat_ptr(arg));
+}
+#endif
+
+static long ntfs_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+{
+	struct inode *vi = file_inode(file);
+	struct ntfs_inode *ni = NTFS_I(vi);
+	struct ntfs_volume *vol = ni->vol;
+	int err = 0;
+	loff_t end_offset = offset + len;
+	loff_t old_size, new_size;
+	s64 start_vcn, end_vcn;
+	bool map_locked = false;
+
+	if (!S_ISREG(vi->i_mode))
+		return -EOPNOTSUPP;
+
+	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_INSERT_RANGE |
+		     FALLOC_FL_PUNCH_HOLE | FALLOC_FL_COLLAPSE_RANGE))
+		return -EOPNOTSUPP;
+
+	if (!NVolFreeClusterKnown(vol))
+		wait_event(vol->free_waitq, NVolFreeClusterKnown(vol));
+
+	if ((ni->vol->mft_zone_end - ni->vol->mft_zone_start) == 0)
+		return -ENOSPC;
+
+	if (NInoNonResident(ni) && !NInoFullyMapped(ni)) {
+		down_write(&ni->runlist.lock);
+		err = ntfs_attr_map_whole_runlist(ni);
+		up_write(&ni->runlist.lock);
+		if (err)
+			return err;
+	}
+
+	if (!(vol->vol_flags & VOLUME_IS_DIRTY)) {
+		err = ntfs_set_volume_flags(vol, VOLUME_IS_DIRTY);
+		if (err)
+			return err;
+	}
+
+	old_size = i_size_read(vi);
+	new_size = max_t(loff_t, old_size, end_offset);
+	start_vcn = offset >> vol->cluster_size_bits;
+	end_vcn = ((end_offset - 1) >> vol->cluster_size_bits) + 1;
+
+	inode_lock(vi);
+	if (NInoCompressed(ni) || NInoEncrypted(ni)) {
+		err = -EOPNOTSUPP;
+		goto out;
+	}
+
+	inode_dio_wait(vi);
+	if (mode & (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_COLLAPSE_RANGE |
+		    FALLOC_FL_INSERT_RANGE)) {
+		filemap_invalidate_lock(vi->i_mapping);
+		map_locked = true;
+	}
+
+	if (mode & FALLOC_FL_INSERT_RANGE) {
+		loff_t offset_down = round_down(offset,
+				max_t(unsigned long, vol->cluster_size, PAGE_SIZE));
+		loff_t alloc_size;
+
+		if (NVolDisableSparse(vol)) {
+			err = -EOPNOTSUPP;
+			goto out;
+		}
+
+		if ((offset & vol->cluster_size_mask) ||
+		    (len & vol->cluster_size_mask) ||
+		    offset >= ni->allocated_size) {
+			err = -EINVAL;
+			goto out;
+		}
+
+		new_size = old_size +
+			((end_vcn - start_vcn) << vol->cluster_size_bits);
+		alloc_size = ni->allocated_size +
+			((end_vcn - start_vcn) << vol->cluster_size_bits);
+		if (alloc_size < 0) {
+			err = -EFBIG;
+			goto out;
+		}
+		err = inode_newsize_ok(vi, alloc_size);
+		if (err)
+			goto out;
+
+		err = filemap_write_and_wait_range(vi->i_mapping,
+				offset_down, LLONG_MAX);
+		if (err)
+			goto out;
+
+		truncate_pagecache(vi, offset_down);
+
+		mutex_lock_nested(&ni->mrec_lock, NTFS_INODE_MUTEX_NORMAL);
+		err = ntfs_non_resident_attr_insert_range(ni, start_vcn,
+				end_vcn - start_vcn);
+		mutex_unlock(&ni->mrec_lock);
+		if (err)
+			goto out;
+	} else if (mode & FALLOC_FL_COLLAPSE_RANGE) {
+		loff_t offset_down = round_down(offset,
+				max_t(unsigned long, vol->cluster_size, PAGE_SIZE));
+
+		if ((offset & vol->cluster_size_mask) ||
+		    (len & vol->cluster_size_mask) ||
+		    offset >= ni->allocated_size) {
+			err = -EINVAL;
+			goto out;
+		}
+
+		if ((end_vcn << vol->cluster_size_bits) > ni->allocated_size)
+			end_vcn = DIV_ROUND_UP(ni->allocated_size - 1,
+					vol->cluster_size) + 1;
+		new_size = old_size -
+			((end_vcn - start_vcn) << vol->cluster_size_bits);
+		if (new_size < 0)
+			new_size = 0;
+		err = filemap_write_and_wait_range(vi->i_mapping,
+				offset_down, LLONG_MAX);
+		if (err)
+			goto out;
+
+		truncate_pagecache(vi, offset_down);
+
+		mutex_lock_nested(&ni->mrec_lock, NTFS_INODE_MUTEX_NORMAL);
+		err = ntfs_non_resident_attr_collapse_range(ni, start_vcn,
+				end_vcn - start_vcn);
+		mutex_unlock(&ni->mrec_lock);
+		if (err)
+			goto out;
+	} else if (mode & FALLOC_FL_PUNCH_HOLE) {
+		loff_t offset_down = round_down(offset, max_t(unsigned int,
+				vol->cluster_size, PAGE_SIZE));
+
+		if (NVolDisableSparse(vol)) {
+			err = -EOPNOTSUPP;
+			goto out;
+		}
+
+		if (!(mode & FALLOC_FL_KEEP_SIZE)) {
+			err = -EINVAL;
+			goto out;
+		}
+
+		if (offset >= ni->data_size)
+			goto out;
+
+		if (offset + len > ni->data_size) {
+			end_offset = ni->data_size;
+			end_vcn = ((end_offset - 1) >> vol->cluster_size_bits) + 1;
+		}
+
+		err = filemap_write_and_wait_range(vi->i_mapping, offset_down, LLONG_MAX);
+		if (err)
+			goto out;
+		truncate_pagecache(vi, offset_down);
+
+		if (offset & vol->cluster_size_mask) {
+			loff_t to;
+
+			to = min_t(loff_t, (start_vcn + 1) <<
vol->cluster_size_bits,
+					end_offset);
+			err = iomap_zero_range(vi, offset, to - offset, NULL,
+					&ntfs_read_iomap_ops,
+					&ntfs_iomap_folio_ops, NULL);
+			if (err < 0 || (end_vcn - start_vcn) == 1)
+				goto out;
+			start_vcn++;
+		}
+		if (end_offset & vol->cluster_size_mask) {
+			loff_t from;
+
+			from = (end_vcn - 1) << vol->cluster_size_bits;
+			err = iomap_zero_range(vi, from, end_offset - from, NULL,
+					&ntfs_read_iomap_ops,
+					&ntfs_iomap_folio_ops, NULL);
+			if (err < 0 || (end_vcn - start_vcn) == 1)
+				goto out;
+			end_vcn--;
+		}
+
+		mutex_lock_nested(&ni->mrec_lock, NTFS_INODE_MUTEX_NORMAL);
+		err = ntfs_non_resident_attr_punch_hole(ni, start_vcn,
+				end_vcn - start_vcn);
+		mutex_unlock(&ni->mrec_lock);
+		if (err)
+			goto out;
+	} else if (mode == 0 || mode == FALLOC_FL_KEEP_SIZE) {
+		s64 need_space;
+
+		err = inode_newsize_ok(vi, new_size);
+		if (err)
+			goto out;
+
+		need_space = ni->allocated_size >> vol->cluster_size_bits;
+		if (need_space > start_vcn)
+			need_space = end_vcn - need_space;
+		else
+			need_space = end_vcn - start_vcn;
+		if (need_space > 0 &&
+		    need_space > (atomic64_read(&vol->free_clusters) -
+				  atomic64_read(&vol->dirty_clusters))) {
+			err = -ENOSPC;
+			goto out;
+		}
+
+		err = ntfs_attr_fallocate(ni, offset, len,
+				mode & FALLOC_FL_KEEP_SIZE ?
true : false);
+		if (err)
+			goto out;
+	}
+
+	/* inode->i_blocks is already updated in ntfs_attr_update_mapping_pairs */
+	if (!(mode & FALLOC_FL_KEEP_SIZE) && new_size != old_size)
+		i_size_write(vi, ni->data_size);
+
+out:
+	if (map_locked)
+		filemap_invalidate_unlock(vi->i_mapping);
+	if (!err) {
+		if (mode == 0 && NInoNonResident(ni) &&
+		    offset > old_size && old_size % PAGE_SIZE != 0) {
+			loff_t len = min_t(loff_t,
+					round_up(old_size, PAGE_SIZE) - old_size,
+					offset - old_size);
+			err = iomap_zero_range(vi, old_size, len, NULL,
+					&ntfs_read_iomap_ops,
+					&ntfs_iomap_folio_ops, NULL);
+		}
+		NInoSetFileNameDirty(ni);
+		inode_set_mtime_to_ts(vi, inode_set_ctime_current(vi));
+		mark_inode_dirty(vi);
+	}
+
+	inode_unlock(vi);
+	return err;
+}
+
+const struct file_operations ntfs_file_ops = {
+	.llseek		= ntfs_file_llseek,
+	.read_iter	= ntfs_file_read_iter,
+	.write_iter	= ntfs_file_write_iter,
+	.fsync		= ntfs_file_fsync,
+	.mmap_prepare	= ntfs_file_mmap_prepare,
+	.open		= ntfs_file_open,
+	.release	= ntfs_file_release,
+	.splice_read	= ntfs_file_splice_read,
+	.splice_write	= iter_file_splice_write,
+	.unlocked_ioctl	= ntfsp_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl	= ntfsp_compat_ioctl,
+#endif
+	.fallocate	= ntfs_fallocate,
+};
+
+const struct inode_operations ntfs_file_inode_ops = {
+	.setattr	= ntfsp_setattr,
+	.getattr	= ntfsp_getattr,
+	.listxattr	= ntfsp_listxattr,
+	.get_acl	= ntfsp_get_acl,
+	.set_acl	= ntfsp_set_acl,
+	.fiemap		= ntfs_fiemap,
+};
+
+const struct inode_operations ntfs_symlink_inode_operations = {
+	.get_link	= ntfs_get_link,
+	.setattr	= ntfsp_setattr,
+	.listxattr	= ntfsp_listxattr,
+};
+
+const struct inode_operations ntfsp_special_inode_operations = {
+	.setattr	= ntfsp_setattr,
+	.getattr	= ntfsp_getattr,
+	.listxattr	= ntfsp_listxattr,
+	.get_acl	= ntfsp_get_acl,
+	.set_acl	= ntfsp_set_acl,
+};
+
+const struct file_operations ntfs_empty_file_ops = {};
+
+const
struct inode_operations ntfs_empty_inode_ops = {};
-- 
2.25.1