From nobody Mon Dec 1 22:35:44 2025
From: Namjae Jeon
To: viro@zeniv.linux.org.uk, brauner@kernel.org, hch@infradead.org,
	hch@lst.de, tytso@mit.edu, willy@infradead.org, jack@suse.cz,
	djwong@kernel.org, josef@toxicpanda.com, sandeen@sandeen.net,
	rgoldwyn@suse.com, xiang@kernel.org, dsterba@suse.com, pali@kernel.org,
	ebiggers@kernel.org, neil@brown.name, amir73il@gmail.com
Cc: linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, iamjoonsoo.kim@lge.com,
	cheol.lee@lge.com, jay.sim@lge.com, gunho.lee@lge.com,
	Namjae Jeon , Hyunchul Lee
Subject: [PATCH v2 07/11] ntfsplus: add attrib operations
Date: Thu, 27 Nov 2025 13:59:40 +0900
Message-Id: <20251127045944.26009-8-linkinjeon@kernel.org>
In-Reply-To: <20251127045944.26009-1-linkinjeon@kernel.org>
References: <20251127045944.26009-1-linkinjeon@kernel.org>

This adds the implementation of attrib operations for ntfsplus.

Signed-off-by: Hyunchul Lee
Signed-off-by: Namjae Jeon
---
 fs/ntfsplus/attrib.c   | 5377 ++++++++++++++++++++++++++++++++++++++++
 fs/ntfsplus/attrlist.c |  285 +++
 fs/ntfsplus/compress.c | 1564 ++++++++++++
 3 files changed, 7226 insertions(+)
 create mode 100644 fs/ntfsplus/attrib.c
 create mode 100644 fs/ntfsplus/attrlist.c
 create mode 100644 fs/ntfsplus/compress.c

diff --git a/fs/ntfsplus/attrib.c b/fs/ntfsplus/attrib.c
new file mode 100644
index 000000000000..86e74e560e35
--- /dev/null
+++ b/fs/ntfsplus/attrib.c
@@ -0,0 +1,5377 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/**
+ * NTFS attribute operations. Part of the Linux-NTFS project.
+ *
+ * Copyright (c) 2001-2012 Anton Altaparmakov and Tuxera Inc.
+ * Copyright (c) 2002 Richard Russon
+ * Copyright (c) 2025 LG Electronics Co., Ltd.
+ *
+ * Part of this file is based on code from the NTFS-3G project
+ * and is copyrighted by the respective authors below:
+ * Copyright (c) 2000-2010 Anton Altaparmakov
+ * Copyright (c) 2002-2005 Richard Russon
+ * Copyright (c) 2002-2008 Szabolcs Szakacsits
+ * Copyright (c) 2004-2007 Yura Pakhuchiy
+ * Copyright (c) 2007-2021 Jean-Pierre Andre
+ * Copyright (c) 2010 Erik Larsson
+ */
+
+#include
+#include
+
+#include "attrib.h"
+#include "attrlist.h"
+#include "lcnalloc.h"
+#include "misc.h"
+#include "mft.h"
+#include "ntfs.h"
+#include "aops.h"
+#include "ntfs_iomap.h"
+
+__le16 AT_UNNAMED[] = { cpu_to_le16('\0') };
+
+/**
+ * ntfs_map_runlist_nolock - map (a part of) a runlist of an ntfs inode
+ * @ni:		ntfs inode for which to map (part of) a runlist
+ * @vcn:	map runlist part containing this vcn
+ * @ctx:	active attribute search context if present or NULL if not
+ *
+ * Map the part of a runlist containing the @vcn of the ntfs inode @ni.
+ *
+ * If @ctx is specified, it is an active search context of @ni and its base mft
+ * record.  This is needed when ntfs_map_runlist_nolock() encounters unmapped
+ * runlist fragments and allows their mapping.  If you do not have the mft
+ * record mapped, you can specify @ctx as NULL and ntfs_map_runlist_nolock()
+ * will perform the necessary mapping and unmapping.
+ *
+ * Note, ntfs_map_runlist_nolock() saves the state of @ctx on entry and
+ * restores it before returning.  Thus, @ctx will be left pointing to the same
+ * attribute on return as on entry.  However, the actual pointers in @ctx may
+ * point to different memory locations on return, so you must remember to reset
+ * any cached pointers from the @ctx, i.e. after the call to
+ * ntfs_map_runlist_nolock(), you will probably want to do:
+ *	m = ctx->mrec;
+ *	a = ctx->attr;
+ * Assuming you cache ctx->attr in a variable @a of type attr_record * and that
+ * you cache ctx->mrec in a variable @m of type struct mft_record *.
+ */
+int ntfs_map_runlist_nolock(struct ntfs_inode *ni, s64 vcn, struct ntfs_attr_search_ctx *ctx)
+{
+	s64 end_vcn;
+	unsigned long flags;
+	struct ntfs_inode *base_ni;
+	struct mft_record *m;
+	struct attr_record *a;
+	struct runlist_element *rl;
+	struct folio *put_this_folio = NULL;
+	int err = 0;
+	bool ctx_is_temporary, ctx_needs_reset;
+	struct ntfs_attr_search_ctx old_ctx = { NULL, };
+	size_t new_rl_count;
+
+	ntfs_debug("Mapping runlist part containing vcn 0x%llx.",
+			(unsigned long long)vcn);
+	if (!NInoAttr(ni))
+		base_ni = ni;
+	else
+		base_ni = ni->ext.base_ntfs_ino;
+	if (!ctx) {
+		ctx_is_temporary = ctx_needs_reset = true;
+		m = map_mft_record(base_ni);
+		if (IS_ERR(m))
+			return PTR_ERR(m);
+		ctx = ntfs_attr_get_search_ctx(base_ni, m);
+		if (unlikely(!ctx)) {
+			err = -ENOMEM;
+			goto err_out;
+		}
+	} else {
+		s64 allocated_size_vcn;
+
+		WARN_ON(IS_ERR(ctx->mrec));
+		a = ctx->attr;
+		ctx_is_temporary = false;
+		if (!a->non_resident) {
+			err = -EIO;
+			goto err_out;
+		}
+		end_vcn = le64_to_cpu(a->data.non_resident.highest_vcn);
+		read_lock_irqsave(&ni->size_lock, flags);
+		allocated_size_vcn = ni->allocated_size >>
+				ni->vol->cluster_size_bits;
+		read_unlock_irqrestore(&ni->size_lock, flags);
+		if (!a->data.non_resident.lowest_vcn && end_vcn <= 0)
+			end_vcn = allocated_size_vcn - 1;
+		/*
+		 * If we already have the attribute extent containing @vcn in
+		 * @ctx, no need to look it up again.  We slightly cheat in
+		 * that if vcn exceeds the allocated size, we will refuse to
+		 * map the runlist below, so there is definitely no need to get
+		 * the right attribute extent.
+		 */
+		if (vcn >= allocated_size_vcn || (a->type == ni->type &&
+				a->name_length == ni->name_len &&
+				!memcmp((u8 *)a + le16_to_cpu(a->name_offset),
+				ni->name, ni->name_len) &&
+				le64_to_cpu(a->data.non_resident.lowest_vcn)
+				<= vcn && end_vcn >= vcn))
+			ctx_needs_reset = false;
+		else {
+			/* Save the old search context.
			 */
+			old_ctx = *ctx;
+			/*
+			 * If the currently mapped (extent) inode is not the
+			 * base inode we will unmap it when we reinitialize the
+			 * search context which means we need to get a
+			 * reference to the page containing the mapped mft
+			 * record so we do not accidentally drop changes to the
+			 * mft record when it has not been marked dirty yet.
+			 */
+			if (old_ctx.base_ntfs_ino && old_ctx.ntfs_ino !=
+					old_ctx.base_ntfs_ino) {
+				put_this_folio = old_ctx.ntfs_ino->folio;
+				folio_get(put_this_folio);
+			}
+			/*
+			 * Reinitialize the search context so we can lookup the
+			 * needed attribute extent.
+			 */
+			ntfs_attr_reinit_search_ctx(ctx);
+			ctx_needs_reset = true;
+		}
+	}
+	if (ctx_needs_reset) {
+		err = ntfs_attr_lookup(ni->type, ni->name, ni->name_len,
+				CASE_SENSITIVE, vcn, NULL, 0, ctx);
+		if (unlikely(err)) {
+			if (err == -ENOENT)
+				err = -EIO;
+			goto err_out;
+		}
+		WARN_ON(!ctx->attr->non_resident);
+	}
+	a = ctx->attr;
+	/*
+	 * Only decompress the mapping pairs if @vcn is inside it.  Otherwise
+	 * we get into problems when we try to map an out of bounds vcn because
+	 * we then try to map the already mapped runlist fragment and
+	 * ntfs_mapping_pairs_decompress() fails.
+	 */
+	end_vcn = le64_to_cpu(a->data.non_resident.highest_vcn) + 1;
+	if (unlikely(vcn && vcn >= end_vcn)) {
+		err = -ENOENT;
+		goto err_out;
+	}
+	rl = ntfs_mapping_pairs_decompress(ni->vol, a, &ni->runlist, &new_rl_count);
+	if (IS_ERR(rl))
+		err = PTR_ERR(rl);
+	else {
+		ni->runlist.rl = rl;
+		ni->runlist.count = new_rl_count;
+	}
+err_out:
+	if (ctx_is_temporary) {
+		if (likely(ctx))
+			ntfs_attr_put_search_ctx(ctx);
+		unmap_mft_record(base_ni);
+	} else if (ctx_needs_reset) {
+		/*
+		 * If there is no attribute list, restoring the search context
+		 * is accomplished simply by copying the saved context back over
+		 * the caller supplied context.
		 * If there is an attribute list,
+		 * things are more complicated as we need to deal with mapping
+		 * of mft records and resulting potential changes in pointers.
+		 */
+		if (NInoAttrList(base_ni)) {
+			/*
+			 * If the currently mapped (extent) inode is not the
+			 * one we had before, we need to unmap it and map the
+			 * old one.
+			 */
+			if (ctx->ntfs_ino != old_ctx.ntfs_ino) {
+				/*
+				 * If the currently mapped inode is not the
+				 * base inode, unmap it.
+				 */
+				if (ctx->base_ntfs_ino && ctx->ntfs_ino !=
+						ctx->base_ntfs_ino) {
+					unmap_extent_mft_record(ctx->ntfs_ino);
+					ctx->mrec = ctx->base_mrec;
+					WARN_ON(!ctx->mrec);
+				}
+				/*
+				 * If the old mapped inode is not the base
+				 * inode, map it.
+				 */
+				if (old_ctx.base_ntfs_ino &&
+						old_ctx.ntfs_ino != old_ctx.base_ntfs_ino) {
+retry_map:
+					ctx->mrec = map_mft_record(old_ctx.ntfs_ino);
+					/*
+					 * Something bad has happened.  If out
+					 * of memory retry till it succeeds.
+					 * Any other errors are fatal and we
+					 * return the error code in ctx->mrec.
+					 * Let the caller deal with it...  We
+					 * just need to fudge things so the
+					 * caller can reinit and/or put the
+					 * search context safely.
+					 */
+					if (IS_ERR(ctx->mrec)) {
+						if (PTR_ERR(ctx->mrec) == -ENOMEM) {
+							schedule();
+							goto retry_map;
+						} else
+							old_ctx.ntfs_ino =
+								old_ctx.base_ntfs_ino;
+					}
+				}
+			}
+			/* Update the changed pointers in the saved context. */
+			if (ctx->mrec != old_ctx.mrec) {
+				if (!IS_ERR(ctx->mrec))
+					old_ctx.attr = (struct attr_record *)(
+							(u8 *)ctx->mrec +
+							((u8 *)old_ctx.attr -
+							(u8 *)old_ctx.mrec));
+				old_ctx.mrec = ctx->mrec;
+			}
+		}
+		/* Restore the search context to the saved one. */
+		*ctx = old_ctx;
+		/*
+		 * We drop the reference on the page we took earlier.  In the
+		 * case that IS_ERR(ctx->mrec) is true this means we might lose
+		 * some changes to the mft record that had been made between
+		 * the last time it was marked dirty/written out and now.
		 * This
+		 * at this stage is not a problem as the mapping error is fatal
+		 * enough that the mft record cannot be written out anyway and
+		 * the caller is very likely to shutdown the whole inode
+		 * immediately and mark the volume dirty for chkdsk to pick up
+		 * the pieces anyway.
+		 */
+		if (put_this_folio)
+			folio_put(put_this_folio);
+	}
+	return err;
+}
+
+/**
+ * ntfs_map_runlist - map (a part of) a runlist of an ntfs inode
+ * @ni:		ntfs inode for which to map (part of) a runlist
+ * @vcn:	map runlist part containing this vcn
+ *
+ * Map the part of a runlist containing the @vcn of the ntfs inode @ni.
+ */
+int ntfs_map_runlist(struct ntfs_inode *ni, s64 vcn)
+{
+	int err = 0;
+
+	down_write(&ni->runlist.lock);
+	/* Make sure someone else didn't do the work while we were sleeping. */
+	if (likely(ntfs_rl_vcn_to_lcn(ni->runlist.rl, vcn) <=
+			LCN_RL_NOT_MAPPED))
+		err = ntfs_map_runlist_nolock(ni, vcn, NULL);
+	up_write(&ni->runlist.lock);
+	return err;
+}
+
+struct runlist_element *ntfs_attr_vcn_to_rl(struct ntfs_inode *ni, s64 vcn, s64 *lcn)
+{
+	struct runlist_element *rl;
+	int err;
+	bool is_retry = false;
+
+	rl = ni->runlist.rl;
+	if (!rl) {
+		err = ntfs_attr_map_whole_runlist(ni);
+		if (err)
+			return ERR_PTR(-ENOENT);
+		rl = ni->runlist.rl;
+	}
+
+remap_rl:
+	/* Seek to element containing target vcn.
	 */
+	while (rl->length && rl[1].vcn <= vcn)
+		rl++;
+	*lcn = ntfs_rl_vcn_to_lcn(rl, vcn);
+
+	if (*lcn <= LCN_RL_NOT_MAPPED && is_retry == false) {
+		is_retry = true;
+		if (!ntfs_map_runlist_nolock(ni, vcn, NULL)) {
+			rl = ni->runlist.rl;
+			goto remap_rl;
+		}
+	}
+
+	return rl;
+}
+
+/**
+ * ntfs_attr_vcn_to_lcn_nolock - convert a vcn into a lcn given an ntfs inode
+ * @ni:			ntfs inode of the attribute whose runlist to search
+ * @vcn:		vcn to convert
+ * @write_locked:	true if the runlist is locked for writing
+ *
+ * Find the virtual cluster number @vcn in the runlist of the ntfs attribute
+ * described by the ntfs inode @ni and return the corresponding logical cluster
+ * number (lcn).
+ *
+ * If the @vcn is not mapped yet, the attempt is made to map the attribute
+ * extent containing the @vcn and the vcn to lcn conversion is retried.
+ *
+ * If @write_locked is true the caller has locked the runlist for writing and
+ * if false for reading.
+ *
+ * Since lcns must be >= 0, we use negative return codes with special meaning:
+ *
+ * Return code	Meaning / Description
+ * ==========	======================================================
+ * LCN_HOLE	Hole / not allocated on disk.
+ * LCN_ENOENT	There is no such vcn in the runlist, i.e. @vcn is out of bounds.
+ * LCN_ENOMEM	Not enough memory to map runlist.
+ * LCN_EIO	Critical error (runlist/file is corrupt, i/o error, etc).
+ *
+ * Locking: - The runlist must be locked on entry and is left locked on return.
+ *	    - If @write_locked is 'false', i.e. the runlist is locked for reading,
+ *	      the lock may be dropped inside the function so you cannot rely on
+ *	      the runlist still being the same when this function returns.
+ */
+s64 ntfs_attr_vcn_to_lcn_nolock(struct ntfs_inode *ni, const s64 vcn,
+		const bool write_locked)
+{
+	s64 lcn;
+	unsigned long flags;
+	bool is_retry = false;
+
+	ntfs_debug("Entering for i_ino 0x%lx, vcn 0x%llx, %s_locked.",
+			ni->mft_no, (unsigned long long)vcn,
+			write_locked ? "write" : "read");
+	if (!ni->runlist.rl) {
+		read_lock_irqsave(&ni->size_lock, flags);
+		if (!ni->allocated_size) {
+			read_unlock_irqrestore(&ni->size_lock, flags);
+			return LCN_ENOENT;
+		}
+		read_unlock_irqrestore(&ni->size_lock, flags);
+	}
+retry_remap:
+	/* Convert vcn to lcn.  If that fails map the runlist and retry once. */
+	lcn = ntfs_rl_vcn_to_lcn(ni->runlist.rl, vcn);
+	if (likely(lcn >= LCN_HOLE)) {
+		ntfs_debug("Done, lcn 0x%llx.", (long long)lcn);
+		return lcn;
+	}
+	if (lcn != LCN_RL_NOT_MAPPED) {
+		if (lcn != LCN_ENOENT)
+			lcn = LCN_EIO;
+	} else if (!is_retry) {
+		int err;
+
+		if (!write_locked) {
+			up_read(&ni->runlist.lock);
+			down_write(&ni->runlist.lock);
+			if (unlikely(ntfs_rl_vcn_to_lcn(ni->runlist.rl, vcn) !=
+					LCN_RL_NOT_MAPPED)) {
+				up_write(&ni->runlist.lock);
+				down_read(&ni->runlist.lock);
+				goto retry_remap;
+			}
+		}
+		err = ntfs_map_runlist_nolock(ni, vcn, NULL);
+		if (!write_locked) {
+			up_write(&ni->runlist.lock);
+			down_read(&ni->runlist.lock);
+		}
+		if (likely(!err)) {
+			is_retry = true;
+			goto retry_remap;
+		}
+		if (err == -ENOENT)
+			lcn = LCN_ENOENT;
+		else if (err == -ENOMEM)
+			lcn = LCN_ENOMEM;
+		else
+			lcn = LCN_EIO;
+	}
+	if (lcn != LCN_ENOENT)
+		ntfs_error(ni->vol->sb, "Failed with error code %lli.",
+				(long long)lcn);
+	return lcn;
+}
+
+struct runlist_element *__ntfs_attr_find_vcn_nolock(struct runlist *runlist, const s64 vcn)
+{
+	size_t lower_idx, upper_idx, idx;
+	struct runlist_element *run;
+
+	if (runlist->count <= 1)
+		return ERR_PTR(-ENOENT);
+
+	run = &runlist->rl[0];
+	if (vcn < run->vcn)
+		return ERR_PTR(-ENOENT);
+	else if (vcn < run->vcn + run->length)
+		return run;
+
+	run =
&runlist->rl[runlist->count - 2];
+	if (vcn >= run->vcn && vcn < run->vcn + run->length)
+		return run;
+	if (vcn >= run->vcn + run->length)
+		return ERR_PTR(-ENOENT);
+
+	lower_idx = 1;
+	upper_idx = runlist->count - 2;
+
+	while (lower_idx <= upper_idx) {
+		idx = (lower_idx + upper_idx) >> 1;
+		run = &runlist->rl[idx];
+
+		if (vcn < run->vcn)
+			upper_idx = idx - 1;
+		else if (vcn >= run->vcn + run->length)
+			lower_idx = idx + 1;
+		else
+			return run;
+	}
+
+	return ERR_PTR(-ENOENT);
+}
+
+/**
+ * ntfs_attr_find_vcn_nolock - find a vcn in the runlist of an ntfs inode
+ * @ni:		ntfs inode describing the runlist to search
+ * @vcn:	vcn to find
+ * @ctx:	active attribute search context if present or NULL if not
+ *
+ * Find the virtual cluster number @vcn in the runlist described by the ntfs
+ * inode @ni and return the address of the runlist element containing the @vcn.
+ *
+ * If the @vcn is not mapped yet, the attempt is made to map the attribute
+ * extent containing the @vcn and the vcn to lcn conversion is retried.
+ *
+ * If @ctx is specified, it is an active search context of @ni and its base mft
+ * record.  This is needed when ntfs_attr_find_vcn_nolock() encounters unmapped
+ * runlist fragments and allows their mapping.  If you do not have the mft
+ * record mapped, you can specify @ctx as NULL and ntfs_attr_find_vcn_nolock()
+ * will perform the necessary mapping and unmapping.
+ *
+ * Note, ntfs_attr_find_vcn_nolock() saves the state of @ctx on entry and
+ * restores it before returning.  Thus, @ctx will be left pointing to the same
+ * attribute on return as on entry.  However, the actual pointers in @ctx may
+ * point to different memory locations on return, so you must remember to reset
+ * any cached pointers from the @ctx, i.e.
 after the call to
+ * ntfs_attr_find_vcn_nolock(), you will probably want to do:
+ *	m = ctx->mrec;
+ *	a = ctx->attr;
+ * Assuming you cache ctx->attr in a variable @a of type attr_record * and that
+ * you cache ctx->mrec in a variable @m of type struct mft_record *.
+ * Note you need to distinguish between the lcn of the returned runlist element
+ * being >= 0 and LCN_HOLE.  In the latter case you have to return zeroes on
+ * read and allocate clusters on write.
+ */
+struct runlist_element *ntfs_attr_find_vcn_nolock(struct ntfs_inode *ni, const s64 vcn,
+		struct ntfs_attr_search_ctx *ctx)
+{
+	unsigned long flags;
+	struct runlist_element *rl;
+	int err = 0;
+	bool is_retry = false;
+
+	ntfs_debug("Entering for i_ino 0x%lx, vcn 0x%llx, with%s ctx.",
+			ni->mft_no, (unsigned long long)vcn, ctx ? "" : "out");
+	if (!ni->runlist.rl) {
+		read_lock_irqsave(&ni->size_lock, flags);
+		if (!ni->allocated_size) {
+			read_unlock_irqrestore(&ni->size_lock, flags);
+			return ERR_PTR(-ENOENT);
+		}
+		read_unlock_irqrestore(&ni->size_lock, flags);
+	}
+
+retry_remap:
+	rl = ni->runlist.rl;
+	if (likely(rl && vcn >= rl[0].vcn)) {
+		rl = __ntfs_attr_find_vcn_nolock(&ni->runlist, vcn);
+		if (IS_ERR(rl))
+			err = PTR_ERR(rl);
+		else if (rl->lcn >= LCN_HOLE)
+			return rl;
+		else if (rl->lcn <= LCN_ENOENT)
+			err = -EIO;
+	}
+	if (!err && !is_retry) {
+		/*
+		 * If the search context is invalid we cannot map the unmapped
+		 * region.
+		 */
+		if (ctx && IS_ERR(ctx->mrec))
+			err = PTR_ERR(ctx->mrec);
+		else {
+			/*
+			 * The @vcn is in an unmapped region, map the runlist
+			 * and retry.
+			 */
+			err = ntfs_map_runlist_nolock(ni, vcn, ctx);
+			if (likely(!err)) {
+				is_retry = true;
+				goto retry_remap;
+			}
+		}
+		if (err == -EINVAL)
+			err = -EIO;
+	} else if (!err)
+		err = -EIO;
+	if (err != -ENOENT)
+		ntfs_error(ni->vol->sb, "Failed with error code %i.", err);
+	return ERR_PTR(err);
+}
+
+/**
+ * ntfs_attr_find - find (next) attribute in mft record
+ * @type:	attribute type to find
+ * @name:	attribute name to find (optional, i.e. NULL means don't care)
+ * @name_len:	attribute name length (only needed if @name present)
+ * @ic:		IGNORE_CASE or CASE_SENSITIVE (ignored if @name not present)
+ * @val:	attribute value to find (optional, resident attributes only)
+ * @val_len:	attribute value length
+ * @ctx:	search context with mft record and attribute to search from
+ *
+ * You should not need to call this function directly.  Use ntfs_attr_lookup()
+ * instead.
+ *
+ * ntfs_attr_find() takes a search context @ctx as parameter and searches the
+ * mft record specified by @ctx->mrec, beginning at @ctx->attr, for an
+ * attribute of @type, optionally @name and @val.
+ *
+ * If the attribute is found, ntfs_attr_find() returns 0 and @ctx->attr will
+ * point to the found attribute.
+ *
+ * If the attribute is not found, ntfs_attr_find() returns -ENOENT and
+ * @ctx->attr will point to the attribute before which the attribute being
+ * searched for would need to be inserted if such an action were to be desired.
+ *
+ * On actual error, ntfs_attr_find() returns -EIO.  In this case @ctx->attr is
+ * undefined and in particular do not rely on it not changing.
+ *
+ * If @ctx->is_first is 'true', the search begins with @ctx->attr itself.  If it
+ * is 'false', the search begins after @ctx->attr.
+ *
+ * If @ic is IGNORE_CASE, the @name comparison is not case sensitive and
+ * @ctx->ntfs_ino must be set to the ntfs inode to which the mft record
+ * @ctx->mrec belongs.
 * This is so we can get at the ntfs volume and hence at
+ * the upcase table.  If @ic is CASE_SENSITIVE, the comparison is case
+ * sensitive.  When @name is present, @name_len is the @name length in Unicode
+ * characters.
+ *
+ * If @name is not present (NULL), we assume that the unnamed attribute is
+ * being searched for.
+ *
+ * Finally, the resident attribute value @val is looked for, if present.  If
+ * @val is not present (NULL), @val_len is ignored.
+ *
+ * ntfs_attr_find() only searches the specified mft record and it ignores the
+ * presence of an attribute list attribute (unless it is the one being searched
+ * for, obviously).  If you need to take attribute lists into consideration,
+ * use ntfs_attr_lookup() instead (see below).  This also means that you cannot
+ * use ntfs_attr_find() to search for extent records of non-resident
+ * attributes, as extents with lowest_vcn != 0 are usually described by the
+ * attribute list attribute only. - Note that it is possible that the first
+ * extent is only in the attribute list while the last extent is in the base
+ * mft record, so do not rely on being able to find the first extent in the
+ * base mft record.
+ *
+ * Warning: Never use @val when looking for attribute types which can be
+ * non-resident as this most likely will result in a crash!
+ */
+static int ntfs_attr_find(const __le32 type, const __le16 *name,
+		const u32 name_len, const u32 ic,
+		const u8 *val, const u32 val_len, struct ntfs_attr_search_ctx *ctx)
+{
+	struct attr_record *a;
+	struct ntfs_volume *vol = ctx->ntfs_ino->vol;
+	__le16 *upcase = vol->upcase;
+	u32 upcase_len = vol->upcase_len;
+	unsigned int space;
+
+	/*
+	 * Iterate over attributes in mft record starting at @ctx->attr, or the
+	 * attribute following that, if @ctx->is_first is 'true'.
+	 */
+	if (ctx->is_first) {
+		a = ctx->attr;
+		ctx->is_first = false;
+	} else
+		a = (struct attr_record *)((u8 *)ctx->attr +
+				le32_to_cpu(ctx->attr->length));
+	for (;; a = (struct attr_record *)((u8 *)a + le32_to_cpu(a->length))) {
+		if ((u8 *)a < (u8 *)ctx->mrec || (u8 *)a > (u8 *)ctx->mrec +
+				le32_to_cpu(ctx->mrec->bytes_allocated))
+			break;
+
+		space = le32_to_cpu(ctx->mrec->bytes_in_use) - ((u8 *)a - (u8 *)ctx->mrec);
+		if ((space < offsetof(struct attr_record, data.resident.reserved) + 1 ||
+		     space < le32_to_cpu(a->length)) && (space < 4 || a->type != AT_END))
+			break;
+
+		ctx->attr = a;
+		if (((type != AT_UNUSED) && (le32_to_cpu(a->type) > le32_to_cpu(type))) ||
+				a->type == AT_END)
+			return -ENOENT;
+		if (unlikely(!a->length))
+			break;
+		if (type == AT_UNUSED)
+			return 0;
+		if (a->type != type)
+			continue;
+		/*
+		 * If @name is present, compare the two names.  If @name is
+		 * missing, assume we want an unnamed attribute.
+		 */
+		if (!name || name == AT_UNNAMED) {
+			/* The search failed if the found attribute is named. */
+			if (a->name_length)
+				return -ENOENT;
+		} else {
+			if (a->name_length && ((le16_to_cpu(a->name_offset) +
+					a->name_length * sizeof(__le16)) >
+					le32_to_cpu(a->length))) {
+				ntfs_error(vol->sb, "Corrupt attribute name in MFT record %lld\n",
+						(long long)ctx->ntfs_ino->mft_no);
+				break;
+			}
+
+			if (!ntfs_are_names_equal(name, name_len,
+					(__le16 *)((u8 *)a + le16_to_cpu(a->name_offset)),
+					a->name_length, ic, upcase, upcase_len)) {
+				register int rc;
+
+				rc = ntfs_collate_names(name, name_len,
+						(__le16 *)((u8 *)a + le16_to_cpu(a->name_offset)),
+						a->name_length, 1, IGNORE_CASE,
+						upcase, upcase_len);
+				/*
+				 * If @name collates before a->name, there is no
+				 * matching attribute.
+				 */
+				if (rc == -1)
+					return -ENOENT;
+				/* If the strings are not equal, continue search.
				 */
+				if (rc)
+					continue;
+				rc = ntfs_collate_names(name, name_len,
+						(__le16 *)((u8 *)a + le16_to_cpu(a->name_offset)),
+						a->name_length, 1, CASE_SENSITIVE,
+						upcase, upcase_len);
+				if (rc == -1)
+					return -ENOENT;
+				if (rc)
+					continue;
+			}
+		}
+		/*
+		 * The names match or @name not present and attribute is
+		 * unnamed.  If no @val specified, we have found the attribute
+		 * and are done.
+		 */
+		if (!val)
+			return 0;
+		/* @val is present; compare values. */
+		else {
+			register int rc;
+
+			rc = memcmp(val, (u8 *)a + le16_to_cpu(
+					a->data.resident.value_offset),
+					min_t(u32, val_len, le32_to_cpu(
+					a->data.resident.value_length)));
+			/*
+			 * If @val collates before the current attribute's
+			 * value, there is no matching attribute.
+			 */
+			if (!rc) {
+				register u32 avl;
+
+				avl = le32_to_cpu(a->data.resident.value_length);
+				if (val_len == avl)
+					return 0;
+				if (val_len < avl)
+					return -ENOENT;
+			} else if (rc < 0)
+				return -ENOENT;
+		}
+	}
+	ntfs_error(vol->sb, "Inode is corrupt.  Run chkdsk.");
+	NVolSetErrors(vol);
+	return -EIO;
+}
+
+void ntfs_attr_name_free(unsigned char **name)
+{
+	if (*name) {
+		ntfs_free(*name);
+		*name = NULL;
+	}
+}
+
+char *ntfs_attr_name_get(const struct ntfs_volume *vol, const __le16 *uname,
+		const int uname_len)
+{
+	unsigned char *name = NULL;
+	int name_len;
+
+	name_len = ntfs_ucstonls(vol, uname, uname_len, &name, 0);
+	if (name_len < 0) {
+		ntfs_error(vol->sb, "ntfs_ucstonls error");
+		/* When this function returns -1, memory for @name might
+		 * have been allocated, so free it here.
+		 */
+		ntfs_attr_name_free(&name);
+		return NULL;
+
+	} else if (name_len > 0)
+		return name;
+
+	ntfs_attr_name_free(&name);
+	return NULL;
+}
+
+int load_attribute_list(struct ntfs_inode *base_ni, u8 *al_start, const s64 size)
+{
+	struct inode *attr_vi = NULL;
+	u8 *al;
+	struct attr_list_entry *ale;
+
+	if (!al_start || size <= 0)
+		return -EINVAL;
+
+	attr_vi = ntfs_attr_iget(VFS_I(base_ni), AT_ATTRIBUTE_LIST, AT_UNNAMED, 0);
+	if (IS_ERR(attr_vi)) {
+		ntfs_error(base_ni->vol->sb,
+				"Failed to open an inode for Attribute list, mft = %ld",
+				base_ni->mft_no);
+		return PTR_ERR(attr_vi);
+	}
+
+	if (ntfs_inode_attr_pread(attr_vi, 0, size, al_start) != size) {
+		iput(attr_vi);
+		ntfs_error(base_ni->vol->sb,
+				"Failed to read attribute list, mft = %ld",
+				base_ni->mft_no);
+		return -EIO;
+	}
+	iput(attr_vi);
+
+	for (al = al_start; al < al_start + size; al += le16_to_cpu(ale->length)) {
+		ale = (struct attr_list_entry *)al;
+		if (ale->name_offset != sizeof(struct attr_list_entry))
+			break;
+		if (le16_to_cpu(ale->length) <= ale->name_offset + ale->name_length ||
+		    al + le16_to_cpu(ale->length) > al_start + size)
+			break;
+		if (ale->type == AT_UNUSED)
+			break;
+		if (MSEQNO_LE(ale->mft_reference) == 0)
+			break;
+	}
+	if (al != al_start + size) {
+		ntfs_error(base_ni->vol->sb, "Corrupt attribute list, mft = %ld",
+				base_ni->mft_no);
+		return -EIO;
+	}
+	return 0;
+}
+
+/**
+ * ntfs_external_attr_find - find an attribute in the attribute list of an inode
+ * @type:	attribute type to find
+ * @name:	attribute name to find (optional, i.e.
 NULL means don't care)
+ * @name_len:	attribute name length (only needed if @name present)
+ * @ic:		IGNORE_CASE or CASE_SENSITIVE (ignored if @name not present)
+ * @lowest_vcn:	lowest vcn to find (optional, non-resident attributes only)
+ * @val:	attribute value to find (optional, resident attributes only)
+ * @val_len:	attribute value length
+ * @ctx:	search context with mft record and attribute to search from
+ *
+ * You should not need to call this function directly.  Use ntfs_attr_lookup()
+ * instead.
+ *
+ * Find an attribute by searching the attribute list for the corresponding
+ * attribute list entry.  Having found the entry, map the mft record if the
+ * attribute is in a different mft record/inode, ntfs_attr_find() the attribute
+ * in there and return it.
+ *
+ * On first search @ctx->ntfs_ino must be the base mft record and @ctx must
+ * have been obtained from a call to ntfs_attr_get_search_ctx().  On subsequent
+ * calls @ctx->ntfs_ino can be any extent inode, too (@ctx->base_ntfs_ino is
+ * then the base inode).
+ *
+ * After finishing with the attribute/mft record you need to call
+ * ntfs_attr_put_search_ctx() to cleanup the search context (unmapping any
+ * mapped inodes, etc).
+ *
+ * If the attribute is found, ntfs_external_attr_find() returns 0 and
+ * @ctx->attr will point to the found attribute.  @ctx->mrec will point to the
+ * mft record in which @ctx->attr is located and @ctx->al_entry will point to
+ * the attribute list entry for the attribute.
+ *
+ * If the attribute is not found, ntfs_external_attr_find() returns -ENOENT and
+ * @ctx->attr will point to the attribute in the base mft record before which
+ * the attribute being searched for would need to be inserted if such an action
+ * were to be desired.
@ctx->mrec will point to the mft record in which + * @ctx->attr is located and @ctx->al_entry will point to the attribute li= st + * entry of the attribute before which the attribute being searched for wo= uld + * need to be inserted if such an action were to be desired. + * + * Thus to insert the not found attribute, one wants to add the attribute = to + * @ctx->mrec (the base mft record) and if there is not enough space, the + * attribute should be placed in a newly allocated extent mft record. The + * attribute list entry for the inserted attribute should be inserted in t= he + * attribute list attribute at @ctx->al_entry. + * + * On actual error, ntfs_external_attr_find() returns -EIO. In this case + * @ctx->attr is undefined and in particular do not rely on it not changin= g. + */ +static int ntfs_external_attr_find(const __le32 type, + const __le16 *name, const u32 name_len, + const u32 ic, const s64 lowest_vcn, + const u8 *val, const u32 val_len, struct ntfs_attr_search_ctx *ctx) +{ + struct ntfs_inode *base_ni, *ni; + struct ntfs_volume *vol; + struct attr_list_entry *al_entry, *next_al_entry; + u8 *al_start, *al_end; + struct attr_record *a; + __le16 *al_name; + u32 al_name_len; + bool is_first_search =3D false; + int err =3D 0; + static const char *es =3D " Unmount and run chkdsk."; + + ni =3D ctx->ntfs_ino; + base_ni =3D ctx->base_ntfs_ino; + ntfs_debug("Entering for inode 0x%lx, type 0x%x.", ni->mft_no, type); + if (!base_ni) { + /* First call happens with the base mft record. 
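The found/not-found contract documented above behaves like a lower-bound search over entries kept in collation order: on a miss the caller still gets back the position before which the new attribute would have to be inserted. A minimal userspace sketch of that contract (names like `attr_lookup_model` are invented for illustration, not part of this patch):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Toy model of the lookup contract: on success return 0 and set *pos to
 * the matching slot; on -ENOENT set *pos to the slot before which @type
 * would have to be inserted to keep the array sorted.  The real code
 * communicates this position via ctx->attr and ctx->al_entry instead.
 */
static int attr_lookup_model(const uint32_t *types, size_t n, uint32_t type,
			     size_t *pos)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (types[i] == type) {
			*pos = i;
			return 0;
		}
		if (types[i] > type)	/* collates after @type: insert here */
			break;
	}
	*pos = i;
	return -ENOENT;
}
```

Either way the caller can use `*pos` directly as the insertion point, which is exactly how a not-found result from the real function is meant to be consumed.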
+		 */
+		base_ni = ctx->base_ntfs_ino = ctx->ntfs_ino;
+		ctx->base_mrec = ctx->mrec;
+		ctx->mapped_base_mrec = ctx->mapped_mrec;
+	}
+	if (ni == base_ni)
+		ctx->base_attr = ctx->attr;
+	if (type == AT_END)
+		goto not_found;
+	vol = base_ni->vol;
+	al_start = base_ni->attr_list;
+	al_end = al_start + base_ni->attr_list_size;
+	if (!ctx->al_entry) {
+		ctx->al_entry = (struct attr_list_entry *)al_start;
+		is_first_search = true;
+	}
+	/*
+	 * Iterate over entries in attribute list starting at @ctx->al_entry,
+	 * or the entry following that, if @ctx->is_first is 'true'.
+	 */
+	if (ctx->is_first) {
+		al_entry = ctx->al_entry;
+		ctx->is_first = false;
+		/*
+		 * If an enumeration and the first attribute is higher than
+		 * the attribute list itself, need to return the attribute list
+		 * attribute.
+		 */
+		if ((type == AT_UNUSED) && is_first_search &&
+		    le32_to_cpu(al_entry->type) >
+		    le32_to_cpu(AT_ATTRIBUTE_LIST))
+			goto find_attr_list_attr;
+	} else {
+		/* Check for small entry */
+		if (((al_end - (u8 *)ctx->al_entry) <
+		    (long)offsetof(struct attr_list_entry, name)) ||
+		    (le16_to_cpu(ctx->al_entry->length) & 7) ||
+		    (le16_to_cpu(ctx->al_entry->length) < offsetof(struct attr_list_entry, name)))
+			goto corrupt;
+
+		al_entry = (struct attr_list_entry *)((u8 *)ctx->al_entry +
+				le16_to_cpu(ctx->al_entry->length));
+
+		if ((u8 *)al_entry == al_end)
+			goto not_found;
+
+		/* Preliminary check for small entry */
+		if ((al_end - (u8 *)al_entry) <
+		    (long)offsetof(struct attr_list_entry, name))
+			goto corrupt;
+
+		/*
+		 * If this is an enumeration and the attribute list attribute
+		 * is the next one in the enumeration sequence, just return the
+		 * attribute list attribute from the base mft record as it is
+		 * not listed in the attribute list itself.
+		 */
+		if ((type == AT_UNUSED) && le32_to_cpu(ctx->al_entry->type) <
+		    le32_to_cpu(AT_ATTRIBUTE_LIST) &&
+		    le32_to_cpu(al_entry->type) >
+		    le32_to_cpu(AT_ATTRIBUTE_LIST)) {
+find_attr_list_attr:
+
+			/* Check for bogus calls. */
+			if (name || name_len || val || val_len || lowest_vcn)
+				return -EINVAL;
+
+			/* We want the base record. */
+			if (ctx->ntfs_ino != base_ni)
+				unmap_mft_record(ctx->ntfs_ino);
+			ctx->ntfs_ino = base_ni;
+			ctx->mapped_mrec = ctx->mapped_base_mrec;
+			ctx->mrec = ctx->base_mrec;
+			ctx->is_first = true;
+
+			/* Sanity checks are performed elsewhere. */
+			ctx->attr = (struct attr_record *)((u8 *)ctx->mrec +
+					le16_to_cpu(ctx->mrec->attrs_offset));
+
+			/* Find the attribute list attribute. */
+			err = ntfs_attr_find(AT_ATTRIBUTE_LIST, NULL, 0,
+					IGNORE_CASE, NULL, 0, ctx);
+
+			/*
+			 * Setup the search context so the correct
+			 * attribute is returned next time round.
+			 */
+			ctx->al_entry = al_entry;
+			ctx->is_first = true;
+
+			/* Got it.  Done. */
+			if (!err)
+				return 0;
+
+			/* Error!  If other than not found return it. */
+			if (err != -ENOENT)
+				return err;
+
+			/* Not found?!?  Absurd! */
+			ntfs_error(ctx->ntfs_ino->vol->sb, "Attribute list wasn't found");
+			return -EIO;
+		}
+	}
+	for (;; al_entry = next_al_entry) {
+		/* Out of bounds check. */
+		if ((u8 *)al_entry < base_ni->attr_list ||
+		    (u8 *)al_entry > al_end)
+			break;	/* Inode is corrupt. */
+		ctx->al_entry = al_entry;
+		/* Catch the end of the attribute list.
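The per-entry sanity checks that follow (header within bounds, length a non-zero multiple of 8, name contained in the entry) can be sketched in userspace like this; the `toy_al_entry` layout and names are invented for the sketch and deliberately simpler than the on-disk `struct attr_list_entry`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for an attribute list entry header. */
struct toy_al_entry {
	uint16_t length;	/* byte size of this entry, 8-byte aligned */
	uint8_t name_length;	/* name length in UTF-16 code units */
	uint8_t name_offset;	/* byte offset of the name from entry start */
};

/*
 * An entry is usable only if its header fits in the remaining buffer,
 * its length is a non-zero multiple of 8 that does not overrun the
 * list, and the UTF-16 name lies entirely inside the entry.
 */
static bool toy_al_entry_valid(const struct toy_al_entry *e, size_t remaining)
{
	if (remaining < sizeof(*e))
		return false;		/* header itself out of bounds */
	if (e->length == 0 || (e->length & 7))
		return false;		/* zero or misaligned length */
	if (e->length > remaining)
		return false;		/* entry overruns the list */
	if (e->name_length &&
	    (size_t)e->name_offset + (size_t)e->name_length * 2 > e->length)
		return false;		/* name overruns the entry */
	return true;
}
```

Rejecting on any of these conditions before dereferencing further fields is what lets the real loop treat a failed check as "inode is corrupt" rather than walking off the end of the buffer.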
+		 */
+		if ((u8 *)al_entry == al_end)
+			goto not_found;
+
+		if ((((u8 *)al_entry + offsetof(struct attr_list_entry, name)) > al_end) ||
+		    ((u8 *)al_entry + le16_to_cpu(al_entry->length) > al_end) ||
+		    (le16_to_cpu(al_entry->length) & 7) ||
+		    (le16_to_cpu(al_entry->length) <
+		    offsetof(struct attr_list_entry, name_length)) ||
+		    (al_entry->name_length && ((u8 *)al_entry + al_entry->name_offset +
+		    al_entry->name_length * sizeof(__le16)) > al_end))
+			break;	/* corrupt */
+
+		next_al_entry = (struct attr_list_entry *)((u8 *)al_entry +
+				le16_to_cpu(al_entry->length));
+		if (type != AT_UNUSED) {
+			if (le32_to_cpu(al_entry->type) > le32_to_cpu(type))
+				goto not_found;
+			if (type != al_entry->type)
+				continue;
+		}
+		/*
+		 * If @name is present, compare the two names.  If @name is
+		 * missing, assume we want an unnamed attribute.
+		 */
+		al_name_len = al_entry->name_length;
+		al_name = (__le16 *)((u8 *)al_entry + al_entry->name_offset);
+
+		/*
+		 * If !@type we want the attribute represented by this
+		 * attribute list entry.
+		 */
+		if (type == AT_UNUSED)
+			goto is_enumeration;
+
+		if (!name || name == AT_UNNAMED) {
+			if (al_name_len)
+				goto not_found;
+		} else if (!ntfs_are_names_equal(al_name, al_name_len, name,
+				name_len, ic, vol->upcase, vol->upcase_len)) {
+			register int rc;
+
+			rc = ntfs_collate_names(name, name_len, al_name,
+					al_name_len, 1, IGNORE_CASE,
+					vol->upcase, vol->upcase_len);
+			/*
+			 * If @name collates before al_name, there is no
+			 * matching attribute.
+			 */
+			if (rc == -1)
+				goto not_found;
+			/* If the strings are not equal, continue search. */
+			if (rc)
+				continue;
+
+			rc = ntfs_collate_names(name, name_len, al_name,
+					al_name_len, 1, CASE_SENSITIVE,
+					vol->upcase, vol->upcase_len);
+			if (rc == -1)
+				goto not_found;
+			if (rc)
+				continue;
+		}
+		/*
+		 * The names match or @name not present and attribute is
+		 * unnamed.  Now check @lowest_vcn.  Continue search if the
+		 * next attribute list entry still fits @lowest_vcn.  Otherwise
+		 * we have reached the right one or the search has failed.
+		 */
+		if (lowest_vcn && (u8 *)next_al_entry >= al_start &&
+		    (u8 *)next_al_entry + 6 < al_end &&
+		    (u8 *)next_al_entry + le16_to_cpu(
+				next_al_entry->length) <= al_end &&
+		    le64_to_cpu(next_al_entry->lowest_vcn) <=
+				lowest_vcn &&
+		    next_al_entry->type == al_entry->type &&
+		    next_al_entry->name_length == al_name_len &&
+		    ntfs_are_names_equal((__le16 *)((u8 *)
+				next_al_entry +
+				next_al_entry->name_offset),
+				next_al_entry->name_length,
+				al_name, al_name_len, CASE_SENSITIVE,
+				vol->upcase, vol->upcase_len))
+			continue;
+
+is_enumeration:
+		if (MREF_LE(al_entry->mft_reference) == ni->mft_no) {
+			if (MSEQNO_LE(al_entry->mft_reference) != ni->seq_no) {
+				ntfs_error(vol->sb,
+					"Found stale mft reference in attribute list of base inode 0x%lx.%s",
+					base_ni->mft_no, es);
+				err = -EIO;
+				break;
+			}
+		} else { /* Mft references do not match. */
+			/* If there is a mapped record unmap it first. */
+			if (ni != base_ni)
+				unmap_extent_mft_record(ni);
+			/* Do we want the base record back? */
+			if (MREF_LE(al_entry->mft_reference) ==
+			    base_ni->mft_no) {
+				ni = ctx->ntfs_ino = base_ni;
+				ctx->mrec = ctx->base_mrec;
+				ctx->mapped_mrec = ctx->mapped_base_mrec;
+			} else {
+				/* We want an extent record. */
+				ctx->mrec = map_extent_mft_record(base_ni,
+						le64_to_cpu(
+						al_entry->mft_reference), &ni);
+				if (IS_ERR(ctx->mrec)) {
+					ntfs_error(vol->sb,
+						"Failed to map extent mft record 0x%lx of base inode 0x%lx.%s",
+						MREF_LE(al_entry->mft_reference),
+						base_ni->mft_no, es);
+					err = PTR_ERR(ctx->mrec);
+					if (err == -ENOENT)
+						err = -EIO;
+					/* Cause @ctx to be sanitized below.
+					 */
+					ni = NULL;
+					break;
+				}
+				ctx->ntfs_ino = ni;
+				ctx->mapped_mrec = true;
+
+			}
+		}
+		a = ctx->attr = (struct attr_record *)((u8 *)ctx->mrec +
+				le16_to_cpu(ctx->mrec->attrs_offset));
+		/*
+		 * ctx->vfs_ino, ctx->mrec, and ctx->attr now point to the
+		 * mft record containing the attribute represented by the
+		 * current al_entry.
+		 */
+		/*
+		 * We could call into ntfs_attr_find() to find the right
+		 * attribute in this mft record but this would be less
+		 * efficient and not quite accurate as ntfs_attr_find() ignores
+		 * the attribute instance numbers for example which become
+		 * important when one plays with attribute lists.  Also,
+		 * because a proper match has been found in the attribute list
+		 * entry above, the comparison can now be optimized.  So it is
+		 * worth re-implementing a simplified ntfs_attr_find() here.
+		 */
+		/*
+		 * Use a manual loop so we can still use break and continue
+		 * with the same meanings as above.
+		 */
+do_next_attr_loop:
+		if ((u8 *)a < (u8 *)ctx->mrec || (u8 *)a > (u8 *)ctx->mrec +
+		    le32_to_cpu(ctx->mrec->bytes_allocated))
+			break;
+		if (a->type == AT_END)
+			continue;
+		if (!a->length)
+			break;
+		if (al_entry->instance != a->instance)
+			goto do_next_attr;
+		/*
+		 * If the type and/or the name are mismatched between the
+		 * attribute list entry and the attribute record, there is
+		 * corruption so we break and return error EIO.
+		 */
+		if (al_entry->type != a->type)
+			break;
+		if (!ntfs_are_names_equal((__le16 *)((u8 *)a +
+		    le16_to_cpu(a->name_offset)), a->name_length,
+		    al_name, al_name_len, CASE_SENSITIVE,
+		    vol->upcase, vol->upcase_len))
+			break;
+		ctx->attr = a;
+		/*
+		 * If no @val specified or @val specified and it matches, we
+		 * have found it!
+		 */
+		if ((type == AT_UNUSED) || !val || (!a->non_resident && le32_to_cpu(
+		    a->data.resident.value_length) == val_len &&
+		    !memcmp((u8 *)a +
+		    le16_to_cpu(a->data.resident.value_offset),
+		    val, val_len))) {
+			ntfs_debug("Done, found.");
+			return 0;
+		}
+do_next_attr:
+		/* Proceed to the next attribute in the current mft record. */
+		a = (struct attr_record *)((u8 *)a + le32_to_cpu(a->length));
+		goto do_next_attr_loop;
+	}
+
+corrupt:
+	if (ni != base_ni) {
+		if (ni)
+			unmap_extent_mft_record(ni);
+		ctx->ntfs_ino = base_ni;
+		ctx->mrec = ctx->base_mrec;
+		ctx->attr = ctx->base_attr;
+		ctx->mapped_mrec = ctx->mapped_base_mrec;
+	}
+
+	if (!err) {
+		ntfs_error(vol->sb,
+			"Base inode 0x%lx contains corrupt attribute list attribute.%s",
+			base_ni->mft_no, es);
+		err = -EIO;
+	}
+
+	if (err != -ENOMEM)
+		NVolSetErrors(vol);
+	return err;
+not_found:
+	/*
+	 * If we were looking for AT_END, we reset the search context @ctx and
+	 * use ntfs_attr_find() to seek to the end of the base mft record.
+	 */
+	if (type == AT_UNUSED || type == AT_END) {
+		ntfs_attr_reinit_search_ctx(ctx);
+		return ntfs_attr_find(AT_END, name, name_len, ic, val, val_len,
+				ctx);
+	}
+	/*
+	 * The attribute was not found.  Before we return, we want to ensure
+	 * @ctx->mrec and @ctx->attr indicate the position at which the
+	 * attribute should be inserted in the base mft record.  Since we also
+	 * want to preserve @ctx->al_entry we cannot reinitialize the search
+	 * context using ntfs_attr_reinit_search_ctx() as this would set
+	 * @ctx->al_entry to NULL.  Thus we do the necessary bits manually (see
+	 * ntfs_attr_init_search_ctx() below).  Note, we _only_ preserve
+	 * @ctx->al_entry as the remaining fields (base_*) are identical to
+	 * their non base_ counterparts and we cannot set @ctx->base_attr
+	 * correctly yet as we do not know what @ctx->attr will be set to by
+	 * the call to ntfs_attr_find() below.
+	 */
+	if (ni != base_ni)
+		unmap_extent_mft_record(ni);
+	ctx->mrec = ctx->base_mrec;
+	ctx->attr = (struct attr_record *)((u8 *)ctx->mrec +
+			le16_to_cpu(ctx->mrec->attrs_offset));
+	ctx->is_first = true;
+	ctx->ntfs_ino = base_ni;
+	ctx->base_ntfs_ino = NULL;
+	ctx->base_mrec = NULL;
+	ctx->base_attr = NULL;
+	ctx->mapped_mrec = ctx->mapped_base_mrec;
+	/*
+	 * In case there are multiple matches in the base mft record, need to
+	 * keep enumerating until we get an attribute not found response (or
+	 * another error), otherwise we would keep returning the same attribute
+	 * over and over again and all programs using us for enumeration would
+	 * lock up in a tight loop.
+	 */
+	do {
+		err = ntfs_attr_find(type, name, name_len, ic, val, val_len,
+				ctx);
+	} while (!err);
+	ntfs_debug("Done, not found.");
+	return err;
+}
+
+/**
+ * ntfs_attr_lookup - find an attribute in an ntfs inode
+ * @type:	attribute type to find
+ * @name:	attribute name to find (optional, i.e. NULL means don't care)
+ * @name_len:	attribute name length (only needed if @name present)
+ * @ic:		IGNORE_CASE or CASE_SENSITIVE (ignored if @name not present)
+ * @lowest_vcn:	lowest vcn to find (optional, non-resident attributes only)
+ * @val:	attribute value to find (optional, resident attributes only)
+ * @val_len:	attribute value length
+ * @ctx:	search context with mft record and attribute to search from
+ *
+ * Find an attribute in an ntfs inode.  On first search @ctx->ntfs_ino must
+ * be the base mft record and @ctx must have been obtained from a call to
+ * ntfs_attr_get_search_ctx().
+ *
+ * This function transparently handles attribute lists and @ctx is used to
+ * continue searches where they were left off at.
+ *
+ * After finishing with the attribute/mft record you need to call
+ * ntfs_attr_put_search_ctx() to cleanup the search context (unmapping any
+ * mapped inodes, etc).
+ *
+ * Return 0 if the search was successful and -errno if not.
+ *
+ * When 0, @ctx->attr is the found attribute and it is in mft record
+ * @ctx->mrec.  If an attribute list attribute is present, @ctx->al_entry is
+ * the attribute list entry of the found attribute.
+ *
+ * When -ENOENT, @ctx->attr is the attribute which collates just after the
+ * attribute being searched for, i.e. if one wants to add the attribute to the
+ * mft record this is the correct place to insert it into.  If an attribute
+ * list attribute is present, @ctx->al_entry is the attribute list entry which
+ * collates just after the attribute list entry of the attribute being searched
+ * for, i.e. if one wants to add the attribute to the mft record this is the
+ * correct place to insert its attribute list entry into.
+ */
+int ntfs_attr_lookup(const __le32 type, const __le16 *name,
+		const u32 name_len, const u32 ic,
+		const s64 lowest_vcn, const u8 *val, const u32 val_len,
+		struct ntfs_attr_search_ctx *ctx)
+{
+	struct ntfs_inode *base_ni;
+
+	ntfs_debug("Entering.");
+	if (ctx->base_ntfs_ino)
+		base_ni = ctx->base_ntfs_ino;
+	else
+		base_ni = ctx->ntfs_ino;
+	/* Sanity check, just for debugging really. */
+	if (!base_ni || !NInoAttrList(base_ni) || type == AT_ATTRIBUTE_LIST)
+		return ntfs_attr_find(type, name, name_len, ic, val, val_len,
+				ctx);
+	return ntfs_external_attr_find(type, name, name_len, ic, lowest_vcn,
+			val, val_len, ctx);
+}
+
+/**
+ * ntfs_attr_init_search_ctx - initialize an attribute search context
+ * @ctx:	attribute search context to initialize
+ * @ni:		ntfs inode with which to initialize the search context
+ * @mrec:	mft record with which to initialize the search context
+ *
+ * Initialize the attribute search context @ctx with @ni and @mrec.
+ */
+static bool ntfs_attr_init_search_ctx(struct ntfs_attr_search_ctx *ctx,
+		struct ntfs_inode *ni, struct mft_record *mrec)
+{
+	if (!mrec) {
+		mrec = map_mft_record(ni);
+		if (IS_ERR(mrec))
+			return false;
+		ctx->mapped_mrec = true;
+	} else {
+		ctx->mapped_mrec = false;
+	}
+
+	ctx->mrec = mrec;
+	/* Sanity checks are performed elsewhere. */
+	ctx->attr = (struct attr_record *)((u8 *)mrec + le16_to_cpu(mrec->attrs_offset));
+	ctx->is_first = true;
+	ctx->ntfs_ino = ni;
+	ctx->al_entry = NULL;
+	ctx->base_ntfs_ino = NULL;
+	ctx->base_mrec = NULL;
+	ctx->base_attr = NULL;
+	ctx->mapped_base_mrec = false;
+	return true;
+}
+
+/**
+ * ntfs_attr_reinit_search_ctx - reinitialize an attribute search context
+ * @ctx:	attribute search context to reinitialize
+ *
+ * Reinitialize the attribute search context @ctx, unmapping an associated
+ * extent mft record if present, and initialize the search context again.
+ *
+ * This is used when a search for a new attribute is being started to reset
+ * the search context to the beginning.
+ */
+void ntfs_attr_reinit_search_ctx(struct ntfs_attr_search_ctx *ctx)
+{
+	bool mapped_mrec;
+
+	if (likely(!ctx->base_ntfs_ino)) {
+		/* No attribute list. */
+		ctx->is_first = true;
+		/* Sanity checks are performed elsewhere. */
+		ctx->attr = (struct attr_record *)((u8 *)ctx->mrec +
+				le16_to_cpu(ctx->mrec->attrs_offset));
+		/*
+		 * This needs resetting due to ntfs_external_attr_find() which
+		 * can leave it set despite having zeroed ctx->base_ntfs_ino.
+		 */
+		ctx->al_entry = NULL;
+		return;
+	}
+	/* Attribute list. */
+	if (ctx->ntfs_ino != ctx->base_ntfs_ino && ctx->ntfs_ino)
+		unmap_extent_mft_record(ctx->ntfs_ino);
+
+	mapped_mrec = ctx->mapped_base_mrec;
+	ntfs_attr_init_search_ctx(ctx, ctx->base_ntfs_ino, ctx->base_mrec);
+	ctx->mapped_mrec = mapped_mrec;
+}
+
+/**
+ * ntfs_attr_get_search_ctx - allocate/initialize a new attribute search context
+ * @ni:		ntfs inode with which to initialize the search context
+ * @mrec:	mft record with which to initialize the search context
+ *
+ * Allocate a new attribute search context, initialize it with @ni and @mrec,
+ * and return it.  Return NULL if allocation failed.
+ */
+struct ntfs_attr_search_ctx *ntfs_attr_get_search_ctx(struct ntfs_inode *ni,
+		struct mft_record *mrec)
+{
+	struct ntfs_attr_search_ctx *ctx;
+	bool init;
+
+	ctx = kmem_cache_alloc(ntfs_attr_ctx_cache, GFP_NOFS);
+	if (ctx) {
+		init = ntfs_attr_init_search_ctx(ctx, ni, mrec);
+		if (init == false) {
+			kmem_cache_free(ntfs_attr_ctx_cache, ctx);
+			ctx = NULL;
+		}
+	}
+
+	return ctx;
+}
+
+/**
+ * ntfs_attr_put_search_ctx - release an attribute search context
+ * @ctx:	attribute search context to free
+ *
+ * Release the attribute search context @ctx, unmapping an associated extent
+ * mft record if present.
+ */
+void ntfs_attr_put_search_ctx(struct ntfs_attr_search_ctx *ctx)
+{
+	if (ctx->mapped_mrec)
+		unmap_mft_record(ctx->ntfs_ino);
+
+	if (ctx->mapped_base_mrec && ctx->base_ntfs_ino &&
+	    ctx->ntfs_ino != ctx->base_ntfs_ino)
+		unmap_extent_mft_record(ctx->base_ntfs_ino);
+	kmem_cache_free(ntfs_attr_ctx_cache, ctx);
+}
+
+/**
+ * ntfs_attr_find_in_attrdef - find an attribute in the $AttrDef system file
+ * @vol:	ntfs volume to which the attribute belongs
+ * @type:	attribute type which to find
+ *
+ * Search for the attribute definition record corresponding to the attribute
+ * @type in the $AttrDef system file.
+ *
+ * Return the attribute type definition record if found and NULL if not found.
+ */
+static struct attr_def *ntfs_attr_find_in_attrdef(const struct ntfs_volume *vol,
+		const __le32 type)
+{
+	struct attr_def *ad;
+
+	WARN_ON(!type);
+	for (ad = vol->attrdef; (u8 *)ad - (u8 *)vol->attrdef <
+	     vol->attrdef_size && ad->type; ++ad) {
+		/* We have not found it yet, carry on searching. */
+		if (likely(le32_to_cpu(ad->type) < le32_to_cpu(type)))
+			continue;
+		/* We found the attribute; return it. */
+		if (likely(ad->type == type))
+			return ad;
+		/* We have gone too far already.  No point in continuing. */
+		break;
+	}
+	/* Attribute not found. */
+	ntfs_debug("Attribute type 0x%x not found in $AttrDef.",
+			le32_to_cpu(type));
+	return NULL;
+}
+
+/**
+ * ntfs_attr_size_bounds_check - check a size of an attribute type for validity
+ * @vol:	ntfs volume to which the attribute belongs
+ * @type:	attribute type which to check
+ * @size:	size which to check
+ *
+ * Check whether the @size in bytes is valid for an attribute of @type on the
+ * ntfs volume @vol.  This information is obtained from $AttrDef system file.
+ */
+int ntfs_attr_size_bounds_check(const struct ntfs_volume *vol, const __le32 type,
+		const s64 size)
+{
+	struct attr_def *ad;
+
+	if (size < 0)
+		return -EINVAL;
+
+	/*
+	 * $ATTRIBUTE_LIST has a maximum size of 256kiB, but this is not
+	 * listed in $AttrDef.
+	 */
+	if (unlikely(type == AT_ATTRIBUTE_LIST && size > 256 * 1024))
+		return -ERANGE;
+	/* Get the $AttrDef entry for the attribute @type. */
+	ad = ntfs_attr_find_in_attrdef(vol, type);
+	if (unlikely(!ad))
+		return -ENOENT;
+	/* Do the bounds check.
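Because $AttrDef entries are sorted by type code, the scan above can stop as soon as it passes the wanted type instead of walking the whole table. A userspace sketch of that early-terminating scan (the `toy_attr_def` layout and helper name are invented here, not the on-disk format):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for an $AttrDef entry (not the on-disk layout). */
struct toy_attr_def {
	uint32_t type;		/* type code, ascending; 0 terminates the table */
	int64_t min_size;	/* <= 0 means no lower bound */
	int64_t max_size;	/* <= 0 means no upper bound */
};

/*
 * Like ntfs_attr_find_in_attrdef(): stop as soon as the current entry's
 * type code exceeds the one being looked up.
 */
static const struct toy_attr_def *toy_find_in_attrdef(
		const struct toy_attr_def *tab, size_t n, uint32_t type)
{
	size_t i;

	for (i = 0; i < n && tab[i].type; i++) {
		if (tab[i].type < type)
			continue;	/* not there yet */
		if (tab[i].type == type)
			return &tab[i];	/* found */
		break;			/* gone past it: not present */
	}
	return NULL;
}
```

A NULL return here corresponds to the -ENOENT path in the size bounds check that follows.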
+	 */
+	if (((le64_to_cpu(ad->min_size) > 0) &&
+	    size < le64_to_cpu(ad->min_size)) ||
+	    ((le64_to_cpu(ad->max_size) > 0) && size >
+	    le64_to_cpu(ad->max_size)))
+		return -ERANGE;
+	return 0;
+}
+
+/**
+ * ntfs_attr_can_be_non_resident - check if an attribute can be non-resident
+ * @vol:	ntfs volume to which the attribute belongs
+ * @type:	attribute type which to check
+ *
+ * Check whether the attribute of @type on the ntfs volume @vol is allowed to
+ * be non-resident.  This information is obtained from $AttrDef system file.
+ */
+static int ntfs_attr_can_be_non_resident(const struct ntfs_volume *vol,
+		const __le32 type)
+{
+	struct attr_def *ad;
+
+	/* Find the attribute definition record in $AttrDef. */
+	ad = ntfs_attr_find_in_attrdef(vol, type);
+	if (unlikely(!ad))
+		return -ENOENT;
+	/* Check the flags and return the result. */
+	if (ad->flags & ATTR_DEF_RESIDENT)
+		return -EPERM;
+	return 0;
+}
+
+/**
+ * ntfs_attr_can_be_resident - check if an attribute can be resident
+ * @vol:	ntfs volume to which the attribute belongs
+ * @type:	attribute type which to check
+ *
+ * Check whether the attribute of @type on the ntfs volume @vol is allowed to
+ * be resident.  This information is derived from our ntfs knowledge and may
+ * not be completely accurate, especially when user defined attributes are
+ * present.  Basically we allow everything to be resident except for index
+ * allocation and $EA attributes.
+ *
+ * Return 0 if the attribute is allowed to be resident and -EPERM if not.
+ *
+ * Warning: In the system file $MFT the attribute $Bitmap must be non-resident
+ *	    otherwise windows will not boot (blue screen of death)!  We cannot
+ *	    check for this here as we do not know which inode's $Bitmap is
+ *	    being asked about so the caller needs to special case this.
+ */
+int ntfs_attr_can_be_resident(const struct ntfs_volume *vol, const __le32 type)
+{
+	if (type == AT_INDEX_ALLOCATION)
+		return -EPERM;
+	return 0;
+}
+
+/**
+ * ntfs_attr_record_resize - resize an attribute record
+ * @m:		mft record containing attribute record
+ * @a:		attribute record to resize
+ * @new_size:	new size in bytes to which to resize the attribute record @a
+ *
+ * Resize the attribute record @a, i.e. the resident part of the attribute, in
+ * the mft record @m to @new_size bytes.
+ */
+int ntfs_attr_record_resize(struct mft_record *m, struct attr_record *a, u32 new_size)
+{
+	u32 old_size, alloc_size, attr_size;
+
+	old_size = le32_to_cpu(m->bytes_in_use);
+	alloc_size = le32_to_cpu(m->bytes_allocated);
+	attr_size = le32_to_cpu(a->length);
+
+	ntfs_debug("Sizes: old=%u alloc=%u attr=%u new=%u\n",
+		   (unsigned int)old_size, (unsigned int)alloc_size,
+		   (unsigned int)attr_size, (unsigned int)new_size);
+
+	/* Align to 8 bytes if it is not already done. */
+	if (new_size & 7)
+		new_size = (new_size + 7) & ~7;
+	/* If the actual attribute length has changed, move things around. */
+	if (new_size != attr_size) {
+		u32 new_muse = le32_to_cpu(m->bytes_in_use) -
+				attr_size + new_size;
+		/* Not enough space in this mft record. */
+		if (new_muse > le32_to_cpu(m->bytes_allocated))
+			return -ENOSPC;
+
+		if (a->type == AT_INDEX_ROOT && new_size > attr_size &&
+		    new_muse + 120 > alloc_size && old_size + 120 <= alloc_size) {
+			ntfs_debug("Too big struct index_root (%u > %u)\n",
+				   new_muse, alloc_size);
+			return -ENOSPC;
+		}
+
+		/* Move attributes following @a to their new location. */
+		memmove((u8 *)a + new_size, (u8 *)a + le32_to_cpu(a->length),
+			le32_to_cpu(m->bytes_in_use) - ((u8 *)a -
+			(u8 *)m) - attr_size);
+		/* Adjust @m to reflect the change in used space. */
+		m->bytes_in_use = cpu_to_le32(new_muse);
+		/* Adjust @a to reflect the new size.
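The core of the record resize above is: round the requested size up to an 8-byte multiple, refuse if the record would overflow its allocation, then slide the trailing attributes and fix up `bytes_in_use`. A self-contained sketch of just that arithmetic (plain integers instead of little-endian fields; `toy_record_resize` is an invented name):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

/*
 * Toy version of the resize logic.  @rec is the whole record buffer,
 * the attribute being resized starts at @attr_off with length @attr_len.
 */
static int toy_record_resize(uint8_t *rec, uint32_t *bytes_in_use,
			     uint32_t bytes_allocated, uint32_t attr_off,
			     uint32_t attr_len, uint32_t new_size)
{
	uint32_t new_muse;

	new_size = (new_size + 7) & ~7U;	/* align to 8 bytes */
	if (new_size == attr_len)
		return 0;
	new_muse = *bytes_in_use - attr_len + new_size;
	if (new_muse > bytes_allocated)
		return -ENOSPC;			/* record would overflow */
	/* Move the attributes following this one to their new location. */
	memmove(rec + attr_off + new_size, rec + attr_off + attr_len,
		*bytes_in_use - attr_off - attr_len);
	*bytes_in_use = new_muse;
	return 0;
}
```

Note the single `memmove` works for both growing and shrinking, since source and destination may overlap.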
+		 */
+		if (new_size >= offsetof(struct attr_record, length) + sizeof(a->length))
+			a->length = cpu_to_le32(new_size);
+	}
+	return 0;
+}
+
+/**
+ * ntfs_resident_attr_value_resize - resize the value of a resident attribute
+ * @m:		mft record containing attribute record
+ * @a:		attribute record whose value to resize
+ * @new_size:	new size in bytes to which to resize the attribute value of @a
+ *
+ * Resize the value of the attribute @a in the mft record @m to @new_size bytes.
+ * If the value is made bigger, the newly allocated space is cleared.
+ */
+int ntfs_resident_attr_value_resize(struct mft_record *m, struct attr_record *a,
+		const u32 new_size)
+{
+	u32 old_size;
+
+	/* Resize the resident part of the attribute record. */
+	if (ntfs_attr_record_resize(m, a,
+			le16_to_cpu(a->data.resident.value_offset) + new_size))
+		return -ENOSPC;
+	/*
+	 * The resize succeeded!  If we made the attribute value bigger, clear
+	 * the area between the old size and @new_size.
+	 */
+	old_size = le32_to_cpu(a->data.resident.value_length);
+	if (new_size > old_size)
+		memset((u8 *)a + le16_to_cpu(a->data.resident.value_offset) +
+			old_size, 0, new_size - old_size);
+	/* Finally update the length of the attribute value. */
+	a->data.resident.value_length = cpu_to_le32(new_size);
+	return 0;
+}
+
+/**
+ * ntfs_attr_make_non_resident - convert a resident to a non-resident attribute
+ * @ni:		ntfs inode describing the attribute to convert
+ * @data_size:	size of the resident data to copy to the non-resident attribute
+ *
+ * Convert the resident ntfs attribute described by the ntfs inode @ni to a
+ * non-resident one.
+ *
+ * @data_size must be equal to the attribute value size.  This is needed since
+ * we need to know the size before we can map the mft record and our callers
+ * always know it.  The reason we cannot simply read the size from the vfs
+ * inode i_size is that this is not necessarily uptodate.  This happens when
+ * ntfs_attr_make_non_resident() is called in the ->truncate call path(s).
+ */
+int ntfs_attr_make_non_resident(struct ntfs_inode *ni, const u32 data_size)
+{
+	s64 new_size;
+	struct inode *vi = VFS_I(ni);
+	struct ntfs_volume *vol = ni->vol;
+	struct ntfs_inode *base_ni;
+	struct mft_record *m;
+	struct attr_record *a;
+	struct ntfs_attr_search_ctx *ctx;
+	struct folio *folio;
+	struct runlist_element *rl;
+	u8 *kaddr;
+	unsigned long flags;
+	int mp_size, mp_ofs, name_ofs, arec_size, err, err2;
+	u32 attr_size;
+	u8 old_res_attr_flags;
+
+	if (NInoNonResident(ni)) {
+		ntfs_warning(vol->sb,
+			"Trying to make non-resident attribute non-resident.  Aborting...\n");
+		return -EINVAL;
+	}
+
+	/* Check that the attribute is allowed to be non-resident. */
+	err = ntfs_attr_can_be_non_resident(vol, ni->type);
+	if (unlikely(err)) {
+		if (err == -EPERM)
+			ntfs_debug("Attribute is not allowed to be non-resident.");
+		else
+			ntfs_debug("Attribute not defined on the NTFS volume!");
+		return err;
+	}
+
+	if (NInoEncrypted(ni))
+		return -EIO;
+
+	if (!NInoAttr(ni))
+		base_ni = ni;
+	else
+		base_ni = ni->ext.base_ntfs_ino;
+	m = map_mft_record(base_ni);
+	if (IS_ERR(m)) {
+		err = PTR_ERR(m);
+		m = NULL;
+		ctx = NULL;
+		goto err_out;
+	}
+	ctx = ntfs_attr_get_search_ctx(base_ni, m);
+	if (unlikely(!ctx)) {
+		err = -ENOMEM;
+		goto err_out;
+	}
+	err = ntfs_attr_lookup(ni->type, ni->name, ni->name_len,
+			CASE_SENSITIVE, 0, NULL, 0, ctx);
+	if (unlikely(err)) {
+		if (err == -ENOENT)
+			err = -EIO;
+		goto err_out;
+	}
+	m = ctx->mrec;
+	a = ctx->attr;
+
+	/*
+	 * The size needs to be aligned to a cluster boundary for allocation
+	 * purposes.
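The sizing arithmetic that follows rounds the data size up to a whole cluster and, for compressed attributes, further up to a whole compression block of `1 << (STANDARD_COMPRESSION_UNIT + cluster_size_bits)` bytes. A standalone sketch of that arithmetic (the `toy_*` helpers are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Round @size up to a multiple of the power-of-two @granularity. */
static int64_t toy_round_up_pow2(int64_t size, int64_t granularity)
{
	return ((size - 1) | (granularity - 1)) + 1;
}

/*
 * Mirror of the sizing logic: round to a cluster first, then, for
 * compressed attributes, to a full compression block of
 * 1 << (compression_unit + cluster_size_bits) bytes.
 */
static int64_t toy_non_resident_alloc_size(int64_t data_size,
					   unsigned int cluster_size_bits,
					   int compressed,
					   unsigned int compression_unit)
{
	int64_t cluster_size = (int64_t)1 << cluster_size_bits;
	int64_t new_size;

	new_size = (data_size + cluster_size - 1) & ~(cluster_size - 1);
	if (compressed && new_size > 0)
		new_size = toy_round_up_pow2(new_size,
				(int64_t)1 << (compression_unit +
					       cluster_size_bits));
	return new_size;
}
```

With 4kiB clusters (`cluster_size_bits` = 12) and the standard compression unit of 4, a compression block is 64kiB, so even a small compressed attribute is allocated in 64kiB steps.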
+	 */
+	new_size = (data_size + vol->cluster_size - 1) &
+			~(vol->cluster_size - 1);
+	if (new_size > 0) {
+		if ((a->flags & ATTR_COMPRESSION_MASK) == ATTR_IS_COMPRESSED) {
+			/* must allocate full compression blocks */
+			new_size = ((new_size - 1) |
+				((1L << (STANDARD_COMPRESSION_UNIT +
+				vol->cluster_size_bits)) - 1)) + 1;
+		}
+
+		/*
+		 * Will need folio later and since folio lock nests
+		 * outside all ntfs locks, we need to get the folio now.
+		 */
+		folio = __filemap_get_folio(vi->i_mapping, 0,
+				FGP_CREAT | FGP_LOCK,
+				mapping_gfp_mask(vi->i_mapping));
+		if (IS_ERR(folio)) {
+			err = -ENOMEM;
+			goto err_out;
+		}
+
+		/* Start by allocating clusters to hold the attribute value. */
+		rl = ntfs_cluster_alloc(vol, 0, new_size >>
+				vol->cluster_size_bits, -1, DATA_ZONE, true,
+				false, false);
+		if (IS_ERR(rl)) {
+			err = PTR_ERR(rl);
+			ntfs_debug("Failed to allocate cluster%s, error code %i.",
+					(new_size >> vol->cluster_size_bits) > 1 ? "s" : "",
+					err);
+			goto folio_err_out;
+		}
+	} else {
+		rl = NULL;
+		folio = NULL;
+	}
+
+	down_write(&ni->runlist.lock);
+	/* Determine the size of the mapping pairs array. */
+	mp_size = ntfs_get_size_for_mapping_pairs(vol, rl, 0, -1, -1);
+	if (unlikely(mp_size < 0)) {
+		err = mp_size;
+		ntfs_debug("Failed to get size for mapping pairs array, error code %i.\n", err);
+		goto rl_err_out;
+	}
+
+	if (NInoNonResident(ni) || a->non_resident) {
+		err = -EIO;
+		goto rl_err_out;
+	}
+
+	/*
+	 * Calculate new offsets for the name and the mapping pairs array.
+	 */
+	if (NInoSparse(ni) || NInoCompressed(ni))
+		name_ofs = (offsetof(struct attr_record,
+				data.non_resident.compressed_size) +
+				sizeof(a->data.non_resident.compressed_size) +
+				7) & ~7;
+	else
+		name_ofs = (offsetof(struct attr_record,
+				data.non_resident.compressed_size) + 7) & ~7;
+	mp_ofs = (name_ofs + a->name_length * sizeof(__le16) + 7) & ~7;
+	/*
+	 * Determine the size of the resident part of the now non-resident
+	 * attribute record.
+	 */
+	arec_size = (mp_ofs + mp_size + 7) & ~7;
+	/*
+	 * If the folio is not uptodate bring it uptodate by copying from the
+	 * attribute value.
+	 */
+	attr_size = le32_to_cpu(a->data.resident.value_length);
+	WARN_ON(attr_size != data_size);
+	if (folio && !folio_test_uptodate(folio)) {
+		kaddr = kmap_local_folio(folio, 0);
+		memcpy(kaddr, (u8 *)a +
+			le16_to_cpu(a->data.resident.value_offset),
+			attr_size);
+		memset(kaddr + attr_size, 0, PAGE_SIZE - attr_size);
+		kunmap_local(kaddr);
+		flush_dcache_folio(folio);
+		folio_mark_uptodate(folio);
+	}
+
+	/* Backup the attribute flag. */
+	old_res_attr_flags = a->data.resident.flags;
+	/* Resize the resident part of the attribute record. */
+	err = ntfs_attr_record_resize(m, a, arec_size);
+	if (unlikely(err))
+		goto rl_err_out;
+
+	/*
+	 * Convert the resident part of the attribute record to describe a
+	 * non-resident attribute.
+	 */
+	a->non_resident = 1;
+	/* Move the attribute name if it exists and update the offset. */
+	if (a->name_length)
+		memmove((u8 *)a + name_ofs, (u8 *)a + le16_to_cpu(a->name_offset),
+			a->name_length * sizeof(__le16));
+	a->name_offset = cpu_to_le16(name_ofs);
+	/* Setup the fields specific to non-resident attributes. */
+	a->data.non_resident.lowest_vcn = 0;
+	a->data.non_resident.highest_vcn = cpu_to_le64((new_size - 1) >>
+			vol->cluster_size_bits);
+	a->data.non_resident.mapping_pairs_offset = cpu_to_le16(mp_ofs);
+	memset(&a->data.non_resident.reserved, 0,
+		sizeof(a->data.non_resident.reserved));
+	a->data.non_resident.allocated_size = cpu_to_le64(new_size);
+	a->data.non_resident.data_size =
+		a->data.non_resident.initialized_size =
+		cpu_to_le64(attr_size);
+	if (NInoSparse(ni) || NInoCompressed(ni)) {
+		a->data.non_resident.compression_unit = 0;
+		if (NInoCompressed(ni) || vol->major_ver < 3)
+			a->data.non_resident.compression_unit = 4;
+		a->data.non_resident.compressed_size =
+			a->data.non_resident.allocated_size;
+	} else
+		a->data.non_resident.compression_unit = 0;
+	/* Generate the mapping pairs array into the attribute record. */
+	err = ntfs_mapping_pairs_build(vol, (u8 *)a + mp_ofs,
+			arec_size - mp_ofs, rl, 0, -1, NULL, NULL, NULL);
+	if (unlikely(err)) {
+		ntfs_error(vol->sb, "Failed to build mapping pairs, error code %i.",
+				err);
+		goto undo_err_out;
+	}
+
+	/* Setup the in-memory attribute structure to be non-resident.
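The in-memory compression fields derived next follow directly from the on-disk `compression_unit`: the block covers `1 << compression_unit` clusters, i.e. `1 << (compression_unit + cluster_size_bits)` bytes, and `block_size_bits` is just log2 of that. A sketch of the derivation (the `toy_*` names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

struct toy_compressed_info {
	uint32_t block_size;		/* bytes per compression block */
	unsigned int block_size_bits;	/* log2(block_size) */
	unsigned int block_clusters;	/* clusters per compression block */
};

/*
 * Derive the in-memory compression fields from the compression unit.
 * A unit of 0 (sparse-only case) leaves everything zeroed.
 */
static struct toy_compressed_info toy_compression_fields(unsigned int unit,
		unsigned int cluster_size_bits)
{
	struct toy_compressed_info c = {0};

	if (unit) {
		c.block_size = 1U << (unit + cluster_size_bits);
		c.block_size_bits = unit + cluster_size_bits;
		c.block_clusters = 1U << unit;
	}
	return c;
}
```

With the standard unit of 4 and 4kiB clusters this gives 16-cluster, 64kiB compression blocks, matching the allocation granularity used when the clusters were allocated earlier in this function.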
*/ + ni->runlist.rl =3D rl; + if (rl) { + for (ni->runlist.count =3D 1; rl->length !=3D 0; rl++) + ni->runlist.count++; + } else + ni->runlist.count =3D 0; + write_lock_irqsave(&ni->size_lock, flags); + ni->allocated_size =3D new_size; + if (NInoSparse(ni) || NInoCompressed(ni)) { + ni->itype.compressed.size =3D ni->allocated_size; + if (a->data.non_resident.compression_unit) { + ni->itype.compressed.block_size =3D 1U << + (a->data.non_resident.compression_unit + + vol->cluster_size_bits); + ni->itype.compressed.block_size_bits =3D + ffs(ni->itype.compressed.block_size) - + 1; + ni->itype.compressed.block_clusters =3D 1U << + a->data.non_resident.compression_unit; + } else { + ni->itype.compressed.block_size =3D 0; + ni->itype.compressed.block_size_bits =3D 0; + ni->itype.compressed.block_clusters =3D 0; + } + vi->i_blocks =3D ni->itype.compressed.size >> 9; + } else + vi->i_blocks =3D ni->allocated_size >> 9; + write_unlock_irqrestore(&ni->size_lock, flags); + /* + * This needs to be last since the address space operations ->read_folio + * and ->writepage can run concurrently with us as they are not + * serialized on i_mutex. Note, we are not allowed to fail once we flip + * this switch, which is another reason to do this last. + */ + NInoSetNonResident(ni); + NInoSetFullyMapped(ni); + /* Mark the mft record dirty, so it gets written back. */ + mark_mft_record_dirty(ctx->ntfs_ino); + ntfs_attr_put_search_ctx(ctx); + unmap_mft_record(base_ni); + up_write(&ni->runlist.lock); + if (folio) { + iomap_dirty_folio(vi->i_mapping, folio); + folio_unlock(folio); + folio_put(folio); + } + ntfs_debug("Done."); + return 0; +undo_err_out: + /* Convert the attribute back into a resident attribute. */ + a->non_resident =3D 0; + /* Move the attribute name if it exists and update the offset. 
*/
+	name_ofs = (offsetof(struct attr_record, data.resident.reserved) +
+			sizeof(a->data.resident.reserved) + 7) & ~7;
+	if (a->name_length)
+		memmove((u8 *)a + name_ofs, (u8 *)a + le16_to_cpu(a->name_offset),
+				a->name_length * sizeof(__le16));
+	mp_ofs = (name_ofs + a->name_length * sizeof(__le16) + 7) & ~7;
+	a->name_offset = cpu_to_le16(name_ofs);
+	arec_size = (mp_ofs + attr_size + 7) & ~7;
+	/* Resize the resident part of the attribute record. */
+	err2 = ntfs_attr_record_resize(m, a, arec_size);
+	if (unlikely(err2)) {
+		/*
+		 * This cannot happen (well if memory corruption is at work it
+		 * could happen in theory), but deal with it as well as we can.
+		 * If the old size is too small, truncate the attribute,
+		 * otherwise simply give it a larger allocated size.
+		 */
+		arec_size = le32_to_cpu(a->length);
+		if ((mp_ofs + attr_size) > arec_size) {
+			err2 = attr_size;
+			attr_size = arec_size - mp_ofs;
+			ntfs_error(vol->sb,
+				"Failed to undo partial resident to non-resident attribute conversion. Truncating inode 0x%lx, attribute type 0x%x from %i bytes to %i bytes to maintain metadata consistency. THIS MEANS YOU ARE LOSING %i BYTES DATA FROM THIS %s.",
+				vi->i_ino,
+				(unsigned int)le32_to_cpu(ni->type),
+				err2, attr_size, err2 - attr_size,
+				((ni->type == AT_DATA) &&
+				!ni->name_len) ? "FILE" : "ATTRIBUTE");
+			write_lock_irqsave(&ni->size_lock, flags);
+			ni->initialized_size = attr_size;
+			i_size_write(vi, attr_size);
+			write_unlock_irqrestore(&ni->size_lock, flags);
+		}
+	}
+	/* Setup the fields specific to resident attributes. */
+	a->data.resident.value_length = cpu_to_le32(attr_size);
+	a->data.resident.value_offset = cpu_to_le16(mp_ofs);
+	a->data.resident.flags = old_res_attr_flags;
+	memset(&a->data.resident.reserved, 0,
+		sizeof(a->data.resident.reserved));
+	/* Copy the data from the folio back to the attribute value.
*/
+	if (folio)
+		memcpy_from_folio((u8 *)a + mp_ofs, folio, 0, attr_size);
+	/* Setup the allocated size in the ntfs inode in case it changed. */
+	write_lock_irqsave(&ni->size_lock, flags);
+	ni->allocated_size = arec_size - mp_ofs;
+	write_unlock_irqrestore(&ni->size_lock, flags);
+	/* Mark the mft record dirty, so it gets written back. */
+	mark_mft_record_dirty(ctx->ntfs_ino);
+rl_err_out:
+	up_write(&ni->runlist.lock);
+	if (rl) {
+		if (ntfs_cluster_free_from_rl(vol, rl) < 0) {
+			ntfs_error(vol->sb,
+				"Failed to release allocated cluster(s) in error code path. Run chkdsk to recover the lost cluster(s).");
+			NVolSetErrors(vol);
+		}
+		ntfs_free(rl);
+folio_err_out:
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+err_out:
+	if (ctx)
+		ntfs_attr_put_search_ctx(ctx);
+	if (m)
+		unmap_mft_record(base_ni);
+	ni->runlist.rl = NULL;
+
+	if (err == -EINVAL)
+		err = -EIO;
+	return err;
+}
+
+/**
+ * ntfs_attr_set - fill (a part of) an attribute with a byte
+ * @ni:		ntfs inode describing the attribute to fill
+ * @ofs:	offset inside the attribute at which to start to fill
+ * @cnt:	number of bytes to fill
+ * @val:	the unsigned 8-bit value with which to fill the attribute
+ *
+ * Fill @cnt bytes of the attribute described by the ntfs inode @ni starting at
+ * byte offset @ofs inside the attribute with the constant byte @val.
+ *
+ * This function is effectively like memset() applied to an ntfs attribute.
+ * Note this function actually only operates on the page cache pages belonging
+ * to the ntfs attribute and it marks them dirty after doing the memset().
+ * Thus it relies on the vm dirty page write code paths to cause the modified
+ * pages to be written to the mft record/disk.
+ */ +int ntfs_attr_set(struct ntfs_inode *ni, s64 ofs, s64 cnt, const u8 val) +{ + struct address_space *mapping =3D VFS_I(ni)->i_mapping; + struct folio *folio; + pgoff_t index; + u8 *addr; + unsigned long offset; + size_t attr_len; + int ret =3D 0; + + index =3D ofs >> PAGE_SHIFT; + while (cnt) { + folio =3D ntfs_read_mapping_folio(mapping, index); + if (IS_ERR(folio)) { + ret =3D PTR_ERR(folio); + ntfs_error(VFS_I(ni)->i_sb, "Failed to read a page %lu for attr %#x: %l= d", + index, ni->type, PTR_ERR(folio)); + break; + } + + offset =3D offset_in_folio(folio, ofs); + attr_len =3D min_t(size_t, (size_t)cnt, folio_size(folio) - offset); + + folio_lock(folio); + addr =3D kmap_local_folio(folio, offset); + memset(addr, val, attr_len); + kunmap_local(addr); + + flush_dcache_folio(folio); + folio_mark_dirty(folio); + folio_unlock(folio); + folio_put(folio); + + ofs +=3D attr_len; + cnt -=3D attr_len; + index++; + cond_resched(); + } + + return ret; +} + +int ntfs_attr_set_initialized_size(struct ntfs_inode *ni, loff_t new_size) +{ + struct ntfs_attr_search_ctx *ctx; + int err =3D 0; + + if (!NInoNonResident(ni)) + return -EINVAL; + + ctx =3D ntfs_attr_get_search_ctx(ni, NULL); + if (!ctx) + return -ENOMEM; + + err =3D ntfs_attr_lookup(ni->type, ni->name, ni->name_len, + CASE_SENSITIVE, 0, NULL, 0, ctx); + if (err) + goto out_ctx; + + ctx->attr->data.non_resident.initialized_size =3D cpu_to_le64(new_size); + ni->initialized_size =3D new_size; + mark_mft_record_dirty(ctx->ntfs_ino); +out_ctx: + ntfs_attr_put_search_ctx(ctx); + return err; +} + +/** + * ntfs_make_room_for_attr - make room for an attribute inside an mft reco= rd + * @m: mft record + * @pos: position at which to make space + * @size: byte size to make available at this position + * + * @pos points to the attribute in front of which we want to make space. 
+ */
+static int ntfs_make_room_for_attr(struct mft_record *m, u8 *pos, u32 size)
+{
+	u32 biu;
+
+	ntfs_debug("Entering for pos 0x%x, size %u.\n",
+			(int)(pos - (u8 *)m), (unsigned int)size);
+
+	/* Align the size to an 8-byte boundary. */
+	size = (size + 7) & ~7;
+
+	/* Rigorous consistency checks. */
+	if (!m || !pos || pos < (u8 *)m) {
+		pr_err("%s: pos=%p m=%p", __func__, pos, m);
+		return -EINVAL;
+	}
+
+	/* The -8 is for the attribute terminator. */
+	if (pos - (u8 *)m > (int)le32_to_cpu(m->bytes_in_use) - 8)
+		return -EINVAL;
+	/* Nothing to do. */
+	if (!size)
+		return 0;
+
+	biu = le32_to_cpu(m->bytes_in_use);
+	/* Do we have enough space? */
+	if (biu + size > le32_to_cpu(m->bytes_allocated) ||
+	    pos + size > (u8 *)m + le32_to_cpu(m->bytes_allocated)) {
+		ntfs_debug("Not enough space in the MFT record\n");
+		return -ENOSPC;
+	}
+	/* Move everything after pos to pos + size. */
+	memmove(pos + size, pos, biu - (pos - (u8 *)m));
+	/* Update the mft record. */
+	m->bytes_in_use = cpu_to_le32(biu + size);
+	return 0;
+}
+
+/**
+ * ntfs_resident_attr_record_add - add resident attribute to inode
+ * @ni:		opened ntfs inode to whose MFT record the attribute is added
+ * @type:	type of the new attribute
+ * @name:	name of the new attribute
+ * @name_len:	name length of the new attribute
+ * @val:	value of the new attribute
+ * @size:	size of new attribute (length of @val, if @val != NULL)
+ * @flags:	flags of the new attribute
+ */
+int ntfs_resident_attr_record_add(struct ntfs_inode *ni, __le32 type,
+		__le16 *name, u8 name_len, u8 *val, u32 size,
+		__le16 flags)
+{
+	struct ntfs_attr_search_ctx *ctx;
+	u32 length;
+	struct attr_record *a;
+	struct mft_record *m;
+	int err, offset;
+	struct ntfs_inode *base_ni;
+
+	ntfs_debug("Entering for inode 0x%llx, attr 0x%x, flags 0x%x.\n",
+			(long long)ni->mft_no, (unsigned int)le32_to_cpu(type),
+			(unsigned int)le16_to_cpu(flags));
+
+	if (!ni || (!name && name_len))
+		return -EINVAL;
+
+	err =
ntfs_attr_can_be_resident(ni->vol, type); + if (err) { + if (err =3D=3D -EPERM) + ntfs_debug("Attribute can't be resident.\n"); + else + ntfs_debug("ntfs_attr_can_be_resident failed.\n"); + return err; + } + + /* Locate place where record should be. */ + ctx =3D ntfs_attr_get_search_ctx(ni, NULL); + if (!ctx) { + ntfs_error(ni->vol->sb, "%s: Failed to get search context", + __func__); + return -ENOMEM; + } + /* + * Use ntfs_attr_find instead of ntfs_attr_lookup to find place for + * attribute in @ni->mrec, not any extent inode in case if @ni is base + * file record. + */ + err =3D ntfs_attr_find(type, name, name_len, CASE_SENSITIVE, val, size, c= tx); + if (!err) { + err =3D -EEXIST; + ntfs_debug("Attribute already present.\n"); + goto put_err_out; + } + if (err !=3D -ENOENT) { + err =3D -EIO; + goto put_err_out; + } + a =3D ctx->attr; + m =3D ctx->mrec; + + /* Make room for attribute. */ + length =3D offsetof(struct attr_record, data.resident.reserved) + + sizeof(a->data.resident.reserved) + + ((name_len * sizeof(__le16) + 7) & ~7) + + ((size + 7) & ~7); + err =3D ntfs_make_room_for_attr(ctx->mrec, (u8 *) ctx->attr, length); + if (err) { + ntfs_debug("Failed to make room for attribute.\n"); + goto put_err_out; + } + + /* Setup record fields. */ + offset =3D ((u8 *)a - (u8 *)m); + a->type =3D type; + a->length =3D cpu_to_le32(length); + a->non_resident =3D 0; + a->name_length =3D name_len; + a->name_offset =3D + name_len ? 
cpu_to_le16((offsetof(struct attr_record, data.resident.reser= ved) + + sizeof(a->data.resident.reserved))) : cpu_to_le16(0); + + a->flags =3D flags; + a->instance =3D m->next_attr_instance; + a->data.resident.value_length =3D cpu_to_le32(size); + a->data.resident.value_offset =3D cpu_to_le16(length - ((size + 7) & ~7)); + if (val) + memcpy((u8 *)a + le16_to_cpu(a->data.resident.value_offset), val, size); + else + memset((u8 *)a + le16_to_cpu(a->data.resident.value_offset), 0, size); + if (type =3D=3D AT_FILE_NAME) + a->data.resident.flags =3D RESIDENT_ATTR_IS_INDEXED; + else + a->data.resident.flags =3D 0; + if (name_len) + memcpy((u8 *)a + le16_to_cpu(a->name_offset), + name, sizeof(__le16) * name_len); + m->next_attr_instance =3D + cpu_to_le16((le16_to_cpu(m->next_attr_instance) + 1) & 0xffff); + if (ni->nr_extents =3D=3D -1) + base_ni =3D ni->ext.base_ntfs_ino; + else + base_ni =3D ni; + if (type !=3D AT_ATTRIBUTE_LIST && NInoAttrList(base_ni)) { + err =3D ntfs_attrlist_entry_add(ni, a); + if (err) { + ntfs_attr_record_resize(m, a, 0); + mark_mft_record_dirty(ctx->ntfs_ino); + ntfs_debug("Failed add attribute entry to ATTRIBUTE_LIST.\n"); + goto put_err_out; + } + } + mark_mft_record_dirty(ni); + ntfs_attr_put_search_ctx(ctx); + return offset; +put_err_out: + ntfs_attr_put_search_ctx(ctx); + return -EIO; +} + +/** + * ntfs_non_resident_attr_record_add - add extent of non-resident attribute + * @ni: opened ntfs inode to which MFT record add attribute + * @type: type of the new attribute extent + * @name: name of the new attribute extent + * @name_len: name length of the new attribute extent + * @lowest_vcn: lowest vcn of the new attribute extent + * @dataruns_size: dataruns size of the new attribute extent + * @flags: flags of the new attribute extent + */ +static int ntfs_non_resident_attr_record_add(struct ntfs_inode *ni, __le32= type, + __le16 *name, u8 name_len, s64 lowest_vcn, int dataruns_size, + __le16 flags) +{ + struct ntfs_attr_search_ctx *ctx; + u32 
length; + struct attr_record *a; + struct mft_record *m; + struct ntfs_inode *base_ni; + int err, offset; + + ntfs_debug("Entering for inode 0x%llx, attr 0x%x, lowest_vcn %lld, dataru= ns_size %d, flags 0x%x.\n", + (long long) ni->mft_no, (unsigned int) le32_to_cpu(type), + (long long) lowest_vcn, dataruns_size, + (unsigned int) le16_to_cpu(flags)); + + if (!ni || dataruns_size <=3D 0 || (!name && name_len)) + return -EINVAL; + + err =3D ntfs_attr_can_be_non_resident(ni->vol, type); + if (err) { + if (err =3D=3D -EPERM) + pr_err("Attribute can't be non resident"); + else + pr_err("ntfs_attr_can_be_non_resident failed"); + return err; + } + + /* Locate place where record should be. */ + ctx =3D ntfs_attr_get_search_ctx(ni, NULL); + if (!ctx) { + pr_err("%s: Failed to get search context", __func__); + return -ENOMEM; + } + /* + * Use ntfs_attr_find instead of ntfs_attr_lookup to find place for + * attribute in @ni->mrec, not any extent inode in case if @ni is base + * file record. + */ + err =3D ntfs_attr_find(type, name, name_len, CASE_SENSITIVE, NULL, 0, ctx= ); + if (!err) { + err =3D -EEXIST; + pr_err("Attribute 0x%x already present", type); + goto put_err_out; + } + if (err !=3D -ENOENT) { + pr_err("ntfs_attr_find failed"); + err =3D -EIO; + goto put_err_out; + } + a =3D ctx->attr; + m =3D ctx->mrec; + + /* Make room for attribute. */ + dataruns_size =3D (dataruns_size + 7) & ~7; + length =3D offsetof(struct attr_record, data.non_resident.compressed_size= ) + + ((sizeof(__le16) * name_len + 7) & ~7) + dataruns_size + + ((flags & (ATTR_IS_COMPRESSED | ATTR_IS_SPARSE)) ? + sizeof(a->data.non_resident.compressed_size) : 0); + err =3D ntfs_make_room_for_attr(ctx->mrec, (u8 *) ctx->attr, length); + if (err) { + pr_err("Failed to make room for attribute"); + goto put_err_out; + } + + /* Setup record fields. 
*/
+	a->type = type;
+	a->length = cpu_to_le32(length);
+	a->non_resident = 1;
+	a->name_length = name_len;
+	a->name_offset = cpu_to_le16(offsetof(struct attr_record,
+			data.non_resident.compressed_size) +
+			((flags & (ATTR_IS_COMPRESSED | ATTR_IS_SPARSE)) ?
+			sizeof(a->data.non_resident.compressed_size) : 0));
+	a->flags = flags;
+	a->instance = m->next_attr_instance;
+	a->data.non_resident.lowest_vcn = cpu_to_le64(lowest_vcn);
+	a->data.non_resident.mapping_pairs_offset = cpu_to_le16(length - dataruns_size);
+	a->data.non_resident.compression_unit =
+		(flags & ATTR_IS_COMPRESSED) ? STANDARD_COMPRESSION_UNIT : 0;
+	/* If @lowest_vcn == 0, then set up an empty attribute. */
+	if (!lowest_vcn) {
+		a->data.non_resident.highest_vcn = cpu_to_le64(-1);
+		a->data.non_resident.allocated_size = 0;
+		a->data.non_resident.data_size = 0;
+		a->data.non_resident.initialized_size = 0;
+		/* Set empty mapping pairs. */
+		*((u8 *)a + le16_to_cpu(a->data.non_resident.mapping_pairs_offset)) = 0;
+	}
+	if (name_len)
+		memcpy((u8 *)a + le16_to_cpu(a->name_offset),
+			name, sizeof(__le16) * name_len);
+	m->next_attr_instance =
+		cpu_to_le16((le16_to_cpu(m->next_attr_instance) + 1) & 0xffff);
+	if (ni->nr_extents == -1)
+		base_ni = ni->ext.base_ntfs_ino;
+	else
+		base_ni = ni;
+	if (type != AT_ATTRIBUTE_LIST && NInoAttrList(base_ni)) {
+		err = ntfs_attrlist_entry_add(ni, a);
+		if (err) {
+			pr_err("Failed to add attr entry to attrlist");
+			ntfs_attr_record_resize(m, a, 0);
+			goto put_err_out;
+		}
+	}
+	mark_mft_record_dirty(ni);
+	/*
+	 * Locate the offset from the start of the MFT record where the new
+	 * attribute is placed. We need to look it up again because the
+	 * record may have moved during the attribute list update.
+ */
+	ntfs_attr_reinit_search_ctx(ctx);
+	err = ntfs_attr_lookup(type, name, name_len, CASE_SENSITIVE,
+			lowest_vcn, NULL, 0, ctx);
+	if (err) {
+		pr_err("%s: attribute lookup failed", __func__);
+		ntfs_attr_put_search_ctx(ctx);
+		return err;
+	}
+	offset = (u8 *)ctx->attr - (u8 *)ctx->mrec;
+	ntfs_attr_put_search_ctx(ctx);
+	return offset;
+put_err_out:
+	ntfs_attr_put_search_ctx(ctx);
+	return err;
+}
+
+/**
+ * ntfs_attr_record_rm - remove attribute extent
+ * @ctx:	search context describing the attribute which should be removed
+ *
+ * If this function succeeds, the caller should reinitialize the search
+ * context before using it again.
+ */
+int ntfs_attr_record_rm(struct ntfs_attr_search_ctx *ctx)
+{
+	struct ntfs_inode *base_ni, *ni;
+	__le32 type;
+	int err;
+
+	if (!ctx || !ctx->ntfs_ino || !ctx->mrec || !ctx->attr)
+		return -EINVAL;
+
+	ntfs_debug("Entering for inode 0x%llx, attr 0x%x.\n",
+			(long long)ctx->ntfs_ino->mft_no,
+			(unsigned int)le32_to_cpu(ctx->attr->type));
+	type = ctx->attr->type;
+	ni = ctx->ntfs_ino;
+	if (ctx->base_ntfs_ino)
+		base_ni = ctx->base_ntfs_ino;
+	else
+		base_ni = ctx->ntfs_ino;
+
+	/* Remove the attribute itself. */
+	if (ntfs_attr_record_resize(ctx->mrec, ctx->attr, 0)) {
+		ntfs_debug("Couldn't remove attribute record. Bug or damaged MFT record.\n");
+		return -EIO;
+	}
+	mark_mft_record_dirty(ni);
+
+	/*
+	 * Remove the record from $ATTRIBUTE_LIST if it is present and we do
+	 * not want to delete $ATTRIBUTE_LIST itself.
+	 */
+	if (NInoAttrList(base_ni) && type != AT_ATTRIBUTE_LIST) {
+		err = ntfs_attrlist_entry_rm(ctx);
+		if (err) {
+			ntfs_debug("Couldn't delete record from $ATTRIBUTE_LIST.\n");
+			return err;
+		}
+	}
+
+	/* Post $ATTRIBUTE_LIST delete setup. */
+	if (type == AT_ATTRIBUTE_LIST) {
+		if (NInoAttrList(base_ni) && base_ni->attr_list)
+			ntfs_free(base_ni->attr_list);
+		base_ni->attr_list = NULL;
+		NInoClearAttrList(base_ni);
+	}
+
+	/* Free the MFT record if it no longer contains attributes.
*/ + if (le32_to_cpu(ctx->mrec->bytes_in_use) - + le16_to_cpu(ctx->mrec->attrs_offset) =3D=3D 8) { + if (ntfs_mft_record_free(ni->vol, ni)) { + ntfs_debug("Couldn't free MFT record.\n"); + return -EIO; + } + /* Remove done if we freed base inode. */ + if (ni =3D=3D base_ni) + return 0; + ntfs_inode_close(ni); + ctx->ntfs_ino =3D ni =3D NULL; + } + + if (type =3D=3D AT_ATTRIBUTE_LIST || !NInoAttrList(base_ni)) + return 0; + + /* Remove attribute list if we don't need it any more. */ + if (!ntfs_attrlist_need(base_ni)) { + struct ntfs_attr na; + struct inode *attr_vi; + + ntfs_attr_reinit_search_ctx(ctx); + if (ntfs_attr_lookup(AT_ATTRIBUTE_LIST, NULL, 0, CASE_SENSITIVE, + 0, NULL, 0, ctx)) { + ntfs_debug("Couldn't find attribute list. Succeed anyway.\n"); + return 0; + } + /* Deallocate clusters. */ + if (ctx->attr->non_resident) { + struct runlist_element *al_rl; + size_t new_rl_count; + + al_rl =3D ntfs_mapping_pairs_decompress(base_ni->vol, + ctx->attr, NULL, &new_rl_count); + if (IS_ERR(al_rl)) { + ntfs_debug("Couldn't decompress attribute list runlist. Succeed anyway= .\n"); + return 0; + } + if (ntfs_cluster_free_from_rl(base_ni->vol, al_rl)) + ntfs_debug("Leaking clusters! Run chkdsk. Couldn't free clusters from = attribute list runlist.\n"); + ntfs_free(al_rl); + } + /* Remove attribute record itself. */ + if (ntfs_attr_record_rm(ctx)) { + ntfs_debug("Couldn't remove attribute list. 
Succeed anyway.\n");
+			return 0;
+		}
+
+		na.mft_no = VFS_I(base_ni)->i_ino;
+		na.type = AT_ATTRIBUTE_LIST;
+		na.name = NULL;
+		na.name_len = 0;
+
+		attr_vi = ilookup5(VFS_I(base_ni)->i_sb, VFS_I(base_ni)->i_ino,
+				ntfs_test_inode, &na);
+		if (attr_vi) {
+			clear_nlink(attr_vi);
+			iput(attr_vi);
+		}
+	}
+	return 0;
+}
+
+/**
+ * ntfs_attr_add - add attribute to inode
+ * @ni:		opened ntfs inode to which to add the attribute
+ * @type:	type of the new attribute
+ * @name:	name in unicode of the new attribute
+ * @name_len:	name length in unicode characters of the new attribute
+ * @val:	value of new attribute
+ * @size:	size of the new attribute / length of @val (if specified)
+ *
+ * @val should always be specified for always resident attributes (e.g. the
+ * FILE_NAME attribute), while for attributes that can become non-resident
+ * @val can be NULL (e.g. the DATA attribute). @size can be specified even if
+ * @val is NULL; in this case the data size will be equal to @size and the
+ * initialized size will be equal to 0.
+ *
+ * If the inode does not have enough space to add the attribute, add it to
+ * one of its extents. If no extents are present, or none of them has enough
+ * space, then allocate a new extent and add the attribute to it.
+ *
+ * If an attribute list is needed during any of these steps but is not
+ * present, it is added transparently to the caller. Therefore this function
+ * must not be called with @type == AT_ATTRIBUTE_LIST; if you really need to
+ * add an attribute list, call ntfs_inode_add_attrlist instead.
+ *
+ * On success return 0. On error return the negative error code.
+ */
+int ntfs_attr_add(struct ntfs_inode *ni, __le32 type,
+		__le16 *name, u8 name_len, u8 *val, s64 size)
+{
+	struct super_block *sb;
+	u32 attr_rec_size;
+	int err, i, offset;
+	bool is_resident;
+	bool can_be_non_resident = false;
+	struct ntfs_inode *attr_ni;
+	struct inode *attr_vi;
+	struct mft_record *ni_mrec;
+
+	if (!ni || size < 0 || type == AT_ATTRIBUTE_LIST)
+		return -EINVAL;
+
+	ntfs_debug("Entering for inode 0x%llx, attr %x, size %lld.\n",
+			(long long)ni->mft_no, type, size);
+
+	if (ni->nr_extents == -1)
+		ni = ni->ext.base_ntfs_ino;
+
+	/* Check the attribute type and the size. */
+	err = ntfs_attr_size_bounds_check(ni->vol, type, size);
+	if (err) {
+		if (err == -ENOENT)
+			err = -EIO;
+		return err;
+	}
+
+	sb = ni->vol->sb;
+	/* Sanity checks for always resident attributes. */
+	err = ntfs_attr_can_be_non_resident(ni->vol, type);
+	if (err) {
+		if (err != -EPERM) {
+			ntfs_error(sb, "ntfs_attr_can_be_non_resident failed");
+			goto err_out;
+		}
+		/* @val is mandatory. */
+		if (!val) {
+			ntfs_error(sb,
+				"val is mandatory for always resident attributes");
+			return -EINVAL;
+		}
+		if (size > ni->vol->mft_record_size) {
+			ntfs_error(sb, "Attribute is too big");
+			return -ERANGE;
+		}
+	} else
+		can_be_non_resident = true;
+
+	/*
+	 * Determine whether the new attribute will be resident or not. In the
+	 * non-resident case we add 8 to the size for the mapping pairs.
+	 */
+	err = ntfs_attr_can_be_resident(ni->vol, type);
+	if (!err) {
+		is_resident = true;
+	} else {
+		if (err != -EPERM) {
+			ntfs_error(sb, "ntfs_attr_can_be_resident failed");
+			goto err_out;
+		}
+		is_resident = false;
+	}
+
+	/* Calculate the attribute record size.
*/ + if (is_resident) + attr_rec_size =3D offsetof(struct attr_record, data.resident.reserved) + + 1 + + ((name_len * sizeof(__le16) + 7) & ~7) + + ((size + 7) & ~7); + else + attr_rec_size =3D offsetof(struct attr_record, data.non_resident.compres= sed_size) + + ((name_len * sizeof(__le16) + 7) & ~7) + 8; + + /* + * If we have enough free space for the new attribute in the base MFT + * record, then add attribute to it. + */ +retry: + ni_mrec =3D map_mft_record(ni); + if (IS_ERR(ni_mrec)) { + err =3D -EIO; + goto err_out; + } + + if (le32_to_cpu(ni_mrec->bytes_allocated) - + le32_to_cpu(ni_mrec->bytes_in_use) >=3D attr_rec_size) { + attr_ni =3D ni; + unmap_mft_record(ni); + goto add_attr_record; + } + unmap_mft_record(ni); + + /* Try to add to extent inodes. */ + err =3D ntfs_inode_attach_all_extents(ni); + if (err) { + ntfs_error(sb, "Failed to attach all extents to inode"); + goto err_out; + } + + for (i =3D 0; i < ni->nr_extents; i++) { + attr_ni =3D ni->ext.extent_ntfs_inos[i]; + ni_mrec =3D map_mft_record(attr_ni); + if (IS_ERR(ni_mrec)) { + err =3D -EIO; + goto err_out; + } + + if (le32_to_cpu(ni_mrec->bytes_allocated) - + le32_to_cpu(ni_mrec->bytes_in_use) >=3D + attr_rec_size) { + unmap_mft_record(attr_ni); + goto add_attr_record; + } + unmap_mft_record(attr_ni); + } + + /* There is no extent that contain enough space for new attribute. */ + if (!NInoAttrList(ni)) { + /* Add attribute list not present, add it and retry. */ + err =3D ntfs_inode_add_attrlist(ni); + if (err) { + ntfs_error(sb, "Failed to add attribute list"); + goto err_out; + } + goto retry; + } + + attr_ni =3D NULL; + /* Allocate new extent. */ + err =3D ntfs_mft_record_alloc(ni->vol, 0, &attr_ni, ni, NULL); + if (err) { + ntfs_error(sb, "Failed to allocate extent record"); + goto err_out; + } + unmap_mft_record(attr_ni); + +add_attr_record: + if (is_resident) { + /* Add resident attribute. 
*/ + offset =3D ntfs_resident_attr_record_add(attr_ni, type, name, + name_len, val, size, 0); + if (offset < 0) { + if (offset =3D=3D -ENOSPC && can_be_non_resident) + goto add_non_resident; + err =3D offset; + ntfs_error(sb, "Failed to add resident attribute"); + goto free_err_out; + } + return 0; + } + +add_non_resident: + /* Add non resident attribute. */ + offset =3D ntfs_non_resident_attr_record_add(attr_ni, type, name, + name_len, 0, 8, 0); + if (offset < 0) { + err =3D offset; + ntfs_error(sb, "Failed to add non resident attribute"); + goto free_err_out; + } + + /* If @size =3D=3D 0, we are done. */ + if (!size) + return 0; + + /* Open new attribute and resize it. */ + attr_vi =3D ntfs_attr_iget(VFS_I(ni), type, name, name_len); + if (IS_ERR(attr_vi)) { + ntfs_error(sb, "Failed to open just added attribute"); + goto rm_attr_err_out; + } + attr_ni =3D NTFS_I(attr_vi); + + /* Resize and set attribute value. */ + if (ntfs_attr_truncate(attr_ni, size) || + (val && (ntfs_inode_attr_pwrite(attr_vi, 0, size, val, false) !=3D size)= )) { + err =3D -EIO; + ntfs_error(sb, "Failed to initialize just added attribute"); + if (ntfs_attr_rm(attr_ni)) + ntfs_error(sb, "Failed to remove just added attribute"); + iput(attr_vi); + goto err_out; + } + iput(attr_vi); + return 0; + +rm_attr_err_out: + /* Remove just added attribute. */ + ni_mrec =3D map_mft_record(attr_ni); + if (!IS_ERR(ni_mrec)) { + if (ntfs_attr_record_resize(ni_mrec, + (struct attr_record *)((u8 *)ni_mrec + offset), 0)) + ntfs_error(sb, "Failed to remove just added attribute #2"); + unmap_mft_record(attr_ni); + } else + pr_err("EIO when try to remove new added attr\n"); + +free_err_out: + /* Free MFT record, if it doesn't contain attributes. 
*/ + ni_mrec =3D map_mft_record(attr_ni); + if (!IS_ERR(ni_mrec)) { + int attr_size; + + attr_size =3D le32_to_cpu(ni_mrec->bytes_in_use) - + le16_to_cpu(ni_mrec->attrs_offset); + unmap_mft_record(attr_ni); + if (attr_size =3D=3D 8) { + if (ntfs_mft_record_free(attr_ni->vol, attr_ni)) + ntfs_error(sb, "Failed to free MFT record"); + if (attr_ni->nr_extents < 0) + ntfs_inode_close(attr_ni); + } + } else + pr_err("EIO when testing mft record is free-able\n"); + +err_out: + return err; +} + +/** + * __ntfs_attr_init - primary initialization of an ntfs attribute structure + * @ni: ntfs attribute inode to initialize + * @ni: ntfs inode with which to initialize the ntfs attribute + * @type: attribute type + * @name: attribute name in little endian Unicode or NULL + * @name_len: length of attribute @name in Unicode characters (if @name gi= ven) + * + * Initialize the ntfs attribute @na with @ni, @type, @name, and @name_len. + */ +static void __ntfs_attr_init(struct ntfs_inode *ni, + const __le32 type, __le16 *name, const u32 name_len) +{ + ni->runlist.rl =3D NULL; + ni->type =3D type; + ni->name =3D name; + if (name) + ni->name_len =3D name_len; + else + ni->name_len =3D 0; +} + +/** + * ntfs_attr_init - initialize an ntfs_attr with data sizes and status + * Final initialization for an ntfs attribute. 
+ */ +static void ntfs_attr_init(struct ntfs_inode *ni, const bool non_resident, + const bool compressed, const bool encrypted, const bool sparse, + const s64 allocated_size, const s64 data_size, + const s64 initialized_size, const s64 compressed_size, + const u8 compression_unit) +{ + if (non_resident) + NInoSetNonResident(ni); + if (compressed) { + NInoSetCompressed(ni); + ni->flags |=3D FILE_ATTR_COMPRESSED; + } + if (encrypted) { + NInoSetEncrypted(ni); + ni->flags |=3D FILE_ATTR_ENCRYPTED; + } + if (sparse) { + NInoSetSparse(ni); + ni->flags |=3D FILE_ATTR_SPARSE_FILE; + } + ni->allocated_size =3D allocated_size; + ni->data_size =3D data_size; + ni->initialized_size =3D initialized_size; + if (compressed || sparse) { + struct ntfs_volume *vol =3D ni->vol; + + ni->itype.compressed.size =3D compressed_size; + ni->itype.compressed.block_clusters =3D 1 << compression_unit; + ni->itype.compressed.block_size =3D 1 << (compression_unit + + vol->cluster_size_bits); + ni->itype.compressed.block_size_bits =3D ffs( + ni->itype.compressed.block_size) - 1; + } +} + +/** + * ntfs_attr_open - open an ntfs attribute for access + * @ni: open ntfs inode in which the ntfs attribute resides + * @type: attribute type + * @name: attribute name in little endian Unicode or AT_UNNAMED or NULL + * @name_len: length of attribute @name in Unicode characters (if @name gi= ven) + */ +int ntfs_attr_open(struct ntfs_inode *ni, const __le32 type, + __le16 *name, u32 name_len) +{ + struct ntfs_attr_search_ctx *ctx; + __le16 *newname =3D NULL; + struct attr_record *a; + bool cs; + struct ntfs_inode *base_ni; + int err; + + ntfs_debug("Entering for inode %lld, attr 0x%x.\n", + (unsigned long long)ni->mft_no, type); + + if (!ni || !ni->vol) + return -EINVAL; + + if (NInoAttr(ni)) + base_ni =3D ni->ext.base_ntfs_ino; + else + base_ni =3D ni; + + if (name && name !=3D AT_UNNAMED && name !=3D I30) { + name =3D ntfs_ucsndup(name, name_len); + if (!name) { + err =3D -ENOMEM; + goto err_out; + } + 
newname =3D name; + } + + ctx =3D ntfs_attr_get_search_ctx(base_ni, NULL); + if (!ctx) { + err =3D -ENOMEM; + pr_err("%s: Failed to get search context", __func__); + goto err_out; + } + + err =3D ntfs_attr_lookup(type, name, name_len, 0, 0, NULL, 0, ctx); + if (err) + goto put_err_out; + + a =3D ctx->attr; + + if (!name) { + if (a->name_length) { + name =3D ntfs_ucsndup((__le16 *)((u8 *)a + le16_to_cpu(a->name_offset)), + a->name_length); + if (!name) + goto put_err_out; + newname =3D name; + name_len =3D a->name_length; + } else { + name =3D AT_UNNAMED; + name_len =3D 0; + } + } + + __ntfs_attr_init(ni, type, name, name_len); + + /* + * Wipe the flags in case they are not zero for an attribute list + * attribute. Windows does not complain about invalid flags and chkdsk + * does not detect or fix them so we need to cope with it, too. + */ + if (type =3D=3D AT_ATTRIBUTE_LIST) + a->flags =3D 0; + + if ((type =3D=3D AT_DATA) && + (a->non_resident ? !a->data.non_resident.initialized_size : + !a->data.resident.value_length)) { + /* + * Define/redefine the compression state if stream is + * empty, based on the compression mark on parent + * directory (for unnamed data streams) or on current + * inode (for named data streams). The compression mark + * may change any time, the compression state can only + * change when stream is wiped out. 
+	 *
+	 * Also prevent compression on NTFS versions earlier than 3.0, when
+	 * the cluster size is larger than 4K, or when compression is disabled
+	 * on the volume.
+	 */
+	a->flags &= ~ATTR_COMPRESSION_MASK;
+	if (NInoCompressed(ni)
+	    && (ni->vol->major_ver >= 3)
+	    && NVolCompression(ni->vol)
+	    && (ni->vol->cluster_size <= MAX_COMPRESSION_CLUSTER_SIZE))
+		a->flags |= ATTR_IS_COMPRESSED;
+	}
+
+	cs = a->flags & (ATTR_IS_COMPRESSED | ATTR_IS_SPARSE);
+
+	if (ni->type == AT_DATA && ni->name == AT_UNNAMED &&
+	    ((!(a->flags & ATTR_IS_COMPRESSED) != !NInoCompressed(ni)) ||
+	     (!(a->flags & ATTR_IS_SPARSE) != !NInoSparse(ni)) ||
+	     (!(a->flags & ATTR_IS_ENCRYPTED) != !NInoEncrypted(ni)))) {
+		err = -EIO;
+		pr_err("Inode %lld has corrupt attribute flags (0x%x <> 0x%x)\n",
+			(unsigned long long)ni->mft_no,
+			a->flags, ni->flags);
+		goto put_err_out;
+	}
+
+	if (a->non_resident) {
+		if (((a->flags & ATTR_COMPRESSION_MASK) || a->data.non_resident.compression_unit) &&
+		    (ni->vol->major_ver < 3)) {
+			err = -EIO;
+			pr_err("Compressed inode %lld not allowed on NTFS %d.%d\n",
+				(unsigned long long)ni->mft_no,
+				ni->vol->major_ver,
+				ni->vol->minor_ver);
+			goto put_err_out;
+		}
+
+		if ((a->flags & ATTR_IS_COMPRESSED) && !a->data.non_resident.compression_unit) {
+			err = -EIO;
+			pr_err("Compressed inode %lld attr 0x%x has no compression unit\n",
+				(unsigned long long)ni->mft_no, type);
+			goto put_err_out;
+		}
+		if ((a->flags & ATTR_COMPRESSION_MASK) &&
+		    (a->data.non_resident.compression_unit != STANDARD_COMPRESSION_UNIT)) {
+			err = -EIO;
+			pr_err("Compressed inode %lld attr 0x%lx has an unsupported compression unit %d\n",
+				(unsigned long long)ni->mft_no,
+				(long)le32_to_cpu(type),
+				(int)a->data.non_resident.compression_unit);
+			goto put_err_out;
+		}
+		ntfs_attr_init(ni, true, a->flags & ATTR_IS_COMPRESSED,
+			a->flags & ATTR_IS_ENCRYPTED,
+			a->flags & ATTR_IS_SPARSE,
+			le64_to_cpu(a->data.non_resident.allocated_size),
+			le64_to_cpu(a->data.non_resident.data_size),
+
le64_to_cpu(a->data.non_resident.initialized_size), + cs ? le64_to_cpu(a->data.non_resident.compressed_size) : 0, + cs ? a->data.non_resident.compression_unit : 0); + } else { + s64 l =3D le32_to_cpu(a->data.resident.value_length); + + ntfs_attr_init(ni, false, a->flags & ATTR_IS_COMPRESSED, + a->flags & ATTR_IS_ENCRYPTED, + a->flags & ATTR_IS_SPARSE, (l + 7) & ~7, l, l, + cs ? (l + 7) & ~7 : 0, 0); + } + ntfs_attr_put_search_ctx(ctx); +out: + ntfs_debug("\n"); + return err; + +put_err_out: + ntfs_attr_put_search_ctx(ctx); +err_out: + ntfs_free(newname); + goto out; +} + +/** + * ntfs_attr_close - free an ntfs attribute structure + * @ni: ntfs inode to free + * + * Release all memory associated with the ntfs attribute @na and then rele= ase + * @na itself. + */ +void ntfs_attr_close(struct ntfs_inode *ni) +{ + if (NInoNonResident(ni) && ni->runlist.rl) + ntfs_free(ni->runlist.rl); + /* Don't release if using an internal constant. */ + if (ni->name !=3D AT_UNNAMED && ni->name !=3D I30) + ntfs_free(ni->name); +} + +/** + * ntfs_attr_map_whole_runlist - map the whole runlist of an ntfs attribute + * @ni: ntfs inode for which to map the runlist + * + * Map the whole runlist of the ntfs attribute @na. For an attribute made= up + * of only one attribute extent this is the same as calling + * ntfs_map_runlist(ni, 0) but for an attribute with multiple extents this + * will map the runlist fragments from each of the extents thus giving acc= ess + * to the entirety of the disk allocation of an attribute. 
+ */ +int ntfs_attr_map_whole_runlist(struct ntfs_inode *ni) +{ + s64 next_vcn, last_vcn, highest_vcn; + struct ntfs_attr_search_ctx *ctx; + struct ntfs_volume *vol =3D ni->vol; + struct super_block *sb =3D vol->sb; + struct attr_record *a; + int err; + struct ntfs_inode *base_ni; + int not_mapped; + size_t new_rl_count; + + ntfs_debug("Entering for inode 0x%llx, attr 0x%x.\n", + (unsigned long long)ni->mft_no, ni->type); + + if (NInoFullyMapped(ni) && ni->runlist.rl) + return 0; + + if (NInoAttr(ni)) + base_ni =3D ni->ext.base_ntfs_ino; + else + base_ni =3D ni; + + ctx =3D ntfs_attr_get_search_ctx(base_ni, NULL); + if (!ctx) { + ntfs_error(sb, "%s: Failed to get search context", __func__); + return -ENOMEM; + } + + /* Map all attribute extents one by one. */ + next_vcn =3D last_vcn =3D highest_vcn =3D 0; + a =3D NULL; + while (1) { + struct runlist_element *rl; + + not_mapped =3D 0; + if (ntfs_rl_vcn_to_lcn(ni->runlist.rl, next_vcn) =3D=3D LCN_RL_NOT_MAPPE= D) + not_mapped =3D 1; + + err =3D ntfs_attr_lookup(ni->type, ni->name, ni->name_len, + CASE_SENSITIVE, next_vcn, NULL, 0, ctx); + if (err) + break; + + a =3D ctx->attr; + + if (not_mapped) { + /* Decode the runlist. */ + rl =3D ntfs_mapping_pairs_decompress(ni->vol, a, &ni->runlist, + &new_rl_count); + if (IS_ERR(rl)) { + err =3D PTR_ERR(rl); + goto err_out; + } + ni->runlist.rl =3D rl; + ni->runlist.count =3D new_rl_count; + } + + /* Are we in the first extent? */ + if (!next_vcn) { + if (a->data.non_resident.lowest_vcn) { + err =3D -EIO; + ntfs_error(sb, + "First extent of inode %llu attribute has non-zero lowest_vcn", + (unsigned long long)ni->mft_no); + goto err_out; + } + /* Get the last vcn in the attribute. */ + last_vcn =3D le64_to_cpu(a->data.non_resident.allocated_size) >> + vol->cluster_size_bits; + } + + /* Get the lowest vcn for the next extent. */ + highest_vcn =3D le64_to_cpu(a->data.non_resident.highest_vcn); + next_vcn =3D highest_vcn + 1; + + /* Only one extent or error, which we catch below. 
*/ + if (next_vcn <=3D 0) { + err =3D -ENOENT; + break; + } + + /* Avoid endless loops due to corruption. */ + if (next_vcn < le64_to_cpu(a->data.non_resident.lowest_vcn)) { + err =3D -EIO; + ntfs_error(sb, "Inode %llu has corrupt attribute list", + (unsigned long long)ni->mft_no); + goto err_out; + } + } + if (!a) { + ntfs_error(sb, "Couldn't find attribute for runlist mapping"); + goto err_out; + } + if (not_mapped && highest_vcn && highest_vcn !=3D last_vcn - 1) { + err =3D -EIO; + ntfs_error(sb, + "Failed to load full runlist: inode: %llu highest_vcn: 0x%llx last_vcn:= 0x%llx", + (unsigned long long)ni->mft_no, + (long long)highest_vcn, (long long)last_vcn); + goto err_out; + } + ntfs_attr_put_search_ctx(ctx); + if (err =3D=3D -ENOENT) { + NInoSetFullyMapped(ni); + return 0; + } + + return err; + +err_out: + ntfs_attr_put_search_ctx(ctx); + return err; +} + +/** + * ntfs_attr_record_move_to - move attribute record to target inode + * @ctx: attribute search context describing the attribute record + * @ni: opened ntfs inode to which move attribute record + */ +int ntfs_attr_record_move_to(struct ntfs_attr_search_ctx *ctx, struct ntfs= _inode *ni) +{ + struct ntfs_attr_search_ctx *nctx; + struct attr_record *a; + int err; + struct mft_record *ni_mrec; + struct super_block *sb; + + if (!ctx || !ctx->attr || !ctx->ntfs_ino || !ni) { + ntfs_debug("Invalid arguments passed.\n"); + return -EINVAL; + } + + sb =3D ni->vol->sb; + ntfs_debug("Entering for ctx->attr->type 0x%x, ctx->ntfs_ino->mft_no 0x%l= lx, ni->mft_no 0x%llx.\n", + (unsigned int) le32_to_cpu(ctx->attr->type), + (long long) ctx->ntfs_ino->mft_no, + (long long) ni->mft_no); + + if (ctx->ntfs_ino =3D=3D ni) + return 0; + + if (!ctx->al_entry) { + ntfs_debug("Inode should contain attribute list to use this function.\n"= ); + return -EINVAL; + } + + /* Find place in MFT record where attribute will be moved. 
*/ + a =3D ctx->attr; + nctx =3D ntfs_attr_get_search_ctx(ni, NULL); + if (!nctx) { + ntfs_error(sb, "%s: Failed to get search context", __func__); + return -ENOMEM; + } + + /* + * Use ntfs_attr_find instead of ntfs_attr_lookup to find place for + * attribute in @ni->mrec, not any extent inode in case if @ni is base + * file record. + */ + err =3D ntfs_attr_find(a->type, (__le16 *)((u8 *)a + le16_to_cpu(a->name_= offset)), + a->name_length, CASE_SENSITIVE, NULL, + 0, nctx); + if (!err) { + ntfs_debug("Attribute of such type, with same name already present in th= is MFT record.\n"); + err =3D -EEXIST; + goto put_err_out; + } + if (err !=3D -ENOENT) { + ntfs_debug("Attribute lookup failed.\n"); + goto put_err_out; + } + + /* Make space and move attribute. */ + ni_mrec =3D map_mft_record(ni); + if (IS_ERR(ni_mrec)) { + err =3D -EIO; + goto put_err_out; + } + + err =3D ntfs_make_room_for_attr(ni_mrec, (u8 *) nctx->attr, + le32_to_cpu(a->length)); + if (err) { + ntfs_debug("Couldn't make space for attribute.\n"); + unmap_mft_record(ni); + goto put_err_out; + } + memcpy(nctx->attr, a, le32_to_cpu(a->length)); + nctx->attr->instance =3D nctx->mrec->next_attr_instance; + nctx->mrec->next_attr_instance =3D + cpu_to_le16((le16_to_cpu(nctx->mrec->next_attr_instance) + 1) & 0xffff); + ntfs_attr_record_resize(ctx->mrec, a, 0); + mark_mft_record_dirty(ctx->ntfs_ino); + mark_mft_record_dirty(ni); + + /* Update attribute list. 
*/ + ctx->al_entry->mft_reference =3D + MK_LE_MREF(ni->mft_no, le16_to_cpu(ni_mrec->sequence_number)); + ctx->al_entry->instance =3D nctx->attr->instance; + unmap_mft_record(ni); +put_err_out: + ntfs_attr_put_search_ctx(nctx); + return err; +} + +/** + * ntfs_attr_record_move_away - move away attribute record from it's mft r= ecord + * @ctx: attribute search context describing the attribute record + * @extra: minimum amount of free space in the new holder of record + */ +int ntfs_attr_record_move_away(struct ntfs_attr_search_ctx *ctx, int extra) +{ + struct ntfs_inode *base_ni, *ni =3D NULL; + struct mft_record *m; + int i, err; + struct super_block *sb; + + if (!ctx || !ctx->attr || !ctx->ntfs_ino || extra < 0) + return -EINVAL; + + ntfs_debug("Entering for attr 0x%x, inode %llu\n", + (unsigned int) le32_to_cpu(ctx->attr->type), + (unsigned long long)ctx->ntfs_ino->mft_no); + + if (ctx->ntfs_ino->nr_extents =3D=3D -1) + base_ni =3D ctx->base_ntfs_ino; + else + base_ni =3D ctx->ntfs_ino; + + sb =3D ctx->ntfs_ino->vol->sb; + if (!NInoAttrList(base_ni)) { + ntfs_error(sb, "Inode %llu has no attrlist", + (unsigned long long)base_ni->mft_no); + return -EINVAL; + } + + err =3D ntfs_inode_attach_all_extents(ctx->ntfs_ino); + if (err) { + ntfs_error(sb, "Couldn't attach extents, inode=3D%llu", + (unsigned long long)base_ni->mft_no); + return err; + } + + mutex_lock(&base_ni->extent_lock); + /* Walk through all extents and try to move attribute to them. 
*/ + for (i =3D 0; i < base_ni->nr_extents; i++) { + ni =3D base_ni->ext.extent_ntfs_inos[i]; + + if (ctx->ntfs_ino->mft_no =3D=3D ni->mft_no) + continue; + m =3D map_mft_record(ni); + if (IS_ERR(m)) { + ntfs_error(sb, "Can not map mft record for mft_no %lld", + (unsigned long long)ni->mft_no); + mutex_unlock(&base_ni->extent_lock); + return -EIO; + } + if (le32_to_cpu(m->bytes_allocated) - + le32_to_cpu(m->bytes_in_use) < le32_to_cpu(ctx->attr->length) + extr= a) { + unmap_mft_record(ni); + continue; + } + unmap_mft_record(ni); + + /* + * ntfs_attr_record_move_to can fail if extent with other lowest + * s64 already present in inode we trying move record to. So, + * do not return error. + */ + if (!ntfs_attr_record_move_to(ctx, ni)) { + mutex_unlock(&base_ni->extent_lock); + return 0; + } + } + mutex_unlock(&base_ni->extent_lock); + + /* + * Failed to move attribute to one of the current extents, so allocate + * new extent and move attribute to it. + */ + ni =3D NULL; + err =3D ntfs_mft_record_alloc(base_ni->vol, 0, &ni, base_ni, NULL); + if (err) { + ntfs_error(sb, "Couldn't allocate MFT record, err : %d", err); + return err; + } + unmap_mft_record(ni); + + err =3D ntfs_attr_record_move_to(ctx, ni); + if (err) + ntfs_error(sb, "Couldn't move attribute to MFT record"); + + return err; +} + +/* + * If we are in the first extent, then set/clean sparse bit, + * update allocated and compressed size. 
+ */ +static int ntfs_attr_update_meta(struct attr_record *a, struct ntfs_inode = *ni, + struct mft_record *m, struct ntfs_attr_search_ctx *ctx) +{ + int sparse, err =3D 0; + struct ntfs_inode *base_ni; + struct super_block *sb =3D ni->vol->sb; + + ntfs_debug("Entering for inode 0x%llx, attr 0x%x\n", + (unsigned long long)ni->mft_no, ni->type); + + if (NInoAttr(ni)) + base_ni =3D ni->ext.base_ntfs_ino; + else + base_ni =3D ni; + + if (a->data.non_resident.lowest_vcn) + goto out; + + a->data.non_resident.allocated_size =3D cpu_to_le64(ni->allocated_size); + + sparse =3D ntfs_rl_sparse(ni->runlist.rl); + if (sparse < 0) { + err =3D -EIO; + goto out; + } + + /* Attribute become sparse. */ + if (sparse && !(a->flags & (ATTR_IS_SPARSE | ATTR_IS_COMPRESSED))) { + /* + * Move attribute to another mft record, if attribute is too + * small to add compressed_size field to it and we have no + * free space in the current mft record. + */ + if ((le32_to_cpu(a->length) - + le16_to_cpu(a->data.non_resident.mapping_pairs_offset) =3D=3D 8) && + !(le32_to_cpu(m->bytes_allocated) - le32_to_cpu(m->bytes_in_use))) { + + if (!NInoAttrList(base_ni)) { + err =3D ntfs_inode_add_attrlist(base_ni); + if (err) + goto out; + err =3D -EAGAIN; + goto out; + } + err =3D ntfs_attr_record_move_away(ctx, 8); + if (err) { + ntfs_error(sb, "Failed to move attribute"); + goto out; + } + + err =3D ntfs_attrlist_update(base_ni); + if (err) + goto out; + err =3D -EAGAIN; + goto out; + } + if (!(le32_to_cpu(a->length) - + le16_to_cpu(a->data.non_resident.mapping_pairs_offset))) { + err =3D -EIO; + ntfs_error(sb, "Mapping pairs space is 0"); + goto out; + } + + NInoSetSparse(ni); + ni->flags |=3D FILE_ATTR_SPARSE_FILE; + a->flags |=3D ATTR_IS_SPARSE; + a->data.non_resident.compression_unit =3D 0; + + memmove((u8 *)a + le16_to_cpu(a->name_offset) + 8, + (u8 *)a + le16_to_cpu(a->name_offset), + a->name_length * sizeof(__le16)); + + a->name_offset =3D cpu_to_le16(le16_to_cpu(a->name_offset) + 8); + + 
a->data.non_resident.mapping_pairs_offset =3D + cpu_to_le16(le16_to_cpu(a->data.non_resident.mapping_pairs_offset) + 8); + } + + /* Attribute no longer sparse. */ + if (!sparse && (a->flags & ATTR_IS_SPARSE) && + !(a->flags & ATTR_IS_COMPRESSED)) { + NInoClearSparse(ni); + ni->flags &=3D ~FILE_ATTR_SPARSE_FILE; + a->flags &=3D ~ATTR_IS_SPARSE; + a->data.non_resident.compression_unit =3D 0; + + memmove((u8 *)a + le16_to_cpu(a->name_offset) - 8, + (u8 *)a + le16_to_cpu(a->name_offset), + a->name_length * sizeof(__le16)); + + if (le16_to_cpu(a->name_offset) >=3D 8) + a->name_offset =3D cpu_to_le16(le16_to_cpu(a->name_offset) - 8); + + a->data.non_resident.mapping_pairs_offset =3D + cpu_to_le16(le16_to_cpu(a->data.non_resident.mapping_pairs_offset) - 8); + } + + /* Update compressed size if required. */ + if (NInoFullyMapped(ni) && (sparse || NInoCompressed(ni))) { + s64 new_compr_size; + + new_compr_size =3D ntfs_rl_get_compressed_size(ni->vol, ni->runlist.rl); + if (new_compr_size < 0) { + err =3D new_compr_size; + goto out; + } + + ni->itype.compressed.size =3D new_compr_size; + a->data.non_resident.compressed_size =3D cpu_to_le64(new_compr_size); + } + + if (NInoSparse(ni) || NInoCompressed(ni)) + VFS_I(base_ni)->i_blocks =3D ni->itype.compressed.size >> 9; + else + VFS_I(base_ni)->i_blocks =3D ni->allocated_size >> 9; + /* + * Set FILE_NAME dirty flag, to update sparse bit and + * allocated size in the index. + */ + if (ni->type =3D=3D AT_DATA && ni->name =3D=3D AT_UNNAMED) + NInoSetFileNameDirty(ni); +out: + return err; +} + +#define NTFS_VCN_DELETE_MARK -2 +/** + * ntfs_attr_update_mapping_pairs - update mapping pairs for ntfs attribute + * @ni: non-resident ntfs inode for which we need update + * @from_vcn: update runlist starting this VCN + * + * Build mapping pairs from @na->rl and write them to the disk. Also, this + * function updates sparse bit, allocated and compressed size (allocates/f= rees + * space for this field if required). 
+ * + * @na->allocated_size should be set to correct value for the new runlist = before + * call to this function. Vice-versa @na->compressed_size will be calculat= ed and + * set to correct value during this function. + */ +int ntfs_attr_update_mapping_pairs(struct ntfs_inode *ni, s64 from_vcn) +{ + struct ntfs_attr_search_ctx *ctx; + struct ntfs_inode *base_ni; + struct mft_record *m; + struct attr_record *a; + s64 stop_vcn; + int err =3D 0, mp_size, cur_max_mp_size, exp_max_mp_size; + bool finished_build; + bool first_updated =3D false; + struct super_block *sb; + struct runlist_element *start_rl; + unsigned int de_cluster_count =3D 0; + +retry: + if (!ni || !ni->runlist.rl) + return -EINVAL; + + ntfs_debug("Entering for inode %llu, attr 0x%x\n", + (unsigned long long)ni->mft_no, ni->type); + + sb =3D ni->vol->sb; + if (!NInoNonResident(ni)) { + ntfs_error(sb, "%s: resident attribute", __func__); + return -EINVAL; + } + + if (ni->nr_extents =3D=3D -1) + base_ni =3D ni->ext.base_ntfs_ino; + else + base_ni =3D ni; + + ctx =3D ntfs_attr_get_search_ctx(base_ni, NULL); + if (!ctx) { + ntfs_error(sb, "%s: Failed to get search context", __func__); + return -ENOMEM; + } + + /* Fill attribute records with new mapping pairs. */ + stop_vcn =3D 0; + finished_build =3D false; + start_rl =3D ni->runlist.rl; + while (!(err =3D ntfs_attr_lookup(ni->type, ni->name, ni->name_len, + CASE_SENSITIVE, from_vcn, NULL, 0, ctx))) { + unsigned int de_cnt =3D 0; + + a =3D ctx->attr; + m =3D ctx->mrec; + if (!a->data.non_resident.lowest_vcn) + first_updated =3D true; + + /* + * If runlist is updating not from the beginning, then set + * @stop_vcn properly, i.e. to the lowest vcn of record that + * contain @from_vcn. Also we do not need @from_vcn anymore, + * set it to 0 to make ntfs_attr_lookup enumerate attributes. 
+ */ + if (from_vcn) { + s64 first_lcn; + + stop_vcn =3D le64_to_cpu(a->data.non_resident.lowest_vcn); + from_vcn =3D 0; + /* + * Check whether the first run we need to update is + * the last run in runlist, if so, then deallocate + * all attrubute extents starting this one. + */ + first_lcn =3D ntfs_rl_vcn_to_lcn(ni->runlist.rl, stop_vcn); + if (first_lcn =3D=3D LCN_EINVAL) { + err =3D -EIO; + ntfs_error(sb, "Bad runlist"); + goto put_err_out; + } + if (first_lcn =3D=3D LCN_ENOENT || + first_lcn =3D=3D LCN_RL_NOT_MAPPED) + finished_build =3D true; + } + + /* + * Check whether we finished mapping pairs build, if so mark + * extent as need to delete (by setting highest vcn to + * NTFS_VCN_DELETE_MARK (-2), we shall check it later and + * delete extent) and continue search. + */ + if (finished_build) { + ntfs_debug("Mark attr 0x%x for delete in inode 0x%lx.\n", + (unsigned int)le32_to_cpu(a->type), ctx->ntfs_ino->mft_no); + a->data.non_resident.highest_vcn =3D cpu_to_le64(NTFS_VCN_DELETE_MARK); + mark_mft_record_dirty(ctx->ntfs_ino); + continue; + } + + err =3D ntfs_attr_update_meta(a, ni, m, ctx); + if (err < 0) { + if (err =3D=3D -EAGAIN) { + ntfs_attr_put_search_ctx(ctx); + goto retry; + } + goto put_err_out; + } + + /* + * Determine maximum possible length of mapping pairs, + * if we shall *not* expand space for mapping pairs. + */ + cur_max_mp_size =3D le32_to_cpu(a->length) - + le16_to_cpu(a->data.non_resident.mapping_pairs_offset); + /* + * Determine maximum possible length of mapping pairs in the + * current mft record, if we shall expand space for mapping + * pairs. + */ + exp_max_mp_size =3D le32_to_cpu(m->bytes_allocated) - + le32_to_cpu(m->bytes_in_use) + cur_max_mp_size; + + /* Get the size for the rest of mapping pairs array. 
*/ + mp_size =3D ntfs_get_size_for_mapping_pairs(ni->vol, start_rl, + stop_vcn, -1, exp_max_mp_size); + if (mp_size <=3D 0) { + err =3D mp_size; + ntfs_error(sb, "%s: get MP size failed", __func__); + goto put_err_out; + } + /* Test mapping pairs for fitting in the current mft record. */ + if (mp_size > exp_max_mp_size) { + /* + * Mapping pairs of $ATTRIBUTE_LIST attribute must fit + * in the base mft record. Try to move out other + * attributes and try again. + */ + if (ni->type =3D=3D AT_ATTRIBUTE_LIST) { + ntfs_attr_put_search_ctx(ctx); + if (ntfs_inode_free_space(base_ni, mp_size - + cur_max_mp_size)) { + ntfs_debug("Attribute list is too big. Defragment the volume\n"); + return -ENOSPC; + } + if (ntfs_attrlist_update(base_ni)) + return -EIO; + goto retry; + } + + /* Add attribute list if it isn't present, and retry. */ + if (!NInoAttrList(base_ni)) { + ntfs_attr_put_search_ctx(ctx); + if (ntfs_inode_add_attrlist(base_ni)) { + ntfs_error(sb, "Can not add attrlist"); + return -EIO; + } + goto retry; + } + + /* + * Set mapping pairs size to maximum possible for this + * mft record. We shall write the rest of mapping pairs + * to another MFT records. + */ + mp_size =3D exp_max_mp_size; + } + + /* Change space for mapping pairs if we need it. */ + if (((mp_size + 7) & ~7) !=3D cur_max_mp_size) { + if (ntfs_attr_record_resize(m, a, + le16_to_cpu(a->data.non_resident.mapping_pairs_offset) + + mp_size)) { + err =3D -EIO; + ntfs_error(sb, "Failed to resize attribute"); + goto put_err_out; + } + } + + /* Update lowest vcn. */ + a->data.non_resident.lowest_vcn =3D cpu_to_le64(stop_vcn); + mark_mft_record_dirty(ctx->ntfs_ino); + if ((ctx->ntfs_ino->nr_extents =3D=3D -1 || NInoAttrList(ctx->ntfs_ino))= && + ctx->attr->type !=3D AT_ATTRIBUTE_LIST) { + ctx->al_entry->lowest_vcn =3D cpu_to_le64(stop_vcn); + err =3D ntfs_attrlist_update(base_ni); + if (err) + goto put_err_out; + } + + /* + * Generate the new mapping pairs array directly into the + * correct destination, i.e. 
the attribute record itself. + */ + err =3D ntfs_mapping_pairs_build(ni->vol, + (u8 *)a + le16_to_cpu(a->data.non_resident.mapping_pairs_offset), + mp_size, start_rl, stop_vcn, -1, &stop_vcn, &start_rl, &de_cnt); + if (!err) + finished_build =3D true; + if (!finished_build && err !=3D -ENOSPC) { + ntfs_error(sb, "Failed to build mapping pairs"); + goto put_err_out; + } + a->data.non_resident.highest_vcn =3D cpu_to_le64(stop_vcn - 1); + mark_mft_record_dirty(ctx->ntfs_ino); + de_cluster_count +=3D de_cnt; + } + + /* Check whether error occurred. */ + if (err && err !=3D -ENOENT) { + ntfs_error(sb, "%s: Attribute lookup failed", __func__); + goto put_err_out; + } + + /* + * If the base extent was skipped in the above process, + * we still may have to update the sizes. + */ + if (!first_updated) { + ntfs_attr_reinit_search_ctx(ctx); + err =3D ntfs_attr_lookup(ni->type, ni->name, ni->name_len, + CASE_SENSITIVE, 0, NULL, 0, ctx); + if (!err) { + a =3D ctx->attr; + a->data.non_resident.allocated_size =3D cpu_to_le64(ni->allocated_size); + if (NInoCompressed(ni) || NInoSparse(ni)) + a->data.non_resident.compressed_size =3D + cpu_to_le64(ni->itype.compressed.size); + /* Updating sizes taints the extent holding the attr */ + if (ni->type =3D=3D AT_DATA && ni->name =3D=3D AT_UNNAMED) + NInoSetFileNameDirty(ni); + mark_mft_record_dirty(ctx->ntfs_ino); + } else { + ntfs_error(sb, "Failed to update sizes in base extent\n"); + goto put_err_out; + } + } + + /* Deallocate not used attribute extents and return with success. */ + if (finished_build) { + ntfs_attr_reinit_search_ctx(ctx); + ntfs_debug("Deallocate marked extents.\n"); + while (!(err =3D ntfs_attr_lookup(ni->type, ni->name, ni->name_len, + CASE_SENSITIVE, 0, NULL, 0, ctx))) { + if (le64_to_cpu(ctx->attr->data.non_resident.highest_vcn) !=3D + NTFS_VCN_DELETE_MARK) + continue; + /* Remove unused attribute record. 
*/ + err =3D ntfs_attr_record_rm(ctx); + if (err) { + ntfs_error(sb, "Could not remove unused attr"); + goto put_err_out; + } + ntfs_attr_reinit_search_ctx(ctx); + } + if (err && err !=3D -ENOENT) { + ntfs_error(sb, "%s: Attr lookup failed", __func__); + goto put_err_out; + } + ntfs_debug("Deallocate done.\n"); + ntfs_attr_put_search_ctx(ctx); + goto out; + } + ntfs_attr_put_search_ctx(ctx); + ctx =3D NULL; + + /* Allocate new MFT records for the rest of mapping pairs. */ + while (1) { + struct ntfs_inode *ext_ni =3D NULL; + unsigned int de_cnt =3D 0; + + /* Allocate new mft record. */ + err =3D ntfs_mft_record_alloc(ni->vol, 0, &ext_ni, base_ni, NULL); + if (err) { + ntfs_error(sb, "Failed to allocate extent record"); + goto put_err_out; + } + unmap_mft_record(ext_ni); + + m =3D map_mft_record(ext_ni); + if (IS_ERR(m)) { + ntfs_error(sb, "Could not map new MFT record"); + if (ntfs_mft_record_free(ni->vol, ext_ni)) + ntfs_error(sb, "Could not free MFT record"); + ntfs_inode_close(ext_ni); + err =3D -ENOMEM; + ext_ni =3D NULL; + goto put_err_out; + } + /* + * If mapping size exceed available space, set them to + * possible maximum. + */ + cur_max_mp_size =3D le32_to_cpu(m->bytes_allocated) - + le32_to_cpu(m->bytes_in_use) - + (sizeof(struct attr_record) + + ((NInoCompressed(ni) || NInoSparse(ni)) ? + sizeof(a->data.non_resident.compressed_size) : 0)) - + ((sizeof(__le16) * ni->name_len + 7) & ~7); + + /* Calculate size of rest mapping pairs. */ + mp_size =3D ntfs_get_size_for_mapping_pairs(ni->vol, + start_rl, stop_vcn, -1, cur_max_mp_size); + if (mp_size <=3D 0) { + unmap_mft_record(ext_ni); + ntfs_inode_close(ext_ni); + err =3D mp_size; + ntfs_error(sb, "%s: get mp size failed", __func__); + goto put_err_out; + } + + if (mp_size > cur_max_mp_size) + mp_size =3D cur_max_mp_size; + /* Add attribute extent to new record. 
*/ + err =3D ntfs_non_resident_attr_record_add(ext_ni, ni->type, + ni->name, ni->name_len, stop_vcn, mp_size, 0); + if (err < 0) { + ntfs_error(sb, "Could not add attribute extent"); + unmap_mft_record(ext_ni); + if (ntfs_mft_record_free(ni->vol, ext_ni)) + ntfs_error(sb, "Could not free MFT record"); + ntfs_inode_close(ext_ni); + goto put_err_out; + } + a =3D (struct attr_record *)((u8 *)m + err); + + err =3D ntfs_mapping_pairs_build(ni->vol, (u8 *)a + + le16_to_cpu(a->data.non_resident.mapping_pairs_offset), + mp_size, start_rl, stop_vcn, -1, &stop_vcn, &start_rl, + &de_cnt); + if (err < 0 && err !=3D -ENOSPC) { + ntfs_error(sb, "Failed to build MP"); + unmap_mft_record(ext_ni); + if (ntfs_mft_record_free(ni->vol, ext_ni)) + ntfs_error(sb, "Couldn't free MFT record"); + goto put_err_out; + } + a->data.non_resident.highest_vcn =3D cpu_to_le64(stop_vcn - 1); + mark_mft_record_dirty(ext_ni); + unmap_mft_record(ext_ni); + + de_cluster_count +=3D de_cnt; + /* All mapping pairs has been written. */ + if (!err) + break; + } +out: + if (from_vcn =3D=3D 0) + ni->i_dealloc_clusters =3D de_cluster_count; + return 0; + +put_err_out: + if (ctx) + ntfs_attr_put_search_ctx(ctx); + return err; +} + +/** + * ntfs_attr_make_resident - convert a non-resident to a resident attribute + * @ni: open ntfs attribute to make resident + * @ctx: ntfs search context describing the attribute + * + * Convert a non-resident ntfs attribute to a resident one. + */ +static int ntfs_attr_make_resident(struct ntfs_inode *ni, struct ntfs_attr= _search_ctx *ctx) +{ + struct ntfs_volume *vol =3D ni->vol; + struct super_block *sb =3D vol->sb; + struct attr_record *a =3D ctx->attr; + int name_ofs, val_ofs, err; + s64 arec_size; + + ntfs_debug("Entering for inode 0x%llx, attr 0x%x.\n", + (unsigned long long)ni->mft_no, ni->type); + + /* Should be called for the first extent of the attribute. */ + if (le64_to_cpu(a->data.non_resident.lowest_vcn)) { + ntfs_debug("Eeek! 
Should be called for the first extent of the attribut= e. Aborting...\n"); + return -EINVAL; + } + + /* Some preliminary sanity checking. */ + if (!NInoNonResident(ni)) { + ntfs_debug("Eeek! Trying to make resident attribute resident. Aborting.= ..\n"); + return -EINVAL; + } + + /* Make sure this is not $MFT/$BITMAP or Windows will not boot! */ + if (ni->type =3D=3D AT_BITMAP && ni->mft_no =3D=3D FILE_MFT) + return -EPERM; + + /* Check that the attribute is allowed to be resident. */ + err =3D ntfs_attr_can_be_resident(vol, ni->type); + if (err) + return err; + + if (NInoCompressed(ni) || NInoEncrypted(ni)) { + ntfs_debug("Making compressed or encrypted files resident is not impleme= nted yet.\n"); + return -EOPNOTSUPP; + } + + /* Work out offsets into and size of the resident attribute. */ + name_ofs =3D 24; /* =3D sizeof(resident_struct attr_record); */ + val_ofs =3D (name_ofs + a->name_length * sizeof(__le16) + 7) & ~7; + arec_size =3D (val_ofs + ni->data_size + 7) & ~7; + + /* Sanity check the size before we start modifying the attribute. */ + if (le32_to_cpu(ctx->mrec->bytes_in_use) - le32_to_cpu(a->length) + + arec_size > le32_to_cpu(ctx->mrec->bytes_allocated)) { + ntfs_debug("Not enough space to make attribute resident\n"); + return -ENOSPC; + } + + /* Read and cache the whole runlist if not already done. */ + err =3D ntfs_attr_map_whole_runlist(ni); + if (err) + return err; + + /* Move the attribute name if it exists and update the offset. */ + if (a->name_length) { + memmove((u8 *)a + name_ofs, (u8 *)a + le16_to_cpu(a->name_offset), + a->name_length * sizeof(__le16)); + } + a->name_offset =3D cpu_to_le16(name_ofs); + + /* Resize the resident part of the attribute record. */ + if (ntfs_attr_record_resize(ctx->mrec, a, arec_size) < 0) { + /* + * Bug, because ntfs_attr_record_resize should not fail (we + * already checked that attribute fits MFT record). + */ + ntfs_error(ctx->ntfs_ino->vol->sb, "BUG! Failed to resize attribute reco= rd. 
"); + return -EIO; + } + + /* Convert the attribute record to describe a resident attribute. */ + a->non_resident =3D 0; + a->flags =3D 0; + a->data.resident.value_length =3D cpu_to_le32(ni->data_size); + a->data.resident.value_offset =3D cpu_to_le16(val_ofs); + /* + * File names cannot be non-resident so we would never see this here + * but at least it serves as a reminder that there may be attributes + * for which we do need to set this flag. (AIA) + */ + if (a->type =3D=3D AT_FILE_NAME) + a->data.resident.flags =3D RESIDENT_ATTR_IS_INDEXED; + else + a->data.resident.flags =3D 0; + a->data.resident.reserved =3D 0; + + /* + * Deallocate clusters from the runlist. + * + * NOTE: We can use ntfs_cluster_free() because we have already mapped + * the whole run list and thus it doesn't matter that the attribute + * record is in a transiently corrupted state at this moment in time. + */ + err =3D ntfs_cluster_free(ni, 0, -1, ctx); + if (err) { + ntfs_error(sb, "Eeek! Failed to release allocated clusters"); + ntfs_debug("Ignoring error and leaving behind wasted clusters.\n"); + } + + /* Throw away the now unused runlist. */ + ntfs_free(ni->runlist.rl); + ni->runlist.rl =3D NULL; + ni->runlist.count =3D 0; + /* Update in-memory struct ntfs_attr. 
*/ + NInoClearNonResident(ni); + NInoClearCompressed(ni); + ni->flags &=3D ~FILE_ATTR_COMPRESSED; + NInoClearSparse(ni); + ni->flags &=3D ~FILE_ATTR_SPARSE_FILE; + NInoClearEncrypted(ni); + ni->flags &=3D ~FILE_ATTR_ENCRYPTED; + ni->initialized_size =3D ni->data_size; + ni->allocated_size =3D ni->itype.compressed.size =3D (ni->data_size + 7) = & ~7; + ni->itype.compressed.block_size =3D 0; + ni->itype.compressed.block_size_bits =3D ni->itype.compressed.block_clust= ers =3D 0; + return 0; +} + +/** + * ntfs_non_resident_attr_shrink - shrink a non-resident, open ntfs attrib= ute + * @ni: non-resident ntfs attribute to shrink + * @newsize: new size (in bytes) to which to shrink the attribute + * + * Reduce the size of a non-resident, open ntfs attribute @na to @newsize = bytes. + */ +static int ntfs_non_resident_attr_shrink(struct ntfs_inode *ni, const s64 = newsize) +{ + struct ntfs_volume *vol; + struct ntfs_attr_search_ctx *ctx; + s64 first_free_vcn; + s64 nr_freed_clusters; + int err; + struct ntfs_inode *base_ni; + + ntfs_debug("Inode 0x%llx attr 0x%x new size %lld\n", + (unsigned long long)ni->mft_no, ni->type, (long long)newsize); + + vol =3D ni->vol; + + if (NInoAttr(ni)) + base_ni =3D ni->ext.base_ntfs_ino; + else + base_ni =3D ni; + + /* + * Check the attribute type and the corresponding minimum size + * against @newsize and fail if @newsize is too small. + */ + err =3D ntfs_attr_size_bounds_check(vol, ni->type, newsize); + if (err) { + if (err =3D=3D -ERANGE) + ntfs_debug("Eeek! Size bounds check failed. Aborting...\n"); + else if (err =3D=3D -ENOENT) + err =3D -EIO; + return err; + } + + /* The first cluster outside the new allocation. */ + if (NInoCompressed(ni)) + /* + * For compressed files we must keep full compressions blocks, + * but currently we do not decompress/recompress the last + * block to truncate the data, so we may leave more allocated + * clusters than really needed. 
+ */ + first_free_vcn =3D (((newsize - 1) | (ni->itype.compressed.block_size - = 1)) + 1) >> + vol->cluster_size_bits; + else + first_free_vcn =3D (newsize + vol->cluster_size - 1) >> + vol->cluster_size_bits; + + if (first_free_vcn < 0) + return -EINVAL; + /* + * Compare the new allocation with the old one and only deallocate + * clusters if there is a change. + */ + if ((ni->allocated_size >> vol->cluster_size_bits) !=3D first_free_vcn) { + struct ntfs_attr_search_ctx *ctx; + + err =3D ntfs_attr_map_whole_runlist(ni); + if (err) { + ntfs_debug("Eeek! ntfs_attr_map_whole_runlist failed.\n"); + return err; + } + + ctx =3D ntfs_attr_get_search_ctx(ni, NULL); + if (!ctx) { + ntfs_error(vol->sb, "%s: Failed to get search context", __func__); + return -ENOMEM; + } + + /* Deallocate all clusters starting with the first free one. */ + nr_freed_clusters =3D ntfs_cluster_free(ni, first_free_vcn, -1, ctx); + if (nr_freed_clusters < 0) { + ntfs_debug("Eeek! Freeing of clusters failed. Aborting...\n"); + ntfs_attr_put_search_ctx(ctx); + return (int)nr_freed_clusters; + } + ntfs_attr_put_search_ctx(ctx); + + /* Truncate the runlist itself. */ + if (ntfs_rl_truncate_nolock(vol, &ni->runlist, first_free_vcn)) { + /* + * Failed to truncate the runlist, so just throw it + * away, it will be mapped afresh on next use. + */ + ntfs_free(ni->runlist.rl); + ni->runlist.rl =3D NULL; + ntfs_error(vol->sb, "Eeek! Run list truncation failed.\n"); + return -EIO; + } + + /* Prepare to mapping pairs update. */ + ni->allocated_size =3D first_free_vcn << vol->cluster_size_bits; + + if (NInoSparse(ni) || NInoCompressed(ni)) { + if (nr_freed_clusters) { + ni->itype.compressed.size -=3D nr_freed_clusters << + vol->cluster_size_bits; + VFS_I(base_ni)->i_blocks =3D ni->itype.compressed.size >> 9; + } + } else + VFS_I(base_ni)->i_blocks =3D ni->allocated_size >> 9; + + /* Write mapping pairs for new runlist. 
+		 */
+		err = ntfs_attr_update_mapping_pairs(ni, 0 /*first_free_vcn*/);
+		if (err) {
+			ntfs_debug("Eeek! Mapping pairs update failed. Leaving inconsistent metadata. Run chkdsk.\n");
+			return err;
+		}
+	}
+
+	/* Get the first attribute record. */
+	ctx = ntfs_attr_get_search_ctx(base_ni, NULL);
+	if (!ctx) {
+		ntfs_error(vol->sb, "%s: Failed to get search context", __func__);
+		return -ENOMEM;
+	}
+
+	err = ntfs_attr_lookup(ni->type, ni->name, ni->name_len, CASE_SENSITIVE,
+			       0, NULL, 0, ctx);
+	if (err) {
+		if (err == -ENOENT)
+			err = -EIO;
+		ntfs_debug("Eeek! Lookup of first attribute extent failed. Leaving inconsistent metadata.\n");
+		goto put_err_out;
+	}
+
+	/* Update the data and initialized sizes. */
+	ni->data_size = newsize;
+	ctx->attr->data.non_resident.data_size = cpu_to_le64(newsize);
+	if (newsize < ni->initialized_size) {
+		ni->initialized_size = newsize;
+		ctx->attr->data.non_resident.initialized_size = cpu_to_le64(newsize);
+	}
+	/* Update the data size in the index. */
+	if (ni->type == AT_DATA && ni->name == AT_UNNAMED)
+		NInoSetFileNameDirty(ni);
+
+	/* If the attribute now has zero size, make it resident. */
+	if (!newsize && !NInoEncrypted(ni) && !NInoCompressed(ni)) {
+		err = ntfs_attr_make_resident(ni, ctx);
+		if (err) {
+			/* If we couldn't make it resident, just continue. */
+			if (err != -EPERM)
+				ntfs_error(ni->vol->sb,
+					   "Failed to make attribute resident. Leaving as is...\n");
+		}
+	}
+
+	/* Set the inode dirty so it is written out later. */
+	mark_mft_record_dirty(ctx->ntfs_ino);
+	/* Done!
+	 */
+	ntfs_attr_put_search_ctx(ctx);
+	return 0;
+put_err_out:
+	ntfs_attr_put_search_ctx(ctx);
+	return err;
+}
+
+/**
+ * ntfs_non_resident_attr_expand - expand a non-resident, open ntfs attribute
+ * @ni:			non-resident ntfs attribute to expand
+ * @newsize:		new size (in bytes) to which to expand the attribute
+ * @prealloc_size:	preallocation size (in bytes) to which to expand the attribute
+ * @holes:		whether sparse holes may be created (HOLES_OK or HOLES_NO)
+ *
+ * Expand the size of a non-resident, open ntfs attribute @ni to @newsize bytes
+ * by allocating new clusters.
+ */
+static int ntfs_non_resident_attr_expand(struct ntfs_inode *ni, const s64 newsize,
+					 const s64 prealloc_size, unsigned int holes)
+{
+	s64 lcn_seek_from;
+	s64 first_free_vcn;
+	struct ntfs_volume *vol;
+	struct ntfs_attr_search_ctx *ctx = NULL;
+	struct runlist_element *rl, *rln;
+	s64 org_alloc_size, org_compressed_size;
+	int err, err2;
+	struct ntfs_inode *base_ni;
+	struct super_block *sb = ni->vol->sb;
+	size_t new_rl_count;
+
+	ntfs_debug("Inode 0x%llx, attr 0x%x, new size %lld old size %lld\n",
+		   (unsigned long long)ni->mft_no, ni->type,
+		   (long long)newsize, (long long)ni->data_size);
+
+	vol = ni->vol;
+
+	if (NInoAttr(ni))
+		base_ni = ni->ext.base_ntfs_ino;
+	else
+		base_ni = ni;
+
+	/*
+	 * Check the attribute type and the corresponding maximum size
+	 * against @newsize and fail if @newsize is too big.
+	 */
+	err = ntfs_attr_size_bounds_check(vol, ni->type, newsize);
+	if (err < 0) {
+		ntfs_error(sb, "%s: bounds check failed", __func__);
+		return err;
+	}
+
+	/* Save for future use. */
+	org_alloc_size = ni->allocated_size;
+	org_compressed_size = ni->itype.compressed.size;
+
+	/* The first cluster outside the new allocation.
+	 */
+	if (prealloc_size)
+		first_free_vcn = (prealloc_size + vol->cluster_size - 1) >>
+				vol->cluster_size_bits;
+	else
+		first_free_vcn = (newsize + vol->cluster_size - 1) >>
+				vol->cluster_size_bits;
+	if (first_free_vcn < 0)
+		return -EFBIG;
+
+	/*
+	 * Compare the new allocation with the old one and only allocate
+	 * clusters if there is a change.
+	 */
+	if ((ni->allocated_size >> vol->cluster_size_bits) < first_free_vcn) {
+		err = ntfs_attr_map_whole_runlist(ni);
+		if (err) {
+			ntfs_error(sb, "ntfs_attr_map_whole_runlist failed");
+			return err;
+		}
+
+		/*
+		 * If we extend the $DATA attribute on an NTFS 3+ volume, we
+		 * can add sparse runs instead of really allocating clusters.
+		 */
+		if ((ni->type == AT_DATA && (vol->major_ver >= 3 || !NInoSparseDisabled(ni))) &&
+		    (holes != HOLES_NO)) {
+			if (NInoCompressed(ni)) {
+				int last = 0, i = 0;
+				s64 alloc_size;
+				int more_entries =
+					round_up(first_free_vcn -
+						 (ni->allocated_size >>
+						  vol->cluster_size_bits),
+						 ni->itype.compressed.block_clusters) /
+					ni->itype.compressed.block_clusters;
+
+				while (ni->runlist.rl[last].length)
+					last++;
+
+				rl = ntfs_rl_realloc(ni->runlist.rl, last + 1,
+						     last + more_entries + 1);
+				if (IS_ERR(rl)) {
+					err = -ENOMEM;
+					goto put_err_out;
+				}
+
+				alloc_size = ni->allocated_size;
+				while (i++ < more_entries) {
+					rl[last].vcn = round_up(alloc_size, vol->cluster_size) >>
+							vol->cluster_size_bits;
+					rl[last].length = ni->itype.compressed.block_clusters -
+							(rl[last].vcn &
+							 (ni->itype.compressed.block_clusters - 1));
+					rl[last].lcn = LCN_HOLE;
+					last++;
+					alloc_size += ni->itype.compressed.block_size;
+				}
+
+				rl[last].vcn = first_free_vcn;
+				rl[last].lcn = LCN_ENOENT;
+				rl[last].length = 0;
+
+				ni->runlist.rl = rl;
+				ni->runlist.count += more_entries;
+			} else {
+				rl = ntfs_malloc_nofs(sizeof(struct runlist_element) * 2);
+				if (!rl) {
+					err = -ENOMEM;
+					goto put_err_out;
+				}
+
+				rl[0].vcn = (ni->allocated_size >>
+						vol->cluster_size_bits);
+				rl[0].lcn = LCN_HOLE;
+				rl[0].length = first_free_vcn -
+						(ni->allocated_size >> vol->cluster_size_bits);
+				rl[1].vcn = first_free_vcn;
+				rl[1].lcn = LCN_ENOENT;
+				rl[1].length = 0;
+			}
+		} else {
+			/*
+			 * Determine the first LCN after the last one of the
+			 * attribute. We will start seeking clusters from this
+			 * LCN to avoid fragmentation. If there are no valid
+			 * LCNs in the attribute let the cluster allocator
+			 * choose the starting LCN.
+			 */
+			lcn_seek_from = -1;
+			if (ni->runlist.rl->length) {
+				/* Seek to the last run list element. */
+				for (rl = ni->runlist.rl; (rl + 1)->length; rl++)
+					;
+				/*
+				 * If the last LCN is a hole or similar, seek
+				 * back to the last valid LCN.
+				 */
+				while (rl->lcn < 0 && rl != ni->runlist.rl)
+					rl--;
+				/*
+				 * Only set lcn_seek_from if the LCN is valid.
+				 */
+				if (rl->lcn >= 0)
+					lcn_seek_from = rl->lcn + rl->length;
+			}
+
+			rl = ntfs_cluster_alloc(vol, ni->allocated_size >>
+					vol->cluster_size_bits, first_free_vcn -
+					(ni->allocated_size >>
+					 vol->cluster_size_bits), lcn_seek_from,
+					DATA_ZONE, false, false, false);
+			if (IS_ERR(rl)) {
+				ntfs_debug("Cluster allocation failed (%lld)",
+					   (long long)first_free_vcn -
+					   ((long long)ni->allocated_size >>
+					    vol->cluster_size_bits));
+				return PTR_ERR(rl);
+			}
+		}
+
+		if (!NInoCompressed(ni)) {
+			/* Append the new clusters to the attribute runlist. */
+			rln = ntfs_runlists_merge(&ni->runlist, rl, 0, &new_rl_count);
+			if (IS_ERR(rln)) {
+				/* Failed, free the just allocated clusters. */
+				ntfs_error(sb, "Run list merge failed");
+				ntfs_cluster_free_from_rl(vol, rl);
+				ntfs_free(rl);
+				return -EIO;
+			}
+			ni->runlist.rl = rln;
+			ni->runlist.count = new_rl_count;
+		}
+
+		/*
+		 * Prepare for the mapping pairs update.
+		 */
+		ni->allocated_size = first_free_vcn << vol->cluster_size_bits;
+		err = ntfs_attr_update_mapping_pairs(ni, 0);
+		if (err) {
+			ntfs_debug("Mapping pairs update failed");
+			goto rollback;
+		}
+	}
+
+	ctx = ntfs_attr_get_search_ctx(base_ni, NULL);
+	if (!ctx) {
+		err = -ENOMEM;
+		if (ni->allocated_size == org_alloc_size)
+			return err;
+		goto rollback;
+	}
+
+	err = ntfs_attr_lookup(ni->type, ni->name, ni->name_len, CASE_SENSITIVE,
+			       0, NULL, 0, ctx);
+	if (err) {
+		if (err == -ENOENT)
+			err = -EIO;
+		if (ni->allocated_size != org_alloc_size)
+			goto rollback;
+		goto put_err_out;
+	}
+
+	/* Update the data size. */
+	ni->data_size = newsize;
+	ctx->attr->data.non_resident.data_size = cpu_to_le64(newsize);
+	/* Update the data size in the index. */
+	if (ni->type == AT_DATA && ni->name == AT_UNNAMED)
+		NInoSetFileNameDirty(ni);
+	/* Set the inode dirty so it is written out later. */
+	mark_mft_record_dirty(ctx->ntfs_ino);
+	/* Done! */
+	ntfs_attr_put_search_ctx(ctx);
+	return 0;
+rollback:
+	/* Free the allocated clusters. */
+	err2 = ntfs_cluster_free(ni, org_alloc_size >>
+				 vol->cluster_size_bits, -1, ctx);
+	if (err2)
+		ntfs_debug("Leaking clusters");
+
+	/* Now, truncate the runlist itself. */
+	down_write(&ni->runlist.lock);
+	err2 = ntfs_rl_truncate_nolock(vol, &ni->runlist, org_alloc_size >>
+				       vol->cluster_size_bits);
+	up_write(&ni->runlist.lock);
+	if (err2) {
+		/*
+		 * Failed to truncate the runlist, so just throw it away, it
+		 * will be mapped afresh on next use.
+		 */
+		ntfs_free(ni->runlist.rl);
+		ni->runlist.rl = NULL;
+		ntfs_error(sb, "Couldn't truncate runlist. Rollback failed");
+	} else {
+		/* Prepare for the mapping pairs update. */
+		ni->allocated_size = org_alloc_size;
+		/*
+		 * Restore the mapping pairs.
+		 */
+		down_read(&ni->runlist.lock);
+		if (ntfs_attr_update_mapping_pairs(ni, 0))
+			ntfs_error(sb, "Failed to restore old mapping pairs");
+		up_read(&ni->runlist.lock);
+
+		if (NInoSparse(ni) || NInoCompressed(ni)) {
+			ni->itype.compressed.size = org_compressed_size;
+			VFS_I(base_ni)->i_blocks = ni->itype.compressed.size >> 9;
+		} else
+			VFS_I(base_ni)->i_blocks = ni->allocated_size >> 9;
+	}
+	if (ctx)
+		ntfs_attr_put_search_ctx(ctx);
+	return err;
+put_err_out:
+	if (ctx)
+		ntfs_attr_put_search_ctx(ctx);
+	return err;
+}
+
+/**
+ * ntfs_resident_attr_resize - resize a resident, open ntfs attribute
+ * @attr_ni:		resident ntfs inode to resize
+ * @newsize:		new size (in bytes) to which to resize the attribute
+ * @prealloc_size:	preallocation size (in bytes) to which to resize the attribute
+ * @holes:		whether sparse holes may be created (HOLES_OK or HOLES_NO)
+ *
+ * Change the size of a resident, open ntfs attribute @attr_ni to @newsize bytes.
+ */
+static int ntfs_resident_attr_resize(struct ntfs_inode *attr_ni, const s64 newsize,
+				     const s64 prealloc_size, unsigned int holes)
+{
+	struct ntfs_attr_search_ctx *ctx;
+	struct ntfs_volume *vol = attr_ni->vol;
+	struct super_block *sb = vol->sb;
+	int err = -EIO;
+	struct ntfs_inode *base_ni, *ext_ni = NULL;
+
+attr_resize_again:
+	ntfs_debug("Inode 0x%llx attr 0x%x new size %lld\n",
+		   (unsigned long long)attr_ni->mft_no, attr_ni->type,
+		   (long long)newsize);
+
+	if (NInoAttr(attr_ni))
+		base_ni = attr_ni->ext.base_ntfs_ino;
+	else
+		base_ni = attr_ni;
+
+	/*
+	 * Get the attribute record that needs modification.
+	 */
+	ctx = ntfs_attr_get_search_ctx(base_ni, NULL);
+	if (!ctx) {
+		ntfs_error(sb, "%s: Failed to get search context", __func__);
+		return -ENOMEM;
+	}
+	err = ntfs_attr_lookup(attr_ni->type, attr_ni->name, attr_ni->name_len,
+			       0, 0, NULL, 0, ctx);
+	if (err) {
+		ntfs_error(sb, "ntfs_attr_lookup failed");
+		goto put_err_out;
+	}
+
+	/*
+	 * Check the attribute type and the corresponding minimum and maximum
+	 * sizes against @newsize and fail if @newsize is out of bounds.
+	 */
+	err = ntfs_attr_size_bounds_check(vol, attr_ni->type, newsize);
+	if (err) {
+		if (err == -ENOENT)
+			err = -EIO;
+		ntfs_debug("%s: bounds check failed", __func__);
+		goto put_err_out;
+	}
+	/*
+	 * If @newsize is bigger than the mft record we need to make the
+	 * attribute non-resident if the attribute type supports it. If it is
+	 * smaller we can go ahead and attempt the resize.
+	 */
+	if (newsize < vol->mft_record_size) {
+		/* Perform the resize of the attribute record. */
+		err = ntfs_resident_attr_value_resize(ctx->mrec, ctx->attr,
+						      newsize);
+		if (!err) {
+			/* Update the attribute size everywhere. */
+			attr_ni->data_size = attr_ni->initialized_size = newsize;
+			attr_ni->allocated_size = (newsize + 7) & ~7;
+			if (NInoCompressed(attr_ni) || NInoSparse(attr_ni))
+				attr_ni->itype.compressed.size = attr_ni->allocated_size;
+			if (attr_ni->type == AT_DATA && attr_ni->name == AT_UNNAMED)
+				NInoSetFileNameDirty(attr_ni);
+			goto resize_done;
+		}
+
+		/* Prefer AT_INDEX_ALLOCATION instead of AT_ATTRIBUTE_LIST */
+		if (err == -ENOSPC && ctx->attr->type == AT_INDEX_ROOT)
+			goto put_err_out;
+	}
+	/* There is not enough space in the mft record to perform the resize. */
+
+	/*
+	 * Make the attribute non-resident if possible.
+	 */
+	err = ntfs_attr_make_non_resident(attr_ni,
+			le32_to_cpu(ctx->attr->data.resident.value_length));
+	if (!err) {
+		mark_mft_record_dirty(ctx->ntfs_ino);
+		ntfs_attr_put_search_ctx(ctx);
+		/* Resize the now non-resident attribute. */
+		return ntfs_non_resident_attr_expand(attr_ni, newsize, prealloc_size, holes);
+	} else if (err != -ENOSPC && err != -EPERM) {
+		ntfs_error(sb, "Failed to make attribute non-resident");
+		goto put_err_out;
+	}
+
+	/* Try to make other attributes non-resident and retry each time. */
+	ntfs_attr_reinit_search_ctx(ctx);
+	while (!(err = ntfs_attr_lookup(AT_UNUSED, NULL, 0, 0, 0, NULL, 0, ctx))) {
+		struct inode *tvi;
+		struct attr_record *a;
+
+		a = ctx->attr;
+		if (a->non_resident || a->type == AT_ATTRIBUTE_LIST)
+			continue;
+
+		if (ntfs_attr_can_be_non_resident(vol, a->type))
+			continue;
+
+		/*
+		 * Check whether the conversion is worthwhile. Assume that the
+		 * mapping pairs will take 8 bytes.
+		 */
+		if (le32_to_cpu(a->length) <= (sizeof(struct attr_record) - sizeof(s64)) +
+				((a->name_length * sizeof(__le16) + 7) & ~7) + 8)
+			continue;
+
+		if (a->type == AT_DATA)
+			tvi = ntfs_iget(sb, base_ni->mft_no);
+		else
+			tvi = ntfs_attr_iget(VFS_I(base_ni), a->type,
+					(__le16 *)((u8 *)a + le16_to_cpu(a->name_offset)),
+					a->name_length);
+		if (IS_ERR(tvi)) {
+			ntfs_error(sb, "Couldn't open attribute");
+			continue;
+		}
+
+		if (ntfs_attr_make_non_resident(NTFS_I(tvi),
+				le32_to_cpu(ctx->attr->data.resident.value_length))) {
+			iput(tvi);
+			continue;
+		}
+
+		mark_mft_record_dirty(ctx->ntfs_ino);
+		iput(tvi);
+		ntfs_attr_put_search_ctx(ctx);
+		goto attr_resize_again;
+	}
+
+	/* Check whether an error occurred. */
+	if (err != -ENOENT) {
+		ntfs_error(sb, "%s: Attribute lookup failed 1", __func__);
+		goto put_err_out;
+	}
+
+	/*
+	 * The standard information and attribute list attributes can't be
+	 * moved out from the base MFT record, so try to move out others.
+	 */
+	if (attr_ni->type == AT_STANDARD_INFORMATION ||
+	    attr_ni->type == AT_ATTRIBUTE_LIST) {
+		ntfs_attr_put_search_ctx(ctx);
+
+		if (!NInoAttrList(base_ni)) {
+			err = ntfs_inode_add_attrlist(base_ni);
+			if (err)
+				return err;
+		}
+
+		err = ntfs_inode_free_space(base_ni, sizeof(struct attr_record));
+		if (err) {
+			err = -ENOSPC;
+			ntfs_error(sb,
+				   "Couldn't free space in the MFT record to make attribute list non resident");
+			return err;
+		}
+		err = ntfs_attrlist_update(base_ni);
+		if (err)
+			return err;
+		goto attr_resize_again;
+	}
+
+	/*
+	 * Move the attribute to a new mft record, creating an attribute list
+	 * attribute or modifying it if it is already present.
+	 */
+
+	/* Point the search context back to the attribute we need to resize. */
+	ntfs_attr_reinit_search_ctx(ctx);
+	err = ntfs_attr_lookup(attr_ni->type, attr_ni->name, attr_ni->name_len,
+			       CASE_SENSITIVE, 0, NULL, 0, ctx);
+	if (err) {
+		ntfs_error(sb, "%s: Attribute lookup failed 2", __func__);
+		goto put_err_out;
+	}
+
+	/*
+	 * Check whether the attribute is already alone in this MFT record.
+	 * 8 is added for the attribute terminator.
+	 */
+	if (le32_to_cpu(ctx->mrec->bytes_in_use) ==
+	    le16_to_cpu(ctx->mrec->attrs_offset) + le32_to_cpu(ctx->attr->length) + 8) {
+		err = -ENOSPC;
+		ntfs_debug("MFT record is filled with one attribute\n");
+		goto put_err_out;
+	}
+
+	/* Add an attribute list if not present. */
+	if (!NInoAttrList(base_ni)) {
+		ntfs_attr_put_search_ctx(ctx);
+		err = ntfs_inode_add_attrlist(base_ni);
+		if (err)
+			return err;
+		goto attr_resize_again;
+	}
+
+	/* Allocate a new mft record. */
+	err = ntfs_mft_record_alloc(base_ni->vol, 0, &ext_ni, base_ni, NULL);
+	if (err) {
+		ntfs_error(sb, "Couldn't allocate MFT record");
+		goto put_err_out;
+	}
+	unmap_mft_record(ext_ni);
+
+	/*
+	 * Move the attribute to it.
+	 */
+	err = ntfs_attr_record_move_to(ctx, ext_ni);
+	if (err) {
+		ntfs_error(sb, "Couldn't move attribute to new MFT record");
+		goto put_err_out;
+	}
+
+	err = ntfs_attrlist_update(base_ni);
+	if (err < 0)
+		goto put_err_out;
+
+	ntfs_attr_put_search_ctx(ctx);
+	/* Try to perform the resize once again. */
+	goto attr_resize_again;
+
+resize_done:
+	/*
+	 * Set the inode (and its base inode if it exists) dirty so it is
+	 * written out later.
+	 */
+	mark_mft_record_dirty(ctx->ntfs_ino);
+	ntfs_attr_put_search_ctx(ctx);
+	return 0;
+
+put_err_out:
+	ntfs_attr_put_search_ctx(ctx);
+	return err;
+}
+
+int __ntfs_attr_truncate_vfs(struct ntfs_inode *ni, const s64 newsize,
+			     const s64 i_size)
+{
+	int err = 0;
+
+	if (newsize < 0 ||
+	    (ni->mft_no == FILE_MFT && ni->type == AT_DATA)) {
+		ntfs_debug("Invalid arguments passed.\n");
+		return -EINVAL;
+	}
+
+	ntfs_debug("Entering for inode 0x%llx, attr 0x%x, size %lld\n",
+		   (unsigned long long)ni->mft_no, ni->type, newsize);
+
+	if (NInoNonResident(ni)) {
+		if (newsize > i_size) {
+			down_write(&ni->runlist.lock);
+			err = ntfs_non_resident_attr_expand(ni, newsize, 0,
+					NVolDisableSparse(ni->vol) ?
+					HOLES_NO : HOLES_OK);
+			up_write(&ni->runlist.lock);
+		} else
+			err = ntfs_non_resident_attr_shrink(ni, newsize);
+	} else
+		err = ntfs_resident_attr_resize(ni, newsize, 0,
+				NVolDisableSparse(ni->vol) ?
+				HOLES_NO : HOLES_OK);
+	ntfs_debug("Return status %d\n", err);
+	return err;
+}
+
+int ntfs_attr_expand(struct ntfs_inode *ni, const s64 newsize, const s64 prealloc_size)
+{
+	int err = 0;
+
+	if (newsize < 0 ||
+	    (ni->mft_no == FILE_MFT && ni->type == AT_DATA)) {
+		ntfs_debug("Invalid arguments passed.\n");
+		return -EINVAL;
+	}
+
+	ntfs_debug("Entering for inode 0x%llx, attr 0x%x, size %lld\n",
+		   (unsigned long long)ni->mft_no, ni->type, newsize);
+
+	if (ni->data_size == newsize) {
+		ntfs_debug("Size is already ok\n");
+		return 0;
+	}
+
+	/*
+	 * Encrypted attributes are not supported. We return access denied,
+	 * which is what Windows NT4 does, too.
+	 */
+	if (NInoEncrypted(ni)) {
+		pr_err("Failed to expand encrypted attribute");
+		return -EACCES;
+	}
+
+	if (NInoNonResident(ni)) {
+		if (newsize > ni->data_size)
+			err = ntfs_non_resident_attr_expand(ni, newsize, prealloc_size,
+					NVolDisableSparse(ni->vol) ?
+					HOLES_NO : HOLES_OK);
+	} else
+		err = ntfs_resident_attr_resize(ni, newsize, prealloc_size,
+				NVolDisableSparse(ni->vol) ?
+				HOLES_NO : HOLES_OK);
+	if (!err)
+		i_size_write(VFS_I(ni), newsize);
+	ntfs_debug("Return status %d\n", err);
+	return err;
+}
+
+/**
+ * ntfs_attr_truncate_i - resize an ntfs attribute
+ * @ni:		open ntfs inode to resize
+ * @newsize:	new size (in bytes) to which to resize the attribute
+ * @holes:	whether sparse holes may be created (HOLES_OK or HOLES_NO)
+ *
+ * Change the size of an open ntfs attribute @ni to @newsize bytes. If the
+ * attribute is made bigger and the attribute is resident the newly
+ * "allocated" space is cleared and if the attribute is non-resident the
+ * newly allocated space is marked as not initialised and no real allocation
+ * on disk is performed.
+ */
+int ntfs_attr_truncate_i(struct ntfs_inode *ni, const s64 newsize, unsigned int holes)
+{
+	int err;
+
+	if (newsize < 0 ||
+	    (ni->mft_no == FILE_MFT && ni->type == AT_DATA)) {
+		ntfs_debug("Invalid arguments passed.\n");
+		return -EINVAL;
+	}
+
+	ntfs_debug("Entering for inode 0x%llx, attr 0x%x, size %lld\n",
+		   (unsigned long long)ni->mft_no, ni->type, newsize);
+
+	if (ni->data_size == newsize) {
+		ntfs_debug("Size is already ok\n");
+		return 0;
+	}
+
+	/*
+	 * Encrypted attributes are not supported. We return access denied,
+	 * which is what Windows NT4 does, too.
+	 */
+	if (NInoEncrypted(ni)) {
+		pr_err("Failed to truncate encrypted attribute");
+		return -EACCES;
+	}
+
+	if (NInoCompressed(ni)) {
+		pr_err("Failed to truncate compressed attribute");
+		return -EOPNOTSUPP;
+	}
+
+	if (NInoNonResident(ni)) {
+		if (newsize > ni->data_size)
+			err = ntfs_non_resident_attr_expand(ni, newsize, 0, holes);
+		else
+			err = ntfs_non_resident_attr_shrink(ni, newsize);
+	} else
+		err = ntfs_resident_attr_resize(ni, newsize, 0, holes);
+	ntfs_debug("Return status %d\n", err);
+	return err;
+}
+
+/*
+ * Resize an attribute, creating a hole if relevant.
+ */
+int ntfs_attr_truncate(struct ntfs_inode *ni, const s64 newsize)
+{
+	return ntfs_attr_truncate_i(ni, newsize,
+			NVolDisableSparse(ni->vol) ?
+			HOLES_NO : HOLES_OK);
+}
+
+int ntfs_attr_map_cluster(struct ntfs_inode *ni, s64 vcn_start, s64 *lcn_start,
+			  s64 *lcn_count, s64 max_clu_count, bool *balloc, bool update_mp,
+			  bool skip_holes)
+{
+	struct ntfs_volume *vol = ni->vol;
+	struct ntfs_attr_search_ctx *ctx;
+	struct runlist_element *rl, *rlc;
+	s64 vcn = vcn_start, lcn, clu_count;
+	s64 lcn_seek_from = -1;
+	int err = 0;
+	size_t new_rl_count;
+
+	err = ntfs_attr_map_whole_runlist(ni);
+	if (err)
+		return err;
+
+	if (NInoAttr(ni))
+		ctx = ntfs_attr_get_search_ctx(ni->ext.base_ntfs_ino, NULL);
+	else
+		ctx = ntfs_attr_get_search_ctx(ni, NULL);
+	if (!ctx) {
+		ntfs_error(vol->sb, "%s: Failed to get search context", __func__);
+		return -ENOMEM;
+	}
+
+	err = ntfs_attr_lookup(ni->type, ni->name, ni->name_len,
+			       CASE_SENSITIVE, vcn, NULL, 0, ctx);
+	if (err) {
+		ntfs_error(vol->sb,
+			   "ntfs_attr_lookup failed, ntfs inode(mft_no : %ld) type : 0x%x, err : %d",
+			   ni->mft_no, ni->type, err);
+		goto out;
+	}
+
+	rl = ntfs_attr_find_vcn_nolock(ni, vcn, ctx);
+	if (IS_ERR(rl)) {
+		ntfs_error(vol->sb, "Failed to find run after mapping runlist.");
+		err = PTR_ERR(rl);
+		goto out;
+	}
+
+	lcn = ntfs_rl_vcn_to_lcn(rl, vcn);
+	clu_count = min(max_clu_count, rl->length - (vcn - rl->vcn));
+	if (lcn >= LCN_HOLE) {
+		if (lcn > LCN_DELALLOC ||
+		    (lcn == LCN_HOLE && skip_holes)) {
+			*lcn_start = lcn;
+			*lcn_count = clu_count;
+			*balloc = false;
+			goto out;
+		}
+	} else {
+		WARN_ON(lcn == LCN_RL_NOT_MAPPED);
+		if (lcn == LCN_ENOENT)
+			err = -ENOENT;
+		else
+			err = -EIO;
+		goto out;
+	}
+
+	/*
+	 * Search backwards to find the best LCN to start the seek from.
+	 */
+	rlc = rl;
+	while (rlc->vcn) {
+		rlc--;
+		if (rlc->lcn >= 0) {
+			/*
+			 * Avoid fragmenting a compressed file.
+			 * Windows does not do that, and it may not be
+			 * desirable for files which can be updated.
+			 */
+			if (NInoCompressed(ni))
+				lcn_seek_from = rlc->lcn + rlc->length;
+			else
+				lcn_seek_from = rlc->lcn + (vcn - rlc->vcn);
+			break;
+		}
+	}
+
+	if (lcn_seek_from == -1) {
+		/* Backwards search failed, search forwards. */
+		rlc = rl;
+		while (rlc->length) {
+			rlc++;
+			if (rlc->lcn >= 0) {
+				lcn_seek_from = rlc->lcn - (rlc->vcn - vcn);
+				if (lcn_seek_from < -1)
+					lcn_seek_from = -1;
+				break;
+			}
+		}
+	}
+
+	if (lcn_seek_from == -1 && ni->lcn_seek_trunc != LCN_RL_NOT_MAPPED) {
+		lcn_seek_from = ni->lcn_seek_trunc;
+		ni->lcn_seek_trunc = LCN_RL_NOT_MAPPED;
+	}
+
+	rlc = ntfs_cluster_alloc(vol, vcn, clu_count, lcn_seek_from, DATA_ZONE,
+				 false, true, true);
+	if (IS_ERR(rlc)) {
+		err = PTR_ERR(rlc);
+		goto out;
+	}
+
+	WARN_ON(rlc->vcn != vcn);
+	lcn = rlc->lcn;
+	clu_count = rlc->length;
+
+	rl = ntfs_runlists_merge(&ni->runlist, rlc, 0, &new_rl_count);
+	if (IS_ERR(rl)) {
+		ntfs_error(vol->sb, "Failed to merge runlists");
+		err = PTR_ERR(rl);
+		if (ntfs_cluster_free_from_rl(vol, rlc))
+			ntfs_error(vol->sb, "Failed to free hot clusters.");
+		ntfs_free(rlc);
+		goto out;
+	}
+	ni->runlist.rl = rl;
+	ni->runlist.count = new_rl_count;
+
+	if (!update_mp) {
+		u64 free = atomic64_read(&vol->free_clusters) * 100;
+
+		do_div(free, vol->nr_clusters);
+		if (free <= 5)
+			update_mp = true;
+	}
+
+	if (update_mp) {
+		ntfs_attr_reinit_search_ctx(ctx);
+		err = ntfs_attr_update_mapping_pairs(ni, 0);
+		if (err) {
+			int err2;
+
+			err2 = ntfs_cluster_free(ni, vcn, clu_count, ctx);
+			if (err2 < 0)
+				ntfs_error(vol->sb,
+					   "Failed to free cluster allocation. "
+					   "Leaving inconsistent metadata.\n");
+			goto out;
+		}
+	} else {
+		VFS_I(ni)->i_blocks += clu_count << (vol->cluster_size_bits - 9);
+		NInoSetRunlistDirty(ni);
+		mark_mft_record_dirty(ni);
+	}
+
+	*lcn_start = lcn;
+	*lcn_count = clu_count;
+	*balloc = true;
+out:
+	ntfs_attr_put_search_ctx(ctx);
+	return err;
+}
+
+/**
+ * ntfs_attr_rm - remove an attribute from an ntfs inode
+ * @ni:	opened ntfs attribute to delete
+ *
+ * Remove the attribute and all its extents from the ntfs inode. If the
+ * attribute was non-resident, also free all clusters allocated to it.
+ */
+int ntfs_attr_rm(struct ntfs_inode *ni)
+{
+	struct ntfs_attr_search_ctx *ctx;
+	int err = 0, ret = 0;
+	struct ntfs_inode *base_ni;
+	struct super_block *sb = ni->vol->sb;
+
+	if (NInoAttr(ni))
+		base_ni = ni->ext.base_ntfs_ino;
+	else
+		base_ni = ni;
+
+	ntfs_debug("Entering for inode 0x%llx, attr 0x%x.\n",
+		   (long long)ni->mft_no, ni->type);
+
+	/* Free the cluster allocation. */
+	if (NInoNonResident(ni)) {
+		struct ntfs_attr_search_ctx *ctx;
+
+		err = ntfs_attr_map_whole_runlist(ni);
+		if (err)
+			return err;
+		ctx = ntfs_attr_get_search_ctx(ni, NULL);
+		if (!ctx) {
+			ntfs_error(sb, "%s: Failed to get search context", __func__);
+			return -ENOMEM;
+		}
+
+		ret = ntfs_cluster_free(ni, 0, -1, ctx);
+		if (ret < 0)
+			ntfs_error(sb,
+				   "Failed to free cluster allocation. Leaving inconsistent metadata.\n");
+		ntfs_attr_put_search_ctx(ctx);
+	}
+
+	/* Search for attribute extents and remove them all. */
+	ctx = ntfs_attr_get_search_ctx(base_ni, NULL);
+	if (!ctx) {
+		ntfs_error(sb, "%s: Failed to get search context", __func__);
+		return -ENOMEM;
+	}
+	while (!(err = ntfs_attr_lookup(ni->type, ni->name, ni->name_len,
+					CASE_SENSITIVE, 0, NULL, 0, ctx))) {
+		err = ntfs_attr_record_rm(ctx);
+		if (err) {
+			ntfs_error(sb,
+				   "Failed to remove attribute extent. "
+				   "Leaving inconsistent metadata.\n");
+			ret = err;
+		}
+		ntfs_attr_reinit_search_ctx(ctx);
+	}
+	ntfs_attr_put_search_ctx(ctx);
+	if (err != -ENOENT) {
+		ntfs_error(sb, "Attribute lookup failed. Probably leaving inconsistent metadata.\n");
+		ret = err;
+	}
+
+	return ret;
+}
+
+int ntfs_attr_exist(struct ntfs_inode *ni, const __le32 type, __le16 *name,
+		    u32 name_len)
+{
+	struct ntfs_attr_search_ctx *ctx;
+	int ret;
+
+	ntfs_debug("Entering\n");
+
+	ctx = ntfs_attr_get_search_ctx(ni, NULL);
+	if (!ctx) {
+		ntfs_error(ni->vol->sb, "%s: Failed to get search context",
+			   __func__);
+		return 0;
+	}
+
+	ret = ntfs_attr_lookup(type, name, name_len, CASE_SENSITIVE,
+			       0, NULL, 0, ctx);
+	ntfs_attr_put_search_ctx(ctx);
+
+	return !ret;
+}
+
+int ntfs_attr_remove(struct ntfs_inode *ni, const __le32 type, __le16 *name,
+		     u32 name_len)
+{
+	struct super_block *sb;
+	int err;
+	struct inode *attr_vi;
+	struct ntfs_inode *attr_ni;
+
+	ntfs_debug("Entering\n");
+
+	if (!ni) {
+		pr_err("NULL inode pointer\n");
+		return -EINVAL;
+	}
+	sb = ni->vol->sb;
+
+	attr_vi = ntfs_attr_iget(VFS_I(ni), type, name, name_len);
+	if (IS_ERR(attr_vi)) {
+		err = PTR_ERR(attr_vi);
+		ntfs_error(sb, "Failed to open attribute 0x%02x of inode 0x%llx",
+			   type, (unsigned long long)ni->mft_no);
+		return err;
+	}
+	attr_ni = NTFS_I(attr_vi);
+
+	err = ntfs_attr_rm(attr_ni);
+	if (err)
+		ntfs_error(sb, "Failed to remove attribute 0x%02x of inode 0x%llx",
+			   type, (unsigned long long)ni->mft_no);
+	iput(attr_vi);
+	return err;
+}
+
+/**
+ * ntfs_attr_readall - read the entire data from an ntfs attribute
+ * @ni:		open ntfs inode in which the ntfs attribute resides
+ * @type:	attribute type
+ * @name:	attribute name in little endian Unicode or AT_UNNAMED or NULL
+ * @name_len:	length of attribute @name in Unicode characters (if @name given)
+ * @data_size:	if non-NULL then store the data size here
+ *
+ * This function will read the entire content of an ntfs attribute.
+ * If @name is AT_UNNAMED then look specifically for an unnamed attribute.
+ * If @name is NULL then the attribute could be either named or not.
+ * In both those cases @name_len is not used at all.
+ *
+ * On success a buffer is allocated with the content of the attribute
+ * and which needs to be freed when it's not needed anymore. If the
+ * @data_size parameter is non-NULL then the data size is set there.
+ */
+void *ntfs_attr_readall(struct ntfs_inode *ni, const __le32 type,
+			__le16 *name, u32 name_len, s64 *data_size)
+{
+	struct ntfs_inode *bmp_ni;
+	struct inode *bmp_vi;
+	void *data, *ret = NULL;
+	s64 size;
+	struct super_block *sb = ni->vol->sb;
+
+	ntfs_debug("Entering\n");
+
+	bmp_vi = ntfs_attr_iget(VFS_I(ni), type, name, name_len);
+	if (IS_ERR(bmp_vi)) {
+		ntfs_debug("ntfs_attr_iget failed");
+		goto err_exit;
+	}
+	bmp_ni = NTFS_I(bmp_vi);
+
+	data = ntfs_malloc_nofs(bmp_ni->data_size);
+	if (!data) {
+		ntfs_error(sb, "ntfs_malloc_nofs failed");
+		goto out;
+	}
+
+	size = ntfs_inode_attr_pread(VFS_I(bmp_ni), 0, bmp_ni->data_size,
+				     (u8 *)data);
+	if (size != bmp_ni->data_size) {
+		ntfs_error(sb, "ntfs_attr_pread failed");
+		ntfs_free(data);
+		goto out;
+	}
+	ret = data;
+	if (data_size)
+		*data_size = size;
+out:
+	iput(bmp_vi);
+err_exit:
+	ntfs_debug("\n");
+	return ret;
+}
+
+int ntfs_non_resident_attr_insert_range(struct ntfs_inode *ni, s64 start_vcn, s64 len)
+{
+	struct ntfs_volume *vol = ni->vol;
+	struct runlist_element *hole_rl, *rl;
+	struct ntfs_attr_search_ctx *ctx;
+	int ret;
+	size_t new_rl_count;
+
+	if (NInoAttr(ni) || ni->type != AT_DATA)
+		return -EOPNOTSUPP;
+	if (start_vcn > (ni->allocated_size >> vol->cluster_size_bits))
+		return -EINVAL;
+
+	hole_rl = ntfs_malloc_nofs(sizeof(*hole_rl) * 2);
+	if (!hole_rl)
+		return -ENOMEM;
+	hole_rl[0].vcn = start_vcn;
+	hole_rl[0].lcn = LCN_HOLE;
+	hole_rl[0].length = len;
+	hole_rl[1].vcn = start_vcn + len;
+	hole_rl[1].lcn = LCN_ENOENT;
+	hole_rl[1].length = 0;
+
+	down_write(&ni->runlist.lock);
+	ret = ntfs_attr_map_whole_runlist(ni);
+	if (ret) {
+		up_write(&ni->runlist.lock);
+		ntfs_free(hole_rl);
+		return ret;
+	}
+
+	rl = ntfs_rl_find_vcn_nolock(ni->runlist.rl, start_vcn);
+	if (!rl) {
+		up_write(&ni->runlist.lock);
+		ntfs_free(hole_rl);
+		return -EIO;
+	}
+
+	rl = ntfs_rl_insert_range(ni->runlist.rl, (int)ni->runlist.count,
+				  hole_rl, 1, &new_rl_count);
+	if (IS_ERR(rl)) {
+		up_write(&ni->runlist.lock);
+		ntfs_free(hole_rl);
+		return PTR_ERR(rl);
+	}
+	ni->runlist.rl = rl;
+	ni->runlist.count = new_rl_count;
+
+	ni->allocated_size += len << vol->cluster_size_bits;
+	ni->data_size += len << vol->cluster_size_bits;
+	if ((start_vcn << vol->cluster_size_bits) < ni->initialized_size)
+		ni->initialized_size += len << vol->cluster_size_bits;
+	ret = ntfs_attr_update_mapping_pairs(ni, 0);
+	up_write(&ni->runlist.lock);
+	if (ret)
+		return ret;
+
+	ctx = ntfs_attr_get_search_ctx(ni, NULL);
+	if (!ctx)
+		return -ENOMEM;
+
+	ret = ntfs_attr_lookup(ni->type, ni->name, ni->name_len, CASE_SENSITIVE,
+			       0, NULL, 0, ctx);
+	if (ret) {
+		ntfs_attr_put_search_ctx(ctx);
+		return ret;
+	}
+
+	ctx->attr->data.non_resident.data_size = cpu_to_le64(ni->data_size);
+	ctx->attr->data.non_resident.initialized_size = cpu_to_le64(ni->initialized_size);
+	if (ni->type == AT_DATA && ni->name == AT_UNNAMED)
+		NInoSetFileNameDirty(ni);
+	mark_mft_record_dirty(ctx->ntfs_ino);
+	ntfs_attr_put_search_ctx(ctx);
+	return ret;
+}
+
+int ntfs_non_resident_attr_collapse_range(struct ntfs_inode *ni, s64 start_vcn, s64 len)
+{
+	struct ntfs_volume *vol = ni->vol;
+	struct runlist_element *punch_rl, *rl;
+	struct ntfs_attr_search_ctx *ctx = NULL;
+	s64 end_vcn;
+	int dst_cnt;
+	int ret;
+	size_t new_rl_cnt;
+
+	if (NInoAttr(ni) || ni->type != AT_DATA)
+		return -EOPNOTSUPP;
+
+	end_vcn = ni->allocated_size >> vol->cluster_size_bits;
+	if (start_vcn >= end_vcn)
+		return -EINVAL;
+
+	down_write(&ni->runlist.lock);
+	ret = ntfs_attr_map_whole_runlist(ni);
+	if (ret) {
+		up_write(&ni->runlist.lock);
+		return ret;
+	}
+
+	len = min(len, end_vcn - start_vcn);
+	for (rl = ni->runlist.rl, dst_cnt = 0; rl && rl->length; rl++)
+		dst_cnt++;
+	rl = ntfs_rl_find_vcn_nolock(ni->runlist.rl, start_vcn);
+	if (!rl) {
+		up_write(&ni->runlist.lock);
+		return -EIO;
+	}
+
+	rl = ntfs_rl_collapse_range(ni->runlist.rl, dst_cnt + 1,
+				    start_vcn, len, &punch_rl, &new_rl_cnt);
+	if (IS_ERR(rl)) {
+		up_write(&ni->runlist.lock);
+		return PTR_ERR(rl);
+	}
+	ni->runlist.rl = rl;
+	ni->runlist.count = new_rl_cnt;
+
+	ni->allocated_size -= len << vol->cluster_size_bits;
+	if (ni->data_size > (start_vcn << vol->cluster_size_bits)) {
+		if (ni->data_size > (start_vcn + len) << vol->cluster_size_bits)
+			ni->data_size -= len << vol->cluster_size_bits;
+		else
+			ni->data_size = start_vcn << vol->cluster_size_bits;
+	}
+	if (ni->initialized_size > (start_vcn << vol->cluster_size_bits)) {
+		if (ni->initialized_size >
+		    (start_vcn + len) << vol->cluster_size_bits)
+			ni->initialized_size -= len << vol->cluster_size_bits;
+		else
+			ni->initialized_size = start_vcn << vol->cluster_size_bits;
+	}
+
+	if (ni->allocated_size > 0) {
+		ret = ntfs_attr_update_mapping_pairs(ni, 0);
+		if (ret) {
+			up_write(&ni->runlist.lock);
+			goto out_rl;
+		}
+	}
+	up_write(&ni->runlist.lock);
+
+	ctx = ntfs_attr_get_search_ctx(ni, NULL);
+	if (!ctx) {
+		ret = -ENOMEM;
+		goto out_rl;
+	}
+
+	ret = ntfs_attr_lookup(ni->type, ni->name, ni->name_len, CASE_SENSITIVE,
+			       0, NULL, 0, ctx);
+	if (ret)
+		goto out_ctx;
+
+	ctx->attr->data.non_resident.data_size = cpu_to_le64(ni->data_size);
+	ctx->attr->data.non_resident.initialized_size = cpu_to_le64(ni->initialized_size);
+	if (ni->allocated_size == 0)
+		ntfs_attr_make_resident(ni, ctx);
+	mark_mft_record_dirty(ctx->ntfs_ino);
+
+	ret = ntfs_cluster_free_from_rl(vol, punch_rl);
+	if (ret)
+		ntfs_error(vol->sb, "Freeing of clusters failed");
+out_ctx:
+	if (ctx)
+		ntfs_attr_put_search_ctx(ctx);
+out_rl:
+	ntfs_free(punch_rl);
+	mark_mft_record_dirty(ni);
+	return ret;
+}
+
+int ntfs_non_resident_attr_punch_hole(struct ntfs_inode *ni, s64 start_vcn, s64 len)
+{
+	struct ntfs_volume *vol = ni->vol;
+	struct runlist_element *punch_rl, *rl;
+	s64 end_vcn;
+	int dst_cnt;
+	int ret;
+	size_t new_rl_count;
+
+	if (NInoAttr(ni) || ni->type != AT_DATA)
+		return -EOPNOTSUPP;
+
+	end_vcn = ni->allocated_size >> vol->cluster_size_bits;
+	if (start_vcn >= end_vcn)
+		return -EINVAL;
+
+	down_write(&ni->runlist.lock);
+	ret = ntfs_attr_map_whole_runlist(ni);
+	if (ret) {
+		up_write(&ni->runlist.lock);
+		return ret;
+	}
+
+	len = min(len, end_vcn - start_vcn + 1);
+	for (rl = ni->runlist.rl, dst_cnt = 0; rl && rl->length; rl++)
+		dst_cnt++;
+	rl = ntfs_rl_find_vcn_nolock(ni->runlist.rl, start_vcn);
+	if (!rl) {
+		up_write(&ni->runlist.lock);
+		return -EIO;
+	}
+
+	rl = ntfs_rl_punch_hole(ni->runlist.rl, dst_cnt + 1,
+				start_vcn, len, &punch_rl, &new_rl_count);
+	if (IS_ERR(rl)) {
+		up_write(&ni->runlist.lock);
+		return PTR_ERR(rl);
+	}
+	ni->runlist.rl = rl;
+	ni->runlist.count = new_rl_count;
+
+	ret = ntfs_attr_update_mapping_pairs(ni, 0);
+	up_write(&ni->runlist.lock);
+	if (ret) {
+		ntfs_free(punch_rl);
+		return ret;
+	}
+
+	ret = ntfs_cluster_free_from_rl(vol, punch_rl);
+	if (ret)
+		ntfs_error(vol->sb, "Freeing of clusters failed");
+
+	ntfs_free(punch_rl);
+	mark_mft_record_dirty(ni);
+	return ret;
+}
+
+int ntfs_attr_fallocate(struct ntfs_inode *ni, loff_t start, loff_t byte_len, bool keep_size)
+{
+	struct ntfs_volume *vol = ni->vol;
+	struct mft_record *mrec;
+	struct ntfs_attr_search_ctx *ctx;
+	s64 old_data_size;
+	s64 vcn_start, vcn_end, vcn_uninit, vcn, try_alloc_cnt;
+	s64 lcn, alloc_cnt;
+	int err = 0;
+	struct runlist_element *rl;
+	bool balloc;
+
+	if (NInoAttr(ni) || ni->type != AT_DATA)
+		return -EINVAL;
+
+	if (NInoNonResident(ni) && !NInoFullyMapped(ni)) {
down_write(&ni->runlist.lock); + err =3D ntfs_attr_map_whole_runlist(ni); + up_write(&ni->runlist.lock); + if (err) + return err; + } + + mutex_lock_nested(&ni->mrec_lock, NTFS_INODE_MUTEX_NORMAL); + mrec =3D map_mft_record(ni); + if (IS_ERR(mrec)) { + mutex_unlock(&ni->mrec_lock); + return PTR_ERR(mrec); + } + + ctx =3D ntfs_attr_get_search_ctx(ni, mrec); + if (!ctx) { + err =3D -ENOMEM; + goto out_unmap; + } + + err =3D ntfs_attr_lookup(AT_DATA, AT_UNNAMED, 0, 0, 0, NULL, 0, ctx); + if (err) { + err =3D -EIO; + goto out_unmap; + } + + old_data_size =3D ni->data_size; + if (start + byte_len > ni->data_size) { + err =3D ntfs_attr_truncate(ni, start + byte_len); + if (err) + goto out_unmap; + if (keep_size) { + ntfs_attr_reinit_search_ctx(ctx); + err =3D ntfs_attr_lookup(AT_DATA, AT_UNNAMED, 0, 0, 0, NULL, 0, ctx); + if (err) { + err =3D -EIO; + goto out_unmap; + } + ni->data_size =3D old_data_size; + if (NInoNonResident(ni)) + ctx->attr->data.non_resident.data_size =3D + cpu_to_le64(old_data_size); + else + ctx->attr->data.resident.value_length =3D + cpu_to_le64(old_data_size); + mark_mft_record_dirty(ni); + } + } + + ntfs_attr_put_search_ctx(ctx); + unmap_mft_record(ni); + mutex_unlock(&ni->mrec_lock); + + if (!NInoNonResident(ni)) + goto out; + + vcn_start =3D (s64)(start >> vol->cluster_size_bits); + vcn_end =3D (s64)(round_up(start + byte_len, vol->cluster_size) >> + vol->cluster_size_bits); + vcn_uninit =3D (s64)(round_up(ni->initialized_size, vol->cluster_size) >> + vol->cluster_size_bits); + vcn_uninit =3D min_t(s64, vcn_uninit, vcn_end); + + /* + * we have to allocate clusters for holes and delayed within initialized_= size, + * and zero out the clusters only for the holes. 
+ */ + vcn =3D vcn_start; + while (vcn < vcn_uninit) { + down_read(&ni->runlist.lock); + rl =3D ntfs_attr_find_vcn_nolock(ni, vcn, NULL); + up_read(&ni->runlist.lock); + if (IS_ERR(rl)) { + err =3D PTR_ERR(rl); + goto out; + } + + if (rl->lcn > 0) { + vcn +=3D rl->length - (vcn - rl->vcn); + } else if (rl->lcn =3D=3D LCN_DELALLOC || rl->lcn =3D=3D LCN_HOLE) { + try_alloc_cnt =3D min(rl->length - (vcn - rl->vcn), + vcn_uninit - vcn); + + if (rl->lcn =3D=3D LCN_DELALLOC) { + vcn +=3D try_alloc_cnt; + continue; + } + + while (try_alloc_cnt > 0) { + mutex_lock_nested(&ni->mrec_lock, NTFS_INODE_MUTEX_NORMAL); + down_write(&ni->runlist.lock); + err =3D ntfs_attr_map_cluster(ni, vcn, &lcn, &alloc_cnt, + try_alloc_cnt, &balloc, false, false); + up_write(&ni->runlist.lock); + mutex_unlock(&ni->mrec_lock); + if (err) + goto out; + + err =3D ntfs_zeroed_clusters(VFS_I(ni), lcn, alloc_cnt); + if (err > 0) + goto out; + + if (signal_pending(current)) + goto out; + + vcn +=3D alloc_cnt; + try_alloc_cnt -=3D alloc_cnt; + } + } else { + err =3D -EIO; + goto out; + } + } + + /* allocate clusters outside of initialized_size */ + try_alloc_cnt =3D vcn_end - vcn; + while (try_alloc_cnt > 0) { + mutex_lock_nested(&ni->mrec_lock, NTFS_INODE_MUTEX_NORMAL); + down_write(&ni->runlist.lock); + err =3D ntfs_attr_map_cluster(ni, vcn, &lcn, &alloc_cnt, + try_alloc_cnt, &balloc, false, false); + up_write(&ni->runlist.lock); + mutex_unlock(&ni->mrec_lock); + if (err || signal_pending(current)) + goto out; + + vcn +=3D alloc_cnt; + try_alloc_cnt -=3D alloc_cnt; + cond_resched(); + } + + if (NInoRunlistDirty(ni)) { + mutex_lock_nested(&ni->mrec_lock, NTFS_INODE_MUTEX_NORMAL); + down_write(&ni->runlist.lock); + err =3D ntfs_attr_update_mapping_pairs(ni, 0); + if (err) + ntfs_error(ni->vol->sb, "Updating mapping pairs failed"); + else + NInoClearRunlistDirty(ni); + up_write(&ni->runlist.lock); + mutex_unlock(&ni->mrec_lock); + } + return err; +out_unmap: + if (ctx) + ntfs_attr_put_search_ctx(ctx); + 
unmap_mft_record(ni);
+ mutex_unlock(&ni->mrec_lock);
+out:
+ return err >= 0 ? 0 : err;
+}
diff --git a/fs/ntfsplus/attrlist.c b/fs/ntfsplus/attrlist.c
new file mode 100644
index 000000000000..7c2fb3f77e91
--- /dev/null
+++ b/fs/ntfsplus/attrlist.c
@@ -0,0 +1,285 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Attribute list attribute handling code. Originated from the Linux-NTFS
+ * project.
+ * Part of this file is based on code from the NTFS-3G project.
+ *
+ * Copyright (c) 2004-2005 Anton Altaparmakov
+ * Copyright (c) 2004-2005 Yura Pakhuchiy
+ * Copyright (c) 2006 Szabolcs Szakacsits
+ * Copyright (c) 2025 LG Electronics Co., Ltd.
+ */
+
+#include "mft.h"
+#include "attrib.h"
+#include "misc.h"
+#include "attrlist.h"
+
+/**
+ * ntfs_attrlist_need - check whether an inode needs an attribute list
+ * @ni: opened ntfs inode for which to perform the check
+ *
+ * Check whether all attributes belong to one MFT record; in that case an
+ * attribute list is not needed.
+ */
+int ntfs_attrlist_need(struct ntfs_inode *ni)
+{
+ struct attr_list_entry *ale;
+
+ if (!ni) {
+ ntfs_debug("Invalid arguments.\n");
+ return -EINVAL;
+ }
+ ntfs_debug("Entering for inode 0x%llx.\n", (long long) ni->mft_no);
+
+ if (!NInoAttrList(ni)) {
+ ntfs_debug("Inode does not have an attribute list.\n");
+ return -EINVAL;
+ }
+
+ if (!ni->attr_list) {
+ ntfs_debug("Corrupt in-memory struct.\n");
+ return -EINVAL;
+ }
+
+ ale = (struct attr_list_entry *)ni->attr_list;
+ while ((u8 *)ale < ni->attr_list + ni->attr_list_size) {
+ if (MREF_LE(ale->mft_reference) != ni->mft_no)
+ return 1;
+ ale = (struct attr_list_entry *)((u8 *)ale + le16_to_cpu(ale->length));
+ }
+ return 0;
+}
+
+int ntfs_attrlist_update(struct ntfs_inode *base_ni)
+{
+ struct inode *attr_vi;
+ struct ntfs_inode *attr_ni;
+ int err;
+
+ attr_vi = ntfs_attr_iget(VFS_I(base_ni), AT_ATTRIBUTE_LIST, AT_UNNAMED, 0);
+ if (IS_ERR(attr_vi)) {
+ err = PTR_ERR(attr_vi);
+ return err;
+ }
+ attr_ni = NTFS_I(attr_vi);
+
+ err
=3D ntfs_attr_truncate_i(attr_ni, base_ni->attr_list_size, HOLES_NO); + if (err =3D=3D -ENOSPC && attr_ni->mft_no =3D=3D FILE_MFT) { + err =3D ntfs_attr_truncate(attr_ni, 0); + if (err || ntfs_attr_truncate_i(attr_ni, base_ni->attr_list_size, HOLES_= NO) !=3D 0) { + iput(attr_vi); + ntfs_error(base_ni->vol->sb, + "Failed to truncate attribute list of inode %#llx", + (long long)base_ni->mft_no); + return -EIO; + } + } else if (err) { + iput(attr_vi); + ntfs_error(base_ni->vol->sb, + "Failed to truncate attribute list of inode %#llx", + (long long)base_ni->mft_no); + return -EIO; + } + + i_size_write(attr_vi, base_ni->attr_list_size); + + if (NInoNonResident(attr_ni) && !NInoAttrListNonResident(base_ni)) + NInoSetAttrListNonResident(base_ni); + + if (ntfs_inode_attr_pwrite(attr_vi, 0, base_ni->attr_list_size, + base_ni->attr_list, false) !=3D + base_ni->attr_list_size) { + iput(attr_vi); + ntfs_error(base_ni->vol->sb, + "Failed to write attribute list of inode %#llx", + (long long)base_ni->mft_no); + return -EIO; + } + + NInoSetAttrListDirty(base_ni); + iput(attr_vi); + return 0; +} + +/** + * ntfs_attrlist_entry_add - add an attribute list attribute entry + * @ni: opened ntfs inode, which contains that attribute + * @attr: attribute record to add to attribute list + */ +int ntfs_attrlist_entry_add(struct ntfs_inode *ni, struct attr_record *att= r) +{ + struct attr_list_entry *ale; + __le64 mref; + struct ntfs_attr_search_ctx *ctx; + u8 *new_al; + int entry_len, entry_offset, err; + struct mft_record *ni_mrec; + u8 *old_al; + + ntfs_debug("Entering for inode 0x%llx, attr 0x%x.\n", + (long long) ni->mft_no, + (unsigned int) le32_to_cpu(attr->type)); + + if (!ni || !attr) { + ntfs_debug("Invalid arguments.\n"); + return -EINVAL; + } + + ni_mrec =3D map_mft_record(ni); + if (IS_ERR(ni_mrec)) { + ntfs_debug("Invalid arguments.\n"); + return -EIO; + } + + mref =3D MK_LE_MREF(ni->mft_no, le16_to_cpu(ni_mrec->sequence_number)); + unmap_mft_record(ni); + + if (ni->nr_extents 
=3D=3D -1) + ni =3D ni->ext.base_ntfs_ino; + + if (!NInoAttrList(ni)) { + ntfs_debug("Attribute list isn't present.\n"); + return -ENOENT; + } + + /* Determine size and allocate memory for new attribute list. */ + entry_len =3D (sizeof(struct attr_list_entry) + sizeof(__le16) * + attr->name_length + 7) & ~7; + new_al =3D ntfs_malloc_nofs(ni->attr_list_size + entry_len); + if (!new_al) + return -ENOMEM; + + /* Find place for the new entry. */ + ctx =3D ntfs_attr_get_search_ctx(ni, NULL); + if (!ctx) { + err =3D -ENOMEM; + ntfs_error(ni->vol->sb, "Failed to get search context"); + goto err_out; + } + + err =3D ntfs_attr_lookup(attr->type, (attr->name_length) ? (__le16 *) + ((u8 *)attr + le16_to_cpu(attr->name_offset)) : + AT_UNNAMED, attr->name_length, CASE_SENSITIVE, + (attr->non_resident) ? le64_to_cpu(attr->data.non_resident.lowest_vcn) : + 0, (attr->non_resident) ? NULL : ((u8 *)attr + + le16_to_cpu(attr->data.resident.value_offset)), (attr->non_resident) ? + 0 : le32_to_cpu(attr->data.resident.value_length), ctx); + if (!err) { + /* Found some extent, check it to be before new extent. */ + if (ctx->al_entry->lowest_vcn =3D=3D attr->data.non_resident.lowest_vcn)= { + err =3D -EEXIST; + ntfs_debug("Such attribute already present in the attribute list.\n"); + ntfs_attr_put_search_ctx(ctx); + goto err_out; + } + /* Add new entry after this extent. */ + ale =3D (struct attr_list_entry *)((u8 *)ctx->al_entry + + le16_to_cpu(ctx->al_entry->length)); + } else { + /* Check for real errors. */ + if (err !=3D -ENOENT) { + ntfs_debug("Attribute lookup failed.\n"); + ntfs_attr_put_search_ctx(ctx); + goto err_out; + } + /* No previous extents found. */ + ale =3D ctx->al_entry; + } + /* Don't need it anymore, @ctx->al_entry points to @ni->attr_list. */ + ntfs_attr_put_search_ctx(ctx); + + /* Determine new entry offset. */ + entry_offset =3D ((u8 *)ale - ni->attr_list); + /* Set pointer to new entry. 
*/
+ ale = (struct attr_list_entry *)(new_al + entry_offset);
+ memset(ale, 0, entry_len);
+ /* Form new entry. */
+ ale->type = attr->type;
+ ale->length = cpu_to_le16(entry_len);
+ ale->name_length = attr->name_length;
+ ale->name_offset = offsetof(struct attr_list_entry, name);
+ if (attr->non_resident)
+ ale->lowest_vcn = attr->data.non_resident.lowest_vcn;
+ else
+ ale->lowest_vcn = 0;
+ ale->mft_reference = mref;
+ ale->instance = attr->instance;
+ memcpy(ale->name, (u8 *)attr + le16_to_cpu(attr->name_offset),
+ attr->name_length * sizeof(__le16));
+
+ /* Copy entries from old attribute list to new. */
+ memcpy(new_al, ni->attr_list, entry_offset);
+ memcpy(new_al + entry_offset + entry_len, ni->attr_list +
+ entry_offset, ni->attr_list_size - entry_offset);
+
+ /* Install the new attribute list. */
+ old_al = ni->attr_list;
+ ni->attr_list = new_al;
+ ni->attr_list_size = ni->attr_list_size + entry_len;
+
+ err = ntfs_attrlist_update(ni);
+ if (err) {
+ ni->attr_list = old_al;
+ ni->attr_list_size -= entry_len;
+ goto err_out;
+ }
+ ntfs_free(old_al);
+ return 0;
+err_out:
+ ntfs_free(new_al);
+ return err;
+}
+
+/**
+ * ntfs_attrlist_entry_rm - remove an attribute list attribute entry
+ * @ctx: attribute search context describing the attribute list entry
+ *
+ * Remove the attribute list entry @ctx->al_entry from the attribute list.
+ */
+int ntfs_attrlist_entry_rm(struct ntfs_attr_search_ctx *ctx)
+{
+ u8 *new_al;
+ int new_al_len;
+ struct ntfs_inode *base_ni;
+ struct attr_list_entry *ale;
+
+ if (!ctx || !ctx->ntfs_ino || !ctx->al_entry) {
+ ntfs_debug("Invalid arguments.\n");
+ return -EINVAL;
+ }
+
+ if (ctx->base_ntfs_ino)
+ base_ni = ctx->base_ntfs_ino;
+ else
+ base_ni = ctx->ntfs_ino;
+ ale = ctx->al_entry;
+
+ ntfs_debug("Entering for inode 0x%llx, attr 0x%x, lowest_vcn %lld.\n",
+ (long long)ctx->ntfs_ino->mft_no,
+ (unsigned int)le32_to_cpu(ctx->al_entry->type),
+ (long long)le64_to_cpu(ctx->al_entry->lowest_vcn));
+
+ if (!NInoAttrList(base_ni)) {
+ ntfs_debug("Attribute list isn't present.\n");
+ return -ENOENT;
+ }
+
+ /* Allocate memory for new attribute list. */
+ new_al_len = base_ni->attr_list_size - le16_to_cpu(ale->length);
+ new_al = ntfs_malloc_nofs(new_al_len);
+ if (!new_al)
+ return -ENOMEM;
+
+ /* Copy entries from old attribute list to new. */
+ memcpy(new_al, base_ni->attr_list, (u8 *)ale - base_ni->attr_list);
+ memcpy(new_al + ((u8 *)ale - base_ni->attr_list), (u8 *)ale +
+ le16_to_cpu(ale->length), new_al_len - ((u8 *)ale - base_ni->attr_list));
+
+ /* Install the new attribute list. */
+ ntfs_free(base_ni->attr_list);
+ base_ni->attr_list = new_al;
+ base_ni->attr_list_size = new_al_len;
+
+ return ntfs_attrlist_update(base_ni);
+}
diff --git a/fs/ntfsplus/compress.c b/fs/ntfsplus/compress.c
new file mode 100644
index 000000000000..a801ad6eb8fe
--- /dev/null
+++ b/fs/ntfsplus/compress.c
@@ -0,0 +1,1564 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/**
+ * NTFS kernel compressed attributes handling.
+ * Part of the Linux-NTFS project.
+ *
+ * Copyright (c) 2001-2004 Anton Altaparmakov
+ * Copyright (c) 2002 Richard Russon
+ * Copyright (c) 2025 LG Electronics Co., Ltd.
+ *
+ * Part of this file is based on code from the NTFS-3G project.
+ * and is copyrighted by the respective authors below:
+ * Copyright (c) 2004-2005 Anton Altaparmakov
+ * Copyright (c) 2004-2006 Szabolcs Szakacsits
+ * Copyright (c) 2005 Yura Pakhuchiy
+ * Copyright (c) 2009-2014 Jean-Pierre Andre
+ * Copyright (c) 2014 Eric Biggers
+ */
+
+#include
+#include
+#include
+#include
+
+#include "attrib.h"
+#include "inode.h"
+#include "misc.h"
+#include "ntfs.h"
+#include "aops.h"
+#include "lcnalloc.h"
+#include "mft.h"
+
+/**
+ * enum of constants used in the compression code
+ */
+enum {
+ /* Token types and access mask. */
+ NTFS_SYMBOL_TOKEN = 0,
+ NTFS_PHRASE_TOKEN = 1,
+ NTFS_TOKEN_MASK = 1,
+
+ /* Compression sub-block constants. */
+ NTFS_SB_SIZE_MASK = 0x0fff,
+ NTFS_SB_SIZE = 0x1000,
+ NTFS_SB_IS_COMPRESSED = 0x8000,
+
+ /*
+ * The maximum compression block size is by definition 16 * the cluster
+ * size, with the maximum supported cluster size being 4kiB. Thus the
+ * maximum compression buffer size is 64kiB, so we use this when
+ * initializing the compression buffer.
+ */
+ NTFS_MAX_CB_SIZE = 64 * 1024,
+};
+
+/**
+ * ntfs_compression_buffer - one buffer for the decompression engine
+ */
+static u8 *ntfs_compression_buffer;
+
+/**
+ * ntfs_cb_lock - mutex lock which protects ntfs_compression_buffer
+ */
+static DEFINE_MUTEX(ntfs_cb_lock);
+
+/**
+ * allocate_compression_buffers - allocate the decompression buffers
+ *
+ * Caller has to hold the ntfs_lock mutex.
+ *
+ * Return 0 on success or -ENOMEM if the allocations failed.
+ */
+int allocate_compression_buffers(void)
+{
+ if (ntfs_compression_buffer)
+ return 0;
+
+ ntfs_compression_buffer = vmalloc(NTFS_MAX_CB_SIZE);
+ if (!ntfs_compression_buffer)
+ return -ENOMEM;
+ return 0;
+}
+
+/**
+ * free_compression_buffers - free the decompression buffers
+ *
+ * Caller has to hold the ntfs_lock mutex.
+ */ +void free_compression_buffers(void) +{ + mutex_lock(&ntfs_cb_lock); + if (!ntfs_compression_buffer) { + mutex_unlock(&ntfs_cb_lock); + return; + } + + vfree(ntfs_compression_buffer); + ntfs_compression_buffer =3D NULL; + mutex_unlock(&ntfs_cb_lock); +} + +/** + * zero_partial_compressed_page - zero out of bounds compressed page region + */ +static void zero_partial_compressed_page(struct page *page, + const s64 initialized_size) +{ + u8 *kp =3D page_address(page); + unsigned int kp_ofs; + + ntfs_debug("Zeroing page region outside initialized size."); + if (((s64)page->__folio_index << PAGE_SHIFT) >=3D initialized_size) { + clear_page(kp); + return; + } + kp_ofs =3D initialized_size & ~PAGE_MASK; + memset(kp + kp_ofs, 0, PAGE_SIZE - kp_ofs); +} + +/** + * handle_bounds_compressed_page - test for&handle out of bounds compresse= d page + */ +static inline void handle_bounds_compressed_page(struct page *page, + const loff_t i_size, const s64 initialized_size) +{ + if ((page->__folio_index >=3D (initialized_size >> PAGE_SHIFT)) && + (initialized_size < i_size)) + zero_partial_compressed_page(page, initialized_size); +} + +/** + * ntfs_decompress - decompress a compression block into an array of pages + * @dest_pages: destination array of pages + * @completed_pages: scratch space to track completed pages + * @dest_index: current index into @dest_pages (IN/OUT) + * @dest_ofs: current offset within @dest_pages[@dest_index] (IN/OUT) + * @dest_max_index: maximum index into @dest_pages (IN) + * @dest_max_ofs: maximum offset within @dest_pages[@dest_max_index] (IN) + * @xpage: the target page (-1 if none) (IN) + * @xpage_done: set to 1 if xpage was completed successfully (IN/OUT) + * @cb_start: compression block to decompress (IN) + * @cb_size: size of compression block @cb_start in bytes (IN) + * @i_size: file size when we started the read (IN) + * @initialized_size: initialized file size when we started the read (IN) + * + * The caller must have disabled preemption. 
ntfs_decompress() reenables it when
+ * the critical section is finished.
+ *
+ * This decompresses the compression block @cb_start into the array of
+ * destination pages @dest_pages starting at index @dest_index into @dest_pages
+ * and at offset @dest_pos into the page @dest_pages[@dest_index].
+ *
+ * When the page @dest_pages[@xpage] is completed, @xpage_done is set to 1.
+ * If xpage is -1 or @xpage has not been completed, @xpage_done is not modified.
+ *
+ * @cb_start is a pointer to the compression block which needs decompressing
+ * and @cb_size is the size of @cb_start in bytes (8-64kiB).
+ *
+ * Return 0 on success or -EOVERFLOW on error in the compressed stream.
+ * @xpage_done indicates whether the target page (@dest_pages[@xpage]) was
+ * completed during the decompression of the compression block (@cb_start).
+ *
+ * Warning: This function *REQUIRES* PAGE_SIZE >= 4096 or it will blow up
+ * unpredictably! You have been warned!
+ *
+ * Note to hackers: This function may not sleep until it has finished accessing
+ * the compression block @cb_start as it is a per-CPU buffer.
+ */
+static int ntfs_decompress(struct page *dest_pages[], int completed_pages[],
+ int *dest_index, int *dest_ofs, const int dest_max_index,
+ const int dest_max_ofs, const int xpage, char *xpage_done,
+ u8 *const cb_start, const u32 cb_size, const loff_t i_size,
+ const s64 initialized_size)
+{
+ /*
+ * Pointers into the compressed data, i.e. the compression block (cb),
+ * and the therein contained sub-blocks (sb).
+ */
+ u8 *cb_end = cb_start + cb_size; /* End of cb. */
+ u8 *cb = cb_start; /* Current position in cb. */
+ u8 *cb_sb_start = cb; /* Beginning of the current sb in the cb. */
+ u8 *cb_sb_end; /* End of current sb / beginning of next sb. */
+
+ /* Variables for uncompressed data / destination. */
+ struct page *dp; /* Current destination page being worked on. */
+ u8 *dp_addr; /* Current pointer into dp.
*/ + u8 *dp_sb_start; /* Start of current sub-block in dp. */ + u8 *dp_sb_end; /* End of current sb in dp (dp_sb_start + NTFS_SB_SIZE). = */ + u16 do_sb_start; /* @dest_ofs when starting this sub-block. */ + u16 do_sb_end; /* @dest_ofs of end of this sb (do_sb_start + NTFS_SB_SIZ= E). */ + + /* Variables for tag and token parsing. */ + u8 tag; /* Current tag. */ + int token; /* Loop counter for the eight tokens in tag. */ + int nr_completed_pages =3D 0; + + /* Default error code. */ + int err =3D -EOVERFLOW; + + ntfs_debug("Entering, cb_size =3D 0x%x.", cb_size); +do_next_sb: + ntfs_debug("Beginning sub-block at offset =3D 0x%zx in the cb.", + cb - cb_start); + /* + * Have we reached the end of the compression block or the end of the + * decompressed data? The latter can happen for example if the current + * position in the compression block is one byte before its end so the + * first two checks do not detect it. + */ + if (cb =3D=3D cb_end || !le16_to_cpup((__le16 *)cb) || + (*dest_index =3D=3D dest_max_index && + *dest_ofs =3D=3D dest_max_ofs)) { + int i; + + ntfs_debug("Completed. Returning success (0)."); + err =3D 0; +return_error: + /* We can sleep from now on, so we drop lock. */ + mutex_unlock(&ntfs_cb_lock); + /* Second stage: finalize completed pages. */ + if (nr_completed_pages > 0) { + for (i =3D 0; i < nr_completed_pages; i++) { + int di =3D completed_pages[i]; + + dp =3D dest_pages[di]; + /* + * If we are outside the initialized size, zero + * the out of bounds page range. + */ + handle_bounds_compressed_page(dp, i_size, + initialized_size); + flush_dcache_page(dp); + kunmap_local(page_address(dp)); + SetPageUptodate(dp); + unlock_page(dp); + if (di =3D=3D xpage) + *xpage_done =3D 1; + else + put_page(dp); + dest_pages[di] =3D NULL; + } + } + return err; + } + + /* Setup offsets for the current sub-block destination. */ + do_sb_start =3D *dest_ofs; + do_sb_end =3D do_sb_start + NTFS_SB_SIZE; + + /* Check that we are still within allowed boundaries. 
*/ + if (*dest_index =3D=3D dest_max_index && do_sb_end > dest_max_ofs) + goto return_overflow; + + /* Does the minimum size of a compressed sb overflow valid range? */ + if (cb + 6 > cb_end) + goto return_overflow; + + /* Setup the current sub-block source pointers and validate range. */ + cb_sb_start =3D cb; + cb_sb_end =3D cb_sb_start + (le16_to_cpup((__le16 *)cb) & NTFS_SB_SIZE_MA= SK) + + 3; + if (cb_sb_end > cb_end) + goto return_overflow; + + /* Get the current destination page. */ + dp =3D dest_pages[*dest_index]; + if (!dp) { + /* No page present. Skip decompression of this sub-block. */ + cb =3D cb_sb_end; + + /* Advance destination position to next sub-block. */ + *dest_ofs =3D (*dest_ofs + NTFS_SB_SIZE) & ~PAGE_MASK; + if (!*dest_ofs && (++*dest_index > dest_max_index)) + goto return_overflow; + goto do_next_sb; + } + + /* We have a valid destination page. Setup the destination pointers. */ + dp_addr =3D (u8 *)page_address(dp) + do_sb_start; + + /* Now, we are ready to process the current sub-block (sb). */ + if (!(le16_to_cpup((__le16 *)cb) & NTFS_SB_IS_COMPRESSED)) { + ntfs_debug("Found uncompressed sub-block."); + /* This sb is not compressed, just copy it into destination. */ + + /* Advance source position to first data byte. */ + cb +=3D 2; + + /* An uncompressed sb must be full size. */ + if (cb_sb_end - cb !=3D NTFS_SB_SIZE) + goto return_overflow; + + /* Copy the block and advance the source position. */ + memcpy(dp_addr, cb, NTFS_SB_SIZE); + cb +=3D NTFS_SB_SIZE; + + /* Advance destination position to next sub-block. */ + *dest_ofs +=3D NTFS_SB_SIZE; + *dest_ofs &=3D ~PAGE_MASK; + if (!(*dest_ofs)) { +finalize_page: + /* + * First stage: add current page index to array of + * completed pages. + */ + completed_pages[nr_completed_pages++] =3D *dest_index; + if (++*dest_index > dest_max_index) + goto return_overflow; + } + goto do_next_sb; + } + ntfs_debug("Found compressed sub-block."); + /* This sb is compressed, decompress it into destination. 
*/ + + /* Setup destination pointers. */ + dp_sb_start =3D dp_addr; + dp_sb_end =3D dp_sb_start + NTFS_SB_SIZE; + + /* Forward to the first tag in the sub-block. */ + cb +=3D 2; +do_next_tag: + if (cb =3D=3D cb_sb_end) { + /* Check if the decompressed sub-block was not full-length. */ + if (dp_addr < dp_sb_end) { + int nr_bytes =3D do_sb_end - *dest_ofs; + + ntfs_debug("Filling incomplete sub-block with zeroes."); + /* Zero remainder and update destination position. */ + memset(dp_addr, 0, nr_bytes); + *dest_ofs +=3D nr_bytes; + } + /* We have finished the current sub-block. */ + *dest_ofs &=3D ~PAGE_MASK; + if (!(*dest_ofs)) + goto finalize_page; + goto do_next_sb; + } + + /* Check we are still in range. */ + if (cb > cb_sb_end || dp_addr > dp_sb_end) + goto return_overflow; + + /* Get the next tag and advance to first token. */ + tag =3D *cb++; + + /* Parse the eight tokens described by the tag. */ + for (token =3D 0; token < 8; token++, tag >>=3D 1) { + register u16 i; + u16 lg, pt, length, max_non_overlap; + u8 *dp_back_addr; + + /* Check if we are done / still in range. */ + if (cb >=3D cb_sb_end || dp_addr > dp_sb_end) + break; + + /* Determine token type and parse appropriately.*/ + if ((tag & NTFS_TOKEN_MASK) =3D=3D NTFS_SYMBOL_TOKEN) { + /* + * We have a symbol token, copy the symbol across, and + * advance the source and destination positions. + */ + *dp_addr++ =3D *cb++; + ++*dest_ofs; + + /* Continue with the next token. */ + continue; + } + + /* + * We have a phrase token. Make sure it is not the first tag in + * the sb as this is illegal and would confuse the code below. + */ + if (dp_addr =3D=3D dp_sb_start) + goto return_overflow; + + /* + * Determine the number of bytes to go back (p) and the number + * of bytes to copy (l). We use an optimized algorithm in which + * we first calculate log2(current destination position in sb), + * which allows determination of l and p in O(1) rather than + * O(n). We just need an arch-optimized log2() function now. 
+ */ + lg =3D 0; + for (i =3D *dest_ofs - do_sb_start - 1; i >=3D 0x10; i >>=3D 1) + lg++; + + /* Get the phrase token into i. */ + pt =3D le16_to_cpup((__le16 *)cb); + + /* + * Calculate starting position of the byte sequence in + * the destination using the fact that p =3D (pt >> (12 - lg)) + 1 + * and make sure we don't go too far back. + */ + dp_back_addr =3D dp_addr - (pt >> (12 - lg)) - 1; + if (dp_back_addr < dp_sb_start) + goto return_overflow; + + /* Now calculate the length of the byte sequence. */ + length =3D (pt & (0xfff >> lg)) + 3; + + /* Advance destination position and verify it is in range. */ + *dest_ofs +=3D length; + if (*dest_ofs > do_sb_end) + goto return_overflow; + + /* The number of non-overlapping bytes. */ + max_non_overlap =3D dp_addr - dp_back_addr; + + if (length <=3D max_non_overlap) { + /* The byte sequence doesn't overlap, just copy it. */ + memcpy(dp_addr, dp_back_addr, length); + + /* Advance destination pointer. */ + dp_addr +=3D length; + } else { + /* + * The byte sequence does overlap, copy non-overlapping + * part and then do a slow byte by byte copy for the + * overlapping part. Also, advance the destination + * pointer. + */ + memcpy(dp_addr, dp_back_addr, max_non_overlap); + dp_addr +=3D max_non_overlap; + dp_back_addr +=3D max_non_overlap; + length -=3D max_non_overlap; + while (length--) + *dp_addr++ =3D *dp_back_addr++; + } + + /* Advance source position and continue with the next token. */ + cb +=3D 2; + } + + /* No tokens left in the current tag. Continue with the next tag. */ + goto do_next_tag; + +return_overflow: + ntfs_error(NULL, "Failed. Returning -EOVERFLOW."); + goto return_error; +} + +/** + * ntfs_read_compressed_block - read a compressed block into the page cache + * @folio: locked folio in the compression block(s) we need to read + * + * When we are called the page has already been verified to be locked and = the + * attribute is known to be non-resident, not encrypted, but compressed. + * + * 1. 
Determine which compression block(s) @page is in. + * 2. Get hold of all pages corresponding to this/these compression block(= s). + * 3. Read the (first) compression block. + * 4. Decompress it into the corresponding pages. + * 5. Throw the compressed data away and proceed to 3. for the next compre= ssion + * block or return success if no more compression blocks left. + * + * Warning: We have to be careful what we do about existing pages. They mi= ght + * have been written to so that we would lose data if we were to just over= write + * them with the out-of-date uncompressed data. + */ +int ntfs_read_compressed_block(struct folio *folio) +{ + struct page *page =3D &folio->page; + loff_t i_size; + s64 initialized_size; + struct address_space *mapping =3D page->mapping; + struct ntfs_inode *ni =3D NTFS_I(mapping->host); + struct ntfs_volume *vol =3D ni->vol; + struct super_block *sb =3D vol->sb; + struct runlist_element *rl; + unsigned long flags; + u8 *cb, *cb_pos, *cb_end; + unsigned long offset, index =3D page->__folio_index; + u32 cb_size =3D ni->itype.compressed.block_size; + u64 cb_size_mask =3D cb_size - 1UL; + s64 vcn; + s64 lcn; + /* The first wanted vcn (minimum alignment is PAGE_SIZE). */ + s64 start_vcn =3D (((s64)index << PAGE_SHIFT) & ~cb_size_mask) >> + vol->cluster_size_bits; + /* + * The first vcn after the last wanted vcn (minimum alignment is again + * PAGE_SIZE. + */ + s64 end_vcn =3D ((((s64)(index + 1UL) << PAGE_SHIFT) + cb_size - 1) + & ~cb_size_mask) >> vol->cluster_size_bits; + /* Number of compression blocks (cbs) in the wanted vcn range. */ + unsigned int nr_cbs =3D (end_vcn - start_vcn) << vol->cluster_size_bits + >> ni->itype.compressed.block_size_bits; + /* + * Number of pages required to store the uncompressed data from all + * compression blocks (cbs) overlapping @page. Due to alignment + * guarantees of start_vcn and end_vcn, no need to round up here. 
+ */ + unsigned int nr_pages =3D (end_vcn - start_vcn) << + vol->cluster_size_bits >> PAGE_SHIFT; + unsigned int xpage, max_page, cur_page, cur_ofs, i, page_ofs, page_index; + unsigned int cb_clusters, cb_max_ofs; + int cb_max_page, err =3D 0; + struct page **pages; + int *completed_pages; + unsigned char xpage_done =3D 0; + struct page *lpage; + + ntfs_debug("Entering, page->index =3D 0x%lx, cb_size =3D 0x%x, nr_pages = =3D %i.", + index, cb_size, nr_pages); + /* + * Bad things happen if we get here for anything that is not an + * unnamed $DATA attribute. + */ + if (ni->type !=3D AT_DATA || ni->name_len) { + unlock_page(page); + return -EIO; + } + + pages =3D kmalloc_array(nr_pages, sizeof(struct page *), GFP_NOFS); + completed_pages =3D kmalloc_array(nr_pages + 1, sizeof(int), GFP_NOFS); + + if (unlikely(!pages || !completed_pages)) { + kfree(pages); + kfree(completed_pages); + unlock_page(page); + ntfs_error(vol->sb, "Failed to allocate internal buffers."); + return -ENOMEM; + } + + /* + * We have already been given one page, this is the one we must do. + * Once again, the alignment guarantees keep it simple. + */ + offset =3D start_vcn << vol->cluster_size_bits >> PAGE_SHIFT; + xpage =3D index - offset; + pages[xpage] =3D page; + /* + * The remaining pages need to be allocated and inserted into the page + * cache, alignment guarantees keep all the below much simpler. (-8 + */ + read_lock_irqsave(&ni->size_lock, flags); + i_size =3D i_size_read(VFS_I(ni)); + initialized_size =3D ni->initialized_size; + read_unlock_irqrestore(&ni->size_lock, flags); + max_page =3D ((i_size + PAGE_SIZE - 1) >> PAGE_SHIFT) - + offset; + /* Is the page fully outside i_size? 
+	 * (truncate in progress) */
+	if (xpage >= max_page) {
+		kfree(pages);
+		kfree(completed_pages);
+		zero_user_segments(page, 0, PAGE_SIZE, 0, 0);
+		ntfs_debug("Compressed read outside i_size - truncated?");
+		SetPageUptodate(page);
+		unlock_page(page);
+		return 0;
+	}
+	if (nr_pages < max_page)
+		max_page = nr_pages;
+
+	for (i = 0; i < max_page; i++, offset++) {
+		if (i != xpage)
+			pages[i] = grab_cache_page_nowait(mapping, offset);
+		page = pages[i];
+		if (page) {
+			/*
+			 * We only (re)read the page if it isn't already read
+			 * in and/or dirty or we would be losing data or at
+			 * least wasting our time.
+			 */
+			if (!PageDirty(page) && (!PageUptodate(page))) {
+				kmap_local_page(page);
+				continue;
+			}
+			unlock_page(page);
+			put_page(page);
+			pages[i] = NULL;
+		}
+	}
+
+	/*
+	 * We have the runlist, and all the destination pages we need to fill.
+	 * Now read the first compression block.
+	 */
+	cur_page = 0;
+	cur_ofs = 0;
+	cb_clusters = ni->itype.compressed.block_clusters;
+do_next_cb:
+	nr_cbs--;
+
+	mutex_lock(&ntfs_cb_lock);
+	if (!ntfs_compression_buffer)
+		if (allocate_compression_buffers()) {
+			mutex_unlock(&ntfs_cb_lock);
+			goto err_out;
+		}
+
+	cb = ntfs_compression_buffer;
+	cb_pos = cb;
+	cb_end = cb + cb_size;
+
+	rl = NULL;
+	for (vcn = start_vcn, start_vcn += cb_clusters; vcn < start_vcn;
+			vcn++) {
+		bool is_retry = false;
+
+		if (!rl) {
+lock_retry_remap:
+			down_read(&ni->runlist.lock);
+			rl = ni->runlist.rl;
+		}
+		if (likely(rl != NULL)) {
+			/* Seek to element containing target vcn. */
+			while (rl->length && rl[1].vcn <= vcn)
+				rl++;
+			lcn = ntfs_rl_vcn_to_lcn(rl, vcn);
+		} else
+			lcn = LCN_RL_NOT_MAPPED;
+		ntfs_debug("Reading vcn = 0x%llx, lcn = 0x%llx.",
+				(unsigned long long)vcn,
+				(unsigned long long)lcn);
+		if (lcn < 0) {
+			/*
+			 * When we reach the first sparse cluster we have
+			 * finished with the cb.
+			 */
+			if (lcn == LCN_HOLE)
+				break;
+			if (is_retry || lcn != LCN_RL_NOT_MAPPED) {
+				mutex_unlock(&ntfs_cb_lock);
+				goto rl_err;
+			}
+			is_retry = true;
+			/*
+			 * Attempt to map runlist, dropping lock for the
+			 * duration.
+			 */
+			up_read(&ni->runlist.lock);
+			if (!ntfs_map_runlist(ni, vcn))
+				goto lock_retry_remap;
+			mutex_unlock(&ntfs_cb_lock);
+			goto map_rl_err;
+		}
+
+		page_ofs = (lcn << vol->cluster_size_bits) & ~PAGE_MASK;
+		page_index = (lcn << vol->cluster_size_bits) >> PAGE_SHIFT;
+
+retry:
+		lpage = read_mapping_page(sb->s_bdev->bd_mapping,
+				page_index, NULL);
+		if (PTR_ERR(lpage) == -EINTR)
+			goto retry;
+		else if (IS_ERR(lpage)) {
+			err = PTR_ERR(lpage);
+			mutex_unlock(&ntfs_cb_lock);
+			goto read_err;
+		}
+
+		lock_page(lpage);
+		memcpy(cb_pos, page_address(lpage) + page_ofs,
+				vol->cluster_size);
+		unlock_page(lpage);
+		put_page(lpage);
+		cb_pos += vol->cluster_size;
+	}
+
+	/* Release the lock if we took it. */
+	if (rl)
+		up_read(&ni->runlist.lock);
+
+	/* Just a precaution. */
+	if (cb_pos + 2 <= cb + cb_size)
+		*(u16 *)cb_pos = 0;
+
+	/* Reset cb_pos back to the beginning. */
+	cb_pos = cb;
+
+	/* We now have both source (if present) and destination. */
+	ntfs_debug("Successfully read the compression block.");
+
+	/* The last page and maximum offset within it for the current cb. */
+	cb_max_page = (cur_page << PAGE_SHIFT) + cur_ofs + cb_size;
+	cb_max_ofs = cb_max_page & ~PAGE_MASK;
+	cb_max_page >>= PAGE_SHIFT;
+
+	/* Catch end of file inside a compression block. */
+	if (cb_max_page > max_page)
+		cb_max_page = max_page;
+
+	if (vcn == start_vcn - cb_clusters) {
+		/* Sparse cb, zero out page range overlapping the cb. */
+		ntfs_debug("Found sparse compression block.");
+		/* We can sleep from now on, so we drop lock.
+		 */
+		mutex_unlock(&ntfs_cb_lock);
+		if (cb_max_ofs)
+			cb_max_page--;
+		for (; cur_page < cb_max_page; cur_page++) {
+			page = pages[cur_page];
+			if (page) {
+				if (likely(!cur_ofs))
+					clear_page(page_address(page));
+				else
+					memset(page_address(page) + cur_ofs, 0,
+							PAGE_SIZE - cur_ofs);
+				flush_dcache_page(page);
+				kunmap_local(page_address(page));
+				SetPageUptodate(page);
+				unlock_page(page);
+				if (cur_page == xpage)
+					xpage_done = 1;
+				else
+					put_page(page);
+				pages[cur_page] = NULL;
+			}
+			cb_pos += PAGE_SIZE - cur_ofs;
+			cur_ofs = 0;
+			if (cb_pos >= cb_end)
+				break;
+		}
+		/* If we have a partial final page, deal with it now. */
+		if (cb_max_ofs && cb_pos < cb_end) {
+			page = pages[cur_page];
+			if (page)
+				memset(page_address(page) + cur_ofs, 0,
+						cb_max_ofs - cur_ofs);
+			/*
+			 * No need to update cb_pos at this stage:
+			 * cb_pos += cb_max_ofs - cur_ofs;
+			 */
+			cur_ofs = cb_max_ofs;
+		}
+	} else if (vcn == start_vcn) {
+		/* We can't sleep so we need two stages. */
+		unsigned int cur2_page = cur_page;
+		unsigned int cur_ofs2 = cur_ofs;
+		u8 *cb_pos2 = cb_pos;
+
+		ntfs_debug("Found uncompressed compression block.");
+		/* Uncompressed cb, copy it to the destination pages. */
+		if (cb_max_ofs)
+			cb_max_page--;
+		/* First stage: copy data into destination pages. */
+		for (; cur_page < cb_max_page; cur_page++) {
+			page = pages[cur_page];
+			if (page)
+				memcpy(page_address(page) + cur_ofs, cb_pos,
+						PAGE_SIZE - cur_ofs);
+			cb_pos += PAGE_SIZE - cur_ofs;
+			cur_ofs = 0;
+			if (cb_pos >= cb_end)
+				break;
+		}
+		/* If we have a partial final page, deal with it now. */
+		if (cb_max_ofs && cb_pos < cb_end) {
+			page = pages[cur_page];
+			if (page)
+				memcpy(page_address(page) + cur_ofs, cb_pos,
+						cb_max_ofs - cur_ofs);
+			cb_pos += cb_max_ofs - cur_ofs;
+			cur_ofs = cb_max_ofs;
+		}
+		/* We can sleep from now on, so drop lock. */
+		mutex_unlock(&ntfs_cb_lock);
+		/* Second stage: finalize pages.
+		 */
+		for (; cur2_page < cb_max_page; cur2_page++) {
+			page = pages[cur2_page];
+			if (page) {
+				/*
+				 * If we are outside the initialized size, zero
+				 * the out of bounds page range.
+				 */
+				handle_bounds_compressed_page(page, i_size,
+						initialized_size);
+				flush_dcache_page(page);
+				kunmap_local(page_address(page));
+				SetPageUptodate(page);
+				unlock_page(page);
+				if (cur2_page == xpage)
+					xpage_done = 1;
+				else
+					put_page(page);
+				pages[cur2_page] = NULL;
+			}
+			cb_pos2 += PAGE_SIZE - cur_ofs2;
+			cur_ofs2 = 0;
+			if (cb_pos2 >= cb_end)
+				break;
+		}
+	} else {
+		/* Compressed cb, decompress it into the destination page(s). */
+		unsigned int prev_cur_page = cur_page;
+
+		ntfs_debug("Found compressed compression block.");
+		err = ntfs_decompress(pages, completed_pages, &cur_page,
+				&cur_ofs, cb_max_page, cb_max_ofs, xpage,
+				&xpage_done, cb_pos, cb_size - (cb_pos - cb),
+				i_size, initialized_size);
+		/*
+		 * We can sleep from now on, lock already dropped by
+		 * ntfs_decompress().
+		 */
+		if (err) {
+			ntfs_error(vol->sb,
+				"ntfs_decompress() failed in inode 0x%lx with error code %i. Skipping this compression block.",
+				ni->mft_no, -err);
+			/* Release the unfinished pages. */
+			for (; prev_cur_page < cur_page; prev_cur_page++) {
+				page = pages[prev_cur_page];
+				if (page) {
+					flush_dcache_page(page);
+					kunmap_local(page_address(page));
+					unlock_page(page);
+					if (prev_cur_page != xpage)
+						put_page(page);
+					pages[prev_cur_page] = NULL;
+				}
+			}
+		}
+	}
+
+	/* Do we have more work to do? */
+	if (nr_cbs)
+		goto do_next_cb;
+
+	/* Clean up if we have any pages left. Should never happen. */
+	for (cur_page = 0; cur_page < max_page; cur_page++) {
+		page = pages[cur_page];
+		if (page) {
+			ntfs_error(vol->sb,
+				"Still have pages left! Terminating them with extreme prejudice. "
+				"Inode 0x%lx, page index 0x%lx.",
+				ni->mft_no, page->__folio_index);
+			flush_dcache_page(page);
+			kunmap_local(page_address(page));
+			unlock_page(page);
+			if (cur_page != xpage)
+				put_page(page);
+			pages[cur_page] = NULL;
+		}
+	}
+
+	/* We no longer need the list of pages. */
+	kfree(pages);
+	kfree(completed_pages);
+
+	/* If we have completed the requested page, we return success. */
+	if (likely(xpage_done))
+		return 0;
+
+	ntfs_debug("Failed. Returning error code %s.", err == -EOVERFLOW ?
+			"EOVERFLOW" : (!err ? "EIO" : "unknown error"));
+	return err < 0 ? err : -EIO;
+
+map_rl_err:
+	ntfs_error(vol->sb, "ntfs_map_runlist() failed. Cannot read compression block.");
+	goto err_out;
+
+rl_err:
+	up_read(&ni->runlist.lock);
+	ntfs_error(vol->sb, "ntfs_rl_vcn_to_lcn() failed. Cannot read compression block.");
+	goto err_out;
+
+read_err:
+	up_read(&ni->runlist.lock);
+	ntfs_error(vol->sb, "IO error while reading compressed data.");
+
+err_out:
+	for (i = cur_page; i < max_page; i++) {
+		page = pages[i];
+		if (page) {
+			flush_dcache_page(page);
+			kunmap_local(page_address(page));
+			unlock_page(page);
+			if (i != xpage)
+				put_page(page);
+		}
+	}
+	kfree(pages);
+	kfree(completed_pages);
+	return -EIO;
+}
+
+/*
+ * Match length at or above which ntfs_best_match() will stop searching for
+ * longer matches.
+ */
+#define NICE_MATCH_LEN 18
+
+/*
+ * Maximum number of potential matches that ntfs_best_match() will consider at
+ * each position.
+ */
+#define MAX_SEARCH_DEPTH 24
+
+/* log base 2 of the number of entries in the hash table for match-finding. */
+#define HASH_SHIFT 14
+
+/* Constant for the multiplicative hash function.
+ */
+#define HASH_MULTIPLIER 0x1E35A7BD
+
+struct COMPRESS_CONTEXT {
+	const unsigned char *inbuf;
+	int bufsize;
+	int size;
+	int rel;
+	int mxsz;
+	s16 head[1 << HASH_SHIFT];
+	s16 prev[NTFS_SB_SIZE];
+};
+
+/*
+ * Hash the next 3-byte sequence in the input buffer
+ */
+static inline unsigned int ntfs_hash(const u8 *p)
+{
+	u32 str;
+	u32 hash;
+
+	/*
+	 * Unaligned access allowed, and little endian CPU.
+	 * Callers ensure that at least 4 (not 3) bytes are remaining.
+	 */
+	str = *(const u32 *)p & 0xFFFFFF;
+	hash = str * HASH_MULTIPLIER;
+
+	/* High bits are more random than the low bits. */
+	return hash >> (32 - HASH_SHIFT);
+}
+
+/*
+ * Search for the longest sequence matching current position
+ *
+ * A hash table, each entry of which points to a chain of sequence
+ * positions sharing the corresponding hash code, is maintained to speed up
+ * searching for matches. To maintain the hash table, either
+ * ntfs_best_match() or ntfs_skip_position() has to be called for each
+ * consecutive position.
+ *
+ * This function is heavily used; it has to be optimized carefully.
+ *
+ * This function sets pctx->size and pctx->rel to the length and offset,
+ * respectively, of the longest match found.
+ *
+ * The minimum match length is assumed to be 3, and the maximum match
+ * length is assumed to be pctx->mxsz. If this function produces
+ * pctx->size < 3, then no match was found.
+ *
+ * Note: for the following reasons, this function is not guaranteed to find
+ * *the* longest match up to pctx->mxsz:
+ *
+ *	(1) If this function finds a match of NICE_MATCH_LEN bytes or greater,
+ *	    it ends early because a match this long is good enough and it's not
+ *	    worth spending more time searching.
+ *
+ *	(2) If this function considers MAX_SEARCH_DEPTH matches at a single
+ *	    position, it ends early and returns the longest match found so far.
+ *	    This saves a lot of time on degenerate inputs.
+ */
+static void ntfs_best_match(struct COMPRESS_CONTEXT *pctx, const int i,
+		int best_len)
+{
+	const u8 * const inbuf = pctx->inbuf;
+	const u8 * const strptr = &inbuf[i]; /* String we're matching against */
+	s16 * const prev = pctx->prev;
+	const int max_len = min(pctx->bufsize - i, pctx->mxsz);
+	const int nice_len = min(NICE_MATCH_LEN, max_len);
+	int depth_remaining = MAX_SEARCH_DEPTH;
+	const u8 *best_matchptr = strptr;
+	unsigned int hash;
+	s16 cur_match;
+	const u8 *matchptr;
+	int len;
+
+	if (max_len < 4)
+		goto out;
+
+	/* Insert the current sequence into the appropriate hash chain. */
+	hash = ntfs_hash(strptr);
+	cur_match = pctx->head[hash];
+	prev[i] = cur_match;
+	pctx->head[hash] = i;
+
+	if (best_len >= max_len) {
+		/*
+		 * Lazy match is being attempted, but there aren't enough length
+		 * bits remaining to code a longer match.
+		 */
+		goto out;
+	}
+
+	/* Search the appropriate hash chain for matches. */
+
+	for (; cur_match >= 0 && depth_remaining--; cur_match = prev[cur_match]) {
+		matchptr = &inbuf[cur_match];
+
+		/*
+		 * Considering the potential match at 'matchptr': is it longer
+		 * than 'best_len'?
+		 *
+		 * The bytes at index 'best_len' are the most likely to differ,
+		 * so check them first.
+		 *
+		 * The bytes at indices 'best_len - 1' and '0' are less
+		 * important to check separately. But doing so still gives a
+		 * slight performance improvement, at least on x86_64, probably
+		 * because they create separate branches for the CPU to predict
+		 * independently of the branches in the main comparison loops.
+		 */
+		if (matchptr[best_len] != strptr[best_len] ||
+		    matchptr[best_len - 1] != strptr[best_len - 1] ||
+		    matchptr[0] != strptr[0])
+			goto next_match;
+
+		for (len = 1; len < best_len - 1; len++)
+			if (matchptr[len] != strptr[len])
+				goto next_match;
+
+		/*
+		 * The match is the longest found so far ---
+		 * at least 'best_len' + 1 bytes. Continue extending it.
+		 */
+
+		best_matchptr = matchptr;
+
+		do {
+			if (++best_len >= nice_len) {
+				/*
+				 * 'nice_len' reached; don't waste time
+				 * searching for longer matches. Extend the
+				 * match as far as possible and terminate the
+				 * search.
+				 */
+				while (best_len < max_len &&
+				       (best_matchptr[best_len] ==
+						strptr[best_len]))
+					best_len++;
+				goto out;
+			}
+		} while (best_matchptr[best_len] == strptr[best_len]);
+
+		/* Found a longer match, but 'nice_len' not yet reached. */
+
+next_match:
+		/* Continue to next match in the chain. */
+		;
+	}
+
+	/*
+	 * Reached end of chain, or ended early due to reaching the maximum
+	 * search depth.
+	 */
+
+out:
+	/* Return the longest match we were able to find. */
+	pctx->size = best_len;
+	pctx->rel = best_matchptr - strptr; /* given as a negative number! */
+}
+
+/*
+ * Advance the match-finder, but don't search for matches.
+ */
+static void ntfs_skip_position(struct COMPRESS_CONTEXT *pctx, const int i)
+{
+	unsigned int hash;
+
+	if (pctx->bufsize - i < 4)
+		return;
+
+	/* Insert the current sequence into the appropriate hash chain. */
+	hash = ntfs_hash(pctx->inbuf + i);
+	pctx->prev[i] = pctx->head[hash];
+	pctx->head[hash] = i;
+}
+
+/*
+ * Compress a 4096-byte block
+ *
+ * Returns a header of two bytes followed by the compressed data.
+ * If compression is not effective, the header and an uncompressed
+ * block are returned.
+ *
+ * Note: two bytes may be output before output buffer overflow
+ * is detected, so a 4100-byte output buffer must be reserved.
+ *
+ * Returns the size of the compressed block, including the
+ * header (minimal size is 2, maximum size is 4098),
+ * or 0 if an error has been met.
+ */
+static unsigned int ntfs_compress_block(const char *inbuf, const int bufsize,
+		char *outbuf)
+{
+	struct COMPRESS_CONTEXT *pctx;
+	int i; /* current position */
+	int j; /* end of best match from current position */
+	int k; /* end of best match from next position */
+	int offs; /* offset to best match */
+	int bp; /* bits to store offset */
+	int bp_cur; /* saved bits to store offset at current position */
+	int mxoff; /* max match offset : 1 << bp */
+	unsigned int xout;
+	unsigned int q; /* aggregated offset and size */
+	int have_match; /* do we have a match at the current position? */
+	char *ptag; /* location reserved for a tag */
+	int tag; /* current value of tag */
+	int ntag; /* count of bits still undefined in tag */
+
+	pctx = ntfs_malloc_nofs(sizeof(struct COMPRESS_CONTEXT));
+	if (!pctx)
+		return 0;
+
+	/*
+	 * All hash chains start as empty. The special value '-1' indicates the
+	 * end of each hash chain.
+	 */
+	memset(pctx->head, 0xFF, sizeof(pctx->head));
+
+	pctx->inbuf = (const unsigned char *)inbuf;
+	pctx->bufsize = bufsize;
+	xout = 2;
+	i = 0;
+	bp = 4;
+	mxoff = 1 << bp;
+	pctx->mxsz = (1 << (16 - bp)) + 2;
+	have_match = 0;
+	tag = 0;
+	ntag = 8;
+	ptag = &outbuf[xout++];
+
+	while ((i < bufsize) && (xout < (NTFS_SB_SIZE + 2))) {
+
+		/*
+		 * This implementation uses "lazy" parsing: it always chooses
+		 * the longest match, unless the match at the next position is
+		 * longer. This is the same strategy used by the high
+		 * compression modes of zlib.
+		 */
+		if (!have_match) {
+			/*
+			 * Find the longest match at the current position. But
+			 * first adjust the maximum match length if needed.
+			 * (This loop might need to run more than one time in
+			 * the case that we just output a long match.)
+			 */
+			while (mxoff < i) {
+				bp++;
+				mxoff <<= 1;
+				pctx->mxsz = (pctx->mxsz + 2) >> 1;
+			}
+			ntfs_best_match(pctx, i, 2);
+		}
+
+		if (pctx->size >= 3) {
+			/* Found a match at the current position.
+			 */
+			j = i + pctx->size;
+			bp_cur = bp;
+			offs = pctx->rel;
+
+			if (pctx->size >= NICE_MATCH_LEN) {
+				/* Choose long matches immediately. */
+				q = (~offs << (16 - bp_cur)) + (j - i - 3);
+				outbuf[xout++] = q & 255;
+				outbuf[xout++] = (q >> 8) & 255;
+				tag |= (1 << (8 - ntag));
+
+				if (j == bufsize) {
+					/*
+					 * Shortcut if the match extends to the
+					 * end of the buffer.
+					 */
+					i = j;
+					--ntag;
+					break;
+				}
+				i += 1;
+				do {
+					ntfs_skip_position(pctx, i);
+				} while (++i != j);
+				have_match = 0;
+			} else {
+				/*
+				 * Check for a longer match at the next
+				 * position.
+				 */
+
+				/*
+				 * Doesn't need to be while() since we just
+				 * adjusted the maximum match length at the
+				 * previous position.
+				 */
+				if (mxoff < i + 1) {
+					bp++;
+					mxoff <<= 1;
+					pctx->mxsz = (pctx->mxsz + 2) >> 1;
+				}
+				ntfs_best_match(pctx, i + 1, pctx->size);
+				k = i + 1 + pctx->size;
+
+				if (k > (j + 1)) {
+					/*
+					 * Next match is longer.
+					 * Output a literal.
+					 */
+					outbuf[xout++] = inbuf[i++];
+					have_match = 1;
+				} else {
+					/*
+					 * Next match isn't longer.
+					 * Output the current match.
+					 */
+					q = (~offs << (16 - bp_cur)) +
+							(j - i - 3);
+					outbuf[xout++] = q & 255;
+					outbuf[xout++] = (q >> 8) & 255;
+					tag |= (1 << (8 - ntag));
+
+					/*
+					 * The minimum match length is 3, and
+					 * we've run two bytes through the
+					 * matchfinder already. So the minimum
+					 * number of positions we need to skip
+					 * is 1.
+					 */
+					i += 2;
+					do {
+						ntfs_skip_position(pctx, i);
+					} while (++i != j);
+					have_match = 0;
+				}
+			}
+		} else {
+			/* No match at current position. Output a literal. */
+			outbuf[xout++] = inbuf[i++];
+			have_match = 0;
+		}
+
+		/* Store the tag if fully used. */
+		if (!--ntag) {
+			*ptag = tag;
+			ntag = 8;
+			ptag = &outbuf[xout++];
+			tag = 0;
+		}
+	}
+
+	/* Store the last tag if partially used. */
+	if (ntag == 8)
+		xout--;
+	else
+		*ptag = tag;
+
+	/* Determine whether to store the data compressed or uncompressed.
+	 */
+	if ((i >= bufsize) && (xout < (NTFS_SB_SIZE + 2))) {
+		/* Compressed. */
+		outbuf[0] = (xout - 3) & 255;
+		outbuf[1] = 0xb0 + (((xout - 3) >> 8) & 15);
+	} else {
+		/* Uncompressed. */
+		memcpy(&outbuf[2], inbuf, bufsize);
+		if (bufsize < NTFS_SB_SIZE)
+			memset(&outbuf[bufsize + 2], 0, NTFS_SB_SIZE - bufsize);
+		outbuf[0] = 0xff;
+		outbuf[1] = 0x3f;
+		xout = NTFS_SB_SIZE + 2;
+	}
+
+	/*
+	 * Free the compression context and return the total number of bytes
+	 * written to 'outbuf'.
+	 */
+	ntfs_free(pctx);
+	return xout;
+}
+
+static int ntfs_write_cb(struct ntfs_inode *ni, loff_t pos, struct page **pages,
+		int pages_per_cb)
+{
+	struct ntfs_volume *vol = ni->vol;
+	char *outbuf = NULL, *pbuf, *inbuf;
+	u32 compsz, p, insz = pages_per_cb << PAGE_SHIFT;
+	s32 rounded, bio_size;
+	unsigned int sz, bsz;
+	bool fail = false, allzeroes;
+	/* a single compressed zero */
+	static char onezero[] = {0x01, 0xb0, 0x00, 0x00};
+	/* a couple of compressed zeroes */
+	static char twozeroes[] = {0x02, 0xb0, 0x00, 0x00, 0x00};
+	/* more compressed zeroes, to be followed by some count */
+	static char morezeroes[] = {0x03, 0xb0, 0x02, 0x00};
+	struct page **pages_disk = NULL, *pg;
+	s64 bio_lcn;
+	struct runlist_element *rlc, *rl;
+	int i, err;
+	int pages_count = (round_up(ni->itype.compressed.block_size + 2 *
+		(ni->itype.compressed.block_size / NTFS_SB_SIZE) + 2, PAGE_SIZE)) / PAGE_SIZE;
+	size_t new_rl_count;
+	struct bio *bio = NULL;
+	loff_t new_length;
+	s64 new_vcn;
+
+	inbuf = vmap(pages, pages_per_cb, VM_MAP, PAGE_KERNEL_RO);
+	if (!inbuf)
+		return -ENOMEM;
+
+	/* may need 2 extra bytes per block and 2 more bytes */
+	pages_disk = kcalloc(pages_count, sizeof(struct page *), GFP_NOFS);
+	if (!pages_disk) {
+		vunmap(inbuf);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < pages_count; i++) {
+		pg = alloc_page(GFP_KERNEL);
+		if (!pg) {
+			err = -ENOMEM;
+			goto out;
+		}
+		pages_disk[i] = pg;
+		lock_page(pg);
+		kmap_local_page(pg);
+	}
+
+	outbuf = vmap(pages_disk, pages_count, VM_MAP, PAGE_KERNEL);
+	if (!outbuf) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	compsz = 0;
+	allzeroes = true;
+	for (p = 0; (p < insz) && !fail; p += NTFS_SB_SIZE) {
+		if ((p + NTFS_SB_SIZE) < insz)
+			bsz = NTFS_SB_SIZE;
+		else
+			bsz = insz - p;
+		pbuf = &outbuf[compsz];
+		sz = ntfs_compress_block(&inbuf[p], bsz, pbuf);
+		/* fail if all the clusters (or more) are needed */
+		if (!sz || ((compsz + sz + vol->cluster_size + 2) >
+				ni->itype.compressed.block_size))
+			fail = true;
+		else {
+			if (allzeroes) {
+				/* check whether this is all zeroes */
+				switch (sz) {
+				case 4:
+					allzeroes = !memcmp(pbuf, onezero, 4);
+					break;
+				case 5:
+					allzeroes = !memcmp(pbuf, twozeroes, 5);
+					break;
+				case 6:
+					allzeroes = !memcmp(pbuf, morezeroes, 4);
+					break;
+				default:
+					allzeroes = false;
+					break;
+				}
+			}
+			compsz += sz;
+		}
+	}
+
+	if (!fail && !allzeroes) {
+		outbuf[compsz++] = 0;
+		outbuf[compsz++] = 0;
+		rounded = ((compsz - 1) | (vol->cluster_size - 1)) + 1;
+		memset(&outbuf[compsz], 0, rounded - compsz);
+		bio_size = rounded;
+		pages = pages_disk;
+	} else if (allzeroes) {
+		err = 0;
+		goto out;
+	} else {
+		bio_size = insz;
+	}
+
+	new_vcn = (pos & ~(ni->itype.compressed.block_size - 1)) >> vol->cluster_size_bits;
+	new_length = round_up(bio_size, vol->cluster_size) >> vol->cluster_size_bits;
+
+	err = ntfs_non_resident_attr_punch_hole(ni, new_vcn, ni->itype.compressed.block_clusters);
+	if (err < 0)
+		goto out;
+
+	rlc = ntfs_cluster_alloc(vol, new_vcn, new_length, -1, DATA_ZONE,
+			false, true, true);
+	if (IS_ERR(rlc)) {
+		err = PTR_ERR(rlc);
+		goto out;
+	}
+
+	bio_lcn = rlc->lcn;
+	down_write(&ni->runlist.lock);
+	rl = ntfs_runlists_merge(&ni->runlist, rlc, 0, &new_rl_count);
+	if (IS_ERR(rl)) {
+		up_write(&ni->runlist.lock);
+		ntfs_error(vol->sb, "Failed to merge runlists");
+		err = PTR_ERR(rl);
+		if (ntfs_cluster_free_from_rl(vol, rlc))
+			ntfs_error(vol->sb,
+				   "Failed to free hot clusters.");
+		ntfs_free(rlc);
+		goto out;
+	}
+
+	ni->runlist.count = new_rl_count;
+	ni->runlist.rl = rl;
+
+	err = ntfs_attr_update_mapping_pairs(ni, 0);
+	up_write(&ni->runlist.lock);
+	if (err) {
+		err = -EIO;
+		goto out;
+	}
+
+	i = 0;
+	while (bio_size > 0) {
+		int page_size;
+
+		if (bio_size >= PAGE_SIZE) {
+			page_size = PAGE_SIZE;
+			bio_size -= PAGE_SIZE;
+		} else {
+			page_size = bio_size;
+			bio_size = 0;
+		}
+
+setup_bio:
+		if (!bio) {
+			bio = ntfs_setup_bio(vol, REQ_OP_WRITE, bio_lcn + i, 0);
+			if (!bio) {
+				err = -ENOMEM;
+				goto out;
+			}
+		}
+
+		if (!bio_add_page(bio, pages[i], page_size, 0)) {
+			err = submit_bio_wait(bio);
+			bio_put(bio);
+			if (err)
+				goto out;
+			bio = NULL;
+			goto setup_bio;
+		}
+		i++;
+	}
+
+	err = submit_bio_wait(bio);
+	bio_put(bio);
+out:
+	vunmap(outbuf);
+	for (i = 0; i < pages_count; i++) {
+		pg = pages_disk[i];
+		if (pg) {
+			kunmap_local(page_address(pg));
+			unlock_page(pg);
+			put_page(pg);
+		}
+	}
+	kfree(pages_disk);
+	vunmap(inbuf);
+	NInoSetFileNameDirty(ni);
+	mark_mft_record_dirty(ni);
+
+	return err;
+}
+
+int ntfs_compress_write(struct ntfs_inode *ni, loff_t pos, size_t count,
+		struct iov_iter *from)
+{
+	struct folio *folio;
+	struct page **pages = NULL, *page;
+	int pages_per_cb = ni->itype.compressed.block_size >> PAGE_SHIFT;
+	int cb_size = ni->itype.compressed.block_size, cb_off, err = 0;
+	int i, ip;
+	size_t written = 0;
+	struct address_space *mapping = VFS_I(ni)->i_mapping;
+
+	pages = kmalloc_array(pages_per_cb, sizeof(struct page *), GFP_NOFS);
+	if (!pages)
+		return -ENOMEM;
+
+	while (count) {
+		pgoff_t index;
+		size_t copied, bytes;
+		int off;
+
+		off = pos & (cb_size - 1);
+		bytes = cb_size - off;
+		if (bytes > count)
+			bytes = count;
+
+		cb_off = pos & ~(cb_size - 1);
+		index = cb_off >> PAGE_SHIFT;
+
+		if (unlikely(fault_in_iov_iter_readable(from, bytes))) {
+			err = -EFAULT;
+			goto out;
+		}
+
+		for (i = 0; i <
+				pages_per_cb; i++) {
+			folio = ntfs_read_mapping_folio(mapping, index + i);
+			if (IS_ERR(folio)) {
+				for (ip = 0; ip < i; ip++) {
+					folio_unlock(page_folio(pages[ip]));
+					folio_put(page_folio(pages[ip]));
+				}
+				err = PTR_ERR(folio);
+				goto out;
+			}
+
+			folio_lock(folio);
+			pages[i] = folio_page(folio, 0);
+		}
+
+		WARN_ON(!bytes);
+		copied = 0;
+		ip = off >> PAGE_SHIFT;
+		off = offset_in_page(pos);
+
+		for (;;) {
+			size_t cp, tail = PAGE_SIZE - off;
+
+			page = pages[ip];
+			cp = copy_folio_from_iter_atomic(page_folio(page), off,
+					min(tail, bytes), from);
+			flush_dcache_page(page);
+
+			copied += cp;
+			bytes -= cp;
+			if (!bytes || !cp)
+				break;
+
+			if (cp < tail) {
+				off += cp;
+			} else {
+				ip++;
+				off = 0;
+			}
+		}
+
+		err = ntfs_write_cb(ni, pos, pages, pages_per_cb);
+
+		for (i = 0; i < pages_per_cb; i++) {
+			folio = page_folio(pages[i]);
+			if (i < ip) {
+				folio_clear_dirty(folio);
+				folio_mark_uptodate(folio);
+			}
+			folio_unlock(folio);
+			folio_put(folio);
+		}
+
+		if (err)
+			goto out;
+
+		cond_resched();
+		pos += copied;
+		written += copied;
+		count = iov_iter_count(from);
+	}
+
+out:
+	kfree(pages);
+	if (err < 0)
+		written = err;
+
+	return written;
+}
-- 
2.25.1