From: Heming Zhao
To: joseph.qi@linux.alibaba.com, mark@fasheh.com, jlbec@evilplan.org
Cc: Heming Zhao, ocfs2-devel@lists.linux.dev, linux-kernel@vger.kernel.org, glass.su@suse.com
Subject: [PATCH v6 1/2] ocfs2: give ocfs2 the ability to reclaim suballocator free bg
Date: Fri, 12 Dec 2025 15:45:03 +0800
Message-ID: <20251212074505.25962-2-heming.zhao@suse.com>
In-Reply-To: <20251212074505.25962-1-heming.zhao@suse.com>
References: <20251212074505.25962-1-heming.zhao@suse.com>

The current ocfs2 code cannot reclaim suballocator block group space. In some cases this causes ocfs2 to hold on to a lot of space. For example, when many small files are created, the space is held/managed by '//inode_alloc'; after the user deletes all of those small files, the space never returns to '//global_bitmap'. This prevents ocfs2 from providing the needed space even when there is enough free space in a small ocfs2 volume.

This patch gives ocfs2 the ability to reclaim suballocator free space when a block group becomes free. For performance reasons, the first suballocator block group is kept active.
Signed-off-by: Heming Zhao
Reviewed-by: Su Yue
Reviewed-by: Joseph Qi
---
 fs/ocfs2/suballoc.c | 308 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 299 insertions(+), 9 deletions(-)

diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
index 6ac4dcd54588..9a19f5230c8c 100644
--- a/fs/ocfs2/suballoc.c
+++ b/fs/ocfs2/suballoc.c
@@ -294,6 +294,74 @@ static int ocfs2_validate_group_descriptor(struct super_block *sb,
 	return ocfs2_validate_gd_self(sb, bh, 0);
 }
 
+/*
+ * The hint group descriptor (gd) may already have been released
+ * in _ocfs2_free_suballoc_bits(). We first check the gd signature,
+ * then perform the standard ocfs2_read_group_descriptor() jobs.
+ *
+ * If the gd signature is invalid, we return 'rc=0' and set
+ * '*released=1'. The caller is expected to handle this specific case.
+ * Otherwise, we return the actual error code.
+ *
+ * We treat the gd signature corruption case as a release case. The
+ * caller ocfs2_claim_suballoc_bits() will use ocfs2_search_chain()
+ * to search each gd block. The code will eventually find this
+ * corrupted gd block - late, but not missed.
+ *
+ * Note:
+ * The caller is responsible for initializing the '*released' status.
+ */
+static int ocfs2_read_hint_group_descriptor(struct inode *inode,
+				struct ocfs2_dinode *di, u64 gd_blkno,
+				struct buffer_head **bh, int *released)
+{
+	int rc;
+	struct buffer_head *tmp = *bh;
+	struct ocfs2_group_desc *gd;
+
+	rc = ocfs2_read_block(INODE_CACHE(inode), gd_blkno, &tmp, NULL);
+	if (rc)
+		goto out;
+
+	gd = (struct ocfs2_group_desc *) tmp->b_data;
+	if (!OCFS2_IS_VALID_GROUP_DESC(gd)) {
+		/*
+		 * Invalid gd cache was set in ocfs2_read_block(),
+		 * which will affect block_group allocation.
+		 * Path:
+		 * ocfs2_reserve_suballoc_bits
+		 *  ocfs2_block_group_alloc
+		 *   ocfs2_block_group_alloc_contig
+		 *    ocfs2_set_new_buffer_uptodate
+		 */
+		ocfs2_remove_from_cache(INODE_CACHE(inode), tmp);
+		*released = 1; /* we return 'rc=0' for this case */
+		goto free_bh;
+	}
+
+	/* the jobs below are the same as in ocfs2_read_group_descriptor() */
+	if (!buffer_jbd(tmp)) {
+		rc = ocfs2_validate_group_descriptor(inode->i_sb, tmp);
+		if (rc)
+			goto free_bh;
+	}
+
+	rc = ocfs2_validate_gd_parent(inode->i_sb, di, tmp, 0);
+	if (rc)
+		goto free_bh;
+
+	/* If ocfs2_read_block() got us a new bh, pass it up. */
+	if (!*bh)
+		*bh = tmp;
+
+	return rc;
+
+free_bh:
+	brelse(tmp);
+out:
+	return rc;
+}
+
 int ocfs2_read_group_descriptor(struct inode *inode, struct ocfs2_dinode *di,
 				u64 gd_blkno, struct buffer_head **bh)
 {
@@ -1724,7 +1792,7 @@ static int ocfs2_search_one_group(struct ocfs2_alloc_context *ac,
 				  u32 bits_wanted,
 				  u32 min_bits,
 				  struct ocfs2_suballoc_result *res,
-				  u16 *bits_left)
+				  u16 *bits_left, int *released)
 {
 	int ret;
 	struct buffer_head *group_bh = NULL;
@@ -1732,9 +1800,11 @@ static int ocfs2_search_one_group(struct ocfs2_alloc_context *ac,
 	struct ocfs2_dinode *di = (struct ocfs2_dinode *)ac->ac_bh->b_data;
 	struct inode *alloc_inode = ac->ac_inode;
 
-	ret = ocfs2_read_group_descriptor(alloc_inode, di,
-					  res->sr_bg_blkno, &group_bh);
-	if (ret < 0) {
+	ret = ocfs2_read_hint_group_descriptor(alloc_inode, di,
+					res->sr_bg_blkno, &group_bh, released);
+	if (*released) {
+		return 0;
+	} else if (ret < 0) {
 		mlog_errno(ret);
 		return ret;
 	}
@@ -1949,6 +2019,7 @@ static int ocfs2_claim_suballoc_bits(struct ocfs2_alloc_context *ac,
 				     struct ocfs2_suballoc_result *res)
 {
 	int status;
+	int released = 0;
 	u16 victim, i;
 	u16 bits_left = 0;
 	u64 hint = ac->ac_last_group;
@@ -1975,6 +2046,7 @@ static int ocfs2_claim_suballoc_bits(struct ocfs2_alloc_context *ac,
 		goto bail;
 	}
 
+	/* the hint bg may already have been released; quietly search this group.
+	 */
 	res->sr_bg_blkno = hint;
 	if (res->sr_bg_blkno) {
 		/* Attempt to short-circuit the usual search mechanism
@@ -1982,7 +2054,12 @@ static int ocfs2_claim_suballoc_bits(struct ocfs2_alloc_context *ac,
 		 * allocation group. This helps us maintain some
 		 * contiguousness across allocations. */
 		status = ocfs2_search_one_group(ac, handle, bits_wanted,
-						min_bits, res, &bits_left);
+						min_bits, res, &bits_left,
+						&released);
+		if (released) {
+			res->sr_bg_blkno = 0;
+			goto chain_search;
+		}
 		if (!status)
 			goto set_hint;
 		if (status < 0 && status != -ENOSPC) {
@@ -1990,7 +2067,7 @@ static int ocfs2_claim_suballoc_bits(struct ocfs2_alloc_context *ac,
 			mlog_errno(status);
 			goto bail;
 		}
 	}
-
+chain_search:
 	cl = (struct ocfs2_chain_list *) &fe->id2.i_chain;
 
 	victim = ocfs2_find_victim_chain(cl);
@@ -2102,6 +2179,12 @@ int ocfs2_claim_metadata(handle_t *handle,
 	return status;
 }
 
+/*
+ * After ocfs2 gained the ability to release a block group's unused
+ * space, ->ip_last_used_group may be invalid, so the ac->ac_last_group
+ * value set by this function needs to be verified.
+ * Refer to the 'hint' in ocfs2_claim_suballoc_bits() for more details.
+ */
 static void ocfs2_init_inode_ac_group(struct inode *dir,
 				      struct buffer_head *parent_di_bh,
 				      struct ocfs2_alloc_context *ac)
@@ -2540,6 +2623,198 @@ static int ocfs2_block_group_clear_bits(handle_t *handle,
 	return status;
 }
 
+/*
+ * Reclaim the suballocator-managed space to the main bitmap.
+ * This function first works on the suballocator to perform the
+ * rec/alloc_inode cleanup job, then switches to the main bitmap
+ * to reclaim the released space.
+ *
+ * handle: the transaction handle
+ * alloc_inode: the suballoc inode
+ * alloc_bh: the buffer_head of the suballoc inode
+ * group_bh: the group descriptor buffer_head managed by the suballocator.
+ *           The caller should release the input group_bh.
+ */
+static int _ocfs2_reclaim_suballoc_to_main(handle_t *handle,
+				struct inode *alloc_inode,
+				struct buffer_head *alloc_bh,
+				struct buffer_head *group_bh)
+{
+	int idx, status = 0;
+	int i, next_free_rec, len = 0;
+	__le16 old_bg_contig_free_bits = 0;
+	u16 start_bit;
+	u32 tmp_used;
+	u64 bg_blkno, start_blk;
+	unsigned int count;
+	struct ocfs2_chain_rec *rec;
+	struct buffer_head *main_bm_bh = NULL;
+	struct inode *main_bm_inode = NULL;
+	struct ocfs2_super *osb = OCFS2_SB(alloc_inode->i_sb);
+	struct ocfs2_dinode *fe = (struct ocfs2_dinode *) alloc_bh->b_data;
+	struct ocfs2_chain_list *cl = &fe->id2.i_chain;
+	struct ocfs2_group_desc *group = (struct ocfs2_group_desc *) group_bh->b_data;
+
+	idx = le16_to_cpu(group->bg_chain);
+	rec = &(cl->cl_recs[idx]);
+
+	status = ocfs2_extend_trans(handle,
+			ocfs2_calc_group_alloc_credits(osb->sb,
+					le16_to_cpu(cl->cl_cpg)));
+	if (status) {
+		mlog_errno(status);
+		goto bail;
+	}
+	status = ocfs2_journal_access_di(handle, INODE_CACHE(alloc_inode),
+			alloc_bh, OCFS2_JOURNAL_ACCESS_WRITE);
+	if (status < 0) {
+		mlog_errno(status);
+		goto bail;
+	}
+
+	/*
+	 * Only clear the suballocator rec item in place.
+	 *
+	 * If idx is not the last record, we don't compress (remove the
+	 * empty item from) cl_recs[]; doing so would require a lot of
+	 * extra work.
+	 *
+	 * Compressing cl_recs[] would look like:
+	 * if (idx != cl->cl_next_free_rec - 1)
+	 *	memmove(&cl->cl_recs[idx], &cl->cl_recs[idx + 1],
+	 *		sizeof(struct ocfs2_chain_rec) *
+	 *		(cl->cl_next_free_rec - idx - 1));
+	 * for (i = idx; i < cl->cl_next_free_rec - 1; i++) {
+	 *	group->bg_chain = "later group->bg_chain";
+	 *	group->bg_blkno = xxx;
+	 *	... ...
+	 * }
+	 */
+
+	tmp_used = le32_to_cpu(fe->id1.bitmap1.i_total);
+	fe->id1.bitmap1.i_total = cpu_to_le32(tmp_used - le32_to_cpu(rec->c_total));
+
+	/* Subtract 1 for the block group itself */
+	tmp_used = le32_to_cpu(fe->id1.bitmap1.i_used);
+	fe->id1.bitmap1.i_used = cpu_to_le32(tmp_used - 1);
+
+	tmp_used = le32_to_cpu(fe->i_clusters);
+	fe->i_clusters = cpu_to_le32(tmp_used - le16_to_cpu(cl->cl_cpg));
+
+	spin_lock(&OCFS2_I(alloc_inode)->ip_lock);
+	OCFS2_I(alloc_inode)->ip_clusters -= le32_to_cpu(fe->i_clusters);
+	fe->i_size = cpu_to_le64(ocfs2_clusters_to_bytes(alloc_inode->i_sb,
+			le32_to_cpu(fe->i_clusters)));
+	spin_unlock(&OCFS2_I(alloc_inode)->ip_lock);
+	i_size_write(alloc_inode, le64_to_cpu(fe->i_size));
+	alloc_inode->i_blocks = ocfs2_inode_sector_count(alloc_inode);
+
+	ocfs2_journal_dirty(handle, alloc_bh);
+	ocfs2_update_inode_fsync_trans(handle, alloc_inode, 0);
+
+	start_blk = le64_to_cpu(rec->c_blkno);
+	count = le32_to_cpu(rec->c_total) / le16_to_cpu(cl->cl_bpc);
+
+	/*
+	 * If the rec is the last one, compress the chain list by
+	 * removing the empty cl_recs[] entries at the end.
+	 */
+	next_free_rec = le16_to_cpu(cl->cl_next_free_rec);
+	if (idx == (next_free_rec - 1)) {
+		len++; /* the last item should be counted first */
+		for (i = (next_free_rec - 2); i > 0; i--) {
+			if (cl->cl_recs[i].c_free == cl->cl_recs[i].c_total)
+				len++;
+			else
+				break;
+		}
+	}
+	le16_add_cpu(&cl->cl_next_free_rec, -len);
+
+	rec->c_free = 0;
+	rec->c_total = 0;
+	rec->c_blkno = 0;
+	ocfs2_remove_from_cache(INODE_CACHE(alloc_inode), group_bh);
+	memset(group, 0, sizeof(struct ocfs2_group_desc));
+
+	/* prepare for reclaiming clusters */
+	main_bm_inode = ocfs2_get_system_file_inode(osb,
+			GLOBAL_BITMAP_SYSTEM_INODE,
+			OCFS2_INVALID_SLOT);
+	if (!main_bm_inode)
+		goto bail; /* ignore the error in the reclaim path */
+
+	inode_lock(main_bm_inode);
+
+	status = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1);
+	if (status < 0)
+		goto free_bm_inode; /* ignore the error in the reclaim path */
+
+	ocfs2_block_to_cluster_group(main_bm_inode, start_blk, &bg_blkno,
+			&start_bit);
+	fe = (struct ocfs2_dinode *) main_bm_bh->b_data;
+	cl = &fe->id2.i_chain;
+	/* reuse group_bh; the caller will release the input group_bh */
+	group_bh = NULL;
+
+	/* reclaim clusters to the global_bitmap */
+	status = ocfs2_read_group_descriptor(main_bm_inode, fe, bg_blkno,
+			&group_bh);
+	if (status < 0) {
+		mlog_errno(status);
+		goto free_bm_bh;
+	}
+	group = (struct ocfs2_group_desc *) group_bh->b_data;
+
+	if ((count + start_bit) > le16_to_cpu(group->bg_bits)) {
+		ocfs2_error(alloc_inode->i_sb,
+			    "reclaim length (%d) exceeds block group length (%d)",
+			    count + start_bit, le16_to_cpu(group->bg_bits));
+		goto free_group_bh;
+	}
+
+	old_bg_contig_free_bits = group->bg_contig_free_bits;
+	status = ocfs2_block_group_clear_bits(handle, main_bm_inode,
+			group, group_bh,
+			start_bit, count, 0,
+			_ocfs2_clear_bit);
+	if (status < 0) {
+		mlog_errno(status);
+		goto free_group_bh;
+	}
+
+	status = ocfs2_journal_access_di(handle, INODE_CACHE(main_bm_inode),
+			main_bm_bh,
+			OCFS2_JOURNAL_ACCESS_WRITE);
+	if (status < 0) {
+		mlog_errno(status);
+		ocfs2_block_group_set_bits(handle, main_bm_inode, group, group_bh,
+				start_bit, count,
+				le16_to_cpu(old_bg_contig_free_bits), 1);
+		goto free_group_bh;
+	}
+
+	idx = le16_to_cpu(group->bg_chain);
+	rec = &(cl->cl_recs[idx]);
+
+	le32_add_cpu(&rec->c_free, count);
+	tmp_used = le32_to_cpu(fe->id1.bitmap1.i_used);
+	fe->id1.bitmap1.i_used = cpu_to_le32(tmp_used - count);
+	ocfs2_journal_dirty(handle, main_bm_bh);
+
+free_group_bh:
+	brelse(group_bh);
+
+free_bm_bh:
+	ocfs2_inode_unlock(main_bm_inode, 1);
+	brelse(main_bm_bh);
+
+free_bm_inode:
+	inode_unlock(main_bm_inode);
+	iput(main_bm_inode);
+
+bail:
+	return status;
+}
+
 /*
  * expects the suballoc inode to already be locked.
  */
@@ -2552,12 +2827,13 @@ static int _ocfs2_free_suballoc_bits(handle_t *handle,
 				     void (*undo_fn)(unsigned int bit,
 						     unsigned long *bitmap))
 {
-	int status = 0;
+	int idx, status = 0;
 	u32 tmp_used;
 	struct ocfs2_dinode *fe = (struct ocfs2_dinode *) alloc_bh->b_data;
 	struct ocfs2_chain_list *cl = &fe->id2.i_chain;
 	struct buffer_head *group_bh = NULL;
 	struct ocfs2_group_desc *group;
+	struct ocfs2_chain_rec *rec;
 	__le16 old_bg_contig_free_bits = 0;
 
 	/* The alloc_bh comes from ocfs2_free_dinode() or
@@ -2603,12 +2879,26 @@ static int _ocfs2_free_suballoc_bits(handle_t *handle,
 		goto bail;
 	}
 
-	le32_add_cpu(&cl->cl_recs[le16_to_cpu(group->bg_chain)].c_free,
-		     count);
+	idx = le16_to_cpu(group->bg_chain);
+	rec = &(cl->cl_recs[idx]);
+
+	le32_add_cpu(&rec->c_free, count);
 	tmp_used = le32_to_cpu(fe->id1.bitmap1.i_used);
 	fe->id1.bitmap1.i_used = cpu_to_le32(tmp_used - count);
 	ocfs2_journal_dirty(handle, alloc_bh);
 
+	/*
+	 * Reclaim suballocator free space.
+	 * Bypass: global_bitmap, non-empty rec, first rec in cl_recs[]
+	 */
+	if (ocfs2_is_cluster_bitmap(alloc_inode) ||
+	    (le32_to_cpu(rec->c_free) != (le32_to_cpu(rec->c_total) - 1)) ||
+	    (le16_to_cpu(cl->cl_next_free_rec) == 1)) {
+		goto bail;
+	}
+
+	_ocfs2_reclaim_suballoc_to_main(handle, alloc_inode, alloc_bh, group_bh);
+
 bail:
 	brelse(group_bh);
 	return status;
-- 
2.43.0