From: Li Lingfeng <lilingfeng3@huawei.com>
Subject: [PATCH 1/2] nfs: handle failure of nfs_get_lock_context in unlock path
Date: Sat, 19 Apr 2025 16:53:54 +0800
Message-ID: <20250419085355.1451457-2-lilingfeng3@huawei.com>
In-Reply-To: <20250419085355.1451457-1-lilingfeng3@huawei.com>
References: <20250419085355.1451457-1-lilingfeng3@huawei.com>

When memory is insufficient, the allocation of nfs_lock_context in
nfs_get_lock_context() fails and it returns ERR_PTR(-ENOMEM). If we
mistakenly treat an nfs4_unlockdata structure whose l_ctx member has been
set to that error pointer as valid and go on to execute rpc_run_task(),
this triggers a NULL pointer dereference in nfs4_locku_prepare().
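As background, nfs_get_lock_context() reports allocation failure through
the kernel's ERR_PTR() convention rather than by returning NULL, so its
return value has to be screened with IS_ERR() before it is stored. A
minimal sketch of that contract, assuming the same ctx and p variables as
in nfs4_alloc_unlockdata() (illustrative only, not the exact hunk applied
below):

	struct nfs_lock_context *l_ctx;

	l_ctx = nfs_get_lock_context(ctx);
	if (IS_ERR(l_ctx)) {
		/* e.g. ERR_PTR(-ENOMEM) when the context allocation fails */
		kfree(p);	/* drop the half-built nfs4_unlockdata */
		return NULL;	/* never let an error pointer reach p->l_ctx */
	}
	p->l_ctx = l_ctx;	/* safe: l_ctx is a real nfs_lock_context here */

When such an error pointer is dereferenced instead, the access starts at
0xfffffffffffffff4 (the bit pattern of ERR_PTR(-ENOMEM)); adding a small
struct field offset wraps around to a tiny address such as 0xc, which is
why an ERR_PTR dereference surfaces as a NULL-pointer-style oops.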
For example:

BUG: kernel NULL pointer dereference, address: 000000000000000c
PGD 0 P4D 0
Oops: Oops: 0000 [#1] SMP PTI
CPU: 15 UID: 0 PID: 12 Comm: kworker/u64:0 Not tainted 6.15.0-rc2-dirty #60
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-2.fc40
Workqueue: rpciod rpc_async_schedule
RIP: 0010:nfs4_locku_prepare+0x35/0xc2
Code: 89 f2 48 89 fd 48 c7 c7 68 69 ef b5 53 48 8b 8e 90 00 00 00 48 89 f3
RSP: 0018:ffffbbafc006bdb8 EFLAGS: 00010246
RAX: 000000000000004b RBX: ffff9b964fc1fa00 RCX: 0000000000000000
RDX: 0000000000000000 RSI: fffffffffffffff4 RDI: ffff9ba53fddbf40
RBP: ffff9ba539934000 R08: 0000000000000000 R09: ffffbbafc006bc38
R10: ffffffffb6b689c8 R11: 0000000000000003 R12: ffff9ba539934030
R13: 0000000000000001 R14: 0000000004248060 R15: ffffffffb56d1c30
FS:  0000000000000000(0000) GS:ffff9ba5881f0000(0000) knlGS:00000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000000000c CR3: 000000093f244000 CR4: 00000000000006f0
Call Trace:
 __rpc_execute+0xbc/0x480
 rpc_async_schedule+0x2f/0x40
 process_one_work+0x232/0x5d0
 worker_thread+0x1da/0x3d0
 ? __pfx_worker_thread+0x10/0x10
 kthread+0x10d/0x240
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x34/0x50
 ? __pfx_kthread+0x10/0x10
 ret_from_fork_asm+0x1a/0x30
Modules linked in:
CR2: 000000000000000c
---[ end trace 0000000000000000 ]---

Free the allocated nfs4_unlockdata when nfs_get_lock_context() fails and
return NULL so that the subsequent rpc_run_task() call is never made,
preventing the NULL pointer dereference.

Fixes: f30cb757f680 ("NFS: Always wait for I/O completion before unlock")
Signed-off-by: Li Lingfeng
Reviewed-by: Jeff Layton
---
 fs/nfs/nfs4proc.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 970f28dbf253..9f5689c43a50 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -7074,10 +7074,18 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl,
 	struct nfs4_unlockdata *p;
 	struct nfs4_state *state = lsp->ls_state;
 	struct inode *inode = state->inode;
+	struct nfs_lock_context *l_ctx;
 
 	p = kzalloc(sizeof(*p), GFP_KERNEL);
 	if (p == NULL)
 		return NULL;
+	l_ctx = nfs_get_lock_context(ctx);
+	if (!IS_ERR(l_ctx)) {
+		p->l_ctx = l_ctx;
+	} else {
+		kfree(p);
+		return NULL;
+	}
 	p->arg.fh = NFS_FH(inode);
 	p->arg.fl = &p->fl;
 	p->arg.seqid = seqid;
@@ -7085,7 +7093,6 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl,
 	p->lsp = lsp;
 	/* Ensure we don't close file until we're done freeing locks! */
 	p->ctx = get_nfs_open_context(ctx);
-	p->l_ctx = nfs_get_lock_context(ctx);
 	locks_init_lock(&p->fl);
 	locks_copy_lock(&p->fl, fl);
 	p->server = NFS_SERVER(inode);
-- 
2.31.1