ipc_msg_send_request() waits for a generic netlink reply using an
ipc_msg_table_entry on the stack. The generic netlink handler
(handle_generic_event()/handle_response()) fills entry->response under
ipc_msg_table_lock, but ipc_msg_send_request() validates and frees
entry->response without holding that lock.
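
Concretely, the pre-patch waiter side looks roughly like this (reconstructed
from the unchanged context lines in the diff below; setup and error paths
omitted):

	ret = wait_event_interruptible_timeout(entry.wait,
					       entry.response != NULL,
					       IPC_WAIT_TIMEOUT);
	if (entry.response) {
		ret = ipc_validate_msg(&entry);
		if (ret) {
			/*
			 * Freed without ipc_msg_table_lock, while the entry
			 * is still hashed and reachable by handle_response().
			 */
			kvfree(entry.response);
			entry.response = NULL;
		}
	}
out:
	down_write(&ipc_msg_table_lock);
	hash_del(&entry.ipc_table_hlist);
	up_write(&ipc_msg_table_lock);
	return entry.response;

The hash_del() that makes the entry unreachable only happens afterwards, so
the netlink handler can still find the entry while the buffer is being freed.
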
Under high concurrency this allows a race where handle_response() is
copying data into entry->response while ipc_msg_send_request() has just
freed it, leading to a slab-use-after-free reported by KASAN in
handle_generic_event():
BUG: KASAN: slab-use-after-free in handle_generic_event+0x3c4/0x5f0 [ksmbd]
Write of size 12 at addr ffff888198ee6e20 by task pool/109349
...
Freed by task:
kvfree
ipc_msg_send_request [ksmbd]
ksmbd_rpc_open -> ksmbd_session_rpc_open [ksmbd]
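
One interleaving that produces this (illustrative only; the handler steps are
paraphrased from the description above, not exact code):

	ipc_msg_send_request()                  handle_response()
	----------------------                  -----------------
	                                        finds entry in ipc_msg_table
	                                        allocates entry->response
	wait returns (wakeup/timeout)
	ipc_validate_msg() fails
	kvfree(entry.response)
	                                        copies the reply into
	                                        entry->response   <-- UAF write
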
Fix by:
- Taking ipc_msg_table_lock in ipc_msg_send_request() while validating
entry->response, freeing it when invalid, and removing the entry from
ipc_msg_table.
- Returning the final entry->response pointer to the caller only after
the hash entry is removed under the lock.
- Returning NULL in the error path, preserving the original API
semantics.
This makes all accesses to entry->response consistent with
handle_response(), which already updates and fills the response buffer
under ipc_msg_table_lock, and closes the race that allowed the UAF.
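
For reference, the handler side already does its lookup and fill with the
lock held, roughly like this (a paraphrased sketch rather than the exact
ksmbd code; the down_read() flavor and loop details are assumptions):

	down_read(&ipc_msg_table_lock);
	hash_for_each_possible(ipc_msg_table, entry, ipc_table_hlist, handle) {
		if (entry->handle != handle)
			continue;
		/* allocate entry->response, copy the netlink payload into it */
		wake_up_interruptible(&entry->wait);
		break;
	}
	up_read(&ipc_msg_table_lock);

With the waiter now taking the rwsem for write around validation, the free
and hash_del(), the two sides can no longer touch entry->response
concurrently.
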
Reported-by: Qianchang Zhao <pioooooooooip@gmail.com>
Reported-by: Zhitong Liu <liuzhitong1993@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Qianchang Zhao <pioooooooooip@gmail.com>
---
fs/smb/server/transport_ipc.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/fs/smb/server/transport_ipc.c b/fs/smb/server/transport_ipc.c
index 46f87fd1c..7b1a060da 100644
--- a/fs/smb/server/transport_ipc.c
+++ b/fs/smb/server/transport_ipc.c
@@ -532,6 +532,7 @@ static int ipc_validate_msg(struct ipc_msg_table_entry *entry)
 static void *ipc_msg_send_request(struct ksmbd_ipc_msg *msg, unsigned int handle)
 {
 	struct ipc_msg_table_entry entry;
+	void *response = NULL;
 	int ret;
 
 	if ((int)handle < 0)
@@ -553,6 +554,8 @@ static void *ipc_msg_send_request(struct ksmbd_ipc_msg *msg, unsigned int handle
 	ret = wait_event_interruptible_timeout(entry.wait,
 					       entry.response != NULL,
 					       IPC_WAIT_TIMEOUT);
+
+	down_write(&ipc_msg_table_lock);
 	if (entry.response) {
 		ret = ipc_validate_msg(&entry);
 		if (ret) {
@@ -560,11 +563,19 @@ static void *ipc_msg_send_request(struct ksmbd_ipc_msg *msg, unsigned int handle
 			entry.response = NULL;
 		}
 	}
+
+	response = entry.response;
+	hash_del(&entry.ipc_table_hlist);
+	up_write(&ipc_msg_table_lock);
+
+	return response;
+
 out:
 	down_write(&ipc_msg_table_lock);
 	hash_del(&entry.ipc_table_hlist);
 	up_write(&ipc_msg_table_lock);
-	return entry.response;
+
+	return NULL;
 }
 
 static int ksmbd_ipc_heartbeat_request(void)
--
2.34.1
On Wed, Nov 26, 2025 at 10:49 AM Qianchang Zhao <pioooooooooip@gmail.com> wrote:
> [quoted patch trimmed]

I have directly updated your patch and applied it to #ksmbd-for-next-next.
Let me know if the attached patch has some issue. Thanks!