Coredumping currently supports two modes:
(1) Dumping directly into a file somewhere on the filesystem.
(2) Dumping into a pipe connected to a usermode helper process
spawned as a child of the system_unbound_wq or kthreadd.
For simplicity I'm mostly ignoring (1). There are probably still some
users of (1) out there but processing coredumps in this way can be
considered adventurous, especially in the face of set*id binaries.
The most common option should be (2) by now. It works by allowing
userspace to put a string into /proc/sys/kernel/core_pattern like:
|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
The "|" at the beginning indicates to the kernel that a pipe must be
used. The path following the pipe indicator is a path to a binary that
will be spawned as a usermode helper process. Any additional parameters
pass information about the task that is generating the coredump to the
binary that processes the coredump.
In this case systemd-coredump is spawned as a usermode helper. There are
various conceptual consequences of this (non-exhaustive list):
- systemd-coredump is spawned with file descriptor 0 (stdin) connected
to the read-end of the pipe. All other file descriptors are closed.
That specifically includes 1 (stdout) and 2 (stderr). This has already
caused bugs because userspace assumed that this cannot happen (whether
or not that is a sane assumption is irrelevant).
- systemd-coredump will be spawned as a child of system_unbound_wq. So
it is not a child of any userspace process, and specifically not a
child of PID 1, so it cannot be waited upon and is in general a weird
hybrid upcall.
- systemd-coredump is spawned highly privileged, with full kernel
credentials, requiring all kinds of weird privilege-dropping exercises
in userspace.
This adds another mode:
(3) Dumping into an AF_UNIX socket.
Userspace can set /proc/sys/kernel/core_pattern to:
:/run/coredump.socket
The ":" at the beginning indicates to the kernel that an AF_UNIX socket
is used to process coredumps. The task generating the coredump simply
connects to the socket and writes the coredump into the socket.
Userspace can get a stable handle on the task generating the coredump by
using the SO_PEERPIDFD socket option. SO_PEERPIDFD uses the thread-group
leader pid stashed during connect(). Even if the task generating the
coredump is a subthread of the thread-group, the pidfd of the
thread-group leader is a reliable, stable handle. Userspace that's
interested in the credentials of the specific thread that crashed can
use SCM_PIDFD to retrieve them.
The pidfd can be used to safely open and parse /proc/<pid> of the task
and it can also be used to retrieve additional meta information via the
PIDFD_GET_INFO ioctl().
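For illustration, a rough sketch of a minimal userspace handler for this
mode could look as follows. This is not part of this patch and the
handler layout, the fallback SO_PEERPIDFD value and the single-threaded
handling are assumptions; it binds the socket configured in
core_pattern, accepts one connection per coredump, grabs a pidfd for the
crashing task via SO_PEERPIDFD and streams the dump into a file:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/un.h>

#ifndef SO_PEERPIDFD
#define SO_PEERPIDFD 77 /* asm-generic value, only used if headers are older */
#endif

int main(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	char buf[64 * 1024];

	strncpy(addr.sun_path, "/run/coredump.socket", sizeof(addr.sun_path) - 1);
	unlink(addr.sun_path);

	int srv = socket(AF_UNIX, SOCK_STREAM, 0);
	if (srv < 0 ||
	    bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(srv, 8) < 0) {
		perror("coredump socket");
		return 1;
	}

	for (;;) {
		int conn = accept(srv, NULL, NULL);
		if (conn < 0)
			continue;

		/* Stable handle on the crashing thread-group leader. */
		int pidfd = -1;
		socklen_t len = sizeof(pidfd);
		if (!getsockopt(conn, SOL_SOCKET, SO_PEERPIDFD, &pidfd, &len)) {
			/* pidfd can be used to open /proc/<pid> race-free
			 * or be handed to the PIDFD_GET_INFO ioctl(). */
			close(pidfd);
		}

		/* Stream the coredump from the socket into a file. */
		int core = open("core_file", O_WRONLY | O_CREAT | O_TRUNC, 0600);
		ssize_t n;
		while ((n = read(conn, buf, sizeof(buf))) > 0)
			write(core, buf, n);

		close(core);
		close(conn);
	}
}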
This will allow userspace to stop relying on usermode helpers for
processing coredumps and thus to stop having to handle super-privileged
coredumping helpers.
This is easy to test:
(a) coredump processing (we're using socat):
> cat coredump_socket.sh
#!/bin/bash
set -x
sudo bash -c "echo ':/tmp/stream.sock' > /proc/sys/kernel/core_pattern"
socat --statistics unix-listen:/tmp/stream.sock,fork FILE:core_file,create,append,truncate
(b) trigger a coredump:
user1@localhost:~/data/scripts$ cat crash.c
#include <stdio.h>
#include <unistd.h>
int main(int argc, char *argv[])
{
fprintf(stderr, "%u\n", (1 / 0));
_exit(0);
}
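With the socat listener from (a) running, this can be exercised with
something like "gcc -o crash crash.c && ./crash" (the command line is
only an example). The integer division by zero raises SIGFPE, the
kernel connects to /tmp/stream.sock and writes the dump into it, and
socat appends it to core_file.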
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
fs/coredump.c | 137 +++++++++++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 132 insertions(+), 5 deletions(-)
diff --git a/fs/coredump.c b/fs/coredump.c
index 1779299b8c61..9a6cba233db9 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -45,6 +45,9 @@
#include <linux/elf.h>
#include <linux/pidfs.h>
#include <uapi/linux/pidfd.h>
+#include <linux/net.h>
+#include <uapi/linux/un.h>
+#include <linux/socket.h>
#include <linux/uaccess.h>
#include <asm/mmu_context.h>
@@ -79,6 +82,7 @@ unsigned int core_file_note_size_limit = CORE_FILE_NOTE_SIZE_DEFAULT;
enum coredump_type_t {
COREDUMP_FILE = 1,
COREDUMP_PIPE = 2,
+ COREDUMP_SOCK = 3,
};
struct core_name {
@@ -232,13 +236,16 @@ static int format_corename(struct core_name *cn, struct coredump_params *cprm,
cn->corename = NULL;
if (*pat_ptr == '|')
cn->core_type = COREDUMP_PIPE;
+ else if (*pat_ptr == ':')
+ cn->core_type = COREDUMP_SOCK;
else
cn->core_type = COREDUMP_FILE;
if (expand_corename(cn, core_name_size))
return -ENOMEM;
cn->corename[0] = '\0';
- if (cn->core_type == COREDUMP_PIPE) {
+ switch (cn->core_type) {
+ case COREDUMP_PIPE: {
int argvs = sizeof(core_pattern) / 2;
(*argv) = kmalloc_array(argvs, sizeof(**argv), GFP_KERNEL);
if (!(*argv))
@@ -247,6 +254,39 @@ static int format_corename(struct core_name *cn, struct coredump_params *cprm,
++pat_ptr;
if (!(*pat_ptr))
return -ENOMEM;
+ break;
+ }
+ case COREDUMP_SOCK: {
+ /* skip ':' */
+ ++pat_ptr;
+ /* no spaces */
+ if (!(*pat_ptr))
+ return -EINVAL;
+ /* must be an absolute path */
+ if (!(*pat_ptr == '/'))
+ return -EINVAL;
+ err = cn_printf(cn, "%s", pat_ptr);
+ if (err)
+ return err;
+ /*
+ * For simplicity we simply refuse spaces in the socket
+ * path. It's in line with what we do for pipes.
+ */
+ if (strchr(cn->corename, ' '))
+ return -EINVAL;
+
+ /*
+ * Currently no need to parse any other options.
+ * Relevant information can be retrieved from the peer
+ * pidfd retrievable via SO_PEERPIDFD by the receiver or
+ * via /proc/<pid>, using the SO_PEERPIDFD to guard
+ * against pid recycling when opening /proc/<pid>.
+ */
+ return 0;
+ }
+ default:
+ WARN_ON_ONCE(cn->core_type != COREDUMP_FILE);
+ break;
}
/* Repeat as long as we have more pattern to process and more output
@@ -801,6 +841,73 @@ void do_coredump(const kernel_siginfo_t *siginfo)
}
break;
}
+ case COREDUMP_SOCK: {
+ struct file *file __free(fput) = NULL;
+#ifdef CONFIG_UNIX
+ ssize_t addr_size;
+ struct sockaddr_un unix_addr = {
+ .sun_family = AF_UNIX,
+ };
+ struct sockaddr_storage *addr;
+
+ /*
+ * TODO: We need to really support core_pipe_limit to
+ * prevent the task from being reaped before userspace
+ * had a chance to look at /proc/<pid>.
+ *
+ * I need help from the networking people (or maybe Oleg
+ * also knows?) how to do this.
+ *
+ * IOW, we need to wait for the other side to shutdown
+ * the socket/terminate the connection.
+ *
+ * We could just read but then userspace could send us
+ * SCM_RIGHTS and we just shouldn't need to deal with
+ * any of that.
+ */
+ if (WARN_ON_ONCE(core_pipe_limit)) {
+ retval = -EINVAL;
+ goto close_fail;
+ }
+
+ retval = strscpy(unix_addr.sun_path, cn.corename, sizeof(unix_addr.sun_path));
+ if (retval < 0)
+ goto close_fail;
+ addr_size = offsetof(struct sockaddr_un, sun_path) + retval + 1;
+
+ file = __sys_socket_file(AF_UNIX, SOCK_STREAM, 0);
+ if (IS_ERR(file))
+ goto close_fail;
+
+ /*
+ * It is possible that the userspace process which is
+ * supposed to handle the coredump and is listening on
+ * the AF_UNIX socket coredumps. This should be fine
+ * though. If this was the only process which was
+ * listen()ing on the AF_UNIX socket for coredumps it
+ * obviously won't be listen()ing anymore by the time it
+ * gets here. So the __sys_connect_file() call will
+ * often fail with ECONNREFUSED and the coredump will be lost.
+ *
+ * In general though, userspace should just mark itself
+ * non dumpable and not do any of this nonsense. We
+ * shouldn't work around this.
+ */
+ addr = (struct sockaddr_storage *)(&unix_addr);
+ retval = __sys_connect_file(file, addr, addr_size, O_CLOEXEC);
+ if (retval)
+ goto close_fail;
+
+ /* The peer isn't supposed to write and we for sure won't read. */
+ retval = __sys_shutdown_sock(sock_from_file(file), SHUT_RD);
+ if (retval)
+ goto close_fail;
+
+ cprm.limit = RLIM_INFINITY;
+#endif
+ cprm.file = no_free_ptr(file);
+ break;
+ }
default:
WARN_ON_ONCE(true);
retval = -EINVAL;
@@ -818,7 +925,10 @@ void do_coredump(const kernel_siginfo_t *siginfo)
* have this set to NULL.
*/
if (!cprm.file) {
- coredump_report_failure("Core dump to |%s disabled", cn.corename);
+ if (cn.core_type == COREDUMP_PIPE)
+ coredump_report_failure("Core dump to |%s disabled", cn.corename);
+ else
+ coredump_report_failure("Core dump to :%s disabled", cn.corename);
goto close_fail;
}
if (!dump_vma_snapshot(&cprm))
@@ -839,8 +949,25 @@ void do_coredump(const kernel_siginfo_t *siginfo)
file_end_write(cprm.file);
free_vma_snapshot(&cprm);
}
- if ((cn.core_type == COREDUMP_PIPE) && core_pipe_limit)
- wait_for_dump_helpers(cprm.file);
+
+ if (core_pipe_limit) {
+ switch (cn.core_type) {
+ case COREDUMP_PIPE:
+ wait_for_dump_helpers(cprm.file);
+ break;
+ case COREDUMP_SOCK: {
+ /*
+ * TODO: Wait for the coredump handler to shut
+ * down the socket so we prevent the task from
+ * being reaped.
+ */
+ break;
+ }
+ default:
+ break;
+ }
+ }
+
close_fail:
if (cprm.file)
filp_close(cprm.file, NULL);
@@ -1070,7 +1197,7 @@ EXPORT_SYMBOL(dump_align);
void validate_coredump_safety(void)
{
if (suid_dumpable == SUID_DUMP_ROOT &&
- core_pattern[0] != '/' && core_pattern[0] != '|') {
+ core_pattern[0] != '/' && core_pattern[0] != '|' && core_pattern[0] != ':') {
coredump_report_failure("Unsafe core_pattern used with fs.suid_dumpable=2: "
"pipe handler or fully qualified core dump path required. "
--
2.47.2
On Fri, May 2, 2025 at 2:42 PM Christian Brauner <brauner@kernel.org> wrote:
> diff --git a/fs/coredump.c b/fs/coredump.c
[...]
> @@ -801,6 +841,73 @@ void do_coredump(const kernel_siginfo_t *siginfo)
> }
> break;
> }
> + case COREDUMP_SOCK: {
> + struct file *file __free(fput) = NULL;
> +#ifdef CONFIG_UNIX
> + ssize_t addr_size;
> + struct sockaddr_un unix_addr = {
> + .sun_family = AF_UNIX,
> + };
> + struct sockaddr_storage *addr;
> +
> + /*
> + * TODO: We need to really support core_pipe_limit to
> + * prevent the task from being reaped before userspace
> + * had a chance to look at /proc/<pid>.
> + *
> + * I need help from the networking people (or maybe Oleg
> + * also knows?) how to do this.
> + *
> + * IOW, we need to wait for the other side to shutdown
> + * the socket/terminate the connection.
> + *
> + * We could just read but then userspace could sent us
> + * SCM_RIGHTS and we just shouldn't need to deal with
> + * any of that.
> + */
I don't think userspace can send you SCM_RIGHTS if you don't do a
recvmsg() with a control data buffer?
> + if (WARN_ON_ONCE(core_pipe_limit)) {
> + retval = -EINVAL;
> + goto close_fail;
> + }
> +
> + retval = strscpy(unix_addr.sun_path, cn.corename, sizeof(unix_addr.sun_path));
> + if (retval < 0)
> + goto close_fail;
> + addr_size = offsetof(struct sockaddr_un, sun_path) + retval + 1,
> +
> + file = __sys_socket_file(AF_UNIX, SOCK_STREAM, 0);
> + if (IS_ERR(file))
> + goto close_fail;
> +
> + /*
> + * It is possible that the userspace process which is
> + * supposed to handle the coredump and is listening on
> + * the AF_UNIX socket coredumps. This should be fine
> + * though. If this was the only process which was
> + * listen()ing on the AF_UNIX socket for coredumps it
> + * obviously won't be listen()ing anymore by the time it
> + * gets here. So the __sys_connect_file() call will
> + * often fail with ECONNREFUSED and the coredump.
Why will the server not be listening anymore? Have the task's file
descriptors already been closed by the time we get here?
(Maybe just get rid of this comment, I agree with the following
comment saying we should let userspace deal with this.)
> + * In general though, userspace should just mark itself
> + * non dumpable and not do any of this nonsense. We
> + * shouldn't work around this.
> + */
> + addr = (struct sockaddr_storage *)(&unix_addr);
> + retval = __sys_connect_file(file, addr, addr_size, O_CLOEXEC);
Have you made an intentional decision on whether you want to connect
to a unix domain socket with a path relative to current->fs->root (so
that containers can do their own core dump handling) or relative to
the root namespace root (so that core dumps always reach the init
namespace's core dumping even if a process sandboxes itself with
namespaces or such)? Also, I think this connection attempt will be
subject to restrictions imposed by (for example) Landlock or AppArmor;
I'm not sure if that is desired here (since this is not actually a
connection that the process in whose context the call happens decided
to make, it's something the system administrator decided to do, and
especially with Landlock, policies are controlled by individual
applications that may not know how core dumps work on the system).
I guess if we keep the current behavior where the socket path is
namespaced, then we also need to keep the security checks, since an
unprivileged user could probably set up a namespace and chroot() to a
place where the socket path (indirectly, through a symlink) refers to
an arbitrary socket...
An alternative design might be to directly register the server socket
on the userns/mountns/netns or such in some magic way, and then have
the core dumping walk up the namespace hierarchy until it finds a
namespace that has opted in to using its own core dumping socket, and
connect to that socket bypassing security checks. (A bit like how
namespaced binfmt_misc works.) Like, maybe userspace with namespaced
CAP_SYS_ADMIN could bind() to some magic UNIX socket address, or use
some new setsockopt() on the socket or such, to become the handler of
core dumps? This would also have the advantage that malicious
userspace wouldn't be able to send fake bogus core dumps to the
server, and the server would provide clear consent to being connected
to without security checks at connection time.
> + if (retval)
> + goto close_fail;
> +
> + /* The peer isn't supposed to write and we for sure won't read. */
> + retval = __sys_shutdown_sock(sock_from_file(file), SHUT_RD);
> + if (retval)
> + goto close_fail;
> +
> + cprm.limit = RLIM_INFINITY;
> +#endif
> + cprm.file = no_free_ptr(file);
> + break;
> + }
> default:
> WARN_ON_ONCE(true);
> retval = -EINVAL;
> @@ -818,7 +925,10 @@ void do_coredump(const kernel_siginfo_t *siginfo)
> * have this set to NULL.
> */
> if (!cprm.file) {
> - coredump_report_failure("Core dump to |%s disabled", cn.corename);
> + if (cn.core_type == COREDUMP_PIPE)
> + coredump_report_failure("Core dump to |%s disabled", cn.corename);
> + else
> + coredump_report_failure("Core dump to :%s disabled", cn.corename);
> goto close_fail;
> }
> if (!dump_vma_snapshot(&cprm))
> @@ -839,8 +949,25 @@ void do_coredump(const kernel_siginfo_t *siginfo)
> file_end_write(cprm.file);
> free_vma_snapshot(&cprm);
> }
> - if ((cn.core_type == COREDUMP_PIPE) && core_pipe_limit)
> - wait_for_dump_helpers(cprm.file);
> +
> + if (core_pipe_limit) {
> + switch (cn.core_type) {
> + case COREDUMP_PIPE:
> + wait_for_dump_helpers(cprm.file);
> + break;
> + case COREDUMP_SOCK: {
> + /*
> + * TODO: Wait for the coredump handler to shut
> + * down the socket so we prevent the task from
> + * being reaped.
> + */
Hmm, I'm no expert but maybe you could poll for the POLLRDHUP event...
though that might require writing your own helper with a loop that
does vfs_poll() and waits for a poll wakeup, since I don't think there
is a kernel helper analogous to a synchronous poll() syscall yet.
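To make that concrete, something like this entirely untested sketch is
what I have in mind (helper name made up; it just mirrors the
poll_wqueues pattern from fs/select.c via <linux/poll.h>, so take it
with a grain of salt):

static void coredump_wait_for_sock_shutdown(struct file *file)
{
	struct poll_wqueues table;

	poll_initwait(&table);
	for (;;) {
		__poll_t mask;

		set_current_state(TASK_INTERRUPTIBLE);
		mask = vfs_poll(file, &table.pt);
		/* only register wait queue entries on the first pass */
		table.pt._qproc = NULL;
		if (mask & (EPOLLRDHUP | EPOLLHUP | EPOLLERR) ||
		    signal_pending(current))
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
	poll_freewait(&table);
}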
> + break;
> + }
> + default:
> + break;
> + }
> + }
> +
> close_fail:
> if (cprm.file)
> filp_close(cprm.file, NULL);
On Fri, May 02, 2025 at 04:04:32PM +0200, Jann Horn wrote:
> On Fri, May 2, 2025 at 2:42 PM Christian Brauner <brauner@kernel.org> wrote:
> > diff --git a/fs/coredump.c b/fs/coredump.c
> [...]
> > @@ -801,6 +841,73 @@ void do_coredump(const kernel_siginfo_t *siginfo)
> > }
> > break;
> > }
> > + case COREDUMP_SOCK: {
> > + struct file *file __free(fput) = NULL;
> > +#ifdef CONFIG_UNIX
> > + ssize_t addr_size;
> > + struct sockaddr_un unix_addr = {
> > + .sun_family = AF_UNIX,
> > + };
> > + struct sockaddr_storage *addr;
> > +
> > + /*
> > + * TODO: We need to really support core_pipe_limit to
> > + * prevent the task from being reaped before userspace
> > + * had a chance to look at /proc/<pid>.
> > + *
> > + * I need help from the networking people (or maybe Oleg
> > + * also knows?) how to do this.
> > + *
> > + * IOW, we need to wait for the other side to shutdown
> > + * the socket/terminate the connection.
> > + *
> > + * We could just read but then userspace could sent us
> > + * SCM_RIGHTS and we just shouldn't need to deal with
> > + * any of that.
> > + */
>
> I don't think userspace can send you SCM_RIGHTS if you don't do a
> recvmsg() with a control data buffer?
Oh hm, then maybe just a regular read at the end would work. As soon as
userspace sends us anything or we get a close event we just disconnect.
But btw, I think we really need a recvmsg() flag that allows a receiver
to refuse SCM_RIGHTS/file descriptors from being sent to it. IIRC, right
now this is a real issue that systemd works around by always calling its
cmsg_close_all() helper after each recvmsg() to ensure that no one sent
it file descriptors it didn't want. The problem there is that someone
could have sent it an fd to a hanging NFS server or something and then
it would hang in close() even though it never even wanted any file
descriptors in the first place.
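For reference, that workaround boils down to something like the
following rough sketch run after every recvmsg() that supplied a
control buffer. This is just the shape of it, not systemd's actual
code (assumes <sys/socket.h>, <string.h> and <unistd.h>):

static void close_unwanted_fds(struct msghdr *msg)
{
	struct cmsghdr *cmsg;

	for (cmsg = CMSG_FIRSTHDR(msg); cmsg; cmsg = CMSG_NXTHDR(msg, cmsg)) {
		if (cmsg->cmsg_level != SOL_SOCKET ||
		    cmsg->cmsg_type != SCM_RIGHTS)
			continue;

		/* close every fd that was forced on us via SCM_RIGHTS */
		size_t n = (cmsg->cmsg_len - CMSG_LEN(0)) / sizeof(int);
		if (!n)
			continue;

		int fds[n];
		memcpy(fds, CMSG_DATA(cmsg), n * sizeof(int));
		while (n--)
			close(fds[n]);
	}
}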
>
> > + if (WARN_ON_ONCE(core_pipe_limit)) {
> > + retval = -EINVAL;
> > + goto close_fail;
> > + }
> > +
> > + retval = strscpy(unix_addr.sun_path, cn.corename, sizeof(unix_addr.sun_path));
> > + if (retval < 0)
> > + goto close_fail;
> > + addr_size = offsetof(struct sockaddr_un, sun_path) + retval + 1,
> > +
> > + file = __sys_socket_file(AF_UNIX, SOCK_STREAM, 0);
> > + if (IS_ERR(file))
> > + goto close_fail;
> > +
> > + /*
> > + * It is possible that the userspace process which is
> > + * supposed to handle the coredump and is listening on
> > + * the AF_UNIX socket coredumps. This should be fine
> > + * though. If this was the only process which was
> > + * listen()ing on the AF_UNIX socket for coredumps it
> > + * obviously won't be listen()ing anymore by the time it
> > + * gets here. So the __sys_connect_file() call will
> > + * often fail with ECONNREFUSED and the coredump.
>
> Why will the server not be listening anymore? Have the task's file
> descriptors already been closed by the time we get here?
No, the file descriptors are still open.
>
> (Maybe just get rid of this comment, I agree with the following
> comment saying we should let userspace deal with this.)
Good idea.
>
> > + * In general though, userspace should just mark itself
> > + * non dumpable and not do any of this nonsense. We
> > + * shouldn't work around this.
> > + */
> > + addr = (struct sockaddr_storage *)(&unix_addr);
> > + retval = __sys_connect_file(file, addr, addr_size, O_CLOEXEC);
>
> Have you made an intentional decision on whether you want to connect
> to a unix domain socket with a path relative to current->fs->root (so
> that containers can do their own core dump handling) or relative to
> the root namespace root (so that core dumps always reach the init
> namespace's core dumping even if a process sandboxes itself with
> namespaces or such)? Also, I think this connection attempt will be
Fsck no. :) I just jotted this down as an RFC. Details below.
> subject to restrictions imposed by (for example) Landlock or AppArmor,
> I'm not sure if that is desired here (since this is not actually a
> connection that the process in whose context the call happens decided
> to make, it's something the system administrator decided to do, and
> especially with Landlock, policies are controlled by individual
> applications that may not know how core dumps work on the system).
>
> I guess if we keep the current behavior where the socket path is
> namespaced, then we also need to keep the security checks, since an
> unprivileged user could probably set up a namespace and chroot() to a
> place where the socket path (indirectly, through a symlink) refers to
> an arbitrary socket...
>
> An alternative design might be to directly register the server socket
> on the userns/mountns/netns or such in some magic way, and then have
> the core dumping walk up the namespace hierarchy until it finds a
> namespace that has opted in to using its own core dumping socket, and
> connect to that socket bypassing security checks. (A bit like how
> namespaced binfmt_misc works.) Like, maybe userspace with namespaced
Yeah, I namespaced that thing. :)
> CAP_SYS_ADMIN could bind() to some magic UNIX socket address, or use
> some new setsockopt() on the socket or such, to become the handler of
> core dumps? This would also have the advantage that malicious
> userspace wouldn't be able to send fake bogus core dumps to the
> server, and the server would provide clear consent to being connected
> to without security checks at connection time.
I think that's policy that I absolutely don't want the kernel to get
involved in unless absolutely necessary. A few days ago I just discussed
this at length with Lennart and the issue is that systemd would want to
see all coredumps on the system independent of the namespace they're
created in. To have a per-namespace (userns/mountns/netns) coredump
socket would invalidate that one way or the other and end up hiding
coredumps from the administrator unless there's some elaborate scheme
where it doesn't.
systemd-coredump (and Apport fwiw) has infrastructure to forward
coredumps to individual services and containers and it's already based
on AF_UNIX afaict. And I really like that it's the job of userspace to
deal with this instead of the kernel having to get involved in that
mess.
So all of this should be relative to the initial namespace. I want a
separate security hook though so an LSM can be used to prevent
processes from connecting to the coredump socket.
My idea has been that systemd-coredump could use a bpf lsm program that
would allow to abort a coredump before the crashing process connects to
the socket and again make this a userspace policy issue.
>
> > + if (retval)
> > + goto close_fail;
> > +
> > + /* The peer isn't supposed to write and we for sure won't read. */
> > + retval = __sys_shutdown_sock(sock_from_file(file), SHUT_RD);
> > + if (retval)
> > + goto close_fail;
> > +
> > + cprm.limit = RLIM_INFINITY;
> > +#endif
> > + cprm.file = no_free_ptr(file);
> > + break;
> > + }
> > default:
> > WARN_ON_ONCE(true);
> > retval = -EINVAL;
> > @@ -818,7 +925,10 @@ void do_coredump(const kernel_siginfo_t *siginfo)
> > * have this set to NULL.
> > */
> > if (!cprm.file) {
> > - coredump_report_failure("Core dump to |%s disabled", cn.corename);
> > + if (cn.core_type == COREDUMP_PIPE)
> > + coredump_report_failure("Core dump to |%s disabled", cn.corename);
> > + else
> > + coredump_report_failure("Core dump to :%s disabled", cn.corename);
> > goto close_fail;
> > }
> > if (!dump_vma_snapshot(&cprm))
> > @@ -839,8 +949,25 @@ void do_coredump(const kernel_siginfo_t *siginfo)
> > file_end_write(cprm.file);
> > free_vma_snapshot(&cprm);
> > }
> > - if ((cn.core_type == COREDUMP_PIPE) && core_pipe_limit)
> > - wait_for_dump_helpers(cprm.file);
> > +
> > + if (core_pipe_limit) {
> > + switch (cn.core_type) {
> > + case COREDUMP_PIPE:
> > + wait_for_dump_helpers(cprm.file);
> > + break;
> > + case COREDUMP_SOCK: {
> > + /*
> > + * TODO: Wait for the coredump handler to shut
> > + * down the socket so we prevent the task from
> > + * being reaped.
> > + */
>
> Hmm, I'm no expert but maybe you could poll for the POLLRDHUP event...
> though that might require writing your own helper with a loop that
> does vfs_poll() and waits for a poll wakeup, since I don't think there
> is a kernel helper analogous to a synchronous poll() syscall yet.
>
> > + break;
> > + }
> > + default:
> > + break;
> > + }
> > + }
> > +
> > close_fail:
> > if (cprm.file)
> > filp_close(cprm.file, NULL);
On Fri, May 2, 2025 at 10:11 PM Christian Brauner <brauner@kernel.org> wrote:
> On Fri, May 02, 2025 at 04:04:32PM +0200, Jann Horn wrote:
> > On Fri, May 2, 2025 at 2:42 PM Christian Brauner <brauner@kernel.org> wrote:
> > > diff --git a/fs/coredump.c b/fs/coredump.c
> > [...]
> > > @@ -801,6 +841,73 @@ void do_coredump(const kernel_siginfo_t *siginfo)
> > > }
> > > break;
> > > }
> > > + case COREDUMP_SOCK: {
> > > + struct file *file __free(fput) = NULL;
> > > +#ifdef CONFIG_UNIX
> > > + ssize_t addr_size;
> > > + struct sockaddr_un unix_addr = {
> > > + .sun_family = AF_UNIX,
> > > + };
> > > + struct sockaddr_storage *addr;
> > > +
> > > + /*
> > > + * TODO: We need to really support core_pipe_limit to
> > > + * prevent the task from being reaped before userspace
> > > + * had a chance to look at /proc/<pid>.
> > > + *
> > > + * I need help from the networking people (or maybe Oleg
> > > + * also knows?) how to do this.
> > > + *
> > > + * IOW, we need to wait for the other side to shutdown
> > > + * the socket/terminate the connection.
> > > + *
> > > + * We could just read but then userspace could sent us
> > > + * SCM_RIGHTS and we just shouldn't need to deal with
> > > + * any of that.
> > > + */
> >
> > I don't think userspace can send you SCM_RIGHTS if you don't do a
> > recvmsg() with a control data buffer?
>
> Oh hm, then maybe just a regular read at the end would work. As soon as
> userspace send us anything or we get a close event we just disconnect.
>
> But btw, I think we really need a recvmsg() flag that allows a receiver
> to refuse SCM_RIGHTS/file descriptors from being sent to it. IIRC, right
> now this is a real issue that systemd works around by always calling its
> cmsg_close_all() helper after each recvmsg() to ensure that no one sent
> it file descriptors it didn't want. The problem there is that someone
> could have sent it an fd to a hanging NFS server or something and then
> it would hang in close() even though it never even wanted any file
> descriptors in the first place.
Would a recvmsg() flag really solve that aspect of NFS hangs? By the
time you read from the socket, the file is already attached to an SKB
queued up on the socket, and cleaning up the file is your task's
responsibility either way (which will either be done by the kernel for
you if you don't read it into a control message, or by userspace if it
was handed off through a control message). The process that sent the
file to you might already be gone, it can't be on the hook for
cleaning up the file anymore.
I think the thorough fix would probably be to introduce a socket
option (controlled via setsockopt()) that already blocks the peer's
sendmsg().
> > > + * In general though, userspace should just mark itself
> > > + * non dumpable and not do any of this nonsense. We
> > > + * shouldn't work around this.
> > > + */
> > > + addr = (struct sockaddr_storage *)(&unix_addr);
> > > + retval = __sys_connect_file(file, addr, addr_size, O_CLOEXEC);
> >
> > Have you made an intentional decision on whether you want to connect
> > to a unix domain socket with a path relative to current->fs->root (so
> > that containers can do their own core dump handling) or relative to
> > the root namespace root (so that core dumps always reach the init
> > namespace's core dumping even if a process sandboxes itself with
> > namespaces or such)? Also, I think this connection attempt will be
>
> Fsck no. :) I just jotted this down as an RFC. Details below.
>
> > subject to restrictions imposed by (for example) Landlock or AppArmor,
> > I'm not sure if that is desired here (since this is not actually a
> > connection that the process in whose context the call happens decided
> > to make, it's something the system administrator decided to do, and
> > especially with Landlock, policies are controlled by individual
> > applications that may not know how core dumps work on the system).
> >
> > I guess if we keep the current behavior where the socket path is
> > namespaced, then we also need to keep the security checks, since an
> > unprivileged user could probably set up a namespace and chroot() to a
> > place where the socket path (indirectly, through a symlink) refers to
> > an arbitrary socket...
> >
> > An alternative design might be to directly register the server socket
> > on the userns/mountns/netns or such in some magic way, and then have
> > the core dumping walk up the namespace hierarchy until it finds a
> > namespace that has opted in to using its own core dumping socket, and
> > connect to that socket bypassing security checks. (A bit like how
> > namespaced binfmt_misc works.) Like, maybe userspace with namespaced
>
> Yeah, I namespaced that thing. :)
Oh, hah, sorry, I forgot that was you.
> > CAP_SYS_ADMIN could bind() to some magic UNIX socket address, or use
> > some new setsockopt() on the socket or such, to become the handler of
> > core dumps? This would also have the advantage that malicious
> > userspace wouldn't be able to send fake bogus core dumps to the
> > server, and the server would provide clear consent to being connected
> > to without security checks at connection time.
>
> I think that's policy that I absolute don't want the kernel to get
> involved in unless absolutely necessary. A few days ago I just discussed
> this at length with Lennart and the issue is that systemd would want to
> see all coredumps on the system independent of the namespace they're
> created in. To have a per-namespace (userns/mountns/netns) coredump
> socket would invalidate that one way or the other and end up hiding
> coredumps from the administrator unless there's some elaborate scheme
> where it doesn't.
>
> systemd-coredump (and Apport fwiw) has infrastructure to forward
> coredumps to individual services and containers and it's already based
> on AF_UNIX afaict. And I really like that it's the job of userspace to
> deal with this instead of the kernel having to get involved in that
> mess.
>
> So all of this should be relative to the initial namespace. I want a
Ah, sounds good.
> separate security hook though so an LSMs can be used to prevent
> processes from connecting to the coredump socket.
>
> My idea has been that systemd-coredump could use a bpf lsm program that
> would allow to abort a coredump before the crashing process connects to
> the socket and again make this a userspace policy issue.
I don't understand this part. Why would you need an LSM to prevent a
crashing process from connecting, can't the coredumping server process
apply whatever filtering it wants in userspace?
On Fri, May 02, 2025 at 10:23:44PM +0200, Jann Horn wrote:
> On Fri, May 2, 2025 at 10:11 PM Christian Brauner <brauner@kernel.org> wrote:
> > On Fri, May 02, 2025 at 04:04:32PM +0200, Jann Horn wrote:
> > > On Fri, May 2, 2025 at 2:42 PM Christian Brauner <brauner@kernel.org> wrote:
> > > > diff --git a/fs/coredump.c b/fs/coredump.c
> > > [...]
> > > > @@ -801,6 +841,73 @@ void do_coredump(const kernel_siginfo_t *siginfo)
> > > > }
> > > > break;
> > > > }
> > > > + case COREDUMP_SOCK: {
> > > > + struct file *file __free(fput) = NULL;
> > > > +#ifdef CONFIG_UNIX
> > > > + ssize_t addr_size;
> > > > + struct sockaddr_un unix_addr = {
> > > > + .sun_family = AF_UNIX,
> > > > + };
> > > > + struct sockaddr_storage *addr;
> > > > +
> > > > + /*
> > > > + * TODO: We need to really support core_pipe_limit to
> > > > + * prevent the task from being reaped before userspace
> > > > + * had a chance to look at /proc/<pid>.
> > > > + *
> > > > + * I need help from the networking people (or maybe Oleg
> > > > + * also knows?) how to do this.
> > > > + *
> > > > + * IOW, we need to wait for the other side to shutdown
> > > > + * the socket/terminate the connection.
> > > > + *
> > > > + * We could just read but then userspace could sent us
> > > > + * SCM_RIGHTS and we just shouldn't need to deal with
> > > > + * any of that.
> > > > + */
> > >
> > > I don't think userspace can send you SCM_RIGHTS if you don't do a
> > > recvmsg() with a control data buffer?
> >
> > Oh hm, then maybe just a regular read at the end would work. As soon as
> > userspace send us anything or we get a close event we just disconnect.
> >
> > But btw, I think we really need a recvmsg() flag that allows a receiver
> > to refuse SCM_RIGHTS/file descriptors from being sent to it. IIRC, right
> > now this is a real issue that systemd works around by always calling its
> > cmsg_close_all() helper after each recvmsg() to ensure that no one sent
> > it file descriptors it didn't want. The problem there is that someone
> > could have sent it an fd to a hanging NFS server or something and then
> > it would hang in close() even though it never even wanted any file
> > descriptors in the first place.
>
> Would a recvmsg() flag really solve that aspect of NFS hangs? By the
> time you read from the socket, the file is already attached to an SKB
> queued up on the socket, and cleaning up the file is your task's
> responsibility either way (which will either be done by the kernel for
> you if you don't read it into a control message, or by userspace if it
> was handed off through a control message). The process that sent the
> file to you might already be gone, it can't be on the hook for
> cleaning up the file anymore.
Hm, I guess the unix_gc() runs in task context? I had thought that it
might take care of that.
>
> I think the thorough fix would probably be to introduce a socket
> option (controlled via setsockopt()) that already blocks the peer's
> sendmsg().
Yes, I had considered that as well.
>
> > > > + * In general though, userspace should just mark itself
> > > > + * non dumpable and not do any of this nonsense. We
> > > > + * shouldn't work around this.
> > > > + */
> > > > + addr = (struct sockaddr_storage *)(&unix_addr);
> > > > + retval = __sys_connect_file(file, addr, addr_size, O_CLOEXEC);
> > >
> > > Have you made an intentional decision on whether you want to connect
> > > to a unix domain socket with a path relative to current->fs->root (so
> > > that containers can do their own core dump handling) or relative to
> > > the root namespace root (so that core dumps always reach the init
> > > namespace's core dumping even if a process sandboxes itself with
> > > namespaces or such)? Also, I think this connection attempt will be
> >
> > Fsck no. :) I just jotted this down as an RFC. Details below.
> >
> > > subject to restrictions imposed by (for example) Landlock or AppArmor,
> > > I'm not sure if that is desired here (since this is not actually a
> > > connection that the process in whose context the call happens decided
> > > to make, it's something the system administrator decided to do, and
> > > especially with Landlock, policies are controlled by individual
> > > applications that may not know how core dumps work on the system).
> > >
> > > I guess if we keep the current behavior where the socket path is
> > > namespaced, then we also need to keep the security checks, since an
> > > unprivileged user could probably set up a namespace and chroot() to a
> > > place where the socket path (indirectly, through a symlink) refers to
> > > an arbitrary socket...
> > >
> > > An alternative design might be to directly register the server socket
> > > on the userns/mountns/netns or such in some magic way, and then have
> > > the core dumping walk up the namespace hierarchy until it finds a
> > > namespace that has opted in to using its own core dumping socket, and
> > > connect to that socket bypassing security checks. (A bit like how
> > > namespaced binfmt_misc works.) Like, maybe userspace with namespaced
> >
> > Yeah, I namespaced that thing. :)
>
> Oh, hah, sorry, I forgot that was you.
>
> > > CAP_SYS_ADMIN could bind() to some magic UNIX socket address, or use
> > > some new setsockopt() on the socket or such, to become the handler of
> > > core dumps? This would also have the advantage that malicious
> > > userspace wouldn't be able to send fake bogus core dumps to the
> > > server, and the server would provide clear consent to being connected
> > > to without security checks at connection time.
> >
> > I think that's policy that I absolute don't want the kernel to get
> > involved in unless absolutely necessary. A few days ago I just discussed
> > this at length with Lennart and the issue is that systemd would want to
> > see all coredumps on the system independent of the namespace they're
> > created in. To have a per-namespace (userns/mountns/netns) coredump
> > socket would invalidate that one way or the other and end up hiding
> > coredumps from the administrator unless there's some elaborate scheme
> > where it doesn't.
> >
> > systemd-coredump (and Apport fwiw) has infrastructure to forward
> > coredumps to individual services and containers and it's already based
> > on AF_UNIX afaict. And I really like that it's the job of userspace to
> > deal with this instead of the kernel having to get involved in that
> > mess.
> >
> > So all of this should be relative to the initial namespace. I want a
>
> Ah, sounds good.
>
> > separate security hook though so an LSMs can be used to prevent
> > processes from connecting to the coredump socket.
> >
> > My idea has been that systemd-coredump could use a bpf lsm program that
> > would allow to abort a coredump before the crashing process connects to
> > the socket and again make this a userspace policy issue.
>
> I don't understand this part. Why would you need an LSM to prevent a
> crashing process from connecting, can't the coredumping server process
> apply whatever filtering it wants in userspace?
Coredumping is somewhat asynchronous in that the crash-dumping process
already starts writing by the time userspace could've made a decision
whether it should bother in the first place. Then userspace would need
to terminate the connection so that the kernel stops writing.
With a bpf LSM you could make a decision right when the connect happens
whether the task is even allowed to connect to the coredump socket in
the first place. This would also allow rate-limiting a repeatedly
coredumping service/container.
From: Christian Brauner <brauner@kernel.org>
Date: Sat, 3 May 2025 07:17:10 +0200
> On Fri, May 02, 2025 at 10:23:44PM +0200, Jann Horn wrote:
> > On Fri, May 2, 2025 at 10:11 PM Christian Brauner <brauner@kernel.org> wrote:
> > > On Fri, May 02, 2025 at 04:04:32PM +0200, Jann Horn wrote:
> > > > On Fri, May 2, 2025 at 2:42 PM Christian Brauner <brauner@kernel.org> wrote:
> > > > > diff --git a/fs/coredump.c b/fs/coredump.c
> > > > [...]
> > > > > @@ -801,6 +841,73 @@ void do_coredump(const kernel_siginfo_t *siginfo)
> > > > > }
> > > > > break;
> > > > > }
> > > > > + case COREDUMP_SOCK: {
> > > > > + struct file *file __free(fput) = NULL;
> > > > > +#ifdef CONFIG_UNIX
> > > > > + ssize_t addr_size;
> > > > > + struct sockaddr_un unix_addr = {
> > > > > + .sun_family = AF_UNIX,
> > > > > + };
> > > > > + struct sockaddr_storage *addr;
> > > > > +
> > > > > + /*
> > > > > + * TODO: We need to really support core_pipe_limit to
> > > > > + * prevent the task from being reaped before userspace
> > > > > + * had a chance to look at /proc/<pid>.
> > > > > + *
> > > > > + * I need help from the networking people (or maybe Oleg
> > > > > + * also knows?) how to do this.
> > > > > + *
> > > > > + * IOW, we need to wait for the other side to shutdown
> > > > > + * the socket/terminate the connection.
> > > > > + *
> > > > > + * We could just read but then userspace could sent us
> > > > > + * SCM_RIGHTS and we just shouldn't need to deal with
> > > > > + * any of that.
> > > > > + */
> > > >
> > > > I don't think userspace can send you SCM_RIGHTS if you don't do a
> > > > recvmsg() with a control data buffer?
> > >
> > > Oh hm, then maybe just a regular read at the end would work. As soon as
> > > userspace send us anything or we get a close event we just disconnect.
> > >
> > > But btw, I think we really need a recvmsg() flag that allows a receiver
> > > to refuse SCM_RIGHTS/file descriptors from being sent to it. IIRC, right
> > > now this is a real issue that systemd works around by always calling its
> > > cmsg_close_all() helper after each recvmsg() to ensure that no one sent
> > > it file descriptors it didn't want. The problem there is that someone
> > > could have sent it an fd to a hanging NFS server or something and then
> > > it would hang in close() even though it never even wanted any file
> > > descriptors in the first place.
> >
> > Would a recvmsg() flag really solve that aspect of NFS hangs? By the
> > time you read from the socket, the file is already attached to an SKB
> > queued up on the socket, and cleaning up the file is your task's
> > responsibility either way (which will either be done by the kernel for
> > you if you don't read it into a control message, or by userspace if it
> > was handed off through a control message).
Right. recvmsg() is too late. Once sendmsg() is done, the last
fput() responsibility could fall on the receiver.
Btw, I was able to implement the cmsg_close_all() equivalent at
sendmsg() with BPF LSM to completely remove the issue.
I will send a series shortly and hope you like it :)
> > The process that sent the
> > file to you might already be gone, it can't be on the hook for
> > cleaning up the file anymore.
>
> Hm, I guess the unix_gc() runs in task context? I had thought that it
> might take care of that.
Note that unix_gc() is a garbage collector only for AF_UNIX fds
that have circular dependencies:
1) AF_UNIX sk1 sends its fd to itself
2) AF_UNIX sk1 sends its fd to AF_UNIX sk2 and
AF_UNIX sk2 sends its fd to AF_UNIX sk1
In these examples, file refcnts remain even after close() by all
users of fds.
So, the GC is not a mechanism to delegate fput() for fds sent
by SCM_RIGHTS.