[PATCH v3 0/6] net/tap: Fix QEMU frozen issue when the maximum number of file descriptors is very large

Bin Meng posted 6 patches 11 months ago
Failed in applying to current master
Maintainers: Jason Wang <jasowang@redhat.com>, "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
There is a newer version of this series
[PATCH v3 0/6] net/tap: Fix QEMU frozen issue when the maximum number of file descriptors is very large
Posted by Bin Meng 11 months ago
The current code, which does a brute-force traversal of all possible
file descriptors, does not scale on systems where the maximum number
of file descriptors is set to a very large value (e.g., in a Docker
container of the Manjaro distribution it is set to 1073741816). QEMU
simply looks frozen during start-up.

The close-on-exec flag (O_CLOEXEC) has been available since Linux
kernel 2.6.23, FreeBSD 8.3, OpenBSD 5.0, and Solaris 11. While it's
true that QEMU would not need to manually close fds for a child
process if the O_CLOEXEC flag were set properly on every file its own
code opens, QEMU also uses a huge number of 3rd party libraries, and
we don't trust them to reliably use O_CLOEXEC on everything they open.
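
For illustration, a descriptor can be marked close-on-exec either
atomically at open() time or after the fact with fcntl() (plain POSIX,
not QEMU code; "/tmp/example" is a made-up path):

    #include <fcntl.h>

    static void cloexec_examples(void)
    {
        /* 1. Atomically, at open time: */
        int fd = open("/tmp/example", O_RDONLY | O_CLOEXEC);

        /* 2. After the fact, e.g. on an fd a library handed us: */
        if (fd >= 0) {
            int flags = fcntl(fd, F_GETFD);
            if (flags != -1) {
                fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
            }
        }
    }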

Modern Linux and BSDs have the close_range() call we can use to do
the job, and on Linux there is one more way: walking through
/proc/self/fd, which lists only the fds that are actually open. This
is what qemu_close_range(), a new API added in util/osdep.c, does.
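
Roughly, the approach looks like the following sketch (a simplified
illustration only, not the actual qemu_close_range() from this
series, which adds proper error handling and a build-time probe for
close_range()):

    #define _GNU_SOURCE
    #include <dirent.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Close every open fd in [first, last] without visiting each
     * possible fd number one by one. */
    static void close_fds_sketch(long first, long last)
    {
        /* Clamp last to sysconf(_SC_OPEN_MAX), as done in v3. */
        long max = sysconf(_SC_OPEN_MAX);
        if (max > 0 && last > max - 1) {
            last = max - 1;
        }
    #ifdef CLOSE_RANGE_CLOEXEC /* stand-in for a real close_range() probe */
        /* Fast path: a single syscall (Linux 5.9+, FreeBSD 12.2+). */
        if (close_range(first, last, 0) == 0) {
            return;
        }
    #endif
        /* Linux fallback: /proc/self/fd lists only fds that are open. */
        DIR *dir = opendir("/proc/self/fd");
        if (dir) {
            struct dirent *de;
            while ((de = readdir(dir)) != NULL) {
                if (de->d_name[0] == '.') {
                    continue; /* skip "." and ".." */
                }
                long fd = atol(de->d_name);
                if (fd >= first && fd <= last && fd != dirfd(dir)) {
                    close(fd);
                }
            }
            closedir(dir);
            return;
        }
        /* Last resort: brute force, now bounded by _SC_OPEN_MAX. */
        for (long fd = first; fd <= last; fd++) {
            close(fd);
        }
    }

The important property is that both fast paths scale with the number
of descriptors actually open, not with the RLIMIT_NOFILE ceiling.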

V1 link: https://lore.kernel.org/qemu-devel/20230406112041.798585-1-bmeng@tinylab.org/

Changes in v3:
- fix win32 build failure
- limit the last_fd of qemu_close_range() to sysconf(_SC_OPEN_MAX)

Changes in v2:
- new patch: "tests/tcg/cris: Fix the coding style"
- new patch: "tests/tcg/cris: Correct the off-by-one error"
- new patch: "util/async-teardown: Fall back to close fds one by one"
- new patch: "util/osdep: Introduce qemu_close_range()"
- new patch: "util/async-teardown: Use qemu_close_range() to close fds"
- Change to use qemu_close_range() to close fds for child process efficiently
- v1 link: https://lore.kernel.org/qemu-devel/20230406112041.798585-1-bmeng@tinylab.org/

Bin Meng (4):
  tests/tcg/cris: Fix the coding style
  tests/tcg/cris: Correct the off-by-one error
  util/async-teardown: Fall back to close fds one by one
  util/osdep: Introduce qemu_close_range()

Zhangjin Wu (2):
  util/async-teardown: Use qemu_close_range() to close fds
  net: tap: Use qemu_close_range() to close fds

 include/qemu/osdep.h                |  1 +
 net/tap.c                           | 23 ++++++------
 tests/tcg/cris/libc/check_openpf5.c | 57 ++++++++++++++---------------
 util/async-teardown.c               | 38 +------------------
 util/osdep.c                        | 48 ++++++++++++++++++++++++
 5 files changed, 89 insertions(+), 78 deletions(-)

-- 
2.34.1
Re: [PATCH v3 0/6] net/tap: Fix QEMU frozen issue when the maximum number of file descriptors is very large
Posted by Richard Henderson 11 months ago
On 6/17/23 07:36, Bin Meng wrote:
> The current code, which does a brute-force traversal of all possible
> file descriptors, does not scale on systems where the maximum number
> of file descriptors is set to a very large value (e.g., in a Docker
> container of the Manjaro distribution it is set to 1073741816). QEMU
> simply looks frozen during start-up.
> 
> The close-on-exec flag (O_CLOEXEC) has been available since Linux
> kernel 2.6.23, FreeBSD 8.3, OpenBSD 5.0, and Solaris 11. While it's
> true that QEMU would not need to manually close fds for a child
> process if the O_CLOEXEC flag were set properly on every file its own
> code opens, QEMU also uses a huge number of 3rd party libraries, and
> we don't trust them to reliably use O_CLOEXEC on everything they open.
> 
> Modern Linux and BSDs have the close_range() call we can use to do
> the job, and on Linux there is one more way: walking through
> /proc/self/fd, which lists only the fds that are actually open. This
> is what qemu_close_range(), a new API added in util/osdep.c, does.
> 
> V1 link: https://lore.kernel.org/qemu-devel/20230406112041.798585-1-bmeng@tinylab.org/
> 
> Changes in v3:
> - fix win32 build failure
> - limit the last_fd of qemu_close_range() to sysconf(_SC_OPEN_MAX)

Sorry, I didn't see this and sent some comments against v2.
Though using _SC_OPEN_MAX was one of them, so that's nice.  :-)


r~
Re: [PATCH v3 0/6] net/tap: Fix QEMU frozen issue when the maximum number of file descriptors is very large
Posted by Michael Tokarev 10 months, 3 weeks ago
17.06.2023 08:36, Bin Meng wrote:
> 
> The current code, which does a brute-force traversal of all possible
> file descriptors, does not scale on systems where the maximum number
> of file descriptors is set to a very large value (e.g., in a Docker
> container of the Manjaro distribution it is set to 1073741816). QEMU
> simply looks frozen during start-up.

What's the reason to close all these file descriptors in the first place?

No other software I know of does this.

In some situations, such closing is actually harmful -- think, e.g.,

   flock lockfile qemu-system-foo ...

This opens a file, locks it using fcntl/flock, and executes the
command, keeping the file descriptor open across exec, so the file
stays locked until the process terminates. This works, and works well.
QEMU, with its let's-close-everything approach, breaks this.
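
For concreteness, the wrapper pattern boils down to something like
the following minimal POSIX sketch (not flock(1) itself; "lockfile"
is a placeholder name):

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            return 2;
        }
        /* Deliberately no O_CLOEXEC: the fd must survive exec. */
        int fd = open("lockfile", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || flock(fd, LOCK_EX) < 0) {
            return 1;
        }
        /* The exec'ed program inherits fd, so the lock is held
         * until the process exits -- unless that program closes
         * every inherited fd behind our back. */
        execvp(argv[1], &argv[1]);
        return 127; /* exec failed */
    }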

Why? :)

/mjt