Linux still defaults to a 1024 open file handle limit. This causes
scalability problems for libvirtd / virtlockd / virtlogd on large
hosts which might want > 1024 guests to be running. In fact if each
guest needs > 1 FD, we can't even get to 500 guests. This is not
good enough when we see machines with hundreds of physical cores and
TBs of RAM.
In comparison to other memory requirements of libvirtd & related
daemons, the resource usage associated with open file handles
is essentially line noise. It is thus reasonable to increase the
limits unconditionally for all installs.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
---
 daemon/libvirtd.service.in       | 7 +++++--
 src/locking/virtlockd.service.in | 4 ++++
 src/logging/virtlogd.service.in  | 5 +++++
 3 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/daemon/libvirtd.service.in b/daemon/libvirtd.service.in
index c72dde5..22fc156 100644
--- a/daemon/libvirtd.service.in
+++ b/daemon/libvirtd.service.in
@@ -24,8 +24,11 @@ ExecStart=@sbindir@/libvirtd $LIBVIRTD_ARGS
 ExecReload=/bin/kill -HUP $MAINPID
 KillMode=process
 Restart=on-failure
-# Override the maximum number of opened files
-#LimitNOFILE=2048
+# At least 1 FD per guest, often 2 (eg qemu monitor + qemu agent).
+# If we want to support 2048 guests, we'll typically need 4096 FDs
+# If changing this, also consider virtlogd.service & virtlockd.service
+# limits which are also related to number of guests
+LimitNOFILE=8192
 
 [Install]
 WantedBy=multi-user.target
diff --git a/src/locking/virtlockd.service.in b/src/locking/virtlockd.service.in
index 69b568f..c369591 100644
--- a/src/locking/virtlockd.service.in
+++ b/src/locking/virtlockd.service.in
@@ -13,6 +13,10 @@ ExecReload=/bin/kill -USR1 $MAINPID
 # cause the machine to be fenced (rebooted), so make
 # sure we discourage OOM killer
 OOMScoreAdjust=-900
+# Needs to allow for max guests * average disks per guest
+# libvirtd.service written to expect 4096 guests, so if we
+# allow for 4 disks per guest, we get:
+LimitNOFILE=16384
 
 [Install]
 Also=virtlockd.socket
diff --git a/src/logging/virtlogd.service.in b/src/logging/virtlogd.service.in
index 09e0740..be039c6 100644
--- a/src/logging/virtlogd.service.in
+++ b/src/logging/virtlogd.service.in
@@ -13,6 +13,11 @@ ExecReload=/bin/kill -USR1 $MAINPID
 # cause the machine to be fenced (rebooted), so make
 # sure we discourage OOM killer
 OOMScoreAdjust=-900
+# Need to have at least one file open per guest (eg QEMU
+# stdio log), but might be more (eg serial console logs)
+# libvirtd.service written to expect 2048 guests, so if we
+# guess at 2 log files per guest here (stdio + 1 serial):
+LimitNOFILE=8192
 
 [Install]
 Also=virtlogd.socket
--
2.9.3
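
As a quick sanity check of limits like the ones this patch sets, the kernel
exposes them per process; a minimal sketch (the 1024 default is the common
Linux value mentioned in the commit message, actual values vary by distro
and configuration):

```shell
# Soft and hard open-file limits of the current shell
ulimit -Sn    # historically 1024 on most Linux distros
ulimit -Hn

# The same information for any process you own, e.g. a running
# daemon, is available from /proc/<pid>/limits:
grep 'Max open files' /proc/self/limits
```

For a systemd-managed daemon, `systemctl show <unit> -p LimitNOFILE` reports
the value the unit will be started with.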
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
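
Administrators who need limits other than the packaged defaults should not
edit the shipped unit files; a systemd drop-in is the usual mechanism. A
sketch (the path is the standard drop-in location; the 32768 value is an
arbitrary example, not taken from the patch):

```ini
# /etc/systemd/system/libvirtd.service.d/limits.conf
# Hypothetical local override; size the value to your guest count.
[Service]
LimitNOFILE=32768
```

followed by `systemctl daemon-reload` and a restart of the service.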
On 03/15/2017 12:55 PM, Daniel P. Berrange wrote:
> Linux still defaults to a 1024 open file handle limit. This causes
> scalability problems for libvirtd / virtlockd / virtlogd on large
> hosts which might want > 1024 guest to be running. In fact if each
> guest needs > 1 FD, we can't even get to 500 guests. This is not
> good enough when we see machines with 100's of physical cores and
> TBs of RAM.
>
> In comparison to other memory requirements of libvirtd & related
> daemons, the resource usage associated with open file handles
> is essentially line noise. It is thus reasonable to increase the
> limits unconditionally for all installs.

ACK.
Hi,
On Wed, Mar 15, 2017 at 04:55:04PM +0000, Daniel P. Berrange wrote:
> Linux still defaults to a 1024 open file handle limit. This causes
> scalability problems for libvirtd / virtlockd / virtlogd on large
> hosts which might want > 1024 guest to be running. In fact if each
> guest needs > 1 FD, we can't even get to 500 guests. This is not
> good enough when we see machines with 100's of physical cores and
> TBs of RAM.
>
> In comparison to other memory requirements of libvirtd & related
> daemons, the resource usage associated with open file handles
> is essentially line noise. It is thus reasonable to increase the
> limits unconditionally for all installs.
>
> Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
> ---
>  daemon/libvirtd.service.in       | 7 +++++--
>  src/locking/virtlockd.service.in | 4 ++++
>  src/logging/virtlogd.service.in  | 5 +++++
>  3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/daemon/libvirtd.service.in b/daemon/libvirtd.service.in
> index c72dde5..22fc156 100644
> --- a/daemon/libvirtd.service.in
> +++ b/daemon/libvirtd.service.in
> @@ -24,8 +24,11 @@ ExecStart=@sbindir@/libvirtd $LIBVIRTD_ARGS
>  ExecReload=/bin/kill -HUP $MAINPID
>  KillMode=process
>  Restart=on-failure
> -# Override the maximum number of opened files
> -#LimitNOFILE=2048
> +# At least 1 FD per guest, often 2 (eg qemu monitor + qemu agent).
> +# If we want to support 2048 guests, we'll typically need 4096 FDs

4096 FDs here…

> +# If changing this, also consider virtlogd.service & virtlockd.service
> +# limits which are also related to number of guests
> +LimitNOFILE=8192

…but 8192 here. So we're looking at 4096 rather than 2048 guests (2 fds
per guest)?

>
>  [Install]
>  WantedBy=multi-user.target
> diff --git a/src/locking/virtlockd.service.in b/src/locking/virtlockd.service.in
> index 69b568f..c369591 100644
> --- a/src/locking/virtlockd.service.in
> +++ b/src/locking/virtlockd.service.in
> @@ -13,6 +13,10 @@ ExecReload=/bin/kill -USR1 $MAINPID
>  # cause the machine to be fenced (rebooted), so make
>  # sure we discourage OOM killer
>  OOMScoreAdjust=-900
> +# Needs to allow for max guests * average disks per guest
> +# libvirtd.service written to expect 4096 guests, so if we
> +# allow for 4 disks per guest, we get:
> +LimitNOFILE=16384

Correct if we're looking at 4096 guests above.

>
>  [Install]
>  Also=virtlockd.socket
> diff --git a/src/logging/virtlogd.service.in b/src/logging/virtlogd.service.in
> index 09e0740..be039c6 100644
> --- a/src/logging/virtlogd.service.in
> +++ b/src/logging/virtlogd.service.in
> @@ -13,6 +13,11 @@ ExecReload=/bin/kill -USR1 $MAINPID
>  # cause the machine to be fenced (rebooted), so make
>  # sure we discourage OOM killer
>  OOMScoreAdjust=-900
> +# Need to have at least one file open per guest (eg QEMU
> +# stdio log), but might be more (eg serial console logs)
> +# libvirtd.service written to expect 2048 guests, so if we

Rather 4096 as above?

> +# guess at 2 log files per guest here (stdio + 1 serial):
> +LimitNOFILE=8192
>
>  [Install]
>  Also=virtlogd.socket

Cheers,
 -- Guido
On Wed, Mar 15, 2017 at 07:34:04PM +0100, Guido Günther wrote:
> Hi,
> On Wed, Mar 15, 2017 at 04:55:04PM +0000, Daniel P. Berrange wrote:
> > Linux still defaults to a 1024 open file handle limit. This causes
> > scalability problems for libvirtd / virtlockd / virtlogd on large
> > hosts which might want > 1024 guest to be running. In fact if each
> > guest needs > 1 FD, we can't even get to 500 guests. This is not
> > good enough when we see machines with 100's of physical cores and
> > TBs of RAM.
> >
> > In comparison to other memory requirements of libvirtd & related
> > daemons, the resource usage associated with open file handles
> > is essentially line noise. It is thus reasonable to increase the
> > limits unconditionally for all installs.
> >
> > Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
> > ---
> >  daemon/libvirtd.service.in       | 7 +++++--
> >  src/locking/virtlockd.service.in | 4 ++++
> >  src/logging/virtlogd.service.in  | 5 +++++
> >  3 files changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/daemon/libvirtd.service.in b/daemon/libvirtd.service.in
> > index c72dde5..22fc156 100644
> > --- a/daemon/libvirtd.service.in
> > +++ b/daemon/libvirtd.service.in
> > @@ -24,8 +24,11 @@ ExecStart=@sbindir@/libvirtd $LIBVIRTD_ARGS
> >  ExecReload=/bin/kill -HUP $MAINPID
> >  KillMode=process
> >  Restart=on-failure
> > -# Override the maximum number of opened files
> > -#LimitNOFILE=2048
> > +# At least 1 FD per guest, often 2 (eg qemu monitor + qemu agent).
> > +# If we want to support 2048 guests, we'll typically need 4096 FDs
>
> 4096 FDs here…

Sigh, double both these numbers

> > +# If changing this, also consider virtlogd.service & virtlockd.service
> > +# limits which are also related to number of guests
> > +LimitNOFILE=8192
>
> …but 8192 here. So we're looking at 4096 rather than 2048 guests (2 fds
> per guest)?
>
> >
> >  [Install]
> >  WantedBy=multi-user.target
> > diff --git a/src/locking/virtlockd.service.in b/src/locking/virtlockd.service.in
> > index 69b568f..c369591 100644
> > --- a/src/locking/virtlockd.service.in
> > +++ b/src/locking/virtlockd.service.in
> > @@ -13,6 +13,10 @@ ExecReload=/bin/kill -USR1 $MAINPID
> >  # cause the machine to be fenced (rebooted), so make
> >  # sure we discourage OOM killer
> >  OOMScoreAdjust=-900
> > +# Needs to allow for max guests * average disks per guest
> > +# libvirtd.service written to expect 4096 guests, so if we
> > +# allow for 4 disks per guest, we get:
> > +LimitNOFILE=16384
>
> Correct if we're looking at 4095 guests above.
>
> >
> >  [Install]
> >  Also=virtlockd.socket
> > diff --git a/src/logging/virtlogd.service.in b/src/logging/virtlogd.service.in
> > index 09e0740..be039c6 100644
> > --- a/src/logging/virtlogd.service.in
> > +++ b/src/logging/virtlogd.service.in
> > @@ -13,6 +13,11 @@ ExecReload=/bin/kill -USR1 $MAINPID
> >  # cause the machine to be fenced (rebooted), so make
> >  # sure we discourage OOM killer
> >  OOMScoreAdjust=-900
> > +# Need to have at least one file open per guest (eg QEMU
> > +# stdio log), but might be more (eg serial console logs)
> > +# libvirtd.service written to expect 2048 guests, so if we
>
> Rather 4096 as above?

Yes.

> > +# guess at 2 log files per guest here (stdio + 1 serial):
> > +LimitNOFILE=8192
> >
> >  [Install]
> >  Also=virtlogd.socket

I'll push with the comments fixed

Regards,
Daniel
--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org    -o-    http://search.cpan.org/~danberr/ :|