[libvirt PATCH] daemon: set default memlock limit for systemd service

Posted by Pavel Hrdina 4 years, 1 month ago
The default memlock limit is 64k, which is not enough to start a single
VM. The requirements for one VM are 12k: 8k for the eBPF map and 4k for
the eBPF program. However, creating the eBPF map and program fails with
a 64k limit. By testing I figured out that the minimal limit to start a
single VM with functional eBPF is 80k, and with another 12k I can start
one more VM.

This leads to the following calculation:

An 80k memlock limit was enough to start a VM with eBPF, which means
there is 68k of locked memory that I was not able to attribute to
anything. So to get a number for 4096 VMs:

        68k + 12k * 4096 = 49220k

Since 49220k is just over 48M, rounding up gives a 49M memory lock limit,
enough to support 4096 VMs with the default map size, which can hold 64
device entries.

This should be good enough as a sane default, and users can change it if
they need to.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1807090

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
---
 src/remote/libvirtd.service.in | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/src/remote/libvirtd.service.in b/src/remote/libvirtd.service.in
index 9c8c54a2ef..8a3ace5bdb 100644
--- a/src/remote/libvirtd.service.in
+++ b/src/remote/libvirtd.service.in
@@ -40,6 +40,11 @@ LimitNOFILE=8192
 # A conservative default of 8 tasks per guest results in a TasksMax of
 # 32k to support 4096 guests.
 TasksMax=32768
+# With cgroups v2 there is no devices controller anymore; we have to use
+# eBPF to control access to devices.  In order to do that we create an eBPF
+# hash map which locked memory.  The default map size for 64 devices together
+# with the program takes 12k per guest, which results in 49M to support 4096 guests.
+LimitMEMLOCK=49M
 
 [Install]
 WantedBy=multi-user.target
-- 
2.24.1
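
Since LimitMEMLOCK= is only a ceiling, a deployment that needs a higher
limit can raise it without editing the shipped unit file, for example via
a systemd drop-in. A minimal sketch (the drop-in file name and the 128M
value are illustrative, not part of this patch):

    # /etc/systemd/system/libvirtd.service.d/memlock.conf
    # (created e.g. with "systemctl edit libvirtd"; hypothetical example)
    [Service]
    LimitMEMLOCK=128M

After "systemctl daemon-reload" and a restart of libvirtd,
"systemctl show libvirtd -p LimitMEMLOCK" reports the effective value.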

Re: [libvirt PATCH] daemon: set default memlock limit for systemd service
Posted by Michal Prívozník 4 years, 1 month ago
On 2/26/20 4:07 PM, Pavel Hrdina wrote:
> The default memlock limit is 64k, which is not enough to start a single
> VM. The requirements for one VM are 12k: 8k for the eBPF map and 4k for
> the eBPF program. However, creating the eBPF map and program fails with
> a 64k limit. By testing I figured out that the minimal limit to start a
> single VM with functional eBPF is 80k, and with another 12k I can start
> one more VM.
> 
> This leads to the following calculation:
> 
> An 80k memlock limit was enough to start a VM with eBPF, which means
> there is 68k of locked memory that I was not able to attribute to
> anything. So to get a number for 4096 VMs:
> 
>         68k + 12k * 4096 = 49220k
> 
> Since 49220k is just over 48M, rounding up gives a 49M memory lock limit,
> enough to support 4096 VMs with the default map size, which can hold 64
> device entries.
> 
> This should be good enough as a sane default, and users can change it if
> they need to.
> 
> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1807090
> 
> Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
> ---
>  src/remote/libvirtd.service.in | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/src/remote/libvirtd.service.in b/src/remote/libvirtd.service.in
> index 9c8c54a2ef..8a3ace5bdb 100644
> --- a/src/remote/libvirtd.service.in
> +++ b/src/remote/libvirtd.service.in
> @@ -40,6 +40,11 @@ LimitNOFILE=8192
>  # A conservative default of 8 tasks per guest results in a TasksMax of
>  # 32k to support 4096 guests.
>  TasksMax=32768
> +# With cgroups v2 there is no devices controller anymore; we have to use
> +# eBPF to control access to devices.  In order to do that we create an eBPF
> +# hash map which locked memory.  The default map size for 64 devices together

s/locked/locks/

> +# with the program takes 12k per guest, which results in 49M to support 4096 guests.
> +LimitMEMLOCK=49M

Should we round this up to the nearest power of two? 49MB looks just
ugly. This is just a limit; it doesn't mean that libvirtd will lock the
whole 49MB (or 64MB, as I suggest) right from the beginning.

>  
>  [Install]
>  WantedBy=multi-user.target
> 

Michal

Re: [libvirt PATCH] daemon: set default memlock limit for systemd service
Posted by Pavel Hrdina 4 years, 1 month ago
On Wed, Feb 26, 2020 at 04:33:13PM +0100, Michal Prívozník wrote:
> On 2/26/20 4:07 PM, Pavel Hrdina wrote:
> > The default memlock limit is 64k, which is not enough to start a single
> > VM. The requirements for one VM are 12k: 8k for the eBPF map and 4k for
> > the eBPF program. However, creating the eBPF map and program fails with
> > a 64k limit. By testing I figured out that the minimal limit to start a
> > single VM with functional eBPF is 80k, and with another 12k I can start
> > one more VM.
> > 
> > This leads to the following calculation:
> > 
> > An 80k memlock limit was enough to start a VM with eBPF, which means
> > there is 68k of locked memory that I was not able to attribute to
> > anything. So to get a number for 4096 VMs:
> > 
> >         68k + 12k * 4096 = 49220k
> > 
> > Since 49220k is just over 48M, rounding up gives a 49M memory lock limit,
> > enough to support 4096 VMs with the default map size, which can hold 64
> > device entries.
> > 
> > This should be good enough as a sane default, and users can change it if
> > they need to.
> > 
> > Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1807090
> > 
> > Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
> > ---
> >  src/remote/libvirtd.service.in | 5 +++++
> >  1 file changed, 5 insertions(+)
> > 
> > diff --git a/src/remote/libvirtd.service.in b/src/remote/libvirtd.service.in
> > index 9c8c54a2ef..8a3ace5bdb 100644
> > --- a/src/remote/libvirtd.service.in
> > +++ b/src/remote/libvirtd.service.in
> > @@ -40,6 +40,11 @@ LimitNOFILE=8192
> >  # A conservative default of 8 tasks per guest results in a TasksMax of
> >  # 32k to support 4096 guests.
> >  TasksMax=32768
> > +# With cgroups v2 there is no devices controller anymore; we have to use
> > +# eBPF to control access to devices.  In order to do that we create an eBPF
> > +# hash map which locked memory.  The default map size for 64 devices together
> 
> s/locked/locks/
> 
> > +# with the program takes 12k per guest, which results in 49M to support 4096 guests.
> > +LimitMEMLOCK=49M
> 
> Should we round this up to the nearest power of two? 49MB looks just
> ugly. This is just a limit; it doesn't mean that libvirtd will lock the
> whole 49MB (or 64MB, as I suggest) right from the beginning.

I'm glad to see this suggestion because I was tempted to round it up to
64M as well, so that works for me.

Pavel

Re: [libvirt PATCH] daemon: set default memlock limit for systemd service
Posted by Michal Privoznik 4 years, 1 month ago
On 2/26/20 4:58 PM, Pavel Hrdina wrote:
> On Wed, Feb 26, 2020 at 04:33:13PM +0100, Michal Prívozník wrote:
>> On 2/26/20 4:07 PM, Pavel Hrdina wrote:
>>> The default memlock limit is 64k, which is not enough to start a single
>>> VM. The requirements for one VM are 12k: 8k for the eBPF map and 4k for
>>> the eBPF program. However, creating the eBPF map and program fails with
>>> a 64k limit. By testing I figured out that the minimal limit to start a
>>> single VM with functional eBPF is 80k, and with another 12k I can start
>>> one more VM.
>>>
>>> This leads to the following calculation:
>>>
>>> An 80k memlock limit was enough to start a VM with eBPF, which means
>>> there is 68k of locked memory that I was not able to attribute to
>>> anything. So to get a number for 4096 VMs:
>>>
>>>          68k + 12k * 4096 = 49220k
>>>
>>> Since 49220k is just over 48M, rounding up gives a 49M memory lock limit,
>>> enough to support 4096 VMs with the default map size, which can hold 64
>>> device entries.
>>>
>>> This should be good enough as a sane default, and users can change it if
>>> they need to.
>>>
>>> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1807090
>>>
>>> Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
>>> ---
>>>   src/remote/libvirtd.service.in | 5 +++++
>>>   1 file changed, 5 insertions(+)
>>>
>>> diff --git a/src/remote/libvirtd.service.in b/src/remote/libvirtd.service.in
>>> index 9c8c54a2ef..8a3ace5bdb 100644
>>> --- a/src/remote/libvirtd.service.in
>>> +++ b/src/remote/libvirtd.service.in
>>> @@ -40,6 +40,11 @@ LimitNOFILE=8192
>>>   # A conservative default of 8 tasks per guest results in a TasksMax of
>>>   # 32k to support 4096 guests.
>>>   TasksMax=32768
>>> +# With cgroups v2 there is no devices controller anymore; we have to use
>>> +# eBPF to control access to devices.  In order to do that we create an eBPF
>>> +# hash map which locked memory.  The default map size for 64 devices together
>>
>> s/locked/locks/
>>
>>> +# with the program takes 12k per guest, which results in 49M to support 4096 guests.
>>> +LimitMEMLOCK=49M
>>
>> Should we round this up to the nearest power of two? 49MB looks just
>> ugly. This is just a limit; it doesn't mean that libvirtd will lock the
>> whole 49MB (or 64MB, as I suggest) right from the beginning.
> 
> I'm glad to see this suggestion because I was tempted to round it up to
> 64M as well, so that works for me.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

And safe for freeze.

Michal

Re: [libvirt PATCH] daemon: set default memlock limit for systemd service
Posted by Cole Robinson 4 years, 1 month ago
On 2/26/20 10:07 AM, Pavel Hrdina wrote:
> The default memlock limit is 64k, which is not enough to start a single
> VM. The requirements for one VM are 12k: 8k for the eBPF map and 4k for
> the eBPF program. However, creating the eBPF map and program fails with
> a 64k limit. By testing I figured out that the minimal limit to start a
> single VM with functional eBPF is 80k, and with another 12k I can start
> one more VM.
> 
> This leads to the following calculation:
> 
> An 80k memlock limit was enough to start a VM with eBPF, which means
> there is 68k of locked memory that I was not able to attribute to
> anything. So to get a number for 4096 VMs:
> 
>         68k + 12k * 4096 = 49220k
> 
> Since 49220k is just over 48M, rounding up gives a 49M memory lock limit,
> enough to support 4096 VMs with the default map size, which can hold 64
> device entries.
> 
> This should be good enough as a sane default, and users can change it if
> they need to.
> 
> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1807090
> 
> Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
> ---
>  src/remote/libvirtd.service.in | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/src/remote/libvirtd.service.in b/src/remote/libvirtd.service.in
> index 9c8c54a2ef..8a3ace5bdb 100644
> --- a/src/remote/libvirtd.service.in
> +++ b/src/remote/libvirtd.service.in
> @@ -40,6 +40,11 @@ LimitNOFILE=8192
>  # A conservative default of 8 tasks per guest results in a TasksMax of
>  # 32k to support 4096 guests.
>  TasksMax=32768
> +# With cgroups v2 there is no devices controller anymore; we have to use
> +# eBPF to control access to devices.  In order to do that we create an eBPF
> +# hash map which locked memory.  The default map size for 64 devices together
> +# with the program takes 12k per guest, which results in 49M to support 4096 guests.
> +LimitMEMLOCK=49M
>  
>  [Install]
>  WantedBy=multi-user.target
> 

I guess we will want this for virtqemud and virtlxcd as well.
Any idea whether the root issue affects qemu:///session?

Thanks,
Cole


Re: [libvirt PATCH] daemon: set default memlock limit for systemd service
Posted by Pavel Hrdina 4 years, 1 month ago
On Wed, Feb 26, 2020 at 10:35:58AM -0500, Cole Robinson wrote:
> On 2/26/20 10:07 AM, Pavel Hrdina wrote:
> > The default memlock limit is 64k, which is not enough to start a single
> > VM. The requirements for one VM are 12k: 8k for the eBPF map and 4k for
> > the eBPF program. However, creating the eBPF map and program fails with
> > a 64k limit. By testing I figured out that the minimal limit to start a
> > single VM with functional eBPF is 80k, and with another 12k I can start
> > one more VM.
> > 
> > This leads to the following calculation:
> > 
> > An 80k memlock limit was enough to start a VM with eBPF, which means
> > there is 68k of locked memory that I was not able to attribute to
> > anything. So to get a number for 4096 VMs:
> > 
> >         68k + 12k * 4096 = 49220k
> > 
> > Since 49220k is just over 48M, rounding up gives a 49M memory lock limit,
> > enough to support 4096 VMs with the default map size, which can hold 64
> > device entries.
> > 
> > This should be good enough as a sane default, and users can change it if
> > they need to.
> > 
> > Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1807090
> > 
> > Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
> > ---
> >  src/remote/libvirtd.service.in | 5 +++++
> >  1 file changed, 5 insertions(+)
> > 
> > diff --git a/src/remote/libvirtd.service.in b/src/remote/libvirtd.service.in
> > index 9c8c54a2ef..8a3ace5bdb 100644
> > --- a/src/remote/libvirtd.service.in
> > +++ b/src/remote/libvirtd.service.in
> > @@ -40,6 +40,11 @@ LimitNOFILE=8192
> >  # A conservative default of 8 tasks per guest results in a TasksMax of
> >  # 32k to support 4096 guests.
> >  TasksMax=32768
> > +# With cgroups v2 there is no devices controller anymore; we have to use
> > +# eBPF to control access to devices.  In order to do that we create an eBPF
> > +# hash map which locked memory.  The default map size for 64 devices together
> > +# with the program takes 12k per guest, which results in 49M to support 4096 guests.
> > +LimitMEMLOCK=49M
> >  
> >  [Install]
> >  WantedBy=multi-user.target
> > 
> 
> I guess we will want this for virtqemud and virtlxcd as well.
> Any idea whether the root issue affects qemu:///session?

Good point about virtlxcd and virtqemud; I'll add it there in v2.

Cgroups are not used for the session daemon.

Pavel
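
A v2 along the lines discussed above would presumably carry the same
stanza, rounded up to 64M per the review, into the virtqemud and virtlxcd
unit templates as well. A sketch of the added lines (wording and placement
are assumptions, not a posted patch):

    # eBPF device control on cgroups v2 locks memory: roughly 12k per
    # guest plus a fixed overhead, so 64M comfortably covers 4096 guests.
    LimitMEMLOCK=64M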