The current gitlab CI jobs are quite inefficient because they
use the generic distro images and then apt-get/dnf install
extra packages every time.

The other downside is that the container environment used is
only defined in the .gitlab-ci.yml file, so it is tedious to
reproduce locally.

We already have containers defined in tests/docker for use by
developers building locally. We can use these for CI systems
too if we just had a way to build them....

...GitLab CI offers such a way. We can use docker-in-docker
to build the images at the start of the CI cycle, and use
the built images in later jobs.

These later jobs are now faster because they're not having
to install any software.

There is a performance hit to build the images; however, they are
cached in the GitLab docker registry. A user forking the repo will use
the cache from qemu-project/qemu and their own local fork. So overall
the time penalty to build images will only be seen if the user modifies
tests/docker/dockerfiles in their patch series, or if the base image
changes.

The GitLab container registry is public, so we're not limited to using
the built images in GitLab CI. Any other CI system that uses docker can
consume these images. Similarly we could change the tests/docker/Makefile
rules so that developers pull from https://gitlab.com/qemu-project/qemu,
instead of building images locally. This is why we're building all the
images, instead of just the handful the current jobs want.

The interesting stuff is mostly in patch 2. Patch 3 is a quick hack I
did to convert existing jobs to use the newly built images. I know
there's other work being done on the GitLab jobs right now that
probably conflicts with this, though, so I've not invested much time in
patch 3. Consider it more an illustration of what's possible than a
finished proposal for merge.

The patch 3 diff is fairly unintelligible, so it is easier to look at
the final result:

  https://gitlab.com/berrange/qemu/-/blob/ci-cont/.gitlab-ci.yml

An example pipeline can be viewed here:

  https://gitlab.com/berrange/qemu/-/pipelines/158824359

The cached images are here:

  https://gitlab.com/berrange/qemu/container_registry

Daniel P. Berrangé (3):
  gitlab: introduce explicit "container" and "build" stages
  gitlab: build all container images during CI
  gitlab: convert jobs to use custom built containers

 .gitlab-ci.d/containers.yml | 248 ++++++++++++++++++++++++++++++++++++
 .gitlab-ci.d/edk2.yml       |   3 +-
 .gitlab-ci.d/opensbi.yml    |   3 +-
 .gitlab-ci.yml              | 187 +++++++++++++--------------
 4 files changed, 340 insertions(+), 101 deletions(-)
 create mode 100644 .gitlab-ci.d/containers.yml

-- 
2.24.1
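To give a feel for what such a docker-in-docker job looks like, here is
a minimal sketch of the pattern patch 2 relies on. The template/job
names, stage names, registry paths and the dockerfile name below are
illustrative assumptions, not the exact contents of
.gitlab-ci.d/containers.yml:

# Minimal sketch, not the real containers.yml: job names, stage names,
# registry paths and the dockerfile name are assumptions for the example.
.container_job_template:
  stage: containers
  image: docker:stable
  services:
    - docker:dind
  before_script:
    # Image for this fork, plus the upstream image used to seed the layer cache
    - export TAG="$CI_REGISTRY_IMAGE/qemu/$NAME:latest"
    - export COMMON_TAG="$CI_REGISTRY/qemu-project/qemu/qemu/$NAME:latest"
    - docker login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
  script:
    # Pull whatever cache is available; a miss just means a cold build
    - docker pull "$TAG" || docker pull "$COMMON_TAG" || true
    - docker build --cache-from "$TAG" --cache-from "$COMMON_TAG" --tag "$TAG" -f "tests/docker/dockerfiles/$NAME.docker" tests/docker
    - docker push "$TAG"
  after_script:
    - docker logout "$CI_REGISTRY"

# One such job per dockerfile under tests/docker/dockerfiles/
amd64-fedora-container:
  extends: .container_job_template
  variables:
    NAME: fedora

Because the registry is public, other CI systems or developers can
consume the same images with a plain docker pull, e.g. something along
the lines of
"docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest"
(the exact path depends on how the images end up being named).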
Daniel P. Berrangé <berrange@redhat.com> writes:

> The current gitlab CI jobs are quite inefficient because they
> use the generic distro images and then apt-get/dnf install
> extra packages every time.
<snip>

I should say I've queued this into testing/next and tweaked it slightly.

-- 
Alex Bennée
On 22/06/2020 17.33, Daniel P. Berrangé wrote:
> The current gitlab CI jobs are quite inefficient because they
> use the generic distro images and then apt-get/dnf install
> extra packages every time.
>
> The other downside is that the container environment used is
> only defined in the .gitlab-ci.yml file, so it is tedious to
> reproduce locally.
>
> We already have containers defined in tests/docker for use by
> developers building locally. We can use these for CI systems
> too if we just had a way to build them....
>
> ...GitLab CI offers such a way. We can use docker-in-docker
> to build the images at the start of the CI cycle, and use
> the built images in later jobs.
>
> These later jobs are now faster because they're not having
> to install any software.

Did you see any speed-up? I had a look at some pipelines, and it seems
to me that they rather got slower now? For example, this is the system1
pipeline before your change:

 https://gitlab.com/huth/qemu/-/jobs/610924897

and after your change:

 https://gitlab.com/huth/qemu/-/jobs/611069374

Duration went up from 35 minutes to 42 minutes.

Seems also to happen in your builds, before the change:

 https://gitlab.com/berrange/qemu/-/jobs/582995084

and after the change:

 https://gitlab.com/berrange/qemu/-/jobs/606175927

... went from 36 minutes up to 42 minutes.

Could be a coincidence due to the load on the shared runners, but it
looks at least a little bit suspicious...

 Thomas
On Thu, Jun 25, 2020 at 01:15:52PM +0200, Thomas Huth wrote:
> On 22/06/2020 17.33, Daniel P. Berrangé wrote:
> > The current gitlab CI jobs are quite inefficient because they
> > use the generic distro images and then apt-get/dnf install
> > extra packages every time.
> >
> > The other downside is that the container environment used is
> > only defined in the .gitlab-ci.yml file, so it is tedious to
> > reproduce locally.
> >
> > We already have containers defined in tests/docker for use by
> > developers building locally. We can use these for CI systems
> > too if we just had a way to build them....
> >
> > ...GitLab CI offers such a way. We can use docker-in-docker
> > to build the images at the start of the CI cycle, and use
> > the built images in later jobs.
> >
> > These later jobs are now faster because they're not having
> > to install any software.
>
> Did you see any speed-up? I had a look at some pipelines, and it seems to me
> that they rather got slower now? For example, this is the system1 pipeline
> before your change:
>
> https://gitlab.com/huth/qemu/-/jobs/610924897
>
> and after your change:
>
> https://gitlab.com/huth/qemu/-/jobs/611069374
>
> Duration went up from 35 minutes to 42 minutes.
>
> Seems also to happen in your builds, before the change:
>
> https://gitlab.com/berrange/qemu/-/jobs/582995084
>
> and after the change:
>
> https://gitlab.com/berrange/qemu/-/jobs/606175927
>
> ... went from 36 minutes up to 42 minutes.
>
> Could be a coincidence due to the load on the shared runners, but it looks
> at least a little bit suspicious...

I think the difference is because we're building more features now. The
dockerfiles provide more build pre-requisites than the old gitlab
recipe did.

If you compare the configure summary, I see the new build now covers
SDL, curses, curl, pulseaudio, virtiofs, SASL, libjpeg, xen, docs
and a few more. So we've saved time by not installing many packages
each time, but consumed a greater amount of time by compiling more
features.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
On Thu, Jun 25, 2020 at 12:26:53PM +0100, Daniel P. Berrangé wrote:
> On Thu, Jun 25, 2020 at 01:15:52PM +0200, Thomas Huth wrote:
> > On 22/06/2020 17.33, Daniel P. Berrangé wrote:
> > > The current gitlab CI jobs are quite inefficient because they
> > > use the generic distro images and then apt-get/dnf install
> > > extra packages every time.
> > >
> > > The other downside is that the container environment used is
> > > only defined in the .gitlab-ci.yml file, so it is tedious to
> > > reproduce locally.
> > >
> > > We already have containers defined in tests/docker for use by
> > > developers building locally. We can use these for CI systems
> > > too if we just had a way to build them....
> > >
> > > ...GitLab CI offers such a way. We can use docker-in-docker
> > > to build the images at the start of the CI cycle, and use
> > > the built images in later jobs.
> > >
> > > These later jobs are now faster because they're not having
> > > to install any software.
> >
> > Did you see any speed-up? I had a look at some pipelines, and it seems to me
> > that they rather got slower now? For example, this is the system1 pipeline
> > before your change:
> >
> > https://gitlab.com/huth/qemu/-/jobs/610924897
> >
> > and after your change:
> >
> > https://gitlab.com/huth/qemu/-/jobs/611069374
> >
> > Duration went up from 35 minutes to 42 minutes.
> >
> > Seems also to happen in your builds, before the change:
> >
> > https://gitlab.com/berrange/qemu/-/jobs/582995084
> >
> > and after the change:
> >
> > https://gitlab.com/berrange/qemu/-/jobs/606175927
> >
> > ... went from 36 minutes up to 42 minutes.
> >
> > Could be a coincidence due to the load on the shared runners, but it looks
> > at least a little bit suspicious...
>
> I think the difference is because we're building more features now. The
> dockerfiles provide more build pre-requisites than the old gitlab
> recipe did.
>
> If you compare the configure summary, I see the new build now covers
> SDL, curses, curl, pulseaudio, virtiofs, SASL, libjpeg, xen, docs
> and a few more. So we've saved time by not installing many packages
> each time, but consumed a greater amount of time by compiling more
> features.

Oh, I missed a lot more actually - there's also spice, opengl, libiscsi,
libnfs, libusb, seccomp, libssh, lzo, snappy, bzip, zstd, numa and udev
too.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
On 25/06/2020 13.29, Daniel P. Berrangé wrote:
> On Thu, Jun 25, 2020 at 12:26:53PM +0100, Daniel P. Berrangé wrote:
>> On Thu, Jun 25, 2020 at 01:15:52PM +0200, Thomas Huth wrote:
>>> On 22/06/2020 17.33, Daniel P. Berrangé wrote:
>>>> The current gitlab CI jobs are quite inefficient because they
>>>> use the generic distro images and then apt-get/dnf install
>>>> extra packages every time.
>>>>
>>>> The other downside is that the container environment used is
>>>> only defined in the .gitlab-ci.yml file, so it is tedious to
>>>> reproduce locally.
>>>>
>>>> We already have containers defined in tests/docker for use by
>>>> developers building locally. We can use these for CI systems
>>>> too if we just had a way to build them....
>>>>
>>>> ...GitLab CI offers such a way. We can use docker-in-docker
>>>> to build the images at the start of the CI cycle, and use
>>>> the built images in later jobs.
>>>>
>>>> These later jobs are now faster because they're not having
>>>> to install any software.
>>>
>>> Did you see any speed-up? I had a look at some pipelines, and it seems to me
>>> that they rather got slower now? For example, this is the system1 pipeline
>>> before your change:
>>>
>>> https://gitlab.com/huth/qemu/-/jobs/610924897
>>>
>>> and after your change:
>>>
>>> https://gitlab.com/huth/qemu/-/jobs/611069374
>>>
>>> Duration went up from 35 minutes to 42 minutes.
>>>
>>> Seems also to happen in your builds, before the change:
>>>
>>> https://gitlab.com/berrange/qemu/-/jobs/582995084
>>>
>>> and after the change:
>>>
>>> https://gitlab.com/berrange/qemu/-/jobs/606175927
>>>
>>> ... went from 36 minutes up to 42 minutes.
>>>
>>> Could be a coincidence due to the load on the shared runners, but it looks
>>> at least a little bit suspicious...
>>
>> I think the difference is because we're building more features now. The
>> dockerfiles provide more build pre-requisites than the old gitlab
>> recipe did.
>>
>> If you compare the configure summary, I see the new build now covers
>> SDL, curses, curl, pulseaudio, virtiofs, SASL, libjpeg, xen, docs
>> and a few more. So we've saved time by not installing many packages
>> each time, but consumed a greater amount of time by compiling more
>> features.
>
> Oh, I missed a lot more actually - there's also spice, opengl, libiscsi,
> libnfs, libusb, seccomp, libssh, lzo, snappy, bzip, zstd, numa and udev
> too.

Ok, that's fair, I think it's ok to spend some additional minutes for the
extended test coverage here.

 Thomas
On Thu, Jun 25, 2020 at 01:33:42PM +0200, Thomas Huth wrote:
> On 25/06/2020 13.29, Daniel P. Berrangé wrote:
> > On Thu, Jun 25, 2020 at 12:26:53PM +0100, Daniel P. Berrangé wrote:
> > > On Thu, Jun 25, 2020 at 01:15:52PM +0200, Thomas Huth wrote:
> > > > On 22/06/2020 17.33, Daniel P. Berrangé wrote:
> > > > > The current gitlab CI jobs are quite inefficient because they
> > > > > use the generic distro images and then apt-get/dnf install
> > > > > extra packages every time.
> > > > >
> > > > > The other downside is that the container environment used is
> > > > > only defined in the .gitlab-ci.yml file, so it is tedious to
> > > > > reproduce locally.
> > > > >
> > > > > We already have containers defined in tests/docker for use by
> > > > > developers building locally. We can use these for CI systems
> > > > > too if we just had a way to build them....
> > > > >
> > > > > ...GitLab CI offers such a way. We can use docker-in-docker
> > > > > to build the images at the start of the CI cycle, and use
> > > > > the built images in later jobs.
> > > > >
> > > > > These later jobs are now faster because they're not having
> > > > > to install any software.
> > > >
> > > > Did you see any speed-up? I had a look at some pipelines, and it seems to me
> > > > that they rather got slower now? For example, this is the system1 pipeline
> > > > before your change:
> > > >
> > > > https://gitlab.com/huth/qemu/-/jobs/610924897
> > > >
> > > > and after your change:
> > > >
> > > > https://gitlab.com/huth/qemu/-/jobs/611069374
> > > >
> > > > Duration went up from 35 minutes to 42 minutes.
> > > >
> > > > Seems also to happen in your builds, before the change:
> > > >
> > > > https://gitlab.com/berrange/qemu/-/jobs/582995084
> > > >
> > > > and after the change:
> > > >
> > > > https://gitlab.com/berrange/qemu/-/jobs/606175927
> > > >
> > > > ... went from 36 minutes up to 42 minutes.
> > > >
> > > > Could be a coincidence due to the load on the shared runners, but it looks
> > > > at least a little bit suspicious...
> > >
> > > I think the difference is because we're building more features now. The
> > > dockerfiles provide more build pre-requisites than the old gitlab
> > > recipe did.
> > >
> > > If you compare the configure summary, I see the new build now covers
> > > SDL, curses, curl, pulseaudio, virtiofs, SASL, libjpeg, xen, docs
> > > and a few more. So we've saved time by not installing many packages
> > > each time, but consumed a greater amount of time by compiling more
> > > features.
> >
> > Oh, I missed a lot more actually - there's also spice, opengl, libiscsi,
> > libnfs, libusb, seccomp, libssh, lzo, snappy, bzip, zstd, numa and udev
> > too.
>
> Ok, that's fair, I think it's ok to spend some additional minutes for the
> extended test coverage here.

Unlike Travis, which limits us to 5 concurrent jobs, GitLab doesn't seem
to have a fixed concurrency limit (at least I've not managed to hit one
yet). So if we want faster overall build time, we have a lot more scope
for splitting jobs in half and making use of much better parallelism in
CI to get this to complete sooner.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
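To make that splitting concrete, here is a rough sketch of the kind of
job pair it could produce. The job names, stage name, image path and
target lists are invented for the example, not taken from the series:

# Illustrative split only: two build jobs in the same stage run
# concurrently, each covering half of the target list.
build-system1:
  stage: builds
  image: $CI_REGISTRY_IMAGE/qemu/fedora:latest
  script:
    - mkdir build && cd build
    - ../configure --target-list=x86_64-softmmu,aarch64-softmmu
    - make -j"$(nproc)"
    - make -j"$(nproc)" check

build-system2:
  stage: builds
  image: $CI_REGISTRY_IMAGE/qemu/fedora:latest
  script:
    - mkdir build && cd build
    - ../configure --target-list=ppc64-softmmu,s390x-softmmu
    - make -j"$(nproc)"
    - make -j"$(nproc)" check

Since both jobs sit in the same stage, GitLab schedules them in
parallel; halving the targets per job roughly halves the wall-clock
time of the longest job, at the cost of repeating the configure step.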