Run the bare minimum build needed to create the docs. Ideally
the '--without-remote' arg would be passed, but there are several bugs
preventing a build from succeeding without the remote driver built.
The generated website is published as an artifact and thus is browsable
on build completion and can be downloaded as a zip file.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
.gitlab-ci.yml | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index ea49c6178b..6f7e0ce135 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -44,3 +44,26 @@ debian-sid-cross-i686:
debian-sid-cross-mipsel:
<<: *job_definition
image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-mipsel:latest
+
+# The artifact published by this job is downloaded by libvirt.org to
+# be deployed to the web root:
+# https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=website
+website:
+ script:
+ - mkdir build
+ - cd build
+ - ../autogen.sh $CONFIGURE_OPTS --prefix=$(pwd)/../vroot || (cat config.log && exit 1)
+ - make -j $(getconf _NPROCESSORS_ONLN)
+ - make -j $(getconf _NPROCESSORS_ONLN) install
+ - cd ..
+ - mv vroot/share/doc/libvirt/html/ website
+ image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
+ variables:
+ CONFIGURE_OPTS: --without-libvirtd --without-esx --without-hyperv --without-test --without-dtrace --without-openvz --without-vmware --without-attr --without-audit --without-blkid --without-bash-completion --without-capng --without-curl --without-dbus --without-firewalld --without-fuse --without-glusterfs --without-libiscsi --without-libssh --without-numactl --without-openwsman --without-pciaccess --without-readline --without-sanlock --without-sasl --without-selinux --without-ssh2 --without-udev
+ artifacts:
+ expose_as: 'Website'
+ name: 'website'
+ when: on_success
+ expire_in: 30 days
+ paths:
+ - website
--
2.24.1
On Tue, 2020-03-10 at 10:09 +0000, Daniel P. Berrangé wrote:
> +# The artifact published by this job is downloaded by libvirt.org to
> +# be deployed to the web root:
> +# https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=website
> +website:
> + script:
> + - mkdir build
> + - cd build
> + - ../autogen.sh $CONFIGURE_OPTS --prefix=$(pwd)/../vroot || (cat config.log && exit 1)
> + - make -j $(getconf _NPROCESSORS_ONLN)
> + - make -j $(getconf _NPROCESSORS_ONLN) install
> + - cd ..
> + - mv vroot/share/doc/libvirt/html/ website
> + image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
> + variables:
> + CONFIGURE_OPTS: --without-libvirtd --without-esx --without-hyperv --without-test --without-dtrace --without-openvz --without-vmware --without-attr --without-audit --without-blkid --without-bash-completion --without-capng --without-curl --without-dbus --without-firewalld --without-fuse --without-glusterfs --without-libiscsi --without-libssh --without-numactl --without-openwsman --without-pciaccess --without-readline --without-sanlock --without-sasl --without-selinux --without-ssh2 --without-udev
> + artifacts:
> + expose_as: 'Website'
> + name: 'website'
> + when: on_success
> + expire_in: 30 days
> + paths:
> + - website
The overall idea of building the website as a CI job is a reasonable
one, especially because it will allow us to stop periodically
spending time convincing libvirt to build just enough on what has long
been an unsupported target.
A couple of more technical questions:
* why do we care about whether all those features are enabled or
not? It's pretty ugly to have that list hardcoded in our build
scripts, and I don't quite get the point in having it in the
first place;
* as a follow up, why would this be a separate job? We are always
going to build the website on one of our supported targets, so
basically we end up building it twice...
Can't we just generate the artifact as a side-effect of the regular
Fedora 31 build?
--
Andrea Bolognani / Red Hat / Virtualization
On Fri, Mar 20, 2020 at 06:07:47PM +0100, Andrea Bolognani wrote:
> On Tue, 2020-03-10 at 10:09 +0000, Daniel P. Berrangé wrote:
> > [...]
>
> The overall idea of building the website as a CI job is a reasonable
> one, especially because it will allow us to stop periodically
> spending time convincing libvirt to build just enough on what has long
> been an unsupported target.
>
> A couple of more technical questions:
>
> * why do we care about whether all those features are enabled or
>   not? It's pretty ugly to have that list hardcoded in our build
>   scripts, and I don't quite get the point in having it in the
>   first place;

It is to reduce the build time - it cuts time for the job in 1/2,
which is a worthwhile win.

> * as a follow up, why would this be a separate job? We are always
>   going to build the website on one of our supported targets, so
>   basically we end up building it twice...
>
> Can't we just generate the artifact as a side-effect of the regular
> Fedora 31 build?

The other jobs run "distcheck" for building, and as such the built
artifacts are all deleted upon success, as it's all internal to the
distcheck target.

Using gitlab stages for different types of builds gives us a more
friendly output view. We can distinguish what aspect of the build
has failed at a glance instead of having to peer into the 100's of
KB of build logs.

Eventually we can make use of filters so that when people submit a
patch which only touches the docs, we can skip all the native build
and cross build jobs entirely, only running the docs stage.

Regards,
Daniel

--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
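[Editorial note: the path-based filtering Daniel describes could be expressed with GitLab CI's `only:changes` keyword. The following fragment is an illustrative sketch only, not part of the posted patch; the job name and path globs are assumptions, and note that for plain branch pushes `only:changes` compares against the previous commit, so it needs care to behave reliably.]

```yaml
# Illustrative sketch: run this heavyweight build job only when
# non-documentation paths change, so docs-only pushes skip it.
# Job name and path globs are hypothetical.
x86_64-fedora-31:
  image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
  script:
    - make -j $(getconf _NPROCESSORS_ONLN) distcheck
  only:
    changes:
      - src/**/*
      - configure.ac
      - Makefile.am
```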
On Fri, 2020-03-20 at 17:27 +0000, Daniel P. Berrangé wrote:
> On Fri, Mar 20, 2020 at 06:07:47PM +0100, Andrea Bolognani wrote:
> > * why do we care about whether all those features are enabled or
> >   not? It's pretty ugly to have that list hardcoded in our build
> >   scripts, and I don't quite get the point in having it in the
> >   first place;
>
> It is to reduce the build time - it cuts time for the job in 1/2,
> which is a worthwhile win.

On my laptop, where make is configured to use parallel builds by
default through $MAKEFLAGS:

  $ git clean -xdf && time sh -c 'mkdir build && cd build && ../autogen.sh && make && make install DESTDIR=$(pwd)/../install'

  real    2m52.997s
  user    14m46.604s
  sys     1m56.444s

  $ git clean -xdf && time sh -c 'mkdir build && cd build && ../autogen.sh --without-libvirtd --without-esx --without-hyperv --without-test --without-dtrace --without-openvz --without-vmware --without-attr --without-audit --without-blkid --without-bash-completion --without-capng --without-curl --without-dbus --without-firewalld --without-fuse --without-glusterfs --without-libiscsi --without-libssh --without-numactl --without-openwsman --without-pciaccess --without-readline --without-sanlock --without-sasl --without-selinux --without-ssh2 --without-udev && make && make install DESTDIR=$(pwd)/../install'

  real    1m59.594s
  user    9m4.929s
  sys     1m13.152s

  $ git clean -xdf && time sh -c 'mkdir build && cd build && ../autogen.sh && make -C docs/ && make -C docs/ install DESTDIR=$(pwd)/../install'

  real    0m33.350s
  user    0m54.281s
  sys     0m10.986s

So we can basically have our cake and eat it too! :)

> Using gitlab stages for different types of builds gives us a more
> friendly output view. We can distinguish what aspect of the build
> has failed at a glance instead of having to peer into the 100's
> of KB of build logs.

Are we eventually going to have the same syntax-check / build /
check split as we currently have in Jenkins?

> Eventually we can make use of filters so that when people submit
> a patch which only touches the docs, we can skip all the native
> build and cross build jobs entirely, only running the docs stage.

Sounds like a nice little optimization, assuming we can get it to
work reliably, but I have to wonder how frequent it really is that
the documentation is updated outside of a series that touches the
code as well...

--
Andrea Bolognani / Red Hat / Virtualization
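[Editorial note: Andrea's timings suggest the job could drop the long list of `--without-*` flags entirely and build just the docs subdirectory. A hedged sketch of what the job might then look like, assuming the docs-only make targets work from a freshly configured tree, which the thread does not itself confirm:]

```yaml
# Hypothetical variant of the website job: configure with defaults,
# then build and install only the docs/ subdirectory.
website:
  image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
  script:
    - mkdir build
    - cd build
    - ../autogen.sh --prefix=$(pwd)/../vroot || (cat config.log && exit 1)
    - make -j $(getconf _NPROCESSORS_ONLN) -C docs
    - make -j $(getconf _NPROCESSORS_ONLN) -C docs install
    - cd ..
    - mv vroot/share/doc/libvirt/html/ website
```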
On Mon, Mar 23, 2020 at 03:35:03PM +0100, Andrea Bolognani wrote:
> On Fri, 2020-03-20 at 17:27 +0000, Daniel P. Berrangé wrote:
> > [...]
>
> Are we eventually going to have the same syntax-check / build /
> check split as we currently have in Jenkins?

This isn't a desirable approach, because in general you're not
going to be sharing the git checkout between build jobs in the
pipeline. You can selectively publish data from one stage to
another, but we don't really want to publish the entire build
dir output, which is what would be required to split off the
build & check stages.

The main benefit for having them separate is to make it easier
to view the logs to see what part failed.

GitLab has a mechanism for publishing artifacts, and the GNOME
projects use this to publish their unit test results in junit
format IIUC. If we can get something like this wired up then we
can solve the problem of making it easy to view test failures
as a distinct thing from general build failures.

Regards,
Daniel

--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
On Mon, 2020-03-23 at 15:27 +0000, Daniel P. Berrangé wrote:
> On Mon, Mar 23, 2020 at 03:35:03PM +0100, Andrea Bolognani wrote:
> > Are we eventually going to have the same syntax-check / build /
> > check split as we currently have in Jenkins?
>
> This isn't a desirable approach, because in general you're not
> going to be sharing the git checkout between build jobs in the
> pipeline. You can selectively publish data from one stage to
> another, but we don't really want to publish the entire build
> dir output, which is what would be required to split off the
> build & check stages.

Makes sense. I was asking mostly out of curiosity anyway.

> The main benefit for having them separate is to make it easier
> to view the logs to see what part failed.
>
> GitLab has a mechanism for publishing artifacts, and the GNOME
> projects use this to publish their unit test results in junit
> format IIUC. If we can get something like this wired up then we
> can solve the problem of making it easy to view test failures
> as a distinct thing from general build failures.

That'd be neat :)

--
Andrea Bolognani / Red Hat / Virtualization
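[Editorial note: the JUnit wiring Daniel alludes to is GitLab's `artifacts:reports:junit` keyword. An illustrative sketch, assuming a hypothetical job name and that the test suite can emit JUnit-format XML at the path shown:]

```yaml
# Illustrative sketch: publish test results as a JUnit report so the
# GitLab UI can show test failures separately from the build log.
# The report path is hypothetical.
x86_64-fedora-31:
  script:
    - make -j $(getconf _NPROCESSORS_ONLN) check
  artifacts:
    when: always
    reports:
      junit: tests/test-results.xml
```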
On Tue, Mar 10, 2020 at 10:09:41AM +0000, Daniel P. Berrangé wrote:
> Run the bare minimum build needed to create the docs. Ideally
> the '--without-remote' arg would be passed, but there are several bugs
> preventing a build from succeeding without the remote driver built.
>
> The generated website is published as an artifact and thus is browsable
> on build completion and can be downloaded as a zip file.
>
> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> ---

Reviewed-by: Erik Skultety <skultety.erik@gmail.com>