With the recent efforts in upstream libvirt to centralize our CI on
gitlab, let's add a new gitlab-specific flavor along with related
playbook tasks. This flavor revolves around installing and configuring
the gitlab-runner agent binary which requires the per-project
registration token to be specified in order for the runner to be
successfully registered with the gitlab server.
Note that as part of the registration process each runner acquires a new
unique access token. This means that we must ensure that the
registration is run only on the first update, otherwise a new runner
with a new access token is registered with the gitlab project.
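To illustrate (values made up), the per-runner access token produced by
registration ends up in the runner's config.toml, so re-running the
registration would append another [[runners]] entry with yet another token:

```toml
# ~gitlab/.gitlab-runner/config.toml -- illustrative values only
concurrent = 1

[[runners]]
  name = "libvirt-fedora-31"
  url = "https://gitlab.com/"
  token = "xY2z..."   # unique per-runner access token, NOT the registration token
  executor = "shell"
```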
Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
guests/playbooks/update/main.yml | 5 +++
guests/playbooks/update/tasks/gitlab.yml | 52 ++++++++++++++++++++++++
2 files changed, 57 insertions(+)
create mode 100644 guests/playbooks/update/tasks/gitlab.yml
diff --git a/guests/playbooks/update/main.yml b/guests/playbooks/update/main.yml
index a5a4de8..371e53d 100644
--- a/guests/playbooks/update/main.yml
+++ b/guests/playbooks/update/main.yml
@@ -58,3 +58,8 @@
- include: '{{ playbook_base }}/tasks/jenkins.yml'
when:
- flavor == 'jenkins'
+
+ # Install the Gitlab runner agent
+ - include: '{{ playbook_base }}/tasks/gitlab.yml'
+ when:
+ - flavor == 'gitlab'
diff --git a/guests/playbooks/update/tasks/gitlab.yml b/guests/playbooks/update/tasks/gitlab.yml
new file mode 100644
index 0000000..b8f731d
--- /dev/null
+++ b/guests/playbooks/update/tasks/gitlab.yml
@@ -0,0 +1,52 @@
+---
+- name: Define gitlab-related facts
+ set_fact:
+ gitlab_url: '{{ lookup("file", gitlab_url_file) }}'
+ gitlab_runner_secret: '{{ lookup("file", gitlab_runner_token_file) }}'
+ gitlab_runner_download_url: https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-{{ ansible_system|lower }}-amd64
+ gitlab_runner_config_path: '/home/gitlab/.gitlab-runner/config.toml'
+
+- name: Download gitlab-runner agent
+ get_url:
+ url: '{{ gitlab_runner_download_url }}'
+ dest: /home/gitlab/bin/gitlab-runner
+ owner: gitlab
+ group: gitlab
+ mode: '0775'
+ force: yes
+
+- name: Register the gitlab-runner agent
+ become: true
+ become_user: gitlab
+ shell: '/home/gitlab/bin/gitlab-runner register --non-interactive --config {{ gitlab_runner_config_path }} --registration-token {{ gitlab_runner_secret }} --url {{ gitlab_url }} --executor shell --tag-list {{ inventory_hostname }}'
+ args:
+ creates: '{{ gitlab_runner_config_path }}'
+
+- block:
+ - name: Install the gitlab-runner service unit
+ template:
+ src: '{{ playbook_base }}/templates/gitlab-runner.service.j2'
+ dest: /etc/systemd/system/gitlab-runner.service
+
+ - name: Enable the gitlab-runner service
+ systemd:
+ name: gitlab-runner
+ state: started
+ enabled: yes
+ daemon_reload: yes
+ when: ansible_service_mgr == 'systemd'
+
+- block:
+ - name: Install the gitlab_runner rc service script
+ template:
+ src: '{{ playbook_base }}/templates/gitlab-runner.j2'
+ dest: '/usr/local/etc/rc.d/gitlab_runner'
+ mode: '0755'
+
+ - name: Enable the gitlab-runner rc service
+ service:
+ name: gitlab_runner
+ state: started
+ enabled: yes
+ when: ansible_service_mgr != 'systemd'
+
--
2.25.1
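For reference, the gitlab-runner.service.j2 template referenced by the playbook
is not part of this diff; a minimal sketch of what such a unit might contain,
assuming the paths used above (the exact ExecStart flags are a guess):

```ini
[Unit]
Description=GitLab Runner
After=network.target

[Service]
User=gitlab
WorkingDirectory=/home/gitlab
ExecStart=/home/gitlab/bin/gitlab-runner run --config /home/gitlab/.gitlab-runner/config.toml
Restart=always

[Install]
WantedBy=multi-user.target
```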
On Tue, 2020-04-07 at 13:31 +0200, Erik Skultety wrote:
> +- name: Register the gitlab-runner agent
> + become: true
> + become_user: gitlab
> + shell: '/home/gitlab/bin/gitlab-runner register --non-interactive --config {{ gitlab_runner_config_path }} --registration-token {{ gitlab_runner_secret }} --url {{ gitlab_url }} --executor shell --tag-list {{ inventory_hostname }}'
You didn't answer in the other thread, so I'll ask again here: is the
idea that we're going to use only the FreeBSD runners to supplement
the shared runners for the existing unit tests, and all Linux runners
are only going to be used for integration tests later on, hence why
we need to use the shell executor rather than the docker executor?
Additional nit: instead of using {{ inventory_hostname }} as tag, we
can have a nicer tag by using {{ os_name|lower }}-{{ os_version }}.
It would also be a good idea to quote all command arguments.
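Concretely, combining both suggestions the task would look something like this
(sketch; the rest of the task body as in the patch):

```yaml
- name: Register the gitlab-runner agent
  become: true
  become_user: gitlab
  shell: >
    /home/gitlab/bin/gitlab-runner register
    --non-interactive
    --config '{{ gitlab_runner_config_path }}'
    --registration-token '{{ gitlab_runner_secret }}'
    --url '{{ gitlab_url }}'
    --executor shell
    --tag-list '{{ os_name|lower }}-{{ os_version }}'
  args:
    creates: '{{ gitlab_runner_config_path }}'
```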
The rest looks good, but given the potential security issue raised by
Dan I'll wait for v3 before handing out actual ACKs :)
--
Andrea Bolognani / Red Hat / Virtualization
On Wed, Apr 08, 2020 at 12:05:11PM +0200, Andrea Bolognani wrote:
> On Tue, 2020-04-07 at 13:31 +0200, Erik Skultety wrote:
> > +- name: Register the gitlab-runner agent
> > + become: true
> > + become_user: gitlab
> > + shell: '/home/gitlab/bin/gitlab-runner register --non-interactive --config {{ gitlab_runner_config_path }} --registration-token {{ gitlab_runner_secret }} --url {{ gitlab_url }} --executor shell --tag-list {{ inventory_hostname }}'
>
> You didn't answer in the other thread, so I'll ask again here: is the
> idea that we're going to use only the FreeBSD runners to supplement
> the shared runners for the existing unit tests, and all Linux runners
> are only going to be used for integration tests later on, hence why
> we need to use the shell executor rather than the docker executor?
Why not both? We can always extend the capacity with VM builders, although it's
true that functional tests was what I had in mind originally (+the FreeBSD
exception like you already mentioned). Either way, I don't understand why we
should force usage of the docker executor for the VMs which we can use for
builds. The way I'm looking at the setup is: container setup vs VM setup, with
both being consistent in their own respective category, IOW, why should the
setup of the VM in terms of the gitlab-runner be different when running
functional tests vs builds? So, I'd like to stay with the shell executor on VMs
in all cases and not fragment the setup even further.
Furthermore, with the OpenShift infra we got, I see very little to no value in
using the VMs to extend our build capacity.
>
> Additional nit: instead of using {{ inventory_hostname }} as tag, we
> can have a nicer tag by using {{ os_name|lower }}-{{ os_version }}.
Sure, I don't mind that change.
> It would also be a good idea to quote all command arguments.
Will do.
--
Erik Skultety
On Wed, 2020-04-08 at 12:28 +0200, Erik Skultety wrote:
> On Wed, Apr 08, 2020 at 12:05:11PM +0200, Andrea Bolognani wrote:
> > You didn't answer in the other thread, so I'll ask again here: is the
> > idea that we're going to use only the FreeBSD runners to supplement
> > the shared runners for the existing unit tests, and all Linux runners
> > are only going to be used for integration tests later on, hence why
> > we need to use the shell executor rather than the docker executor?
>
> Why not both? We can always extend the capacity with VM builders, although it's
> true that functional tests was what I had in mind originally (+the FreeBSD
> exception like you already mentioned). Either way, I don't understand why we
> should force usage of the docker executor for the VMs which we can use for
> builds. The way I'm looking at the setup is: container setup vs VM setup, with
> both being consistent in their own respective category, IOW, why should the
> setup of the VM in terms of the gitlab-runner be different when running
> functional tests vs builds? So, I'd like to stay with the shell executor on VMs
> in all cases and not fragment the setup even further.
Because all the builds that currently exist are defined in terms of
containers, so when you have something like
  x86-fedora-31:
    script:
      ...
    image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
you cannot just run the same job on a worker that is configured to
use the shell executor.
I guess you could drop the image element and replace it with
  tags:
    - fedora-31
but then you'd either have to duplicate the job definition, or to
only have the new one which then would not work for forks and merge
requests, so that makes it less interesting.
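Put differently, the VM-targeted variant would have to become a separate job
along these lines (sketch, job name assumed):

```yaml
x86-fedora-31-vm:
  script:
    ...
  tags:
    - fedora-31
```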
> Furthermore, with the OpenShift infra we got, I see very little to no value in
> using the VMs to extend our build capacity.
I don't understand what you're trying to say here at all, sorry.
--
Andrea Bolognani / Red Hat / Virtualization
On Wed, Apr 08, 2020 at 03:21:00PM +0200, Andrea Bolognani wrote:
> On Wed, 2020-04-08 at 12:28 +0200, Erik Skultety wrote:
> > On Wed, Apr 08, 2020 at 12:05:11PM +0200, Andrea Bolognani wrote:
> > > You didn't answer in the other thread, so I'll ask again here: is the
> > > idea that we're going to use only the FreeBSD runners to supplement
> > > the shared runners for the existing unit tests, and all Linux runners
> > > are only going to be used for integration tests later on, hence why
> > > we need to use the shell executor rather than the docker executor?
> >
> > Why not both? We can always extend the capacity with VM builders, although it's
> > true that functional tests was what I had in mind originally (+the FreeBSD
> > exception like you already mentioned). Either way, I don't understand why we
> > should force usage of the docker executor for the VMs which we can use for
> > builds. The way I'm looking at the setup is: container setup vs VM setup, with
> > both being consistent in their own respective category, IOW, why should the
> > setup of the VM in terms of the gitlab-runner be different when running
> > functional tests vs builds? So, I'd like to stay with the shell executor on VMs
> > in all cases and not fragment the setup even further.
>
> Because all the builds that currently exist are defined in terms of
> containers, so when you have something like
>
>   x86-fedora-31:
>     script:
>       ...
>     image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
>
> you cannot just run the same job on a worker that is configured to
> use the shell executor.
>
> I guess you could drop the image element and replace it with
>
>   tags:
>     - fedora-31

^This is what I had in mind

> but then you'd either have to duplicate the job definition, or to
> only have the new one which then would not work for forks and merge
> requests, so that makes it less interesting.

We'll have to duplicate it for FreeBSD anyway, so I don't understand why should
do it differently for other VM distros.

> > Furthermore, with the OpenShift infra we got, I see very little to no value in
> > using the VMs to extend our build capacity.
>
> I don't understand what you're trying to say here at all, sorry.

What I meant is that I don't see much value to run the builds in VMs since we
have a bunch of images ready where we can already execute the builds so it's
basically only about having resources to spawn the containers and that's where
OpenShift comes in. I understand you reminding me that private runners cannot
be used to run on merge requests (and forks for obvious reasons), but the same
applies to VMs we're discussing in this thread. So, I wouldn't focus primarily
on running the builds there is what I'm saying.
--
Erik Skultety
On Wed, 2020-04-08 at 15:56 +0200, Erik Skultety wrote:
> On Wed, Apr 08, 2020 at 03:21:00PM +0200, Andrea Bolognani wrote:
> > I guess you could drop the image element and replace it with
> >
> >   tags:
> >     - fedora-31
>
> ^This is what I had in mind
>
> > but then you'd either have to duplicate the job definition, or to
> > only have the new one which then would not work for forks and merge
> > requests, so that makes it less interesting.
>
> We'll have to duplicate it for FreeBSD anyway, so I don't understand why should
> do it differently for other VM distros.

We will not duplicate it, because there is no existing container based
build for FreeBSD: the FreeBSD builds can, by definition, only happen
inside VMs.

> > I don't understand what you're trying to say here at all, sorry.
>
> What I meant is that I don't see much value to run the builds in VMs since we
> have a bunch of images ready where we can already execute the builds

Totally agree with you up until this point: this is exactly what I was
trying to convey.

> so it's
> basically only about having resources to spawn the containers and that's where
> OpenShift comes in.

I still don't understand how OpenShift is part of the picture. I have
probably either missed or forgotten something O:-)

> I understand you reminding me that private runners cannot be used to run on
> merge requests (and forks for obvious reasons), but the same applies to VMs
> we're discussing in this thread. So, I wouldn't focus primarily on running the
> builds there is what I'm saying.

I think we're kind of talking past each other at this point :)

To reiterate/clarify: I'm perfectly okay with using Linux VMs for
integration testing only and leaving builds to shared runner and
FreeBSD VMs. I don't think we should use Linux VMs for builds unless
we use the docker executor for them, but then again I don't think we
really need to use Linux VMs for builds in the first place.
--
Andrea Bolognani / Red Hat / Virtualization