How to scale Gitlab Runners in Kubernetes using HPA based on external metrics through the Prometheus Adapter

Raphael Moraes
Published in Webera
3 min read · Mar 15, 2022


Hey, I’m back here to share a much more efficient way to scale your Gitlab Runner: using a custom metric based on the number of jobs instead of the default resource-usage metrics. To do this, we need to expose the metric through the Kubernetes API “/apis/external.metrics.k8s.io/v1beta1”. Before continuing, it’s important to understand that in this scenario I’m assuming your Gitlab Runner instances run inside Kubernetes and that you deployed them the official way, using the “gitlab-runner” Helm chart.

By default, the HPA scales Pods based on CPU or memory usage (in the latest versions of Kubernetes), but this approach may not be very efficient in some scenarios. The reason I’m writing this article is a good example of a scenario in which the HPA needs custom metrics rather than the default resource-usage mechanism. Why? Because the Gitlab Runner Pods need to be scaled based on the number of jobs. To achieve this, we first need to export the Gitlab Runner’s metrics, then install the Prometheus Stack to collect those metrics, and finally install the Prometheus Adapter, which acts as a custom metrics API server for application-specific metrics (the external.metrics.k8s.io and custom.metrics.k8s.io APIs, which are not an official part of Kubernetes). In my case, I’m going to make the custom metric available through the “external.metrics.k8s.io” API.

Now that you understand why this approach is better and more efficient for this scenario, let’s walk through the configuration steps.

Step 1. Install the Kube Prometheus Stack and the Prometheus Adapter

The links below provide the installation instructions:

a. Installing Kube Prometheus Stack

Below is the code snippet that you need to adjust in the “prometheus.prometheusSpec” section to create a custom scrape job that collects the Gitlab Runner metrics using kubernetes_sd_configs (the Kubernetes service discovery mechanism):
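A minimal sketch of what such a scrape config could look like is shown here; the job name, namespace and the value of the “name” label are assumptions and must match your own Runner deployment (the Runner’s metrics endpoint, port 9252 by default, must also be enabled):

```yaml
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: gitlab-runner
        # Discover the Runner Pods via the Kubernetes service discovery mechanism
        kubernetes_sd_configs:
          - role: pod
            namespaces:
              names:
                - gitlab-runner   # namespace where the Runner is deployed (assumption)
        relabel_configs:
          # Keep only the Pods whose "name" label matches this specific Runner
          - source_labels: [__meta_kubernetes_pod_label_name]
            regex: gitlab-runner
            action: keep
          # Attach the Pod's namespace as a label so the metric can later be
          # mapped to a namespace by the Prometheus Adapter
          - source_labels: [__meta_kubernetes_namespace]
            target_label: namespace
            action: replace
```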

NOTE: The relabel_configs block is where I added an action to filter and keep only the Gitlab Runner Pods matching the “name” label. This is helpful when you are using more than one Runner: you can create a job_name for each Runner and filter by each specific name. It is extremely important for ensuring the HPA works properly in the scenario where you have multiple Runners.

b. Installing Prometheus Adapter (the API metrics server)

Below is the code snippet that you need to add under the “rules” section of the values.yaml for the Prometheus Adapter:
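The exact rule depends on the metric you scrape. As a sketch, assuming the gitlab_runner_jobs gauge exported by the Runner and the scrape job defined in step a, it could look like this:

```yaml
rules:
  external:
    # Expose the Runner's job gauge through the external.metrics.k8s.io API
    - seriesQuery: 'gitlab_runner_jobs{job="gitlab-runner"}'
      resources:
        # Map the metric to a namespace so the HPA can query it in the Runner's namespace
        overrides:
          namespace:
            resource: namespace
      name:
        matches: '^gitlab_runner_jobs$'
        as: 'gitlab_runner_jobs'
      metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
```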

Besides the code above, you also need to adjust the part of the values.yaml that sets the name of the Prometheus server you installed in the previous step (step a):
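A sketch of that block, assuming a kube-prometheus-stack release named “kube-prometheus-stack” installed in the “monitoring” namespace (adjust the service name and namespace to your own release):

```yaml
prometheus:
  # Prometheus service created by the kube-prometheus-stack release from step a (assumption)
  url: http://kube-prometheus-stack-prometheus.monitoring.svc
  port: 9090
```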

Step 2. Grant the HPA access to the external metrics API

Because I installed the Prometheus Adapter in a dedicated namespace, it was necessary to create a ClusterRole and a ClusterRoleBinding to grant the default “horizontal-pod-autoscaler” service account (residing in kube-system) permission to access the “external.metrics.k8s.io” API.

Cluster Role
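A ClusterRole granting read access to the external metrics API could look like this (the name “external-metrics-reader” is an arbitrary choice):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-metrics-reader
rules:
  # Allow reading any metric served by the external metrics API
  - apiGroups:
      - external.metrics.k8s.io
    resources:
      - "*"
    verbs:
      - get
      - list
      - watch
```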

Cluster Role Bindings
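And the ClusterRoleBinding that ties the role above to the HPA controller’s service account (a sketch, assuming the ClusterRole name used in the previous manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-metrics-reader
subjects:
  # Default service account used by the HPA controller, residing in kube-system
  - kind: ServiceAccount
    name: horizontal-pod-autoscaler
    namespace: kube-system
```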

Step 3. Adjust the HPA settings in the values.yaml of the Gitlab Runner Helm chart

Adjust the values.yaml file by adding the HPA settings, as in the example below:
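As a sketch, the hpa block could look like the following; the metric name, target value and replica bounds are assumptions, and the exact field names depend on which autoscaling API version your chart release renders (this example uses the autoscaling/v2beta1 style):

```yaml
hpa:
  minReplicas: 1
  maxReplicas: 10
  # The metrics list is passed through to the HorizontalPodAutoscaler spec,
  # targeting the external metric exposed by the Prometheus Adapter
  metrics:
    - type: External
      external:
        metricName: gitlab_runner_jobs
        targetAverageValue: "1"
```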

That’s it. =)

Now your HPA will scale the Gitlab Runner Pods based on the custom metric exposed through the “external.metrics.k8s.io” API.

Thank you for reading the article, and remember:

— “knowledge acquired but not shared is lost knowledge” —.
