
Containers for Azure DevOps Automation

Introduction

For most of us, containers are associated with building cloud-native applications and microservices platforms. Their portability makes them interesting for designing and architecting cloud applications. However, they can be incredibly beneficial in other contexts too, such as DevOps automation. In this blog post, I want to show how containers combined with Azure DevOps can help automate DevOps for Azure cloud solutions.

The first question that may arise is: why containers for DevOps automation? To answer it, let me set the scene a bit. Imagine that we use Azure DevOps to store the source code of our solution alongside the Azure infrastructure code. The solution consists of a variety of Azure services running inside an Azure Virtual Network. To illustrate, here is a straightforward diagram:

[Diagram: Azure services, including Function Apps, running inside an Azure Virtual Network]

The Azure Virtual Network is used to secure the environment, and the Function Apps are integrated with the VNET via Private Link. You probably already know this, but using Microsoft-hosted Azure DevOps agents with this design can be troublesome. Why? Because, by default, they cannot reach the Azure Functions to deploy your code: since the Azure resources sit inside the Azure Virtual Network, direct access to them is restricted.
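For reference, a VNET-integrated Function App of this kind is typically exposed through a private endpoint. Below is a minimal Bicep sketch of such an endpoint; the resource names, parameters, and API version are illustrative assumptions rather than part of the original setup.

param location string = resourceGroup().location
param functionAppId string            // resource ID of the existing Function App
param privateEndpointSubnetId string  // subnet dedicated to private endpoints

// Private endpoint that exposes the Function App inside the Virtual Network only
resource functionAppPrivateEndpoint 'Microsoft.Network/privateEndpoints@2022-07-01' = {
  name: 'pe-func-app'
  location: location
  properties: {
    subnet: {
      id: privateEndpointSubnetId
    }
    privateLinkServiceConnections: [
      {
        name: 'func-app-connection'
        properties: {
          privateLinkServiceId: functionAppId
          groupIds: [
            'sites'
          ]
        }
      }
    ]
  }
}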

In this situation, we have three options for deploying our code with Azure DevOps Pipelines:

  • Use the Azure DevOps service tag to grant Azure DevOps access to the Virtual Network. This is not an ideal approach, as we cannot allow access for a single Microsoft-hosted agent and must instead add a whole range of IP addresses (here you can read more about it). Because the Virtual Network is then exposed to every Microsoft-hosted agent in that Azure DevOps region, none of which are under our control, this may not be acceptable due to security constraints.

  • Use Azure Virtual Machines integrated with our VNET and install the Azure DevOps Self-Hosted Agent on them. This resolves the network-access issue, but introduces the cost and maintenance burden of virtual machines.

  • Run Self-Hosted Agents as containers. Azure Container Instances, Azure Container Apps, and Azure Kubernetes Service are the three options for running these containers, and we will discuss each of them later in the article. Compared to virtual machines, the obvious advantage of this approach is its lower cost.

Virtual Machines vs Container services in Azure

Let’s compare the two approaches to hosting Self-Hosted Agents in the Azure cloud that let us deploy our code to Azure Virtual Network resources: virtual machines and container services.
Virtual Machines to automate DevOps
Azure DevOps Self-Hosted Agents are most commonly hosted on Azure Virtual Machines connected to the Azure Virtual Network. This makes it simple to publish code to Azure Web Apps and Function Apps inside the same Azure Virtual Network.
Unfortunately, there are two major issues with this approach: cost and maintenance.
An Azure Virtual Machine is expensive. Take a Linux VM with a minimal configuration: the estimated monthly cost is around $13.19. It may of course be lower or higher depending on usage, but assuming $13.19 per month, the annual cost comes to $158.28. The price rises further if you want to use a Windows machine. On top of that, the virtual machine itself must be kept patched and up to date.

Azure Container Instances


Azure Container Instances enable you to run containers on Azure without managing virtual machines or adopting a higher-level service. They are ideal for scenarios that can run in isolated containers, such as simple applications, task automation, and Azure DevOps pipeline build jobs. Looking at the pricing, we can see that it is far less expensive than an Azure Virtual Machine.

[Screenshot: Azure Container Instances pricing]

It is worth knowing that when we create Azure Container Instances, we can choose to run a single container or a container group:

[Screenshot: choosing between a single container and a container group when creating Azure Container Instances]

A container group is a set of containers scheduled on the same host machine; it is the rough equivalent of a pod in the Kubernetes world. The containers in a group share a lifecycle, resources, a local network, and storage volumes. This also means container groups cannot be scaled dynamically: once a Container Instance has been created, the number of container groups cannot be changed.
To make the cost easier to understand, let’s walk through the simple example provided by the Microsoft pricing calculator.
We create a Linux container group with 1.3 vCPU and 2.15 GB of memory fifty times a day over a month (30 days). Each container group runs for 150 seconds. In this example, the CPU and memory usage must be rounded to determine the overall cost.
Memory cost:
number of container groups * memory duration (seconds) * GB * price per GB-s * number of days
50 container groups * 150 seconds * 2.2 GB * $0.00000124 per GB-s * 30 days = $0.612

vCPU cost:
number of container groups * vCPU duration (seconds) * number of vCPUs * price per vCPU-s * number of days = $5.063

Total cost:
memory cost + vCPU cost = $0.612 + $5.063 = $5.675
It’s important to note that pricing varies depending on the operating system used to run the containers (Windows or Linux): Windows container groups incur an additional software charge of $0.000012 per vCPU-second.
Azure Container Instances can be used to run Azure DevOps Self-Hosted Agents. In this case, the Azure DevOps pipeline can be scheduled to run once the Container Instance has been created. In a typical scenario, one Self-Hosted Agent is deployed to a single Azure Container Instance:

[Diagram: one Self-Hosted Agent running in a single Azure Container Instance]

In other words, if we want additional Self-Hosted Agents to handle scheduled pipeline runs, we must create more Azure Container Instances; there is no event-driven or dynamic scaling. If three separate jobs are queued in Azure DevOps Pipelines, they will be picked up and executed sequentially.
There is also no way to scale a particular ACI instance vertically, which is a drawback: if you want more CPU or memory, you have to recreate the container. When it comes to Azure DevOps Self-Hosted Agents, this is the single most significant downside of this Azure service. To scale out to five agents, for example, you would have to create five separate container instances.
Horizontally scaling Azure Container Instances based on the number of pipeline runs waiting in a given agent pool is not possible either. This means our pipelines must be designed to create an Azure Container Instance first and only then execute the actual job (see the sketch at the end of this section).
There are further advantages compared to IaaS virtual machines:

  • No public IP address is used; since the Azure DevOps agent initiates communication with the service, none is needed.
  • No ports are exposed, so nothing needs to be published.
  • A container instance takes roughly 5 to 10 minutes to be fully configured with the necessary components.
In conclusion, Azure Container Instances is the optimal choice when we need to run Azure DevOps Self-Hosted Agents, do not want to manage Azure Virtual Machines, and need to cut costs. Notably, Azure Container Instances can be deployed into an Azure Virtual Network, which allows the agent to communicate with the other resources in that network.
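To make this concrete, here is a minimal Bicep sketch of a container group running a single Self-Hosted Agent inside a Virtual Network subnet. The agent image, resource names, and API version are illustrative assumptions; the AZP_* environment variables follow the standard Azure DevOps container-agent setup.

param location string = resourceGroup().location
param agentSubnetId string   // subnet delegated to Microsoft.ContainerInstance/containerGroups
@secure()
param azpToken string        // Azure DevOps personal access token used by the agent

// One container group = one Self-Hosted Agent
resource agentContainerGroup 'Microsoft.ContainerInstance/containerGroups@2021-09-01' = {
  name: 'aci-azp-agent'
  location: location
  properties: {
    osType: 'Linux'
    restartPolicy: 'Never' // use 'Always' if the agent should keep running between jobs
    subnetIds: [
      {
        id: agentSubnetId
      }
    ]
    containers: [
      {
        name: 'azp-agent'
        properties: {
          image: 'myregistry.azurecr.io/azp-agent:latest' // your own agent image; registry credentials omitted for brevity
          resources: {
            requests: {
              cpu: 1
              memoryInGB: 2
            }
          }
          environmentVariables: [
            {
              name: 'AZP_URL'
              value: 'https://dev.azure.com/my-organization'
            }
            {
              name: 'AZP_POOL'
              value: 'aci-agent-pool'
            }
            {
              name: 'AZP_TOKEN'
              secureValue: azpToken
            }
          ]
        }
      }
    ]
  }
}

A pipeline that needs an agent inside the VNET would first deploy a template like this (for example with an ARM/Bicep deployment task) and then queue the actual job against the aci-agent-pool agent pool.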

Azure Container Apps


Personally, this is my favourite container service in the Azure cloud. Azure Container Apps enables you to run microservices and containerized applications on a serverless platform so you can forget about managing complex Kubernetes clusters but still benefit from Kubernetes concepts.

One of the biggest advantages is auto-scaling. Applications built on Azure Container Apps can dynamically scale based on the following characteristics:

  • HTTP traffic
  • Event-driven processing
  • CPU or memory load
  • Any KEDA-supported scaler

Azure Container Apps manages automatic horizontal scaling through a set of declarative scaling rules. As a container app scales out, new instances of the container app are created on demand. These instances are known as replicas. When you first create a container app, the minimum replica count is set to zero, and no charges are incurred while an application is scaled to zero.

Can we use Azure Container Apps to run Azure DevOps Self-Hosted Agents? Of course we can! Let’s dive into the topic and see why Azure Container Apps can be more beneficial than Azure Container Instances.

Scaling

The biggest advantage over Azure Container Instances is that we can automatically scale the number of containers running our Self-Hosted Agents. This is a good place to mention that Azure Container Apps supports KEDA ScaledObjects and all of the available KEDA scalers. For those of you who are not familiar with it, KEDA (Kubernetes-based Event Driven Autoscaler) lets you drive the scaling of any container in Kubernetes based on the number of events that need to be processed; here is a link.

In the Azure Container Apps documentation, we can find a section with an explanation of how to enable KEDA scaling.

[Screenshot: KEDA scaling section in the Azure Container Apps documentation]

Why are we talking about KEDA, and how does it relate to Azure DevOps Self-Hosted Agents? Because with KEDA we can scale Azure Container Apps based on the agent pool queue for Azure Pipelines. I encourage you to read the whole article about this topic here.

The code below presents an Azure Container App definition written in Azure Bicep, with KEDA auto-scaling enabled so that the app scales out when more pipeline runs are queued in Azure DevOps:
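Here is a minimal sketch of what such a definition can look like, assuming an existing Container Apps environment and a custom agent image (myregistry.azurecr.io/azp-agent:latest is a placeholder). The scale rule uses the KEDA azure-pipelines scaler; all names, values, and the API version are illustrative.

param location string = resourceGroup().location
param containerAppsEnvironmentId string   // existing Container Apps environment
param azpUrl string                       // e.g. https://dev.azure.com/my-organization
param azpPoolName string                  // agent pool monitored by the scaler
@secure()
param azpToken string                     // Azure DevOps personal access token

resource agentContainerApp 'Microsoft.App/containerApps@2022-10-01' = {
  name: 'aca-azp-agent'
  location: location
  properties: {
    managedEnvironmentId: containerAppsEnvironmentId
    configuration: {
      secrets: [
        {
          name: 'azp-token'
          value: azpToken
        }
        {
          name: 'organization-url'
          value: azpUrl
        }
      ]
    }
    template: {
      containers: [
        {
          name: 'azp-agent'
          image: 'myregistry.azurecr.io/azp-agent:latest' // your own agent image; registry credentials omitted for brevity
          resources: {
            cpu: json('1.0')
            memory: '2Gi'
          }
          env: [
            {
              name: 'AZP_URL'
              value: azpUrl
            }
            {
              name: 'AZP_POOL'
              value: azpPoolName
            }
            {
              name: 'AZP_TOKEN'
              secretRef: 'azp-token'
            }
          ]
        }
      ]
      scale: {
        minReplicas: 0
        maxReplicas: 5
        rules: [
          {
            name: 'azure-pipelines-queue'
            custom: {
              type: 'azure-pipelines'
              metadata: {
                poolName: azpPoolName
                targetPipelinesQueueLength: '1'
              }
              auth: [
                {
                  secretRef: 'azp-token'
                  triggerParameter: 'personalAccessToken'
                }
                {
                  secretRef: 'organization-url'
                  triggerParameter: 'organizationURL'
                }
              ]
            }
          }
        ]
      }
    }
  }
}

With minReplicas set to 0 and the azure-pipelines scale rule in place, the Container App should scale out roughly one replica per queued pipeline run (up to maxReplicas) and scale back to zero when the agent pool queue is empty.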

Summary
In this post, I outlined the most common and beneficial methods for running Azure DevOps Self-Hosted Agents while reducing the costs associated with Azure Virtual Machines. With these approaches, we can deploy to Azure Virtual Network resources, which would be impossible with Microsoft-hosted agents. I hope this post helps you design pipelines for your workload deployments.
