Portainer has CD (Continuous Delivery) functionality built in. With our Portainer API, Git-based deployment method, Kube-Proxy and the ability to download a user specific kubeconfig file, Portainer makes it that much easier to automate your app deployments and management.
But how would I integrate all of this with my existing CI pipelines, you may ask? The following use case showcases one such scenario.
In this blog post we use GitLab as our repository and CI system and AWS ECR as our container registry, and we show you how to tie it all together in Portainer. We will cover how to deploy and manage a simple multi-tier application on a single cluster; this can easily be expanded to cover more complex applications or multiple clusters. This will be a long post, so please bear with us.
There are several parts to this workflow:
- Repository with all application files hosted on GitLab
- AWS ECR for all of our application container images
- ECR configured in Portainer as a custom registry
- Build and push images using GitLab CI
- Configure application deployment in Portainer
- Application auto-update using Portainer
- Re-deploy / update the application using GitLab CI
We are using Guestbook as the example application. This example shows how to build a simple multi-tier web application using Kubernetes and Docker. The application consists of a web front end, a Redis master for storage, and a replicated set of Redis slaves, for all of which we will create Kubernetes replication controllers, pods, and services.
We are using a public repo on GitLab.com, but you can do the same with your self-hosted GitLab instance. Our GitLab repository contains all the manifest files needed to deploy the application, along with the Dockerfiles and the application source code. We will be using v2 of the application in this example; you can see the contents of the repo in the screenshot below.
For our application we will build the analyzer and guestbook container images. For the back end we will be utilising the redis image from public registries. We will build and publish our images to AWS Elastic Container Registry (ECR). We've pre-created the necessary repositories in AWS, as shown below.
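For reference, the same repositories can be created from the command line with the AWS CLI. A minimal sketch, assuming a placeholder region; the commands are echoed so the script can be dry-run without AWS credentials:

```shell
AWS_REGION="us-east-1"   # assumption: adjust to your region
# One ECR repository per image the pipeline will push.
CMDS=$(for repo in guestbook analyzer; do
  echo "aws ecr create-repository --repository-name $repo --region $AWS_REGION"
done)
echo "$CMDS"
```

Remove the `echo` wrapper to run the `aws ecr create-repository` calls for real.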
ECR Registry config in Portainer
If ECR is not already configured in Portainer, you can add it as a custom registry.
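The same can be done through the Portainer API. A sketch of the request body, assuming Portainer's registries endpoint (`POST /api/registries`, where Type 3 is a custom registry) and a placeholder registry address; ECR authenticates with the username `AWS` and a token from `aws ecr get-login-password`:

```shell
PORTAINER_URL="https://portainer.example.com"   # assumption: your Portainer address
# Registry definition; account ID and region are placeholders.
PAYLOAD=$(cat <<'EOF'
{
  "Name": "AWS ECR",
  "URL": "123456789012.dkr.ecr.us-east-1.amazonaws.com",
  "Type": 3,
  "Authentication": true,
  "Username": "AWS",
  "Password": "<output of: aws ecr get-login-password>"
}
EOF
)
echo "$PAYLOAD"
# To submit it (requires a Portainer API key):
# curl -sf -H "X-API-Key: $PORTAINER_TOKEN" -H "Content-Type: application/json" \
#   -d "$PAYLOAD" "$PORTAINER_URL/api/registries"
```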
Next, enable access to the registry for the namespace we will deploy our application into. Below is an example of creating a namespace and giving it access to a specific custom registry in Portainer.
Build and Publish Docker Images
We utilise GitLab's CI pipeline to build and publish the images. First, configure the variables required for CI/CD: we need AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables created with the appropriate values.
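Instead of clicking through the UI, the variables can also be created with GitLab's project-level variables API (`POST /projects/:id/variables`, authenticated with a personal access token in the `PRIVATE-TOKEN` header). The project ID and token below are placeholders, and the requests are echoed so the sketch can be dry-run:

```shell
GITLAB_PROJECT_ID=12345   # assumption: your project's numeric ID
# Create one masked CI/CD variable per AWS credential.
CMDS=$(for key in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
  echo "curl --request POST --header 'PRIVATE-TOKEN: \$GITLAB_TOKEN'" \
       "'https://gitlab.com/api/v4/projects/$GITLAB_PROJECT_ID/variables'" \
       "--form 'key=$key' --form 'value=<secret>'"
done)
echo "$CMDS"
```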
Create a GitLab CI pipeline job to build and publish the application images; the source from our .gitlab-ci.yml is below.
With this pipeline we build the two images using their respective Dockerfiles and push them to the ECR registry in the same job. We tag the images with a version number as well as latest.
The buildimages job uses the official AWS CLI version 2 Docker image, with Docker installed into it to build the images and push them to AWS ECR. The job is added to the pipeline only when there are modifications to either of the Dockerfiles referenced in the rules:changes array. You can see the results of an example run below.
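The build-and-push steps inside that job boil down to something like the following sketch. The registry address, version, and repo layout are assumptions, and the docker commands are echoed so the script can be dry-run without Docker or credentials:

```shell
AWS_REGION="us-east-1"
ECR="123456789012.dkr.ecr.$AWS_REGION.amazonaws.com"  # placeholder account ID
VERSION="v2"

# The real job first logs in to ECR (network call, shown for reference):
#   aws ecr get-login-password --region "$AWS_REGION" \
#     | docker login --username AWS --password-stdin "$ECR"

# Build once per app, tag with both the version and latest, push both tags.
CMDS=$(for app in guestbook analyzer; do
  echo "docker build -t $ECR/$app:$VERSION -f $app/Dockerfile $app/"
  echo "docker tag $ECR/$app:$VERSION $ECR/$app:latest"
  echo "docker push $ECR/$app:$VERSION"
  echo "docker push $ECR/$app:latest"
done)
echo "$CMDS"
```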
We should also see these images published into our ECR registry.
Application Deployment in Portainer
We will use the aio.yaml manifest from the GitLab repository to deploy our application in Portainer. This manifest includes all the deployments and services we need. You can achieve the same result with the individual YAML files that are also available in the repo.
A snippet of the YAML file is shown below.
Deploy the application using Portainer's Git-based deployment method, and set it up to auto-update when changes are detected in the repo.
Auto-update using Polling/Webhook
Portainer has two ways to keep applications up to date:
- Polling: Portainer polls the repository for changes at a set interval and applies them. This is based on the commit hash of the branch.
- Webhook: Choose this option if you would like to control when to update; simply call the webhook provided by Portainer to have the application updated with any changes in the manifest.
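For the webhook option, the update is triggered with a single HTTP POST to the stack's webhook URL, which Portainer displays when you enable the webhook. The URL below is a placeholder, and the request is echoed rather than sent:

```shell
# Placeholder; copy the real webhook URL from the stack's details in Portainer.
WEBHOOK_URL="https://portainer.example.com/api/stacks/webhooks/<webhook-id>"
echo "curl -X POST $WEBHOOK_URL"
```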
We are using the polling mechanism for this post. For example, when we want to update the image for the redis-master deployment, we simply change the image tag in aio.yaml and commit the change to the repository.
Portainer detects the change and sends a request to the Kubernetes cluster, similar to a
kubectl apply -f <filename> command.
After we change the tag from 3.2.9 to 3.2.12 in the manifest file, Portainer polls the repository, detects the change, and triggers an auto-update.
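The change itself is a one-line edit. A local simulation of the tag bump, using an abbreviated stand-in for the redis-master section rather than the full aio.yaml:

```shell
# Minimal stand-in for the relevant part of the manifest.
cat > aio.yaml <<'EOF'
      containers:
      - name: master
        image: redis:3.2.9
EOF

# Bump the image tag; committing and pushing this edit is what
# Portainer's polling picks up.
sed -i 's|redis:3.2.9|redis:3.2.12|' aio.yaml
grep 'image:' aio.yaml
```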
Deployment after the update
Re-deploy / update application using GitLab CI
While the above method works perfectly when there are changes in the repository, it does not help when the manifest file itself doesn't change.
For example, if you use the latest tag for your images and a new image is built with that tag, the manifest stays the same. In this case you can update your application using GitLab CI.
Earlier we built and pushed our application images to ECR; now we will see how to update our application deployment when a new image is built. We will add a second stage to our GitLab CI pipeline, along with a new CI/CD variable for the Portainer API key. For instructions on how to create one, please visit our documentation page.
Second stage and step in the CI Pipeline
Fun fact: Portainer starts a pod running portainer/kubectl-shell when you click the button to open Portainer's built-in Kubernetes shell to run kubectl commands.
This step depends on the buildimages step and runs only when that stage is successful.
The deployapp job uses the portainer/kubectl-shell image, an Alpine-based image that contains kubectl, helm, and a few other niceties.
This job uses the Portainer API key stored in the PORTAINER_TOKEN variable to authenticate against the Portainer API and download a kubeconfig file for the specific Kubernetes cluster you want to deploy the application to.
This kubeconfig file points to Portainer's Kube Proxy, so no direct connection to your Kubernetes cluster is needed or used.
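In script form, the download step looks roughly like this. The Portainer address and environment ID are assumptions, the kubeconfig is assumed to be served from GET /api/kubernetes/config with the X-API-Key header, and the request is echoed so the sketch can run without a live Portainer:

```shell
PORTAINER_URL="https://portainer.example.com"  # assumption: your Portainer address
ENV_ID=1                                       # assumption: environment (endpoint) ID
# Fetch the kubeconfig for one environment and save it as portainer-ctx.
CMD="curl -sf -H \"X-API-Key: \$PORTAINER_TOKEN\" \"$PORTAINER_URL/api/kubernetes/config?ids=$ENV_ID\" -o portainer-ctx"
echo "$CMD"
```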
We then run kubectl commands to do a rolling restart of our deployments, which forces them to use the new images.
kubectl -n guestbook rollout restart deployment/guestbook-v2 --kubeconfig portainer-ctx
kubectl -n guestbook rollout restart deployment/analyzer --kubeconfig portainer-ctx
Result of a pipeline run from GitLab CI
Details of the second stage run
Portainer CD triggered by GitLab CI to fetch and redeploy images from AWS ECR might seem niche, but it showcases just a couple of the many ways you can tie these tools together; no one solution fits all.