One of the trickier issues we face when setting up or managing multiple clusters or environments is handling credentials and their associated roles. Typically, we store those credentials in a CI/CD system or in the code repository itself. This method can be problematic if the service gets compromised, as happened to CodeShip last year.
Even services such as GitLab CI and GitHub Actions require that the credentials for accessing your cluster be stored with them. If you’re employing GitOps to manage your infrastructure configuration through the usual push to repo -> review code -> merge code sequence as well, a compromise would also mean access to your whole infrastructure.
It can also be difficult to keep track of how the different deployed environments are drifting from the configuration files stored in the repo, since these external services are not specific to Kubernetes and thus aren’t aware of the status of all the deployed pieces.
How can we mitigate this? There are tools to help us with these issues. Two of the best known are Argo CD and Flux. They allow credentials to be stored within your Kubernetes cluster, where you have more control over their security. They also offer pull-based deployment with drift detection. Both tools solve the same issues, but tackle them from different angles.
Of the two, we’ll take a deeper look at Argo CD here.
What is Argo CD?
As we have seen previously, Argo CD is a tool that reads your environment configuration (written as a Helm chart, Kustomize files, Jsonnet, or plain YAML files) from your Git repository and applies it to your Kubernetes namespaces. Among its core features are declarative, version-controlled application deployments.
Argo CD automates the deployment of the desired application state to the specified target environments. Application deployments can track updates to branches or tags, or be pinned to a specific version of the manifests at a given Git commit.
Prerequisites
- Azure Kubernetes Service up and running. If you don’t have one, you can follow the Terraform steps at https://foxutech.com/how-to-create-azure-kubernetes-service-using-terraform/ to create it.
- Kubectl installed on the VM or machine from which you will manage the AKS cluster.
- A kubeconfig file (default location is ~/.kube/config).
- Argo CD set up. If it isn’t yet, you can refer to https://foxutech.com/setup-argocd-on-azure-kubernetes-services/ to set it up.
- Helm installed.
About Helm
Helm is a package manager and templating engine for Kubernetes. It allows us to define values separately from the structure of the YAML files, which helps with access control and with managing multiple environments using the same template.
You can grab Helm here: https://github.com/helm/helm/releases
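As a quick illustration of that separation, with the demo-app chart used later in this article, a value such as `replicaCount` lives in values.yaml, while the templates reference it as `{{ .Values.replicaCount }}`. You can render the substituted result locally without deploying anything:
# helm template -f "./helm/demo-app/values.yaml" "./helm/demo-app"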
Deploy an application from GitHub
As mentioned, this article assumes the prerequisites have been completed. With that, let’s log in to the Argo CD UI. Use `admin` as the username, and the password retrieved by the command below (or your own password, if you changed it earlier).
# kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
Once you’re logged in, connect your repo from Repositories inside the Settings menu on the left side. Here, we can choose between SSH and HTTPS authentication; for this article, let’s use HTTPS, but for SSH you’d only need to set up a key pair.
As I am using HTTPS with GitHub, I’ll authenticate with my GitHub username and an access token, since GitHub only allows logging in over HTTPS via access token. To generate one, log in to GitHub >> profile Settings >> Developer settings >> Personal access tokens >> Generate new token. You can then use this token in Argo CD to connect to the repository.
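If you prefer the command line over the UI, the argocd CLI can make the same connection; a minimal sketch, with placeholder server address, username, and token:
# argocd login <argocd-server-address>
# argocd repo add https://github.com/foxutech/kubernetes.git --username <github-username> --password <personal-access-token>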
Once the repository is connected successfully, you can set up an application, which will keep the state of your deployment in sync with the state described in the GitHub repo.
You can find our sample app at kubernetes/argocd/node-app on the main branch of foxutech/kubernetes (github.com); feel free to fork and use it.
Set up an application
Now we are ready to create the application: use New App to create one. There, you need to choose a branch or a tag for Argo CD to monitor. In my case, I’m choosing the master branch for now; make sure it contains your latest stable changes. Setting the sync policy to automatic allows for automatic deployments whenever the Git repo is updated, and also provides automatic pruning and self-healing capabilities, if needed.
Be sure to set the destination cluster to the one available in the dropdown, and choose your preferred namespace. If everything is set correctly, Argo CD should now start syncing the deployment state.
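For reference, the same application can also be created from the argocd CLI instead of the UI; a sketch assuming the sample repo from above and an argoapptest destination namespace (adjust the names and path to your setup):
# argocd app create argoapptest-app \
    --repo https://github.com/foxutech/kubernetes.git \
    --path argocd/node-app \
    --revision master \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace argoapptest \
    --sync-policy automated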
Features of Argo CD
From the application view, you can now see the different parts that comprise our demo application.
Clicking on any of these parts lets you check the diff between the deployed config and the one checked into Git, as well as view the YAML files themselves. The diff should be empty for now, but we’ll see it in action once we make some changes, or if you disable automatic syncing.
Along with this, you can check the logs and events of the respective resources. Access to the pod logs can be quite useful; note that logs are not retained between different pod instances, so they are lost once a pod is deleted.
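Outside the UI, you can tail the same pod logs with kubectl (assuming the argoapptest namespace and deployment name used later in this article):
# kubectl -n argoapptest logs deploy/argoapptest-app --follow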
You can also handle rollbacks from the UI by simply clicking the “History and Rollback” button. Here, you can see all the different versions that have been deployed to our cluster, by commit. You can re-deploy any of them by opening the menu on the top right and selecting “Redeploy”. This feature requires automatic deployment to be turned off, but you’ll be prompted to do so here.
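The same history and rollback operations are available from the argocd CLI as well; a quick sketch, where the revision ID comes from the history list:
# argocd app history argoapptest-app
# argocd app rollback argoapptest-app <revision-id>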
These should cover the most important parts of the UI and what is available in Argo CD. Next up, we’ll take a look at how the deployment update happens when code changes on GitHub.
Let’s check the update sync
With the setup done, any configuration changes you push to the master branch should be reflected in the deployment shortly after. A very simple way to check out the update process is to bump the `replicaCount` in values.yaml to 2 (or more) and run the helm command again to regenerate resources.yaml. Then commit and push to master, and monitor the update process in the Argo CD UI.
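A minimal sketch of that round trip, assuming the chart layout from this article and a current `replicaCount` of 1 (you can of course edit values.yaml by hand instead of with sed):
# sed -i 's/replicaCount: 1/replicaCount: 2/' ./helm/demo-app/values.yaml
# helm template -f "./helm/demo-app/values.yaml" "./helm/demo-app" > "./helm/demo-app/resources/resources.yaml"
# git add -A && git commit -m "scale demo app to 2 replicas" && git push origin master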
You should see a new event in the argoapptest-app events, with the reason `ScalingReplicaSet`. You can double-check the result using kubectl, where you should now see two instances of the argoapptest-app running:
# kubectl get pod -n argoapptest
Next, create a new branch in the repo and name it whatever you wish; I’ll name mine argov2test. It holds another version of the previous app that you can deploy, so you can see more of the update process and the diffs. It works quite similarly to the previous deployment.
From the existing app folder, make some changes to the Dockerfile, then build and push the Docker image. Make sure to give it a different version tag.
# docker build -t motoskia/argo-app-1:v2 .
# docker push motoskia/argo-app-1:v2
You can set deployments to manual for this step, so you can take a better look at the diff before the actual update happens. You can do this from the sync settings under `App details`.
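If you’d rather do this from the CLI, the sync policy can be switched to manual there too (a sketch):
# argocd app set argoapptest-app --sync-policy none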
Generate the updated resources file afterwards, then commit and push it to git to trigger the update in Argo CD:
# helm template -f "./helm/demo-app/values.yaml" "./helm/demo-app" > "./helm/demo-app/resources/resources.yaml"
This should result in a diff appearing under `App details` -> `Diff` for you to check out. You can either deploy it manually or just turn auto-deploy back on.
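The diff and the manual deployment are also available from the CLI; a quick sketch:
# argocd app diff argoapptest-app
# argocd app sync argoapptest-app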
Argo CD safeguards you against resource changes that drift from the latest source-controlled version of your code. Let’s try to manually scale the deployment up to 3 instances:
Get the name of the replica set:
# kubectl get rs -n argoapptest
Scale it to 3 instances (substituting the replica set name you found above, since replica sets created by a deployment carry a generated suffix):
# kubectl -n argoapptest scale --replicas=3 rs/<replica-set-name>
If you are quick enough, you can catch the change on the Argo CD application visualization as it tries to add those instances. However, Argo CD will revert this change, because it drifts from the source-controlled version of the deployment: it scales the deployment back down to the value defined in the latest commit (in my example, the 2 replicas we committed earlier).
The downscale event can be found under the “argoapptest-app” deployment events, like below.
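You can also list the recent events from kubectl to confirm the automatic downscale (a sketch, assuming the argoapptest namespace):
# kubectl get events -n argoapptest --sort-by=.lastTimestamp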
That’s it for now; I hope this post helps you understand Argo CD. In upcoming posts, we’ll look at more app deployments with different tools, along with some integrations with Argo CD. Stay tuned.