GitOps: ArgoCD Application Management with the App of Apps Model

Written by Parveen Kumar Patidar.

ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. By tracking application definitions in a Git repository, ArgoCD enables version control, reproducibility, and automated deployment of your applications. Git acts as the single source of truth, ensuring that your clusters’ desired states are always represented accurately and consistently.

ArgoCD is a powerful tool for continuous delivery in Kubernetes environments. By integrating Helm, Helmfile, and Kubernetes manifests, you can create a robust and flexible system for managing your applications. This guide provides an end-to-end solution for setting up and managing ArgoCD applications, ensuring a streamlined and efficient deployment process.

Introduction

This blog is a comprehensive guide to the ArgoCD ‘app of apps’ model. It covers bootstrapping the ArgoCD application, repository management, and application creation in ArgoCD, supported by a demo. The demo reflects a basic production-ready application architecture by defining platform and workload applications separately. The architecture also supports multiple environments by using cluster names and corresponding values files. The ArgoCD applications support remote Helm charts, local Helm charts, and plain Kubernetes manifest files; the blog covers all three types of configuration.

The code is located here:  GitOps-in-a-box 

Here is documentation regarding ArgoCD Applications and ApplicationSets

NOTE: The blog does NOT cover Kubernetes cluster creation. The cluster should have at least one node ready before ArgoCD is deployed.

Below is the architecture diagram representing the Applications and ApplicationSets (app of apps model). In the demo, we categorise the platform apps, which mainly consist of applications managed by the platform team, followed by the workload apps, which consist of workload-related applications.

Let's dive into the code repo and start setting up ArgoCD, followed by the repositories.

Setup Environment 

To start with the environment setup, below is the toolset required:

Helmfile can also be run via Docker; follow the README for more information on running Helmfile in Docker - https://github.com/praveenkpatidar/GitOps-in-a-box/blob/main/README.md

Setup connection

As noted earlier, the Kubernetes cluster is supposed to be set up already. 

Setup the kubectl context - https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-context/

Export the system variable CLUSTER=<Name of Your Cluster>

Setting up ArgoCD

Go to the argocd/argocd folder.

ArgoCD Config Prep

  • Here is the helmfile.yaml to run - 

---
repositories:
  - name: argocd
    url: https://argoproj.github.io/argo-helm
releases:
  - name: argocd
    chart: "argocd/argo-cd"
    namespace: "argocd"
    version: "7.3.4"
    values:
      - ./values/common.yaml
      - ./values/{{ requiredEnv "CLUSTER" }}.yaml

  - name: argocd-repos
    namespace: argocd
    chart: ./{{ requiredEnv "CLUSTER" }}

  • As you can see, it needs the CLUSTER variable to be set before running.

  • Review the following files and folders as per your cluster:

    • Rename the folder mycluster to <Cluster Name>

    • Rename the file values/mycluster.yaml to values/<Cluster Name>.yaml

    • Update all the TODO values in values/<Cluster Name>.yaml

ArgoCD Repositories Prep

The helmfile includes a second chart, argocd-init.

ArgoCD maintains the repository configuration using Kubernetes Secrets. The file <Cluster Name>/argocd-init.yaml contains the repository and secret configuration that is used in the ApplicationSets later. The file consists of several repositories and credentials for those repos (assuming the repos are private). Some key highlights:

  • If the repos are public, no credentials are required, so the secrets can be managed as plain files in source control.

    Label required - argocd.argoproj.io/secret-type: repository

  • If the repos are private, one secret is required with the base organisation URL. You can review the file, which has the required configuration templates: two repositories and one credential.

  • For credentials, you don’t need to maintain them in each repo. Create one secret with the base repo URL. In general, this is the URL of the GitHub organisation or username; e.g. in the example code it’s https://github.com/praveenkpatidar

    Label required - argocd.argoproj.io/secret-type: repo-creds

  • The final configuration is the metadata of the “in-cluster” secret. You can add as much metadata here as required. All of it can be read in the ApplicationSets later to diversify the deployments.

  • For the demo, you can use the GitOps-in-a-box repo and the in-cluster secret configuration.
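
As an illustrative sketch of the two secret types described above (the names, username, and token here are placeholders, not the exact contents of the demo file), the repository and credential secrets look roughly like this:

```yaml
# Repository entry: registers a Git repo with ArgoCD (no credentials here)
apiVersion: v1
kind: Secret
metadata:
  name: gitops-in-a-box-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/praveenkpatidar/GitOps-in-a-box
---
# Credential template: matched by URL prefix and shared by all repos under it
apiVersion: v1
kind: Secret
metadata:
  name: github-org-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  url: https://github.com/praveenkpatidar
  username: my-github-user   # placeholder
  password: my-github-pat    # placeholder personal access token
```

Because the repo-creds secret matches by URL prefix, every repository under the organisation URL picks up the same credentials automatically.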

Running ArgoCD Helmfile

You are now all set to run the helmfile.yaml using 

export CLUSTER=<Cluster Name>; helmfile apply

Access ArgoCD UI 

Access ArgoCD UI by using port forwarding - 

kubectl port-forward -n argocd svc/argocd-server 8080:80

Access URL: http://localhost:8080/ 

Username: admin Password: AdminPwd

Setup App of Apps 

Here is general documentation about App of Apps - https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#app-of-apps

App of Apps Config 

Go to the init folder.

Here we set up two main applications: one for the platform, which generally represents the applications required and managed by the platform team, and another for workloads, which holds workload-specific applications. This only needs to be set up once and should not need regular updates.

  • Here is helmfile.yaml

releases:
  - name: init-argocd-platform
    namespace: argocd
    chart: ./values/platform

  - name: init-argocd-workload
    namespace: argocd
    chart: ./values/workload/{{ requiredEnv "CLUSTER" }}

The example keeps the platform application in the /values/platform folder, whereas the workload applications, which can vary from cluster to cluster, live in the /values/workload/<Cluster Name> folder. Here is sample code for an application. It simply points to the /platform or /workload folder in the same repo (GitOps-in-a-box):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-main
  namespace: argocd
spec:
  project: platform
  source:
    repoURL: https://github.com/praveenkpatidar/GitOps-in-a-box
    path: argocd/platform
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Running the App of Apps Helmfile 

Review the repos and you are now all set to run the helmfile.yaml using 

export CLUSTER=<Cluster Name>; helmfile apply

Access the UI again and you will see the Applications deployed as per the architecture diagram.


Platform Repo (Using Remote Helm)

The platform ApplicationSet is defined in /platform/appset.yaml. The ApplicationSet uses generators to generate multiple applications using templates. 

The ApplicationSet uses Remote Helm Charts, along with local values files. It also uses a list option to manage multiple charts deployed in the same value file architecture. 

The local values are referred to using the same repo, with the values folder. 
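
A minimal sketch of such an ApplicationSet follows. The chart entry, version, and value-file paths are illustrative assumptions, not the exact contents of /platform/appset.yaml; it uses ArgoCD's multi-source feature (`sources` with a `$values` ref) to combine a remote Helm chart with values files from the Git repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform-apps
  namespace: argocd
spec:
  generators:
    # list generator: one element per chart, all sharing one template
    - list:
        elements:
          - appName: metrics-server   # illustrative entry
            repoURL: https://kubernetes-sigs.github.io/metrics-server/
            chart: metrics-server
            version: 3.12.1
  template:
    metadata:
      name: '{{appName}}'
    spec:
      project: platform
      sources:
        # remote Helm chart
        - repoURL: '{{repoURL}}'
          chart: '{{chart}}'
          targetRevision: '{{version}}'
          helm:
            valueFiles:
              - $values/argocd/platform/values/{{appName}}.yaml
        # local values files pulled from the same Git repo
        - repoURL: https://github.com/praveenkpatidar/GitOps-in-a-box
          targetRevision: HEAD
          ref: values
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{appName}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

Adding another chart is then just another element in the generator list; the template stays untouched.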

Here goes the UI for the ApplicationSet

Workload Repo 

The workload repo has two kinds of ApplicationSets: the first uses local Helm charts, the other local Kubernetes manifests. The ApplicationSets are located in the /workload/appset-*.yaml files.

Local Helm charts need a directory containing the Helm chart specs, e.g. Chart.yaml, values.yaml, etc. Here is a link to get started with local Helm charts - https://opensource.com/article/20/5/helm-charts. /workload/appset-helm.yaml contains the configuration for this. We have helm-game and helm-guestbook applications using local Helm charts. However, we keep the values files in the same layout as for remote Helm, under /workload/values.

Local Kubernetes manifests are simple folder structures containing plain YAML configuration to deploy in Kubernetes. They are not dynamic, which means all environments will have the same configuration. The ApplicationSet is defined in /workload/appset-local.yaml. The application folders are /game and /guestbook.
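
A sketch of the local-manifest ApplicationSet could look like the following. The list elements and paths are assumptions based on the folder names above, not a copy of /workload/appset-local.yaml:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: workload-local
  namespace: argocd
spec:
  generators:
    # one element per plain-manifest application folder
    - list:
        elements:
          - appName: game
          - appName: guestbook
  template:
    metadata:
      name: '{{appName}}'
    spec:
      project: workload
      source:
        repoURL: https://github.com/praveenkpatidar/GitOps-in-a-box
        # ArgoCD applies every YAML file found under this path
        path: argocd/workload/{{appName}}
        targetRevision: HEAD
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{appName}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

Because no Helm templating is involved, whatever is committed under the folder is exactly what every environment gets.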

Metadata Info

You can see that the ApplicationSets have templates using variables defined in the generators/list elements. However, the values files also use variables referring to metadata, such as {{metadata.labels.aws-cluster-name}}.

These values are derived from the in-cluster secret created as part of the /argocd deployment. The secret metadata can be accessed here directly. As you can see, the in-cluster secret has a set of labels, annotations, and the data itself. The data can be accessed directly as {{ server }} and {{ name }}; however, the annotations and labels need to be accessed via metadata, e.g.:

{{metadata.labels.aws-cluster-name}}

{{metadata.annotations.environment}}

This way you can manage much more information about the cluster, e.g. specific repos to be used for ApplicationSets.
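
As a sketch of how these fields are consumed, the snippet below uses ArgoCD's cluster generator, which iterates over the cluster secrets and exposes their data, labels, and annotations to the template. The label/annotation names follow the examples above, and the path scheme is a hypothetical illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: metadata-demo
  namespace: argocd
spec:
  generators:
    # cluster generator: exposes secret data as {{name}}/{{server}},
    # plus labels and annotations via {{metadata.*}}
    - clusters: {}
  template:
    metadata:
      name: 'guestbook-{{name}}'
      labels:
        environment: '{{metadata.annotations.environment}}'
    spec:
      project: workload
      source:
        repoURL: https://github.com/praveenkpatidar/GitOps-in-a-box
        # illustrative: select a folder per cluster via the label value
        path: argocd/workload/values/{{metadata.labels.aws-cluster-name}}
        targetRevision: HEAD
      destination:
        server: '{{server}}'
        namespace: guestbook
```

Any extra label or annotation you add to the in-cluster secret becomes available to every ApplicationSet in the same way, without touching the templates.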


Conclusion

That's the end of the blog. By following this setup, you can achieve a comprehensive GitOps system with ArgoCD that supports various application types and deployment strategies. Here’s a summary of the key components:

  • Bootstrap: Install ArgoCD using Helmfile and create base repositories using secrets.

  • Init: Create an ArgoCD Application that manages platform and workload-specific ApplicationSets.

  • Platform: Use remote Helm charts with local values files for platform-specific applications.

  • Workload: Deploy local manifests and Helm charts for workload-specific applications.

  • Metadata: Use ArgoCD cluster secret to define the metadata about the cluster that can be used in ApplicationSets. 

This setup provides a flexible, maintainable, and scalable approach to managing applications in Kubernetes using ArgoCD. It allows you to leverage the strengths of Helm and Helmfile while maintaining the declarative nature of GitOps.

08/01/2024