
Cloud Native Infrastructure 2.0

A comprehensive demonstration of modern DevOps practices, featuring Kubernetes orchestration, Service Mesh traffic management, and GitOps automation. Experience a complete pipeline with isolated Staging and Production environments, fully automated via GitHub Actions CI/CD.

Environment: FRONTEND ONLY

Backend Status: Clone the master branch and deploy the application via Terraform.

AWS
Amazon EKS
Terraform
Kubernetes
ArgoCD
Docker
Istio
Kiali
GitHub
GitHub Actions
Next.js
Node.js
Prometheus
Bash
TypeScript

Infrastructure as Code

Two highly available EKS clusters are provisioned using Terraform. The infrastructure is modular, utilizing the terraform-aws-modules/eks/aws blueprints. Each environment is completely isolated and defined in infra/cluster-a (Staging) and infra/cluster-b (Production).

Staging Cluster

infra/cluster-a

Cluster A

Host for our ArgoCD Control Plane. This cluster manages both itself and the production cluster, and it runs pinned addon versions to ensure compatibility (a manual Helm install is sketched after the list).

  • EKS Version: 1.34
  • Istio: v1.28.0 (Service Mesh)
  • Kiali: v1.89.0 (Observability)
  • Prometheus: v25.8.0 (Metrics)
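
If you ever need to install or upgrade these addons by hand rather than through Terraform, a minimal Helm sketch follows. The chart repositories and chart names are the standard upstream ones, assumed here rather than taken from this repo; the versions match the pins above.

Install Addons (sketch)
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo add kiali https://kiali.org/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Istio control plane, pinned to the version listed above
helm install istio-base istio/base -n istio-system --create-namespace --version 1.28.0
helm install istiod istio/istiod -n istio-system --version 1.28.0 --wait
# Kiali and Prometheus for mesh observability
helm install kiali-server kiali/kiali-server -n istio-system --version 1.89.0
helm install prometheus prometheus-community/prometheus -n istio-system --version 25.8.0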

Provisioning Commands

Deploy Staging
cd infra/cluster-a
terraform init
terraform apply --auto-approve
Configure Kubeconfig
aws eks update-kubeconfig --region us-west-2 --name cluster-a-staging --alias staging

Terraform Output (screenshot)

Production Cluster

infra/cluster-b

Cluster B

The Production environment is strictly for workload execution. It runs no CI/CD tooling, maximizing resources for the application, and is managed remotely by ArgoCD from Cluster A via a Service Account.

  • Region: us-west-2
  • High Availability: Multi-AZ deployment
  • Ingress: AWS Network Load Balancer (NLB); see the verification sketch after this list
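
Once the apply finishes, you can confirm the NLB exists by inspecting the Istio ingress gateway service. The service name istio-ingressgateway is the Istio default and an assumption here, not something this page confirms.

Verify NLB
kubectl --context production -n istio-system get svc istio-ingressgateway -o wide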

Provisioning Commands

Deploy Production
cd infra/cluster-b
terraform init
terraform apply --auto-approve
Configure Kubeconfig
aws eks update-kubeconfig --region us-west-2 --name cluster-b-prod --alias production

Terraform Output (screenshot)

Post-Provisioning: Get URLs

A helper script, get_cluster_outputs.sh, automatically retrieves the Load Balancer URLs for both the Staging and Production services; its core logic is sketched after the commands below.

Retrieve Outputs
chmod +x get_cluster_outputs.sh
./get_cluster_outputs.sh
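
The script lives in the repo; the following is only a rough sketch of what it does. The context names (staging, production) come from the kubeconfig steps above, while the istio-ingressgateway service name is an assumption.

get_cluster_outputs.sh (sketch)
#!/usr/bin/env bash
set -euo pipefail
# Print the ingress hostname for each cluster context
for ctx in staging production; do
  host=$(kubectl --context "$ctx" -n istio-system get svc istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  echo "$ctx: http://$host"
done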

Script Output (screenshot)

Verify Cluster Access

After provisioning, verify that you can connect to both contexts properly. This is crucial for the next steps involving ArgoCD configuration.

Check Nodes
kubectl config use-context staging
kubectl get nodes

kubectl config use-context production
kubectl get nodes

CI/CD & GitOps Pipeline

A fully automated workflow from code commit to production deployment.

1. Continuous Integration (GitHub Actions)

Build, Tag, and Push

GitHub Actions builds optimized Docker images for the frontend and backend, pushes them to GHCR, and updates the Kubernetes manifests so ArgoCD can sync the new tag (see the sketch below the screenshots).

GitHub Actions Workflow (screenshot)

Manifest Update (screenshot)
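
Condensed into shell form, the CI steps amount to roughly the following. The OWNER placeholder, image names, and the k8s/staging manifest path are illustrative assumptions, not paths confirmed by this repo.

CI Steps (sketch)
# Build and push an image tagged with the short commit SHA
TAG=${GITHUB_SHA::7}
docker build -t ghcr.io/OWNER/frontend:$TAG ./frontend
docker push ghcr.io/OWNER/frontend:$TAG
# Point the staging manifest at the new tag and commit, so ArgoCD syncs it
sed -i "s|image: ghcr.io/OWNER/frontend:.*|image: ghcr.io/OWNER/frontend:$TAG|" k8s/staging/frontend.yaml
git commit -am "ci: bump frontend image to $TAG"
git push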

2. GitOps Architecture

End-to-End Flow (Branch → Staging → Main → Prod)

A strict GitOps workflow is enforced. Changes to the infrastructure or application are made via Git commits.

  • Feature branches: a push triggers a build and a deploy to Staging (Cluster A).
  • Main branch: merging a PR triggers a deploy to Production (Cluster B).

The Flow

1. git push feature-branch
2. GitHub Actions Build → GHCR
3. Manifest Update (Staging)
4. ArgoCD Sync → Staging Cluster
5. Merge PR → Production Sync

Staging Application (screenshot: Staging frontend)

Production Application (screenshot: Production frontend)

3. Continuous Deployment (ArgoCD)

Declarative GitOps Sync

Installation & Setup

Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Login & Add Cluster
argocd login localhost:8081 --username admin --insecure
argocd cluster add production
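
The login above assumes the ArgoCD API server is already reachable on localhost:8081. If it is not, expose it with a port-forward and fetch the initial admin password; the service and secret names below are ArgoCD's defaults.

Expose Server & Get Password
kubectl -n argocd port-forward svc/argocd-server 8081:443 &
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d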

Adding Production Cluster (screenshot)

Application Details

Staging Application (screenshots: app detail, sync status)

Production Application (screenshots: app detail, sync status)

Global Dashboard (screenshot)
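
The applications shown above can also be created from the CLI. A hypothetical sketch for the staging app follows; the repository URL, manifest path, and application name are placeholders, not values taken from this repo.

Create Application (sketch)
argocd app create frontend-staging \
  --repo https://github.com/OWNER/REPO.git \
  --path k8s/staging \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace staging \
  --sync-policy automated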

Observability & Scaling

Ensuring reliability with Horizontal Pod Autoscaling (HPA) for load management and Kiali for deep visibility into the Istio Service Mesh.

Horizontal Pod Autoscaling

Automatic scaling based on CPU utilization (Target: 50%)

Staging Environment

Min: 2 | Max: 10
Check Staging HPA
kubectl get hpa -n staging --watch

Production Environment

Min: 2 | Max: 10
Check Production HPA
kubectl get hpa -n production --watch
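
To watch either HPA react, generate some synthetic load with a throwaway busybox pod. The frontend service name and port 3000 match the port-forward command in the debugging section below, but treat the URL as an assumption.

Generate Load
kubectl -n staging run load-generator --rm -it --image=busybox:1.36 --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://frontend:3000 > /dev/null; done"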

Service Mesh Visualization (Kiali)

Deep dive into traffic topology and mesh health.

Kiali provides a powerful dashboard to visualize the Istio Service Mesh. It connects to Prometheus to gather metrics and topology data. Below are the different views available in the Kiali dashboard for both Staging and Production.

Access the Dashboard
Port Forward Kiali
kubectl -n istio-system port-forward svc/kiali 20001:20001

The following views were captured for both environments (screenshots):

  • Traffic Graph: Staging & Production
  • Applications: Staging & Production
  • Mesh: Staging & Production
  • Overview: Staging & Production
  • Services: Staging

Advanced Debugging

Comprehensive command reference for troubleshooting clusters, networking, and application state.

Cluster Health

Check node status
kubectl get nodes -o wide

List everything across namespaces
kubectl get all -A

Recent events, newest last
kubectl get events --sort-by='.lastTimestamp' -A

Networking & Services

Check endpoints
kubectl get endpoints -n staging
Check the Istio ingress
kubectl get ingress -n istio-system

Port-forward the frontend
kubectl port-forward svc/frontend 3000:3000 -n staging

Scaling & Metrics

Describe the HPA
kubectl describe hpa -n staging

Metrics-server logs
kubectl -n kube-system logs -l k8s-app=metrics-server

Live pod resource usage
kubectl top pods -n staging

Application Logs

Follow frontend logs
kubectl logs -l app=frontend -n staging -f

Follow backend logs
kubectl logs -l app=backend -n staging -f

Backend logs from the previous (crashed) container
kubectl logs -l app=backend -n staging --previous