Building a K8s Lab: Juggling Ports, Ingress and Observability Using Kite and K9s

Published on 2026-04-26

TL;DR

  • Built a local Kubernetes lab on kind (k8s v1.35.0) on Apple Silicon Mac with nginx as the single ingress entry point
  • No kubectl port-forward, no LaunchAgents: extraPortMappings and Docker Desktop autostart handle everything
  • Every service gets an Ingress resource, i.e. one entry point, any number of services behind it
  • Kite dashboard deployed with in-cluster RBAC and SQLite backed by a PersistentVolumeClaim for state persistence
  • Project is hosted at https://github.com/Jenish-1235/lab, making it extremely easy to recreate using just recipes

The Problem

Every local Kubernetes guide I found fell into one of two traps. Either it used minikube with no real ingress, assuming you'd kubectl port-forward everything and call it a day, or it assumed you'd never sleep your laptop and never need to recover the cluster from scratch.

I wanted something different: a local lab that mirrors how production actually works, with a single ingress entry point, GitOps-style config, proper namespacing, and full recoverability from git.


Architecture Overview

flowchart TD
    Browser[Browser]
    Hosts["/etc/hosts 
           kite.local → 127.0.0.1
           "]
    Docker[Docker Desktop 
           Autostart on login]
    Kind[kind cluster: lab 
    k8s v1.35.0]
    PortMap[extraPortMappings 
           hostPort 80 / 443]
    Nginx[nginx ingress controller 
    namespace: ingress-nginx]
    KiteSvc[kite service 
    ClusterIP :80]
    KitePod[kite pod 
    port 8080]
    SQLite[(SQLite 
    PersistentVolumeClaim 1Gi)]

    Browser --> Hosts
    Hosts --> Docker
    Docker --> Kind
    Kind --> PortMap
    PortMap --> Nginx
    Nginx -->|host: kite.local| KiteSvc
    KiteSvc --> KitePod
    KitePod --> SQLite

Repository Structure

Each unit in the lab is self-contained: it owns its namespace, deployment, service, ingress, configmap, secrets, and any CRDs it needs. Applying a unit is always one command: kubectl apply -f <unit>/

plain text
lab/
├── justfile
├── kind/
│   └── kind-config.yaml
├── ingress/
│   └── nginx-ingress.yaml
└── kite/
    ├── namespace.yaml
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── rbac.yaml
    └── pvc.yaml

Technical Deep Dive

kind: Kubernetes in Docker

kind runs a full Kubernetes cluster inside Docker containers. Each node is a Docker container. For a single-node lab, one container acts as both control plane and worker.

The critical config is extraPortMappings: it binds ports on the host Mac directly to the kind node container at the Docker level, not via port-forwarding:

yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443

Since Docker Desktop is configured to start on login, this binding is always live: no manual intervention after sleep, wake, or reboot.


nginx Ingress Controller

The ingress controller is the single entry point for all HTTP traffic into the cluster. Every service gets an Ingress resource; the controller reads these and dynamically updates its routing table.

flowchart LR
    Nginx[nginx ingress controller]
    Kite[kite.local → kite:80]
    Grafana[grafana.local → grafana:3000]
    Argo[argocd.local → argocd-server:80]

    Nginx --> Kite
    Nginx --> Grafana
    Nginx --> Argo
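
A minimal sketch of one such Ingress, using the kite names from the diagrams (the real manifest lives in kite/ingress.yaml; treat the exact field values here as assumptions):

yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kite
  namespace: kite
spec:
  ingressClassName: nginx    # claimed by the nginx controller
  rules:
  - host: kite.local         # matches the /etc/hosts entry
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kite       # ClusterIP service in front of the pod
            port:
              number: 80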

Key components and why each exists:

Namespace ingress-nginx:

  • isolates controller resources from application workloads.

ServiceAccount and RBAC:

  • Two service accounts: one for the controller, one for the admission webhook.
  • The controller needs cluster-wide watch permissions so it can see Ingress resources across all namespaces. Without this, the controller in ingress-nginx would be blind to Ingress resources created in kite, grafana, or anywhere else.

ConfigMap ingress-nginx-controller:

  • global nginx tuning, empty by default. Proxy timeouts, body size limits, and real IP headers are all configurable here without redeploying the controller (see the sketch after this list).

Deployment:

  • the nginx controller pod. Uses hostPort: 80/443 to bind directly to the kind node, which maps to the Mac via extraPortMappings.

IngressClass nginx:

  • tells Kubernetes which controller handles ingressClassName: nginx. Allows running multiple ingress controllers in the same cluster and routing different Ingresses to different controllers.

ValidatingWebhookConfiguration:

  • intercepts every Ingress CREATE and UPDATE. If the nginx config is invalid, the kubectl apply is rejected immediately instead of silently breaking routing.
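
For example, a hedged sketch of what tuning that ConfigMap could look like. The keys are real ingress-nginx options; the values are illustrative, not taken from the repo:

yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-body-size: "16m"         # raise the default 1m request body limit
  proxy-read-timeout: "120"      # seconds nginx waits on a slow backend
  use-forwarded-headers: "true"  # trust X-Forwarded-* headers on the way in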

Traffic Flow End to End

sequenceDiagram
    participant Browser
    participant Hosts as /etc/hosts
    participant Mac as Mac :80
    participant Kind as kind node container
    participant Nginx as nginx pod
    participant Svc as kite service
    participant Pod as kite pod

    Browser->>Hosts: resolve kite.local
    Hosts-->>Browser: 127.0.0.1
    Browser->>Mac: GET http://kite.local
    Mac->>Kind: extraPortMappings :80
    Kind->>Nginx: hostPort binding :80
    Nginx->>Svc: match host kite.local → kite:80
    Svc->>Pod: ClusterIP routing → :8080
    Pod-->>Browser: response
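
A quick way to verify this path end to end is to bypass the /etc/hosts lookup entirely and supply the Host header yourself (assuming the cluster and ingress are up):

bash
# hit the host port directly and let nginx route on the Host header
curl -H "Host: kite.local" http://127.0.0.1/
# same route the browser takes, minus the DNS step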

Kite Kubernetes Dashboard

Kite is a modern, lightweight Kubernetes dashboard with multi-cluster support, RBAC governance, and an AI assistant. It runs as a standard Kubernetes workload inside the cluster, not as an external tool on the host machine.

Since kite runs inside the cluster, it uses in-cluster ServiceAccount credentials to talk to the Kubernetes API, not the local ~/.kube/config.
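
Those credentials are the standard ServiceAccount token that Kubernetes mounts into every pod at a well-known path. One way to see it, assuming the deployment is named kite:

bash
kubectl exec -n kite deploy/kite -- ls /var/run/secrets/kubernetes.io/serviceaccount
# ca.crt  namespace  token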

RBAC

flowchart LR
    SA[ServiceAccount: kite namespace: kite]
    CRB[ClusterRoleBinding: kite-admin]
    CR[ClusterRole: cluster-admin]

    SA --> CRB --> CR

The kite pod is bound to cluster-admin via a ClusterRoleBinding. Full cluster access is appropriate for a local lab and can be scoped down in production.
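
A minimal sketch of that wiring, using the names from the diagram (the real manifest is kite/rbac.yaml):

yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kite
  namespace: kite
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kite-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin      # built-in role; scope down outside the lab
subjects:
- kind: ServiceAccount
  name: kite
  namespace: kite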

Persistence

Kite uses SQLite for its database. A PersistentVolumeClaim of 1Gi is mounted at /data. Without this, every pod restart wipes the superadmin account and all configuration.

flowchart LR
    Pod[kite pod]
    PVC[PersistentVolumeClaim 1Gi]
    SQLite[(db.sqlite
           /data/db.sqlite)]

    Pod -->|volumeMount /data| PVC
    PVC --> SQLite
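
A sketch of the claim, with the name taken from the break-it exercises below (kite-data); kind's default local-path StorageClass satisfies it:

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kite-data
  namespace: kite
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

The deployment then references this claim as a volume and mounts it at /data, where SQLite writes db.sqlite.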

Justfile for Cluster Lifecycle

All cluster operations are just recipes. The cluster is ephemeral; git is the source of truth.

plain text
just cluster-create   # spin up kind cluster
just apply-all        # apply ingress + all services
just apply-kite       # apply kite only
just cluster-delete   # tear down
just hosts            # add /etc/hosts entries
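
A hedged sketch of what those recipes reduce to; the real justfile is in the repo, and the exact flags here are assumptions:

plain text
cluster-create:
    kind create cluster --name lab --config kind/kind-config.yaml

apply-all:
    kubectl apply -f ingress/
    kubectl apply -f kite/

apply-kite:
    kubectl apply -f kite/

cluster-delete:
    kind delete cluster --name lab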

The Mistake Log

Mistake 1: emptyDir for kite storage

Assumed the SQLite database would survive pod restarts. It doesn't. emptyDir is tied to the pod lifecycle: when the pod dies, the volume dies with it. Every restart wiped the superadmin account and asked for re-registration.

The fix was replacing emptyDir with a PersistentVolumeClaim. From now on, always ask "what is the lifecycle of this storage?" before choosing a volume type. emptyDir is for scratch space and caches, not state.

Mistake 2: Deploying kite without a ServiceAccount

Kite came up healthy and looked connected, but showed no cluster resources. The dashboard was running, yet it couldn't read anything from the cluster API and showed no data.

The assumption was that kite would auto-discover the cluster since it was running inside it. It does, but only if it has permissions. A pod running inside a cluster doesn't automatically get API access. It needs a ServiceAccount with the right RBAC bindings. Without it, every API call returns 403.

Mistake 3: Stale kubectl context after cluster recreation

After running just cluster-delete && just cluster-create, kubectl commands were hitting a stale context. The cluster was up but commands were failing with connection refused.

kind sets the context automatically on cluster-create, but only if the cluster name matches. Always verify with kubectl config current-context after recreation.
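
Assuming the cluster is named lab, kind names the context kind-lab, so the check and fix look like:

bash
kubectl config current-context
# expect: kind-lab (kind prefixes cluster names with "kind-")
kubectl config use-context kind-lab   # repoint if stale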


AI × Engineer

Every decision where AI suggested one approach and I pushed back with a better one. These were the most valuable learning moments: the places where production intuition overrides textbook suggestions.

Decision 1: Port Forwarding vs Host Port Mappings + Ingress

What AI suggested

After deploying kite, use kubectl port-forward to expose it locally, then wrap it in a macOS LaunchAgent to keep it alive across sleep and reboots. One LaunchAgent per service.

bash
kubectl port-forward svc/kite -n kube-system 8080:80

Why I said no

Port-forward is a debugging tool, not infrastructure. One LaunchAgent per service doesn't scale: every new deployment needs its own agent, its own port, its own management overhead. Routing logic living outside the cluster on the host machine defeats the purpose of building cluster-native infrastructure.

What we did instead

extraPortMappings in the kind config binds host ports 80 and 443 at the Docker level. The nginx ingress controller handles all routing internally. Every new service just gets an Ingress resource: no host changes, no agents, no port conflicts.

flowchart TB
    subgraph AI[AI Suggestion: one agent per service]
        A1[kite port-forward + LaunchAgent :8080]
        A2[grafana port-forward + LaunchAgent :3000]
        A3[argocd port-forward + LaunchAgent :8081]
    end

    subgraph Eng[My Solution: one ingress, many services]
        E1[nginx ingress single entry point :80]
        E2[kite.local]
        E3[grafana.local]
        E4[argocd.local]
        E1 --> E2
        E1 --> E3
        E1 --> E4
    end

What this teaches

Think in cluster-native primitives. The ingress controller pattern is exactly how production works: one load balancer, many services, routing by hostname. Building the local lab this way means the mental model transfers directly to EKS.


Decision 2: nginx vs Envoy for the Ingress Controller

What AI suggested

Given production EKS goals and my interest in service meshes, lean toward Envoy via Gateway API from the start. Envoy underpins Istio, AWS App Mesh, and the future of Kubernetes networking.

Why I said no

Don't over-engineer the foundation. Start with nginx, get the lab running without friction, then migrate to Envoy deliberately as its own learning exercise.

What we did instead

nginx now, Envoy later. The migration happens when we actually need Envoy, and it becomes its own future chapter: learning both controllers and the path between them is a real production skill.

What this teaches

Incremental complexity. A local lab should build confidence, not fight you. The migration from nginx → Envoy is on the roadmap and will be its own post.


Production Delta

Lab Decision                Production Equivalent
cluster-admin for kite      Scoped read-only ClusterRole
kind single node            Multi-AZ EKS node groups
/etc/hosts for DNS          Route53 & external-dns controller
PVC with local storage      EBS CSI driver or EFS for shared state
nginx ingress               AWS ALB Ingress Controller or nginx/envoy on EKS
Manual just apply-all       ArgoCD GitOps, git push triggers apply

Break It On Purpose

The best way to understand a system is to break it deliberately. Try each of these after finishing the setup:

bash
# 1. Delete the PVC, watch kite lose its database on next restart
kubectl delete pvc kite-data -n kite
kubectl rollout restart deployment/kite -n kite
# → kite asks you to create superadmin again
# → teaches: emptyDir vs PVC lifecycle

# 2. Remove ClusterRoleBinding, and watch kite go blind
kubectl delete clusterrolebinding kite-admin
# → kite dashboard shows no resources
# → teaches: in-cluster auth, ServiceAccount permissions

# 3. Delete the Ingress resource, watch kite.local 404
kubectl delete ingress kite -n kite
curl http://kite.local
# → nginx returns 404, pod still running
# → teaches: ingress routing vs pod health

# 4. Full recovery drill, delete cluster, rebuild from git only
just cluster-delete
just cluster-create && just apply-all
# → time yourself, should be under 2 minutes
# → teaches: why GitOps matters

Mental Model

A local cluster should be built with the same primitives as production, just with a smaller blast radius. Every shortcut you take locally is a mental model you'll have to unlearn later.

The port-forward temptation is real. It works, it's fast, it requires zero setup. But the moment you reach for it, you're building a habit of thinking outside the cluster instead of inside it. Production doesn't have port-forward. Production has ingress controllers, service meshes, and load balancers. Build the local lab the same way and the mental model transfers for free.


Recovery Playbook

bash
git clone https://github.com/Jenish-1235/lab
cd lab
just cluster-create
just apply-all
# /etc/hosts entries are already permanent
# open http://kite.local

Total recovery time: ~2 minutes.


What's Next

  • Grafana + Prometheus observability stack
  • ArgoCD GitOps, so git push auto-applies instead of just apply-all
  • Migrate the ingress from nginx to Envoy (Gateway API) when the need arises
  • AWS cross-account architecture, EKS with dev/prod accounts mirroring enterprise patterns
  • Chaos engineering, Chaos Mesh + AWS FIS, extending the fightprod project