
Self-Hosting on Kubernetes

The manifests below give you a minimal but functional Govrix Scout deployment on any standard Kubernetes cluster (EKS, GKE, AKS, or bare-metal).

This is a minimal configuration intended to get you running quickly. Production deployments should add PersistentVolumeClaims for PostgreSQL, resource requests and limits on all containers, init containers for schema migrations, and horizontal pod autoscaling for the proxy. See the GitHub repo for production-grade manifests.
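As a sketch of the resource settings mentioned above, requests and limits can be added under the container spec in the Deployment; the values below are illustrative placeholders, not tuned recommendations:

```yaml
# Illustrative fragment for the govrix-scout container spec.
# Tune these against observed usage before relying on them in production.
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "512Mi"
```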

Namespace

Create a dedicated namespace for all Govrix Scout resources:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: govrix-scout
EOF
```

Secret

Store sensitive values — the management API key and the database URL — in a Kubernetes Secret. Replace the placeholder values before applying.

```yaml
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: govrix-scout-secrets
  namespace: govrix-scout
type: Opaque
stringData:
  GOVRIX_API_KEY: "replace-with-a-strong-random-secret"
  GOVRIX_DATABASE_URL: "postgres://govrix:govrix_scout_dev@postgres:5432/govrix"
```

```shell
kubectl apply -f secret.yaml
```

For production, manage secrets with an external secrets operator (e.g. External Secrets Operator + AWS Secrets Manager) rather than storing plaintext in stringData.
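For illustration, an ExternalSecret resource for the setup above could look like the following. This is a sketch that assumes the External Secrets Operator is installed and that a ClusterSecretStore named `aws-secrets-manager` exists; the remote key paths (`govrix-scout/api-key`, `govrix-scout/database-url`) are hypothetical:

```yaml
# external-secret.yaml — syncs the two values into a Secret named
# govrix-scout-secrets, matching what the Deployment expects.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: govrix-scout-secrets
  namespace: govrix-scout
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets-manager   # hypothetical store name
  target:
    name: govrix-scout-secrets
  data:
    - secretKey: GOVRIX_API_KEY
      remoteRef:
        key: govrix-scout/api-key        # hypothetical remote path
    - secretKey: GOVRIX_DATABASE_URL
      remoteRef:
        key: govrix-scout/database-url   # hypothetical remote path
```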

ConfigMap

Non-secret configuration goes in a ConfigMap:

```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: govrix-scout-config
  namespace: govrix-scout
data:
  GOVRIX_PROXY_PORT: "4000"
  GOVRIX_API_PORT: "4001"
  GOVRIX_PROXY__UPSTREAM_OPENAI: "https://api.openai.com"
  GOVRIX_PROXY__UPSTREAM_ANTHROPIC: "https://api.anthropic.com"
  RUST_LOG: "govrix_scout_proxy=info"
```

```shell
kubectl apply -f configmap.yaml
```

Deployment

The Deployment runs the govrix-scout binary. The liveness probe hits the management API's /health endpoint and the readiness probe hits /ready, both on port 4001.

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: govrix-scout
  namespace: govrix-scout
  labels:
    app: govrix-scout
spec:
  replicas: 1
  selector:
    matchLabels:
      app: govrix-scout
  template:
    metadata:
      labels:
        app: govrix-scout
    spec:
      containers:
        - name: govrix-scout
          image: ghcr.io/manaspros/govrix-scout:latest
          ports:
            - name: proxy
              containerPort: 4000
            - name: api
              containerPort: 4001
            - name: metrics
              containerPort: 9090
          envFrom:
            - configMapRef:
                name: govrix-scout-config
            - secretRef:
                name: govrix-scout-secrets
          livenessProbe:
            httpGet:
              path: /health
              port: 4001
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: 4001
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 3
```

```shell
kubectl apply -f deployment.yaml
```

Services

Two Services are recommended:

  • ClusterIP — for internal cluster traffic to the management API (port 4001)
  • LoadBalancer — to expose the proxy (port 4000) to agents running outside the cluster
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: govrix-scout-api
  namespace: govrix-scout
spec:
  type: ClusterIP
  selector:
    app: govrix-scout
  ports:
    - name: api
      port: 4001
      targetPort: 4001
---
apiVersion: v1
kind: Service
metadata:
  name: govrix-scout-proxy
  namespace: govrix-scout
spec:
  type: LoadBalancer
  selector:
    app: govrix-scout
  ports:
    - name: proxy
      port: 4000
      targetPort: 4000
    - name: metrics
      port: 9090
      targetPort: 9090
```

```shell
kubectl apply -f service.yaml
```

Apply everything at once

If you have saved all manifests into a single directory:

```shell
kubectl apply -f namespace.yaml
kubectl apply -f secret.yaml
kubectl apply -f configmap.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```

Verify the deployment

```shell
# Check pod status
kubectl get pods -n govrix-scout

# Stream logs from the proxy container
kubectl logs -n govrix-scout -l app=govrix-scout -f

# Port-forward the API locally for a quick health check
kubectl port-forward -n govrix-scout svc/govrix-scout-api 4001:4001 &
curl http://127.0.0.1:4001/health
# Expected: {"status":"ok"}
```

Port reference

| Port | Service | Kubernetes exposure |
| --- | --- | --- |
| 4000 | Proxy (agent traffic) | LoadBalancer Service |
| 4001 | Management API | ClusterIP Service |
| 9090 | Prometheus metrics | LoadBalancer Service (or scrape via PodMonitor) |

Next steps

  • Add a PostgreSQL StatefulSet with a PVC (or use a managed database like AWS RDS with TimescaleDB)
  • Run govrix-scout migrate as a Kubernetes Job or init container before the proxy starts
  • Configure a HorizontalPodAutoscaler targeting the govrix-scout Deployment on CPU or custom Prometheus metrics
  • Set up a PodMonitor or ServiceMonitor for Prometheus Operator to scrape port 9090
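As a sketch of the migration step above, a one-shot Job could run `govrix-scout migrate` with the same Secret before rolling out a new proxy version. This assumes the image's binary accepts a `migrate` subcommand, as described in the steps above:

```yaml
# migrate-job.yaml — one-shot schema migration.
# Jobs are immutable once created; delete and re-create it per release.
apiVersion: batch/v1
kind: Job
metadata:
  name: govrix-scout-migrate
  namespace: govrix-scout
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: ghcr.io/manaspros/govrix-scout:latest
          command: ["govrix-scout", "migrate"]
          envFrom:
            - secretRef:
                name: govrix-scout-secrets
```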