
Deployment

All components are managed by FluxCD via the rig-gitops repo.

GitOps Structure

rig-gitops/
├── clusters/dell/
│   └── conductor-e.yaml          FluxCD Kustomization (entrypoint)
└── apps/conductor-e/
    ├── namespace.yaml             conductor-e namespace
    ├── automate-e-source.yaml     GitRepository: automate-e repo
    ├── automate-e-helmrelease.yaml HelmRelease: Automate-E agent + cron
    ├── conductor-e-api-source.yaml GitRepository: conductor-e repo
    └── conductor-e-api-kustomization.yaml  Kustomization: API + PostgreSQL
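
The entrypoint Kustomization can be scaffolded with the flux CLI. This is a hedged sketch, not the actual file: the source name, path, and interval below are assumptions to adjust against the real rig-gitops layout.

```shell
# Hypothetical sketch: generate the entrypoint Kustomization manifest
# (source name, path, and interval are assumptions)
flux create kustomization conductor-e \
  --source=GitRepository/flux-system \
  --path="./apps/conductor-e" \
  --prune=true \
  --interval=10m \
  --export > clusters/dell/conductor-e.yaml
```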

Components

Component          Image                                    Managed By              Source
Conductor-E Agent  ghcr.io/stig-johnny/automate-e:latest    HelmRelease             automate-e repo
Conductor-E Cron   (same image)                             HelmRelease (CronJob)   automate-e repo
Conductor-E API    ghcr.io/stig-johnny/conductor-e:latest   Kustomization           conductor-e repo
PostgreSQL 16      postgres:16-alpine                       Kustomization           conductor-e repo

Coordination Cron

Every 5 minutes, a CronJob runs the coordination loop:

  1. Check event store health
  2. Get current priority queue
  3. Search GitHub for agent-ready issues across managed repos
  4. Sync new issues to event store as ISSUE_APPROVED events
  5. Post summary to #conductor-e via Discord webhook

Cost: ~$0.005 per run (288 runs/day, so ~$1.44/day with Haiku).
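
The five steps above can be sketched as a shell script. This is a hedged sketch, not the deployed cron: the in-cluster API URL, the /health, /queue, and /events endpoint paths, and the agent-ready label are all assumptions, and the live calls are gated behind RUN_LIVE so the script is inert by default.

```shell
#!/bin/sh
# Sketch of the 5-minute coordination loop (endpoints and label are assumptions).
set -eu

API="${CONDUCTOR_API:-http://conductor-e-api.conductor-e.svc:8080}"

# Pure formatting helper: build the Discord summary line from an issue count.
summary_msg() {
  printf 'Coordination run: %s agent-ready issue(s) synced' "$1"
}

# Only talk to the cluster/GitHub/Discord when explicitly asked to.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  # 1. Check event store health
  curl -fsS "$API/health" >/dev/null
  # 2. Get current priority queue
  curl -fsS "$API/queue"
  # 3. Search GitHub for agent-ready issues across managed repos (gh CLI + jq)
  count=$(gh search issues --label agent-ready --state open --json number | jq 'length')
  # 4. Each new issue would be POSTed to "$API/events" as an ISSUE_APPROVED event
  # 5. Post summary to #conductor-e via the Discord webhook
  curl -fsS -X POST "$DISCORD_WEBHOOK_URL" \
    -H 'Content-Type: application/json' \
    -d "{\"content\":\"$(summary_msg "$count")\"}"
fi
```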

Secrets

The conductor-e-secrets secret (created manually, not in GitOps):

Key                  Purpose                                       Used By
discord-bot-token    Discord bot (Conductor-E, formerly ATL-E)     Automate-E
anthropic-api-key    1-year Max subscription OAuth token           Automate-E
github-token         GitHub PAT for MCP tools                      Automate-E, Cron
postgres-password    PostgreSQL auth                               API + PostgreSQL
discord-webhook-url  Cron results webhook for #conductor-e         Cron
database-url         Empty (in-memory mode for Automate-E memory)  Automate-E
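
Since the secret lives outside GitOps, it has to be created by hand once per cluster. A sketch of that one-time command, with every value a placeholder:

```shell
# One-time manual creation (all values below are placeholders, not real tokens)
kubectl create secret generic conductor-e-secrets -n conductor-e \
  --from-literal=discord-bot-token='<bot-token>' \
  --from-literal=anthropic-api-key='<oauth-token>' \
  --from-literal=github-token='<github-pat>' \
  --from-literal=postgres-password='<password>' \
  --from-literal=discord-webhook-url='<webhook-url>' \
  --from-literal=database-url=''
```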

FluxCD Commands

# Check status
flux get kustomizations
flux get helmreleases -A

# Force reconciliation
flux reconcile source git flux-system
flux reconcile kustomization conductor-e

# Check what FluxCD manages
kubectl get pods -n conductor-e

Building Images

Conductor-E API

No CI pipeline yet. Build manually on Dell:

# On Dell (100.95.212.93)
cd /tmp/conductor-e-build

# Login to GHCR, reusing the credentials already stored in the
# ghcr-pull-secret pull secret (the decoded auth string is "username:token")
GHCR_AUTH=$(kubectl get secret ghcr-pull-secret -n automate-e \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | \
  python3 -c "import sys,json; d=json.load(sys.stdin); print(d['auths']['ghcr.io']['auth'])" | base64 -d)
echo "${GHCR_AUTH#*:}" | buildah login -u "${GHCR_AUTH%%:*}" --password-stdin ghcr.io

# Build and push
buildah bud -t ghcr.io/stig-johnny/conductor-e:latest -f Dockerfile .
buildah push ghcr.io/stig-johnny/conductor-e:latest

# Restart to pull the new :latest image (FluxCD alone won't redeploy,
# since the manifest itself is unchanged)
kubectl rollout restart deployment/conductor-e-api -n conductor-e

macOS tar workaround

When copying source from macOS, create the tarball with COPYFILE_DISABLE=1 tar czf to skip AppleDouble metadata, and run find . -name '._*' -delete on Dell to remove any resource-fork files that slipped through.
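
Concretely, the round trip looks like this (the tarball and directory names are illustrative; the Dell build path is the one used above):

```shell
# On the Mac: skip AppleDouble (._*) metadata when creating the tarball
COPYFILE_DISABLE=1 tar czf conductor-e-src.tgz conductor-e/
scp conductor-e-src.tgz dell:/tmp/conductor-e-build/

# On Dell: extract, then remove any ._* files that made it through
cd /tmp/conductor-e-build
tar xzf conductor-e-src.tgz
find . -name '._*' -delete
```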

Automate-E

Same process but for the automate-e image:

cd /tmp/automate-e-build
buildah bud -t ghcr.io/stig-johnny/automate-e:latest -f Dockerfile .
buildah push ghcr.io/stig-johnny/automate-e:latest
kubectl rollout restart deployment/conductor-e-automate-e -n conductor-e

Verifying

# All pods
kubectl get pods -n conductor-e

# API health
kubectl port-forward -n conductor-e svc/conductor-e-api 18080:8080 &
curl http://localhost:18080/health

# API logs
kubectl logs -n conductor-e -l app=conductor-e-api --tail=20

# Automate-E logs
kubectl logs -n conductor-e -l app.kubernetes.io/name=automate-e --tail=20

# Cron job logs (last run)
kubectl logs -n conductor-e -l app.kubernetes.io/component=cron --tail=20