
Self-Hosting Conductor With Kubernetes

info

Self-hosted Conductor is released under a proprietary license. Self-hosting Conductor for commercial or production use requires a license key.

Overview

This guide covers deploying DBOS Conductor on Kubernetes so your applications get durable workflow execution, automatic workflow recovery, workflow management and observability — all running on infrastructure you control.

The Kubernetes manifests are portable to any conformant cluster.


Deployments

Database — Conductor needs a PostgreSQL database, which we recommend configuring with a dedicated database role.

Conductor — A stateless, single-container Deployment listening on port 8090. All state lives in PostgreSQL: use a Deployment and not a StatefulSet. Required environment variables:

  • DBOS__CONDUCTOR_DB_URL (connection string to the dbos_conductor database)
  • DBOS_CONDUCTOR_LICENSE_KEY (obtain a license key)

Conductor is out of the critical path and a single Conductor instance can serve tens of thousands of application servers.

Console — A stateless, single-container Deployment listening on port 80. It connects to Conductor using the environment variable DBOS_CONDUCTOR_URL.

Updating Conductor

Conductor is architecturally out-of-band — it is not on the critical path of your application. To upgrade, update the container image tag (latest by default) in conductor.yaml and console.yaml, then run kubectl rollout restart. Prefer updating Conductor and the Console together. Applications seamlessly reconnect to the new Conductor version with no impact on their availability.
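
For example, assuming the Deployments are named conductor and console in a dbos namespace, as in the walkthrough below:

kubectl rollout restart deployment/conductor -n dbos
kubectl rollout restart deployment/console -n dbos

# Wait for the new pods to become ready
kubectl rollout status deployment/conductor -n dbos
kubectl rollout status deployment/console -n dbos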

Register applications

After deploying Conductor and Console, register your application, and generate an API key. The application connects to Conductor via WebSocket using this API key and the Conductor URL.

Authentication

Conductor supports OAuth 2.0 with any OIDC-compliant provider. See the authentication setup guide.

Ingress

We recommend setting up a reverse proxy (e.g., Nginx) in front of all services. The reverse proxy should perform TLS termination and support WebSockets. Configure your DBOS applications to point at your load balancer or reverse proxy URL, which should route their requests to Conductor.

The DBOS SDK maintains a long-lived WebSocket connection to Conductor, so both the reverse proxy and any cloud load balancer in front of it (e.g., AWS ELB) should have idle timeouts high enough (e.g., 300s) to tolerate network hiccups. The DBOS SDK sends periodic pings to keep the connection alive, but a network hiccup that delays pings past the timeout will cause a disconnect. In case of disconnection, the DBOS SDK will reconnect automatically.

Security Best Practices

Secret management — Conductor deployments need credentials for PostgreSQL, a license key, and an API key. Store these as Kubernetes Secrets and inject them via secretKeyRef. For Git-safe storage, encrypt with Sealed Secrets, SOPS, or a cloud-native secrets manager (AWS Secrets Manager, Vault, etc.).

Network policies — Apply a default-deny ingress policy to the namespace, then add explicit allow rules for each pod. If Conductor and Console are co-located, allow HTTPS traffic from the Console to Conductor.
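
As a sketch, a default-deny policy plus an allow rule for Console-to-Conductor traffic might look like this (the namespace, policy names, and pod labels follow the walkthrough below):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dbos
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress              # no ingress rules listed, so all inbound traffic is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-console-to-conductor
  namespace: dbos
spec:
  podSelector:
    matchLabels:
      app: conductor       # the Conductor pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: console # only the Console may connect
      ports:
        - protocol: TCP
          port: 8090       # Conductor's service port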

RBAC — Restrict which ServiceAccounts can read Secrets in the namespace. Conductor credentials (database URLs, license key, API key) should only be accessible to the pods that need them.
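
A sketch of the matching RBAC, assuming only one ServiceAccount (named here purely for illustration) should be able to read the Conductor secrets through the API:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-conductor-secrets
  namespace: dbos
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["conductor-db", "conductor-license"]  # only the secrets Conductor uses
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-conductor-secrets
  namespace: dbos
subjects:
  - kind: ServiceAccount
    name: ops-tooling            # illustrative ServiceAccount
    namespace: dbos
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-conductor-secrets

Note that Secrets injected through secretKeyRef are resolved by the kubelet and do not go through RBAC; the Role above only governs clients that read Secrets via the Kubernetes API.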


Walkthrough (AWS EKS)

In addition to DBOS Conductor and the DBOS Console, the infrastructure includes the following components:

Component                     | Role
RDS                           | Database for Conductor operating state
Reverse Proxy (Nginx Ingress) | TLS termination, path-based routing, WebSocket support
Sealed Secrets                | Encrypts secrets at rest; decrypts them in-cluster

Set environment variables

Set these variables before proceeding — replace the placeholder values with your own:

# Your AWS account ID (12-digit number)
AWS_ACCOUNT_ID=123456789012

# AWS region for all resources
AWS_REGION=us-west-2

# PostgreSQL admin password (used for the RDS master user)
POSTGRES_PASSWORD='choose-a-secure-password'

# Password for the Conductor database role
CONDUCTOR_ROLE_PASSWORD='choose-another-secure-password'

# Conductor license key (from DBOS Console or sales)
CONDUCTOR_LICENSE_KEY='your-license-key'

Infrastructure

CLI tools required on your workstation
Tool     | Purpose                                            | Install
AWS CLI  | AWS account access                                 | Install guide
eksctl   | Create and manage EKS clusters                     | Install guide
kubectl  | Interact with Kubernetes                           | Included with eksctl, or install separately
Helm     | Install cluster add-ons (Ingress, Sealed Secrets)  | brew install helm or Install guide
kubeseal | Encrypt Kubernetes secrets                         | brew install kubeseal or Install guide
openssl  | Generate self-signed TLS certificate               | Pre-installed on macOS/Linux

Verify your AWS credentials are configured:

aws sts get-caller-identity

DBOS Conductor License Key

Obtain a development license key from the DBOS Console or contact DBOS sales for a pro license key. You can follow this guide with a development license key for evaluation, but you will be limited to one executor per application.

Create an EKS Cluster

Create a managed EKS cluster with two nodes. This takes approximately 15 minutes.

Create EKS cluster
eksctl create cluster \
--name dbos-conductor \
--region $AWS_REGION \
--version 1.31 \
--nodegroup-name default \
--node-type t3.medium \
--nodes 2 \
--managed
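
If you prefer to keep the cluster definition in version control, the same settings can be expressed as an eksctl config file (a sketch; substitute your own region for the hard-coded value):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: dbos-conductor
  region: us-west-2
  version: "1.31"
managedNodeGroups:
  - name: default          # same node group as the command above
    instanceType: t3.medium
    desiredCapacity: 2

and create the cluster with eksctl create cluster -f cluster.yaml.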

eksctl automatically:

  • Creates a VPC with public and private subnets
  • Configures the Amazon VPC CNI, which supports NetworkPolicy enforcement
  • Sets up your ~/.kube/config to point at the new cluster

Once complete, verify the cluster is ready:

kubectl get nodes

You should see two nodes in Ready status:

NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-xx-xx.us-west-2.compute.internal   Ready    <none>   2m    v1.31.x
ip-192-168-xx-xx.us-west-2.compute.internal   Ready    <none>   2m    v1.31.x

Create a Namespace

All resources in this guide are deployed to a dedicated dbos namespace:

kubectl create namespace dbos

Provision an RDS PostgreSQL Instance

RDS provisioning commands

Find the VPC and private subnets that eksctl created:

# Get the VPC ID
VPC_ID=$(aws ec2 describe-vpcs \
--filters "Name=tag:alpha.eksctl.io/cluster-name,Values=dbos-conductor" \
--query "Vpcs[0].VpcId" --output text --region $AWS_REGION)
echo "VPC: $VPC_ID"

# Get the private subnets
PRIVATE_SUBNETS=($(aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=$VPC_ID" \
"Name=tag:aws:cloudformation:logical-id,Values=SubnetPrivate*" \
--query "Subnets[*].SubnetId" --output text --region $AWS_REGION))
echo "Private subnets: ${PRIVATE_SUBNETS[@]}"

Create a DB subnet group from the private subnets:

aws rds create-db-subnet-group \
--db-subnet-group-name dbos-conductor-db \
--db-subnet-group-description "DBOS Conductor RDS subnets" \
--subnet-ids "${PRIVATE_SUBNETS[@]}" \
--region $AWS_REGION

Create a security group that allows PostgreSQL access from the EKS nodes:

# Get the EKS cluster security group
EKS_SG=$(aws ec2 describe-security-groups \
--filters "Name=vpc-id,Values=$VPC_ID" \
"Name=tag:aws:eks:cluster-name,Values=dbos-conductor" \
--query "SecurityGroups[0].GroupId" \
--output text --region $AWS_REGION)
echo "EKS SG: $EKS_SG"

# Create a security group for RDS
RDS_SG=$(aws ec2 create-security-group \
--group-name dbos-conductor-rds \
--description "Allow PostgreSQL from EKS nodes" \
--vpc-id $VPC_ID \
--query "GroupId" --output text --region $AWS_REGION)
echo "RDS SG: $RDS_SG"

# Allow inbound PostgreSQL from EKS nodes
aws ec2 authorize-security-group-ingress \
--group-id $RDS_SG \
--protocol tcp --port 5432 \
--source-group $EKS_SG \
--region $AWS_REGION

Create the RDS instance:

aws rds create-db-instance \
--db-instance-identifier dbos-conductor-pg \
--db-instance-class db.t4g.micro \
--engine postgres \
--engine-version 16 \
--master-username postgres \
--master-user-password "$POSTGRES_PASSWORD" \
--allocated-storage 20 \
--db-subnet-group-name dbos-conductor-db \
--vpc-security-group-ids $RDS_SG \
--no-publicly-accessible \
--region $AWS_REGION

Wait for the instance to become available (this takes a few minutes):

aws rds wait db-instance-available \
--db-instance-identifier dbos-conductor-pg \
--region $AWS_REGION

Get the RDS endpoint:

RDS_ENDPOINT=$(aws rds describe-db-instances \
--db-instance-identifier dbos-conductor-pg \
--query "DBInstances[0].Endpoint.Address" \
--output text --region $AWS_REGION)
echo "RDS endpoint: $RDS_ENDPOINT"

Create the databases and roles from a pod inside the cluster (since the RDS instance is not publicly accessible):

Create databases and roles
kubectl run pg-setup --restart=Never \
--namespace dbos \
--image=postgres:16 \
--env="PGPASSWORD=$POSTGRES_PASSWORD" \
--command -- bash -c "
psql -h $RDS_ENDPOINT -U postgres -c 'CREATE DATABASE dbos_conductor;'
psql -h $RDS_ENDPOINT -U postgres -c \"CREATE ROLE dbos_conductor_role WITH LOGIN PASSWORD '$CONDUCTOR_ROLE_PASSWORD';\"
psql -h $RDS_ENDPOINT -U postgres -c 'GRANT ALL PRIVILEGES ON DATABASE dbos_conductor TO dbos_conductor_role;'
psql -h $RDS_ENDPOINT -U postgres -d dbos_conductor -c 'GRANT ALL ON SCHEMA public TO dbos_conductor_role;'
"
# Wait for the pod to finish, then clean up
sleep 15 && kubectl logs pg-setup -n dbos && kubectl delete pod pg-setup -n dbos

This creates:

  • dbos_conductor — Conductor's internal database (application registry, metadata)
  • dbos_conductor_role — a dedicated role for Conductor's database access
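
To confirm the new role can connect, you can run a similar one-off pod (illustrative; it prints the connection info and removes itself):

kubectl run pg-check --rm -i --restart=Never \
--namespace dbos \
--image=postgres:16 \
--env="PGPASSWORD=$CONDUCTOR_ROLE_PASSWORD" \
--command -- psql -h $RDS_ENDPOINT -U dbos_conductor_role -d dbos_conductor -c '\conninfo'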

Install Cluster Add-ons

We install two Helm charts that the later sections depend on.

Helm installs (Nginx Ingress, Sealed Secrets)

Nginx Ingress Controller — reverse proxy and TLS termination:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx --create-namespace \
--set controller.service.type=LoadBalancer

Sealed Secrets — encrypt secrets for safe Git storage:

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm install sealed-secrets sealed-secrets/sealed-secrets \
--namespace kube-system

Verify all add-ons are running:

# Ingress controller
kubectl get pods -n ingress-nginx

# Sealed Secrets controller
kubectl get pods -n kube-system -l app.kubernetes.io/name=sealed-secrets

Secrets

Several components need sensitive credentials. We use Bitnami Sealed Secrets: create a regular Secret, encrypt it with kubeseal, and apply the encrypted SealedSecret to the cluster. The controller decrypts it in-cluster into a standard Kubernetes Secret that pods can reference. The encrypted form is safe to commit to Git.

Secrets Inventory

Secret            | Keys         | Used by
conductor-db      | database-url | Conductor — connection to the dbos_conductor database
conductor-license | license-key  | Conductor — production license

Create and Seal Secrets

kubeseal commands

Create each secret, pipe it through kubeseal, and save the encrypted form:

# 1. Conductor database credentials (dedicated role)
kubectl create secret generic conductor-db \
--namespace dbos \
--from-literal=database-url="postgresql://dbos_conductor_role:${CONDUCTOR_ROLE_PASSWORD}@${RDS_ENDPOINT}:5432/dbos_conductor?sslmode=require" \
--dry-run=client -o yaml | \
kubeseal --controller-name=sealed-secrets --controller-namespace=kube-system --format yaml \
> sealed-conductor-db.yaml

# 2. Conductor license key
kubectl create secret generic conductor-license \
--namespace dbos \
--from-literal=license-key="$CONDUCTOR_LICENSE_KEY" \
--dry-run=client -o yaml | \
kubeseal --controller-name=sealed-secrets --controller-namespace=kube-system --format yaml \
> sealed-conductor-license.yaml

Apply and Verify

kubectl apply -f sealed-conductor-db.yaml
kubectl apply -f sealed-conductor-license.yaml

Verify the controller has decrypted them into regular Kubernetes Secrets:

kubectl get secrets -n dbos
NAME                TYPE     DATA   AGE
conductor-db        Opaque   1      10s
conductor-license   Opaque   1      10s
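
If you want to inspect a decrypted value (for example, to confirm the connection string), note that Secret data is only base64-encoded:

kubectl get secret conductor-db -n dbos -o jsonpath='{.data.database-url}' | base64 --decode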

Ingress

With the Nginx Ingress Controller installed, you have a load balancer in front of the cluster. This section creates a TLS certificate and an Ingress resource so that all services are reachable over HTTPS.

This walkthrough uses a self-signed certificate on the load balancer's hostname. For production, use cert-manager with a real domain.

Get the load balancer hostname:

ELB_HOSTNAME=$(kubectl get svc -n ingress-nginx ingress-nginx-controller \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo $ELB_HOSTNAME

Save this value — you'll need it throughout the rest of the guide. It looks like xxxxxxxx.us-west-2.elb.amazonaws.com.

Create a self-signed TLS certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout tls.key -out tls.crt \
-subj "/CN=dbos-conductor" \
-addext "subjectAltName=DNS:${ELB_HOSTNAME}"

kubectl create secret tls dbos-tls \
--cert=tls.crt --key=tls.key \
--namespace dbos
note

The CN is kept short because OpenSSL's CN field has a 64-character limit — the actual hostname is covered by the SAN extension. Your browser will show a certificate warning for the self-signed cert — accept it to proceed. For production, use cert-manager with a real domain.

ingress.yaml

The Ingress routes /conductor/... to the Conductor service and everything else to the Console. A regex rewrite strips the /conductor prefix so Conductor sees requests at /. Replace <your-elb-hostname> with the $ELB_HOSTNAME value you retrieved above.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dbos-ingress
  namespace: dbos
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - <your-elb-hostname>
      secretName: dbos-tls
  rules:
    - host: <your-elb-hostname>
      http:
        paths:
          - path: /conductor(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: conductor
                port:
                  number: 8090
          - path: /()(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: console
                port:
                  number: 80

The host in both tls and rules must match; if they differ, Nginx serves its default fake certificate instead of dbos-tls.

Request path            | Backend
/conductor/             | conductor:8090 → /
/conductor/v1/workflows | conductor:8090 → /v1/workflows
/                       | console:80
/health                 | console:80
  • rewrite-target: /$2 — strips the /conductor prefix using the second capture group. The Console catch-all uses /()(.*) so $2 passes the full path through unchanged.
  • proxy-read-timeout / proxy-send-timeout — set to 3600s to keep Conductor's long-lived WebSocket connections alive.

Apply the Ingress

kubectl apply -f ingress.yaml

WebSocket Configuration

The application connects to Conductor via a long-lived WebSocket. Three layers must be configured to prevent idle connections from being dropped:

Layer         | Setting            | Default | Suggested | Why
Nginx Ingress | proxy-read-timeout | 60s     | 3600s     | Prevents Nginx from closing an idle WebSocket
Nginx Ingress | proxy-send-timeout | 60s     | 3600s     | Same, for the send direction
AWS ELB       | idle timeout       | 60s     | 3600s     | Prevents the load balancer from closing an idle TCP connection

The Nginx timeouts are already set via the Ingress annotations. Nginx handles the Connection: Upgrade and Upgrade: websocket headers automatically — no additional annotation is needed for the protocol upgrade itself.

The AWS load balancer idle timeout is configured separately on the ingress-nginx-controller Service:

kubectl patch svc ingress-nginx-controller -n ingress-nginx -p \
'{"metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout":"3600"}}}'
note

The DBOS SDK sends periodic ping frames that keep the connection active under normal conditions. Although the SDK reconnects automatically after a disconnect, increasing the ELB idle timeout prevents transient network hiccups from dropping the connection in the first place.

Deployments

Conductor is the core service that manages workflow recovery and the application registry. It connects to the dbos_conductor database using the dbos_conductor_role credentials.

conductor.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: conductor
  namespace: dbos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: conductor
  template:
    metadata:
      labels:
        app: conductor
    spec:
      containers:
        - name: conductor
          image: dbosdev/conductor
          env:
            - name: DBOS__CONDUCTOR_DB_URL
              valueFrom:
                secretKeyRef:
                  name: conductor-db
                  key: database-url
            - name: DBOS_CONDUCTOR_LICENSE_KEY
              valueFrom:
                secretKeyRef:
                  name: conductor-license
                  key: license-key
          ports:
            - containerPort: 8090
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8090
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8090
            initialDelaySeconds: 15
            periodSeconds: 30
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: conductor
  namespace: dbos
spec:
  selector:
    app: conductor
  ports:
    - port: 8090
      targetPort: 8090

Both sensitive values (DBOS__CONDUCTOR_DB_URL and DBOS_CONDUCTOR_LICENSE_KEY) are pulled from the Sealed Secrets created in the Secrets section.

The Console is the web UI for managing applications, monitoring workflows, and generating API keys. In this example, it connects to Conductor via internal cluster DNS.

console.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console
  namespace: dbos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console
  template:
    metadata:
      labels:
        app: console
    spec:
      containers:
        - name: console
          image: dbosdev/console
          env:
            - name: DBOS_CONDUCTOR_URL
              value: "conductor.dbos.svc.cluster.local:8090"
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 30
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: console
  namespace: dbos
spec:
  selector:
    app: console
  ports:
    - port: 80
      targetPort: 80

Deploy both with:

kubectl apply -f conductor.yaml
kubectl apply -f console.yaml

Verify both pods are running:

kubectl get pods -n dbos
NAME                        READY   STATUS    RESTARTS   AGE
conductor-xxxxxxxxx-xxxxx   1/1     Running   0          2m
console-xxxxxxxxx-xxxxx     1/1     Running   0          30s
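
You can also sanity-check routing through the Ingress from your workstation. The paths below assume the health endpoints used by the probes above (-k accepts the self-signed certificate):

# Console health endpoint, served at the root path
curl -k https://$ELB_HOSTNAME/health

# Conductor health endpoint; the Ingress strips the /conductor prefix
curl -k https://$ELB_HOSTNAME/conductor/healthz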

Access the Console and Generate an API Key

At this point, your self-hosted Conductor deployment is fully operational! Open https://<your-elb-hostname>/ in your browser (accept the self-signed cert warning), then follow the Conductor setup instructions to:

  1. Register your application
  2. Generate an API key

Cleanup

To tear down all AWS resources when done, remove the RDS resources first so that the VPC created by eksctl can be deleted cleanly:

# Delete the RDS instance and wait for it to be removed
aws rds delete-db-instance --db-instance-identifier dbos-conductor-pg \
--skip-final-snapshot --region $AWS_REGION
aws rds wait db-instance-deleted \
--db-instance-identifier dbos-conductor-pg --region $AWS_REGION

# Delete the RDS security group
RDS_SG=$(aws ec2 describe-security-groups \
--filters "Name=group-name,Values=dbos-conductor-rds" \
--query "SecurityGroups[0].GroupId" --output text --region $AWS_REGION)
aws ec2 delete-security-group --group-id $RDS_SG --region $AWS_REGION

# Delete the DB subnet group
aws rds delete-db-subnet-group --db-subnet-group-name dbos-conductor-db --region $AWS_REGION

# Delete the EKS cluster (includes the VPC, node group, and remaining security groups)
eksctl delete cluster --name dbos-conductor --region $AWS_REGION