AI-Assisted Development

If you're using an AI coding agent to build a DBOS application, make sure it has the latest information on DBOS by either:

  1. Installing DBOS skills.
  2. Providing your agent with a DBOS prompt.

You may also want to use the DBOS MCP server so your model can directly access your application's workflows and steps.

DBOS Agent Skills

Agent Skills help developers use AI agents to add DBOS durable workflows to their applications. DBOS provides open-source skills you can check out here.

To install them into your coding agent, run:

npx skills add dbos-inc/agent-skills

The Skills CLI is compatible with most coding agents, including Claude Code, Codex, Antigravity, and Cursor.

DBOS Prompt

You can use this prompt to add rich information about DBOS to your AI coding agent's context. You can copy and paste it directly into your context, or follow these directions to add it to your AI-powered IDE or coding agent of choice:

DBOS TypeScript Prompt
# Build Reliable Applications With DBOS

## Guidelines

- Respond in a friendly and concise manner
- Ask clarifying questions when requirements are ambiguous
- Generate code in TypeScript using the DBOS library. Make sure to fully type everything.
- You MUST import all methods and classes used in the code you generate
- You SHALL keep all code in a single file unless otherwise specified.
- You MUST await all promises.
- DBOS does NOT stand for anything.

## Workflow Guidelines

Workflows provide durable execution so you can write programs that are resilient to any failure.
Workflows are composed of steps, which are ordinary TypeScript functions called with DBOS.runStep().
When using DBOS workflows, you should call any function that performs complex operations or accesses external APIs or services as a step using DBOS.runStep.

If a workflow is interrupted for any reason (e.g., an executor restarts or crashes), when your program restarts the workflow automatically resumes execution from the last completed step.

- If asked to add DBOS to existing code, you MUST ask which function to make a workflow. Do NOT recommend any changes until they have told you what function to make a workflow. Do NOT make a function a workflow unless SPECIFICALLY requested.
- When making a function a workflow, you should make all functions it calls steps. Do NOT change the functions in any way.
- Do NOT make functions steps unless they are DIRECTLY called by a workflow.
- If the workflow function performs a non-deterministic action, you MUST move that action to its own function and make that function a step. Examples of non-deterministic actions include accessing an external API or service, accessing files on disk, generating a random number, or getting the current time.
- Do NOT use Promise.all() due to the risks posed by multiple rejections. Using Promise.allSettled() for parallelism is allowed for single-step promises only. For any complex parallel execution, you should instead use DBOS.startWorkflow and DBOS queues to achieve the parallelism.
- DBOS workflows and steps should NOT have side effects in memory outside of their own scope. They can access global variables, but they should NOT create or update global variables or variables outside their scope.
- Do NOT call any DBOS context method (DBOS.send, DBOS.recv, DBOS.startWorkflow, DBOS.sleep, DBOS.setEvent, DBOS.getEvent) from a step.
- Do NOT start workflows from inside a step.
- Do NOT call DBOS.setEvent and DBOS.recv from outside a workflow function.
- Do NOT use DBOS.getApi, DBOS.postApi, or other DBOS HTTP annotations. These are DEPRECATED. Instead, use Express for HTTP serving by default, unless another web framework is specified.

## DBOS Lifecycle Guidelines

DBOS should be installed and imported from the `@dbos-inc/dbos-sdk` package.
Due to its internal workflow registry, the DBOS library and DBOS workflows cannot be bundled with JavaScript or TypeScript bundlers (Webpack, Vite, Rollup, esbuild, Parcel, etc.) and must be treated as an external library by these tools. Configuration for bundlers should be suggested if these tools are in use and cannot be avoided.

DBOS does not support "serverless" frameworks due to its long-running background jobs. DBOS programs MUST have a starting file (typically 'main.ts' or 'server.ts') that creates all objects and workflow functions during startup.

Any DBOS program MUST call DBOS.setConfig and DBOS.launch in its main function.
You MUST use this default configuration (changing the name as appropriate) unless otherwise specified:

```javascript
DBOS.setConfig({
  name: "dbos-node-starter",
  systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
});
await DBOS.launch();
```

Here is an example main function using Express:

```javascript
import { DBOS } from "@dbos-inc/dbos-sdk";
import express from "express";

export const app = express();
app.use(express.json());

async function main() {
  DBOS.setConfig({
    name: "dbos-node-starter",
    systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  });
  await DBOS.launch();
  const PORT = 3000;
  app.listen(PORT, () => {
    console.log(`🚀 Server is running on http://localhost:${PORT}`);
  });
}

main().catch(console.log);
```

## Workflow and Steps Examples

Simple example:


```javascript
import { DBOS } from "@dbos-inc/dbos-sdk";

async function stepOne() {
  DBOS.logger.info("Step one completed!");
}

async function stepTwo() {
  DBOS.logger.info("Step two completed!");
}

async function exampleFunction() {
  await DBOS.runStep(() => stepOne());
  await DBOS.runStep(() => stepTwo());
}
const exampleWorkflow = DBOS.registerWorkflow(exampleFunction);

async function main() {
  DBOS.setConfig({
    name: "dbos-node-starter",
    systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  });
  await DBOS.launch();
  await exampleWorkflow();
  await DBOS.shutdown();
}

main().catch(console.log);
```

Example with Express:

```javascript
import { DBOS } from "@dbos-inc/dbos-sdk";
import express from "express";

export const app = express();
app.use(express.json());

async function stepOne() {
  DBOS.logger.info("Step one completed!");
}

async function stepTwo() {
  DBOS.logger.info("Step two completed!");
}

async function exampleFunction() {
  await DBOS.runStep(() => stepOne());
  await DBOS.runStep(() => stepTwo());
}
const exampleWorkflow = DBOS.registerWorkflow(exampleFunction);

app.get("/", async (req, res) => {
  await exampleWorkflow();
  res.send();
});

async function main() {
  DBOS.setConfig({
    name: "dbos-node-starter",
    systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  });
  await DBOS.launch();
  const PORT = 3000;
  app.listen(PORT, () => {
    console.log(`🚀 Server is running on http://localhost:${PORT}`);
  });
}

main().catch(console.log);
```

Example with queues:

```javascript
import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";
import express from "express";

export const app = express();
app.use(express.json());

const queue = new WorkflowQueue("example_queue");

async function taskFunction(n: number) {
  await DBOS.sleep(5000);
  DBOS.logger.info(`Task ${n} completed!`);
}
const taskWorkflow = DBOS.registerWorkflow(taskFunction);

async function queueFunction() {
  DBOS.logger.info("Enqueueing tasks!");
  const handles = [];
  for (let i = 0; i < 10; i++) {
    handles.push(await DBOS.startWorkflow(taskWorkflow, { queueName: queue.name })(i));
  }
  const results = [];
  for (const h of handles) {
    results.push(await h.getResult());
  }
  DBOS.logger.info(`Successfully completed ${results.length} tasks`);
}
const queueWorkflow = DBOS.registerWorkflow(queueFunction);

app.get("/", async (req, res) => {
  await queueWorkflow();
  res.send();
});

async function main() {
  DBOS.setConfig({
    name: "dbos-node-starter",
    systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  });
  await DBOS.launch();
  const PORT = 3000;
  app.listen(PORT, () => {
    console.log(`🚀 Server is running on http://localhost:${PORT}`);
  });
}

main().catch(console.log);
```

### Scheduled Workflow

You can schedule DBOS workflows to run exactly once per time interval.
To do this, use the `DBOS.registerScheduled` method or the `DBOS.scheduled` decorator and specify the schedule in crontab syntax.

- A scheduled workflow MUST specify a crontab schedule.
- It MUST take in two arguments: the scheduled start time and the actual start time. Both are `Date` objects.

For example:

```typescript
async function scheduledFunction(schedTime: Date, startTime: Date) {
  DBOS.logger.info(`I am a workflow scheduled to run every 30 seconds`);
}

const scheduledWorkflow = DBOS.registerWorkflow(scheduledFunction);
DBOS.registerScheduled(scheduledWorkflow, { crontab: '*/30 * * * * *' });
```

Or using decorators:

```typescript
class ScheduledExample {
  @DBOS.workflow()
  @DBOS.scheduled({ crontab: '*/30 * * * * *' })
  static async scheduledWorkflow(schedTime: Date, startTime: Date) {
    DBOS.logger.info(`I am a workflow scheduled to run every 30 seconds`);
  }
}
```

## Workflow Documentation

Workflows provide **durable execution** so you can write programs that are **resilient to any failure**.
Workflows are composed of steps, which wrap ordinary TypeScript (or JavaScript) functions.
If a workflow is interrupted for any reason (e.g., an executor restarts or crashes), when your program restarts the workflow automatically resumes execution from the last completed step.

To write a workflow, register a TypeScript function with `DBOS.registerWorkflow`.
The function's inputs and outputs must be serializable to JSON.
For example:

```typescript
async function stepOne() {
  DBOS.logger.info("Step one completed!");
}

async function stepTwo() {
  DBOS.logger.info("Step two completed!");
}

async function workflowFunction() {
  await DBOS.runStep(() => stepOne(), { name: "stepOne" });
  await DBOS.runStep(() => stepTwo(), { name: "stepTwo" });
}
const workflow = DBOS.registerWorkflow(workflowFunction);

await workflow();
```

Alternatively, you can register workflows and steps with decorators:

```typescript
export class Example {
  @DBOS.step()
  static async stepOne() {
    DBOS.logger.info("Step one completed!");
  }

  @DBOS.step()
  static async stepTwo() {
    DBOS.logger.info("Step two completed!");
  }

  // Call steps from workflows
  @DBOS.workflow()
  static async exampleWorkflow() {
    await Example.stepOne();
    await Example.stepTwo();
  }
}

await Example.exampleWorkflow();
```

## Starting Workflows In The Background

One common use-case for workflows is building reliable background tasks that keep running even when your program is interrupted, restarted, or crashes.
You can use `DBOS.startWorkflow` to start a workflow in the background.
If you start a workflow this way, it returns a workflow handle, from which you can access information about the workflow or wait for it to complete and retrieve its result.

Here's an example:

```javascript
class Example {
  @DBOS.workflow()
  static async exampleWorkflow(var1: string, var2: string) {
    return var1 + var2;
  }
}

async function main() {
  // Start exampleWorkflow in the background
  const handle = await DBOS.startWorkflow(Example).exampleWorkflow("one", "two");
  // Wait for the workflow to complete and return its results
  const result = await handle.getResult();
}
```

After starting a workflow in the background, you can use `DBOS.retrieveWorkflow` to retrieve a workflow's handle from its ID.
You can also retrieve a workflow's handle from outside of your DBOS application with `DBOSClient.retrieveWorkflow`.

If you need to run many workflows in the background and manage their concurrency or flow control, you can also use DBOS queues.
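
For example, here is a minimal sketch of retrieving and awaiting a background workflow by its ID (assuming `workflowID` holds the ID of a previously started workflow and that `retrieveWorkflow` returns the handle directly):

```typescript
// A minimal sketch: look up a handle for a known workflow ID,
// check its status, and wait for its result.
const handle = DBOS.retrieveWorkflow(workflowID);
const status = await handle.getStatus();
DBOS.logger.info(`Workflow status: ${status?.status}`);
const result = await handle.getResult();
```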

## Workflow IDs and Idempotency

Every time you execute a workflow, that execution is assigned a unique ID, by default a UUID.
You can access this ID through the `DBOS.workflowID` context variable.

You can set the workflow ID of a workflow as an argument to `DBOS.startWorkflow()`.
Workflow IDs must be **globally unique** for your application.
An assigned workflow ID acts as an idempotency key: if a workflow is called multiple times with the same ID, it executes only once.
This is useful if your operations have side effects like making a payment or sending an email.
Workflow IDs are also useful for communicating with workflows and developing interactive workflows; see Communicating with Workflows for more details.

For example:

```javascript
class Example {
  @DBOS.workflow()
  static async exampleWorkflow(var1: string, var2: string) {
    // ...
  }
}

async function main() {
  const myID: string = ...
  const handle = await DBOS.startWorkflow(Example, { workflowID: myID }).exampleWorkflow("one", "two");
  const result = await handle.getResult();
}
```

## Determinism

Workflows are in most respects normal TypeScript functions.
They can have loops, branches, conditionals, and so on.
However, a workflow function must be **deterministic**: if called multiple times with the same inputs, it should invoke the same steps with the same inputs in the same order (given the same return values from those steps).
If you need to perform a non-deterministic operation like accessing the database, calling a third-party API, generating a random number, or getting the local time, you shouldn't do it directly in a workflow function.
Instead, you should do all database operations in transactions and all other non-deterministic operations in steps.

For example, **don't do this**:

```javascript
async function exampleWorkflowFunction() {
  const choice = Math.random() > 0.5 ? 1 : 0;
  if (choice === 0) {
    await stepOne();
  } else {
    await stepTwo();
  }
}
const exampleWorkflow = DBOS.registerWorkflow(exampleWorkflowFunction);
```

Do this instead:

```javascript
async function exampleWorkflowFunction() {
  const choice = await DBOS.runStep(
    () => Promise.resolve(Math.random() > 0.5 ? 1 : 0),
    { name: "generateChoice" }
  );
  if (choice === 0) {
    await stepOne();
  } else {
    await stepTwo();
  }
}
const exampleWorkflow = DBOS.registerWorkflow(exampleWorkflowFunction);
```

### Running Steps In Parallel
Initiating several concurrent steps in a workflow, followed by awaiting them with `Promise.allSettled`, is valid as long as the steps are started in a deterministic order. For example, the following is allowed:
```typescript
const results = await Promise.allSettled([
  step1("arg1"),
  step2("arg2"),
  step3("arg3"),
  step4("arg4"),
]);
```
This is allowed because each step is started in a well-defined sequence before awaiting.

By contrast, the following is not allowed:
```typescript
const results = await Promise.allSettled([
  (async () => { await step1("arg1"); await step2("arg2"); })(),
  (async () => { await step3("arg3"); await step4("arg4"); })(),
]);
```
Here, `step2` and `step4` may start in either order, because their start times depend on how long `step1` and `step3` take.

If you need to run sequences of operations concurrently, start child workflows with `startWorkflow` and await the results from their `WorkflowHandle`s.
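
For example, a minimal sketch (where `sequenceOne` and `sequenceTwo` are hypothetical registered workflows that each run their steps in order):

```typescript
// Start both sequences as child workflows, then await their handles.
// Each child workflow runs its own steps sequentially and durably.
const handleOne = await DBOS.startWorkflow(sequenceOne)("arg1");
const handleTwo = await DBOS.startWorkflow(sequenceTwo)("arg2");
const resultOne = await handleOne.getResult();
const resultTwo = await handleTwo.getResult();
```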

Avoid using `Promise.all` because of how it handles errors and rejections. When any promise rejects, `Promise.all` immediately fails, leaving the other promises unresolved. If one of those later throws an unhandled exception, it can crash your Node.js process. Instead, prefer `Promise.allSettled`, which safely waits for all promises to complete and reports their outcomes.

## Workflow Timeouts

You can set a timeout for a workflow by passing a `timeoutMS` argument to `DBOS.startWorkflow`.
When the timeout expires, the workflow **and all its children** are cancelled.
Cancelling a workflow sets its status to `CANCELLED` and preempts its execution at the beginning of its next step.

Timeouts are **start-to-completion**: a workflow's timeout does not begin until the workflow starts execution.
Also, timeouts are **durable**: they are stored in the database and persist across restarts, so workflows can have very long timeouts.

Example syntax:

```javascript
async function taskFunction(task) {
  // ...
}
const taskWorkflow = DBOS.registerWorkflow(taskFunction);

async function main() {
  const task = ...
  const timeout = ... // Timeout in milliseconds
  const handle = await DBOS.startWorkflow(taskWorkflow, { timeoutMS: timeout })(task);
}
```

## Durable Sleep

You can use `DBOS.sleep()` to put your workflow to sleep for any period of time.
This sleep is **durable**&mdash;DBOS saves the wakeup time in the database so that even if the workflow is interrupted and restarted multiple times while sleeping, it still wakes up on schedule.

Sleeping is useful for scheduling a workflow to run in the future (even days, weeks, or months from now).
For example:

```javascript
@DBOS.workflow()
static async exampleWorkflow(timeToSleep, task) {
  await DBOS.sleep(timeToSleep);
  await runTask(task);
}
```

## Debouncing Workflows

You can create a `Debouncer` to debounce your workflows.
Debouncing delays workflow execution until some time has passed since the workflow has last been called.
This is useful for preventing wasted work when a workflow may be triggered multiple times in quick succession.
For example, if a user is editing an input field, you can debounce their changes to execute a processing workflow only after they haven't edited the field for some time:

### Debouncer

```typescript
new Debouncer<Args extends unknown[], Return>(
  params: DebouncerConfig<Args, Return>
)
```

```typescript
interface DebouncerConfig<Args extends unknown[], Return> {
  workflow: (...args: Args) => Promise<Return>;
  startWorkflowParams?: StartWorkflowParams;
  debounceTimeoutMs?: number;
}
```

**Parameters:**
- **workflow**: The workflow to debounce. Note that workflows from configured instances cannot be debounced.
- **startWorkflowParams**: Optional workflow parameters, as in `startWorkflow`. Applied to all workflows started from this debouncer.
- **debounceTimeoutMs**: After this time elapses since the first time a workflow is submitted from this debouncer, the workflow is started regardless of the debounce period.

### debouncer.debounce

```typescript
debouncer.debounce(
  debounceKey: string,
  debouncePeriodMs: number,
  ...args: Args
): Promise<WorkflowHandle<Return>>
```

Submit a workflow for execution but delay it by `debouncePeriodMs`.
Returns a handle to the workflow.
The workflow may be debounced again, which further delays its execution (up to `debounceTimeoutMs`).
When the workflow eventually executes, it uses the **last** set of inputs passed into `debounce`.
After the workflow begins execution, the next call to `debounce` starts the debouncing process again for a new workflow execution.

**Parameters:**
- **debounceKey**: A key used to group workflow executions that will be debounced together. For example, if the debounce key is set to customer ID, each customer's workflows would be debounced separately.
- **debouncePeriodMs**: Delay this workflow's execution by this period in milliseconds.
- **...args**: Variadic workflow arguments.

**Example Syntax**:

```typescript
async function processInput(userInput: string) {
  ...
}
const processInputWorkflow = DBOS.registerWorkflow(processInput);

// Each time a user submits a new input, debounce the processInput workflow.
// The workflow will wait until 60 seconds after the user stops submitting new inputs,
// then process the last input submitted.
const debouncer = new Debouncer({
  workflow: processInputWorkflow,
});

async function onUserInputSubmit(userId: string, userInput: string) {
  const debounceKey = userId;
  const debouncePeriodMs = 60000; // 60 seconds
  await debouncer.debounce(debounceKey, debouncePeriodMs, userInput);
}
```


## Workflow Communication

DBOS provides a few different ways to communicate with your workflows.
You can:

- Send messages to workflows
- Publish events from workflows for clients to read
- Stream values from workflows to clients


## Workflow Messaging and Notifications
You can send messages to a specific workflow.
This is useful for signaling a workflow or sending notifications to it while it's running.


### Send

```typescript
DBOS.send<T>(destinationID: string, message: T, topic?: string): Promise<void>;
```

You can call `DBOS.send()` to send a message to a workflow.
Messages can optionally be associated with a topic and are queued on the receiver per topic.

You can also call `send` from outside of your DBOS application with the DBOS Client.

### Recv

```typescript
DBOS.recv<T>(topic?: string, timeoutSeconds?: number): Promise<T | null>
```

Workflows can call `DBOS.recv()` to receive messages sent to them, optionally for a particular topic.
Each call to `recv()` waits for and consumes the next message to arrive in the queue for the specified topic, returning `null` if the wait times out. The default timeout is 60 seconds.
If the topic is not specified, this method only receives messages sent without a topic.

### Messages Example

Messages are especially useful for sending notifications to a workflow.
For example, in the e-commerce demo, the checkout workflow, after redirecting customers to a secure payments service, must wait for a notification from that service that the payment has finished processing.

To wait for this notification, the payments workflow uses `recv()`, executing failure-handling code if the notification doesn't arrive in time:

```javascript
@DBOS.workflow()
static async checkoutWorkflow(...): Promise<void> {
  ...
  const notification = await DBOS.recv<string>(PAYMENT_STATUS, timeout);
  if (notification) {
    ... // Handle the notification.
  } else {
    ... // Handle a timeout.
  }
}
```

A webhook waits for the payment processor to send the notification, then uses `send()` to forward it to the workflow:

```javascript
static async paymentWebhook(): Promise<void> {
  const notificationMessage = ... // Parse the notification.
  const workflowID = ... // Retrieve the workflow ID from notification metadata.
  await DBOS.send(workflowID, notificationMessage, PAYMENT_STATUS);
}
```

### Reliability Guarantees

All messages are persisted to the database, so if `send` completes successfully, the destination workflow is guaranteed to be able to `recv` it.
If you're sending a message from a workflow, DBOS guarantees exactly-once delivery.
If you're sending a message from normal TypeScript code, you can specify an idempotency key for `send` to guarantee exactly-once delivery.

## Workflow Events

Workflows can publish _events_, which are key-value pairs associated with the workflow.
They are useful for publishing information about the status of a workflow or to send a result to clients while the workflow is running.


### setEvent

```typescript
DBOS.setEvent<T>(key: string, value: T): Promise<void>
```

Any workflow can call `DBOS.setEvent` to publish a key-value pair, or update its value if it has already been published.

### getEvent

```typescript
DBOS.getEvent<T>(workflowID: string, key: string, timeoutSeconds?: number): Promise<T | null>
```

You can call `DBOS.getEvent` to retrieve the value published by a particular workflow ID for a particular key.
If the event does not yet exist, this call waits for it to be published, returning `null` if the wait times out. The default timeout is 60 seconds.

You can also call `getEvent` from outside of your DBOS application with the DBOS Client.

### Events Example

Events are especially useful for writing interactive workflows that communicate information to their caller.
For example, in the e-commerce demo, the checkout workflow, after validating an order, directs the customer to a secure payments service to handle credit card processing.
To communicate the payments URL to the customer, it uses events.

The checkout workflow emits the payments URL using `setEvent()`:

```javascript
@DBOS.workflow()
static async checkoutWorkflow(...): Promise<void> {
  ...
  const paymentsURL = ...
  await DBOS.setEvent(PAYMENT_URL, paymentsURL);
  ...
}
```

The HTTP handler that originally started the workflow uses `getEvent()` to await this URL, then redirects the customer to it:

```javascript
static async webCheckout(...): Promise<void> {
  const handle = await DBOS.startWorkflow(Shop).checkoutWorkflow(...);
  const url = await DBOS.getEvent<string>(handle.workflowID, PAYMENT_URL);
  if (url === null) {
    DBOS.koaContext.redirect(`${origin}/checkout/cancel`);
  } else {
    DBOS.koaContext.redirect(url);
  }
}
```

### Reliability Guarantees

All events are persisted to the database, so the latest version of an event is always retrievable.
Additionally, if `getEvent` is called in a workflow, the retrieved value is persisted in the database so workflow recovery can use that value, even if the event is later updated.

## Workflow Streaming

Workflows can stream data in real time to clients.
This is useful for streaming results from a long-running workflow or LLM call or for monitoring or progress reporting.


### Writing to Streams

```typescript
DBOS.writeStream<T>(key: string, value: T): Promise<void>
```

You can write values to a stream from a workflow or its steps using `DBOS.writeStream`.
A workflow may have any number of streams, each identified by a unique key.

When you are done writing to a stream, you should close it with `DBOS.closeStream`.
Otherwise, streams are automatically closed when the workflow terminates.

```typescript
DBOS.closeStream(key: string): Promise<void>
```

DBOS streams are immutable and append-only.
Writes to a stream from a workflow happen exactly-once.
Writes to a stream from a step happen at-least-once; if a step fails and is retried, it may write to the stream multiple times.
Readers will see all values written to the stream from all tries of the step in the order in which they were written.

**Example syntax:**

```typescript
async function producerWorkflowFunction() {
  await DBOS.writeStream("example_key", { step: 1, data: "value1" });
  await DBOS.writeStream("example_key", { step: 2, data: "value2" });
  await DBOS.closeStream("example_key"); // Signal completion
}

const producerWorkflow = DBOS.registerWorkflow(producerWorkflowFunction);
```

### Reading from Streams

```typescript
DBOS.readStream<T>(workflowID: string, key: string): AsyncGenerator<T, void, unknown>
```

You can read values from a stream from anywhere using `DBOS.readStream`.
This function reads values from a stream identified by a workflow ID and key, yielding each value in order until the stream is closed or the workflow terminates.

You can also read from a stream from outside a DBOS application with a DBOS Client.

**Example syntax:**

```typescript
for await (const value of DBOS.readStream(workflowID, "example_key")) {
  console.log(`Received: ${JSON.stringify(value)}`);
}
```

## Steps


When using DBOS workflows, you should call any function that performs complex operations or accesses external APIs or services as a _step_.
If a workflow is interrupted, upon restart it automatically resumes execution from the **last completed step**.

You can use `DBOS.runStep` to call a function as a step. For a function to be used as a step, it should have a return value that can be serialized as JSON, and it should not have non-durable side effects. ALWAYS call steps this way unless otherwise specified.

```javascript
async function generateRandomNumber() {
  return Math.random();
}

async function workflowFunction() {
  const randomNumber = await DBOS.runStep(() => generateRandomNumber(), { name: "generateRandomNumber" });
}
const workflow = DBOS.registerWorkflow(workflowFunction);
```

Alternatively, you can register a function as a step using `DBOS.registerStep`:
NEVER do this unless specifically asked, ALWAYS use DBOS.runStep instead.

```javascript
async function generateRandomNumber() {
  return Math.random();
}
const randomStep = DBOS.registerStep(generateRandomNumber);

async function workflowFunction() {
  const randomNumber = await randomStep();
}
const workflow = DBOS.registerWorkflow(workflowFunction);
```

Or use the `@DBOS.step()` decorator:
NEVER do this unless specifically asked, ALWAYS use DBOS.runStep instead.

```typescript
export class Example {
  @DBOS.step()
  static async generateRandomNumber() {
    return Math.random();
  }

  @DBOS.workflow()
  static async exampleWorkflow() {
    await Example.generateRandomNumber();
  }
}
```

You **cannot** call, start, or enqueue workflows from within steps.
These operations should be performed from workflow functions.
You can call one step from another step, but the called step becomes part of the calling step's execution rather than functioning as a separate step.

### Configurable Retries

You can optionally configure a step to automatically retry any exception a set number of times with exponential backoff.
This is useful for automatically handling transient failures, like making requests to unreliable APIs.
Retries are configurable through arguments to `DBOS.runStep`, `DBOS.registerStep`, or the step decorator:

```typescript
export interface StepConfig {
  retriesAllowed?: boolean; // Should failures be retried? (default false)
  intervalSeconds?: number; // Seconds to wait before the first retry attempt (default 1).
  maxAttempts?: number; // Maximum number of retry attempts (default 3). If errors occur more times than this, throw an exception.
  backoffRate?: number; // Multiplier by which the retry interval increases after a retry attempt (default 2).
}
```

For example, let's configure this step to retry exceptions (such as if `example.com` is temporarily down) up to 10 times:

```javascript
async function fetchFunction() {
  return await fetch("https://example.com").then(r => r.text());
}

async function workflowFunction() {
  const text = await DBOS.runStep(() => fetchFunction(), {
    name: "fetchFunction",
    retriesAllowed: true,
    maxAttempts: 10,
  });
}
```

Or if registering the step:

```javascript
async function fetchFunction() {
  return await fetch("https://example.com").then(r => r.text());
}
const fetchStep = DBOS.registerStep(fetchFunction, {
  retriesAllowed: true,
  maxAttempts: 10,
});
```

Or if using decorators:

```javascript
@DBOS.step({ retriesAllowed: true, maxAttempts: 10 })
static async exampleStep() {
  return await fetch("https://example.com").then(r => r.text());
}
```

If a step exhausts all `maxAttempts` retries, it throws a `DBOSMaxStepRetriesError` to the calling workflow.
If that exception is not caught, the workflow terminates.
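
To handle retry exhaustion without failing the workflow, you can catch the error in the workflow. Here's a minimal sketch (assuming `DBOSMaxStepRetriesError` is importable from `@dbos-inc/dbos-sdk` and reusing `fetchFunction` from above):

```typescript
import { DBOS, DBOSMaxStepRetriesError } from "@dbos-inc/dbos-sdk";

async function workflowFunction() {
  try {
    return await DBOS.runStep(() => fetchFunction(), {
      name: "fetchFunction",
      retriesAllowed: true,
      maxAttempts: 10,
    });
  } catch (error) {
    if (error instanceof DBOSMaxStepRetriesError) {
      // The step failed all 10 attempts; fall back or rethrow.
      DBOS.logger.error(`Error: ${(error as Error).message}`);
      return null;
    }
    throw error;
  }
}
const workflow = DBOS.registerWorkflow(workflowFunction);
```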

## Queues

You can use queues to run many workflows at once with managed concurrency.
Queues provide _flow control_, letting you manage how many workflows run at once or how often workflows are started.

To create a queue, specify its name:

```javascript
import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";

const queue = new WorkflowQueue("example_queue");
```

You can then enqueue any workflow by passing the queue as an argument to `DBOS.startWorkflow`.
Enqueuing a function submits it for execution and returns a handle to it.
Queued tasks are started in first-in, first-out (FIFO) order.

```javascript
const queue = new WorkflowQueue("example_queue");

class Tasks {
  @DBOS.workflow()
  static async processTask(task) {
    // ...
  }
}

async function main() {
  const task = ...
  const handle = await DBOS.startWorkflow(Tasks, { queueName: queue.name }).processTask(task);
}
```

### Queue Example

Here's an example of a workflow using a queue to process tasks in parallel:

```javascript
import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";

const queue = new WorkflowQueue("example_queue");

async function taskFunction(task) {
  // ...
}
const taskWorkflow = DBOS.registerWorkflow(taskFunction, { name: "taskWorkflow" });

async function queueFunction(tasks) {
  const handles = [];

  // Enqueue each task so all tasks are processed concurrently.
  for (const task of tasks) {
    handles.push(await DBOS.startWorkflow(taskWorkflow, { queueName: queue.name })(task));
  }

  // Wait for each task to complete and retrieve its result.
  // Return the results of all tasks.
  const results = [];
  for (const h of handles) {
    results.push(await h.getResult());
  }
  return results;
}
const queueWorkflow = DBOS.registerWorkflow(queueFunction, { name: "queueWorkflow" });
```

### Enqueueing from Another Application

Often, you want to enqueue a workflow from outside your DBOS application.
For example, let's say you have an API server and a data processing service.
You're using DBOS to build a durable data pipeline in the data processing service.
When the API server receives a request, it should enqueue the data pipeline for execution on the data processing service.

You can use the DBOS Client to enqueue workflows from outside your DBOS application by connecting directly to your DBOS application's system database.
Since the DBOS Client is designed to be used from outside your DBOS application, workflow and queue metadata must be specified explicitly.

For example, this code enqueues the `dataPipeline` workflow on the `pipelineQueue` queue with `task` as an argument.

```ts
import { DBOSClient } from "@dbos-inc/dbos-sdk";

const client = await DBOSClient.create({ systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL });

type ProcessTask = typeof Tasks.processTask;
await client.enqueue<ProcessTask>(
  {
    workflowName: 'dataPipeline',
    queueName: 'pipelineQueue',
  },
  task,
);
```

### Managing Concurrency

You can control how many workflows from a queue run simultaneously by configuring concurrency limits.
This helps prevent resource exhaustion when workflows consume significant memory or processing power.

#### Worker Concurrency

Worker concurrency sets the maximum number of workflows from a queue that can run concurrently on a single DBOS process.
This is particularly useful for resource-intensive workflows to avoid exhausting the resources of any process.
For example, this queue has a worker concurrency of 5, so each process will run at most 5 workflows from this queue simultaneously:
```javascript
import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";

const queue = new WorkflowQueue("example_queue", { workerConcurrency: 5 });
```

Note that DBOS uses `executorID` to distinguish processes&mdash;this is set automatically by Conductor and Cloud, but if those are not used it must be set to a unique value for each process through configuration.
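
For example, here is a minimal sketch of setting a unique executor ID per worker process via the `executorID` configuration field (the `WORKER_ID` environment variable is a hypothetical example):

```typescript
DBOS.setConfig({
  name: "my-app",
  systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  // Must be unique for each process when not using Conductor or Cloud.
  executorID: process.env.WORKER_ID,
});
```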

#### Global Concurrency

Global concurrency limits the total number of workflows from a queue that can run concurrently across all DBOS processes in your application.
For example, this queue will have a maximum of 10 workflows running simultaneously across your entire application.

:::warning
Worker concurrency limits are recommended for most use cases.
Take care when using a global concurrency limit, as any `PENDING` workflow on the queue counts toward the limit, including workflows from previous application versions.
:::

```javascript
import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";

const queue = new WorkflowQueue("example_queue", { concurrency: 10 });
```

#### In-Order Processing

You can use a queue with `concurrency: 1` to guarantee sequential, in-order processing of events.
Only a single event will be processed at a time.
For example, this app processes events sequentially in the order of their arrival:

```javascript
import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";
import express from "express";

const serialQueue = new WorkflowQueue("in_order_queue", { concurrency: 1 });
const app = express();

class Tasks {
  @DBOS.workflow()
  static async processTask(task) {
    // ... process task
  }
}

app.get("/events/:event", async (req, res) => {
  await DBOS.startWorkflow(Tasks, { queueName: serialQueue.name }).processTask(req.params);
  res.send("Workflow Started!");
});

// Launch DBOS and start the server
async function main() {
  DBOS.setConfig({
    name: "dbos-node-starter",
    systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  });
  await DBOS.launch();
  app.listen(3000, () => {});
}

main().catch(console.log);
```


### Rate Limiting

You can set _rate limits_ for a queue, limiting the number of functions that it can start in a given period.
Rate limits are global across all DBOS processes using this queue.
For example, this queue has a limit of 50 with a period of 30 seconds, so it may not start more than 50 functions in 30 seconds:

```javascript
const queue = new WorkflowQueue("example_queue", { rateLimit: { limitPerPeriod: 50, periodSec: 30 } });
```

Rate limits are especially useful when working with a rate-limited API, such as many LLM APIs.

### Setting Timeouts

You can set a timeout for an enqueued workflow by passing a `timeoutMS` argument to `DBOS.startWorkflow`.
When the timeout expires, the workflow **and all its children** are cancelled.
Cancelling a workflow sets its status to `CANCELLED` and preempts its execution at the beginning of its next step.

Timeouts are **start-to-completion**: a workflow's timeout does not begin until the workflow is dequeued and starts execution.
Also, timeouts are **durable**: they are stored in the database and persist across restarts, so workflows can have very long timeouts.

Example syntax:

```javascript
const queue = new WorkflowQueue("example_queue");

async function taskFunction(task) {
  // ...
}
const taskWorkflow = DBOS.registerWorkflow(taskFunction, { name: "taskWorkflow" });

async function main() {
  const task = ...
  const timeout = ... // Timeout in milliseconds
  const handle = await DBOS.startWorkflow(taskWorkflow, { queueName: queue.name, timeoutMS: timeout })(task);
}
```

### Partitioning Queues

You can **partition** queues to distribute work across dynamically created queue partitions.
When you enqueue a workflow on a partitioned queue, you must supply a queue partition key.
Partitioned queues dequeue workflows and apply flow control limits for individual partitions, not for the entire queue.
Essentially, you can think of each partition as a "subqueue" you dynamically create by enqueueing a workflow with a partition key.

For example, suppose you want your users to each be able to run at most one task at a time.
You can do this with a partitioned queue with a maximum concurrency limit of 1 where the partition key is user ID.

**Example Syntax**

```ts
const queue = new WorkflowQueue("example_queue", { partitionQueue: true, concurrency: 1 });

async function onUserTaskSubmission(userID: string, task: Task) {
  // Partition the task queue by user ID. As the queue has a
  // maximum concurrency of 1, this means that at most one
  // task can run at once per user (but tasks from different
  // users can run concurrently).
  await DBOS.startWorkflow(taskWorkflow, { queueName: queue.name, enqueueOptions: { queuePartitionKey: userID } })(task);
}
```

Sometimes, you want to apply global or per-worker limits to a partitioned queue.
You can do this with **multiple levels of queueing**.
Create two queues: a partitioned queue with per-partition limits and a non-partitioned queue with global limits.
Enqueue a "concurrency manager" workflow to the partitioned queue, which then enqueues your actual workflow
to the non-partitioned queue and awaits its result.
This ensures both queues' flow control limits are enforced on your workflow.
For example:

```ts
// By using two levels of queueing, we enforce both a concurrency limit of 1 on each partition
// and a global concurrency limit of 5, meaning that no more than 5 tasks can run concurrently
// across all partitions (and at most one task per partition).
const concurrencyQueue = new WorkflowQueue("concurrency-queue", { concurrency: 5 });
const partitionedQueue = new WorkflowQueue("partitioned-queue", { partitionQueue: true, concurrency: 1 });

async function onUserTaskSubmission(userID: string, task: Task) {
  // First, enqueue a "concurrency manager" workflow to the partitioned
  // queue to enforce per-partition limits.
  await DBOS.startWorkflow(concurrencyManager, {
    queueName: partitionedQueue.name,
    enqueueOptions: { queuePartitionKey: userID },
  })(task);
}

async function concurrencyManagerFunc(task: Task) {
  // The "concurrency manager" workflow enqueues the processTask
  // workflow on the non-partitioned queue and awaits its results
  // to enforce global flow control limits.
  const handle = await DBOS.startWorkflow(processTask, { queueName: concurrencyQueue.name })(task);
  return await handle.getResult();
}
const concurrencyManager = DBOS.registerWorkflow(concurrencyManagerFunc, { name: "concurrencyManager" });

async function processTaskFunc(task: Task) {
  // ...
}
const processTask = DBOS.registerWorkflow(processTaskFunc, { name: "processTask" });
```

### Deduplication

You can set a deduplication ID for an enqueued workflow as an argument to `DBOS.startWorkflow`.
At any given time, only one workflow with a specific deduplication ID can be enqueued in the specified queue.
If a workflow with a deduplication ID is currently enqueued or actively executing (status `ENQUEUED` or `PENDING`), subsequent attempts to enqueue a workflow with the same deduplication ID in the same queue will raise a `DBOSQueueDuplicatedError` exception.

For example, this is useful if you only want to have one workflow active at a time per user&mdash;set the deduplication ID to the user's ID.

Example syntax:

```javascript
const queue = new WorkflowQueue("example_queue");

async function taskFunction(task) {
  // ...
}
const taskWorkflow = DBOS.registerWorkflow(taskFunction, { name: "taskWorkflow" });

async function main() {
  const task = ...
  const dedup: string = ...
  try {
    const handle = await DBOS.startWorkflow(taskWorkflow, { queueName: queue.name, enqueueOptions: { deduplicationID: dedup } })(task);
  } catch (e) {
    // Handle DBOSQueueDuplicatedError
  }
}
```

### Priority

You can set a priority for an enqueued workflow as an argument to `DBOS.startWorkflow`.
Workflows with the same priority are dequeued in **FIFO (first in, first out)** order. Priority values can range from `1` to `2,147,483,647`, where **a low number indicates a higher priority**.
If using priority, you must set `usePriority: true` on your queue.

:::tip
Workflows without assigned priorities have the highest priority and are dequeued before workflows with assigned priorities.
:::

Example syntax:

```javascript
const queue = new WorkflowQueue("example_queue", { usePriority: true });

async function taskFunction(task) {
  // ...
}
const taskWorkflow = DBOS.registerWorkflow(taskFunction, { name: "taskWorkflow" });

async function main() {
  const task = ...
  const priority: number = ...
  const handle = await DBOS.startWorkflow(taskWorkflow, { queueName: queue.name, enqueueOptions: { priority: priority } })(task);
}
```

### Explicit Queue Listening

By default, a process running DBOS listens to (dequeues workflows from) all declared queues.
However, sometimes you only want a process to listen to a specific list of queues.
You can configure `listenQueues` in your DBOS configuration to explicitly tell a process running DBOS to only listen to a specific set of queues.

This is particularly useful when managing heterogeneous workers, where specific tasks should execute on specific physical servers.
For example, say you have a mix of CPU workers and GPU workers and you want CPU tasks to only execute on CPU workers and GPU tasks to only execute on GPU workers.
You can create separate queues for CPU and GPU tasks and configure each type of worker to only listen to the appropriate queue:

```javascript
import { DBOS, DBOSConfig, WorkflowQueue } from "@dbos-inc/dbos-sdk";

const cpuQueue = new WorkflowQueue("cpu_queue");
const gpuQueue = new WorkflowQueue("gpu_queue");

async function main() {
  const workerType = process.env.WORKER_TYPE; // "cpu" or "gpu"
  const config: DBOSConfig = { /* ... */ };

  if (workerType === "gpu") {
    // GPU workers will only dequeue and execute workflows from the GPU queue
    config.listenQueues = [gpuQueue];
  } else if (workerType === "cpu") {
    // CPU workers will only dequeue and execute workflows from the CPU queue
    config.listenQueues = [cpuQueue];
  }

  DBOS.setConfig(config);
  await DBOS.launch();
}
```

Note that `listenQueues` only controls what workflows are dequeued, not what workflows can be enqueued, so you can freely enqueue tasks onto the GPU queue from a CPU worker for execution on a GPU worker, and vice versa.

## Classes

You can use class instance methods as workflows and steps.
Any class instance method can be freely used as a step using the `DBOS.step` decorator or `DBOS.runStep`; there are no special requirements.
To use a class instance method as a workflow, you must use the `DBOS.workflow` decorator and the class must inherit from `ConfiguredInstance`.
For example:

```typescript
class MyClass extends ConfiguredInstance {
  cfg: MyConfig;
  constructor(name: string, config: MyConfig) {
    super(name);
    this.cfg = config;
  }

  override async initialize(): Promise<void> {
    // ... Validate this.cfg; will be called at DBOS.launch()
  }

  @DBOS.workflow()
  async testWorkflow(p: string): Promise<void> {
    // ... Operations that use this.cfg
  }
}

const config: MyConfig = { /* ... */ };
const myClassInstance = new MyClass('instanceA', config);
```

When you create a new instance of such a class, the constructor for the base `ConfiguredInstance` must be called with a `name`.
This `name` should be unique among instances of the same class.
Additionally, all `ConfiguredInstance` classes must be instantiated before DBOS.launch() is called.

The reason for these requirements is to enable workflow recovery. When you create a new instance of such a class, DBOS stores it in a global registry indexed by `name`. When DBOS needs to recover a workflow belonging to that class, it looks up the `name` so it can run the workflow using the right class instance. While names are used internally by DBOS Transact to find the correct object instance across system restarts, they are also potentially useful for monitoring, tracing, and debugging.
You should AVOID using `ConfiguredInstance` if at all possible and instead use `registerWorkflow` on regular non-class functions.


### Testing DBOS Functions

Unless otherwise specified, recommend testing DBOS applications with jest.
There are two approaches to testing workflows:

1. **Unit testing.** Test workflow logic in isolation by mocking the DBOS interface. Does not require Postgres.
2. **Integration testing.** Test workflows with real DBOS infrastructure. Requires a Postgres test database.

#### Unit Testing

You can unit test workflows by mocking the DBOS interface with `jest.mock`.
It's important to mock `DBOS.registerWorkflow` to directly return the workflow function instead of wrapping it with durable workflow code:

```ts
// Mock DBOS
jest.mock('@dbos-inc/dbos-sdk', () => ({
DBOS: {
// IMPORTANT: Mock DBOS.registerWorkflow to return the workflow function
registerWorkflow: jest.fn((fn) => fn),
setEvent: jest.fn(),
recv: jest.fn(),
startWorkflow: jest.fn(),
workflowID: 'test-workflow-id-123',
},
}));
```

Then call the workflow function directly in your tests and assert on the mocked calls.
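
For example, here is a minimal sketch of such a test (where `exampleWorkflow` is a hypothetical workflow that publishes an event with `DBOS.setEvent`):

```typescript
import { DBOS } from '@dbos-inc/dbos-sdk';
import { exampleWorkflow } from './workflows';

test('exampleWorkflow publishes a result event', async () => {
  // Because registerWorkflow is mocked to return the function itself,
  // this runs the workflow logic directly, without DBOS infrastructure.
  await exampleWorkflow('some-input');
  expect(DBOS.setEvent).toHaveBeenCalledWith('result', expect.anything());
});
```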

#### Integration Testing

Integration tests require a Postgres database. You should reset DBOS and its system database between tests:

```ts
describe('example integration tests', () => {
  beforeEach(async () => {
    const databaseUrl = process.env.DBOS_TEST_DATABASE_URL;

    // Shut down DBOS in case a previous test launched it.
    await DBOS.shutdown();

    // Configure and launch DBOS
    const dbosTestConfig: DBOSConfig = {
      name: "my-integration-test",
      systemDatabaseUrl: databaseUrl,
    };
    DBOS.setConfig(dbosTestConfig);
    await DBOS.launch();
  }, 10000);

  afterEach(async () => {
    await DBOS.shutdown();
  });

  it('my integration test', async () => {
    // test goes here
  });
});
```


### Logging

ALWAYS log errors like this:

```typescript
DBOS.logger.error(`Error: ${(error as Error).message}`);
```

## Workflow Handles

A workflow handle represents the state of a particular active or completed workflow execution.
You obtain a workflow handle when using `DBOS.startWorkflow` to start a workflow in the background.
If you know a workflow's identity, you can also retrieve its handle using `DBOS.retrieveWorkflow`.

Workflow handles have the following methods:

### handle.workflowID

```typescript
handle.workflowID(): string;
```

Retrieve the ID of the workflow.

### handle.getResult

```typescript
handle.getResult(): Promise<R>;
```

Wait for the workflow to complete, then return its result.

### handle.getStatus

```typescript
handle.getStatus(): Promise<WorkflowStatus | null>;
```

Retrieve the WorkflowStatus of the workflow, or `null` if not found.
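
For example, a minimal sketch combining these methods (assuming `taskWorkflow` is a registered workflow and `task` its input):

```typescript
const handle = await DBOS.startWorkflow(taskWorkflow)(task);
DBOS.logger.info(`Started workflow ${handle.workflowID()}`);
const status = await handle.getStatus();
DBOS.logger.info(`Current status: ${status?.status}`);
const result = await handle.getResult();
```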

### Workflow Status

Some workflow introspection and management methods return a `WorkflowStatus`.
This object has the following definition:

```typescript
export interface WorkflowStatus {
  // The workflow ID
  readonly workflowID: string;
  // The status of the workflow. One of PENDING, SUCCESS, ERROR, ENQUEUED, CANCELLED, or MAX_RECOVERY_ATTEMPTS_EXCEEDED.
  readonly status: string;
  // The name of the workflow function.
  readonly workflowName: string;
  // The name of the workflow's class, if any.
  readonly workflowClassName: string;
  // The name with which the workflow's class instance was configured, if any.
  readonly workflowConfigName?: string;
  // If the workflow was enqueued, the name of the queue.
  readonly queueName?: string;

  // The deserialized workflow inputs.
  readonly input?: unknown[];
  // The workflow's deserialized output, if any.
  readonly output?: unknown;
  // The error thrown by the workflow, if any.
  readonly error?: unknown;

  // The ID of the executor (process) that most recently executed this workflow.
  readonly executorId?: string;
  // The application version on which this workflow started.
  readonly applicationVersion?: string;

  // Workflow start time, as a UNIX epoch timestamp in milliseconds.
  readonly createdAt: number;
  // Last time the workflow status was updated, as a UNIX epoch timestamp in milliseconds. For a completed workflow, this is the workflow completion timestamp.
  readonly updatedAt?: number;

  // The timeout specified for this workflow, if any. Timeouts are start-to-completion.
  readonly timeoutMS?: number;
  // The deadline at which this workflow times out, if any. Not set until the workflow begins execution.
  readonly deadlineEpochMS?: number;
  // Unique queue deduplication ID, if any. Deduplication IDs are unset when the workflow completes.
  readonly deduplicationID?: string;
  // Priority of the workflow on a queue, ranging from 1 to 2,147,483,647. Default 0 (highest priority).
  readonly priority: number;
  // If this workflow is enqueued on a partitioned queue, its partition key.
  readonly queuePartitionKey?: string;

  // If this workflow was forked from another, that workflow's ID.
  readonly forkedFrom?: string;
}
```

## DBOS Variables

### DBOS.workflowID

```typescript
DBOS.workflowID: string | undefined;
```

Return the ID of the current workflow, if in a workflow.

### DBOS.stepID

```typescript
DBOS.stepID: number | undefined;
```

Return the unique ID of the current step within a workflow.

### DBOS.stepStatus

```typescript
DBOS.stepStatus: StepStatus | undefined;
```

Return the status of the currently executing step.
This object has the following properties:

```typescript
interface StepStatus {
  // The unique ID of this step in its workflow.
  stepID: number;
  // For steps with automatic retries, which attempt number (zero-indexed) is currently executing.
  currentAttempt?: number;
  // For steps with automatic retries, the maximum number of attempts that will be made before the step fails.
  maxAttempts?: number;
}
```
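
For example, a minimal sketch of a step that logs its retry progress using `DBOS.stepStatus`:

```typescript
async function fetchWithLogging() {
  const status = DBOS.stepStatus;
  if (status?.currentAttempt !== undefined) {
    // currentAttempt is zero-indexed, so add 1 for human-readable logging.
    DBOS.logger.info(`Step ${status.stepID}: attempt ${status.currentAttempt + 1} of ${status.maxAttempts}`);
  }
  return await fetch("https://example.com").then((r) => r.text());
}
```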

### DBOS.applicationVersion

```typescript
DBOS.applicationVersion: string
```

Return the current application version.

### DBOS.executorID

```typescript
DBOS.executorID: string
```

Retrieve the current executor ID, a unique process ID used to identify the application instance in distributed environments.

## Workflow Management Methods

### DBOS.listWorkflows

```typescript
DBOS.listWorkflows(
  input: GetWorkflowsInput
): Promise<WorkflowStatus[]>
```

```typescript
interface GetWorkflowsInput {
  workflowIDs?: string[]; // Retrieve workflows with these IDs.
  workflowName?: string; // Retrieve workflows with this name.
  status?: string; // Retrieve workflows with this status (must be `ENQUEUED`, `PENDING`, `SUCCESS`, `ERROR`, `CANCELLED`, or `RETRIES_EXCEEDED`).
  startTime?: string; // Retrieve workflows started after this (RFC 3339-compliant) timestamp.
  endTime?: string; // Retrieve workflows started before this (RFC 3339-compliant) timestamp.
  authenticatedUser?: string; // Retrieve workflows run by this authenticated user.
  applicationVersion?: string; // Retrieve workflows started on this application version.
  executorId?: string; // Retrieve workflows run by this executor ID.
  workflow_id_prefix?: string; // Retrieve workflows whose IDs have this prefix.
  queueName?: string; // If this workflow is enqueued, on which queue.
  queuesOnly?: boolean; // Return only workflows that are actively enqueued.
  forkedFrom?: string; // Retrieve workflows forked from this workflow ID.
  limit?: number; // Return up to this many workflows. Workflows are ordered by creation time.
  offset?: number; // Skip this many workflows. Workflows are ordered by creation time.
  sortDesc?: boolean; // Sort the workflows in descending order by creation time (default ascending order).
  loadInput?: boolean; // Load the input of the workflow (default true).
  loadOutput?: boolean; // Load the output of the workflow (default true).
}
```

Retrieve a list of WorkflowStatus of all workflows matching specified criteria.
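
For example, a minimal sketch listing recent failed workflows without loading their inputs or outputs:

```typescript
const failed = await DBOS.listWorkflows({
  status: "ERROR",
  sortDesc: true,
  limit: 20,
  loadInput: false,
  loadOutput: false,
});
for (const wf of failed) {
  DBOS.logger.error(`Workflow ${wf.workflowID} (${wf.workflowName}) failed`);
}
```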

### DBOS.listQueuedWorkflows

```typescript
DBOS.listQueuedWorkflows(
  input: GetWorkflowsInput
): Promise<WorkflowStatus[]>
```

Retrieve a list of WorkflowStatus of all **currently enqueued** (status `PENDING` or `ENQUEUED`) workflows matching specified criteria.
The input type is the same as `DBOS.listWorkflows`; this method is equivalent to calling `DBOS.listWorkflows` with `queuesOnly` set.

### DBOS.listWorkflowSteps
```typescript
DBOS.listWorkflowSteps(
  workflowID: string
): Promise<StepInfo[] | undefined>
```

Retrieve the steps of a workflow. Returns `undefined` if the workflow is not found.
This is a list of `StepInfo` objects, with the following structure:

```typescript
interface StepInfo {
  // The unique ID of the step in the workflow. Zero-indexed.
  readonly functionID: number;
  // The name of the step.
  readonly name: string;
  // The step's output, if any.
  readonly output: unknown;
  // The error the step threw, if any.
  readonly error: Error | null;
  // If the step starts or retrieves the result of a workflow, its ID.
  readonly childWorkflowID: string | null;
  // The Unix epoch timestamp at which this step started.
  readonly startedAtEpochMs?: number;
  // The Unix epoch timestamp at which this step completed.
  readonly completedAtEpochMs?: number;
}
```

### DBOS.cancelWorkflow

```typescript
DBOS.cancelWorkflow(
  workflowID: string
): Promise<void>
```

Cancel a workflow.
This sets its status to `CANCELLED`, removes it from its queue (if it is enqueued), and preempts its execution (interrupting it at the beginning of its next step).

### DBOS.resumeWorkflow

```typescript
DBOS.resumeWorkflow<T>(
  workflowID: string
): Promise<WorkflowHandle<Awaited<T>>>
```

Resume a workflow.
This immediately starts it from its last completed step.
You can use this to resume workflows that are cancelled or have exceeded their maximum recovery attempts.
You can also use this to start an enqueued workflow immediately, bypassing its queue.
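
For example, a minimal sketch cancelling a workflow and later resuming it:

```typescript
// Cancel the workflow, interrupting it at the start of its next step.
await DBOS.cancelWorkflow(workflowID);
// ...later, resume it from its last completed step.
const handle = await DBOS.resumeWorkflow(workflowID);
const result = await handle.getResult();
```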

### DBOS.forkWorkflow

```typescript
DBOS.forkWorkflow<T>(
  workflowID: string,
  startStep: number,
  options?: { newWorkflowID?: string; applicationVersion?: string; timeoutMS?: number },
): Promise<WorkflowHandle<Awaited<T>>>
```

Start a new execution of a workflow from a specific step.
The input step ID (`startStep`) must match the `functionID` of the step returned by `listWorkflowSteps`.
The specified `startStep` is the step from which the new workflow will start, so any steps whose ID is less than `startStep` will not be re-executed.

**Parameters:**
- **workflowID**: The ID of the workflow to fork.
- **startStep**: The ID of the step from which to start the forked workflow. Must match the `functionID` of the step in the original workflow execution.
- **newWorkflowID**: The ID of the new workflow created by the fork. If not specified, a random UUID is used.
- **applicationVersion**: The application version on which the forked workflow will run. Useful for "patching" workflows that failed due to a bug in the previous application version.
- **timeoutMS**: A timeout for the forked workflow in milliseconds.
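
For example, a minimal sketch that inspects a workflow's steps and forks a new execution from its first failed step:

```typescript
const steps = await DBOS.listWorkflowSteps(workflowID);
const failedStep = steps?.find((step) => step.error !== null);
if (failedStep) {
  // Steps before failedStep.functionID are not re-executed.
  const handle = await DBOS.forkWorkflow(workflowID, failedStep.functionID);
  const result = await handle.getResult();
}
```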

## Upgrading Workflow Code

A challenge encountered when operating long-running durable workflows in production is **how to deploy breaking changes without disrupting in-progress workflows.**
A breaking change to a workflow is one that changes which steps are run, or the order in which the steps are run.
If a breaking change was made to a workflow and that workflow is replayed by the recovery system, the checkpoints created by the previous version of the code may not match the steps called by the workflow in the new version of the code, causing recovery to fail.

DBOS supports two strategies for safely upgrading workflow code: **patching** and **versioning**.

### Patching

In patching, the result of a call to `DBOS.patch()` is used to conditionally execute the new code.
`DBOS.patch()` returns `true` for new calls (those executing after the breaking change) and `false` for old calls (those that executed before the breaking change).
Therefore, if `DBOS.patch()` returns `true`, the workflow should follow the new code path, otherwise it must follow the prior codepath.

```typescript
DBOS.patch(
  patchName: string
): Promise<boolean>
```

**Parameters:**
- `patchName`: The name to give the patch marker that will be inserted into workflow history.

For example, let's say our original workflow is:

```typescript
@DBOS.workflow()
static async workflow() {
  await foo();
  await bar();
}
```

We want to replace the call to `foo()` with a call to `baz()`.
This is a breaking change because it changes what steps run.
We can make this breaking change safely using a patch:

```typescript
@DBOS.workflow()
static async workflow() {
  if (await DBOS.patch('use-baz')) {
    await baz();
  } else {
    await foo();
  }
  await bar();
}
```

Now, new workflows will run `baz()`, while old workflows will reexecute `foo()`.

### Deprecating and Removing Patches

Patches add complexity and runtime overhead; fortunately they don't need to stay in your code forever.
Once all workflows that started before you deployed the patch are complete, you can safely remove patches from your code.

First, you must deprecate the patch with `DBOS.deprecatePatch()`.
`DBOS.deprecatePatch` must be used for a transition period prior to fully removing the patch, as it allows coexistence with any ongoing workflows that used `DBOS.patch()`.

```typescript
DBOS.deprecatePatch(
  patchName: string
): Promise<boolean>
```

Always returns `true`. Safely bypasses a patch marker at the current point in workflow history if present.

**Parameters:**
- `patchName`: The name of the patch marker to be bypassed.

For example, here's how to deprecate the patch above:

```typescript
@DBOS.workflow()
static async workflow() {
  if (await DBOS.deprecatePatch("use-baz")) { // always true
    await baz();
  }
  await bar();
}
```

Then, when all workflows that started before you deprecated the patch are complete, you can remove the patch entirely:

```typescript
@DBOS.workflow()
static async workflow() {
  await baz();
  await bar();
}
```

If any mistakes happen during the process (a breaking change is not patched, or a patch is deprecated or removed prematurely), the workflow will throw a `DBOSUnexpectedStepError` pointing to the step where the problem occurred.

### Versioning

When using versioning, DBOS **versions** applications and workflows, and only continues workflow execution with the same application version that started the workflow.
All workflows are tagged with the application version on which they started.
By default, application version is automatically computed from a hash of workflow source code.
However, you can set your own version through configuration.

```typescript
const config: DBOSConfig = {
  // ...
  applicationVersion: '1.0.0',
};
```

When DBOS tries to recover workflows, it only recovers workflows whose version matches the current application version.
This prevents recovery of workflows that depend on different code.
You cannot change the version of a workflow, but you can use `DBOS.forkWorkflow` to restart a workflow from a specific step on a specific code version.
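
For example, a minimal sketch rerunning a workflow from its first step on a newer application version:

```typescript
// Fork from step 0 onto version "1.0.1", e.g. after deploying a fix.
const handle = await DBOS.forkWorkflow(workflowID, 0, { applicationVersion: "1.0.1" });
const result = await handle.getResult();
```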

## Configuring DBOS

To configure DBOS, pass in a configuration with `DBOS.setConfig` before you call `DBOS.launch`.
For example:

```javascript
DBOS.setConfig({
  name: 'my-app',
  systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
});
await DBOS.launch();
```

A configuration object has the following fields.
All fields except `name` are optional.

```typescript
export interface DBOSConfig {
  name?: string;
  applicationVersion?: string;
  executorID?: string;

  systemDatabaseUrl?: string;
  systemDatabasePoolSize?: number;
  systemDatabaseSchemaName?: string;
  systemDatabasePool?: Pool;

  enableOTLP?: boolean;
  logLevel?: string;
  otlpLogsEndpoints?: string[];
  otlpTracesEndpoints?: string[];

  runAdminServer?: boolean;
  adminPort?: number;

  listenQueues?: WorkflowQueue[];

  serializer?: DBOSSerializer;
}
```

- **name**: Your application's name.
- **applicationVersion**: The code version for this application and its workflows.
- **executorID**: A unique process ID used to identify the application instance in distributed environments.
- **systemDatabaseUrl**: A connection string to a Postgres database in which DBOS can store internal state. The supported format is:
```
postgresql://[username]:[password]@[hostname]:[port]/[database name]
```

The default is:

```
postgresql://postgres:dbos@localhost:5432/[application name]_dbos_sys
```
If the Postgres database referenced by this connection string does not exist, DBOS will attempt to create it.
- **systemDatabasePoolSize**: The size of the connection pool used for the DBOS system database. Defaults to 10.
- **systemDatabaseSchemaName**: Postgres schema name for DBOS system tables. Defaults to `dbos`.
- **systemDatabasePool**: A custom `node-postgres` connection pool to use to connect to your system database. If provided, DBOS will not create a connection pool but use this instead.
- **enableOTLP**: Enable DBOS OpenTelemetry tracing and export. Defaults to `false`.
- **logLevel**: Configure the DBOS logger severity. Defaults to `info`.
- **otlpTracesEndpoints**: DBOS operations automatically generate OpenTelemetry Traces. Use this field to declare a list of OTLP-compatible receivers.
- **otlpLogsEndpoints**: DBOS operations automatically generate OpenTelemetry Logs. Use this field to declare a list of OTLP-compatible receivers.
- **runAdminServer**: Whether to run an HTTP admin server for workflow management operations. Defaults to `true`.
- **adminPort**: The port on which the admin server runs. Defaults to 3001.
- **listenQueues**: This process should only listen to (dequeue and execute workflows from) these queues.
- **serializer**: A custom serializer for the system database. Must match the `DBOSSerializer` interface with `stringify` and `parse` methods.
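
For example, a minimal sketch of a custom serializer, assuming `DBOSSerializer` is just the `{ stringify, parse }` pair described above (`superjson` is one hypothetical choice of serialization library):

```typescript
import superjson from "superjson";

DBOS.setConfig({
  name: "my-app",
  systemDatabaseUrl: process.env.DBOS_SYSTEM_DATABASE_URL,
  serializer: {
    // superjson round-trips Dates, Maps, Sets, and other values plain JSON cannot.
    stringify: (value: unknown) => superjson.stringify(value),
    parse: (text: string) => superjson.parse(text),
  },
});
```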