Build reliable
software effortlessly

Add durable workflows to your code in minutes. Make your apps resilient to any failure. Go from vibe-coded software to production-ready.

Build with your stack
SOC 2 Compliant
2024 Gartner® Cool Vendor™

Durable workflows, with endless possibilities

Orchestrate durable workflows

Write your business logic in normal code, with branches, loops, subtasks, and retries. DBOS makes it resilient to any failure.

# Define a durable checkout workflow
@DBOS.workflow()
def checkout_workflow(items):
    # Step 1: Create the order
    order = create_order()

    # Step 2: Reserve inventory for the items
    reserve_inventory(order, items)

    # Step 3: Process the payment
    payment_status = process_payment(order, items)

    # Step 4: If paid, fulfill the order
    if payment_status == 'paid':
        fulfill_order(order)
    else:
        # If payment fails, release inventory and cancel the order
        undo_reserve_inventory(order, items)
        cancel_order(order)
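
Each step above (create_order, reserve_inventory, process_payment, and so on) would typically be an ordinary Python function annotated as a DBOS step. Here is a minimal sketch of one of them with automatic retries enabled; the retry parameter names and the payment_gateway helper are assumptions for illustration, not part of the example above:

# Sketch: define process_payment as a step with automatic retries
# (retry parameter names are assumptions; check the DBOS docs)
@DBOS.step(retries_allowed=True, interval_seconds=1, max_attempts=3, backoff_rate=2)
def process_payment(order, items):
    # Charge the customer through a payment provider (hypothetical helper)
    charge = payment_gateway.charge(order, items)

    # Return 'paid' or 'failed' so the workflow can branch on the result
    return 'paid' if charge.succeeded else 'failed'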

Process your events exactly once

Consume events exactly once, with no need to worry about timeouts or offsets.

# Listen for new messages on the "alerts-topic" Kafka topic
@DBOS.kafka_consumer(config, ["alerts-topic"])

# Define a durable workflow triggered by each Kafka message
@DBOS.workflow()
def process_kafka_alerts(msg: KafkaMessage):
    # Parse the message payload into a list of alerts (assumed to be JSON-encoded)
    alerts = json.loads(msg.value)

    # Loop through each alert and respond accordingly
    for alert in alerts:
        respond_to_alert(alert)

Cron jobs made easy

Schedule your durable workflows to run exactly once per time interval. Record a stock's price once a minute, migrate some data once every hour, or send emails to inactive users once a week.

# Schedule this workflow to run every hour
@DBOS.scheduled("0 * * * *")

# Define a durable workflow that takes scheduled and actual time
@DBOS.workflow()
def run_hourly(scheduled_time: datetime, actual_time: datetime):
    # Search Hacker News for the keyword "serverless"
    results = search_hackernews("serverless")
    
    # Post each result (comment and URL) to Slack
    for comment, url in results:
        post_to_slack(comment, url)

Resilient data pipelines

Build data pipelines that are reliable and observable by default. DBOS durable queues guarantee all your tasks complete.

# Create a named queue for indexing tasks
queue = Queue("indexing_queue")

# Define a durable workflow that indexes a list of URLs
@DBOS.workflow()
def indexing_workflow(urls: List[HttpUrl]):
    handles: List[WorkflowHandle] = []

    # Enqueue a document indexing task for each URL
    for url in urls:
        handle = queue.enqueue(index_document, url)
        handles.append(handle)

    indexed_pages = 0

    # Wait for each indexing task to complete and tally results
    for handle in handles:
        indexed_pages += handle.get_result()

    # Log the total indexed pages
    logger.info(f"Indexed {len(urls)} documents totaling {indexed_pages} pages")

Build reliable AI agents

Use durable workflows to build reliable, fault-tolerant AI agents.

@DBOS.workflow()
def agentic_research_workflow(topic, max_iterations):
    research_results = []

    for i in range(max_iterations):
        # Run a query based on the current topic
        research_result = research_query(topic)
        research_results.append(research_result)

        # Stop if the results suggest no further research is needed
        if not should_continue(research_results):
            break

        # Refine the topic for the next research iteration
        topic = generate_next_topic(topic, research_results)

    # Combine all collected results into a final report
    return synthesize_research_report(research_results)

@DBOS.step()
def research_query(topic):
    ...

Handle notifications & webhooks reliably

Effortlessly mix synchronous webhook code with asynchronous event processing. Reliably wait weeks or months for events, then use idempotency and durable execution to process them exactly once.

# Listen for incoming Slack messages using the Slack Bolt app
@slackapp.message()
def handle_message(request: BoltRequest) -> None:
    # Use the Slack event ID to set a unique workflow ID
    event_id = request.body["event_id"]
    
    with SetWorkflowID(event_id):
        # Start a durable workflow to process the message
        DBOS.start_workflow(message_workflow, request.body["event"])
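
message_workflow itself isn't shown here. Here is a minimal sketch, assuming store_message and send_acknowledgement are steps defined elsewhere; because the workflow ID is the Slack event ID, a redelivered event reuses the same workflow and is processed exactly once:

# Sketch: the durable workflow started above for each Slack message
@DBOS.workflow()
def message_workflow(event: dict):
    # Step 1: Durably record the incoming message (hypothetical step)
    record = store_message(event)

    # Step 2: Acknowledge it back in the Slack channel (hypothetical step)
    send_acknowledgement(record)
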
Why DBOS

Durable workflows done right

Add durable workflows to your app in just a few lines of code. No additional infrastructure required.

No extra servers

Run anywhere, from your own hardware to any cloud. No new infrastructure required.

No rearchitecting

Add a few annotations to your code to make it durable. Nothing else needed.

No privacy issues

We never access your data. It stays private and under your control.

Durable workflows

Make code reliable in minutes

Add a few annotations to your code to make it durable.
If your application crashes or restarts, DBOS automatically resumes your workflows from the last completed step.

Automatic failure recovery
Built-in observability
Durable execution guaranteed
[Animated demo: a three-step pipeline (Fetch data inputs, Transform data, Store to database) fails while enriching data with external APIs. DBOS recovers the failed transform step and the pipeline completes, with a live system console tracing the failure and recovery.]
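
Here is a sketch of the pipeline in the demo, assuming fetch_data_inputs, transform_data, and store_db are defined as DBOS steps: each step's result is checkpointed, so after the crash at Step 2, DBOS resumes the workflow, skips the already-completed fetch, and reruns only the transform.

# Sketch: the three-step pipeline from the demo, written as a durable workflow
@DBOS.workflow()
def data_pipeline():
    # Step 1: Fetch data inputs (retrieve raw inputs)
    data = fetch_data_inputs()

    # Step 2: Transform data (enrich with external APIs)
    enriched = transform_data(data)

    # Step 3: Store to database (persist to durable store)
    store_db(enriched)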

Build with your favorite language.
Deploy anywhere.


"We've been impressed by how lightweight and flexible DBOS is, the speed at which their team ships, and the level of support offered. We are excited to scale with DBOS."
Abhishek Das

CEO & Co-Founder, Yutori.ai

Monitor workflows in real time
Spot issues fast
One-click workflow replay
View workflow execution live
Visually debug in real-time
Iterate faster
Auto-generate OpenTelemetry traces
View live traces by default
Pinpoint where failures occur
"I love the design of DBOS. If you're a gamer, it's like having a “save point” in your programs. If a function fails, a new function can start, picking up at the last checkpoint." - Paul Copplestone, CEO & Co-founder
Paul Copplestone

CEO & Co-Founder

Software

Supabase: Running Durable Workflows in Postgres

Durable execution

Never lose progress—functions resume from the last successful step, even after crashes.

Built-in observability

Interactively view, search, and manage your workflows from a graphical UI.

Durable queues

Lightweight, durable, distributed queues backed by Postgres.

Autoscale

DBOS will automatically scale your application to meet requests and notify you as limits approach.

Host anywhere

Run your workflows on any infrastructure: cloud, on-prem, or containers.

Schedule cron jobs

Replace brittle cron jobs with reliable and observable workflows.

“What took us 2 months to build using a labyrinth of AWS resources took just 2 days on DBOS Cloud.”
Thomas McNally

VP Technology, TMG.io

eCommerce

TMG: Failure-proof Shopify-SAP integration

Security

Enterprise-grade data protection

We prioritize security, privacy, and reliability so your team can build with confidence instead of managing infrastructure. Our platform meets the security and compliance standards you need to run mission-critical workflows at scale, with enterprise-grade protection.

SOC 2 Compliant
GDPR Compliant
CCPA Ready

SOC 2 compliant

Audited regularly, so DBOS meets the compliance requirements of your B2B solution.

Privacy preservation

Your data stays under your control. DBOS never sees it, and you can keep it wherever it is safest and most compliant for your use case.

SSO and SAML

Secure your account with single sign-on and SAML.

Runs at scale

DBOS has been put to the test and is designed to scale, so your team can focus on building a great product.

HIPAA BAA Available

DBOS has been built with healthcare in mind to handle sensitive data.

Transferring large genomic sequencing datasets between S3 buckets 40x faster than AWS DataSync, at a fraction of the cost, with resilience to failures and real-time, per-file observability of ongoing and past transfers.
Life Sciences

BMS: Durable Genomic Dataset Processing

Templates

Build in seconds.
Launch in minutes.

Get started with a range of templates and make your backend more durable, more cost-efficient, and easier to maintain.

Hacker News Agent

Use DBOS to build an AI deep research agent searching Hacker News.

Document Pipeline

Use DBOS to build a reliable and scalable doc ingestion pipeline for your chat agent.

Fault-Tolerant Checkout

Use durable workflows to build an online storefront that's resilient to any failure.

Solutions

Reliability at scale, anywhere

Tooling and hosting to make your DBOS Transact deployments a success.
Ready for any platform.

DBOS Pro

Run anywhere, effortlessly

Tooling to operate DBOS Transact applications anywhere.

Manage application deployment, versioning, and scaling
Automatically detect and recover interrupted workflows
View and manage your workflows from anywhere
DBOS Cloud

Durable app hosting

A seriously fast serverless platform for DBOS Transact applications.

25x better price-performance than AWS Lambda + Step Functions
Automatic app restart / resume with exactly-once processing
Deploy with a click, scale to millions
Community

Helping teams build bulletproof backends

DBOS users get access to real-time help and open discussions.
Join the DBOS community and shape a new era in durable systems.

Start building for free

Use the open source DBOS Transact library, free forever.
Pair it with DBOS Pro and get a free 30-day trial.
