dex

dex is an opinionated CLI framework for data project operations. It scaffolds Python packages, Databricks Asset Bundles, and AI agent projects — and can be extended by teams to wrap their own tooling.

100% Rust. Single binary. No runtime required.

Install

curl -sSf https://raw.githubusercontent.com/yarrib/dex/main/install.sh | sh

Auto-detects your platform and downloads the right binary from GitHub Releases. See Installation for manual install, Windows, and build-from-source options.

30-second example

# Scaffold a new Databricks Asset Bundle project
dex init --template dabs-package --dir my_project

# Scaffold a plain Python package
dex init --template default --dir my_package

# Non-interactive (use all defaults)
dex init --template dabs-package --no-prompt --dir my_project

What dex generates

For a dabs-package project:

my_project/
├── src/my_project/
│   ├── __init__.py
│   └── main.py          # entry point with argparse
├── resources/
│   └── my_project_job.yml   # DABs job definition
├── notebooks/
│   └── exploration.py   # Databricks notebook
├── tests/
│   ├── __init__.py
│   └── test_my_project.py
├── databricks.yml       # bundle config (dev/staging/prod targets)
├── pyproject.toml       # project config
├── dex.toml             # dex project config
├── README.md
└── .gitignore

Quickstart

Install dex and scaffold your first project in under a minute.

1. Install

curl -sSf https://raw.githubusercontent.com/yarrib/dex/main/install.sh | sh

See Installation for platform-specific binaries and Windows instructions.

2. Scaffold a project

Databricks Asset Bundle:

dex init --template dabs-package --dir my_project

Prompts:

Project name [my_project]:
Python version (3.12, 3.11) [3.12]:
Include exploration notebook? [Y/n]:
Include job definition? [Y/n]:
Use serverless compute? [y/N]:

Plain Python package:

dex init --template default --dir my_package

Non-interactive (CI / scripts):

dex init --template dabs-package --no-prompt --dir my_project

3. Inspect what was generated

my_project/
├── src/my_project/
│   ├── __init__.py
│   └── main.py
├── resources/
│   └── my_project_job.yml
├── notebooks/
│   └── exploration.py
├── tests/
│   └── test_my_project.py
├── databricks.yml
├── pyproject.toml
├── dex.toml
├── README.md
└── .gitignore

4. Deploy to Databricks

cd my_project
databricks bundle deploy          # → dev target
databricks bundle deploy --target prod

Installation

dex is distributed as a pre-built native binary on GitHub Releases. No Python, no runtime, no toolchain required to run it.

The fastest way to install dex is with the installer script, which downloads the right binary for your platform:

curl -sSf https://raw.githubusercontent.com/yarrib/dex/main/install.sh | sh

The script will:

  1. Detect your OS and CPU architecture
  2. Fetch the latest release from GitHub
  3. Download the correct binary
  4. Place it in ~/.local/bin (or the platform default)

After install, dex is available globally:

dex --help

Note: The install script supports Linux and macOS. For Windows, use the manual install path below.

Manual install

  1. Go to the latest release
  2. Download the binary for your platform:
Platform               Filename
Linux x86_64           dex-linux-x86_64
Linux aarch64          dex-linux-aarch64
macOS Apple Silicon    dex-macos-aarch64
macOS Intel            dex-macos-x86_64
Windows x86_64         dex-windows-x86_64.exe
  3. Make it executable and move it onto your PATH:
chmod +x dex-linux-x86_64
mv dex-linux-x86_64 ~/.local/bin/dex
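
The platform-to-asset mapping in the table above can be sketched in a few lines. This is an illustrative stand-in for the install script's detection logic, not dex's actual source; the asset names come from the release table.

```python
import platform

def release_asset_name() -> str:
    """Map the current OS/CPU to the release asset names listed above.

    Illustrative sketch only; not taken from dex's install script.
    """
    os_tag = {"Linux": "linux", "Darwin": "macos", "Windows": "windows"}[platform.system()]
    arch_tag = {"x86_64": "x86_64", "amd64": "x86_64",
                "arm64": "aarch64", "aarch64": "aarch64"}[platform.machine().lower()]
    name = f"dex-{os_tag}-{arch_tag}"
    return name + ".exe" if os_tag == "windows" else name

print(release_asset_name())  # e.g. dex-linux-x86_64
```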

Upgrade

Re-run the install script to upgrade to the latest release:

curl -sSf https://raw.githubusercontent.com/yarrib/dex/main/install.sh | sh

Or download the new binary manually from GitHub Releases.

Uninstall

Remove the binary from wherever you placed it:

rm ~/.local/bin/dex

Build from source

Requires Rust (stable).

git clone https://github.com/yarrib/dex
cd dex
cargo build --release
# Binary at: target/release/dex

Install directly:

cargo install --path crates/dex-cli

Requirements

No runtime dependencies. The dex binary is fully self-contained.

For building from source: Rust stable toolchain (rustup update stable).

Usage Overview

dex provides three core commands:

Command         Description
dex init        Scaffold a new project from a template
dex agent new   Scaffold an AI agent project via Q&A
dex mcp serve   Start the MCP server for AI agent integration

See the sidebar for detailed documentation on each command.

dex init

Scaffold a new project from a template.

Synopsis

dex init [OPTIONS] [DIRECTORY]

Options

Option           Default   Description
--template, -t   default   Template to scaffold from
--dir, -d        .         Target directory
--no-prompt                Use all defaults, skip interactive prompts
--preset                   Pre-fill variables from a named preset profile
--presets-file             Path to a presets TOML file
--standards                Path to a standards TOML file for variable pre-fills

Examples

# Scaffold into a new directory, prompting for all variables
dex init --template dabs-package --dir my_project

# Non-interactive: use all defaults
dex init --template dabs-package --no-prompt --dir my_project

# Pre-fill variables from a preset profile
dex init --template dabs-package --preset ml-project --dir my_project

# Use a team standards file
dex init --template default --standards ./org-standards.toml --dir my_package

Interactive prompts

When you run dex init without --no-prompt, it asks for each variable defined in the template’s manifest. For example, dabs-package asks:

Project name [my_project]:
Python version (3.12, 3.11) [3.12]:
Include exploration notebook? [Y/n]:
Include job definition? [Y/n]:
Use serverless compute? [y/N]:
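
The prompt flow above can be approximated with a short loop: each variable takes the user's answer or falls back to its default, and string variables may be checked against a validate regex. A minimal sketch, assuming a simplified variable spec (dex itself is implemented in Rust, so this is illustrative only):

```python
import re

# Minimal sketch of the prompt loop; `resolve` and the spec shape are
# illustrative, not dex's actual API.
def resolve(variables, answers):
    resolved = {}
    for var in variables:
        # empty answer falls back to the declared default
        value = answers.get(var["name"], "") or var.get("default", "")
        pattern = var.get("validate")
        if pattern is not None and not re.fullmatch(pattern, value):
            raise ValueError(f"{var['name']}: {value!r} fails {pattern}")
        resolved[var["name"]] = value
    return resolved

spec = [
    {"name": "project_name", "validate": r"[a-z][a-z0-9_]*"},
    {"name": "python_version", "default": "3.12"},
]
print(resolve(spec, {"project_name": "my_project"}))
# {'project_name': 'my_project', 'python_version': '3.12'}
```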

Listing available templates

dex init --help

Lists all built-in templates with names and descriptions.

Template Reference

default

A general-purpose Python + Databricks project.

Variables:

Name             Type     Default      Description
project_name     string   (dir name)   Python package name (lowercase, underscores)
python_version   choice   3.12         Python version (3.12, 3.11)

Generated files:

src/<project_name>/
├── __init__.py
└── main.py
notebooks/
└── exploration.py
tests/
├── __init__.py
└── test_main.py
databricks.yml
pyproject.toml
README.md
.gitignore

dabs-package

A full Databricks Asset Bundle Python package. Includes job definitions, multi-target deploy config, and optional notebook scaffolding.

Variables:

Name               Type     Default      Description
project_name       string   (dir name)   Python package name (lowercase, underscores)
python_version     choice   3.12         Python version (3.12, 3.11)
include_notebook   bool     true         Generate notebooks/exploration.py
include_job        bool     true         Generate resources/<project_name>_job.yml
use_serverless     bool     false        Use serverless compute in the job definition

Generated files:

src/<project_name>/
├── __init__.py
└── main.py              # entry point with --catalog / --schema args
resources/               # only if include_job=true
└── <project_name>_job.yml
notebooks/               # only if include_notebook=true
└── exploration.py
tests/
├── __init__.py
└── test_<project_name>.py
databricks.yml           # bundle config: artifacts, targets, variables
pyproject.toml           # dev deps: pytest, ruff, databricks-connect
README.md
.gitignore

DABs targets:

The generated databricks.yml includes three targets:

  • dev (default) — mode: development, catalog dev
  • staging — mode: development, catalog staging
  • prod — production catalog

Deploy with:

databricks bundle deploy              # → dev
databricks bundle deploy --target prod

dex agent new

Scaffold an AI agent project via an interactive Q&A flow.

Synopsis

dex agent new

Q&A flow

dex agent new asks a series of questions to understand the agent’s purpose, and then generates a project skeleton with a CLAUDE.md, system_prompt.md, and starter code.

Questions asked:

  1. Name — short identifier for the agent
  2. Description — what the agent does in one sentence
  3. Trigger — how the agent is activated (user_request, schedule, event, upstream_system)
  4. Success criteria — how you know the agent succeeded
  5. Reads — data sources the agent reads from
  6. Writes — data sinks or side effects
  7. Handoff — whether the agent hands off to a human
  8. Autonomous — whether the agent runs without human review
  9. Example input — a concrete example of what the agent receives
  10. Example output — what a good response looks like
  11. Bad output — what a bad response looks like (for guardrails)
  12. Deploy target — job, serving_endpoint, or interactive

Generated files

<agent_name>/
├── CLAUDE.md            # Agent instructions for Claude Code
├── system_prompt.md     # System prompt template
├── main.py              # Entry point skeleton
└── README.md            # Setup and usage

MCP integration

After scaffolding, the agent project can be served via the dex MCP server:

dex mcp serve

This exposes the agent’s tools to Claude and other MCP clients.

dex mcp serve

Start the dex MCP server to expose dex tools to Claude and other MCP clients.

Synopsis

dex mcp serve

Overview

The MCP (Model Context Protocol) server lets AI tools like Claude Desktop and Claude Code call dex operations directly — scaffolding projects and listing templates — without leaving the chat interface.

Available tools

Tool                     Description
list_templates           Returns all built-in templates with names and descriptions
get_template_variables   Returns variable specs for a named template
scaffold_project         Scaffolds a project from a template into a directory

Installation

Install the dex binary first — the MCP server is built in, no separate install needed.

Install script (Linux/macOS):

curl -sSf https://raw.githubusercontent.com/yarrib/dex/main/install.sh | sh

Build from source:

cargo install --path crates/dex-cli

See Installation for full details.

Wiring into Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "dex": {
      "command": "dex",
      "args": ["mcp", "serve"]
    }
  }
}

Restart Claude Desktop. The dex tools will appear in the tool picker.

Wiring into Claude Code

Create .mcp.json at your project root (or ~/.claude/mcp.json for global config):

{
  "mcpServers": {
    "dex": {
      "command": "dex",
      "args": ["mcp", "serve"]
    }
  }
}

Claude Code will start the server automatically when you open the project.

Usage examples

Once connected, you can prompt Claude naturally:

List templates:

What dex templates are available?

Inspect a template:

What variables does the dabs-package template need?

Scaffold a project:

Scaffold a new dabs-package project called my_pipeline in ~/projects/my_pipeline

Claude will call the appropriate tool and report the created files.

Built-in Templates

dex ships five built-in templates, embedded in the binary at compile time. All support --no-prompt for CI/scripting use.

Choosing a template

Template       Use when…
default        Starting a plain Python package or prototyping
dabs-package   Building a production Databricks job in Python
dabs-etl       Building a DLT pipeline with Autoloader ingestion
dabs-ml        Training a model with MLflow and optionally serving it
dabs-aiagent   Deploying an AI agent via mlflow.pyfunc + model serving

default

A minimal Python package for general-purpose development.

When to use: Prototyping, standalone scripts, or non-Databricks Python projects. No DABs config included.

dex init --template default --dir my_package

Variables:

Name             Type     Default      Description
project_name     string   (dir name)   Package name (lowercase, hyphens or underscores)
python_version   choice   3.12         Python version (3.12, 3.11)

Generated files:

src/<project_name>/
├── __init__.py
└── main.py
notebooks/
└── exploration.py
tests/
├── __init__.py
└── test_main.py
databricks.yml
pyproject.toml
README.md
.gitignore

dabs-package

A full Databricks Asset Bundle Python package. The go-to template for production Databricks jobs.

When to use: Any Databricks job that runs Python code — ETL scripts, batch jobs, data quality checks. Includes multi-target deploy config (dev/staging/prod) out of the box.

dex init --template dabs-package --dir my_project

Variables:

Name               Type     Default      Description
project_name       string   (dir name)   Package name (lowercase, underscores)
python_version     choice   3.12         Python version (3.12, 3.11)
include_notebook   bool     true         Generate notebooks/exploration.py
include_job        bool     true         Generate resources/<name>_job.yml
use_serverless     bool     false        Use serverless compute in the job definition

Generated files:

src/<project_name>/
├── __init__.py
└── main.py                         # entry point with --catalog / --schema args
resources/                          # if include_job=true
└── <project_name>_job.yml
notebooks/                          # if include_notebook=true
└── exploration.py
tests/
├── __init__.py
└── test_<project_name>.py
databricks.yml                      # dev / staging / prod targets
pyproject.toml
README.md
.gitignore

dabs-etl

A DLT (Delta Live Tables) pipeline project with Autoloader ingestion.

When to use: Streaming or batch ingestion pipelines where data arrives in cloud storage and needs to be loaded into Delta tables. Autoloader handles schema inference and file tracking automatically.

dex init --template dabs-etl --dir my_pipeline

Variables:

Name               Type     Default                                                       Description
project_name       string   (dir name)                                                    Package name (lowercase, underscores)
python_version     choice   3.12                                                          Python version (3.12, 3.11)
source_path        string   abfss://raw@<storage-account>.dfs.core.windows.net/landing/   Autoloader source path
use_serverless     bool     false                                                         Use serverless compute
include_notebook   bool     true                                                          Generate notebooks/exploration.py

Generated files:

src/<project_name>/
├── __init__.py
└── pipeline.py                     # DLT pipeline definition
resources/
└── <project_name>_pipeline.yml     # DLT pipeline DABs resource
notebooks/                          # if include_notebook=true
└── exploration.py
tests/
├── __init__.py
└── test_<project_name>.py
databricks.yml
pyproject.toml
README.md
.gitignore

dabs-ml

An MLflow training project with model registry integration and optional model serving.

When to use: Supervised ML workflows — feature engineering, training, evaluation, and registration to Unity Catalog model registry. Optionally deploys a real-time serving endpoint.

dex init --template dabs-ml --dir my_model

Variables:

Name               Type     Default      Description
project_name       string   (dir name)   Package name (lowercase, underscores)
python_version     choice   3.12         Python version (3.12, 3.11)
use_serverless     bool     false        Use serverless compute
include_serving    bool     true         Generate model serving endpoint config
include_notebook   bool     true         Generate notebooks/exploration.py

Generated files:

src/<project_name>/
├── __init__.py
└── train.py                        # MLflow training script
resources/
└── <project_name>_training_job.yml
serving/                            # if include_serving=true
└── <project_name>_serving.yml      # Model serving endpoint definition
notebooks/                          # if include_notebook=true
└── exploration.py
tests/
├── __init__.py
└── test_<project_name>.py
databricks.yml
pyproject.toml
README.md
.gitignore

dabs-aiagent

An AI agent project using mlflow.pyfunc for packaging and Databricks model serving for deployment.

When to use: Building a custom AI agent or LLM-powered application that needs to be deployed as a Databricks model serving endpoint. Optionally includes a vector search retriever for RAG patterns.

dex init --template dabs-aiagent --dir my_agent

Variables:

Name                    Type     Default      Description
project_name            string   (dir name)   Package name (lowercase, underscores)
python_version          choice   3.12         Python version (3.12, 3.11)
include_vector_search   bool     false        Include a vector search retriever (RAG)
use_serverless          bool     true         Use serverless compute for the deploy job

Generated files:

src/<project_name>/
├── __init__.py
├── agent.py                        # mlflow.pyfunc agent wrapper
└── tools/
    ├── __init__.py
    └── retriever.py                # if include_vector_search=true
resources/
├── <project_name>_deploy_job.yml   # DABs job to register + deploy the agent
└── <project_name>_serving.yml      # Model serving endpoint definition
notebooks/
├── deploy_agent.py                 # notebook: register model + deploy endpoint
└── evaluate_agent.py               # notebook: evaluate agent with MLflow
tests/
├── __init__.py
└── test_<project_name>.py
databricks.yml
pyproject.toml
README.md
.gitignore

Template Authoring Guide

A dex template is a directory with a manifest (template.toml) and a files/ subtree of Jinja2 template files. Templates are embedded into the dex binary at compile time via include_dir.

Directory layout

templates/<template-name>/
├── template.toml          # manifest: metadata, variables, file rules
└── files/                 # files to render and write
    ├── pyproject.toml.j2  # .j2 = rendered through Jinja2
    ├── README.md.j2
    └── src/
        └── {{ project_name }}/   # directory names can use variables too
            └── __init__.py

template.toml reference

[template] — metadata

[template]
name = "my-template"          # unique identifier, used with dex init --template
description = "Short description shown in dex init --help"
version = "0.1.0"
min_dex_version = "0.1.0"    # minimum dex version required

[[variables]] — input variables

Each variable becomes a prompt in dex init and a value available in templates.

[[variables]]
name = "project_name"         # variable name, referenced in templates as {{ project_name }}
prompt = "Project name"       # text shown to the user
type = "string"               # string | bool | choice
required = true
validate = "^[a-z][a-z0-9_]*$"  # optional regex; applied to string variables

Variable types:

string:

[[variables]]
name = "author"
prompt = "Author name"
type = "string"
required = false
default = "me"

bool:

[[variables]]
name = "include_notebook"
prompt = "Include exploration notebook?"
type = "bool"
default = true     # rendered as true/false
required = false

choice:

[[variables]]
name = "python_version"
prompt = "Python version"
type = "choice"
choices = ["3.12", "3.11"]
default = "3.12"
required = false

All variable fields:

Field      Required      Description
name       yes           Variable identifier. Referenced in templates as {{ name }}.
prompt     yes           Text shown when prompting the user.
type       yes           string, bool, or choice.
required   yes           If true, no default is accepted.
default    no            Value used with --no-prompt or when the user presses Enter.
choices    choice only   List of accepted values.
validate   string only   Regex the value must match.
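
The validate field is a plain anchored regex applied to the raw input. A quick way to sanity-check a pattern before shipping it in a manifest, using the snake_case example from above:

```python
import re

# The snake_case pattern from the manifest example above, checked with
# the stdlib before it goes into a template.toml.
VALIDATE = re.compile(r"^[a-z][a-z0-9_]*$")

for candidate in ("my_project", "etl_v2", "MyProject", "3cats"):
    print(candidate, "ok" if VALIDATE.match(candidate) else "rejected")
```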

[[files]] — conditional file rules

Use [[files]] to include or exclude entire directory trees based on a variable.

[[files]]
src = "notebooks/"       # path relative to files/
condition = "include_notebook"   # include only if this bool variable is true

[[files]]
src = "resources/"
condition = "include_job"

If no [[files]] entry exists for a path, it is always included.
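
The rule evaluation can be pictured as a filter over the file list: any path covered by a [[files]] entry whose condition variable is false gets dropped, and everything else is kept. A minimal sketch of these semantics (illustrative, not dex's implementation):

```python
def included_paths(all_paths, rules, variables):
    """Drop any tree whose [[files]] condition evaluates false; keep the rest.

    Illustrative sketch of the rule semantics, not dex's implementation.
    """
    excluded = [r["src"] for r in rules if not variables.get(r["condition"], False)]
    return [p for p in all_paths if not any(p.startswith(e) for e in excluded)]

rules = [
    {"src": "notebooks/", "condition": "include_notebook"},
    {"src": "resources/", "condition": "include_job"},
]
paths = ["src/main.py.j2", "notebooks/exploration.py.j2", "resources/job.yml.j2"]

# include_job=false drops the resources/ tree; unmatched paths always stay
print(included_paths(paths, rules, {"include_notebook": True, "include_job": False}))
# ['src/main.py.j2', 'notebooks/exploration.py.j2']
```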

Writing template files

Template files use Jinja2 syntax, rendered by minijinja in Rust. Use the .j2 extension for any file that needs rendering.

# src/{{ project_name }}/main.py.j2
"""{{ project_name }} — entry point."""


def main() -> None:
    print("Hello from {{ project_name }}")


if __name__ == "__main__":
    main()

Variable substitution in filenames:

Directory and file names can also contain variable references. The engine substitutes them before writing.

files/src/{{ project_name }}/__init__.py   →   src/my_project/__init__.py
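
The same idea in miniature: a regex that replaces {{ var }} references in a path with their values. This is an illustrative stand-in; dex renders names with minijinja.

```python
import re

def substitute_name(path: str, variables: dict[str, str]) -> str:
    """Replace {{ var }} references in a file or directory name.

    Illustrative stand-in; dex renders names with minijinja.
    """
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: variables[m.group(1)], path)

print(substitute_name("files/src/{{ project_name }}/__init__.py",
                      {"project_name": "my_project"}))
# files/src/my_project/__init__.py
```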

Conditionals:

# pyproject.toml.j2
[project]
name = "{{ project_name }}"
requires-python = ">={{ python_version }}"
{% if use_serverless %}
# serverless config here
{% endif %}

Loops:

{% for dep in extra_deps %}
"{{ dep }}",
{% endfor %}

Building and testing

After adding or editing a template, rebuild to embed it in the binary:

cargo build

Then test with:

dex init --template my-template --dir /tmp/test-output
dex init --template my-template --no-prompt --dir /tmp/test-output-defaults

Inspect the output to verify files were rendered correctly.

Tips

  • Non-template files (files without .j2) are copied verbatim — useful for binary assets or files where Jinja2 syntax would conflict.
  • Validate regex early. Use validate on project_name to enforce naming conventions before the user gets to see broken output.
  • Use bool variables for optional sections. Combine with [[files]] to omit entire directories, and with {% if %} inside templates to omit sections within a file.
  • Keep defaults sensible. Templates should work correctly with --no-prompt, so every optional variable needs a reasonable default.

Building Org Templates

This guide walks through building a custom template for your team — from first file to everyone on your org using it seamlessly alongside the dex built-ins.

What you’re building

A custom template works exactly like a built-in: users run dex init --template your-name, get interactive prompts, and a project appears. The only difference is that you authored it.


1. Pick a name and create the directory

Template names must be unique within the set of templates dex sees (built-ins + your org templates). Use a prefix to avoid collisions:

mkdir -p acme-etl/files

Your template directory should be inside a dedicated templates repo or within your org CLI package:

acme-dex-templates/
├── acme-etl/
├── acme-ml/
└── acme-serving/

2. Write the manifest

Every template needs a template.toml at its root. This defines metadata and the variables users will be prompted for.

acme-etl/template.toml:

[template]
name        = "acme-etl"
description = "Acme standard DLT ingestion pipeline"
version     = "0.1.0"
min_dex_version = "0.1.0"

# --- Variables ---

[[variables]]
name     = "project_name"
prompt   = "Project name"
type     = "string"
required = true
validate = "^[a-z][a-z0-9_]*$"     # enforce snake_case

[[variables]]
name     = "python_version"
prompt   = "Python version"
type     = "choice"
choices  = ["3.12", "3.11"]
default  = "3.12"

[[variables]]
name     = "source_path"
prompt   = "Autoloader source path"
type     = "string"
default  = "abfss://raw@acme.dfs.core.windows.net/landing/"

[[variables]]
name     = "use_serverless"
prompt   = "Use serverless compute?"
type     = "bool"
default  = false

[[variables]]
name     = "include_notebook"
prompt   = "Include exploration notebook?"
type     = "bool"
default  = true

# --- File rules (conditional inclusion) ---

[[files]]
src       = "notebooks/"
condition = "include_notebook"

Variable tips:

  • Put project_name first — users expect it.
  • Use validate on project_name to catch naming mistakes early.
  • Give every optional variable a sensible default so --no-prompt works correctly.
  • Use bool + [[files]] to gate optional sections rather than cluttering file content with {% if %} blocks.

3. Write the template files

Template files live in files/. Files with a .j2 extension are rendered through minijinja (Jinja2 syntax). Files without .j2 are copied verbatim.

acme-etl/
└── files/
    ├── dex.toml.j2
    ├── pyproject.toml.j2
    ├── databricks.yml.j2
    ├── README.md.j2
    ├── .gitignore
    ├── src/
    │   └── {{ project_name }}/     ← directory name uses variable
    │       ├── __init__.py
    │       └── pipeline.py.j2
    ├── resources/
    │   └── {{ project_name }}_pipeline.yml.j2
    ├── tests/
    │   ├── __init__.py
    │   └── test_{{ project_name }}.py.j2
    └── notebooks/                  ← gated by include_notebook
        └── exploration.py.j2

Variable substitution in filenames and directory names:

dex substitutes {{ variable }} in both file and directory names before writing. No .j2 extension needed for the name itself — only the file content needs .j2.

Example file — src/{{ project_name }}/__init__.py:

"""{{ project_name }}"""

(Copied verbatim — no .j2 needed for trivial files.)

Example file — dex.toml.j2:

[project]
name     = "{{ project_name }}"
template = "acme-etl"

[passthrough.db]
command     = "databricks"
description = "Databricks CLI"

[passthrough.dlt]
command     = "databricks"
description = "DLT pipeline operations"

Example file — pyproject.toml.j2:

[project]
name            = "{{ project_name }}"
version         = "0.1.0"
requires-python = ">={{ python_version }}"
dependencies    = [
    "databricks-sdk>=0.20",
    "delta-spark>=3.0",
]

[dependency-groups]
dev = [
    "pytest>=8",
    "ruff>=0.5",
    "databricks-connect>=15.0",
]

Example file — resources/{{ project_name }}_pipeline.yml.j2:

resources:
  pipelines:
    {{ project_name }}_pipeline:
      name: "{{ project_name }}"
      serverless: {{ use_serverless | lower }}
      libraries:
        - notebook:
            path: /Workspace/${workspace.root_path}/notebooks/{{ project_name }}

Jinja2 reference for template authors:

Syntax                                  Purpose
{{ variable }}                          Insert variable value
{% if condition %}...{% endif %}        Conditional block
{% for item in list %}...{% endfor %}   Loop
{{ value | lower }}                     Apply filter (lower, upper, title)
{# comment #}                           Template comment (not written to output)

4. Test locally

Before sharing, test that the template renders correctly. Point dex at your templates directory with --template-dir (or configure it in user config — see step 5):

# Interactive
dex init --template-dir ~/acme-dex-templates --template acme-etl --dir /tmp/test-etl

# Non-interactive (tests --no-prompt path + all defaults)
dex init --template-dir ~/acme-dex-templates --template acme-etl --no-prompt --dir /tmp/test-etl-defaults

Check the output:

# Verify files exist
ls /tmp/test-etl

# Verify variable substitution
cat /tmp/test-etl/dex.toml
cat /tmp/test-etl/pyproject.toml

# Verify directory names were substituted
ls /tmp/test-etl/src/

Common issues:

Symptom                            Fix
Directory not created              Check the condition variable name matches a [[variables]] name exactly
{{ variable }} appears literally   File is missing the .j2 extension
Render error                       Check Jinja2 syntax for a missing %} or an unclosed block
validate rejects valid input       Test the regex: echo "my_project" | grep -P "^[a-z][a-z0-9_]*$"

5. Make it available to your team

There are two ways to share templates across your org. Use whichever fits your infrastructure.

Option A: Shared git repository

Host the templates directory as a git repo. Team members clone it once and point dex at it.

# Team member setup (run once)
git clone https://github.com/acme/acme-dex-templates ~/acme-dex-templates

Add to ~/.config/dex/config.toml:

[templates]
paths = ["~/acme-dex-templates"]

To update to the latest templates:

git -C ~/acme-dex-templates pull

Pin a specific version in your onboarding docs:

git clone --branch v1.2.0 https://github.com/acme/acme-dex-templates ~/acme-dex-templates

Option B: Project-local templates

For templates that belong to a specific project or monorepo, commit them into a templates/ directory at the repo root. dex discovers them automatically — no config needed.

my-platform/
├── dex.toml
└── templates/
    └── acme-etl/
        ├── template.toml
        └── files/

Anyone who clones the repo gets the template immediately.


6. Ship a standards file (optional)

A standards file pre-fills variables that are the same across all projects at your org — author names, default storage accounts, Python version policy. This removes repetitive prompts.

acme-standards.toml (checked into a shared repo or wiki):

python_version = "3.12"
source_path    = "abfss://raw@acme.dfs.core.windows.net/landing/"

Team members reference it with --standards:

dex init --template acme-etl --standards ~/acme-standards.toml --dir my_pipeline

Or set a default in ~/.config/dex/config.toml:

[defaults]
standards_file = "~/acme-standards.toml"

7. Full team onboarding checklist

Once templates are authored and hosted, onboarding a new engineer takes two minutes:

# 1. Install dex
curl -sSf https://raw.githubusercontent.com/yarrib/dex/main/install.sh | sh

# 2. Clone templates
git clone https://github.com/acme/acme-dex-templates ~/acme-dex-templates

# 3. Configure dex
cat >> ~/.config/dex/config.toml <<'EOF'
[templates]
paths = ["~/acme-dex-templates"]
EOF

# 4. Done — scaffold a project
dex init --template acme-etl --dir my_pipeline

Org Template Registries

Teams can publish their own templates and share them across the org via a shared directory or git repository. Users reference them the same way as built-in templates.

How it works

  1. Create a directory of templates following the authoring guide
  2. Distribute the directory (git repo, shared filesystem, etc.)
  3. Users point dex at the directory via ~/.config/dex/config.toml or --template-dir

No package manager, no Python, no runtime required.


1. Create the template directory

acme-dex-templates/
├── acme-etl/
│   ├── template.toml
│   └── files/
│       ├── dex.toml.j2
│       ├── pyproject.toml.j2
│       ├── README.md.j2
│       └── src/
│           └── {{ project_name }}/
│               └── pipeline.py.j2
└── acme-ml/
    ├── template.toml
    └── files/

Templates follow the same format as built-in templates. See the authoring guide.


2. Distribute via git

Host the templates in a git repository:

git clone https://github.com/acme/acme-dex-templates

3. Configure dex to find the templates

Per-user (global)

Add the templates directory to ~/.config/dex/config.toml:

[templates]
paths = ["~/acme-dex-templates"]

Now all dex commands on this machine can use the org templates.

Per-project

Place templates in a templates/ directory at the project root:

my-project/
├── dex.toml
└── templates/
    └── acme-etl/
        ├── template.toml
        └── files/

dex discovers them automatically.

One-off (via CLI flag)

dex init --template-dir ~/acme-dex-templates --template acme-etl --dir my_pipeline

User experience

After configuring the templates path, users have all built-in templates plus the org templates:

dex init --template default       # built-in
dex init --template dabs-package  # built-in
dex init --template acme-etl      # org template
dex init --template acme-ml       # org template

If an org template name conflicts with a built-in, the org template takes precedence.
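
That shadowing rule behaves like a last-writer-wins merge of template registries. A toy model (the acme-* names are from this guide; the clash on default is hypothetical):

```python
# Toy model of the precedence described above: later sources override
# earlier ones, so an org template shadows a built-in with the same name.
builtins = {"default": "builtin", "dabs-package": "builtin"}
org_templates = {"acme-etl": "org", "default": "org"}   # hypothetical clash

registry = {**builtins, **org_templates}
print(registry["default"])   # the org copy wins
print(registry["acme-etl"])
```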


Versioning org templates

Pin a specific commit or tag in your team’s setup instructions:

git clone --branch v1.2.0 https://github.com/acme/acme-dex-templates ~/acme-dex-templates

Or check out the templates directory as a git submodule in your project repos.

Monorepo pattern

If templates live in a monorepo alongside other tooling:

# ~/.config/dex/config.toml
[templates]
paths = ["~/acme-platform/dex-templates"]

Why dex?

dex competes in a space with existing scaffolding tools. Here’s why it exists and when to use it.


vs Cookiecutter

Cookiecutter is the most common Python project scaffolding tool. It works, but it has a few rough edges for data engineering teams.

No runtime required

Cookiecutter requires Python and pip install cookiecutter. On a new machine, in a CI job, or on a team with mixed Python/non-Python backgrounds, this is friction. dex is a single static binary — curl | sh and you’re done.

Org-wide variable pre-fill

Cookiecutter prompts for every variable, every time. dex has two mechanisms to skip prompts for values your org already knows:

  • Standards (~/.config/dex/standards.toml): flat key-value file that auto-fills any matching variable (e.g. author, python_version). Set once, never prompted again.
  • Presets (~/.config/dex/presets.toml): named profiles for different project contexts (ML workloads vs ETL pipelines, dev workspace vs staging workspace).

For platform teams standardizing tooling across an org, this matters. You want to distribute opinionated defaults without making every developer answer the same five questions every time.

Beyond scaffolding

Cookiecutter stops at file generation. dex continues into the project lifecycle:

  • dex run <task> — run tasks defined in dex.toml (tests, linting, deploy scripts)
  • dex db, dex az — pass-through commands that proxy to external CLIs, configured in dex.toml
  • dex agent new — scaffolds AI agent projects
  • dex mcp serve — exposes dex tools to Claude and other MCP clients

One binary, one config file, from init through daily development.
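The tasks behind dex run are plain entries in dex.toml, using the same command/description shape as pass-throughs (the task names here are illustrative):

```toml
# dex.toml — tasks surfaced by `dex run <task>`
[tasks.test]
command = "pytest tests/"
description = "Run tests"

[tasks.lint]
command = "ruff check src/"
description = "Lint the package"
```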

No template hosting required

Cookiecutter templates are typically GitHub repos that users clone. dex supports the same pattern, but also embeds its built-in templates directly in the binary (zero network calls) and allows teams to point at a local directory or a remote git repo via config — no separate template registry service needed.


vs Databricks Asset Bundle (DABs) Templates

Databricks Asset Bundles ship their own databricks bundle init scaffolding. It’s Databricks’ own tool for their own format. It works well for pure Databricks projects, but it has a narrower scope than dex by design.

Generic, not Databricks-specific

DABs templates are built for one platform. Most data teams also maintain:

  • Plain Python packages (shared utilities, libraries)
  • AI agent projects (Claude, OpenAI, custom)
  • Custom internal tools that don’t fit the Databricks mold

dex uses the same template system, same prompt flow, and same config format for all of these. One mental model for your entire project portfolio.

Standards and presets don’t exist in DABs init

databricks bundle init prompts you for values. dex’s standards and presets layer lets you pre-fill those values org-wide — including for Databricks-specific variables like workspace_url, cluster_id, and python_version. Teams can ship a standards.toml with their dex onboarding guide and skip the majority of prompts entirely.

Org templates without forking Databricks tooling

DABs templates live inside the Databricks CLI codebase. If your org wants a custom template that deviates from Databricks’ defaults, you’re maintaining a fork or working around it. dex’s org template model is first-class: point to a directory or a git repo in ~/.config/dex/config.toml, and dex init picks it up alongside built-in templates. No fork, no separate tool, no special distribution mechanism.


Why “generic” is the right default

It’s tempting to build a narrowly scoped tool that does one thing perfectly. But for a platform team, the cost of having five different scaffolding tools (one per project type) is real: different conventions, different learning curves, different ways to extend.

dex’s template model is intentionally generic:

  • template.toml variables are just names and types — no Databricks-specific schema
  • File rules are path patterns — works for any directory layout
  • Standards and presets are flat key-value maps — applies to any variable in any template
  • Pass-throughs are just subprocess delegation — wraps any CLI, not just Databricks
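To make the first two points concrete, a minimal template.toml might look like the following sketch (field names are indicative only — the Template Reference is authoritative):

```toml
[template]
name = "acme-glue-job"
description = "AWS Glue job scaffold"

# variables are just names, types, and defaults — no platform-specific schema
[variables.project_name]
type = "string"
default = "my_job"

[variables.glue_version]
type = "string"
default = "4.0"
```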

The built-in templates happen to target Databricks workflows because that’s dex’s primary audience. But an org running AWS Glue or dbt can write its own templates and dex becomes their tool too, with no changes to the binary.

The goal is a single, well-understood scaffolding and project operations convention — not one that locks you into a specific platform’s mental model.

Extending dex

dex is designed to be extended by teams through configuration and custom templates. No code is required — everything is driven by dex.toml and a templates directory.

Pass-through commands

The most common extension: expose an external CLI as a dex subcommand, forwarding all arguments and inheriting stdin/stdout/stderr for full interactivity.

Add pass-throughs to dex.toml at your project root:

[passthrough.db]
command = "databricks"
description = "Databricks CLI"

[passthrough.tf]
command = "terraform"
description = "Terraform"

[passthrough.az]
command = "az"
description = "Azure CLI"

Your team now has:

dex db clusters list               # → databricks clusters list
dex tf plan                        # → terraform plan
dex az account show                # → az account show

Pass-throughs appear in dex --help and forward --help to the target command.

Custom templates

Add custom templates to a directory and tell dex where to find them.

Per-user (global)

In ~/.config/dex/config.toml:

[templates]
paths = ["~/acme-dex-templates"]

Per-project

Place templates in a templates/ directory at the project root. dex discovers them automatically alongside built-in templates.

my-project/
├── dex.toml
└── templates/
    └── acme-etl/
        ├── template.toml
        └── files/

Then use them like any built-in template:

dex init --template acme-etl --dir my_pipeline

See the Template Authoring Guide for the full template format, and Org Template Registries for how to share templates across a team.

Org-wide dex.toml

For teams, check a dex.toml into your project repos with shared pass-throughs and task definitions:

[project]
name = "my-project"

[passthrough.db]
command = "databricks"
description = "Databricks CLI"

[tasks.deploy-dev]
command = "databricks bundle deploy --target dev"
description = "Deploy to dev"

[tasks.deploy-prod]
command = "databricks bundle deploy --target prod"
description = "Deploy to prod"

All engineers on the project get the same commands — no individual setup required beyond having databricks on their PATH.

Contributing

Dev setup

Requires Rust (stable).

git clone https://github.com/yarrib/dex
cd dex
cargo build

Make targets

Target           Description
make build       cargo build
make test        cargo test
make lint        cargo clippy -- -D warnings
make fmt         cargo fmt
make fmt-check   Format check only (no writes)
make clean       Remove build artifacts
make docs        Build docs site
make docs-serve  Serve docs site at localhost:3000

Architecture

crates/dex-core/    Rust library — all business logic, no UI
crates/dex-cli/     Rust binary — clap CLI, dialoguer prompts, console output
templates/          Built-in Jinja2 templates, embedded at compile time

Rules:

  • dex-core has no terminal output. It returns data; dex-cli renders it.
  • dex-cli owns all user interaction: prompts, formatting, error display.
  • Config is TOML. No YAML, no JSON for config.
  • Template files use .j2 extension (Jinja2/minijinja syntax).
  • No unwrap() or expect() in library code — propagate with ?.
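A minimal illustration of the split and the error rule (the types and function here are invented for this sketch, not the real dex-core API):

```rust
// dex-core style: return data, propagate errors, print nothing.
pub struct TemplateInfo {
    pub name: String,
    pub description: String,
}

pub fn list_templates() -> Result<Vec<TemplateInfo>, std::io::Error> {
    // Real code would load embedded templates and use `?` on fallible calls;
    // this sketch returns a stub.
    Ok(vec![TemplateInfo {
        name: "dabs-package".to_string(),
        description: "Databricks Asset Bundle project".to_string(),
    }])
}

// dex-cli style: all rendering and error display lives here.
fn main() {
    match list_templates() {
        Ok(templates) => {
            for t in &templates {
                println!("{:<16} {}", t.name, t.description);
            }
        }
        Err(e) => eprintln!("error: {e}"),
    }
}
```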

Adding a template

  1. Create templates/<name>/template.toml (see Template Reference)
  2. Create templates/<name>/files/ with Jinja2 template files
  3. Run make build to embed the template in the binary
  4. Test with dex init --template <name>
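The files in step 2 are ordinary Jinja2 templates rendered with the template’s variables; a minimal entry might look like this (path and variable names are illustrative):

```jinja
{# templates/acme-etl/files/README.md.j2 #}
# {{ project_name }}

Scaffolded by dex from the acme-etl template.
```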

Adding a subcommand

  1. Add core logic to crates/dex-core/src/ (new module or extend existing)
  2. Expose via dex-core’s public API in lib.rs
  3. Add clap command in crates/dex-cli/src/commands/
  4. Register in crates/dex-cli/src/main.rs
  5. Add tests at each layer
  6. Update docs/SPEC.md with the command’s interface

Docs

The docs site uses mdBook and is served via GitHub Pages.

Local preview:

make docs-install   # install mdbook (once)
make docs-serve     # browse docs at localhost:3000

Commit conventions

feat:      new feature
fix:       bug fix
refactor:  code change without behaviour change
docs:      documentation only
test:      tests only
chore:     build, deps, tooling

Releasing

Releases are manual and tag-driven. Because main is protected, version bumps go through a PR. The tag is pushed after the PR is merged, which triggers the release workflow.

Prerequisites

  • All changes for the release are merged to main
  • You have push access to the repo (to push tags)

Release flow

1. Decide the version bump

Follow Semantic Versioning:

Change type                          Command           Example
Bug fixes, docs, chores              make bump-patch   0.1.0 → 0.1.1
New features, backwards-compatible   make bump-minor   0.1.0 → 0.2.0
Breaking changes                     make bump-major   0.1.0 → 1.0.0

2. Create the version bump PR

git checkout main && git pull
git checkout -b chore/release-v0.x.y
make bump-patch   # or bump-minor / bump-major
git push -u origin chore/release-v0.x.y

make bump-patch will:

  1. Update the version in Cargo.toml files
  2. Commit the changes with message chore: bump version to vX.Y.Z

Open a PR for the branch, get it merged.

Warning: The bump commands check for uncommitted changes and will abort if any exist.

3. Tag and push

After the PR is merged:

git checkout main && git pull
make tag-release

make tag-release tags the current HEAD with the version in Cargo.toml and pushes the tag to GitHub.

4. Watch the release workflow

Go to Actions → Release on GitHub. The workflow:

  1. Validates the tag format (v<major>.<minor>.<patch>)
  2. Verifies Cargo.toml version matches the tag
  3. Generates a changelog from conventional commits (git-cliff)
  4. Builds native binaries for all target platforms:
    • Linux x86_64
    • Linux aarch64
    • macOS Apple Silicon
    • macOS Intel
    • Windows x86_64
  5. Creates a GitHub Release with all binaries attached

5. Verify the release

  • Check GitHub Releases for the new release
  • Confirm binaries are attached for all platforms
  • Confirm the changelog looks correct
  • The docs site will auto-deploy the new versioned docs via the docs.yml workflow

Commit conventions and changelog

The changelog is generated automatically from commit messages using git-cliff. Use conventional commit prefixes so changes appear correctly:

Prefix     Changelog section
feat:      Features
fix:       Bug Fixes
refactor:  Refactoring
docs:      Documentation
test:      Testing
chore:     Chores
perf:      Performance

Commits without a conventional prefix are filtered out of the changelog.


Hotfix releases

Same flow as a regular release — fix on a branch, PR to main, then tag:

git checkout -b fix/critical-bug
# make your fix
git push -u origin fix/critical-bug
# open PR, merge
git checkout main && git pull
make bump-patch
git push -u origin chore/release-vX.Y.Z
# open PR, merge
git checkout main && git pull
make tag-release

If the release workflow fails

The most common causes:

  • Tag/version mismatch — the Cargo.toml version doesn’t match the tag. Delete the tag, fix the version, re-tag.
  • Build failure — a Rust compilation error. Fix the code, delete the tag, re-tag.

To delete a tag and re-release:

git tag -d v0.1.1
git push origin :refs/tags/v0.1.1
# fix the issue, then
make tag-release

Changelog

All notable changes to dex are documented here.

[Unreleased]

Bug Fixes

  • context-map: Populate tasks from scaffolded dex.toml (#48) (a25d62d)

Documentation

  • Add SCOPE.md — product scope guardrails and decision filter (#44) (7062c81)
  • Add PRD for dex-in-browser WASM feature (#45) (7b2cd91)
  • Add scaffolding differentiation PRD (#46) (a1dda0c)

Features

  • cli: Add dex templates list/show (#40) (0752c04)
  • templates: Add databricks-app-streamlit template (#41) (6e3f963)
  • templates: Add databricks-app-streamlit template (#42) (2239358)
  • templates: Add dabs-dashboard template (#43) (2eeb5a0)
  • Add next.js template, notebook trait, and context-map generation (#47) (95dba9b)

[0.2.0] — 2026-04-02

Bug Fixes

  • release: Use cross for Linux musl targets (bb22702)

Chores

  • Bump version to v0.2.0 (#39) (1078c43)

Documentation

  • Add PRD for AI-ready scaffolding (context map, traits, WASM) (#34) (6cbcd49)
  • Add PRD for Snowflake templates (#35) (b8fa631)

Features

  • skills: Add dex skills system — agent skill pack management (#36) (2f1e593)
  • mcp: Implement scaffold_agent tool and add .mcp.json (#37) (3035344)
  • devcontainer: Ai-dev-kit integration with profile-based skill setup (#38) (4535cd0)

[0.1.1] — 2026-04-01

Bug Fixes

  • release: Align install.sh artifact names with release.yml and add linux aarch64 target (#30) (a378387)
  • release: Support workflow_dispatch and fix first-release changelog (#31) (5da5907)
  • release: Support workflow_dispatch and fix first-release changelog (b604a2d)

Chores

  • Remove Python layer and expand Rust test coverage (#22) (f085abb)

Documentation

  • Migrate from MkDocs to mdBook (#23) (8f177a0)
  • Add changelog.md placeholder for mdBook build (#24) (1084403)
  • Rewrite all docs for Rust binary architecture (#25) (e3efd53)

Features

  • templates: Inline variables format, order field, and standards pre-fill (#20) (7719579)
  • Port dex to pure Rust single binary (#21) (30690b9)
  • Add web-based project scaffolding app (#26) (a6ab3a6)
  • cli: Add dex run command (#28) (bacd088)
  • templates: Add python-package template (#29) (dfdc0e9)

Testing

  • core: Add regression tests for embedded template variable and file loading (#27) (bf5f8b5)

[0.1.0] — 2026-03-10

Bug Fixes

  • Tag-only versioning to work with branch protection rules (22f9f80)
  • Slugify hyphens to underscores, expose system_prompt and claude_md in PyO3 binding (c576227)
  • ci: Use dtolnay/rust-toolchain@master with explicit toolchain input (d439535)
  • fmt: Apply cargo fmt to bring Rust sources in line with rustfmt (4dc2eb0)
  • fmt: Apply ruff format to cli.py (c8b5f61)
  • lint: Resolve clippy collapsible_if and ruff errors (0269bf6)
  • ci: Add maturin to dev extras, set fail-fast: false on Python matrix (289e43e)
  • ci: Hoist UV_PYTHON to job-level env (cfee45e)
  • types: Add _core.pyi stubs, suppress ty false positives on click.BaseCommand (2f9817c)
  • cli: Resolve all Python bug backlog items (22e29a5)
  • release: Fix bump-version idempotency and replace git-cliff Docker action (b5d760e)
  • release: Remove redundant version stamp step in build jobs (18f3db6)
  • docs: Resolve gh-pages deploy alias conflict (#18) (439426a)
  • docs: Add mike set-default to create root redirect (#19) (d8715ff)
  • docs: Use latest as version name, deploy numbered versions on tags (0c7c4f8)
  • docs: Suppress MkDocs 2.0 compatibility warning (0dcf505)
  • docs: Move NO_MKDOCS_2_WARNING to job level, delete version before deploy (bfeb2a5)

Chores

  • Add root .gitignore (d88daab)
  • Apply cargo fmt formatting fixes (2cadc0e)
  • Add TASKS.md backlog (f776c91)
  • Infrastructure, installer, and docs improvements (d21108e)
  • Note PassthroughCommand click.BaseCommand deprecation in backlog (1458fb4)
  • docs: Add workflow_dispatch to docs deploy workflow (0e067c2)
  • release: Remove auto-version workflow, add release guide (#13) (1f9f2e9)
  • Release v0.1.0 (#17) (7b9ade0)

Documentation

  • Default to uv tool install from GitHub Releases, remove pip/PyPI references (1772eb7)
  • Add quickstart, mcp serve page, versioning guide, and template … (#11) (d7b6caf)

Features

  • Add project specification, architecture, and initial scaffolding (4188e3b)
  • Add DABs template composition model (32f9d15)
  • Add DABs prompt modes and agent scaffolding (7409110)
  • Add dabs-package template, fix multi-variable scaffolding, add docs site (476e1ac)
  • Add Actions-based Pages deploy, release pipeline, and auto-versioning (8e0557a)
  • templates: Add dabs-etl, dabs-ml, and dabs-aiagent pattern templates (c3a7528)

Refactoring

  • dabs-aiagent: Lighten AI agent template — no LangChain, dbutils notebooks, DABs deploy job (1ad4db2)

Testing

  • Add CLI smoke tests to fix pytest exit code 5 (0ecad9f)