# add

dbt-forge add extends an existing dbt project. Use it when the starting structure is already in place and you want to scaffold a new component without creating files by hand.

```sh
# Scaffold components
dbt-forge add mart NAME
dbt-forge add source NAME
dbt-forge add source NAME --from-database   # introspect the warehouse
dbt-forge add snapshot NAME
dbt-forge add seed NAME
dbt-forge add exposure NAME
dbt-forge add macro NAME

# Interactive generators
dbt-forge add model NAME
dbt-forge add test MODEL_NAME
dbt-forge add ci [PROVIDER]
dbt-forge add package [NAME]

# Tooling
dbt-forge add pre-commit
dbt-forge add project NAME   # add a sub-project to a dbt Mesh
```

All add commands must run from inside an existing dbt project: the CLI walks upward from the current directory until it finds dbt_project.yml, so you can run add from any subdirectory of the project, not only the root. If no dbt project is found, the command exits with an error.


## mart

```sh
dbt-forge add mart finance
```

Scaffolds:

  • models/marts/finance/finance_orders.sql — mart model stub
  • models/marts/finance/__finance__models.yml — YAML with model name and description placeholder
  • models/intermediate/finance/int_finance__orders_enriched.sql — intermediate model stub

## source

```sh
dbt-forge add source salesforce
dbt-forge add source raw --from-database
dbt-forge add source raw --from-database --target prod
```

Scaffolds stub files:

  • models/staging/salesforce/_salesforce__sources.yml — source definition with a sample table
  • models/staging/salesforce/_salesforce__models.yml — YAML entry for the staging model
  • models/staging/salesforce/stg_salesforce__records.sql — staging model referencing the source

### --from-database

Introspects a live warehouse to generate source YAML and staging models from real table metadata. The flow:

  1. Reads profiles.yml to detect the adapter and connection config
  2. Connects to the warehouse
  3. Lists schemas — presents a selection prompt
  4. Lists tables in the selected schema — presents a multi-select prompt
  5. Fetches column metadata (name, type, nullability) for each selected table
  6. Generates _<source>__sources.yml with real column types and not_null tests for non-nullable columns
  7. Generates stg_<source>__<table>.sql per table with explicit column listing
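
The generated source YAML follows this general shape (the source, table, and column names below are illustrative, not fixed output):

```yaml
# _raw__sources.yml (illustrative)
version: 2
sources:
  - name: raw
    schema: raw
    tables:
      - name: orders
        columns:
          - name: id
            data_type: number
            data_tests:
              - not_null   # added because the column is NOT NULL in the warehouse
          - name: status
            data_type: varchar
```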

Use --target to select a non-default profile target (defaults to dev).

The introspection feature requires the database driver for your adapter. Install the corresponding extra:

```sh
pip install "dbt-forge[snowflake]"
pip install "dbt-forge[bigquery]"
pip install "dbt-forge[postgres]"
pip install "dbt-forge[duckdb]"
pip install "dbt-forge[databricks]"
pip install "dbt-forge[redshift]"
pip install "dbt-forge[trino]"
pip install "dbt-forge[spark]"
```

If the driver is not installed, the command exits with instructions.

## snapshot

```sh
dbt-forge add snapshot orders
```

Scaffolds:

  • snapshots/orders.sql

The generated file contains a {% snapshot %} block configured with the timestamp strategy. Update the unique_key, updated_at, and source reference to match your data.
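
A timestamp-strategy snapshot block generally has this shape (the schema, key, and source names here are placeholders to replace, not what the CLI emits verbatim):

```sql
{% snapshot orders_snapshot %}

{{
    config(
      target_schema='snapshots',
      unique_key='id',
      strategy='timestamp',
      updated_at='updated_at'
    )
}}

select * from {{ source('raw', 'orders') }}

{% endsnapshot %}
```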

## seed

```sh
dbt-forge add seed dim_country
```

Scaffolds:

  • seeds/dim_country.csv — a three-column CSV stub (id, name, created_at)
  • seeds/_dim_country__seeds.yml — YAML with column descriptions and unique/not_null tests
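
The seed YAML has roughly this shape (descriptions here are illustrative placeholders):

```yaml
# seeds/_dim_country__seeds.yml (illustrative)
version: 2
seeds:
  - name: dim_country
    columns:
      - name: id
        description: "Primary key"
        data_tests: [unique, not_null]
      - name: name
        description: "Country name"
      - name: created_at
        description: "Load timestamp"
```
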

## exposure

```sh
dbt-forge add exposure weekly_revenue
```

Scaffolds:

  • models/marts/__weekly_revenue__exposures.yml

The generated file declares a dashboard exposure with type: dashboard, maturity: medium, a placeholder depends_on reference, and an owner block.
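
Put together, the generated exposure looks roughly like this (the depends_on target and owner details are placeholders you fill in):

```yaml
# models/marts/__weekly_revenue__exposures.yml (illustrative)
version: 2
exposures:
  - name: weekly_revenue
    type: dashboard
    maturity: medium
    depends_on:
      - ref('fct_orders')   # placeholder; point at the real upstream model
    owner:
      name: Your Name
      email: you@example.com
```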

## macro

```sh
dbt-forge add macro cents_to_dollars
```

Scaffolds:

  • macros/cents_to_dollars.sql

The generated file contains a named {% macro %} block with a placeholder body.
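
The file has this general shape; the body below is an illustrative implementation, not the CLI's literal output:

```sql
{% macro cents_to_dollars(column_name) %}
    {# placeholder body: replace with the real conversion logic #}
    ({{ column_name }} / 100)
{% endmacro %}
```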


## model

```sh
dbt-forge add model users
```

Interactively scaffolds a new dbt model with SQL and YAML. Prompts for:

  • Layer: staging, intermediate, or marts
  • Materialization: view, table, incremental, or ephemeral
  • Source (staging only): auto-detected from existing source YAML files, or entered manually
  • Description: model-level description
  • Columns: optional interactive loop to define column names, descriptions, and tests

| Layer | Default materialization | Name prefix | Directory |
| --- | --- | --- | --- |
| staging | view | `stg_<source>__` | `models/staging/<source>/` |
| intermediate | ephemeral | `int_` | `models/intermediate/` |
| marts | table | (none) | `models/marts/` |

For staging models, the CLI scans models/**/*sources*.yml and models/**/*sources*.yaml for defined source names. If sources are found, it presents a selection list:

```
? Source name:
> stripe
  salesforce
  Other (enter manually)
```

If no sources are found or “Other” is selected, the CLI falls back to a text prompt.

For a staging model named users with source stripe:

  • models/staging/stripe/stg_stripe__users.sql — SQL with source('stripe', 'users')
  • models/staging/stripe/_stg_stripe__users__models.yml — YAML entry with columns and tests
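
The staging SQL follows the conventional import/rename pattern; the column names below are illustrative placeholders, not guaranteed output:

```sql
-- stg_stripe__users.sql (illustrative)
with source as (
    select * from {{ source('stripe', 'users') }}
),

renamed as (
    select
        id,
        email,        -- placeholder columns; replace with the real schema
        created_at
    from source
)

select * from renamed
```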

For incremental models, the SQL includes an is_incremental() block:

```sql
{{
    config(
        materialized='incremental',
        unique_key='id'
    )
}}

select * from {{ ref('upstream_model') }}

{% if is_incremental() %}
where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```

When adding columns interactively, each column prompts for:

  • Column name
  • Description
  • Tests to apply: unique, not_null, accepted_values, relationships

## test

```sh
dbt-forge add test stg_orders
```

Scaffolds a test for an existing model. Prompts for the test type: singular, unit, or generic.

### Singular

Generates tests/assert_stg_orders_valid.sql with a SQL assertion stub that references the model via ref().
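
A singular test passes when its query returns zero rows; the stub looks roughly like this (the where clause below is an illustrative assertion, not the generated one):

```sql
-- tests/assert_stg_orders_valid.sql (illustrative)
select *
from {{ ref('stg_orders') }}
where amount < 0   -- placeholder invariant; replace with a real check
```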

### Unit

Generates tests/unit/test_stg_orders.yml with a mock-based unit test:

```yaml
unit_tests:
  - name: test_stg_orders
    model: stg_orders
    given:
      - input: ref('stg_orders')
        rows:
          - {id: 1, amount: 100}
    expect:
      rows:
        - {id: 1, amount: 100}
```

### Generic

Generates models/_stg_orders__tests.yml with column-level tests. The flow:

  1. Column detection — scans existing models/**/*.yml for the model’s column definitions. If found, presents a checkbox to select columns. If not found, prompts for comma-separated column names.

  2. Test selection — for each column, prompts for test types:

    • unique — column values are unique
    • not_null — no null values
    • accepted_values — prompts for a comma-separated list of allowed values
    • relationships — prompts for the referenced model name and field
  3. Output — generates a YAML file:

```yaml
version: 2
models:
  - name: stg_orders
    columns:
      - name: id
        data_tests:
          - unique
          - not_null
      - name: status
        data_tests:
          - accepted_values:
              values: ['active', 'inactive', 'archived']
      - name: customer_id
        data_tests:
          - relationships:
              to: ref('dim_customers')
              field: id
```

## ci

```sh
dbt-forge add ci github
dbt-forge add ci gitlab
dbt-forge add ci bitbucket
dbt-forge add ci            # interactive prompt
```

Scaffolds CI/CD pipeline config for an existing dbt project. Reuses the same templates used during init. Auto-detects the adapter from profiles/profiles.yml.

Provider arguments (case-insensitive):

| Argument | Provider | Generated file |
| --- | --- | --- |
| github | GitHub Actions | .github/workflows/dbt_ci.yml |
| gitlab | GitLab CI | .gitlab-ci.yml |
| bitbucket | Bitbucket Pipelines | bitbucket-pipelines.yml |

Without an argument, the CLI shows a multi-select prompt. Skips if the CI config file already exists.
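
For the GitHub provider, dbt CI workflows typically follow this shape. This is only a sketch of the pattern, assuming a Snowflake adapter; the exact file dbt-forge generates may differ:

```yaml
# .github/workflows/dbt_ci.yml (illustrative shape)
name: dbt CI
on:
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-snowflake   # adapter detected from profiles.yml
      - run: dbt deps
      - run: dbt build --target ci
```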

## package

```sh
dbt-forge add package dbt-utils
dbt-forge add package --list   # browse available packages
dbt-forge add package          # interactive selection
```

Adds a dbt package to packages.yml from a curated registry with known-good version ranges. Parses the existing YAML, appends the new entry, and writes it back.
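
For example, after adding dbt-utils, packages.yml contains an entry of this form (the version range shown is illustrative; the CLI pins its own known-good range):

```yaml
packages:
  - package: dbt-labs/dbt_utils
    version: [">=1.0.0", "<2.0.0"]
```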

| Package | Hub path |
| --- | --- |
| dbt-utils | dbt-labs/dbt_utils |
| dbt-expectations | metaplane/dbt_expectations |
| dbt-codegen | dbt-labs/codegen |
| elementary | elementary-data/elementary |
| dbt-audit-helper | dbt-labs/audit_helper |
| dbt-project-evaluator | dbt-labs/dbt_project_evaluator |
| dbt-meta-testing | tnightengale/dbt_meta_testing |
| dbt-date | calogica/dbt_date |
| dbt-profiler | data-mie/dbt_profiler |
| re-data | re-data/dbt_re_data |
| dbt-artifacts | brooklyn-data/dbt_artifacts |
| dbt-external-tables | dbt-labs/dbt_external_tables |
| metrics | dbt-labs/metrics |
| dbt-activity-schema | bcodell/dbt_activity_schema |
| dbt-constraints | Snowflake-Labs/dbt_constraints |
| dbt-privacy | pvcy/dbt_privacy |
| dbt-unit-testing | EqualExperts/dbt-unit-testing |
| dbt-fivetran-utils | fivetran/fivetran_utils |
| dbt-snowplow-web | snowplow/dbt_snowplow_web |
| dbt-segment | dbt-labs/segment |

Use --list to see all packages and their hub paths in the terminal.

Some packages need configuration in dbt_project.yml. When you add one of these packages, the CLI automatically merges the required vars into dbt_project.yml:

| Package | Config added to dbt_project.yml |
| --- | --- |
| elementary | `vars: { elementary: { edr_cli_run: "true" } }` |
| dbt-project-evaluator | `vars: { dbt_project_evaluator: { documentation_coverage_target: 100, test_coverage_target: 100 } }` |

If dbt_project.yml does not exist or is not parseable, the config step is skipped with a warning.
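
The merge behaves like a recursive dictionary merge that never clobbers values the project already defines. A minimal Python sketch (illustrative; not dbt-forge's actual code):

```python
def merge_vars(project: dict, new_vars: dict) -> dict:
    """Merge `new_vars` into project['vars'], keeping any values
    the project already sets at the same key path."""
    existing = project.setdefault("vars", {})

    def merge(dst: dict, src: dict) -> None:
        for key, value in src.items():
            if isinstance(value, dict) and isinstance(dst.get(key), dict):
                merge(dst[key], value)      # descend into nested vars
            else:
                dst.setdefault(key, value)  # keep user-set values

    merge(existing, new_vars)
    return project
```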

Skips if the package is already present in packages.yml.

## pre-commit

```sh
dbt-forge add pre-commit
```

Scaffolds:

  • .pre-commit-config.yaml — hooks for trailing whitespace, end-of-file, YAML validation, yamllint, and optionally SQLFluff (auto-detected from .sqlfluff)
  • .editorconfig — consistent formatting rules (UTF-8, LF line endings, 2-space YAML/SQL indent)
  • .sqlfluffignore — excludes target/, dbt_packages/, logs/ (only if .sqlfluff exists)

After running, activate the hooks with pre-commit install.
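
The generated config follows the standard pre-commit layout; the hook revisions below are illustrative, not the pinned versions dbt-forge writes:

```yaml
# .pre-commit-config.yaml (illustrative)
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.33.0
    hooks:
      - id: yamllint
```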


## project

```sh
dbt-forge add project analytics
dbt-forge add project analytics --purpose marts
```

Adds a new sub-project to an existing dbt Mesh setup. Must be run from inside a mesh project (a directory containing a Makefile and sub-directories with dbt_project.yml).

The command:

  1. Detects the mesh root by walking up from the current directory
  2. Lists existing sub-projects
  3. Prompts for upstream dependencies (multi-select from existing sub-projects)
  4. Generates the sub-project with:
    • dbt_project.yml
    • dependencies.yml (if upstream deps selected)
    • models/_groups.yml with group definition
    • Example models with access levels based on the --purpose flag
    • profiles/profiles.yml stub
    • Empty scaffold directories (macros, tests, seeds, snapshots, analyses)

The adapter is auto-detected from the first existing sub-project’s profile.
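
Two of the generated files use standard dbt Mesh syntax; a sketch, assuming a sub-project named analytics with an upstream sub-project named core:

```yaml
# analytics/models/_groups.yml (illustrative)
groups:
  - name: analytics
    owner:
      name: Analytics Team

# analytics/dependencies.yml (only if upstream deps were selected)
projects:
  - name: core
```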


## Notes

  • All commands must run inside an existing dbt project (any directory containing or beneath dbt_project.yml).
  • Existing files are not overwritten (except add package, which appends to packages.yml, and package config which merges into dbt_project.yml).
  • Interactive commands (add model, add test, add ci, add package) require a terminal.
  • The generated SQL and YAML are starter files and should be adapted to the real warehouse, source schema, and naming rules used by the project.
  • Run dbt-forge add --help for a summary of all subcommands.

Use init to scaffold the starting structure, then use add commands as the dbt project grows into new domains, source systems, or analytical artifacts. Use doctor to validate that the project follows best practices as it evolves.