# add

`dbt-forge add` extends an existing dbt project. Use it when the starting structure is
already in place and you want to scaffold a new component without creating files by hand.
## Commands

```sh
# Scaffold components
dbt-forge add mart NAME
dbt-forge add source NAME
dbt-forge add source NAME --from-database   # introspect warehouse
dbt-forge add snapshot NAME
dbt-forge add seed NAME
dbt-forge add exposure NAME
dbt-forge add macro NAME

# Interactive generators
dbt-forge add model NAME
dbt-forge add test MODEL_NAME
dbt-forge add ci [PROVIDER]
dbt-forge add package [NAME]

# Tooling
dbt-forge add pre-commit
dbt-forge add project NAME   # add sub-project to a dbt Mesh
```

## Project detection
All `add` commands must run from inside an existing dbt project. The CLI walks upward
from the current directory until it finds `dbt_project.yml`.
If no dbt project is found, the command exits with an error. You can run `add` from any
subdirectory of the project — it does not need to be the project root.
## add mart

```sh
dbt-forge add mart finance
```

Scaffolds:

- `models/marts/finance/finance_orders.sql` — mart model stub
- `models/marts/finance/__finance__models.yml` — YAML with model name and description placeholder
- `models/intermediate/finance/int_finance__orders_enriched.sql` — intermediate model stub
## add source

```sh
dbt-forge add source salesforce
dbt-forge add source raw --from-database
dbt-forge add source raw --from-database --target prod
```

### Default mode

Scaffolds stub files:

- `models/staging/salesforce/_salesforce__sources.yml` — source definition with a sample table
- `models/staging/salesforce/_salesforce__models.yml` — YAML entry for the staging model
- `models/staging/salesforce/stg_salesforce__records.sql` — staging model referencing the source
### --from-database mode

Introspects a live warehouse to generate source YAML and staging models from real table metadata. The flow:

- Reads `profiles.yml` to detect the adapter and connection config
- Connects to the warehouse
- Lists schemas — presents a selection prompt
- Lists tables in the selected schema — presents a multi-select prompt
- Fetches column metadata (name, type, nullability) for each selected table
- Generates `_<source>__sources.yml` with real column types and `not_null` tests for non-nullable columns
- Generates `stg_<source>__<table>.sql` per table with explicit column listing

Use `--target` to select a non-default profile target (defaults to `dev`).
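As a rough sketch of the output, assuming a source named `raw` with a single `orders` table (the table, columns, and types below are illustrative, not taken from a real run), the generated `_raw__sources.yml` might look like:

```yaml
version: 2

sources:
  - name: raw
    tables:
      - name: orders
        columns:
          - name: id
            data_type: number
            data_tests:
              - not_null   # added because the column is non-nullable
          - name: amount
            data_type: number   # nullable column, so no not_null test
```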
### Optional adapter dependencies

The introspection feature requires the database driver for your adapter. Install the corresponding extra:

```sh
pip install dbt-forge[snowflake]
pip install dbt-forge[bigquery]
pip install dbt-forge[postgres]
pip install dbt-forge[duckdb]
pip install dbt-forge[databricks]
pip install dbt-forge[redshift]
pip install dbt-forge[trino]
pip install dbt-forge[spark]
```

If the driver is not installed, the command exits with instructions.
## add snapshot

```sh
dbt-forge add snapshot orders
```

Scaffolds:

- `snapshots/orders.sql`
The generated file contains a `{% snapshot %}` block configured with the `timestamp`
strategy. Update the `unique_key`, `updated_at`, and source reference to match your data.
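A minimal sketch of what the generated file might contain, assuming the documented `timestamp` strategy; the source reference, key, and target schema below are placeholders, not the tool's exact template:

```sql
{% snapshot orders %}

{{
    config(
      target_schema='snapshots',
      unique_key='id',           -- placeholder key
      strategy='timestamp',
      updated_at='updated_at'    -- placeholder timestamp column
    )
}}

-- placeholder source reference
select * from {{ source('raw', 'orders') }}

{% endsnapshot %}
```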
## add seed

```sh
dbt-forge add seed dim_country
```

Scaffolds:

- `seeds/dim_country.csv` — a three-column CSV stub (`id`, `name`, `created_at`)
- `seeds/_dim_country__seeds.yml` — YAML with column descriptions and `unique`/`not_null` tests
## add exposure

```sh
dbt-forge add exposure weekly_revenue
```

Scaffolds:

- `models/marts/__weekly_revenue__exposures.yml`
The generated file declares a dashboard exposure with `type: dashboard`,
`maturity: medium`, a placeholder `depends_on` reference, and an owner block.
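A sketch of the generated exposure, with the `depends_on` reference and owner details as placeholders (the exact template may differ):

```yaml
version: 2

exposures:
  - name: weekly_revenue
    type: dashboard
    maturity: medium
    depends_on:
      - ref('fct_orders')   # placeholder: point at the real upstream model
    owner:
      name: Data Team       # placeholder owner
      email: data@example.com
```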
## add macro

```sh
dbt-forge add macro cents_to_dollars
```

Scaffolds:

- `macros/cents_to_dollars.sql`
The generated file contains a named `{% macro %}` block with a placeholder body.
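For illustration, a filled-in version of such a macro could look like the following; the body and `scale` argument are assumptions, not the generated stub:

```sql
{% macro cents_to_dollars(column_name, scale=2) %}
    -- illustrative body: convert an integer cents column to dollars
    ({{ column_name }} / 100)::numeric(16, {{ scale }})
{% endmacro %}
```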
## add model

```sh
dbt-forge add model users
```

Interactively scaffolds a new dbt model with SQL and YAML. Prompts for:
- Layer: staging, intermediate, or marts
- Materialization: view, table, incremental, or ephemeral
- Source (staging only): auto-detected from existing source YAML files, or entered manually
- Description: model-level description
- Columns: optional interactive loop to define column names, descriptions, and tests
### Layer defaults

| Layer | Default materialization | Name prefix | Directory |
|---|---|---|---|
| staging | view | `stg_<source>__` | `models/staging/<source>/` |
| intermediate | ephemeral | `int_` | `models/intermediate/` |
| marts | table | (none) | `models/marts/` |
### Source auto-detection

For staging models, the CLI scans `models/**/*sources*.yml` and `models/**/*sources*.yaml`
for defined source names. If sources are found, it presents a selection list:

```
? Source name:
> stripe
  salesforce
  Other (enter manually)
```

If no sources are found or “Other” is selected, the CLI falls back to a text prompt.
### Generated files

For a staging model named `users` with source `stripe`:

- `models/staging/stripe/stg_stripe__users.sql` — SQL with `source('stripe', 'users')`
- `models/staging/stripe/_stg_stripe__users__models.yml` — YAML entry with columns and tests

For incremental models, the SQL includes an `is_incremental()` block:

```sql
{{
  config(
    materialized='incremental',
    unique_key='id'
  )
}}

select * from {{ ref('upstream_model') }}

{% if is_incremental() %}
where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```

### Column definition
When adding columns interactively, each column prompts for:
- Column name
- Description
- Tests to apply: `unique`, `not_null`, `accepted_values`, `relationships`
## add test

```sh
dbt-forge add test stg_orders
```

Scaffolds a test for an existing model. Prompts for test type:
### Data test

Generates `tests/assert_stg_orders_valid.sql` with a SQL assertion stub that references
the model via `ref()`.
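As a sketch (the stub's exact contents may differ): a dbt data test fails when its query returns rows, so the generated file would follow this shape:

```sql
-- tests/assert_stg_orders_valid.sql
-- A data test fails if this query returns any rows.
select *
from {{ ref('stg_orders') }}
where 1 = 0   -- placeholder condition: replace with a real invariant
```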
### Unit test (dbt 1.8+)

Generates `tests/unit/test_stg_orders.yml` with a mock-based unit test:

```yaml
unit_tests:
  - name: test_stg_orders
    model: stg_orders
    given:
      - input: ref('stg_orders')
        rows:
          - {id: 1, amount: 100}
    expect:
      rows:
        - {id: 1, amount: 100}
```

### Schema test (column-level in .yml)
Generates `models/_stg_orders__tests.yml` with column-level tests. The flow:

- **Column detection** — scans existing `models/**/*.yml` for the model’s column definitions. If found, presents a checkbox to select columns. If not found, prompts for comma-separated column names.
- **Test selection** — for each column, prompts for test types:
  - `unique` — column values are unique
  - `not_null` — no null values
  - `accepted_values` — prompts for a comma-separated list of allowed values
  - `relationships` — prompts for the referenced model name and field
- **Output** — generates a YAML file:

```yaml
version: 2

models:
  - name: stg_orders
    columns:
      - name: id
        data_tests:
          - unique
          - not_null
      - name: status
        data_tests:
          - accepted_values:
              values: ['active', 'inactive', 'archived']
      - name: customer_id
        data_tests:
          - relationships:
              to: ref('dim_customers')
              field: id
```

## add ci
```sh
dbt-forge add ci github
dbt-forge add ci gitlab
dbt-forge add ci bitbucket
dbt-forge add ci            # interactive prompt
```

Scaffolds CI/CD pipeline config for an existing dbt project. Reuses the same templates
used during `init`. Auto-detects the adapter from `profiles/profiles.yml`.
Provider arguments (case-insensitive):
| Argument | Provider | Generated file |
|---|---|---|
| `github` | GitHub Actions | `.github/workflows/dbt_ci.yml` |
| `gitlab` | GitLab CI | `.gitlab-ci.yml` |
| `bitbucket` | Bitbucket Pipelines | `bitbucket-pipelines.yml` |
Without an argument, the CLI shows a multi-select prompt. Skips if the CI config file already exists.
## add package

```sh
dbt-forge add package dbt-utils
dbt-forge add package --list   # browse available packages
dbt-forge add package          # interactive selection
```

Adds a dbt package to `packages.yml` from a curated registry with known-good version
ranges. Parses the existing YAML, appends the new entry, and writes it back.
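For example, after `dbt-forge add package dbt-utils`, the entry in `packages.yml` would look along these lines; the version range shown is illustrative, since the actual pin comes from the curated registry:

```yaml
packages:
  - package: dbt-labs/dbt_utils
    version: [">=1.0.0", "<2.0.0"]   # illustrative range
```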
### Available packages

| Package | Hub path |
|---|---|
| dbt-utils | dbt-labs/dbt_utils |
| dbt-expectations | metaplane/dbt_expectations |
| dbt-codegen | dbt-labs/codegen |
| elementary | elementary-data/elementary |
| dbt-audit-helper | dbt-labs/audit_helper |
| dbt-project-evaluator | dbt-labs/dbt_project_evaluator |
| dbt-meta-testing | tnightengale/dbt_meta_testing |
| dbt-date | calogica/dbt_date |
| dbt-profiler | data-mie/dbt_profiler |
| re-data | re-data/dbt_re_data |
| dbt-artifacts | brooklyn-data/dbt_artifacts |
| dbt-external-tables | dbt-labs/dbt_external_tables |
| metrics | dbt-labs/metrics |
| dbt-activity-schema | bcodell/dbt_activity_schema |
| dbt-constraints | Snowflake-Labs/dbt_constraints |
| dbt-privacy | pvcy/dbt_privacy |
| dbt-unit-testing | EqualExperts/dbt-unit-testing |
| dbt-fivetran-utils | fivetran/fivetran_utils |
| dbt-snowplow-web | snowplow/dbt_snowplow_web |
| dbt-segment | dbt-labs/segment |
Use `--list` to see all packages and their hub paths in the terminal.
### Package config generation

Some packages need configuration in `dbt_project.yml`. When you add one of these
packages, the CLI automatically merges the required vars into `dbt_project.yml`:
| Package | Config added to `dbt_project.yml` |
|---|---|
| elementary | `vars: { elementary: { edr_cli_run: "true" } }` |
| dbt-project-evaluator | `vars: { dbt_project_evaluator: { documentation_coverage_target: 100, test_coverage_target: 100 } }` |
If `dbt_project.yml` does not exist or is not parseable, the config step is skipped
with a warning.

Skips if the package is already present in `packages.yml`.
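For example, adding elementary merges the following into `dbt_project.yml`:

```yaml
vars:
  elementary:
    edr_cli_run: "true"
```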
## add pre-commit

```sh
dbt-forge add pre-commit
```

Scaffolds:

- `.pre-commit-config.yaml` — hooks for trailing whitespace, end-of-file, YAML validation, yamllint, and optionally SQLFluff (auto-detected from `.sqlfluff`)
- `.editorconfig` — consistent formatting rules (UTF-8, LF line endings, 2-space YAML/SQL indent)
- `.sqlfluffignore` — excludes `target/`, `dbt_packages/`, `logs/` (only if `.sqlfluff` exists)
After running, activate the hooks with `pre-commit install`.
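A sketch of what the generated `.pre-commit-config.yaml` might contain; the pinned `rev` values are illustrative, and the actual hook set depends on whether `.sqlfluff` is detected:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0   # illustrative pin
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.35.1  # illustrative pin
    hooks:
      - id: yamllint
```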
## add project

```sh
dbt-forge add project analytics
dbt-forge add project analytics --purpose marts
```

Adds a new sub-project to an existing dbt Mesh setup. Must be run from inside a mesh
project (a directory containing a Makefile and sub-directories with `dbt_project.yml`).
The command:
- Detects the mesh root by walking up from the current directory
- Lists existing sub-projects
- Prompts for upstream dependencies (multi-select from existing sub-projects)
- Generates the sub-project with:
  - `dbt_project.yml`
  - `dependencies.yml` (if upstream deps selected)
  - `models/_groups.yml` with group definition
  - Example models with access levels based on the `--purpose` flag
  - `profiles/profiles.yml` stub
  - Empty scaffold directories (macros, tests, seeds, snapshots, analyses)
The adapter is auto-detected from the first existing sub-project’s profile.
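As a sketch, assuming an upstream sub-project named `core` was selected (all names here are illustrative), the generated `dependencies.yml` and `models/_groups.yml` might look like:

```yaml
# dependencies.yml
projects:
  - name: core

# models/_groups.yml
groups:
  - name: analytics
    owner:
      name: Analytics Team   # placeholder owner
```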
## Behavior and limits

- All commands must run inside an existing dbt project (any directory containing or beneath `dbt_project.yml`).
- Existing files are not overwritten (except `add package`, which appends to `packages.yml`, and package config, which merges into `dbt_project.yml`).
- Interactive commands (`add model`, `add test`, `add ci`, `add package`) require a terminal.
- The generated SQL and YAML are starter files and should be adapted to the real warehouse, source schema, and naming rules used by the project.
- Run `dbt-forge add --help` for a summary of all subcommands.
## Recommended workflow

Use `init` to scaffold the starting structure, then use `add` commands as the dbt
project grows into new domains, source systems, or analytical artifacts. Use
`doctor` to validate that the project follows best practices as it evolves.