creating-dbt-models
About
This skill creates and modifies dbt models while automatically discovering and following your project's naming conventions. It handles tasks involving creating, building, or implementing models across any layer and can work from schema.yml specs or output requirements. Crucially, it runs `dbt build` (not just compile) to verify the models work correctly.
Quick Install
Claude Code
Recommended:
/plugin add https://github.com/majiayu000/claude-skill-registry
Or clone directly:
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/creating-dbt-models
Copy and paste this command in Claude Code to install this skill.
Documentation
dbt Model Development
Read before you write. Build after you write. Verify your output.
Critical Rules
- ALWAYS run `dbt build` after creating/modifying models - compile is NOT enough
- ALWAYS verify output after build using `dbt show` - don't assume success
- If build fails 3+ times, stop and reassess your entire approach
Workflow
1. Understand the Task Requirements
- What columns are needed? List them explicitly.
- What is the grain of the table (one row per what)?
- What calculations or aggregations are required?
2. Discover Project Conventions
cat dbt_project.yml
find models/ -name "*.sql" | head -20
Read 2-3 existing models to learn naming, config, and SQL patterns.
3. Find Similar Models
# Find models with similar purpose
find models/ -name "*agg*.sql" -o -name "*fct_*.sql" | head -5
Learn from existing models: join types, aggregation patterns, NULL handling.
4. Check Upstream Data
# Preview upstream data if needed
dbt show --select <upstream_model> --limit 10
5. Write the Model
Follow discovered conventions. Match the required columns exactly.
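As an illustration only (the model, table, and column names here are hypothetical, not taken from any real project), a model following common dbt CTE conventions might look like:

```sql
-- models/marts/fct_order_revenue.sql (hypothetical name and schema)
-- Grain: one row per order

with orders as (

    select * from {{ ref('stg_orders') }}  -- hypothetical upstream model

),

final as (

    select
        order_id,
        customer_id,
        quantity,
        unit_price,
        quantity * unit_price as total_revenue  -- match required column names exactly
    from orders

)

select * from final
```

Keep the select list limited to exactly the required columns so the output matches the spec, not your assumptions.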
6. Compile (Syntax Check)
dbt compile --select <model_name>
7. BUILD - MANDATORY
This step is REQUIRED. Do NOT skip it.
dbt build --select <model_name>
If build fails:
- Read the error carefully
- Fix the specific issue
- Run build again
- If fails 3+ times, step back and reassess approach
8. Verify Output (CRITICAL)
Build success does NOT mean correct output.
# Check the table was created and preview data
dbt show --select <model_name> --limit 10
Verify:
- Column names match requirements exactly
- Row count is reasonable
- Data values look correct
- No unexpected NULLs
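The NULL check above can be spot-checked with an inline query via `dbt show --inline`; the model and column names here are placeholders:

```sql
-- Counts unexpected NULLs in a required column (all names hypothetical)
select
    count(*) as total_rows,
    count(*) - count(total_revenue) as null_total_revenue
from {{ ref('fct_order_revenue') }}
```

If `null_total_revenue` is non-zero and NULLs were not expected, revisit the join and coalescing logic before declaring done.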
9. Verify Calculations Against Sample Data
For models with calculations, verify correctness manually:
# Pick a specific row and verify calculation by hand
dbt show --inline "
select *
from {{ ref('model_name') }}
where <primary_key> = '<known_value>'
" --limit 1
# Cross-check aggregations
dbt show --inline "
select count(*), sum(<column>)
from {{ ref('model_name') }}
"
For example, if calculating total_revenue = quantity * price:
- Pick one row from output
- Look up the source quantity and price
- Manually calculate: does it match?
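That manual check can also be run as a one-off comparison query, assuming hypothetical source columns `quantity` and `unit_price` on an upstream model:

```sql
-- Run via dbt show --inline; all model and column names are illustrative
-- Returns rows where the model's total_revenue disagrees with a recomputation
select
    m.order_id,
    m.total_revenue,
    s.quantity * s.unit_price as expected_revenue
from {{ ref('fct_order_revenue') }} m
join {{ ref('stg_orders') }} s using (order_id)
where m.total_revenue != s.quantity * s.unit_price
```

An empty result means the calculation matches the source for every row, not just the one you checked by hand.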
10. Re-review Against Requirements
Before declaring done, re-read the original request:
- Did you implement what was asked, not what you assumed?
- Are column names exactly as specified?
- Is the calculation logic correct per the requirements?
- Does the grain (one row per what?) match what was requested?
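When the requirements come from a schema.yml spec, adding tests there lets `dbt build` enforce parts of this checklist automatically; the model and column names below are illustrative, not prescribed:

```yaml
# models/marts/schema.yml (hypothetical)
version: 2

models:
  - name: fct_order_revenue
    description: One row per order with computed revenue
    columns:
      - name: order_id
        tests:
          - unique      # enforces the stated grain
          - not_null
      - name: total_revenue
        tests:
          - not_null    # catches unexpected NULLs at build time
```

Because `dbt build` runs tests as well as models, these checks fail the build step itself rather than relying on manual review.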
Anti-Patterns
- Declaring done after compile without running build
- Not verifying output data after build
- Getting stuck in compile/build error loops
- Assuming table exists just because model file exists
- Writing SQL without checking existing model patterns first
