"Vibe coding" is revolutionizing software development. We think "vibe analysis"—think "Cursor for data & analytics"—has the potential to be much bigger.
There are ~2 million software engineers in the US, while ~4 million data professionals support business roles (sales, marketing, ops, etc.) with ad-hoc data requests and report building. In the US alone, we’re spending 10+ billion hours a year building and reviewing these reports.
Our thesis is that AI will greatly increase that number: many more people will do analysis, much faster and much more often (much like what is happening with software development).
For this to become reality, data needs to be cleaned, well-maintained, tested, and documented. We're building Buster to help data teams do this at scale.
what is buster?
Buster helps data teams ship AI analytics that actually work.
It generates and maintains the context that AI needs to understand a company’s data — and manages that context directly inside the dbt project.
Many data teams already have their data modeled in dbt, but their documentation (if it exists at all) is typically inadequate for an AI analyst to understand how models relate, what each field is for, or how metrics are defined. That’s why self-serve AI tools often feel unreliable — they don’t have the context they need.
Buster fixes that. It connects to your dbt repo and warehouse, profiles every model, and writes rich, structured documentation directly into your dbt YAML files. It then keeps those docs up to date automatically through pull requests and scheduled profiling runs.
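To make that concrete, here is a minimal sketch of what writing generated documentation into a dbt YAML file could look like, assuming a simple PyYAML-based merge. The file path, model name, and descriptions are invented for illustration; this is not Buster's actual code.

```python
# Hypothetical sketch, not Buster's actual code: merge generated column
# descriptions into a dbt schema.yml so the documentation lives in the repo.
from pathlib import Path
import yaml  # PyYAML

def merge_column_docs(schema_yml: Path, model_name: str, column_docs: dict[str, str]) -> None:
    """Write or refresh column descriptions for one model in a dbt YAML file."""
    schema = yaml.safe_load(schema_yml.read_text()) if schema_yml.exists() else None
    schema = schema or {"version": 2, "models": []}

    models = schema.setdefault("models", [])
    model = next((m for m in models if m.get("name") == model_name), None)
    if model is None:
        model = {"name": model_name, "columns": []}
        models.append(model)

    existing = {c["name"]: c for c in model.setdefault("columns", [])}
    for name, description in column_docs.items():
        col = existing.get(name)
        if col is None:
            col = {"name": name}
            model["columns"].append(col)
        col["description"] = description  # generated text, reviewable in the PR

    schema_yml.parent.mkdir(parents=True, exist_ok=True)
    schema_yml.write_text(yaml.safe_dump(schema, sort_keys=False))

# Example: descriptions an agent might derive from lineage and profiling.
merge_column_docs(
    Path("models/marts/schema.yml"),  # hypothetical path
    "fct_orders",
    {
        "order_id": "Primary key. One row per order; unique and never null.",
        "status": "Order state. Observed values: placed, shipped, returned, completed.",
    },
)
```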
how does it work?
Buster integrates with your dbt project, GitHub, and data warehouse. Once connected, it runs a system of agents that:
Generate context: Analyze models, lineage, metadata, and profiles to write complete dbt-native documentation.
Maintain context: Review pull requests, detect model changes, and update documentation automatically before merge.
Monitor drift: Run scheduled data profiling tests that look for distribution shifts, new values, and schema drift — then update docs or alert the team.
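As a rough illustration of the drift-monitoring step above, here is a hypothetical Python sketch that compares a fresh column profile against the last accepted snapshot. The ColumnProfile fields and thresholds are invented, not Buster's actual checks.

```python
# Hypothetical sketch of a scheduled drift check (illustrative only):
# compare a fresh column profile against the last accepted snapshot and
# flag anything that should trigger a doc update or an alert.
from dataclasses import dataclass

@dataclass
class ColumnProfile:
    null_rate: float           # fraction of null values
    distinct_values: set[str]  # categorical values observed in a sample

def detect_drift(baseline: dict[str, ColumnProfile],
                 current: dict[str, ColumnProfile],
                 null_rate_tolerance: float = 0.05) -> list[str]:
    findings = []
    for col, base in baseline.items():
        if col not in current:
            findings.append(f"schema drift: column '{col}' was removed")
            continue
        cur = current[col]
        if abs(cur.null_rate - base.null_rate) > null_rate_tolerance:
            findings.append(
                f"distribution shift: '{col}' null rate {base.null_rate:.0%} -> {cur.null_rate:.0%}"
            )
        new_values = cur.distinct_values - base.distinct_values
        if new_values:
            findings.append(f"new values in '{col}': {sorted(new_values)}")
    for col in current.keys() - baseline.keys():
        findings.append(f"schema drift: new column '{col}'")
    return findings

# Made-up profiles for an order-status column: a new value appeared upstream.
baseline = {"status": ColumnProfile(0.00, {"placed", "shipped", "completed"})}
current = {"status": ColumnProfile(0.02, {"placed", "shipped", "completed", "returned"})}
print(detect_drift(baseline, current))
# -> flags the new 'returned' value so docs can be updated or the team alerted
```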
All documentation lives inside your dbt repo — versioned, reviewed, and fully compatible with downstream tools.
The agents don’t create an external layer; they work in the repo you already use. Every change happens through standard CI/CD workflows.
The result is a continuously updated semantic layer — one that’s always current, auditable, and AI-readable.
who uses buster?
Analytics engineers maintaining dbt projects at scale.
Data platform leads preparing for AI-driven self-serve.
Data teams that already follow best practices but need automation to keep docs, tests, and metadata current.
And companies experimenting with AI analysts or chat-based reporting tools — who quickly realize those tools fail without structured context.
These teams (typically) already know how to build great dbt projects. Buster helps them make those projects understandable to AI.
what value does buster create?
AI analytics only works when the AI understands your data. Most teams don’t realize how much context that actually takes. They build good dbt models, but the models alone don’t tell an AI what each table means, how to join them, or which fields are trustworthy.
Without that, AI self-serve tools hallucinate, misaggregate, or fail silently.
Buster fixes this. It builds and maintains the context layer that AI needs — the detailed, structured documentation that describes what your data means, not just what it is.
Buster’s value is simple but absolute: It’s the difference between AI analytics working and not working.
And it does this in three ways:
It generates context: Buster writes deep, dbt-native documentation for every model, column, and metric. It understands lineage, profiles data, and captures purpose — all written directly into your YAML files.
It maintains that context automatically: it reviews pull requests, detects model changes, and updates documentation before merge. It also runs scheduled profiling to catch drift or new values, updating docs or alerting teams as needed.
It powers downstream AI: with clean, structured context in place, every AI tool you connect to dbt — Tableau AI, Hex, Omni, or any other — instantly performs better. Queries are more accurate, explanations are grounded, and answers are reliable.
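To illustrate why structured context helps downstream tools, here is a hypothetical sketch (not any specific vendor's integration) of flattening dbt YAML docs into the context block an AI analyst would receive alongside a question. The schema content is made up for the example.

```python
# Hypothetical sketch, not any vendor's actual integration: flatten dbt YAML
# docs into the context block an AI analyst receives alongside a question.
import yaml  # PyYAML

def build_context(schema_yml_text: str) -> str:
    """Turn dbt model and column descriptions into prompt-ready context."""
    schema = yaml.safe_load(schema_yml_text)
    lines = []
    for model in schema.get("models", []):
        lines.append(f"model {model['name']}: {model.get('description', '(no description)')}")
        for col in model.get("columns", []):
            lines.append(f"  - {col['name']}: {col.get('description', '(no description)')}")
    return "\n".join(lines)

# Made-up schema.yml content with the kind of docs Buster maintains.
schema_yml = """
version: 2
models:
  - name: fct_orders
    description: One row per order. Join to dim_customers on customer_id.
    columns:
      - name: order_total
        description: Order revenue in USD, net of discounts, excluding tax.
"""
print(build_context(schema_yml))
# An AI tool can prepend this context to a question like "What was net revenue
# last month?" so it knows which model, field, and units to use.
```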
And for teams that want a direct, end-to-end solution, Buster includes its own self-serve analytics interface — an AI data analyst that’s purpose-built to read dbt context. It gives business users a conversational, context-aware way to explore data and build reliable reports. It’s everything AI self-serve should have been, and it finally works because the foundation underneath is sound.
Buster doesn’t just make AI analytics easier — it makes it possible.
why this matters
Buster makes dbt projects AI-ready.
It gives AI analysts the context they need to answer questions accurately, and it gives data teams the automation they need to maintain that context as their models evolve.
Most AI analytics failures come down to missing documentation, unclear lineage, or drifted semantics. Buster solves that at the source — inside dbt — and keeps it solved.
The outcome is simple:
Your dbt repo becomes a living, well-documented semantic layer.
Your AI tools finally understand your data.
And your data team stops worrying about maintenance and starts shipping.
Start using AI agents in your analytics workflows today