Quickstart
This guide walks you through creating your first project, connecting a database, and building a semantic model that AI agents can query.
Log in
Open http://localhost:8080 (or your configured URL) and log in with the credentials you set during installation. The default username is `admin` (configurable via `UI_USERNAME`), and the password is whatever you set in `UI_PASSWORD`.

On your first login, a disclaimer dialog appears. It highlights important operational considerations: LLM token costs from long-running agents, potential load on your source databases (especially data lakes, where large data scans may occur), the fact that schema metadata (which may include PII) is sent to your LLM provider, and that AI-generated models should be reviewed. Read it carefully, check the acknowledgment box, and click Continue.
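Both credentials are plain environment variables, so you can change them in your deployment configuration. A hypothetical `.env` fragment (the variable names come from this guide; where the file lives depends on how you installed the product):

```ini
# Web UI credentials (names per this guide; adjust for your deployment)
UI_USERNAME=admin
UI_PASSWORD=change-me-to-something-strong
```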
Create a Project
After logging in, click New Project on the dashboard. Give it a name and an optional description. A project groups related database connections and their semantic models.
Add a Connection
Navigate to Data Federation in the sidebar and click New Connection. Fill in:
- Name: a friendly label (e.g., “Shopify Airbyte Postgres”)
- Type: select your database type (Postgres, MySQL, MSSQL, SQLite, DuckDB, Iceberg)
- Connection details: host, port, database, user, and password (or paste a URI)
Click Test Connection to verify, then save.
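If you paste a URI instead of filling in the individual fields, the standard `postgresql://` URI format applies. A small illustrative sketch (Python; the host and credential values are made up) that assembles one and percent-encodes the password so special characters survive:

```python
from urllib.parse import quote

def build_postgres_uri(host, port, database, user, password):
    """Assemble a standard postgresql:// URI, escaping the password."""
    return f"postgresql://{user}:{quote(password, safe='')}@{host}:{port}/{database}"

uri = build_postgres_uri("db.internal", 5432, "shopify", "readonly", "p@ss/word")
print(uri)  # postgresql://readonly:p%40ss%2Fword@db.internal:5432/shopify
```

Percent-encoding matters here: an unescaped `@` or `/` in the password would be parsed as part of the host or path.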
Build a Semantic Model
Go to Semantic Models in the sidebar and start a new conversation. Describe what kind of model you want in natural language, for example:
“Build a semantic model from the orders, customers, and products tables in my shopify connection, and connect this to the contacts data in my CRM connection.”
The AI agent will:
- Discover schemas and tables from your connection
- Map columns to typed fields with expressions
- Detect enum values and time dimensions
- Infer relationships between datasets
- Define metrics
Review the generated model in the graph or tree view, adjust as needed, and save.
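The generated model is ordinary YAML that you can read and edit by hand. The fragment below is a hypothetical sketch of the kinds of things the agent produces (the keys, layout, and names are illustrative, not the tool's exact schema; consult your generated files for the real format):

```yaml
# Illustrative only -- your generated files define the real schema.
dataset: orders
fields:
  - name: order_id
    type: integer
  - name: status
    type: enum          # detected enum values, e.g. pending / shipped / cancelled
  - name: ordered_at
    type: time          # detected time dimension
relationships:
  - to: customers
    on: orders.customer_id = customers.id
metrics:
  - name: total_revenue
    expression: sum(orders.amount)
```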
Publish the Model
Once you are happy with the model, click Publish in the semantic model detail view. Publishing takes the individual source YAML files (root file plus per-dataset files under `src/`) and assembles them into a single optimized YAML in the `build/` directory. The production MCP endpoint (`/mcp/<slug>/mcp`) always reads from the published build, so any changes you make to a model are invisible to production AI agents until you publish. You can keep editing and re-publishing as often as needed; each publish overwrites the previous build.
There is also a test MCP endpoint at `/mcp/<slug>/test/mcp` that assembles from the source files on the fly. This means you can iterate on a model and immediately test it with an AI agent without publishing first. The test endpoint is useful during development; the production endpoint is what you give to external agents.
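Both endpoints speak MCP over HTTP, so a quick smoke test is just an authenticated JSON-RPC request. A minimal sketch (Python; it only constructs the request rather than sending it, and assumes the endpoint paths from this guide plus a standard JSON-RPC 2.0 envelope, which is what MCP uses):

```python
import json

BASE = "http://localhost:8080"   # your server
SLUG = "your-project-slug"       # your project slug
TOKEN = "<your-token>"           # an MCP token (see "Create an MCP Token" below)

def mcp_request(path, method, request_id=1):
    """Build the URL, headers, and JSON-RPC 2.0 body for an MCP endpoint."""
    url = f"{BASE}{path}"
    headers = {
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"jsonrpc": "2.0", "id": request_id, "method": method})
    return url, headers, body

# Production reads the published build; /test/mcp assembles from source files.
prod = mcp_request(f"/mcp/{SLUG}/mcp", "tools/list")
test = mcp_request(f"/mcp/{SLUG}/test/mcp", "tools/list")
print(prod[0])  # http://localhost:8080/mcp/your-project-slug/mcp
```

Send the same `tools/list` call to both endpoints and diff the results to see unpublished changes at a glance.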
Add Test Cases
Before connecting external agents, it’s a good idea to verify your model works correctly. Go to Testing in the sidebar:
- Under Test Agents, create a test agent configuration. Pick a name, choose the LLM model to use, and optionally set a custom system prompt.
- Under Test Cases, create one or more questions that an AI agent should be able to answer using your semantic model, for example: “What was total revenue last quarter?” or “Which customer placed the most orders?”. Assign each test case to the semantic model it targets. Treat test cases as ground truth: before creating a test case, manually verify the expected answer by querying your database directly. If you add a test case claiming “total revenue last quarter was $142,000” but that number is wrong, every future test run will chase a phantom bug. Take the time to confirm metrics, counts, and edge cases against your actual data first.
- Use the Playground to run a single test case interactively and watch the agent’s tool calls and reasoning in real time. The playground uses the test MCP endpoint, so it always reflects your latest saved changes, even if you haven’t published yet.
- Use Batch Runs to run multiple test cases at once and review pass/fail results.
Testing helps you catch missing fields, ambiguous descriptions, and incorrect relationships before real agents hit your data. The quality of your test suite depends on the accuracy of the expected answers, so invest in verifying them upfront.
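The "verify expected answers first" advice is worth automating. Here is a small sketch of the idea using an in-memory SQLite table (table, column, and date values are made up for illustration): compute the ground-truth number directly from the data before writing it into a test case.

```python
import sqlite3

# Toy data standing in for your real orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, ordered_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 100.0, "2024-07-10"), (2, 42.0, "2024-08-01"), (3, 7.5, "2024-11-03")],
)

# Ground truth for "total revenue last quarter" (here: 2024 Q3).
(total,) = conn.execute(
    "SELECT SUM(amount) FROM orders "
    "WHERE ordered_at BETWEEN '2024-07-01' AND '2024-09-30'"
).fetchone()
print(total)  # 142.0
```

Only once this query agrees with your expectation should the number go into a test case as the expected answer.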
Create an MCP Token
Go to MCP Access and create a token. Choose which semantic models the token can access (scopes) and set an optional expiry. All tokens are read-only by default.
Connect an AI Agent
Configure your AI agent’s MCP client to use:
- Endpoint: `http://your-server:8080/mcp/your-project-slug/mcp`
- Auth: `Bearer <your-token>`
The agent can now discover your semantic models and run queries.
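Many MCP clients are configured with a small JSON block. The exact key names vary by client; the fragment below follows the `mcpServers` convention used by several desktop MCP clients and is illustrative, not this product's own documentation:

```json
{
  "mcpServers": {
    "semantic-models": {
      "url": "http://your-server:8080/mcp/your-project-slug/mcp",
      "headers": {
        "Authorization": "Bearer <your-token>"
      }
    }
  }
}
```

Check your agent's documentation for the exact configuration shape it expects.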
What’s Next?
- Learn how to author semantic models in detail
- Set up MCP integration for your AI tools
- Explore data federation across multiple databases
- Validate your models with the testing suite