Models are currently in beta and are available on Team, Pro, and Enterprise plans.
Models let you write SQL queries that run in the background and materialize the results. This enables you to prepare optimized, reusable datasets for your reports without slowing down your analysis. Once created, models are available to use just like sources throughout Evidence - in visualizations, the SQL console, Explore, and anywhere else you reference data.

When to use models

Models are ideal for:
  • Joining sources together - Combine data from multiple sources or databases into a single clean table
  • Type casting - Convert data types ahead of time so your reports don’t need to handle it
  • Reusable logic - Centralize calculations and business logic in one place instead of repeating it across reports
  • Pre-running large queries - Execute heavy transformations once instead of on every page load
  • Clean report tables - Create simple, well-structured tables that are easy to work with when building visualizations
If you have large, complex inline queries on your pages, these might be candidates for models.
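For instance, type casting and light cleanup can be handled once in a small model instead of in every report. A minimal sketch, assuming a raw source with text-typed columns (the raw_orders table and its columns are illustrative):
SELECT
    CAST(order_id AS INTEGER) as order_id,
    CAST(order_date AS DATE) as date,
    CAST(amount AS DECIMAL(10, 2)) as amount,
    TRIM(category) as category
FROM raw_orders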

Creating a model

  1. Navigate to the Models tab in the sidebar
  2. Click New Model
  3. Name your model - this is how you’ll reference it in your reports
  4. Write your SQL query in the editor
  5. Click Save to create the model
Your model will begin running in the background. Once complete, it will be available to use in your reports just like a regular source.

What should go in models

Models are where complex data preparation happens, keeping your reports fast and focused on presentation.

Joins belong in models

All joins between sources should be handled in models, not in reports. This ensures:
  • Faster report loading times
  • Consistent join logic across all reports
  • Easier maintenance when relationships change
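As a minimal sketch, a model can perform the join once so every report reads from the combined table (the table and column names here are illustrative; a fuller worked example appears in the Examples section below):
SELECT
    o.order_id,
    o.date,
    c.customer_name,
    c.region
FROM raw_orders o
LEFT JOIN raw_customers c ON o.customer_id = c.customer_id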

Model granularity

Create models at the most granular level users need for drilldown and exploration. Think about:
  • What’s the finest detail users should be able to filter to?
  • What dimensions do they need to slice and dice by?
  • What level of detail supports their analysis?
If users need to drill down to individual orders, your model should have order-level detail. If they only need daily summaries, aggregate to the day level in your model.
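As a sketch using the demo tables from the examples below, a day-level model aggregates everything down to one row per date and category; for order-level drilldown, you would keep order_id in the output instead of aggregating it away:
SELECT
    date,
    category,
    SUM(unit_price * quantity) as revenue
FROM demo_order_details
GROUP BY date, category
-- for order-level drilldown, add order_id to the SELECT and GROUP BY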

Recategorizing and bucketing

Use CASE statements in models to create clean, consistent categories. This is ideal for:
  • Bucketing numeric values into ranges (small/medium/large orders)
  • Standardizing messy category data
  • Creating business-friendly labels from technical values
This keeps categories consistent across reports, rather than each person defining their own buckets.
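For example, a model can bucket order sizes with a CASE statement so every report uses the same ranges (a sketch using the demo table from the examples below; the thresholds are illustrative):
SELECT
    order_id,
    item_name,
    unit_price * quantity as line_total,
    CASE
        WHEN unit_price * quantity < 10 THEN 'Small'
        WHEN unit_price * quantity < 50 THEN 'Medium'
        ELSE 'Large'
    END as order_size
FROM demo_order_details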

Keep reports simple

Reports should focus on filtering, sorting, and presenting data from models. Complex transformations and joins slow down reports and make them harder to maintain.
  • In models: Joins, aggregations, complex calculations, data type casting, CASE statements
  • In reports: Filtering, sorting, simple calculations, formatting
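In practice, a report query can stay as simple as filtering and sorting a model (a sketch assuming the daily_sales model defined in the examples below; the date filter is illustrative):
SELECT date, category, revenue
FROM daily_sales
WHERE date >= '2024-01-01'
ORDER BY date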

Refresh schedules

Models automatically refresh every 8 hours to keep your data up to date. You can also manually refresh a model at any time from the Models page. Custom refresh schedules are coming soon.

Examples

Enriched order details

Create a model called enriched_orders that joins order details with item information:
SELECT 
    od.order_id,
    od.date,
    od.hour,
    od.category,
    od.item_name,
    od.unit_price,
    od.quantity,
    od.unit_price * od.quantity as line_total,
    i.base_price,
    od.unit_price - i.base_price as price_variance
FROM demo_order_details od
LEFT JOIN demo_items i 
    ON od.category = i.category 
    AND od.item_name = i.item_name
Now you can use enriched_orders throughout your reports:
{% table data="enriched_orders" /%}
This model enriches order details with base pricing information, making it easy to analyze pricing variance and order profitability across reports.

Daily sales summary

Create a model called daily_sales that pre-aggregates order data by day:
SELECT 
    date,
    category,
    COUNT(DISTINCT order_id) as orders,
    SUM(quantity) as items_sold,
    SUM(unit_price * quantity) as revenue,
    SUM(unit_price * quantity) / COUNT(DISTINCT order_id) as avg_order_value
FROM demo_order_details
GROUP BY date, category
Use daily_sales in a line chart to visualize trends over time:
{% line_chart 
    data="daily_sales" 
    x="date" 
    y="sum(revenue)" 
    series="category" 
/%}
This model creates a daily summary that’s ready for time-series analysis and dashboard visualizations without heavy aggregation in your reports.

Order summary with item counts

Create a model called order_summary that combines headers with aggregated details:
SELECT 
    h.order_id,
    h.date,
    h.hour,
    h.primary_category,
    COUNT(d.item_name) as item_count,
    SUM(d.quantity) as total_quantity,
    SUM(d.unit_price * d.quantity) as order_total,
    AVG(d.unit_price) as avg_item_price
FROM demo_order_headers h
LEFT JOIN demo_order_details d ON h.order_id = d.order_id
GROUP BY h.order_id, h.date, h.hour, h.primary_category
Use order_summary to quickly analyze order patterns:
{% bar_chart 
    data="order_summary" 
    x="primary_category" 
    y="sum(order_total)" 
/%}
This model centralizes order-level calculations, ensuring consistent metrics whether you’re analyzing orders by date, category, or order size.