Databases
Database connectors connect to your existing data warehouse or database. Evidence extracts the data on a schedule you define.

How it works:
- You provide connection credentials to your database
- Evidence runs queries against your database to extract data
- Data is cached in Evidence’s query engine in your chosen region
- Reports query the cached data for fast performance

Best for:
- Existing data warehouses (Snowflake, BigQuery, Redshift, etc.)
- Data that’s already modeled and ready to report on
- Teams who want Evidence to manage the data pipeline

Supported databases:
- Snowflake
- BigQuery
- PostgreSQL
- RDS PostgreSQL
- Redshift
- MotherDuck
- Athena
- MySQL
- SQL Server
- Azure SQL Database
- Azure Postgres
- Databricks
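
The extract-and-cache flow above can be sketched in miniature, using an in-memory SQLite database as a stand-in for the warehouse (illustrative only — Evidence's actual query engine and scheduling are managed for you):

```python
import sqlite3

# Stand-in "warehouse": an in-memory SQLite database with some modeled data.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
warehouse.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

# Stand-in "cache": a second database holding the extracted rows.
cache = sqlite3.connect(":memory:")
cache.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

def sync():
    """Extract from the warehouse and refresh the cache (a scheduled job in practice)."""
    rows = warehouse.execute("SELECT id, amount FROM orders").fetchall()
    cache.execute("DELETE FROM orders")
    cache.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    cache.commit()

sync()

# Reports query the cache, not the source warehouse.
total = cache.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 29.5
```

Between syncs, reports read only the cached copy — which is why results are fast, and why new rows in the source appear only after the next scheduled sync.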
Object Storage
Object storage connectors read Parquet files directly from cloud storage buckets that you control. Evidence queries the data in place using ClickHouse's S3 table engine. In addition to the named providers below, any S3-compatible storage is supported via the "custom S3" connector.

How it works:
- You store Parquet files in your own bucket (S3, GCS, Azure, etc.)
- You provide bucket credentials to Evidence
- Evidence queries the Parquet files directly — no data is copied

Key characteristics:
- Instant data availability: When you update a Parquet file in your bucket, the new data is immediately available in Evidence—no sync or publish required
- Schema changes require publish: Adding or removing columns requires clicking Publish or setting up a refresh schedule
- Data stays in your infrastructure: Your data never leaves your bucket, Evidence queries it in place
- S3-compatible: Any storage provider with an S3-compatible API works (AWS S3, GCS, Azure Blob, Cloudflare R2, MinIO, Backblaze B2, etc.)

Best for:
- Data residency or sovereignty requirements
- Teams who manage their own data pipelines and produce Parquet files
- High-frequency data updates where you control the source files
- Keeping data within your own infrastructure

Supported providers:
- AWS S3
- Google Cloud Storage
- Azure Blob Storage
- Cloudflare R2
- Backblaze B2
- Custom S3-compatible provider
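
The distinction between row updates (instant) and schema changes (publish required) can be illustrated with a small check. This is a hypothetical helper, not Evidence's actual logic: new rows in a file with the same columns need no action, while added or removed columns would:

```python
def needs_publish(published_columns, file_columns):
    """Return True when the file's column set differs from the published schema.

    Row-level changes (new rows, updated values) don't alter the column
    set, so they are picked up without a publish; column changes do.
    """
    return set(published_columns) != set(file_columns)

published = ["order_id", "amount", "created_at"]

# New rows arrive in the Parquet file, columns unchanged: no publish needed.
print(needs_publish(published, ["order_id", "amount", "created_at"]))  # False

# A column was added upstream: a publish (or a refresh schedule) is required.
print(needs_publish(published, ["order_id", "amount", "created_at", "region"]))  # True
```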
Data Lakes
Data lake connectors connect to open table formats that provide ACID transactions, schema evolution, and time travel on top of object storage.

How it works:
- You provide credentials to your data lake catalog or storage
- Evidence reads the table metadata and data files
- Data is queried with full support for the table format’s features

Best for:
- Teams using modern lakehouse architectures
- Large-scale data with complex partitioning
- Scenarios requiring time travel or schema evolution
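
Time travel, one of the features mentioned above, can be sketched in miniature: table formats like Iceberg and Delta keep immutable snapshots, so a reader can ask for the table as of an earlier version. This is a simplified toy, not a real table-format implementation:

```python
# A toy versioned table: each commit appends an immutable snapshot.
snapshots = []  # list of (version, rows) pairs

def commit(rows):
    """Record a new immutable snapshot and return its version number."""
    version = len(snapshots)
    snapshots.append((version, list(rows)))
    return version

def read(version=None):
    """Read the latest snapshot, or an earlier one for time travel."""
    if version is None:
        version = len(snapshots) - 1
    return snapshots[version][1]

v0 = commit([{"id": 1, "qty": 5}])
v1 = commit([{"id": 1, "qty": 5}, {"id": 2, "qty": 3}])

print(len(read()))    # 2 — the latest version
print(len(read(v0)))  # 1 — time travel to the first commit
```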
Applications
Application connectors sync data from SaaS applications and APIs. Evidence uses Fivetran to extract and normalize data from these sources.

How it works:
- You authenticate with the application (OAuth or API key)
- Evidence syncs data on a configurable schedule
- Data is normalized and stored in Evidence’s query engine

Best for:
- Product analytics and operational reporting
- Combining SaaS data with your data warehouse
- Teams who want turnkey integrations without building pipelines
Flat Files
Upload flat files directly to Evidence for quick data analysis without setting up external connections.

How it works:
- Upload a file directly through the Evidence interface
- Evidence processes and stores the data
- Query the data immediately in your reports

Supported formats:
- CSV
- Parquet
- JSONL
- Excel

Best for:
- Quick prototyping and ad-hoc analysis
- Small datasets that don’t require a database
- Getting started with Evidence before connecting external sources
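
Since CSV and JSONL are both supported upload formats, a quick local conversion can be handy before uploading; a minimal standard-library sketch:

```python
import csv
import io
import json

def csv_to_jsonl(csv_text):
    """Convert CSV text to JSONL: one JSON object per row, keyed by the header."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return "\n".join(json.dumps(row) for row in reader)

sample = "name,score\nada,100\ngrace,95\n"
print(csv_to_jsonl(sample))
# → {"name": "ada", "score": "100"}
#   {"name": "grace", "score": "95"}
```

Note that `csv.DictReader` yields every value as a string; if your reports need numeric types, cast the values before serializing.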
Comparison
| | Databases | Object Storage | Data Lakes | Flat Files | Applications |
|---|---|---|---|---|---|
| Data location | Extracted to Evidence | Stays in your bucket | Stays in your storage | Uploaded to Evidence | Extracted to Evidence |
| Data updates | Scheduled sync | Instant (for row changes) | Scheduled sync | Re-upload file | Scheduled sync |
| Schema changes | Requires re-publish | Requires re-publish | Requires re-publish | Re-upload file | Requires re-publish |
| Data residency | Choose Evidence region | Your bucket location | Your storage location | Choose Evidence region | Choose Evidence region |
Storage Regions
Evidence supports the following storage regions for extracted data.

| Region | Location |
|---|---|
| us-central1 | Iowa, USA |
| us-east1 | South Carolina, USA |
| us-east4 | Virginia, USA |
| us-east5 | Ohio, USA |
| us-west1 | Oregon, USA |
| us-west2 | Los Angeles, USA |
| us-west3 | Salt Lake City, USA |
| us-west4 | Las Vegas, USA |
| northamerica-northeast1 | Montreal, Canada |
| northamerica-northeast2 | Toronto, Canada |
| europe-west2 | London, UK |
| europe-west3 | Frankfurt, Germany |
| europe-west4 | Eemshaven, Netherlands |
| europe-west6 | Zurich, Switzerland |
| europe-west8 | Milan, Italy |
| europe-west9 | Paris, France |
| europe-west10 | Berlin, Germany |
| asia-south1 | Mumbai, India |
| asia-southeast1 | Singapore |
| asia-northeast1 | Tokyo, Japan |
| australia-southeast1 | Sydney, Australia |

