Last updated: March 18, 2026
The Detrics Approach
Detrics follows an ETL (Extract, Transform, Load) model for data warehousing. Unlike tools that dump raw API responses into your warehouse and ask you to clean them up, Detrics gives you full control over what gets extracted and how it’s structured before it lands in BigQuery.
You decide:
- Which metrics and dimensions to include in each table
- Which filters to apply at the source
- How to aggregate over time (daily, weekly, monthly, or total)
- Which sync strategy to use for each table
This means every table in your warehouse contains exactly the data you asked for — clean, normalized, and ready to query.
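A per-table definition along these lines can be pictured as a small config object. This is a hypothetical sketch: the field names are illustrative, not Detrics's actual configuration schema.

```python
# Hypothetical sketch of a per-table sync configuration.
# Field names are illustrative, not Detrics's actual schema.
table_config = {
    "name": "meta_ads_daily",
    "metrics": ["impressions", "clicks", "spend"],       # what to extract
    "dimensions": ["date", "campaign_name"],             # how rows are broken out
    "filters": [                                         # applied at the source API
        {"field": "campaign_status", "op": "=", "value": "ACTIVE"},
    ],
    "aggregation": "daily",        # daily | weekly | monthly | total
    "sync_mode": "incremental",    # incremental | full_refresh | full_append
    "refresh_window_days": 7,      # used by incremental mode on subsequent runs
}
```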
The Sync Pipeline
When a transfer runs — whether on schedule or triggered manually — Detrics executes a five-stage pipeline:
Stage 1: Configuration Resolution
Detrics loads the transfer’s configuration: which tables to sync, which accounts, which filters (including any per-transfer filter overrides), the sync mode assigned to each table, and refresh window settings.
Stage 2: Data Extraction
For each table in the transfer, Detrics calls the platform’s API to fetch the requested metrics and dimensions. The date range depends on the sync mode and whether this is the initial sync or a subsequent run:
Initial sync (first run after creating a transfer):
- All sync modes fetch the full historical sync range configured on each table (e.g., last 3 months, last 12 months, or all time). This backfills your BigQuery tables with historical data.
Subsequent syncs:
- Incremental: Fetches only the last N days (the refresh window), deduplicates, and preserves everything outside the window
- Full Refresh: Fetches the full configured date range and replaces the table
- Full Append: Fetches the full configured date range and appends a new snapshot
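The date-range rules above can be sketched as a small helper. This is an illustration only; `resolve_date_range` and its parameters are hypothetical names, not part of Detrics's API.

```python
from datetime import date, timedelta

def resolve_date_range(sync_mode, is_initial_sync, historical_start,
                       refresh_window_days, today=None):
    """Pick the fetch window for a table, per the rules above (illustrative)."""
    today = today or date.today()
    if is_initial_sync or sync_mode in ("full_refresh", "full_append"):
        # Initial syncs, and every full-refresh / full-append run,
        # fetch the whole configured historical range.
        return historical_start, today
    # Subsequent incremental runs only re-fetch the refresh window.
    return today - timedelta(days=refresh_window_days), today

start, end = resolve_date_range("incremental", False,
                                date(2025, 1, 1), 7, today=date(2026, 3, 18))
# start == date(2026, 3, 11), end == date(2026, 3, 18)
```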
For large syncs (many accounts, long date ranges, or detailed entity levels like individual ads), Detrics automatically chunks the work — splitting by time range and account batches to prevent timeouts and manage memory.
Stage 3: Normalization
Before loading into BigQuery, Detrics normalizes the data:
- Column naming — All field names are converted to snake_case for consistent SQL across platforms. See Column Naming
- System columns — Four columns are added to every row: _detrics_account_id, _detrics_row_id, _detrics_sync_id, and _detrics_synced_at. See System Columns
- Type mapping — Platform data types are mapped to BigQuery-native types (STRING, FLOAT64, INT64, DATE, TIMESTAMP, BOOL)
- Row identity — A SHA-256 hash is computed from each row’s content for deduplication tracking
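The snake_case conversion and SHA-256 row identity can be illustrated with a short sketch. The canonicalization here (JSON with sorted keys) is an assumption for the example; Detrics's actual hashing scheme may differ.

```python
import hashlib
import json
import re

def to_snake_case(name):
    """Normalize a platform field name to snake_case (illustrative)."""
    name = re.sub(r"[^0-9a-zA-Z]+", "_", name)              # spaces, dots -> _
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)     # camelCase boundary
    return name.strip("_").lower()

def row_id(row):
    """SHA-256 of a row's content; stable regardless of key order."""
    canonical = json.dumps(row, sort_keys=True, default=str)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

to_snake_case("campaignName")  # -> "campaign_name"
```

Hashing a canonical serialization means two rows with the same content always produce the same `_detrics_row_id`, which is what makes deduplication inside the refresh window possible.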
Stage 4: Schema Migration
If this is the first sync for a table, Detrics creates the BigQuery table with the appropriate schema. On subsequent syncs, if you’ve added or removed fields, Detrics migrates the schema — adding new columns and preserving existing data.
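A migration of this kind boils down to diffing the configured fields against the table's existing columns. A minimal sketch, assuming migrations are purely additive as described (a real implementation would also handle type changes):

```python
def plan_schema_migration(existing_columns, requested_columns):
    """Diff configured fields against existing columns (illustrative sketch)."""
    existing, requested = set(existing_columns), set(requested_columns)
    return {
        # New fields become ADD COLUMN statements.
        "add": sorted(requested - existing),
        # Columns no longer in the config are kept, so existing data survives.
        "removed_from_config": sorted(existing - requested),
    }

plan = plan_schema_migration(["date", "clicks"], ["date", "clicks", "spend"])
# plan["add"] == ["spend"]
```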
Stage 5: Data Loading
Data is loaded into BigQuery according to the sync mode assigned to each table in the transfer:
| Sync Mode | What Happens |
|---|---|
| Incremental | Delete rows in the refresh window for the synced accounts, then insert fresh data. This is done atomically — if the insert fails, the delete is rolled back |
| Full Refresh | Replace the entire table with the new data in one atomic operation |
| Full Append | Append new rows to the existing table without modifying or removing anything |
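The Incremental mode's delete-then-insert can be modeled on in-memory rows. This sketch only shows the window logic; in production the equivalent runs as atomic BigQuery DML, not Python.

```python
def incremental_load(table_rows, fresh_rows, window_start, window_end, account_ids):
    """Delete-then-insert within the refresh window, modeled on lists of dicts.

    Rows outside the window, or belonging to other accounts, are untouched.
    """
    kept = [
        r for r in table_rows
        if not (r["_detrics_account_id"] in account_ids
                and window_start <= r["date"] <= window_end)
    ]
    return kept + fresh_rows
```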
After loading, Detrics records the results in the sync log metadata table and updates the transfer’s status.
Supported Platforms
Detrics is built specifically for the platforms used by modern digital marketing teams. This focus allows for deeper integration than general-purpose ETL tools:
Advertising
Meta Ads, Google Ads, TikTok Ads, LinkedIn Ads, X Ads, Pinterest Ads, Bing Ads, Mercado Ads, Google DV360 — with support for attribution windows, conversion tracking, and ad-level creative data.
Analytics & SEO
Google Analytics 4 and Google Search Console — with support for custom dimensions and property-level configuration.
Organic Social
Instagram Insights, Facebook Insights, Facebook Public Data, TikTok Organic — organic performance metrics alongside paid data.
Email Marketing
Klaviyo and Mailchimp — campaign performance, flow analytics, and subscriber metrics.
CRM
HubSpot — contacts, deals, companies, and other entities with custom properties.
E-commerce
Shopify, TiendaNube, WooCommerce, Google Merchant Center — orders, products, inventory, and storefront data.
Payments
Stripe — charges, subscriptions, and payment analytics.
Chunked Execution
Large syncs are automatically split into smaller chunks to ensure reliability. Chunking happens along two axes:
- Time range — Long date ranges are split into monthly segments
- Account batches — Many accounts are processed in smaller groups
The chunk strategy varies by platform and entity level. For example, Meta Ads at the ad level uses smaller chunks (1 month × 5 accounts) because ad-level data is denser, while campaign-level data uses larger chunks (12 months × 25 accounts).
Each completed chunk is persisted immediately. If a sync is interrupted (timeout, transient error), completed chunks are preserved — only the remaining chunks need to be retried.
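The two chunking axes can be sketched as follows. This is a simplified illustration; as noted above, real chunk sizes vary by platform and entity level.

```python
from datetime import date, timedelta

def month_segments(start, end):
    """Split an inclusive [start, end] date range into calendar-month segments."""
    segments, cur = [], start
    while cur <= end:
        first_of_next = date(cur.year + cur.month // 12, cur.month % 12 + 1, 1)
        segments.append((cur, min(end, first_of_next - timedelta(days=1))))
        cur = first_of_next
    return segments

def account_batches(accounts, batch_size):
    """Group accounts into fixed-size batches (e.g., 5 for Meta ad-level data)."""
    return [accounts[i:i + batch_size] for i in range(0, len(accounts), batch_size)]

# One chunk per (time segment, account batch) pair; each is persisted on completion.
chunks = [(seg, batch)
          for seg in month_segments(date(2026, 1, 1), date(2026, 3, 10))
          for batch in account_batches(["acct_1", "acct_2", "acct_3"], 2)]
# 3 month segments x 2 account batches = 6 chunks
```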
What Detrics Adds to Your Data
Every table in your warehouse includes four system columns managed by Detrics:
| Column | Description |
|---|---|
| _detrics_account_id | The platform account ID that owns each row (e.g., your Meta ad account ID) |
| _detrics_row_id | A content hash for deduplication tracking |
| _detrics_sync_id | The transfer run ID that created the row |
| _detrics_synced_at | Timestamp of when the row was loaded into BigQuery |
These columns let you filter by account, track data freshness, and join with the sync log for auditing. See System Columns for details.
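For example, a quick per-account freshness check using `_detrics_synced_at`. This is an illustrative sketch over rows already fetched into Python; in practice you would run the equivalent GROUP BY query in BigQuery.

```python
def freshness_by_account(rows):
    """Latest _detrics_synced_at per account -- a quick freshness check."""
    latest = {}
    for r in rows:
        acct = r["_detrics_account_id"]
        if acct not in latest or r["_detrics_synced_at"] > latest[acct]:
            latest[acct] = r["_detrics_synced_at"]
    return latest
```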