Elido
Pick the angle that fits your team
For analytics-first teams

Click data you can actually query.

You measure attribution, funnel drop-offs, and incremental lift. Elido stores every click in ClickHouse with raw access — no sampling, no aggregation lag.

  • No click sampling at any tier — every event stored
  • Per-workspace ClickHouse DSN, read-only, rotatable
  • Scheduled S3 + BigQuery export (Parquet by default)
  • Raw click events via webhook firehose / Kafka consumer
Chart: clicks, last 7 days · elido.me/launch · 38,620 total · +18.4% wk/wk · 24h granularity
  • 0% click sampling
  • <5s event ingest lag
  • 24-month retention on Business
  • ClickHouse DSN · direct SQL access

How click data lands

Click → Redpanda → ClickHouse, with no aggregation in the middle.

Most shorteners give you a counter. We give you a row per click, ingested in under five seconds, queryable from your own SQL client. The pipeline is one binary writing to one Kafka topic that one consumer drains into ClickHouse — no aggregation service, no daily summaries, no ‘sampled after 10K’ footnote.

  1. Click · elido.me/x → 302
     The edge POP returns the destination and emits an event to Redpanda.

  2. Redpanda · topic: clicks.<workspace>
     12 partitions, at-least-once delivery, 7-day topic retention.

  3. ClickHouse · <5s p99 ingest lag
     click-ingester drains the topic into the per-workspace events table.

  4. Your tools · DSN · BigQuery · Kafka
     Read-only DSN, scheduled Parquet export, or direct firehose consumer.

Per-workspace ClickHouse DSN

A read-only DSN you can paste straight into Metabase.

Business workspaces get a per-workspace, read-only ClickHouse DSN scoped to their event table via row-level security. Plug it into Metabase, Hex, Apache Superset, Grafana, or any ClickHouse-compatible client. The DSN is rotatable from workspace settings without changing the underlying table.

  • Stable schema
    Versioned in /docs/api-reference; migration guides in /changelog
  • Row-level security
    DSN scoped to your workspace's event rows only
  • BI-tool compatible
    Metabase, Hex, Superset, Grafana, Looker — anything that speaks ClickHouse
  • Sub-second queries
    Typical group-by-country or per-hour aggregations return in under 1s on 1B-row tables
Read about analytics →
ClickHouse · query editor
read-only DSN
clickhouse://ws_8a2f:****@ch-eu-central-1.elido.app:9440/events
SELECT country, COUNT(*) AS clicks
FROM events
WHERE link_id = 'lnk_8a2fc1...'
  AND occurred_at >= now() - INTERVAL 7 DAY
GROUP BY country
ORDER BY clicks DESC
LIMIT 5;
Result · 5 rows · scanned 1.2M rows · 0.18s

country   clicks
DE        18,429
FR        12,184
ES         9,847
IT         8,213
PL         7,062

Connected · ClickHouse 24.x · eu-central-1
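
The same query works from any ClickHouse-compatible client. A minimal sketch using the open-source clickhouse-driver Python package; the host and credentials below are placeholders, so paste your real DSN from workspace settings:

from clickhouse_driver import Client

# Placeholder DSN: copy the real read-only DSN from workspace settings.
client = Client.from_url(
    "clickhouse://ws_8a2f:SECRET@ch-eu-central-1.elido.app:9440/events?secure=True"
)

rows = client.execute("""
    SELECT country, COUNT(*) AS clicks
    FROM events
    WHERE link_id = 'lnk_8a2fc1...'
      AND occurred_at >= now() - INTERVAL 7 DAY
    GROUP BY country
    ORDER BY clicks DESC
    LIMIT 5
""")

for country, clicks in rows:
    print(country, clicks)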

Geography that survives the export

Country-level density on every click — not a hashed bucket.

Every click event includes ISO 3166-1 alpha-2 country, region, and city, resolved from MaxMind GeoIP at edge time. The IP itself is truncated to /24 (IPv4) or /48 (IPv6) before storage, so geo persists but PII does not. Below is the same data in the UI that lands in your warehouse — no aggregation tier in between.
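
If you want parity checks in your own pipeline, the truncation is easy to reproduce. A sketch of the equivalent logic with Python's standard ipaddress module (illustrative only, not Elido's actual edge code):

import ipaddress

def truncate_ip(ip_str: str) -> str:
    """Zero the host bits: keep the /24 network for IPv4, /48 for IPv6."""
    ip = ipaddress.ip_address(ip_str)
    prefix = 24 if ip.version == 4 else 48
    net = ipaddress.ip_network(f"{ip_str}/{prefix}", strict=False)
    return str(net.network_address)

print(truncate_ip("203.0.113.57"))  # -> 203.0.113.0
print(truncate_ip("2001:db8::1"))   # -> 2001:db8::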

Clicks by country · last 7 days
24 countries · ISO 3166-1 alpha-2

DE 18.4k   FR 12.2k   ES 9.8k   IT 8.2k   PL 7.1k   NL 6.5k
GB 5.9k    PT 4.9k    BE 4.0k   SE 3.7k   AT 3.2k   CZ 2.8k
DK 2.5k    IE 2.2k    FI 1.9k   GR 1.7k   HU 1.5k   RO 1.3k
NO 1.1k    CH 982     SK 794    LT 612    EE 481    LV 348

5-bucket log scale · max 18,429
Warehouse export

Hourly Parquet to S3, then a native transfer into your warehouse.

  1. ClickHouse · events table, per workspace
     Source of truth: 0% sampling, 24-month retention on Business.

  2. S3 · Parquet · s3://your-bucket/elido/clicks/
     Hourly buckets, snappy-compressed Parquet (or JSON if you prefer).

  3. BigQuery / Snowflake / Redshift · native transfer · external table
     Native BigQuery Transfer service or Snowflake external table loads from S3.

The scheduled export pushes click events as Parquet to your S3 bucket on an hourly or daily cadence; a native BigQuery Transfer or Snowflake external table loads it from there. The first run is a full backfill to your retention window; subsequent runs append only new events, keyed on the event timestamp. Failed runs are retried automatically, and a dead-letter notification fires if a batch can't land within 2 hours.

  • Parquet (default) or JSON; one object per hour-bucket
  • Filter export by domain, campaign, or link tag
  • Native BigQuery Transfer + Snowflake external table
  • Dead-letter alert on >2h batch failure
  • Kafka firehose for sub-second delivery (Business)
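
Each hour-bucket lands as an ordinary Parquet object, so the export is queryable with any Parquet reader before it ever reaches a warehouse. A sketch with pyarrow, assuming AWS credentials in the environment; the bucket prefix matches the example above, and country_iso2 is from the published schema:

import pyarrow.compute as pc
import pyarrow.dataset as ds

# Point at the export prefix; substitute your own bucket.
clicks = ds.dataset("s3://your-bucket/elido/clicks/", format="parquet")

# Pull one column and count clicks per country without loading everything.
countries = clicks.to_table(columns=["country_iso2"]).column("country_iso2")
print(pc.value_counts(countries))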

What you can do

  • No click sampling at any tier — every event stored
  • Per-workspace ClickHouse DSN, read-only, rotatable
  • Scheduled S3 + BigQuery export (Parquet by default)
  • Raw click events via webhook firehose / Kafka consumer
  • Sub-second query latency on 1B+ row tables
  • Server-side click attribution with click-ID dedupe

What 'analytics-first' means in Elido's data model

Most shortener analytics are aggregated totals. The features below explain what changes when the raw click stream is the primary artifact, not a summary.

01 · No sampling

Every click stored — no 'after N events we sample' footnote

Click events are ingested via a Redpanda Kafka topic and written to ClickHouse by the click-ingester service. There is no sampling layer: a link with 10 clicks and a link with 10 million clicks both have every event in the same table; the schema doesn't change, and no aggregation is applied at ingest time. Retention is 90 days on Free, 12 months on Pro, and 24 months on Business; after the retention window, events are hard-deleted and the count of deleted events is logged. The ClickHouse schema is public, so you can see exactly which fields are stored and plan your warehouse data model before you start exporting. Lag from click to ClickHouse availability is typically under 5 seconds; the Redpanda consumer runs with auto-commit and logs lag metrics so you can see if the pipeline falls behind.

02 · Server-side attribution

GA4 MP, Meta CAPI, and Mixpanel server-side — deduplicated against the click

Client-side pixels miss a significant fraction of conversions depending on adblocker penetration and iOS Safari ITP. Server-side forwarding sends the conversion to GA4 Measurement Protocol, Meta Conversions API, or Mixpanel directly from Elido's backend — no client-side JS required. The deduplication key is the click ID: when a conversion event arrives via Stripe or Shopify webhook, Elido matches it to the originating click and fans it out to all configured server-side endpoints. The click ID is passed as a query parameter to the destination URL at click time; your checkout flow should preserve it through to the conversion event. Each forwarded event carries the original UTM parameters from the click so attribution survives the full funnel. This is useful for recovering conversions that client-side pixels miss — it's not a replacement for a full CDP, but it closes the common last-click attribution gap.
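
In practice the flow looks like the sketch below: your checkout stores the click ID at landing and posts it back with the conversion. The endpoint URL and payload fields here are assumptions for illustration, not the documented API; see /docs/api-reference for the real contract.

import requests

click_id = "..."  # captured from the click-ID query parameter on your landing page

# Hypothetical endpoint and payload shape: check /docs/api-reference for the
# actual conversion API contract.
resp = requests.post(
    "https://api.elido.app/v1/conversions",
    headers={"Authorization": "Bearer ELIDO_API_KEY"},
    json={
        "click_id": click_id,
        "event": "purchase",
        "value": 49.00,
        "currency": "EUR",
    },
    timeout=5,
)
resp.raise_for_status()  # duplicate click_ids are acknowledged, not double-counted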

03 · BYO BI

Per-workspace read-only ClickHouse DSN — plug directly into Metabase, Hex, or Grafana

Business workspaces get a per-workspace read-only ClickHouse DSN scoped to their event table. Point Metabase, Hex, Apache Superset, Grafana, or any ClickHouse-compatible client at the DSN and write SQL directly against your click event data. The DSN is rotatable without changing the events table; it connects to a read-only user that can only SELECT, not INSERT or DROP. The ClickHouse schema is stable and versioned; schema changes get a migration guide in the changelog before they land. For teams who want to join click events with their own product data — 'which links drove users who went on to activate?' — the pattern is to copy click events to your own warehouse via scheduled export, then join there. The ClickHouse DSN is for teams whose BI tool can connect to ClickHouse directly and who don't need to join with external tables.
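
A sketch of that warehouse-side join in BigQuery via the google-cloud-bigquery client. Every table and column name here is a placeholder for your own schema; the example joins exported clicks to your users table on a first-touch UTM campaign:

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder names: elido_clicks is the exported click stream,
# users is your own product table.
sql = """
    SELECT c.utm_campaign, COUNT(DISTINCT u.user_id) AS activated_users
    FROM `my_project.analytics.elido_clicks` AS c
    JOIN `my_project.analytics.users` AS u
      ON u.first_touch_utm_campaign = c.utm_campaign
    WHERE u.activated_at IS NOT NULL
    GROUP BY c.utm_campaign
    ORDER BY activated_users DESC
"""

for row in client.query(sql).result():
    print(row.utm_campaign, row.activated_users)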

04 · Warehouse export

Scheduled exports to S3, BigQuery, and Snowflake

Scheduled export runs on a configurable cadence (hourly, daily) and pushes the click event stream — or a subset filtered by domain, campaign, or link tag — to S3, BigQuery, or Snowflake. The S3 export uses Parquet by default (JSON available); BigQuery and Snowflake use the native connectors with a schema Elido creates and keeps current. Incremental exports are keyed on the event timestamp; the first export is a full backfill to your retention window; subsequent exports append new events only. If you need to replay from a specific timestamp, a one-off full export is available via support request. Export failures are logged and retried; a dead-letter notification goes to the workspace email if a batch fails for more than 2 hours.

05 · Kafka firehose

Real-time Kafka consumer for event pipelines that can't wait for batch exports

Business workspaces can consume click events directly from a Redpanda topic as a Kafka consumer group. You get a consumer group ID, a bootstrap server, and a client certificate — standard Kafka consumer configuration. This is the right path for real-time alerting (spike detection on a link, geo anomaly flagging), real-time dashboards that need sub-second data, and pipelines where the scheduled export cadence is too slow. The firehose delivers every event at-least-once; your consumer is responsible for idempotency on replay. Topic retention is 7 days; if your consumer falls behind more than 7 days, events are lost — set up monitoring on consumer lag. This is not a beginner analytics feature; it requires Kafka consumer code and operational experience with consumer groups. If scheduled export to BigQuery gets you what you need, start there.
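
A minimal consumer sketch with the confluent-kafka Python client. The bootstrap server, group ID, certificate paths, and the assumption that events are JSON-encoded are all placeholders; /docs/guides/kafka-firehose has the authoritative configuration and example code:

import json
from confluent_kafka import Consumer

# All connection details are placeholders: copy the real bootstrap server,
# group ID, and certificate paths from workspace settings.
consumer = Consumer({
    "bootstrap.servers": "kafka.example.elido.app:9093",
    "group.id": "ws_8a2f-firehose",
    "security.protocol": "SSL",
    "ssl.certificate.location": "client.pem",
    "ssl.key.location": "client.key",
    "auto.offset.reset": "earliest",  # default on first consumer group join
})
consumer.subscribe(["clicks.ws_8a2f"])  # topic pattern: clicks.<workspace>

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        # At-least-once delivery: dedupe on click_id if your sink needs exactly-once.
        event = json.loads(msg.value())
        print(event["click_id"], event.get("country_iso2"))
finally:
    consumer.close()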

Stack you’ll touch

  • Raw click events
  • ClickHouse direct access
  • GA4 / Meta CAPI / Mixpanel
  • S3 + BigQuery export
  • Per-workspace DSN
  • Webhook firehose

What you'll measure

  • Sampling rate: 0% — every click stored
  • Event ingestion lag: under 5 seconds
  • Retention horizon: up to 24 months

Analytics teams running on this

Names are placeholders for now — real customer names land here as case studies are published.

The ClickHouse DSN let us point Metabase directly at click event data without building an ETL pipeline. We now answer 'which campaign drove MQL-to-SQL conversion?' from a Metabase dashboard with no extra infrastructure.

Lead Data Engineer · data engineering team, B2B SaaS, Helsinki

Server-side Meta CAPI via Elido recovered attribution on roughly 25% of conversions our client-side pixel was missing. The setup was one sprint; attribution accuracy improvement was immediate.

Analytics Engineer · growth analytics team, e-commerce, Paris

We consume the Kafka firehose into our own stream processor. Sub-5-second event lag means our real-time link-performance dashboards aren't lying to the editorial team during live events.

Senior Data Engineer · data infrastructure team, media company, Copenhagen

Elido analytics vs Bitly Analytics vs Heap

Bitly Analytics is adequate for click counts and basic geo. Heap is a full product analytics platform. The comparison below is honest about where each option is the right tool.

Capability                             | Elido                                       | Bitly Analytics                         | Heap
Click data sampling                    | 0% — every event stored                     | Aggregated; raw events not accessible   | Plan-dependent on the free tier
Direct SQL access                      | Read-only ClickHouse DSN (Business)         | No direct DB access                     | Heap Data Lake (warehouse export)
Scheduled export to BigQuery/Snowflake | Yes, Business+                              | CSV export only                         | Yes — core feature
Real-time Kafka firehose               | Yes, Business+                              | Not available                           | Not available
Server-side conversion forwarding      | GA4 MP, Meta CAPI, Mixpanel — deduplicated  | Not available                           | Server-side event ingestion (product events)
User-level tracking                    | No — click-level only, no user identity     | No                                      | Yes — core feature
Funnel + cohort retention              | Click cohorts on Business                   | No                                      | Full funnel + cohort — mature
Event retention                        | Up to 24 months raw                         | Aggregated counters; raw not available  | Varies by plan

Analytics team questions

What's the exact ClickHouse schema for click events?

The schema is public at /docs/api-reference under 'Click events'. Key fields: click_id (UUID), link_id, workspace_id, occurred_at (UTC timestamp), country_iso2, region, city, device_type (mobile/tablet/desktop), os, browser, referrer_domain, utm_source, utm_medium, utm_campaign, utm_term, utm_content. Fields without a value are NULL, not empty strings. Schema changes are announced in /changelog with a migration guide.
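
For planning warehouse models, the documented fields map onto a record like this sketch (which fields are nullable is an assumption here; the published schema is authoritative):

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Field list from /docs/api-reference. The Optional markers are assumptions;
# check the published schema for authoritative nullability.
@dataclass
class ClickEvent:
    click_id: str                  # UUID, random per click
    link_id: str
    workspace_id: str
    occurred_at: datetime          # UTC
    country_iso2: Optional[str]
    region: Optional[str]
    city: Optional[str]
    device_type: str               # mobile / tablet / desktop
    os: Optional[str]
    browser: Optional[str]
    referrer_domain: Optional[str]
    utm_source: Optional[str]
    utm_medium: Optional[str]
    utm_campaign: Optional[str]
    utm_term: Optional[str]
    utm_content: Optional[str]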

Is there a Kafka consumer guide?

Yes — /docs/guides/kafka-firehose covers bootstrap server, consumer group setup, client cert rotation, and example consumer code in Go and Python. Topic is one per workspace; partition count is fixed at 12. Offset reset is earliest by default on first consumer group join. If you're building on top of this, budget time for consumer lag monitoring — that's the failure mode that bites teams who don't set it up.

Can I join click events with my own user table?

In your warehouse, yes. The standard pattern is: export click events to BigQuery or Snowflake via scheduled export, then join on the UTM parameters or a custom user_id parameter you append to your short link destinations. Elido doesn't store user identity in click events — the click_id is a random UUID per click, not tied to a user account.

How does server-side conversion deduplication work?

When you POST a conversion event to Elido's conversion endpoint, you include the click_id that was returned in the original click response (it's passed as a query parameter to the destination URL). Elido looks up the click, checks it hasn't already been attributed, and fans the conversion out to GA4 MP, Meta CAPI, or Mixpanel with the original click's UTM context. Duplicate submissions with the same click_id are idempotent — they're acknowledged but not double-counted.

What happens if my Kafka consumer falls behind?

Events are retained in the topic for 7 days. If your consumer group's committed offset falls more than 7 days behind, older events are lost before your consumer reads them. Monitor consumer lag and set an alert at 6 hours of lag as an early warning. If events do age out before you read them, the scheduled export to S3/BigQuery covers the gap; it's a good backup for the firehose.

Does the ClickHouse DSN give access to other workspaces' data?

No. The DSN is scoped to your workspace's event table only, via a read-only ClickHouse user with row-level security applied. You cannot see other workspaces' events. The DSN is revocable from workspace settings; rotate it on the same cadence as API keys.

Is there a sample size minimum before click cohorts are meaningful?

ClickHouse runs the cohort query at whatever data size exists — there's no minimum enforced. Statistical meaningfulness is your judgment call. A cohort of 50 clicks gives you a number, but it's noisy. We show raw counts and percentages; we don't apply Bayesian smoothing or confidence intervals to cohort views. For formal analysis, export and run your model in your warehouse.

Can I filter the scheduled export to a subset of links?

Yes — export filters support: specific domain, specific campaign ID, specific tag, or a date range. A filtered export is still incremental; subsequent runs append only new events matching the filter. If you add a new filter condition to an existing export job, you'll need to either create a new job or do a one-off full re-export to backfill the new filter's history.

Not sure which angle fits?

Most teams start as one and grow into all four. Our sales team can walk through your specific stack in 20 minutes.
