# Self-host with Postgres
The Postgres adapter is the simplest way to run Litemetrics if you already operate a Postgres database. It has full feature parity with the ClickHouse adapter (every metric, time series, top-N query, and retention cohort returns identical results) and adds zero new infrastructure to your stack.
## When to pick Postgres
- You already run Postgres for your app and want one less moving piece.
- You expect under ~10M events per month per site.
- You value operational simplicity over peak query throughput.
- You want analytics data in the same backup, replication, and observability story as your app data.
For larger volumes or sub-second top-N queries on hundreds of millions of events, prefer ClickHouse.
## 1. Install

Add the collector to an Express app and point it at your Postgres instance:

```bash
npm install @litemetrics/node
```

```js
import express from 'express';
import { createCollector } from '@litemetrics/node';

const app = express();
app.use(express.json());

const collector = await createCollector({
  db: {
    adapter: 'postgres',
    url: process.env.DATABASE_URL,
  },
  adminSecret: process.env.ADMIN_SECRET,
  geoip: true,
});

app.post('/api/collect', collector.handler());
app.get('/api/stats', collector.queryHandler());
app.all('/api/events', collector.eventsHandler());
app.all('/api/users/*', collector.usersHandler());
app.all('/api/sites/*', collector.sitesHandler());

app.listen(3002);
```

## 2. Or run the bundled image
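With the collector listening, any HTTP client can post events to `/api/collect`. A minimal sketch of a client helper, assuming hypothetical payload field names (`site`, `type`, `url`, `referrer` are illustrative; check the real collect schema before relying on them):

```js
// Hedged sketch: the field names below are assumptions about the collect
// payload, not taken from the Litemetrics API reference.
function buildPageview(siteId, url, referrer) {
  return {
    site: siteId,                      // hypothetical field name
    type: 'pageview',
    url,
    referrer: referrer || null,        // omit or null when there is no referrer
    timestamp: new Date().toISOString(),
  };
}

// POST the event to the collector started above (Node 18+ has global fetch).
async function track(event) {
  const res = await fetch('http://localhost:3002/api/collect', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(event),
  });
  return res.ok;
}
```

In the browser, the official tracking snippet does the equivalent of `track(buildPageview(...))` on each navigation.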
The Docker image works the same way; pass `DB_ADAPTER` and `POSTGRES_URL`:

```bash
docker run -d \
  -e DB_ADAPTER=postgres \
  -e POSTGRES_URL=postgres://user:pass@db.example.com:5432/litemetrics \
  -e ADMIN_SECRET=change-me \
  -p 3002:3002 \
  ghcr.io/metehankurucu/litemetrics:latest
```

## 3. Configuration
| Variable | Default | Notes |
|---|---|---|
| `DB_ADAPTER` | `clickhouse` | Set to `postgres`. |
| `POSTGRES_URL` | `postgres://postgres:postgres@localhost:5432/litemetrics` | Standard Postgres connection string. SSL params supported. |
| `ADMIN_SECRET` | `admin` | Required for site CRUD. |
| `GEOIP` | `true` | MaxMind GeoLite2 country and city resolution. |
| `TRUST_PROXY` | `true` | Set when running behind a load balancer. |
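For local evaluation, the variables above map directly onto a Compose file. A minimal sketch, assuming the image tag from this page; the Postgres version, volume name, and passwords are illustrative:

```yaml
# Sketch only: adjust versions, secrets, and volumes for real deployments.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: litemetrics
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  litemetrics:
    image: ghcr.io/metehankurucu/litemetrics:latest
    environment:
      DB_ADAPTER: postgres
      POSTGRES_URL: postgres://postgres:change-me@db:5432/litemetrics
      ADMIN_SECRET: change-me
    ports:
      - "3002:3002"
    depends_on:
      - db
volumes:
  pgdata:
```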
## 4. Schema

Tables are created on first boot. The Postgres adapter uses native types where Postgres is good at them:

- `events`: one row per event. Properties stored as `jsonb` for queryable extra fields.
- Primary index on `(site_id, timestamp)` for fast range scans (the dashboard's hottest query pattern).
- Secondary indexes on `(site_id, type, timestamp)` for filtered breakdowns.
- `sites`: soft-delete via `deleted_at`.
- `identity`: visitor-to-user merge map.
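Because properties land in a `jsonb` column, ad-hoc questions can be answered straight from SQL. A sketch under stated assumptions (the `plan` property and the `'event'` type value are hypothetical; column names mirror the list above but may differ from the adapter's actual DDL):

```sql
-- Top values of a custom 'plan' property for one site over the last 7 days.
-- ->> extracts a jsonb field as text.
SELECT properties->>'plan' AS plan, count(*) AS events
FROM events
WHERE site_id = 'site_1'
  AND type = 'event'
  AND timestamp > now() - interval '7 days'
GROUP BY 1
ORDER BY events DESC
LIMIT 10;
```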
## 5. Operating notes

- Connection pool: the adapter uses `pg.Pool`. The default pool size of 10 is enough for most workloads; bump it if your collector runs behind heavy concurrent traffic.
- Vacuum: events are insert-only with TTL deletions (when configured). Postgres autovacuum keeps things tidy without intervention.
- Partitioning: for multi-tenant deployments above a few million events per site per month, declarative partitioning on `timestamp` reduces query and vacuum costs. Add it via a manual migration; the adapter does not require it.
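A sketch of what such a manual migration could look like, using native range partitioning. The column list is illustrative (taken from the schema notes above, not the adapter's real DDL), and you would still need to copy existing rows and swap table names:

```sql
-- Sketch: a range-partitioned replacement for the events table.
CREATE TABLE events_partitioned (
  site_id    text        NOT NULL,
  type       text        NOT NULL,
  timestamp  timestamptz NOT NULL,
  properties jsonb
) PARTITION BY RANGE (timestamp);

-- Recreate the hot-path index as a partitioned index.
CREATE INDEX ON events_partitioned (site_id, timestamp);

-- One partition per month; create future ones ahead of time (cron or pg_partman).
CREATE TABLE events_2025_01 PARTITION OF events_partitioned
  FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
```

Range partitioning on `timestamp` lets TTL deletions become cheap `DROP TABLE` calls on old partitions instead of bulk `DELETE`s that autovacuum must clean up.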
## 6. Migrating to ClickHouse later

If you outgrow Postgres, you can switch to ClickHouse without changing application code; only the `DB_ADAPTER` and `CLICKHOUSE_URL` env vars change. A one-shot export script is provided in the repo for migrating historical events.
## Where to next
- Quickstart: end-to-end setup including the dashboard.
- ClickHouse setup: comparison and switch path.
- vs PostHog: when a lighter Postgres stack beats a heavyweight platform.