Dozor

Data model

The dashboard owns 14 tables, organised into two domains:

  • Auth side: User, Account, VerificationToken, Authenticator, OtpRateLimit, InviteSendRateLimit (six tables, Auth.js territory + rate-limit support).
  • Business side: Organization, Membership, Invite, Project, TrackedUser, Session, EventBatch, Marker (eight tables, the actual product).

Schema source: prisma/schema.prisma. This page exists because the schema file is ground truth but doesn't tell you the story — what depends on what, what cascades, what's hot, what's cold.

Auth-side ER

| Table | Key fields | Relations | Purpose |
| --- | --- | --- | --- |
| User | id (cuid), email (unique), locale, activeOrganizationId (FK → Organization, SetNull) | Owns: Account, Authenticator, Membership, InviteSendRateLimit (all Cascade) | The signed-in human |
| Account | (provider, providerAccountId) unique, userId (FK → User, Cascade) | Belongs to User | OAuth links (Google, GitHub) |
| Authenticator | credentialID (unique), composite PK with userId (FK → User, Cascade) | Belongs to User | WebAuthn / passkey registration |
| Membership | (userId, organizationId) unique, role, both FKs Cascade | User ↔ Organization | Per-org role (OWNER / ADMIN / VIEWER) |
| VerificationToken | (identifier, token) unique, expires | — | One-time email OTP (Auth.js managed) |
| OtpRateLimit | (email, date) unique, count | — | Per-day OTP send cap (5/day, 60s cooldown) |
| InviteSendRateLimit | (userId, date) unique, count, userId (FK → User, Cascade) | Belongs to User | Per-day invite send cap (100/day per signed-in user) |
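
Both rate-limit tables encode the same bucket pattern: a unique (key, date) pair plus a counter. A minimal in-memory sketch of the OTP cap (5/day, 60s cooldown) looks like this; the names (`checkOtpSend`, `Bucket`) are hypothetical, and the real limiter reads and writes the OtpRateLimit table instead of a Map:

```typescript
// In-memory sketch of the per-day OTP send cap: one bucket per
// (email, UTC date), capped at 5 sends, with a 60s cooldown between sends.
// Illustrative only; the real implementation backs this with OtpRateLimit.
type Bucket = { count: number; lastSentAt: number };

const DAILY_CAP = 5;
const COOLDOWN_MS = 60_000;
const buckets = new Map<string, Bucket>();

function checkOtpSend(email: string, now: number): "ok" | "daily_cap" | "cooldown" {
  const date = new Date(now).toISOString().slice(0, 10); // UTC day: buckets reset at UTC midnight
  const key = `${email}:${date}`;
  const bucket = buckets.get(key);
  if (bucket) {
    if (bucket.count >= DAILY_CAP) return "daily_cap";
    if (now - bucket.lastSentAt < COOLDOWN_MS) return "cooldown";
    bucket.count += 1;
    bucket.lastSentAt = now;
  } else {
    buckets.set(key, { count: 1, lastSentAt: now });
  }
  return "ok";
}
```

Because the bucket key includes the date, there is nothing to reset explicitly: yesterday's rows simply stop matching.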

Business-side ER

| Table | Key fields | Relations | Purpose |
| --- | --- | --- | --- |
| Organization | id, name, type (PERSONAL \| TEAM) | Owns: Membership (Cascade), Invite (Cascade), Project (Cascade) | The team / personal-space tenant |
| Project | id, name, key (unique), organizationId (FK → Org, Cascade), defaultDisplayNameTraitKey | Owns: TrackedUser (Cascade), Session (Cascade) | The recording target — one project = one regeneratable API key |
| Invite | email, role, status (enum), expiresAt, organizationId (FK → Org, Cascade), invitedById (FK → User) | Belongs to Org + inviter | Pending invitation to join an org |
| TrackedUser | externalId, projectId (FK → Project, Cascade), customName, displayNameTraitKey, traits (Json) | Belongs to Project, owns Sessions (via SetNull on delete) | A visitor identified via dozor.identify() |
| Session | externalId, projectId (FK → Project, Cascade), trackedUserId (FK → TrackedUser, SetNull), url, userAgent, duration, eventCount, startedAt, endedAt | Owns: EventBatch (Cascade), Marker (Cascade) | One browser session as captured by the SDK |
| EventBatch | firstTimestamp, lastTimestamp, eventCount, data (Bytes — gzip-compressed JSON array), sessionId (FK → Session, Cascade) | Belongs to Session | One row per ingest POST — stored verbatim for replay |
| Marker | timestamp, kind (url \| identity), data (Json), sessionId (FK → Session, Cascade) | Belongs to Session | Typed timeline anchors extracted from rrweb custom events |
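
Marker extraction can be sketched as a filter over the event stream. rrweb custom events carry `{ type: 5, data: { tag, payload }, timestamp }`; the tag names "url" and "identity" below are assumed from the marker kinds above, and the SDK's actual tags may differ:

```typescript
// Sketch: pull typed markers out of an rrweb event stream.
// type 5 is rrweb's Custom event; tag names here are assumptions.
type RrwebEvent = { type: number; timestamp: number; data?: { tag?: string; payload?: unknown } };
type Marker = { timestamp: number; kind: "url" | "identity"; data: unknown };

const CUSTOM_EVENT = 5; // rrweb EventType.Custom

function extractMarkers(events: RrwebEvent[]): Marker[] {
  const markers: Marker[] = [];
  for (const ev of events) {
    if (ev.type !== CUSTOM_EVENT || !ev.data?.tag) continue; // skip snapshots, mutations, etc.
    if (ev.data.tag === "url" || ev.data.tag === "identity") {
      markers.push({ timestamp: ev.timestamp, kind: ev.data.tag, data: ev.data.payload });
    }
  }
  return markers;
}
```

Everything else in the stream stays inside the opaque EventBatch blob; only these anchors are promoted to queryable rows.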

Cascade rules — the destructive map

When you delete an entity, this is what comes with it:

User delete

  • Cascades: Account, Authenticator, Membership, InviteSendRateLimit.
  • Pre-step: any solo-member org (Personal Space + any TEAM org with just this user) is deleted outright, cascading further down. Shared TEAM orgs survive — ownership transfers if needed.
  • FK SetNull: User.activeOrganizationId on other users pointing at orgs the deleted user owned.
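
The solo-member pre-step reduces to one membership query. A sketch, with illustrative shapes rather than the real route handler:

```typescript
// Sketch of the pre-delete sweep: orgs where the departing user is the
// sole member get deleted outright; shared orgs survive.
// Shapes and names are illustrative, not the actual route code.
type Membership = { userId: string; organizationId: string };

function orgsToDeleteWith(userId: string, memberships: Membership[]): string[] {
  const userOrgs = memberships
    .filter(m => m.userId === userId)
    .map(m => m.organizationId);
  // Keep only orgs where every membership row belongs to the departing user.
  return userOrgs.filter(orgId =>
    memberships.every(m => m.organizationId !== orgId || m.userId === userId)
  );
}
```

Deleting those orgs then rides the normal Organization cascade (Membership, Invite, Project, and everything below).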

Organization delete (TEAM only — PERSONAL deletion is rejected with 403)

  • Pre-step: User.activeOrganizationId nullified for any user pointing to this org (the FK has onDelete: SetNull, but the route's pre-delete null-sweep makes the intent explicit).
  • Cascades: Membership, Invite, Project (and everything below).

Project delete

  • Cascades: Session, TrackedUser.

Session delete

  • Cascades: EventBatch, Marker.
  • FK SetNull: the TrackedUser row stays — the user keeps their other sessions; only this session's trackedUserId was the linkage.

TrackedUser delete

  • FK SetNull: Session.trackedUserId for sessions owned by the deleted user. The recordings stay replayable as anonymous.

The pattern: identity goes, recordings stay. A user who deleted their account doesn't take their session recordings with them. A project that gets nuked takes its sessions because the project is the recording target. Personal Space can never be deleted.
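
The whole destructive map fits in a small data structure. This sketch is transcribed by hand from the rules above, not generated from prisma/schema.prisma:

```typescript
// onDelete behaviour per parent table, transcribed from the cascade rules.
type Rule = { child: string; onDelete: "Cascade" | "SetNull" };

const rules: Record<string, Rule[]> = {
  User: [
    { child: "Account", onDelete: "Cascade" },
    { child: "Authenticator", onDelete: "Cascade" },
    { child: "Membership", onDelete: "Cascade" },
    { child: "InviteSendRateLimit", onDelete: "Cascade" },
  ],
  Organization: [
    { child: "Membership", onDelete: "Cascade" },
    { child: "Invite", onDelete: "Cascade" },
    { child: "Project", onDelete: "Cascade" },
  ],
  Project: [
    { child: "Session", onDelete: "Cascade" },
    { child: "TrackedUser", onDelete: "Cascade" },
  ],
  Session: [
    { child: "EventBatch", onDelete: "Cascade" },
    { child: "Marker", onDelete: "Cascade" },
  ],
  TrackedUser: [{ child: "Session", onDelete: "SetNull" }],
};

// Everything transitively removed when a row of `table` is deleted.
// SetNull edges stop the walk: the child row survives with a nulled FK.
function blastRadius(table: string, seen = new Set<string>()): string[] {
  for (const rule of rules[table] ?? []) {
    if (rule.onDelete === "Cascade" && !seen.has(rule.child)) {
      seen.add(rule.child);
      blastRadius(rule.child, seen);
    }
  }
  return [...seen];
}
```

`blastRadius("Project")` reaches four tables; `blastRadius("TrackedUser")` reaches none, which is the "identity goes, recordings stay" rule in executable form.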

Indexing strategy

Hot read paths:

| Query | Index used |
| --- | --- |
| Sessions list (per project, newest first) | [projectId, createdAt(sort: Desc)] on Session |
| Tracked users list (per project, recent first) | [projectId, updatedAt(sort: Desc)] on TrackedUser |
| Ingest upsert (per project, by external id) | [projectId, externalId] unique on Session + TrackedUser |
| Session events stream (player load) | [sessionId, firstTimestamp] on EventBatch |
| Marker timeline (history feed + stats) | [sessionId, kind, timestamp] on Marker |
| Member's orgs | userId on Membership |
| Org's members | organizationId on Membership |
| OAuth account lookup | [provider, providerAccountId] unique on Account |
| Project key lookup (ingest auth) | key @unique on Project |
| Invite by email (recipient inbox) | email on Invite |

Cold reads (admin / cron only) — no indexes beyond the FK ones.
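
The two ingest-path lookups (project by unique key, then a session upsert on the (projectId, externalId) pair) can be sketched with Maps standing in for the unique indexes; all names here are illustrative:

```typescript
// Sketch of ingest-time lookups. Maps stand in for the two unique indexes:
// key @unique on Project, and [projectId, externalId] unique on Session.
type Project = { id: string; key: string };
type Session = { projectId: string; externalId: string; eventCount: number };

const projectsByKey = new Map<string, Project>();
const sessionsByPair = new Map<string, Session>();

function ingest(apiKey: string, sessionExternalId: string, events: number): Session | null {
  const project = projectsByKey.get(apiKey); // O(1) via the unique key index
  if (!project) return null;                 // unknown key: reject the POST
  const pair = `${project.id}:${sessionExternalId}`;
  const existing = sessionsByPair.get(pair);
  if (existing) {
    existing.eventCount += events;           // upsert: update path
    return existing;
  }
  const created = { projectId: project.id, externalId: sessionExternalId, eventCount: events };
  sessionsByPair.set(pair, created);         // upsert: create path
  return created;
}
```

The unique pair is what makes the upsert safe under concurrent POSTs from the same browser session.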

Typical row sizes

For capacity estimation. These are order-of-magnitude — your traffic profile shifts them.

| Table | Bytes per row (typical) | Cardinality scaling |
| --- | --- | --- |
| User | 200 | One per signed-in user (10s–100s) |
| Organization | 150 | ~1.5× user count (personal + team orgs) |
| Membership | 80 | Sum of org sizes (10s–1000s) |
| Project | 250 | A few per org (10s–100s total) |
| TrackedUser | 300 + traits JSON | One per (project, externalId) (1000s–100000s) |
| Session | 300 | One per browser session (10000s–1000000s) |
| EventBatch | 50 + ~50 KB gzipped blob | One per ingest POST (~1/min while recording) — dominant table by storage |
| Marker | 100 + data JSON | A few per session (per URL change + identify) |
| Invite | 200 | Low cardinality, swept daily |
| OtpRateLimit | 80 | Bucket resets at UTC midnight |

EventBatch is by far the heaviest table. A 5-minute session of normal browsing typically captures 1000–3000 events, batched into roughly one row per minute, so event volume grows by ~10000 events per active user per day on a busy product, nearly all of it landing in EventBatch blobs.

The 90-day retention cron (see Authentication & sessions → Rate limits) is what keeps the database from compounding indefinitely.

Capacity estimates (Neon Free tier)

The free Neon tier is 0.5 GB of storage. Rough fits:

  • 50–100k sessions at average size (1500 events each, ~250 bytes per event payload)
  • 5–10 million events before pressure
  • At ~1000 sessions/day that's roughly 50–100 days of data — the same order as the 90-day retention window. The optimistic estimate fits under the cleanup cron; the pessimistic one fills the tier first.
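
The arithmetic behind those bullets, using the text's own figures as inputs (the session-fit range is the assumption that dominates):

```typescript
// Back-of-envelope steady state under retention. With an N-day retention
// cron, the database holds at most N days of sessions, so the question is
// whether N days of traffic fits in the tier. All figures are the text's
// own estimates, not measurements.
const RETENTION_DAYS = 90;
const SESSIONS_PER_DAY = 1_000;
const CAPACITY_SESSIONS_LOW = 50_000;   // pessimistic fit on 0.5 GB
const CAPACITY_SESSIONS_HIGH = 100_000; // optimistic fit on 0.5 GB

const steadyState = RETENTION_DAYS * SESSIONS_PER_DAY; // 90,000 sessions retained
const fitsLow = steadyState <= CAPACITY_SESSIONS_LOW;   // pessimistic estimate overflows
const fitsHigh = steadyState <= CAPACITY_SESSIONS_HIGH; // optimistic estimate fits
```

In other words, at this traffic level retention and capacity are close enough that compressing batches well (or shortening retention) is what decides which side you land on.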

For higher traffic, Neon's paid tiers scale linearly. There's no soft limit in the dashboard itself — the cap is whatever your Postgres instance supports.
