SuperSync Server
A custom, high-performance synchronization server for Super Productivity.
Note: This server implements a custom operation-based synchronization protocol (Event Sourcing), not WebDAV. It is designed specifically for the Super Productivity client's efficient sync requirements.
Related Documentation:
- Authentication Architecture - Auth design decisions and security features
- Operation Log Architecture - Client-side architecture
- Server Architecture Diagrams - Visual diagrams
Architecture
The server uses an Append-Only Log architecture backed by PostgreSQL (via Prisma):
- Operations: Clients upload atomic operations (Create, Update, Delete, Move).
- Sequence Numbers: The server assigns a strictly increasing `server_seq` to each operation.
- Synchronization: Clients request "all operations since sequence X".
- Snapshots: The server can regenerate the full state by replaying operations, optimizing initial syncs.
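The operation record implied by this architecture (and by the API examples further down) can be sketched in TypeScript. Field names mirror the documented upload body; the `SyncOp` and `latestSeq` names are assumptions of this sketch, not identifiers from the actual codebase:

```typescript
// Illustrative shape of a sync operation, inferred from the upload example
// in "API Details". `serverSeq` is absent until the server accepts the op.
interface SyncOp {
  id: string; // UUID v7, generated by the client
  opType: string; // e.g. "UPD" (Create/Update/Delete/Move)
  entityType: string; // e.g. "TASK"
  entityId: string;
  payload: unknown; // may be an E2E-encrypted opaque blob
  vectorClock: Record<string, number>;
  timestamp: number; // client time, ms since epoch
  schemaVersion: number;
  serverSeq?: number; // strictly increasing, server-assigned
}

// The highest serverSeq a client has seen is its cursor for the next
// "all operations since sequence X" request.
function latestSeq(ops: SyncOp[]): number {
  return ops.reduce((max, op) => Math.max(max, op.serverSeq ?? 0), 0);
}
```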
Key Design Principles
| Principle | Description |
|---|---|
| Server-Authoritative | Server assigns monotonic sequence numbers for total ordering |
| Client-Side Conflict Resolution | Server stores operations as-is; clients detect and resolve conflicts |
| E2E Encryption Support | Payloads can be encrypted client-side; server treats them as opaque blobs |
| Idempotent Uploads | Request ID deduplication prevents duplicate operations |
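The idempotent-upload principle can be illustrated with a small sketch: a retried upload is acknowledged with the originally assigned sequence number instead of appending a duplicate to the log. The real server persists this state in PostgreSQL; the in-memory `Map` (and the `duplicate` flag) here are only stand-ins for illustration:

```typescript
// Request-ID deduplication sketch: seen op ids keep their original result.
const assignedSeq = new Map<string, number>();
let nextSeq = 0;

function acceptOp(opId: string): {
  opId: string;
  accepted: boolean;
  serverSeq: number;
  duplicate: boolean;
} {
  const prev = assignedSeq.get(opId);
  if (prev !== undefined) {
    // Idempotent retry: acknowledge with the original sequence number.
    return { opId, accepted: true, serverSeq: prev, duplicate: true };
  }
  nextSeq += 1; // server-authoritative, strictly increasing
  assignedSeq.set(opId, nextSeq);
  return { opId, accepted: true, serverSeq: nextSeq, duplicate: false };
}
```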
Quick Start
Docker (Recommended)
The easiest way to run the server is using the provided Docker Compose configuration.
# 1. Copy environment example
cp .env.example .env
# 2. Configure .env (Set JWT_SECRET, DOMAIN, POSTGRES_PASSWORD)
nano .env
# 3. Start the stack (Server + Postgres + Caddy)
docker-compose up -d
Manual Setup (Development)
# Install dependencies
npm install
# Generate Prisma Client
npx prisma generate
# Set up .env
cp .env.example .env
# Edit .env to point to your PostgreSQL instance (DATABASE_URL)
# Push schema to DB
npx prisma db push
# Start the server
npm run dev
# Or build and run
npm run build
npm start
Configuration
All configuration is done via environment variables.
| Variable | Default | Description |
|---|---|---|
| `PORT` | `1900` | Server port |
| `DATABASE_URL` | - | PostgreSQL connection string (e.g. `postgresql://user:pass@localhost:5432/db`) |
| `JWT_SECRET` | - | Required. Secret for signing JWTs (min 32 chars) |
| `PUBLIC_URL` | - | Public URL used for email links (e.g. `https://sync.example.com`) |
| `CORS_ORIGINS` | `https://app.super-productivity.com` | Allowed CORS origins |
| `SMTP_HOST` | - | SMTP server for emails |
API Endpoints
Authentication
Register a new user
POST /api/register
Content-Type: application/json
{
"email": "user@example.com",
"password": "yourpassword"
}
Response:
{
"message": "User registered. Please verify your email.",
"id": 1,
"email": "user@example.com"
}
Login
POST /api/login
Content-Type: application/json
{
"email": "user@example.com",
"password": "yourpassword"
}
Response:
{
"token": "jwt-token",
"user": { "id": 1, "email": "user@example.com" }
}
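A login call could be assembled client-side as in the following sketch. The helper name `buildLoginRequest` and the base URL are illustrative assumptions; only the path, method, and body shape come from the endpoint documentation above:

```typescript
// Builds the URL and fetch options for POST /api/login.
function buildLoginRequest(baseUrl: string, email: string, password: string) {
  return {
    url: `${baseUrl}/api/login`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ email, password }),
    },
  };
}

// Usage (network call not shown here):
// const { url, init } = buildLoginRequest("https://sync.example.com", email, pw);
// const { token } = await (await fetch(url, init)).json();
```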
Synchronization
All sync endpoints require Bearer authentication: Authorization: Bearer <jwt-token>
1. Upload Operations
Send new changes to the server.
POST /api/sync/ops
2. Download Operations
Get changes from other devices.
GET /api/sync/ops?sinceSeq=123
3. Get Snapshot
Get the full current state (optimized).
GET /api/sync/snapshot
4. Sync Status
Check pending operations and device status.
GET /api/sync/status
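All four endpoints above share the same Bearer-token requirement, which a client could centralize as in this sketch (`syncRequest` is a hypothetical helper, not part of the Super Productivity client):

```typescript
// Builds an authenticated request for any of the /api/sync endpoints.
function syncRequest(baseUrl: string, path: string, token: string) {
  return {
    url: `${baseUrl}${path}`,
    init: { headers: { Authorization: `Bearer ${token}` } },
  };
}

// Usage (network call not shown here):
// const { url, init } = syncRequest("https://sync.example.com", "/api/sync/status", jwt);
// const status = await (await fetch(url, init)).json();
```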
Client Configuration
In Super Productivity, configure the Custom Sync provider with:
- Base URL: `https://sync.your-domain.com` (or your deployed URL)
- Auth Token: JWT token from login
Maintenance
Scripts
The server includes scripts for administrative tasks. These use the configured database.
# Delete a user account
npm run delete-user -- user@example.com
# Clear sync data (preserves account)
npm run clear-data -- user@example.com
# Clear ALL sync data (dangerous)
npm run clear-data -- --all
API Details
Upload Operations (POST /api/sync/ops)
Request body:
{
"ops": [
{
"id": "uuid-v7",
"opType": "UPD",
"entityType": "TASK",
"entityId": "task-123",
"payload": { "changes": { "title": "New title" } },
"vectorClock": { "clientA": 5 },
"timestamp": 1701234567890,
"schemaVersion": 1
}
],
"clientId": "clientA",
"lastKnownSeq": 100
}
Response:
{
"results": [{ "opId": "uuid-v7", "accepted": true, "serverSeq": 101 }],
"newOps": [],
"latestSeq": 101
}
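Hypothetical client-side helpers mirroring the request and response shapes above; the names `buildUploadBody` and `rejectedOpIds` are illustrative, not identifiers from the actual client:

```typescript
interface OpResult {
  opId: string;
  accepted: boolean;
  serverSeq?: number;
}
interface UploadResponse {
  results: OpResult[];
  newOps: unknown[];
  latestSeq: number;
}

// Serializes the documented upload body for POST /api/sync/ops.
function buildUploadBody(ops: object[], clientId: string, lastKnownSeq: number): string {
  return JSON.stringify({ ops, clientId, lastKnownSeq });
}

// Ops the server did not accept, e.g. for logging or retry decisions.
function rejectedOpIds(res: UploadResponse): string[] {
  return res.results.filter((r) => !r.accepted).map((r) => r.opId);
}
```

After a successful upload the client can store `latestSeq` from the response as its cursor for subsequent downloads.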
Download Operations (GET /api/sync/ops)
Query parameters:
- `sinceSeq` (required): Server sequence number to start from
- `limit` (optional): Max operations to return (default: 500)
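The `sinceSeq`/`limit` contract suggests a simple paging loop on the client: keep requesting operations after the last seen sequence number until a short page signals you have caught up. In this sketch `fetchPage` is a stand-in for a real call to GET /api/sync/ops:

```typescript
type Page = { ops: { serverSeq: number }[] };

// Downloads all pending operations by paging with sinceSeq + limit.
async function downloadAll(
  fetchPage: (sinceSeq: number, limit: number) => Promise<Page>,
  limit = 500,
): Promise<number[]> {
  const seqs: number[] = [];
  let cursor = 0;
  for (;;) {
    const page = await fetchPage(cursor, limit);
    for (const op of page.ops) seqs.push(op.serverSeq);
    if (page.ops.length < limit) break; // short page: caught up
    cursor = page.ops[page.ops.length - 1].serverSeq; // advance the cursor
  }
  return seqs;
}
```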
Upload Snapshot (POST /api/sync/snapshot)
Used for full-state operations (BackupImport, SyncImport, Repair):
{
"state": {
/* Full AppDataComplete */
},
"clientId": "clientA",
"reason": "initial",
"vectorClock": { "clientA": 10 },
"schemaVersion": 1
}
Security Features
| Feature | Implementation |
|---|---|
| Authentication | JWT Bearer tokens in Authorization header |
| Timing Attack Mitigation | Dummy hash comparison on invalid users |
| Input Validation | Operation ID, entity ID, schema version validated |
| Rate Limiting | Configurable per-user limits |
| Vector Clock Sanitization | Limited to 100 entries, 255 char keys |
| Entity Type Allowlist | Prevents injection of invalid entity types |
| Request Deduplication | Prevents duplicate operations on retry |
Multi-Instance Deployment Considerations
When deploying multiple server instances behind a load balancer, be aware of these limitations:
Passkey Challenge Storage
Issue: WebAuthn challenges are stored in an in-memory Map, which doesn't work across instances.
Symptom: Passkey registration/login fails if the challenge generation request hits instance A but verification hits instance B.
Solution for multi-instance:
- Implement Redis-backed challenge storage
- Or use sticky sessions (less ideal)
Current status: A warning is logged at startup in production if in-memory storage is used.
Snapshot Generation Locks
Issue: Concurrent snapshot generation prevention uses an in-memory Map.
Symptom: Same user may trigger duplicate snapshot computations across different instances.
Impact: Performance only (no data corruption) - snapshots are deterministic.
Solution for multi-instance:
- Implement Redis distributed lock (optional, only for performance)
Single-Instance Deployment
For single-instance deployments, these limitations do not apply. The current implementation is fully functional and well-tested for single-instance use.
Security Notes
- Set JWT_SECRET to a secure random value in production (min 32 characters).
- Use HTTPS in production. The Docker setup includes Caddy to handle this automatically.
- Restrict CORS origins in production.
- Database backups are recommended for production deployments.