diff --git a/docs/ai/sync/hybrid-manifest-architecture.md b/docs/ai/sync/hybrid-manifest-architecture.md
index 8a7c93154..0de176314 100644
--- a/docs/ai/sync/hybrid-manifest-architecture.md
+++ b/docs/ai/sync/hybrid-manifest-architecture.md
@@ -605,7 +605,8 @@ Remote Storage Layout (v2):
Code Files:
src/app/core/persistence/operation-log/
├── operation.types.ts # Add HybridManifest types
-├── operation-log-sync.service.ts # Buffer/overflow logic
+├── sync/
+│ └── operation-log-sync.service.ts # Buffer/overflow logic
├── hybrid-snapshot.service.ts # NEW: Snapshot generation/loading
└── manifest-recovery.service.ts # NEW: Corruption recovery
```
diff --git a/docs/ai/sync/operation-log-architecture-diagrams.md b/docs/ai/sync/operation-log-architecture-diagrams.md
index 3a27dac3e..2137a0d65 100644
--- a/docs/ai/sync/operation-log-architecture-diagrams.md
+++ b/docs/ai/sync/operation-log-architecture-diagrams.md
@@ -21,13 +21,13 @@ graph TD
Filter -- No --> Ignore[Ignore / UI Only]
Filter -- Yes --> Transform["Transform to Operation<br/>UUIDv7, Timestamp, VectorClock<br/>operation-converter.util.ts"]
- Transform -->|2. Validate| PayloadValid{"Payload<br/>Valid?<br/>validate-operation-payload.ts"}
+ Transform -->|2. Validate| PayloadValid{"Payload<br/>Valid?<br/>processing/validate-operation-payload.ts"}
PayloadValid -- No --> ErrorSnack[Show Error Snackbar]
PayloadValid -- Yes --> DBWrite
end
subgraph "Persistence Layer (IndexedDB)"
- DBWrite["Write to SUP_OPS<br/>operation-log-store.service.ts"]:::storage
+ DBWrite["Write to SUP_OPS<br/>store/operation-log-store.service.ts"]:::storage
DBWrite -->|Append| OpsTable["Table: ops<br/>The Event Log<br/>IndexedDB"]:::storage
DBWrite -->|Update| StateCache["Table: state_cache<br/>Snapshots<br/>IndexedDB"]:::storage
@@ -41,18 +41,18 @@ graph TD
subgraph "Compaction System"
OpsTable -->|Count > 500| CompactionTrig{"Compaction<br/>Trigger<br/>operation-log.effects.ts"}:::trigger
- CompactionTrig -->|Yes| Compactor["CompactionService<br/>operation-log-compaction.service.ts"]:::process
+ CompactionTrig -->|Yes| Compactor["CompactionService<br/>store/operation-log-compaction.service.ts"]:::process
Compactor -->|Read State| NgRx
Compactor -->|Save Snapshot| StateCache
Compactor -->|Delete Old Ops| OpsTable
end
subgraph "Read Path (Hydration)"
- Startup((App Startup)) --> Hydrator["OperationLogHydrator<br/>operation-log-hydrator.service.ts"]:::process
+ Startup((App Startup)) --> Hydrator["OperationLogHydrator<br/>store/operation-log-hydrator.service.ts"]:::process
Hydrator -->|1. Load| StateCache
- StateCache -->|Check| Schema{"Schema<br/>Version?<br/>schema-migration.service.ts"}
- Schema -- Old --> Migrator["SchemaMigrationService<br/>schema-migration.service.ts"]:::process
+ StateCache -->|Check| Schema{"Schema<br/>Version?<br/>store/schema-migration.service.ts"}
+ Schema -- Old --> Migrator["SchemaMigrationService<br/>store/schema-migration.service.ts"]:::process
Migrator -->|Transform State| MigratedState
Schema -- Current --> CurrentState
@@ -60,12 +60,12 @@ graph TD
MigratedState -->|Load State| StoreInit
Hydrator -->|2. Load Tail| OpsTable
- OpsTable -->|Replay Ops| Replayer["OperationApplier<br/>operation-applier.service.ts"]:::process
+ OpsTable -->|Replay Ops| Replayer["OperationApplier<br/>processing/operation-applier.service.ts"]:::process
Replayer -->|Dispatch| NgRx
end
subgraph "Multi-Tab"
- DBWrite -->|4. Broadcast| BC["BroadcastChannel<br/>multi-tab-coordinator.service.ts"]
+ DBWrite -->|4. Broadcast| BC["BroadcastChannel<br/>sync/multi-tab-coordinator.service.ts"]
BC -->|Notify| OtherTabs((Other Tabs))
end
@@ -91,9 +91,9 @@ graph TD
end
subgraph "Client: Sync Loop"
- Scheduler((Scheduler)) -->|Interval| SyncService["OperationLogSyncService<br/>operation-log-sync.service.ts"]
+ Scheduler((Scheduler)) -->|Interval| SyncService["OperationLogSyncService<br/>sync/operation-log-sync.service.ts"]
- SyncService -->|1. Get Last Seq| LocalMeta["Sync Metadata<br/>operation-log-store.service.ts"]
+ SyncService -->|1. Get Last Seq| LocalMeta["Sync Metadata<br/>store/operation-log-store.service.ts"]
%% Download Flow
SyncService -->|2. Download Ops| API
@@ -105,7 +105,7 @@ graph TD
end
subgraph "Client: Conflict Management"
- ConflictDet{"Conflict<br/>Detection<br/>conflict-resolution.service.ts"}:::conflict
+ ConflictDet{"Conflict<br/>Detection<br/>sync/conflict-resolution.service.ts"}:::conflict
ConflictDet -->|Check Vector Clocks| VCCheck[Entity-Level Check]
@@ -125,10 +125,10 @@ graph TD
subgraph "Client: Application & Validation"
ApplyRemote -->|Apply to Store| Store[NgRx Store]
- Store -->|Post-Apply| Validator{"Validate<br/>State?<br/>validate-state.service.ts"}:::repair
+ Store -->|Post-Apply| Validator{"Validate<br/>State?<br/>processing/validate-state.service.ts"}:::repair
Validator -- Valid --> Done((Sync Done))
- Validator -- Invalid --> Repair["Auto-Repair Service<br/>repair-operation.service.ts"]:::repair
+ Validator -- Invalid --> Repair["Auto-Repair Service<br/>processing/repair-operation.service.ts"]:::repair
Repair -->|Fix Data| RepairedState
Repair -->|Create Op| RepairOp[Create REPAIR Op]:::repair
diff --git a/docs/ai/sync/operation-log-architecture.md b/docs/ai/sync/operation-log-architecture.md
index 1f92179f8..a5f258cc2 100644
--- a/docs/ai/sync/operation-log-architecture.md
+++ b/docs/ai/sync/operation-log-architecture.md
@@ -1763,23 +1763,28 @@ What if data exists in both `pf` AND `SUP_OPS` databases?
```
src/app/core/persistence/operation-log/
├── operation.types.ts # Type definitions (Operation, OpType, EntityType)
-├── operation-log-store.service.ts # SUP_OPS IndexedDB wrapper
+├── operation-log.const.ts # Constants
├── operation-log.effects.ts # Action capture + META_MODEL bridge
-├── operation-log-hydrator.service.ts # Startup hydration
-├── operation-log-compaction.service.ts # Snapshot + cleanup
-├── operation-log-migration.service.ts # Genesis migration from legacy
-├── operation-log-sync.service.ts # Upload/download operations (Part C)
-├── operation-applier.service.ts # Apply ops to store with dependency handling
├── operation-converter.util.ts # Op ↔ Action conversion
├── persistent-action.interface.ts # PersistentAction type + isPersistentAction guard
-├── lock.service.ts # Cross-tab locking (Web Locks + fallback)
-├── multi-tab-coordinator.service.ts # BroadcastChannel coordination
-├── schema-migration.service.ts # State schema migrations
-├── dependency-resolver.service.ts # Extract/check operation dependencies
-├── conflict-resolution.service.ts # Conflict UI presentation
-├── validate-state.service.ts # Typia + cross-model validation wrapper
-├── validate-operation-payload.ts # Checkpoint A - payload validation
-└── repair-operation.service.ts # REPAIR operation creation + notification
+├── entity-key.util.ts # Entity key generation utilities
+├── store/
+│ ├── operation-log-store.service.ts # SUP_OPS IndexedDB wrapper
+│ ├── operation-log-hydrator.service.ts # Startup hydration
+│ ├── operation-log-compaction.service.ts # Snapshot + cleanup
+│ ├── operation-log-migration.service.ts # Genesis migration from legacy
+│ └── schema-migration.service.ts # State schema migrations
+├── sync/
+│ ├── operation-log-sync.service.ts # Upload/download operations (Part C)
+│ ├── lock.service.ts # Cross-tab locking (Web Locks + fallback)
+│ ├── multi-tab-coordinator.service.ts # BroadcastChannel coordination
+│ ├── dependency-resolver.service.ts # Extract/check operation dependencies
+│ └── conflict-resolution.service.ts # Conflict UI presentation
+└── processing/
+ ├── operation-applier.service.ts # Apply ops to store with dependency handling
+ ├── validate-state.service.ts # Typia + cross-model validation wrapper
+ ├── validate-operation-payload.ts # Checkpoint A - payload validation
+ └── repair-operation.service.ts # REPAIR operation creation + notification
src/app/pfapi/
├── pfapi-store-delegate.service.ts # Reads NgRx for sync (Part B)
diff --git a/docs/ai/sync/operation-log-execution-plan.md b/docs/ai/sync/operation-log-execution-plan.md
deleted file mode 100644
index 2a10759fa..000000000
--- a/docs/ai/sync/operation-log-execution-plan.md
+++ /dev/null
@@ -1,58 +0,0 @@
-# Operation Log: Remaining Tasks & Future Enhancements
-
-**Status:** Core Implementation Complete (Parts A, B, C, D)
-**Branch:** `feat/operation-logs`
-**Last Updated:** December 3, 2025
-
----
-
-## Overview
-
-The core Operation Log architecture (Local Persistence, Legacy Bridge, Server Sync, Validation & Repair) is fully implemented and operational. This document tracks future enhancements and optimizations.
-
----
-
-## 1. Performance & Storage Optimizations
-
-| Enhancement | Description | Priority | Effort |
-| ---------------------------- | ------------------------------------------------------------------------- | -------- | ------ |
-| **IndexedDB index** | Add index on `syncedAt` for O(1) `getUnsynced()` queries | Low | Low |
-| **Persistent compaction** | Track `opsSinceCompaction` counter in DB to persist across restarts | Low | Low |
-| **Optimize getAppliedOpIds** | Consider Merkle trees or Bloom filters if log grows very large (>10k ops) | Low | Medium |
-| **Diff-based storage** | Store diffs (e.g., diff-match-patch) for large text fields (Notes) | Defer | High |
-
-## 2. Observability & Tooling
-
-| Enhancement | Description | Priority | Effort |
-| ----------------- | --------------------------------------------------------------------------- | -------- | ------ |
-| **Op Log Viewer** | Hidden debug panel (in Settings → About) to view/inspect raw operation logs | Medium | Medium |
-
-**Implementation Idea:**
-
-- Tab showing total ops, pending ops, last sync time, vector clock.
-- List of recent operations (seq, id, type, timestamp) with JSON expansion.
-
-## 3. Feature Enhancements
-
-| Enhancement | Description | Priority | Effort |
-| -------------- | ----------------------------------------------------------------------- | -------- | ------ |
-| **Auto-merge** | Automatically merge non-conflicting field changes on the same entity | Low | High |
-| **Undo/Redo** | Leverage the operation log history to implement robust global Undo/Redo | Low | High |
-
-## 4. Migration System Improvements
-
-Refinements for the Schema Migration system (Part A.7).
-
-| Enhancement | Description | Priority | Effort |
-| ---------------------------- | --------------------------------------------- | -------- | ------ |
-| **Operation migration** | Transform old ops to new schema during replay | Low | High |
-| **Conflict-aware migration** | Special handling for version conflicts | Medium | High |
-| **Migration rollback** | Undo migration if it fails partway | Low | Medium |
-| **Progressive migration** | Migrate in background over multiple sessions | Low | High |
-
----
-
-# References
-
-- [Architecture](./operation-log-architecture.md) - Complete System Design
-- [PFAPI Architecture](./pfapi-sync-persistence-architecture.md) - Legacy Sync System
diff --git a/src/app/core/persistence/operation-log/docs/hybrid-manifest-architecture.md b/src/app/core/persistence/operation-log/docs/hybrid-manifest-architecture.md
new file mode 100644
index 000000000..0de176314
--- /dev/null
+++ b/src/app/core/persistence/operation-log/docs/hybrid-manifest-architecture.md
@@ -0,0 +1,612 @@
+# Hybrid Manifest & Snapshot Architecture for File-Based Sync
+
+**Status:** Proposal / Planned
+**Context:** Optimizing WebDAV/Dropbox sync for the Operation Log architecture.
+**Related:** [Operation Log Architecture](./operation-log-architecture.md)
+
+---
+
+## 1. The Problem
+
+The current `OperationLogSyncService` fallback for file-based providers (WebDAV, Dropbox) is inefficient for frequent, small updates.
+
+**Current Workflow (Naive Fallback):**
+
+1. **Write Operation File:** Upload `ops/ops_CLIENT_TIMESTAMP.json`.
+2. **Read Manifest:** Download `ops/manifest.json` to get current list.
+3. **Update Manifest:** Upload new `ops/manifest.json` with the new filename added.
+
+**Issues:**
+
+- **High Request Count:** Minimum 3 HTTP requests per sync cycle.
+- **File Proliferation:** Rapidly creates thousands of small files, degrading WebDAV directory listing performance.
+- **Latency:** On slow connections (standard WebDAV), this makes sync feel sluggish.
+
+---
+
+## 2. Proposed Solution: Hybrid Manifest
+
+Instead of treating the manifest solely as an _index_ of files, we treat it as a **buffer** for recent operations.
+
+### 2.1. Concept
+
+- **Embedded Operations:** Small batches of operations are stored directly inside `manifest.json`.
+- **Lazy Flush:** New operation files (`ops_*.json`) are only created when the manifest buffer fills up.
+- **Snapshots:** A "base state" file allows us to delete old operation files and clear the manifest history.
+
+### 2.2. Data Structures
+
+**Updated Manifest:**
+
+```typescript
+interface HybridManifest {
+ version: 2;
+
+ // The baseline state (snapshot). If present, clients load this first.
+ lastSnapshot?: SnapshotReference;
+
+ // Ops stored directly in the manifest (The Buffer)
+ // Limit: ~50 ops or 100KB payload size
+ embeddedOperations: EmbeddedOperation[];
+
+ // References to external operation files (The Overflow)
+ // Older ops that were flushed out of the buffer
+ operationFiles: OperationFileReference[];
+
+ // Merged vector clock from all embedded operations
+ // Used for quick conflict detection without parsing all ops
+ frontierClock: VectorClock;
+
+ // Last modification timestamp (for ETag-like cache invalidation)
+ lastModified: number;
+}
+
+interface SnapshotReference {
+ fileName: string; // e.g. "snapshots/snap_1701234567890.json"
+ schemaVersion: number; // Schema version of the snapshot
+ vectorClock: VectorClock; // Clock state at snapshot time
+ timestamp: number; // When snapshot was created
+}
+
+interface OperationFileReference {
+ fileName: string; // e.g. "ops/overflow_1701234567890.json"
+ opCount: number; // Number of operations in file (for progress estimation)
+ minSeq: number; // First operation's logical sequence in this file
+ maxSeq: number; // Last operation's logical sequence
+}
+
+// Embedded operations mirror the full Operation shape; they omit only the
+// local-only OperationLogEntry metadata (seq, appliedAt, source, syncedAt)
+interface EmbeddedOperation {
+ id: string;
+ actionType: string;
+ opType: OpType;
+ entityType: EntityType;
+ entityId?: string;
+ entityIds?: string[];
+ payload: unknown;
+ clientId: string;
+ vectorClock: VectorClock;
+ timestamp: number;
+ schemaVersion: number;
+}
+```
+
+**Snapshot File Format:**
+
+```typescript
+interface SnapshotFile {
+ version: 1;
+ schemaVersion: number; // App schema version
+ vectorClock: VectorClock; // Merged clock at snapshot time
+ timestamp: number;
+ data: AppDataComplete; // Full application state
+ checksum?: string; // Optional SHA-256 for integrity verification
+}
+```
+
+---
+
+## 3. Workflows
+
+### 3.1. Upload (Write Path)
+
+When a client has local pending operations to sync:
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Upload Flow │
+└─────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 1. Download manifest.json │
+ └───────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 2. Detect remote changes │
+ │ (compare frontierClock) │
+ └───────────────────────────────┘
+ │
+ ┌───────────────┴───────────────┐
+ ▼ ▼
+ Remote has new ops? No remote changes
+ │ │
+ ▼ │
+ Download & apply first ◄───────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 3. Check buffer capacity │
+ │ embedded.length + pending │
+ └───────────────────────────────┘
+ │
+ ┌───────────────┴───────────────┐
+ ▼ ▼
+ < BUFFER_LIMIT (50) >= BUFFER_LIMIT
+ │ │
+ ▼ ▼
+ Append to embedded Flush embedded to file
+ │ + add pending to empty buffer
+ │ │
+ └───────────────┬───────────────┘
+ ▼
+ ┌───────────────────────────────┐
+ │ 4. Check snapshot trigger │
+ │ (operationFiles > 50 OR │
+ │ total ops > 5000) │
+ └───────────────────────────────┘
+ │
+ ┌───────────────┴───────────────┐
+ ▼ ▼
+ Trigger snapshot No snapshot needed
+ │ │
+ └───────────────┬───────────────┘
+ ▼
+ ┌───────────────────────────────┐
+ │ 5. Upload manifest.json │
+ └───────────────────────────────┘
+```
+
+**Detailed Steps:**
+
+1. **Download Manifest:** Fetch `manifest.json` (or create empty v2 manifest if not found).
+2. **Detect Remote Changes:**
+ - Compare `manifest.frontierClock` with local `lastSyncedClock`.
+ - If remote has unseen changes → download and apply before uploading (prevents lost updates).
+3. **Evaluate Buffer:**
+ - `BUFFER_LIMIT = 50` operations (configurable)
+ - `BUFFER_SIZE_LIMIT = 100KB` payload size (prevents manifest bloat)
+4. **Strategy Selection** (see the sketch after this list):
+ - **Scenario A (Append):** If `embedded.length + pending.length < BUFFER_LIMIT`:
+ - Append `pendingOps` to `manifest.embeddedOperations`.
+ - Update `manifest.frontierClock` with merged clocks.
+ - **Result:** 1 Write (manifest). Fast path.
+ - **Scenario B (Overflow):** If buffer would exceed limit:
+ - Upload `manifest.embeddedOperations` to new file `ops/overflow_TIMESTAMP.json`.
+ - Add file reference to `manifest.operationFiles`.
+ - Place `pendingOps` into now-empty `manifest.embeddedOperations`.
+ - **Result:** 1 Upload (overflow file) + 1 Write (manifest).
+5. **Upload Manifest:** Write updated `manifest.json`.
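+
+The step 4 decision can be expressed compactly. A minimal sketch, assuming the types above plus hypothetical `uploadOverflowFile()`, `uploadManifest()`, and `mergeVectorClocks()` helpers (none of these exist yet; names are illustrative):
+
+```typescript
+// Hypothetical helpers, assumed for illustration only:
+declare function uploadOverflowFile(ops: EmbeddedOperation[]): Promise<OperationFileReference>;
+declare function uploadManifest(manifest: HybridManifest): Promise<void>;
+declare function mergeVectorClocks(a: VectorClock, b: VectorClock): VectorClock;
+
+const BUFFER_LIMIT = 50; // see section 9
+
+async function addPendingOps(
+  manifest: HybridManifest,
+  pendingOps: EmbeddedOperation[],
+): Promise<void> {
+  if (manifest.embeddedOperations.length + pendingOps.length < BUFFER_LIMIT) {
+    // Scenario A (fast path): buffer has room, single manifest write
+    manifest.embeddedOperations.push(...pendingOps);
+  } else {
+    // Scenario B (overflow): flush the current buffer to an external file,
+    // then start a fresh buffer containing only the pending ops
+    const fileRef = await uploadOverflowFile(manifest.embeddedOperations);
+    manifest.operationFiles.push(fileRef);
+    manifest.embeddedOperations = [...pendingOps];
+  }
+  for (const op of pendingOps) {
+    manifest.frontierClock = mergeVectorClocks(manifest.frontierClock, op.vectorClock);
+  }
+  manifest.lastModified = Date.now();
+  await uploadManifest(manifest);
+}
+```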
+
+### 3.2. Download (Read Path)
+
+When a client checks for updates:
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Download Flow │
+└─────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 1. Download manifest.json │
+ └───────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 2. Quick-check: any changes? │
+ │ Compare frontierClock │
+ └───────────────────────────────┘
+ │
+ ┌───────────────┴───────────────┐
+ ▼ ▼
+ No changes (clocks equal) Changes detected
+ │ │
+ ▼ ▼
+ Done ┌────────────────────────┐
+ │ 3. Need snapshot? │
+ │ (local behind snapshot)│
+ └────────────────────────┘
+ │
+ ┌───────────────┴───────────────┐
+ ▼ ▼
+ Download snapshot Skip to ops
+ + apply as base │
+ │ │
+ └───────────────┬───────────────┘
+ ▼
+ ┌────────────────────────┐
+ │ 4. Download new op │
+ │ files (filter seen) │
+ └────────────────────────┘
+ │
+ ▼
+ ┌────────────────────────┐
+ │ 5. Apply embedded ops │
+ │ (filter by op.id) │
+ └────────────────────────┘
+ │
+ ▼
+ ┌────────────────────────┐
+ │ 6. Update local │
+ │ lastSyncedClock │
+ └────────────────────────┘
+```
+
+**Detailed Steps:**
+
+1. **Download Manifest:** Fetch `manifest.json`.
+2. **Quick-Check Changes:**
+ - Compare `manifest.frontierClock` against local `lastSyncedClock`.
+ - If clocks are equal → no changes, done.
+3. **Check Snapshot Needed:**
+ - If local state is older than `manifest.lastSnapshot.vectorClock` → download snapshot first.
+ - Apply snapshot as base state (replaces local state).
+4. **Download Operation Files** (see the sketch after this list):
+ - Filter `manifest.operationFiles` to only files with `maxSeq > localLastAppliedSeq`.
+ - Download and parse each file.
+ - Collect all operations.
+5. **Apply Embedded Operations:**
+ - Filter `manifest.embeddedOperations` by `op.id` (skip already-applied).
+ - Add to collected operations.
+6. **Apply All Operations:**
+ - Sort by `vectorClock` (causal order).
+ - Detect conflicts using existing `detectConflicts()` logic.
+ - Apply non-conflicting ops; present conflicts to user.
+7. **Update Tracking:**
+ - Set `localLastSyncedClock = manifest.frontierClock`.
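+
+A minimal sketch of the file/buffer filtering in steps 4-5, assuming hypothetical `downloadFile()` and `hasApplied()` helpers:
+
+```typescript
+// Hypothetical helpers, assumed for illustration only:
+declare function downloadFile(fileName: string): Promise<{ ops: EmbeddedOperation[] }>;
+declare function hasApplied(opId: string): boolean;
+
+async function collectNewOps(
+  manifest: HybridManifest,
+  localLastAppliedSeq: number,
+): Promise<EmbeddedOperation[]> {
+  const collected: EmbeddedOperation[] = [];
+
+  // Step 4: overflow files are skipped wholesale when fully seen
+  const unseenFiles = manifest.operationFiles.filter(
+    (f) => f.maxSeq > localLastAppliedSeq,
+  );
+  for (const fileRef of unseenFiles) {
+    const file = await downloadFile(fileRef.fileName);
+    collected.push(...file.ops);
+  }
+
+  // Step 5: embedded ops are filtered individually by id
+  collected.push(
+    ...manifest.embeddedOperations.filter((op) => !hasApplied(op.id)),
+  );
+  return collected;
+}
+```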
+
+---
+
+## 4. Snapshotting (Compaction)
+
+To prevent unbounded growth of operation files, any client can trigger a snapshot.
+
+### 4.1. Triggers
+
+| Condition | Threshold | Rationale |
+| ------------------------------- | --------- | -------------------------------------- |
+| External `operationFiles` count | > 50 | Prevent WebDAV directory bloat |
+| Total operations since snapshot | > 5000 | Bound replay time for fresh installs |
+| Time since last snapshot | > 7 days | Ensure periodic cleanup |
+| Manifest size | > 500KB | Prevent manifest from becoming too big |
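+
+A sketch of the combined check, using the constants from section 9 (the ops-since-snapshot counter is a hypothetical input):
+
+```typescript
+declare const SNAPSHOT_FILE_THRESHOLD: number; // 50, see section 9
+declare const SNAPSHOT_OP_THRESHOLD: number; // 5000
+declare const SNAPSHOT_AGE_DAYS: number; // 7
+
+const shouldCreateSnapshot = (
+  manifest: HybridManifest,
+  totalOpsSinceSnapshot: number,
+): boolean => {
+  const ageMs = manifest.lastSnapshot
+    ? Date.now() - manifest.lastSnapshot.timestamp
+    : 0;
+  const manifestSizeKb = JSON.stringify(manifest).length / 1024;
+  return (
+    manifest.operationFiles.length > SNAPSHOT_FILE_THRESHOLD ||
+    totalOpsSinceSnapshot > SNAPSHOT_OP_THRESHOLD ||
+    ageMs > SNAPSHOT_AGE_DAYS * 24 * 60 * 60 * 1000 ||
+    manifestSizeKb > 500 // manifest size guard
+  );
+};
+```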
+
+### 4.2. Process
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Snapshot Flow │
+└─────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 1. Ensure full sync complete │
+ │ (no pending local/remote) │
+ └───────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 2. Read current state from │
+ │ NgRx (authoritative) │
+ └───────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 3. Generate snapshot file │
+ │ + compute checksum │
+ └───────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 4. Upload snapshot file │
+ │ (atomic, verify success) │
+ └───────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 5. Update manifest │
+ │ - Set lastSnapshot │
+ │ - Clear operationFiles │
+ │ - Clear embeddedOperations │
+ │ - Reset frontierClock │
+ └───────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 6. Upload manifest │
+ └───────────────────────────────┘
+ │
+ ▼
+ ┌───────────────────────────────┐
+ │ 7. Cleanup (async, best- │
+ │ effort): delete old files │
+ └───────────────────────────────┘
+```
+
+### 4.3. Snapshot Atomicity
+
+**Problem:** If the client crashes between uploading snapshot and updating manifest, other clients won't see the new snapshot.
+
+**Solution:** Snapshot files are immutable and safe to leave orphaned. The manifest is the source of truth. Cleanup is best-effort.
+
+**Invariant:** Never delete the current `lastSnapshot` file until a new snapshot is confirmed.
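+
+A sketch of a cleanup pass that honors this invariant (the provider helpers are hypothetical):
+
+```typescript
+// Hypothetical provider helpers, assumed for illustration only:
+declare function listRemoteFiles(): Promise<string[]>;
+declare function deleteRemoteFile(fileName: string): Promise<void>;
+
+async function cleanupOrphanedFiles(manifest: HybridManifest): Promise<void> {
+  const keep = new Set<string>(manifest.operationFiles.map((f) => f.fileName));
+  keep.add('manifest.json');
+  if (manifest.lastSnapshot) {
+    keep.add(manifest.lastSnapshot.fileName); // never delete the live snapshot
+  }
+  for (const file of await listRemoteFiles()) {
+    if (!keep.has(file)) {
+      try {
+        await deleteRemoteFile(file);
+      } catch {
+        // best-effort: orphans are harmless and can be retried later
+      }
+    }
+  }
+}
+```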
+
+---
+
+## 5. Conflict Handling
+
+The hybrid manifest doesn't change conflict detection - it still uses vector clocks. However, the `frontierClock` in the manifest enables **early conflict detection**.
+
+### 5.1. Early Conflict Detection
+
+Before downloading all operations, compare clocks:
+
+```typescript
+const comparison = compareVectorClocks(localFrontierClock, manifest.frontierClock);
+
+switch (comparison) {
+ case VectorClockComparison.LESS_THAN:
+ // Remote is ahead - safe to download
+ break;
+ case VectorClockComparison.GREATER_THAN:
+ // Local is ahead - upload our changes
+ break;
+ case VectorClockComparison.CONCURRENT:
+ // Potential conflicts - download ops for detailed analysis
+ break;
+ case VectorClockComparison.EQUAL:
+ // No changes - skip download
+ break;
+}
+```
+
+### 5.2. Conflict Resolution
+
+When conflicts are detected at the operation level, the existing `ConflictResolutionService` handles them. The hybrid manifest doesn't change this flow.
+
+---
+
+## 6. Edge Cases & Failure Modes
+
+### 6.1. Concurrent Uploads (Race Condition)
+
+**Scenario:** Two clients download the manifest simultaneously, both append ops, both upload.
+
+**Problem:** Second upload overwrites first client's operations.
+
+**Solution:** Use provider-specific mechanisms:
+
+| Provider | Mechanism |
+| ----------- | ------------------------------------------- |
+| **Dropbox** | Use `update` mode with `rev` parameter |
+| **WebDAV** | Use `If-Match` header with ETag |
+| **Local** | File locking (already implemented in PFAPI) |
+
+**Implementation:**
+
+```typescript
+interface HybridManifest {
+ // ... existing fields
+
+ // Optimistic concurrency control
+ etag?: string; // Server-assigned revision (Dropbox rev, WebDAV ETag)
+}
+
+async uploadManifest(manifest: HybridManifest, expectedEtag?: string): Promise<void> {
+ // If expectedEtag provided, use conditional upload
+ // On conflict (412 Precondition Failed), re-download and retry
+}
+```
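+
+One possible shape for the retry loop, assuming hypothetical `conditionalUpload()`, `downloadManifest()`, and `mergeManifests()` helpers:
+
+```typescript
+// Hypothetical helpers, assumed for illustration only:
+declare function conditionalUpload(
+  manifest: HybridManifest,
+  expectedEtag?: string,
+): Promise<string>; // resolves to the new etag; rejects on 412
+declare function downloadManifest(): Promise<HybridManifest>;
+declare function mergeManifests(
+  local: HybridManifest,
+  remote: HybridManifest,
+): HybridManifest;
+
+async function uploadManifestWithRetry(
+  manifest: HybridManifest,
+  maxRetries = 3,
+): Promise<void> {
+  let current = manifest;
+  for (let attempt = 0; attempt <= maxRetries; attempt++) {
+    try {
+      current.etag = await conditionalUpload(current, current.etag);
+      return;
+    } catch {
+      // 412 Precondition Failed: another client wrote first - rebase and retry
+      const remote = await downloadManifest();
+      current = mergeManifests(current, remote);
+    }
+  }
+  throw new Error('Manifest upload failed after retries');
+}
+```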
+
+### 6.2. Manifest Corruption
+
+**Scenario:** Manifest JSON is invalid (partial write, encoding issue).
+
+**Recovery Strategy:**
+
+1. Attempt to parse manifest.
+2. On parse failure, check for backup manifest (`manifest.json.bak`).
+3. If no backup, reconstruct from operation files using `listFiles()`.
+4. If reconstruction fails, fall back to snapshot-only state.
+
+```typescript
+async loadManifestWithRecovery(): Promise<HybridManifest> {
+ try {
+ return await this._loadRemoteManifest();
+ } catch (parseError) {
+ PFLog.warn('Manifest corrupted, attempting recovery...');
+
+ // Try backup
+ try {
+ return await this._loadBackupManifest();
+ } catch {
+ // Reconstruct from files
+ return await this._reconstructManifestFromFiles();
+ }
+ }
+}
+```
+
+### 6.3. Snapshot File Missing
+
+**Scenario:** Manifest references a snapshot that doesn't exist on the server.
+
+**Recovery Strategy:**
+
+1. Log error and notify user.
+2. Fall back to replaying all available operation files.
+3. If referenced operation files are also missing, show a data loss warning.
+
+### 6.4. Schema Version Mismatch
+
+**Scenario:** Snapshot was created with schema version 3, but local app is version 2.
+
+**Handling** (sketched below):
+
+- If `snapshot.schemaVersion > CURRENT_SCHEMA_VERSION + MAX_VERSION_SKIP`:
+ - Reject snapshot, prompt user to update app.
+- If `snapshot.schemaVersion > CURRENT_SCHEMA_VERSION`:
+ - Load with warning (some fields may be stripped by Typia).
+- If `snapshot.schemaVersion < CURRENT_SCHEMA_VERSION`:
+ - Run migrations on loaded state.
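+
+A sketch of this version gate (`MAX_VERSION_SKIP` is a hypothetical constant):
+
+```typescript
+declare const CURRENT_SCHEMA_VERSION: number;
+declare const MAX_VERSION_SKIP: number; // hypothetical tolerance for newer snapshots
+
+const classifySnapshotVersion = (
+  snapshot: SnapshotFile,
+): 'reject' | 'loadWithWarning' | 'migrate' | 'load' => {
+  if (snapshot.schemaVersion > CURRENT_SCHEMA_VERSION + MAX_VERSION_SKIP) {
+    return 'reject'; // too new: prompt the user to update the app
+  }
+  if (snapshot.schemaVersion > CURRENT_SCHEMA_VERSION) {
+    return 'loadWithWarning'; // unknown fields may be stripped by Typia
+  }
+  if (snapshot.schemaVersion < CURRENT_SCHEMA_VERSION) {
+    return 'migrate'; // run state migrations after loading
+  }
+  return 'load';
+};
+```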
+
+### 6.5. Large Pending Operations
+
+**Scenario:** User was offline for a week, has 500 pending operations.
+
+**Handling:**
+
+- Don't try to embed all 500 in manifest.
+- Batch into multiple overflow files (100 ops each).
+- Upload files first, then update manifest once.
+
+```typescript
+const BATCH_SIZE = 100;
+const chunks = chunkArray(pendingOps, BATCH_SIZE);
+
+for (const chunk of chunks) {
+  // Assumes the upload helper returns the new file's reference so the
+  // manifest can point at it ("Add file reference" step in section 3.1)
+  const fileRef = await this._uploadOverflowFile(chunk);
+  manifest.operationFiles.push(fileRef);
+}
+
+// Single manifest update at the end
+await this._uploadManifest(manifest);
+```
+
+---
+
+## 7. Advantages Summary
+
+| Metric | Current (v1) | Hybrid Manifest (v2) |
+| :---------------------- | :----------------------------------- | :---------------------------------------------------- |
+| **Requests per Sync** | 3 (Upload Op + Read Man + Write Man) | **1-2** (Read Man, optional Write) |
+| **Files on Server** | Unbounded growth | **Bounded** (1 Manifest + 0-50 Op Files + 1 Snapshot) |
+| **Fresh Install Speed** | O(n) - replay all ops | **O(1)** - load snapshot + small delta |
+| **Conflict Detection** | Must parse all ops | **Quick check** via frontierClock |
+| **Bandwidth per Sync** | ~2KB (op file) + manifest overhead | **~1KB** (manifest only for small changes) |
+| **Offline Resilience** | Good | **Same** (operations buffered locally) |
+
+---
+
+## 8. Implementation Plan
+
+### Phase 1: Core Infrastructure
+
+1. **Update Types** (`operation.types.ts`):
+
+ - Add `HybridManifest`, `SnapshotReference`, `OperationFileReference` interfaces.
+ - Keep backward compatibility with existing `OperationLogManifest`.
+
+2. **Manifest Handling** (`operation-log-sync.service.ts`):
+
+ - Update `_loadRemoteManifest()` to detect version and parse accordingly.
+ - Add `_migrateV1ToV2Manifest()` for automatic upgrade.
+ - Implement buffer/overflow logic in `_uploadPendingOpsViaFiles()`.
+
+3. **Add FrontierClock Tracking** (see the sketch below):
+ - Merge vector clocks when adding embedded operations.
+ - Store `lastSyncedFrontierClock` locally for quick-check.
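+
+The merge itself is the standard component-wise maximum. A minimal sketch (the real helper may already live elsewhere in the codebase):
+
+```typescript
+type VectorClock = Record<string, number>;
+
+const mergeVectorClocks = (a: VectorClock, b: VectorClock): VectorClock => {
+  const merged: VectorClock = { ...a };
+  for (const [clientId, counter] of Object.entries(b)) {
+    merged[clientId] = Math.max(merged[clientId] ?? 0, counter);
+  }
+  return merged;
+};
+```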
+
+### Phase 2: Snapshot Support
+
+4. **Create `HybridSnapshotService`**:
+
+ - `generateSnapshot()`: Serialize current state + compute checksum.
+ - `uploadSnapshot()`: Upload with retry logic.
+ - `loadSnapshot()`: Download + validate + apply.
+
+5. **Integrate Snapshot Triggers**:
+ - Check conditions after each upload.
+ - Add manual "Force Snapshot" option in settings for debugging.
+
+### Phase 3: Robustness
+
+6. **Optimistic Concurrency**:
+
+ - Implement ETag/rev-based conditional uploads.
+ - Add retry-on-conflict logic.
+
+7. **Recovery Logic**:
+ - Manifest corruption recovery.
+ - Missing file handling.
+ - Schema migration for snapshots.
+
+### Phase 4: Testing & Migration
+
+8. **Add Tests**:
+
+ - Unit tests for buffer overflow logic.
+ - Integration tests for multi-client scenarios.
+ - Stress tests for large operation counts.
+
+9. **Migration Path**:
+ - v1 clients continue to work (read v2 manifest, ignore new fields).
+ - v2 clients auto-upgrade v1 manifests on first write.
+
+---
+
+## 9. Configuration Constants
+
+```typescript
+// Buffer limits
+const EMBEDDED_OP_LIMIT = 50; // Max operations in manifest buffer
+const EMBEDDED_SIZE_LIMIT_KB = 100; // Max payload size in KB
+
+// Snapshot triggers
+const SNAPSHOT_FILE_THRESHOLD = 50; // Trigger when operationFiles exceeds this
+const SNAPSHOT_OP_THRESHOLD = 5000; // Trigger when total ops exceed this
+const SNAPSHOT_AGE_DAYS = 7; // Trigger if no snapshot in N days
+
+// Batching
+const UPLOAD_BATCH_SIZE = 100; // Ops per overflow file
+
+// Retry
+const MAX_UPLOAD_RETRIES = 3;
+const RETRY_DELAY_MS = 1000;
+```
+
+---
+
+## 10. Open Questions
+
+1. **Encryption:** Should snapshots be encrypted differently than operation files? (Same encryption is simpler)
+
+2. **Compression:** Should we gzip the snapshot file? (Trade-off: smaller size vs. no partial reads)
+
+3. **Checksum Verification:** Is SHA-256 overkill for snapshot integrity? (Consider CRC32 for speed)
+
+4. **Clock Drift:** How to handle clients with significantly wrong system clocks? (Vector clocks help, but timestamps in snapshot could confuse users)
+
+---
+
+## 11. File Reference
+
+```
+Remote Storage Layout (v2):
+├── manifest.json # HybridManifest (buffer + references)
+├── ops/
+│ ├── overflow_170123.json # Flushed operations (batches of 100)
+│ └── overflow_170456.json
+└── snapshots/
+ └── snap_170789.json # Full state snapshot
+```
+
+```
+Code Files:
+src/app/core/persistence/operation-log/
+├── operation.types.ts # Add HybridManifest types
+├── sync/
+│ └── operation-log-sync.service.ts # Buffer/overflow logic
+├── hybrid-snapshot.service.ts # NEW: Snapshot generation/loading
+└── manifest-recovery.service.ts # NEW: Corruption recovery
+```
diff --git a/src/app/core/persistence/operation-log/docs/operation-log-architecture-diagrams.md b/src/app/core/persistence/operation-log/docs/operation-log-architecture-diagrams.md
new file mode 100644
index 000000000..2137a0d65
--- /dev/null
+++ b/src/app/core/persistence/operation-log/docs/operation-log-architecture-diagrams.md
@@ -0,0 +1,513 @@
+# Operation Log: Architecture Diagrams
+
+## 1. Operation Log Architecture (Local Persistence & Legacy Bridge)
+
+This diagram illustrates how user actions flow through the system, how they are persisted to IndexedDB (`SUP_OPS`), how the system hydrates on startup, and how it bridges to the legacy PFAPI system.
+
+```mermaid
+graph TD
+ %% Styles
+ classDef storage fill:#f9f,stroke:#333,stroke-width:2px,color:black;
+ classDef process fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:black;
+ classDef legacy fill:#fff3e0,stroke:#ef6c00,stroke-width:2px,stroke-dasharray: 5 5,color:black;
+ classDef trigger fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px,color:black;
+
+ User((User / UI)) -->|Dispatch Action| NgRx["NgRx Store<br/>Runtime Source of Truth<br/>*.effects.ts / *.reducer.ts"]
+
+ subgraph "Write Path (Runtime)"
+ NgRx -->|Action Stream| OpEffects["OperationLogEffects<br/>operation-log.effects.ts"]
+
+ OpEffects -->|1. Check isPersistent| Filter{"Is Persistent?<br/>persistent-action.interface.ts"}
+ Filter -- No --> Ignore[Ignore / UI Only]
+ Filter -- Yes --> Transform["Transform to Operation<br/>UUIDv7, Timestamp, VectorClock<br/>operation-converter.util.ts"]
+
+ Transform -->|2. Validate| PayloadValid{"Payload<br/>Valid?<br/>processing/validate-operation-payload.ts"}
+ PayloadValid -- No --> ErrorSnack[Show Error Snackbar]
+ PayloadValid -- Yes --> DBWrite
+ end
+
+ subgraph "Persistence Layer (IndexedDB)"
+ DBWrite["Write to SUP_OPS<br/>store/operation-log-store.service.ts"]:::storage
+
+ DBWrite -->|Append| OpsTable["Table: ops<br/>The Event Log<br/>IndexedDB"]:::storage
+ DBWrite -->|Update| StateCache["Table: state_cache<br/>Snapshots<br/>IndexedDB"]:::storage
+ end
+
+ subgraph "Legacy Bridge (PFAPI)"
+ DBWrite -.->|3. Bridge| LegacyMeta["META_MODEL<br/>Vector Clock<br/>pfapi.service.ts"]:::legacy
+ LegacyMeta -.->|Update| LegacySync["Legacy Sync Adapters<br/>WebDAV / Dropbox / Local<br/>pfapi.service.ts"]:::legacy
+ noteLegacy[Updates Vector Clock so<br/>Legacy Sync detects changes]:::legacy
+ end
+
+ subgraph "Compaction System"
+ OpsTable -->|Count > 500| CompactionTrig{"Compaction<br/>Trigger<br/>operation-log.effects.ts"}:::trigger
+ CompactionTrig -->|Yes| Compactor["CompactionService<br/>store/operation-log-compaction.service.ts"]:::process
+ Compactor -->|Read State| NgRx
+ Compactor -->|Save Snapshot| StateCache
+ Compactor -->|Delete Old Ops| OpsTable
+ end
+
+ subgraph "Read Path (Hydration)"
+ Startup((App Startup)) --> Hydrator["OperationLogHydrator<br/>store/operation-log-hydrator.service.ts"]:::process
+ Hydrator -->|1. Load| StateCache
+
+ StateCache -->|Check| Schema{"Schema<br/>Version?<br/>store/schema-migration.service.ts"}
+ Schema -- Old --> Migrator["SchemaMigrationService<br/>store/schema-migration.service.ts"]:::process
+ Migrator -->|Transform State| MigratedState
+ Schema -- Current --> CurrentState
+
+ CurrentState -->|Load State| StoreInit[Init NgRx State]
+ MigratedState -->|Load State| StoreInit
+
+ Hydrator -->|2. Load Tail| OpsTable
+ OpsTable -->|Replay Ops| Replayer["OperationApplier<br/>processing/operation-applier.service.ts"]:::process
+ Replayer -->|Dispatch| NgRx
+ end
+
+ subgraph "Multi-Tab"
+ DBWrite -->|4. Broadcast| BC["BroadcastChannel<br/>sync/multi-tab-coordinator.service.ts"]
+ BC -->|Notify| OtherTabs((Other Tabs))
+ end
+
+ class OpsTable,StateCache storage;
+ class LegacyMeta,LegacySync,noteLegacy legacy;
+```
+
+## 2. Operation Log Sync Architecture (Server Sync)
+
+This diagram details the flow for syncing individual operations with a server (`Part C`), including conflict detection, resolution strategies, and the validation loop (`Part D`).
+
+```mermaid
+graph TD
+ %% Styles
+ classDef remote fill:#e3f2fd,stroke:#1565c0,stroke-width:2px,color:black;
+ classDef local fill:#fff,stroke:#333,stroke-width:2px,color:black;
+ classDef conflict fill:#ffebee,stroke:#c62828,stroke-width:2px,color:black;
+ classDef repair fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:black;
+
+ subgraph "Remote Server"
+ ServerDB[(Server Database)]:::remote
+ API[Sync API Endpoint]:::remote
+ end
+
+ subgraph "Client: Sync Loop"
+ Scheduler((Scheduler)) -->|Interval| SyncService["OperationLogSyncService<br/>sync/operation-log-sync.service.ts"]
+
+ SyncService -->|1. Get Last Seq| LocalMeta["Sync Metadata<br/>store/operation-log-store.service.ts"]
+
+ %% Download Flow
+ SyncService -->|2. Download Ops| API
+ API -->|Return Ops > Seq| DownOps[Downloaded Operations]
+
+ DownOps --> FilterApplied{Already<br/>Applied?}
+ FilterApplied -- Yes --> Discard[Discard]
+ FilterApplied -- No --> ConflictDet
+ end
+
+ subgraph "Client: Conflict Management"
+ ConflictDet{"Conflict<br/>Detection<br/>sync/conflict-resolution.service.ts"}:::conflict
+
+ ConflictDet -->|Check Vector Clocks| VCCheck[Entity-Level Check]
+
+ VCCheck -- Concurrent --> ConflictFound[Conflict Found!]:::conflict
+ VCCheck -- Sequential --> NoConflict[No Conflict]
+
+ ConflictFound --> UserDialog["User Resolution Dialog<br/>dialog-conflict-resolution.component.ts"]:::conflict
+
+ UserDialog -- "Keep Remote" --> MarkRejected[Mark Local Ops<br/>as Rejected]:::conflict
+ MarkRejected --> ApplyRemote[Apply Remote Ops]
+
+ UserDialog -- "Keep Local" --> IgnoreRemote[Ignore Remote Ops]
+
+ NoConflict --> ApplyRemote
+ end
+
+ subgraph "Client: Application & Validation"
+ ApplyRemote -->|Apply to Store| Store[NgRx Store]
+
+ Store -->|Post-Apply| Validator{"Validate<br/>State?<br/>processing/validate-state.service.ts"}:::repair
+
+ Validator -- Valid --> Done((Sync Done))
+ Validator -- Invalid --> Repair["Auto-Repair Service<br/>processing/repair-operation.service.ts"]:::repair
+
+ Repair -->|Fix Data| RepairedState
+ Repair -->|Create Op| RepairOp[Create REPAIR Op]:::repair
+ RepairOp -->|Append| OpLog[(SUP_OPS)]
+ RepairedState -->|Update| Store
+ end
+
+ subgraph "Client: Upload Flow"
+ OpLog -->|Get Unsynced| PendingOps[Pending Ops]
+ PendingOps -->|Filter| FilterRejected{Is<br/>Rejected?}
+
+ FilterRejected -- Yes --> Skip[Skip Upload]
+ FilterRejected -- No --> UploadBatch[Batch for Upload]
+
+ UploadBatch -->|3. Upload| API
+ API -->|Ack| ServerAck[Server Acknowledgement]
+ ServerAck -->|Update| MarkSynced[Mark Ops Synced]
+ MarkSynced --> OpLog
+ end
+
+ API <--> ServerDB
+```
+
+## 3. Conflict-Aware Migration Strategy (The Migration Shield)
+
+> **Note:** Sections 3, 4.1, and 4.2 describe **planned architecture** that is not yet implemented. Currently, only state cache snapshots are migrated via `SchemaMigrationService.migrateIfNeeded()`. Individual operation migration (`migrateOperation()`) is not implemented—tail ops are replayed directly without per-operation migration.
+
+This diagram visualizes the "Receiver-Side Migration" strategy. The Migration Layer acts as a shield, ensuring that _only_ operations matching the current schema version ever reach the core conflict detection and application logic.
+
+```mermaid
+graph TD
+ %% Nodes
+ subgraph "Sources of Operations (Mixed Versions)"
+ Remote[Remote Client Sync]:::src
+ Disk[Local Disk Tail Ops]:::src
+ end
+
+ subgraph "Migration Layer (The Shield)"
+ Check{"Is Op Old?<br/>(vOp < vCurrent)"}:::logic
+ Migrate["Run migrateOperation()<br/>Pipeline"]:::action
+ CheckDrop{"Result is<br/>Null?"}:::logic
+ Pass["Pass Through"]:::pass
+ end
+
+ subgraph "Core System (Current Version Only)"
+ Conflict["Conflict Detection<br/>(Apples-to-Apples)"]:::core
+ Apply["Apply to State"]:::core
+ end
+
+ %% Flow
+ Remote --> Check
+ Disk --> Check
+
+ Check -- Yes --> Migrate
+ Check -- No --> Pass
+
+ Migrate --> CheckDrop
+ CheckDrop -- Yes --> Drop[("🗑️ Drop Op<br/>(Destructive Change)")]:::drop
+ CheckDrop -- No --> Conflict
+
+ Pass --> Conflict
+ Conflict --> Apply
+
+ %% Styles
+ classDef src fill:#fff3e0,stroke:#ef6c00,stroke-width:2px;
+ classDef logic fill:#fff,stroke:#333,stroke-width:2px;
+ classDef action fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px;
+ classDef pass fill:#e3f2fd,stroke:#1565c0,stroke-width:2px;
+ classDef drop fill:#ffebee,stroke:#c62828,stroke-width:2px,stroke-dasharray: 5 5;
+ classDef core fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px;
+```
+
+## 4. Migration Scenarios
+
+### 4.1 Tail Ops Migration (Local Startup Consistency)
+
+Ensures that operations occurring after a snapshot ("Tail Ops") are migrated to the current version before being applied to the migrated state.
+
+```mermaid
+sequenceDiagram
+ participant IDB as IndexedDB (SUP_OPS)
+ participant Hydrator as OpLogHydrator
+ participant Migrator as SchemaMigrationService
+ participant Applier as OperationApplier
+ participant Store as NgRx Store
+
+ Note over IDB, Store: App Updated from V1 -> V2
+
+ Hydrator->>IDB: Load Snapshot (Version 1)
+ IDB-->>Hydrator: Returns Snapshot V1
+
+ Hydrator->>Migrator: migrateIfNeeded(Snapshot V1)
+ Migrator-->>Hydrator: Returns Migrated Snapshot (Version 2)
+
+ Hydrator->>Store: Load Initial State (V2)
+
+ Hydrator->>IDB: Load Tail Ops (Version 1)
+ Note right of IDB: Ops created after snapshot<br/>but before update
+ IDB-->>Hydrator: Returns Ops [OpA(v1), OpB(v1)]
+
+ loop For Each Op
+ Hydrator->>Migrator: migrateOperation(Op V1)
+ Migrator-->>Hydrator: Returns Op V2 (or null)
+
+ alt Op was Dropped (null)
+ Hydrator->>Hydrator: Ignore
+ else Op Migrated
+ Hydrator->>Applier: Apply(Op V2)
+ Applier->>Store: Dispatch Action (V2 Payload)
+ end
+ end
+
+ Note over Store: State matches V2 Schema<br/>Consistency Preserved
+```
+
+### 4.2 Receiver-Side Sync Migration
+
+Demonstrates how a client on V2 handles incoming data from a client still on V1.
+
+```mermaid
+sequenceDiagram
+ participant Remote as Remote Client (V1)
+ participant Server as Sync Server
+ participant Local as Local Client (V2)
+ participant Conflict as Conflict Detector
+
+ Remote->>Server: Upload Operation (Version 1)<br/>{ payload: { oldField: 'X' } }
+ Server-->>Local: Download Operation (Version 1)
+
+ Note over Local: Client V2 receives V1 data
+
+ Local->>Local: Check Op Schema Version (v1 < v2)
+ Local->>Local: Call SchemaMigrationService.migrateOperation()
+
+ Note over Local: Transforms payload:<br/>{ oldField: 'X' } -> { newField: 'X' }
+
+ Local->>Conflict: detectConflicts(Remote Op V2)
+
+ alt Conflict Detected
+ Conflict->>Local: Show Dialog (V2 vs V2 comparison)
+ else No Conflict
+ Local->>Local: Apply Operation (V2)
+ end
+```
+
+## 5. Hybrid Manifest (File-Based Sync)
+
+This diagram illustrates the "Hybrid Manifest" optimization (`hybrid-manifest-architecture.md`) which reduces HTTP request overhead for WebDAV/Dropbox sync by buffering small operations directly inside the manifest file.
+
+```mermaid
+graph TD
+ %% Nodes
+ subgraph "Hybrid Manifest File (JSON)"
+ ManVer[Version: 2]:::file
+ SnapRef[Last Snapshot: 'snap_123.json']:::file
+ Buffer[Embedded Ops Buffer<br/>Op1, Op2, ...]:::buffer
+ ExtFiles[External Files List<br/>ops_A.json, ...]:::file
+ end
+
+ subgraph "Sync Logic (Upload Path)"
+ Start((Start Sync)) --> ReadMan[Download Manifest]
+ ReadMan --> CheckSize{Buffer Full?<br/>more than 50 ops}
+
+ CheckSize -- No --> AppendBuffer[Append to<br/>Embedded Ops]:::action
+ AppendBuffer --> WriteMan[Upload Manifest]:::io
+
+ CheckSize -- Yes --> Flush[Flush Buffer]:::action
+ Flush --> CreateFile[Create 'ops_NEW.json'<br/>with old buffer content]:::io
+ CreateFile --> UpdateRef[Add 'ops_NEW.json'<br/>to External Files]:::action
+ UpdateRef --> ClearBuffer[Clear Buffer &<br/>Add Pending Ops]:::action
+ ClearBuffer --> WriteMan
+ end
+
+ %% Styles
+ classDef file fill:#fff3e0,stroke:#ef6c00,stroke-width:2px;
+ classDef buffer fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px;
+ classDef action fill:#e3f2fd,stroke:#1565c0,stroke-width:2px;
+ classDef io fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px;
+```
+
+## 6. Hybrid Manifest Conceptual Overview
+
+This diagram shows the Hybrid Manifest architecture: how operations flow from "hot" (recent, in manifest) to "cold" (archived files) to "frozen" (snapshot), and the decision logic for each transition.
+
+### 6.1 Data Lifecycle: Hot → Cold → Frozen
+
+```mermaid
+graph LR
+ subgraph "HOT: Manifest Buffer"
+ direction TB
+ Buffer["embeddedOperations[]<br/>━━━━━━━━━━━━━━━<br/>• Op 47<br/>• Op 48<br/>• Op 49<br/>━━━━━━━━━━━━━━━<br/>~50 ops max"]
+ end
+
+ subgraph "COLD: Operation Files"
+ direction TB
+ Files["operationFiles[]<br/>━━━━━━━━━━━━━━━<br/>• overflow_001.json<br/>• overflow_002.json<br/>• overflow_003.json<br/>━━━━━━━━━━━━━━━<br/>~50 files max"]
+ end
+
+ subgraph "FROZEN: Snapshot"
+ direction TB
+ Snap["lastSnapshot<br/>━━━━━━━━━━━━━━━<br/>snap_170789.json<br/>━━━━━━━━━━━━━━━<br/>Full app state"]
+ end
+
+ NewOp((New Op)) -->|"Always"| Buffer
+ Buffer -->|"When full<br/>(overflow)"| Files
+ Files -->|"When too many<br/>(compaction)"| Snap
+
+ style Buffer fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
+ style Files fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
+ style Snap fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
+ style NewOp fill:#fff,stroke:#333,stroke-width:2px
+```
+
+### 6.2 Manifest File Structure
+
+```mermaid
+graph TB
+ subgraph Manifest["manifest.json"]
+ direction TB
+
+ V["version: 2"]
+ FC["frontierClock: { A: 5, B: 3 }"]
+
+ subgraph SnapRef["lastSnapshot (optional)"]
+ SF["fileName: 'snap_170789.json'"]
+ SV["vectorClock: { A: 2, B: 1 }"]
+ end
+
+ subgraph EmbeddedOps["embeddedOperations[] — THE BUFFER"]
+ E1["Op { id: 'abc', entityType: 'TASK', ... }"]
+ E2["Op { id: 'def', entityType: 'PROJECT', ... }"]
+ E3["...up to 50 ops"]
+ end
+
+ subgraph OpFiles["operationFiles[] — OVERFLOW REFERENCES"]
+ F1["{ fileName: 'overflow_001.json', opCount: 100 }"]
+ F2["{ fileName: 'overflow_002.json', opCount: 100 }"]
+ end
+ end
+
+ style Manifest fill:#fff,stroke:#333,stroke-width:3px
+ style EmbeddedOps fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
+ style OpFiles fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
+ style SnapRef fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
+```
+
+### 6.3 Write Path: Buffer vs Overflow Decision
+
+```mermaid
+flowchart TD
+ Start([Client has pending ops]) --> Download[Download manifest.json]
+ Download --> CheckRemote{Remote has<br/>new ops?}
+
+ CheckRemote -->|Yes| ApplyFirst[Download & apply<br/>remote ops first]
+ ApplyFirst --> CheckBuffer
+ CheckRemote -->|No| CheckBuffer
+
+ CheckBuffer{Buffer + Pending<br/>< 50 ops?}
+
+ CheckBuffer -->|Yes| FastPath
+ CheckBuffer -->|No| SlowPath
+
+ subgraph FastPath["⚡ FAST PATH (1 request)"]
+ Append[Append pending to<br/>embeddedOperations]
+ Append --> Upload1[Upload manifest.json]
+ end
+
+ subgraph SlowPath["📦 OVERFLOW PATH (2 requests)"]
+ Flush[Upload embeddedOperations<br/>as overflow_XXX.json]
+ Flush --> AddRef[Add file to operationFiles]
+ AddRef --> Clear[Put pending ops in<br/>now-empty buffer]
+ Clear --> Upload2[Upload manifest.json]
+ end
+
+ Upload1 --> CheckSnap
+ Upload2 --> CheckSnap
+
+ CheckSnap{Files > 50 OR<br/>Ops > 5000?}
+ CheckSnap -->|Yes| Compact[Trigger Compaction]
+ CheckSnap -->|No| Done([Done])
+ Compact --> Done
+
+ style FastPath fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
+ style SlowPath fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
+ style Start fill:#fff,stroke:#333
+ style Done fill:#fff,stroke:#333
+```
+
+### 6.4 Read Path: Reconstructing State
+
+```mermaid
+flowchart TD
+ Start([Client checks for updates]) --> Download[Download manifest.json]
+
+ Download --> QuickCheck{frontierClock<br/>changed?}
+ QuickCheck -->|No| Done([No changes - done])
+
+ QuickCheck -->|Yes| NeedSnap{Local behind<br/>snapshot?}
+
+ NeedSnap -->|Yes| LoadSnap
+ NeedSnap -->|No| LoadFiles
+
+ subgraph LoadSnap["🧊 Load Snapshot (fresh install / behind)"]
+ DownSnap[Download snapshot file]
+ DownSnap --> ApplySnap[Apply as base state]
+ end
+
+ ApplySnap --> LoadFiles
+
+ subgraph LoadFiles["📁 Load Operation Files"]
+ FilterFiles[Filter to unseen files only]
+ FilterFiles --> DownFiles[Download each file]
+ DownFiles --> CollectOps[Collect all operations]
+ end
+
+ CollectOps --> LoadEmbed
+
+ subgraph LoadEmbed["⚡ Load Embedded Ops"]
+ FilterEmbed[Filter by op.id<br/>skip already-applied]
+ FilterEmbed --> AddOps[Add to collected ops]
+ end
+
+ AddOps --> Apply
+
+ subgraph Apply["✅ Apply All"]
+ Sort[Sort by vectorClock]
+ Sort --> Detect[Detect conflicts]
+ Detect --> ApplyOps[Apply non-conflicting]
+ end
+
+ ApplyOps --> UpdateClock[Update local<br/>lastSyncedClock]
+ UpdateClock --> Done2([Done])
+
+ style LoadSnap fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
+ style LoadFiles fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
+ style LoadEmbed fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
+ style Apply fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
+```
+
+### 6.5 Compaction: Freezing State
+
+```mermaid
+flowchart TD
+ Trigger{{"Trigger Conditions"}}
+ Trigger --> C1["operationFiles > 50"]
+ Trigger --> C2["Total ops > 5000"]
+ Trigger --> C3["7+ days since snapshot"]
+
+ C1 --> Start
+ C2 --> Start
+ C3 --> Start
+
+ Start([Begin Compaction]) --> Sync[Ensure full sync<br/>no pending ops]
+ Sync --> Read[Read current state<br/>from NgRx]
+ Read --> Generate[Generate snapshot file<br/>+ checksum]
+ Generate --> UpSnap[Upload snapshot file]
+
+ UpSnap --> UpdateMan
+
+ subgraph UpdateMan["Update Manifest"]
+ SetSnap[Set lastSnapshot →<br/>new file reference]
+ SetSnap --> ClearFiles[Clear operationFiles]
+ ClearFiles --> ClearBuffer[Clear embeddedOperations]
+ ClearBuffer --> ResetClock[Set frontierClock →<br/>snapshot's clock]
+ end
+
+ UpdateMan --> UpMan[Upload manifest.json]
+ UpMan --> Cleanup[Async: Delete old files<br/>from server]
+ Cleanup --> Done([Done])
+
+ style Trigger fill:#ffebee,stroke:#c62828,stroke-width:2px
+ style UpdateMan fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
+```
+
+### 6.6 Request Count Comparison
+
+| Scenario | Old (v1) | Hybrid (v2) | Savings |
+| -------------------- | ---------------------- | -------------------------------- | ------- |
+| Small sync (1-5 ops) | 3 requests | **1 request** | 67% |
+| Buffer overflow | 3 requests | **2 requests** | 33% |
+| Fresh install | N requests (all files) | **2 requests** (snap + manifest) | ~95% |
+| No changes | 1 request (manifest) | **1 request** (manifest) | Same |
diff --git a/src/app/core/persistence/operation-log/docs/operation-log-architecture.md b/src/app/core/persistence/operation-log/docs/operation-log-architecture.md
new file mode 100644
index 000000000..a5f258cc2
--- /dev/null
+++ b/src/app/core/persistence/operation-log/docs/operation-log-architecture.md
@@ -0,0 +1,1800 @@
+# Operation Log Architecture
+
+**Status:** Parts A, B, C, D Complete
+**Branch:** `feat/operation-logs`
+**Last Updated:** December 4, 2025 (Part C implemented)
+
+---
+
+## Introduction: The Core Architecture
+
+### The Core Concept: Event Sourcing
+
+The Operation Log fundamentally changes how the app treats data. Instead of treating the database as a "bucket" where we overwrite data (e.g., "The task title is now X"), we treat it as a **timeline of events** (e.g., "At 10:00 AM, User changed task title to X").
+
+- **Source of Truth:** The _Log_ is the truth. The "current state" of the app (what you see on screen) is just a calculation derived by replaying that log from the beginning of time.
+- **Immutability:** Once an operation is written, it is never changed. We only append new operations. If you "delete" a task, we don't remove the row; we append a `DELETE` operation.
+
+### 1. How Data is Saved (The Write Path)
+
+When a user performs an action (like ticking a checkbox):
+
+1. **Capture:** The system intercepts the Redux action (e.g., `TaskUpdate`).
+2. **Wrap:** It wraps this action into a standardized `Operation` object. This object includes:
+ - **Payload:** What changed (e.g., `{ isDone: true }`).
+ - **ID & Timestamp:** A unique ID (UUID v7) and the time it happened.
+ - **Vector Clock:** A version counter used to track causality (e.g., "This change happened _after_ version 5").
+3. **Persist:** This `Operation` is immediately appended to the `SUP_OPS` table in IndexedDB. This is very fast because we're just adding a small JSON object, not rewriting a huge file.
+4. **Broadcast:** The operation is broadcast to other open tabs so they update instantly.
+
+### 2. How Data is Loaded (The Read Path)
+
+Replaying _every_ operation since the beginning would be too slow. We use **Snapshots** to speed this up:
+
+1. **Load Snapshot:** On startup, the app loads the most recent "Save Point" (a full copy of the app state saved, say, yesterday).
+2. **Replay Tail:** The app then queries the Log: "Give me all operations that happened _after_ this snapshot."
+3. **Fast Forward:** It applies those few "tail" operations to the snapshot. Now the app is fully up to date.
+4. **Hydration Optimization:** If a sync just happened, we might simply load the new state directly, skipping the replay entirely.
+
+### 3. How Sync Works
+
+The Operation Log enables two types of synchronization:
+
+**A. True "Server Sync" (The Modern Way)**
+This is efficient and precise.
+
+- **Exchange:** Devices swap individual `Operations`, not full files. This saves massive amounts of bandwidth.
+- **Conflict Detection:** Because every operation has a **Vector Clock**, we can mathematically prove whether two changes happened concurrently (see the sketch after this list).
+ - _Example:_ Device A sends "Update Title (Version 1 -> 2)". Device B sees it has "Version 1", so it applies the update safely.
+ - _Conflict:_ If Device B _also_ made a change and is at "Version 2", it knows "Wait, we both changed Version 1 at the same time!" -> **Conflict Detected**.
+- **Resolution:** The user is shown a dialog to pick the winner. The loser isn't deleted; it's marked as "Rejected" in the log but kept for history.
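+
+For illustration, concurrency between two vector clocks can be checked as below (a standard definition; the codebase's actual helper may differ):
+
+```typescript
+type VectorClock = Record<string, number>;
+
+// Two clocks conflict when each is ahead of the other on some component:
+// neither change "happened before" the other.
+const isConcurrent = (a: VectorClock, b: VectorClock): boolean => {
+  const clientIds = new Set([...Object.keys(a), ...Object.keys(b)]);
+  let aAhead = false;
+  let bAhead = false;
+  for (const id of clientIds) {
+    if ((a[id] ?? 0) > (b[id] ?? 0)) aAhead = true;
+    if ((b[id] ?? 0) > (a[id] ?? 0)) bAhead = true;
+  }
+  return aAhead && bAhead;
+};
+```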
+
+**B. "Legacy Sync" (Dropbox, WebDAV, Local File)**
+This is a compatibility bridge.
+
+- The Operation Log itself doesn't sync files. Instead, when it saves an operation, it secretly "ticks" a version number in the legacy database.
+- The legacy sync system (PFAPI) sees this tick, realizes "Local data has changed," and triggers its standard "Upload Everything" process.
+- This ensures the new architecture works seamlessly with your existing sync providers without breaking them.
+
+### 4. Safety & Self-Healing
+
+The system assumes data corruption is inevitable (power loss, bad sync, cosmic rays) and builds defenses against it:
+
+- **Validation Checkpoints:** Data is checked _before_ writing to disk, _after_ loading from disk, and _after_ receiving sync data.
+- **Auto-Repair:** If the state is invalid (e.g., a subtask points to a missing parent), the app doesn't crash. It runs an auto-repair script (e.g., detaches the subtask) and generates a special **`REPAIR` Operation**.
+- **Audit Trail:** This `REPAIR` op is saved to the log. This means you can look back and see exactly _when_ and _why_ the system modified your data automatically.
+
+### 5. Maintenance (Compaction)
+
+If we kept every operation forever, the database would grow huge.
+
+- **Compaction:** Every ~500 operations, the system takes a new Snapshot of the current state.
+- **Cleanup:** It then looks for old operations that are already "baked into" that snapshot and have been successfully synced to the server. It safely deletes them to free up space, keeping the log lean.
+
+---
+
+## Overview
+
+The Operation Log serves **four distinct purposes**:
+
+| Purpose | Description | Status |
+| -------------------------- | --------------------------------------------- | ----------- |
+| **A. Local Persistence** | Fast writes, crash recovery, event sourcing | Complete ✅ |
+| **B. Legacy Sync Bridge** | Vector clock updates for PFAPI sync detection | Complete ✅ |
+| **C. Server Sync** | Upload/download individual operations | Complete ✅ |
+| **D. Validation & Repair** | Prevent corruption, auto-repair invalid state | Complete ✅ |
+
+> **✅ Migration Ready**: Migration safety (A.7.12), tail ops consistency (A.7.13), and unified migration interface (A.7.15) are now implemented. The system is ready for schema migrations when `CURRENT_SCHEMA_VERSION > 1`.
+
+This document is structured around these four purposes. Most complexity lives in **Part A** (local persistence). **Part B** is a thin bridge to PFAPI. **Part C** handles operation-based sync with servers. **Part D** integrates validation and automatic repair.
+
+```
+┌───────────────────────────────────────────────────────────────────┐
+│ User Action │
+└───────────────────────────────────────────────────────────────────┘
+ ▼
+ NgRx Store
+ (Runtime Source of Truth)
+ │
+ ┌───────────────────┼───────────────────┐
+ ▼ │ ▼
+ OpLogEffects │ Other Effects
+ │ │
+ ├──► SUP_OPS ◄──────┘
+ │ (Local Persistence - Part A)
+ │
+ └──► META_MODEL vector clock
+ (Legacy Sync Bridge - Part B)
+
+ PFAPI reads from NgRx for sync (not from op-log)
+```
+
+---
+
+# Part A: Local Persistence
+
+The operation log is primarily a **Write-Ahead Log (WAL)** for local persistence. It provides:
+
+1. **Fast writes** - Small ops are instant vs. serializing 5MB on every change
+2. **Crash recovery** - Replay uncommitted ops from log
+3. **Event sourcing** - Full history of user actions for debugging/undo
+
+## A.1 Database Architecture
+
+### SUP_OPS Database
+
+```typescript
+// ops table - the event log
+interface OperationLogEntry {
+ seq: number; // Auto-increment primary key
+ op: Operation; // The operation
+ appliedAt: number; // When applied locally
+ source: 'local' | 'remote';
+ syncedAt?: number; // For server sync (Part C)
+ rejectedAt?: number; // When rejected during conflict resolution
+}
+
+// state_cache table - periodic snapshots
+interface StateCache {
+ state: AllSyncModels; // Full snapshot
+ lastAppliedOpSeq: number;
+ vectorClock: VectorClock; // Current merged vector clock
+ compactedAt: number; // When this snapshot was created
+ schemaVersion?: number; // Optional for backward compatibility
+}
+```
+
+### Relationship to 'pf' Database
+
+```
+┌─────────────────────────────────────────────────────────────────────┐
+│ IndexedDB │
+├────────────────────────────────┬────────────────────────────────────┤
+│ 'pf' database │ 'SUP_OPS' database │
+│ (PFAPI Metadata) │ (Operation Log) │
+│ │ │
+│ ┌──────────────────────┐ │ ┌──────────────────────┐ │
+│ │ META_MODEL │◄─────┼──│ ops (event log) │ │
+│ │ - vectorClock │ │ │ state_cache │ │
+│ │ - revMap │ │ └──────────────────────┘ │
+│ │ - lastSyncedUpdate │ │ │
+│ └──────────────────────┘ │ ALL model data persisted here │
+│ │ │
+│ Model tables NOT used │ │
+└────────────────────────────────┴────────────────────────────────────┘
+```
+
+**Key insight:** The `pf` database is only for PFAPI sync metadata. All model data (task, project, tag, etc.) is persisted in SUP_OPS.
+
+## A.2 Write Path
+
+```
+User Action
+ │
+ ▼
+NgRx Dispatch (action)
+ │
+ ├──► Reducer updates state (optimistic, in-memory)
+ │
+ └──► OperationLogEffects
+ │
+ ├──► Filter: action.meta.isPersistent === true?
+ │ └──► Skip if false or missing
+ │
+ ├──► Filter: action.meta.isRemote === true?
+ │ └──► Skip (prevents re-logging sync/replay)
+ │
+ ├──► Convert action to Operation
+ │
+ ├──► Append to SUP_OPS.ops (disk)
+ │
+ ├──► Increment META_MODEL.vectorClock (Part B bridge)
+ │
+ └──► Broadcast to other tabs
+```
+
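+A minimal sketch of the capturing effect (names like `convertActionToOperation` are assumptions; the real logic lives in `operation-log.effects.ts` and also handles locking, the vector clock bridge, and multi-tab broadcast):
+
+```typescript
+// Inside OperationLogEffects - hypothetical sketch, not the actual implementation.
+persistOps$ = createEffect(
+  () =>
+    this.actions$.pipe(
+      filter(isPersistentAction), // only meta.isPersistent === true
+      filter((a) => a.meta.isRemote !== true), // skip sync/replay echoes
+      concatMap((action) => this.writeOperation(convertActionToOperation(action))),
+    ),
+  { dispatch: false },
+);
+```
+
+`concatMap` keeps writes strictly ordered, matching the append-only semantics of the log.
+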
+### Operation Structure
+
+```typescript
+interface Operation {
+ id: string; // UUID v7 (time-ordered)
+ actionType: string; // NgRx action type
+ opType: OpType; // CRT | UPD | DEL | MOV | BATCH
+ entityType: EntityType; // TASK | PROJECT | TAG | NOTE | ...
+ entityId?: string; // Affected entity ID
+ entityIds?: string[]; // For batch operations
+ payload: unknown; // Action payload
+ clientId: string; // Device ID
+ vectorClock: VectorClock; // Per-op causality (for Part C)
+ timestamp: number; // Wall clock (epoch ms)
+ schemaVersion: number; // For migrations
+}
+
+type OpType =
+ | 'CRT'
+ | 'UPD'
+ | 'DEL'
+ | 'MOV'
+ | 'BATCH'
+ | 'SYNC_IMPORT'
+ | 'BACKUP_IMPORT'
+ | 'REPAIR';
+```
+
+### Persistent Action Pattern
+
+Actions are persisted based on explicit `meta.isPersistent: true`:
+
+```typescript
+// persistent-action.interface.ts
+export interface PersistentActionMeta {
+ isPersistent?: boolean; // When true, action is persisted
+ entityType: EntityType;
+ entityId?: string;
+ entityIds?: string[]; // For batch operations
+ opType: OpType;
+ isRemote?: boolean; // TRUE if from Sync (prevents re-logging)
+ isBulk?: boolean; // TRUE for batch operations
+}
+
+// Type guard - only actions with explicit isPersistent: true are persisted
+export const isPersistentAction = (action: Action): action is PersistentAction => {
+ const a = action as PersistentAction;
+ return !!a.meta && a.meta.isPersistent === true;
+};
+```
+
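+For example, a task action might attach this metadata like so (a hypothetical action creator; actual action definitions vary):
+
+```typescript
+import { createAction } from '@ngrx/store';
+
+// Hypothetical persistent action - the meta block marks it for the op-log.
+export const updateTaskTitle = createAction(
+  '[Task] Update Title',
+  (taskId: string, title: string) => ({
+    task: { id: taskId, changes: { title } },
+    meta: {
+      isPersistent: true, // captured by OperationLogEffects
+      entityType: 'TASK',
+      entityId: taskId,
+      opType: 'UPD',
+    } satisfies PersistentActionMeta,
+  }),
+);
+```
+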
+Actions that should NOT be persisted:
+
+- UI-only actions (selectedTaskId, currentTaskId, toggle sidebar, etc.)
+- Load/hydration actions (data already in log)
+- Upsert actions (typically from sync/import)
+- Internal cleanup actions
+
+## A.3 Read Path (Hydration)
+
+```
+App Startup
+ │
+ ▼
+OperationLogHydratorService
+ │
+ ├──► Load snapshot from SUP_OPS.state_cache
+ │ │
+ │ └──► If no snapshot: Genesis migration from 'pf'
+ │
+ ├──► Run schema migration if needed
+ │
+ ├──► Dispatch loadAllData(snapshot, { isHydration: true })
+ │
+ └──► Load tail ops (seq > snapshot.lastAppliedOpSeq)
+ │
+ ├──► If last op is SyncImport: load directly (skip replay)
+ │
+ ├──► Otherwise: Replay ops (prevents re-logging via isRemote flag)
+ │
+ └──► If replayed >10 ops: Save new snapshot for faster future loads
+```
+
+### Hydration Optimizations
+
+Two optimizations speed up hydration:
+
+1. **Skip replay for SyncImport**: When the last operation in the log is a `SyncImport` (full state import), the hydrator loads it directly instead of replaying all preceding operations. This significantly speeds up initial load after imports or syncs.
+
+2. **Save snapshot after replay**: After replaying more than 10 tail operations, a new state cache snapshot is saved. This avoids replaying the same operations on subsequent startups (see the sketch below).
+
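+A condensed sketch of this tail handling (method names such as `getOpsAfterSeq` are assumptions):
+
+```typescript
+// Hypothetical sketch of the tail logic in the hydrator.
+const tailOps = await this.opLogStore.getOpsAfterSeq(snapshot.lastAppliedOpSeq);
+const lastOp = tailOps[tailOps.length - 1]?.op;
+
+if (lastOp?.opType === 'SYNC_IMPORT') {
+  // Optimization 1: the import already carries the full state - no replay needed.
+  this.store.dispatch(loadAllData({ appDataComplete: lastOp.payload as AllSyncModels }));
+} else {
+  await this.operationApplier.applyOperations(tailOps.map((e) => e.op));
+  if (tailOps.length > 10) {
+    // Optimization 2: bake the replayed tail into a fresh snapshot.
+    await this.compactionService.compact();
+  }
+}
+```
+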
+### Genesis Migration
+
+On first startup (SUP_OPS empty):
+
+```typescript
+async createGenesisSnapshot(): Promise<void> {
+ // Load ALL models from legacy pf database
+ const allModels = await this.pfapiService.pf.getAllSyncModelData();
+
+ // Create initial snapshot
+ await this.opLogStore.saveStateCache({
+ state: allModels,
+ lastAppliedOpSeq: 0,
+ vectorClock: {},
+ compactedAt: Date.now(),
+ schemaVersion: CURRENT_SCHEMA_VERSION
+ });
+}
+```
+
+## A.4 Compaction
+
+### Purpose
+
+Without compaction, the op log grows unbounded. Compaction:
+
+1. Creates a fresh snapshot from current NgRx state
+2. Deletes old ops that are "baked into" the snapshot
+
+### Triggers
+
+- Every **500 operations**
+- After sync download (safety)
+- On app close (optional)
+
+### Process
+
+```typescript
+async compact(): Promise<void> {
+ // 1. Acquire lock
+ await this.lockService.request('sp_op_log_compact', async () => {
+ // 2. Read current state from NgRx (via delegate)
+ const currentState = await this.storeDelegate.getAllSyncModelDataFromStore();
+
+ // 3. Save new snapshot
+ const lastSeq = await this.opLogStore.getLastSeq();
+ await this.opLogStore.saveStateCache({
+ state: currentState,
+ lastAppliedOpSeq: lastSeq,
+ vectorClock: await this.opLogStore.getCurrentVectorClock(),
+ compactedAt: Date.now(),
+ schemaVersion: CURRENT_SCHEMA_VERSION
+ });
+
+ // 4. Delete old ops (sync-aware)
+ // Only delete ops that have been synced AND are older than retention window
+ const retentionWindowMs = 7 * 24 * 60 * 60 * 1000; // 7 days
+ const cutoff = Date.now() - retentionWindowMs;
+
+ await this.opLogStore.deleteOpsWhere(
+ (entry) =>
+ !!entry.syncedAt && // never drop unsynced ops
+ entry.appliedAt < cutoff &&
+ entry.seq <= lastSeq
+ );
+ });
+}
+```
+
+### Configuration
+
+| Setting | Value | Description |
+| ------------------ | ------- | ------------------------- |
+| Compaction trigger | 500 ops | Ops before snapshot |
+| Retention window | 7 days | Keep recent synced ops |
+| Unsynced ops | ∞ | Never delete unsynced ops |
+
+## A.5 Multi-Tab Coordination
+
+### Write Locking
+
+```typescript
+// Primary: Web Locks API
+await navigator.locks.request('sp_op_log_write', async () => {
+ await this.writeOperation(op);
+});
+
+// Fallback: localStorage mutex (for older WebViews)
+```
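+
+A rough sketch of the localStorage fallback (best-effort only, since localStorage offers no atomic compare-and-swap; key name and timings are assumptions):
+
+```typescript
+// Hypothetical localStorage mutex for environments without the Web Locks API.
+const LOCK_KEY = 'sp_op_log_write_lock';
+const LOCK_TTL_MS = 5000; // stale locks from crashed tabs expire after this
+
+async function withLocalStorageLock(fn: () => Promise<void>): Promise<void> {
+  // Spin until the lock is free or stale.
+  while (true) {
+    const expiry = Number(localStorage.getItem(LOCK_KEY));
+    if (!expiry || expiry < Date.now()) break;
+    await new Promise((resolve) => setTimeout(resolve, 50));
+  }
+  localStorage.setItem(LOCK_KEY, String(Date.now() + LOCK_TTL_MS));
+  try {
+    await fn();
+  } finally {
+    localStorage.removeItem(LOCK_KEY);
+  }
+}
+```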
+
+### State Broadcast
+
+When one tab writes an operation:
+
+1. Write to SUP_OPS
+2. Broadcast via BroadcastChannel
+3. Other tabs receive and apply (with `isRemote=true` to prevent re-logging)
+
+```typescript
+// Tab A writes
+this.broadcastChannel.postMessage({ type: 'NEW_OP', op });
+
+// Tab B receives
+this.broadcastChannel.onmessage = (event) => {
+ if (event.data.type === 'NEW_OP') {
+ const action = convertOpToAction(event.data.op); // Sets isRemote: true
+ this.store.dispatch(action);
+ }
+};
+```
+
+## A.6 Disaster Recovery
+
+### SUP_OPS Corruption
+
+```
+1. Detect: Hydration fails or returns empty/invalid state
+2. Check legacy 'pf' database for data
+3. If found: Run recovery migration with that data
+4. If not: Check remote sync for data
+5. If remote has data: Force sync download
+6. If all else fails: User must restore from backup
+```
+
+### Implementation
+
+```typescript
+async hydrateStore(): Promise<void> {
+ try {
+ const snapshot = await this.opLogStore.loadStateCache();
+ if (!snapshot || !this.isValidSnapshot(snapshot)) {
+ await this.attemptRecovery();
+ return;
+ }
+ // Normal hydration...
+ } catch (e) {
+ await this.attemptRecovery();
+ }
+}
+
+private async attemptRecovery(): Promise<void> {
+ // 1. Try legacy database
+ const legacyData = await this.pfapi.getAllSyncModelDataFromModelCtrls();
+ if (legacyData && this.hasData(legacyData)) {
+ await this.recoverFromLegacyData(legacyData);
+ return;
+ }
+ // 2. Try remote sync
+ // 3. Show error to user
+}
+```
+
+## A.7 Schema Migrations
+
+When Super Productivity's data model changes (new fields, renamed properties, restructured entities), schema migrations ensure existing data remains usable after app updates.
+
+> **Current Status:** Migration infrastructure is implemented, but no actual migrations exist yet. The `MIGRATIONS` array is empty and `CURRENT_SCHEMA_VERSION = 1`. This section documents the designed behavior for when migrations are needed.
+
+### Configuration
+
+`CURRENT_SCHEMA_VERSION` is defined in `src/app/core/persistence/operation-log/schema-migration.service.ts`:
+
+```typescript
+export const CURRENT_SCHEMA_VERSION = 1;
+export const MIN_SUPPORTED_SCHEMA_VERSION = 1;
+export const MAX_VERSION_SKIP = 5; // Max versions ahead we'll attempt to load
+```
+
+### Core Concepts
+
+| Concept | Description |
+| -------------------------- | --------------------------------------------------------------------------- |
+| **Schema Version** | Integer tracking current data model version (stored in ops + snapshots) |
+| **Migration** | Function transforming state from version N to N+1 |
+| **Snapshot Boundary** | Migrations run when loading snapshots, creating clean versioned checkpoints |
+| **Forward Compatibility** | Newer apps can read older data (via migrations) |
+| **Backward Compatibility** | Older apps receiving newer ops (via graceful degradation) |
+
+### Migration Triggers
+
+```
+┌─────────────────────────────────────────────────────────────────────┐
+│ App Update Detected │
+│ (schemaVersion mismatch) │
+└─────────────────────────────────────────────────────────────────────┘
+ │
+ ┌───────────────────┼───────────────────┐
+ ▼ ▼ ▼
+ Load Snapshot Replay Ops Receive Remote Ops
+ (stale version) (mixed versions) (newer/older version)
+ │ │ │
+ ▼ ▼ ▼
+ Run migrations Apply ops as-is Migrate if needed
+ on full state (ops are additive) (full state imports)
+```
+
+### A.7.1 Snapshot Migration (Local)
+
+When app starts and finds a snapshot with older schema version:
+
+```
+App Startup (schema v1 → v2)
+ │
+ ▼
+Load state_cache (v1 snapshot)
+ │
+ ▼
+Detect version mismatch: snapshot.schemaVersion < CURRENT_SCHEMA_VERSION
+ │
+ ▼
+Run migration chain: migrateV1ToV2(snapshot.state)
+ │
+ ▼
+Dispatch loadAllData(migratedState)
+ │
+ ▼
+Force new snapshot with schemaVersion = 2
+ │
+ ▼
+Continue with tail ops (ops after snapshot)
+```
+
+### A.7.2 Operation Replay (Mixed Versions)
+
+Operations in the log may have different schema versions. During replay:
+
+```typescript
+// Operations are "additive" - they describe what changed, not full state
+// Example: { opType: 'UPD', payload: { task: { id: 'x', changes: { title: 'new' } } } }
+
+// Old ops apply to migrated state because:
+// 1. Fields they reference still exist (or are mapped)
+// 2. New fields have defaults filled by migration
+// 3. Renamed fields are handled by migration aliases
+
+async replayOperation(op: Operation, currentState: AppDataComplete): Promise<void> {
+ // Op schema version is informational - ops apply to current state structure
+ // The snapshot was already migrated to current schema
+ await this.operationApplier.applyOperations([op]);
+}
+```
+
+> **Limitation:** Operations are NOT migrated during replay. If a migration renames a field (e.g., `estimate` → `timeEstimate`), old operations referencing `estimate` will apply that field to the entity, potentially causing data inconsistency. To avoid this:
+>
+> 1. **Prefer additive migrations** - Add new fields with defaults rather than renaming
+> 2. **Use aliases in reducers** - If renaming is necessary, reducers should accept both old and new field names
+> 3. **Force compaction after migration** - Reduce the window of mixed-version operations
+>
+> Operation-level migration (transforming old ops to new schema during replay) is listed as a future enhancement in A.7.9.
+
+### A.7.3 Remote Sync (Cross-Version Clients)
+
+When clients run different Super Productivity versions, sync must handle version differences:
+
+```
+┌─────────────────────────────────────────────────────────────────────┐
+│ Remote Sync Scenarios │
+└─────────────────────────────────────────────────────────────────────┘
+
+Scenario 1: Newer client receives older ops
+──────────────────────────────────────────
+Client v2 ◄─── ops from v1 client
+ │
+ └── Ops apply normally (additive changes to migrated state)
+ Missing new fields use defaults from migration
+
+Scenario 2: Older client receives newer ops
+──────────────────────────────────────────
+Client v1 ◄─── ops from v2 client
+ │
+ ├── Individual ops: Unknown fields ignored (graceful degradation)
+ │ { task: { id: 'x', changes: { title: 'a', newFieldV2: 'b' } } }
+ │ ↑ ignored by v1
+ │
+ └── Full state imports (SYNC_IMPORT): May fail validation
+ → User prompted to update app or resolve manually
+
+Scenario 3: Mixed version sync with conflicts
+──────────────────────────────────────────
+Client v1 conflicts with Client v2
+ │
+ └── Conflict resolution uses entity-level comparison
+ Version-specific fields handled during merge
+```
+
+### A.7.4 Full State Imports (SYNC_IMPORT/BACKUP_IMPORT)
+
+When receiving full state from remote (e.g., SYNC_IMPORT from another client):
+
+```typescript
+async handleFullStateImport(payload: { appDataComplete: AppDataComplete }): Promise<void> {
+ const { appDataComplete } = payload;
+
+ // 1. Detect schema version of incoming state (from schemaVersion field or structure)
+ const incomingVersion = appDataComplete.schemaVersion ?? detectSchemaVersion(appDataComplete);
+
+ if (incomingVersion < CURRENT_SCHEMA_VERSION) {
+ // 2a. Migrate incoming state up to current version
+ const migratedState = await this.migrateState(appDataComplete, incomingVersion);
+ this.store.dispatch(loadAllData({ appDataComplete: migratedState }));
+
+ } else if (incomingVersion > CURRENT_SCHEMA_VERSION + MAX_VERSION_SKIP) {
+ // 2b. Too far ahead - reject and prompt user to update
+ this.snackService.open({
+ type: 'ERROR',
+ msg: T.F.SYNC.S.VERSION_TOO_OLD,
+ actionStr: T.PS.UPDATE_APP,
+ actionFn: () => window.open(UPDATE_URL, '_blank'),
+ });
+ throw new Error(`Schema version ${incomingVersion} requires app update`);
+
+ } else if (incomingVersion > CURRENT_SCHEMA_VERSION) {
+ // 2c. Slightly ahead - attempt graceful load with warning
+ PFLog.warn('Received state from newer app version', { incomingVersion, current: CURRENT_SCHEMA_VERSION });
+ this.snackService.open({
+ type: 'WARN',
+ msg: T.F.SYNC.S.NEWER_VERSION_WARNING, // "Data from newer app version - some features may not work"
+ });
+ // Attempt load - unknown fields will be stripped by Typia validation
+ // This may cause data loss for fields the older app doesn't understand
+ this.store.dispatch(loadAllData({ appDataComplete }));
+
+ } else {
+ // 2d. Same version - direct load
+ this.store.dispatch(loadAllData({ appDataComplete }));
+ }
+
+ // 3. Save snapshot (always with current schema version)
+ await this.saveStateCache(/* current state with schemaVersion = CURRENT_SCHEMA_VERSION */);
+}
+```
+
+### A.7.5 Migration Implementation
+
+Migrations are defined in `src/app/core/persistence/operation-log/schema-migration.service.ts`. (A.7.15 describes the unified interface that extends this pattern with operation-level migration.)
+
+#### How to Create a New Migration
+
+1. **Increment Version**: Update `CURRENT_SCHEMA_VERSION` to `N + 1`.
+2. **Define Migration**: Add a new entry to the `MIGRATIONS` array.
+3. **Implement Logic**: Write the transformation function `migrate(state)`.
+
+```typescript
+// src/app/core/persistence/operation-log/schema-migration.service.ts
+
+export const CURRENT_SCHEMA_VERSION = 2; // Increment this!
+
+const MIGRATIONS: SchemaMigration[] = [
+ {
+ fromVersion: 1,
+ toVersion: 2,
+ description: 'Add priority field to tasks',
+ migrate: (state: AllSyncModels) => {
+ // 'state' here is the entire app data (AllSyncModels)
+ // Deep clone or careful spread recommended to avoid direct mutation
+ const newState: AllSyncModels = { ...state };
+
+ // Transform specific model (e.g., task)
+ if (newState.task && newState.task.entities) {
+ newState.task.entities = Object.fromEntries(
+ Object.entries(newState.task.entities).map(([id, task]: [string, any]) => [
+ id,
+ { ...task, priority: task.priority ?? 'NORMAL' },
+ ]),
+ );
+ }
+ return newState;
+ },
+ },
+];
+```
+
+#### Best Practices
+
+1. **Type Safety**: The service-level interface types `state` loosely (`unknown`) for flexibility; inside a migration, aim for `state: AllSyncModels` or a more specific type and cast internally as needed.
+2. **Immutability**: Always return a new state object. Avoid directly mutating the incoming `state` object.
+3. **Handle missing data**: `state` or its nested properties might be partial or undefined (e.g., in edge cases or if a new model is introduced). Use optional chaining (`?.`) and nullish coalescing (`??`) as appropriate.
+4. **Preserve unrelated data**: Do not unintentionally drop fields or entire models that your migration is not concerned with.
+5. **Test your migration**: Write unit tests to verify that your migration correctly transforms data from version N to N+1, especially edge cases.
+
+#### Service Implementation
+
+```typescript
+// schema-migration.service.ts
+
+interface SchemaMigration {
+ fromVersion: number;
+ toVersion: number;
+ description: string;
+ migrate: (state: unknown) => unknown;
+}
+
+async migrateIfNeeded(snapshot: StateCache): Promise<StateCache> {
+ let { state, schemaVersion } = snapshot;
+ schemaVersion = schemaVersion ?? 1; // Default for pre-versioned data
+
+ while (schemaVersion < CURRENT_SCHEMA_VERSION) {
+ const migration = MIGRATIONS.find(m => m.fromVersion === schemaVersion);
+ if (!migration) {
+ throw new Error(`No migration path from schema v${schemaVersion}`);
+ }
+
+ PFLog.log(`[Migration] Running: ${migration.description}`);
+ state = migration.migrate(state);
+ schemaVersion = migration.toVersion;
+ }
+
+ return { ...snapshot, state, schemaVersion };
+}
+
+// Helper to detect schema version from state structure (fallback when schemaVersion field is missing)
+function detectSchemaVersion(state: unknown): number {
+ // Primary: Use explicit schemaVersion field if present
+ if (typeof state === 'object' && state !== null && 'schemaVersion' in state) {
+ return (state as { schemaVersion: number }).schemaVersion;
+ }
+
+ // Fallback: Infer from structure (add checks as migrations are implemented)
+ // Example checks (not yet implemented):
+ // - v2 added task.priority field
+ // - v3 renamed task.estimate to task.timeEstimate
+ //
+ // if (hasTaskPriorityField(state)) return 2;
+ // if (hasTimeEstimateField(state)) return 3;
+
+ return 1; // Default: assume v1 for unversioned/legacy data
+}
+```
+
+### A.7.6 Version Handling in Operations
+
+Each operation includes the schema version when it was created:
+
+```typescript
+interface Operation {
+ // ... other fields
+ schemaVersion: number; // Schema version when op was created
+}
+
+// When creating operations:
+const op: Operation = {
+ id: uuidv7(),
+ // ... other fields
+ schemaVersion: CURRENT_SCHEMA_VERSION, // e.g., 1
+};
+```
+
+This enables:
+
+- **Debugging** - Know which app version created an operation
+- **Future migration** - Transform old ops if needed (not currently implemented)
+- **Compatibility checks** - Warn when receiving ops from much newer versions
+
+### A.7.7 Design Principles for Migrations
+
+| Principle | Description |
+| ------------------------------ | -------------------------------------------------- |
+| **Additive changes preferred** | Adding new optional fields with defaults is safest |
+| **Avoid breaking renames** | Use aliases or transformations instead |
+| **Test migration chains** | Verify chained runs (v1→v2→v3) produce the expected final state |
+| **Preserve unknown fields** | Don't strip fields from newer versions |
+| **Idempotent migrations** | Running twice should be safe |
+
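+For instance, the "test migration chains" and "idempotent migrations" principles can be checked with a unit test along these lines (a sketch; `legacyV1Fixture` and the service wiring are assumptions):
+
+```typescript
+// Hypothetical Jasmine spec for the migration chain.
+it('migrates a v1 snapshot to the current version, idempotently', async () => {
+  const v1Snapshot: StateCache = {
+    state: legacyV1Fixture, // assumed fixture with v1-shaped data
+    lastAppliedOpSeq: 0,
+    vectorClock: {},
+    compactedAt: Date.now(),
+    schemaVersion: 1,
+  };
+
+  const migrated = await service.migrateIfNeeded(v1Snapshot);
+  expect(migrated.schemaVersion).toBe(CURRENT_SCHEMA_VERSION);
+
+  // Running the chain again on already-migrated data must be a no-op.
+  expect(await service.migrateIfNeeded(migrated)).toEqual(migrated);
+});
+```
+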
+### A.7.8 Handling Unsupported Versions
+
+```typescript
+// When local version is too old for remote data
+if (remoteSchemaVersion > CURRENT_SCHEMA_VERSION + MAX_VERSION_SKIP) {
+ this.snackService.open({
+ type: 'ERROR',
+ msg: T.F.SYNC.S.VERSION_TOO_OLD,
+ actionStr: T.PS.UPDATE_APP,
+ actionFn: () => window.open(UPDATE_URL, '_blank'),
+ });
+ throw new Error('App version too old for synced data');
+}
+
+// When remote sends data from ancient version
+if (remoteSchemaVersion < MIN_SUPPORTED_SCHEMA_VERSION) {
+ this.snackService.open({
+ type: 'ERROR',
+ msg: T.F.SYNC.S.REMOTE_DATA_TOO_OLD,
+ });
+ // May need manual intervention or force re-sync
+}
+```
+
+### A.7.9 Future Considerations
+
+| Enhancement | Description | Priority |
+| ---------------------------- | --------------------------------------------- | -------- |
+| **Operation migration** | Transform old ops to new schema during replay | Low |
+| **Conflict-aware migration** | Special handling for version conflicts | Medium |
+| **Migration rollback** | Undo migration if it fails partway | Low |
+| **Progressive migration** | Migrate in background over multiple sessions | Low |
+
+### A.7.10 Relationship with Legacy PFAPI Migrations (CROSS_MODEL_MIGRATION)
+
+The application contains a legacy migration system (`CROSS_MODEL_MIGRATION` in `src/app/pfapi/migrate/cross-model-migrations.ts`) used by the old persistence layer.
+
+**Do we keep it?**
+Yes, for now. The **Genesis Migration** (A.3) relies on `pfapi` services to load the initial state from the legacy database. This loading process executes `CROSS_MODEL_MIGRATION`s to ensure the legacy data is in a consistent state before it is imported into the Operation Log.
+
+**Should we remove it?**
+No, not yet. It provides the bridge from older versions of the app to the Operation Log version. However:
+
+1. **No new migrations** should be added to `CROSS_MODEL_MIGRATION`.
+2. All future schema changes should use the **Schema Migration** system (A.7) described above.
+3. Once the Operation Log is fully established and legacy data is considered obsolete (e.g., after several major versions), the legacy migration code can be removed.
+
+### A.7.11 Conflict-Aware Migration Strategy
+
+**Status:** Design Ready (Not Implemented)
+
+To handle synchronization between clients on different schema versions, the system must ensure that operations are comparable ("apples-to-apples") before conflict detection occurs.
+
+#### Strategy
+
+1. **Operation-Level Migration Pipeline**
+
+ - Extend `SchemaMigration` interface to include `migrateOperation?: (op: Operation) => Operation`.
+ - This allows transforming a V1 `UPDATE` (e.g., `{ changes: { oldField: 'val' } }`) into a V2 `UPDATE` (e.g., `{ changes: { newField: 'val' } }`).
+
+2. **Inbound Migration (Receive Path)**
+
+ - **Location:** `OperationLogSyncService.processRemoteOps`
+ - **Logic:**
+ 1. Receive `remoteOps`.
+ 2. Check `op.schemaVersion` for each op.
+ 3. If `op.schemaVersion < CURRENT_SCHEMA_VERSION`, run `SchemaMigrationService.migrateOperation(op)`.
+ 4. Pass _migrated_ ops to `detectConflicts()`.
+ - **Benefit:** Conflict detection works on the _current_ schema structure, preventing false negatives (missing a conflict because field names differ) and confusing diffs.
+
+3. **Outbound Migration (Send Path)**
+
+ - **Location:** `OperationLogStore.getUnsynced()`
+ - **Strategy:** **Send As-Is (Receiver Migrates)**.
+ - **Logic:** The client sends operations exactly as they are stored in `SUP_OPS`, preserving their original `schemaVersion`. We do _not_ migrate operations before uploading.
+ - **Reasoning:** This follows the "Robustness Principle" (be conservative in what you do, liberal in what you accept). It avoids the performance cost of batch-migrating thousands of pending operations after a long offline period and eliminates the need to rewrite `SUP_OPS`. The receiving client (which knows its own version best) is responsible for upgrading incoming data.
+
+4. **Destructive Migrations**
+
+ - **Scenario:** A feature is removed in V2 (e.g., "Pomodoro" settings deleted), but we receive a V1 `UPDATE` op for it.
+ - **Logic:** The `migrateOperation` function can return `null`.
+ - **Handling:** The sync and replay systems must handle `null` by dropping the operation entirely.
+
+5. **Conflict Resolution**
+ - The `ConflictResolutionService` will display the _migrated_ remote operation against the current local state.
+ - **UI Decision:** We display the migrated values directly without special "migrated" annotations. Ideally, the conflict dialog uses the same formatters/components as the main UI, so the data looks familiar (e.g., "1 hour" instead of "3600").
+
+### A.7.12 Migration Safety
+
+**Status:** Implemented ✅
+
+Migrations are critical and risky. To prevent data loss if a migration crashes mid-process:
+
+1. **Backup Before Migrate:** Before `SchemaMigrationService.migrateIfNeeded()` begins modifying the state, it must create a backup of the current `state_cache`.
+ - Implementation: Copy `SUP_OPS` object store entry to a backup key (e.g., `state_cache_backup`).
+2. **Rollback on Failure:** If migration throws an error, catch it, restore the backup, and prevent the app from loading a potentially corrupted "half-migrated" state. (Likely show a fatal error screen asking the user to export a backup or contact support.) A sketch of this flow follows.
+
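+A minimal sketch of this backup-and-rollback flow, assuming hypothetical `saveStateCacheBackup`/`loadStateCacheBackup` helpers on the store:
+
+```typescript
+// Hypothetical wrapper around migrateIfNeeded (helper names are assumptions).
+async safeMigrate(snapshot: StateCache): Promise<StateCache> {
+  // 1. Backup before migrate
+  await this.opLogStore.saveStateCacheBackup(snapshot);
+  try {
+    return await this.migrateIfNeeded(snapshot);
+  } catch (e) {
+    // 2. Rollback on failure: restore the backup, surface a fatal error
+    const backup = await this.opLogStore.loadStateCacheBackup();
+    if (backup) {
+      await this.opLogStore.saveStateCache(backup);
+    }
+    throw new Error(`Migration failed; backup restored: ${e}`);
+  }
+}
+```
+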
+### A.7.13 Tail Ops Consistency
+
+**Status:** Implemented ✅
+
+**What are Tail Ops?**
+When the app starts, it loads the most recent snapshot (e.g., from yesterday). It then loads all operations that occurred _after_ that snapshot (the "tail") to reconstruct the exact state at the moment the app was closed.
+
+**The Consistency Gap**
+
+1. Snapshot is loaded (Version 1).
+2. App is updated (Version 2).
+3. Snapshot is migrated (V1 → V2).
+4. Tail ops are loaded (Version 1).
+5. **Problem:** If we apply V1 tail ops to the V2 state, they might write to fields that no longer exist or have changed format.
+
+**Solution**
+The **Tail Ops MUST be migrated** during hydration.
+
+- The `OperationLogHydrator` must pass tail ops through the same `SchemaMigrationService.migrateOperation` pipeline used for sync.
+- This ensures that the `OperationApplier` always receives operations matching the current runtime schema, as in the sketch below.
+
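+In the hydrator, that pipeline could look roughly like this (a sketch; assumes `migrateOperation` chains version steps and returns `null` for dropped ops):
+
+```typescript
+// Hypothetical tail-ops migration before replay (see A.7.11 for the pipeline).
+const migratedTail = tailOps
+  .map(({ op }) =>
+    op.schemaVersion < CURRENT_SCHEMA_VERSION
+      ? this.schemaMigrationService.migrateOperation(op)
+      : op,
+  )
+  .filter((op): op is Operation => op !== null); // null = op for a removed feature
+
+await this.operationApplier.applyOperations(migratedTail);
+```
+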
+### A.7.14 Other Design Decisions
+
+**Operation Envelope vs. Payload**
+
+- **Decision:** `schemaVersion` applies **only to the `payload`** of the operation.
+- **Reasoning:** Changes to the Operation structure itself (the "envelope", e.g., `id`, `vectorClock`, `opType`) are considered "System Level" breaking changes. They cannot be handled by the standard schema migration system. If the envelope changes, we would likely need a "Genesis V2" event or a specialized one-time database upgrade script.
+
+**Conflict UI for Synthetic Conflicts**
+
+- **Scenario:** Migration transforms logic (e.g., "1h" string → 3600 seconds).
+- **Decision:** The Conflict Resolution UI will simply show the migrated value (3600). We will **not** implement special annotations (e.g., "Values differ due to migration").
+- **KISS Principle:** Users generally recognize their data even if the format shifts slightly. The complexity of tracking "why" a value changed is not worth the implementation cost.
+
+### A.7.15 Unified State and Operation Migrations
+
+**Status:** Implemented ✅
+
+State migrations and operation migrations are closely related—both handle the same underlying data model changes. This section defines how they work together.
+
+#### The Relationship
+
+| Migration Type | Applies To | When Executed |
+| ----------------------- | ----------------------------- | --------------------------- |
+| **State migration** | Full snapshot (AllSyncModels) | Hydration, sync import |
+| **Operation migration** | Individual ops | Tail ops replay, remote ops |
+
+Both use the same `schemaVersion` field. A single schema change may require one or both migration types.
+
+#### When Is Operation Migration Needed?
+
+| Change Type | State Migration | Op Migration | Example |
+| -------------------- | ----------------- | ------------------------------ | --------------------------- |
+| Add optional field | ✅ (set default) | ❌ (old ops just don't set it) | `priority?: string` |
+| Rename field | ✅ (copy old→new) | ✅ (transform payload) | `estimate` → `timeEstimate` |
+| Remove field/feature | ✅ (delete it) | ✅ (drop ops or strip field) | Remove `pomodoro` |
+| Change field type | ✅ (convert) | ✅ (convert in payload) | `"1h"` → `3600` |
+| Add entity type | ✅ (initialize) | ❌ (no old ops exist) | New `Board` entity |
+
+**Rule of thumb:** If the change is purely additive (new optional fields with defaults, new entity types), operation migration is usually not needed. If the change modifies or removes existing fields, operation migration is required.
+
+#### Unified Migration Definition
+
+Link state and operation migrations in a single definition:
+
+```typescript
+interface SchemaMigration {
+ fromVersion: number;
+ toVersion: number;
+ description: string;
+
+ // Required: transform full state snapshot
+ migrateState: (state: AllSyncModels) => AllSyncModels;
+
+ // Optional: transform individual operation
+ // Return null to drop the operation entirely (e.g., for removed features)
+ migrateOperation?: (op: Operation) => Operation | null;
+
+ // Explicit declaration forces author to think about operation migration
+ // If true but migrateOperation is undefined, startup validation fails
+ requiresOperationMigration: boolean;
+}
+```
+
+**Benefits:**
+
+1. **Single source of truth** - One place defines all changes for a version bump
+2. **Explicit decision** - `requiresOperationMigration` forces thinking about ops
+3. **Consistent versioning** - No risk of version number mismatch between the two
+4. **Validation** - Startup check catches missing operation migrations
+
+#### Startup Validation
+
+```typescript
+// In schema-migration.service.ts initialization
+for (const migration of MIGRATIONS) {
+ if (migration.requiresOperationMigration && !migration.migrateOperation) {
+ throw new Error(
+ `Migration v${migration.fromVersion}→v${migration.toVersion} declares ` +
+ `requiresOperationMigration=true but migrateOperation is not defined`,
+ );
+ }
+}
+```
+
+#### Execution Order
+
+```
+Hydration Flow:
+ 1. Load snapshot from state_cache (schemaVersion = 1)
+ 2. Run migrateState(snapshot) → v2 state
+ 3. Save migrated snapshot (for faster future loads)
+ 4. Load tail ops (may have schemaVersion = 1)
+ 5. For each op where op.schemaVersion < CURRENT:
+ migrateOperation(op) → v2 op (or null to drop)
+ 6. Apply migrated ops to v2 state
+
+Sync Flow (receiving remote ops):
+ 1. Download remote ops (may have mixed schemaVersions)
+ 2. For each op where op.schemaVersion < CURRENT:
+ migrateOperation(op) → v2 op
+ 3. Run conflict detection on v2 ops
+ 4. Apply to v2 state
+```
+
+#### Example: Field Rename Migration
+
+```typescript
+const MIGRATIONS: SchemaMigration[] = [
+ {
+ fromVersion: 1,
+ toVersion: 2,
+ description: 'Rename task.estimate to task.timeEstimate',
+ requiresOperationMigration: true,
+
+ migrateState: (state) => {
+ if (!state.task?.entities) return state;
+
+ const migratedEntities = Object.fromEntries(
+ Object.entries(state.task.entities).map(([id, task]: [string, any]) => [
+ id,
+ {
+ ...task,
+ timeEstimate: task.estimate ?? task.timeEstimate ?? 0,
+ estimate: undefined, // Remove old field
+ },
+ ]),
+ );
+
+ return {
+ ...state,
+ task: { ...state.task, entities: migratedEntities },
+ };
+ },
+
+ migrateOperation: (op) => {
+ // Only transform TASK UPDATE operations
+ if (op.entityType !== 'TASK' || op.opType !== 'UPD') {
+ return op;
+ }
+
+ const changes = (op.payload as any)?.changes;
+ if (!changes || changes.estimate === undefined) {
+ return op; // No estimate field in this op
+ }
+
+ // Transform: estimate → timeEstimate
+ return {
+ ...op,
+ schemaVersion: 2, // Mark as migrated
+ payload: {
+ ...op.payload,
+ changes: {
+ ...changes,
+ timeEstimate: changes.estimate,
+ estimate: undefined,
+ },
+ },
+ };
+ },
+ },
+];
+```
+
+#### Example: Feature Removal Migration
+
+```typescript
+{
+ fromVersion: 2,
+ toVersion: 3,
+ description: 'Remove deprecated pomodoro feature',
+ requiresOperationMigration: true,
+
+ migrateState: (state) => {
+ // Remove pomodoro data from state
+ const { pomodoro, ...rest } = state as any;
+ return rest;
+ },
+
+ migrateOperation: (op) => {
+ // Drop any operations targeting the removed feature
+ if (op.entityType === 'POMODORO') {
+ return null; // Operation is dropped entirely
+ }
+
+ // Strip pomodoro fields from task operations
+ if (op.entityType === 'TASK' && op.opType === 'UPD') {
+ const changes = (op.payload as any)?.changes;
+ if (changes?.pomodoroCount !== undefined) {
+ const { pomodoroCount, ...restChanges } = changes;
+ return {
+ ...op,
+ schemaVersion: 3,
+ payload: { ...op.payload, changes: restChanges },
+ };
+ }
+ }
+
+ return op;
+ },
+}
+```
+
+#### Why Not Auto-Derive Operation Migration?
+
+It might seem possible to derive operation migration from state migration:
+
+```typescript
+// Hypothetical auto-derivation (NOT recommended)
+migrateOperation(op: Operation): Operation {
+ const fakeState = { [op.entityType]: { entities: { temp: op.payload } } };
+ const migrated = migrateState(fakeState);
+ return { ...op, payload: migrated[op.entityType].entities.temp };
+}
+```
+
+**This doesn't work because:**
+
+1. **UPDATE payloads differ from entities** - UPDATE ops have `{ id, changes }`, not full entity
+2. **Partial data** - Ops may only contain the fields being changed
+3. **CREATE vs UPDATE semantics** - State migration sees full entities; ops may be partial
+4. **Null handling** - Dropping ops (return null) can't be auto-derived
+
+**Conclusion:** Explicit `migrateOperation` functions are required for non-additive changes.
+
+---
+
+# Part B: Legacy Sync Bridge
+
+The operation log does **NOT** participate in the legacy sync protocol. PFAPI handles all sync logic for the WebDAV, Dropbox, and LocalFile providers.
+
+However, the op-log must **bridge** to PFAPI by updating `META_MODEL.vectorClock` so PFAPI can detect local changes.
+
+## B.1 How Legacy Sync Works
+
+```
+Sync Triggered (WebDAV/Dropbox/LocalFile)
+ │
+ ▼
+PFAPI compares local vs remote vector clocks
+ │
+ └──► META_MODEL.vectorClock vs remote __meta.vectorClock
+ │
+ └──► If different: local changes exist
+ │
+ ▼
+ PFAPI.getAllSyncModelData()
+ │
+ ▼
+ PfapiStoreDelegateService
+ │
+ └──► Read ALL models from NgRx via selectors
+ │
+ ▼
+ Upload to provider
+```
+
+**Key point:** PFAPI reads current state from NgRx, NOT from the operation log. The op-log is invisible to sync.
+
+## B.2 Vector Clock Bridge
+
+When `OperationLogEffects` writes an operation, it must also update META_MODEL:
+
+```typescript
+private async writeOperation(op: Operation): Promise<void> {
+ // 1. Write to SUP_OPS (Part A)
+ await this.opLogStore.appendOperation(op);
+
+ // 2. Bridge to PFAPI (Part B) - Update META_MODEL vector clock
+ // Skip if sync is in progress (database locked) - the op is already safe in SUP_OPS
+ if (!this.pfapiService.pf.isSyncInProgress) {
+ await this.pfapiService.pf.metaModel.incrementVectorClockForLocalChange(this.clientId);
+ }
+
+ // 3. Broadcast to other tabs (Part A)
+ this.multiTabCoordinator.broadcastOperation(op);
+}
+```
+
+This ensures:
+
+- PFAPI can detect "there are local changes to sync"
+- Legacy sync providers work unchanged
+- No changes needed to PFAPI sync protocol
+- **No lock errors during sync** - META_MODEL update is skipped when sync is in progress (op is still safely persisted in SUP_OPS)
+
+## B.3 Sync Download Persistence
+
+When PFAPI downloads remote data, the hydrator persists it to SUP_OPS:
+
+```typescript
+async hydrateFromRemoteSync(): Promise<void> {
+ // 1. Read synced data from 'pf' database
+ const syncedData = await this.pfapiService.pf.getAllSyncModelDataFromModelCtrls();
+
+ // 2. Create SYNC_IMPORT operation
+ const op: Operation = {
+ id: uuidv7(),
+ opType: 'SYNC_IMPORT',
+ entityType: 'ALL',
+ payload: syncedData,
+ // ...
+ };
+ await this.opLogStore.append(op, 'remote');
+
+ // 3. Force snapshot for crash safety
+ await this.opLogStore.saveStateCache({
+ state: syncedData,
+ lastAppliedOpSeq: lastSeq,
+ // ...
+ });
+
+ // 4. Dispatch to NgRx
+ this.store.dispatch(loadAllData({ appDataComplete: syncedData }));
+}
+```
+
+### loadAllData Variants
+
+```typescript
+interface LoadAllDataMeta {
+ isHydration?: boolean; // From SUP_OPS startup - skip logging
+ isRemoteSync?: boolean; // From sync download - create import op
+ isBackupImport?: boolean; // From file import - create import op
+}
+```
+
+| Source | Create Op? | Force Snapshot? |
+| -------------------- | ------------------- | --------------- |
+| Hydration (startup) | No | No |
+| Remote sync download | Yes (SYNC_IMPORT) | Yes |
+| Backup file import | Yes (BACKUP_IMPORT) | Yes |
+
+## B.4 PfapiStoreDelegateService
+
+This service reads ALL sync models from NgRx for PFAPI:
+
+```typescript
+@Injectable({ providedIn: 'root' })
+export class PfapiStoreDelegateService {
+  getAllSyncModelDataFromStore(): Promise<AllSyncModels> {
+ return firstValueFrom(
+ combineLatest([
+ this._store.select(selectTaskFeatureState),
+ this._store.select(selectProjectFeatureState),
+ this._store.select(selectTagFeatureState),
+ this._store.select(selectConfigFeatureState),
+ this._store.select(selectNoteFeatureState),
+ this._store.select(selectIssueProviderState),
+ this._store.select(selectPlannerState),
+ this._store.select(selectBoardsState),
+ this._store.select(selectMetricFeatureState),
+ this._store.select(selectSimpleCounterFeatureState),
+ this._store.select(selectTaskRepeatCfgFeatureState),
+ this._store.select(selectMenuTreeState),
+ this._store.select(selectTimeTrackingState),
+ this._store.select(selectPluginUserDataFeatureState),
+ this._store.select(selectPluginMetadataFeatureState),
+ this._store.select(selectReminderFeatureState),
+ this._store.select(selectArchiveYoungFeatureState),
+ this._store.select(selectArchiveOldFeatureState),
+ ]).pipe(first(), map(/* combine into AllSyncModels */)),
+ );
+ }
+}
+```
+
+All sync models are now in NgRx - no hybrid persistence.
+
+---
+
+# Part C: Server Sync
+
+For server-based sync, the operation log IS the sync mechanism. Individual operations are uploaded/downloaded rather than full state snapshots.
+
+## C.1 How Server Sync Differs
+
+| Aspect | Legacy Sync (Part B) | Server Sync (Part C) |
+| ------------------- | -------------------- | --------------------- |
+| What syncs | Full state snapshot | Individual operations |
+| Conflict detection | File-level LWW | Entity-level |
+| Op-log role | Not involved | IS the sync |
+| `syncedAt` tracking | Not needed | Required |
+
+## C.2 Operation Sync Protocol
+
+Providers that support operation sync implement `OperationSyncCapable`:
+
+```typescript
+interface OperationSyncCapable {
+  supportsOperationSync: true;
+  uploadOps(
+    ops: SyncOperation[],
+    clientId: string,
+    lastKnownSeq: number,
+  ): Promise<UploadOpsResponse>;
+  downloadOps(
+    sinceSeq: number,
+    clientId?: string,
+    limit?: number,
+  ): Promise<DownloadOpsResponse>;
+  acknowledgeOps(clientId: string, upToSeq: number): Promise<void>;
+  getLastServerSeq(): Promise<number>;
+  setLastServerSeq(seq: number): Promise<void>;
+}
+
+// Response type names are illustrative; their shape is inferred from the
+// upload/download flows below.
+interface UploadOpsResponse {
+  results: { opId: string; accepted: boolean }[];
+  newOps?: SyncOperation[]; // piggybacked ops from other clients
+}
+
+interface DownloadOpsResponse {
+  ops: (SyncOperation & { serverSeq: number })[];
+  hasMore: boolean;
+  latestSeq: number;
+}
+```
+
+### Upload Flow
+
+```typescript
+async uploadPendingOps(syncProvider: OperationSyncCapable): Promise<void> {
+ const pendingOps = await this.opLogStore.getUnsynced();
+
+ // Upload in batches (up to 100 ops per request)
+ for (const chunk of chunkArray(pendingOps, 100)) {
+ const response = await syncProvider.uploadOps(
+ chunk.map(entry => toSyncOperation(entry.op)),
+ clientId,
+ lastKnownServerSeq
+ );
+
+    // Mark accepted ops as synced (findEntry looks up the local log entry by op id)
+ const acceptedSeqs = response.results
+ .filter(r => r.accepted)
+ .map(r => findEntry(r.opId).seq);
+ await this.opLogStore.markSynced(acceptedSeqs);
+
+ // Process piggybacked new ops from other clients
+    if (response.newOps && response.newOps.length > 0) {
+ await this.processRemoteOps(response.newOps);
+ }
+ }
+}
+```
+
+### Download Flow
+
+```typescript
+async downloadRemoteOps(syncProvider: OperationSyncCapable): Promise<void> {
+ let sinceSeq = await syncProvider.getLastServerSeq();
+ let hasMore = true;
+
+ while (hasMore) {
+    const response = await syncProvider.downloadOps(sinceSeq, undefined, 500);
+    if (response.ops.length === 0) break; // guard against an empty page
+
+ // Filter already-applied ops
+ const newOps = response.ops.filter(op => !appliedOpIds.has(op.id));
+ await this.processRemoteOps(newOps);
+
+ sinceSeq = response.ops[response.ops.length - 1].serverSeq;
+ hasMore = response.hasMore;
+ await syncProvider.setLastServerSeq(response.latestSeq);
+ }
+}
+```
+
+## C.3 File-Based Sync Fallback
+
+For providers without API support (WebDAV/Dropbox), operations are synced via files (`OperationLogUploadService` and `OperationLogDownloadService` handle this transparently):
+
+```
+ops/
+├── manifest.json
+├── ops_CLIENT1_1701234567890.json
+├── ops_CLIENT1_1701234599999.json
+└── ops_CLIENT2_1701234600000.json
+```
+
+The manifest tracks which operation files exist. Each file contains a batch of operations. The system supports both API-based sync and this file-based fallback.
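+
+A manifest entry might look roughly like this (field names are assumptions, not the actual format):
+
+```typescript
+// Hypothetical shape of ops/manifest.json for the file-based fallback.
+interface OpsManifest {
+  files: {
+    fileName: string; // e.g. 'ops_CLIENT1_1701234567890.json'
+    clientId: string;
+    opCount: number;
+    createdAt: number; // epoch ms, also encoded in the file name
+  }[];
+  updatedAt: number;
+}
+```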
+
+## C.4 Conflict Detection
+
+Conflicts are detected using vector clocks at the entity level:
+
+```typescript
+async detectConflicts(
+  remoteOps: Operation[],
+): Promise<{ nonConflicting: Operation[]; conflicts: EntityConflict[] }> {
+  const localPendingByEntity = await this.opLogStore.getUnsyncedByEntity();
+  const appliedFrontierByEntity = await this.opLogStore.getEntityFrontier();
+  const nonConflicting: Operation[] = [];
+  const conflicts: EntityConflict[] = [];
+
+ for (const remoteOp of remoteOps) {
+ const entityKey = `${remoteOp.entityType}:${remoteOp.entityId}`;
+ const localFrontier = mergeClocks(
+ appliedFrontierByEntity.get(entityKey),
+      ...(localPendingByEntity.get(entityKey)?.map(op => op.vectorClock) ?? [])
+ );
+
+ const comparison = compareVectorClocks(localFrontier, remoteOp.vectorClock);
+ if (comparison === VectorClockComparison.CONCURRENT) {
+ conflicts.push({
+ entityType: remoteOp.entityType,
+ entityId: remoteOp.entityId,
+ localOps: localPendingByEntity.get(entityKey) || [],
+ remoteOps: [remoteOp],
+ suggestedResolution: 'manual'
+ });
+ } else {
+ nonConflicting.push(remoteOp);
+ }
+ }
+
+ return { nonConflicting, conflicts };
+}
+```
+
+## C.5 Conflict Resolution
+
+Conflicts are presented to the user via `ConflictResolutionService`:
+
+```typescript
+async presentConflicts(conflicts: EntityConflict[]): Promise<void> {
+ const dialogRef = this.dialog.open(DialogConflictResolutionComponent, {
+ data: { conflicts },
+ disableClose: true
+ });
+
+ const result = await firstValueFrom(dialogRef.afterClosed());
+
+ if (result.resolution === 'remote') {
+ // Apply remote ops, overwrite local state
+ for (const conflict of conflicts) {
+ await this.operationApplier.applyOperations(conflict.remoteOps);
+ }
+ // Mark local ops as rejected so they won't be re-synced
+ const localOpIds = conflicts.flatMap(c => c.localOps.map(op => op.id));
+ await this.opLogStore.markRejected(localOpIds);
+ } else {
+ // Keep local ops, ignore remote
+ }
+}
+```
+
+### Rejected Operations
+
+When the user chooses "remote" resolution, local conflicting operations are marked with `rejectedAt` timestamp:
+
+- Rejected ops remain in the log for history/debugging
+- `getUnsynced()` excludes rejected ops, so they won't be re-uploaded (see the sketch below)
+- Compaction may eventually delete old rejected ops
+
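+A plausible shape for that filter (store internals are assumptions; the real implementation can use the `syncedAt` index):
+
+```typescript
+// Hypothetical getUnsynced(): pending = never synced AND never rejected.
+async getUnsynced(): Promise<OperationLogEntry[]> {
+  const entries = await this.opsTable.toArray(); // Dexie-style table assumed
+  return entries.filter((e) => !e.syncedAt && !e.rejectedAt);
+}
+```
+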
+## C.6 Dependency Resolution
+
+Operations may have dependencies (e.g., subtask requires parent task):
+
+```typescript
+interface OperationDependency {
+ entityType: EntityType;
+ entityId: string;
+ mustExist: boolean; // Hard dependency
+ relation: 'parent' | 'reference';
+}
+
+// Operations with missing hard dependencies are queued for retry
+// After MAX_RETRY_ATTEMPTS (3), they're marked as permanently failed
+```
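+
+The retry handling could be sketched like this (queue shape and helper names are assumptions):
+
+```typescript
+// Hypothetical retry queue for ops whose hard dependencies are missing.
+const MAX_RETRY_ATTEMPTS = 3;
+
+async processRetryQueue(): Promise<void> {
+  for (const pending of [...this.retryQueue]) {
+    const deps = this.dependencyResolver.extractDependencies(pending.op);
+    const missing = deps.filter(
+      (d) => d.mustExist && !this.entityExists(d.entityType, d.entityId),
+    );
+
+    if (missing.length === 0) {
+      await this.operationApplier.applyOperations([pending.op]);
+      this.retryQueue.delete(pending);
+    } else if (++pending.attempts >= MAX_RETRY_ATTEMPTS) {
+      pending.failedPermanently = true; // kept for debugging, never applied
+      this.retryQueue.delete(pending);
+    }
+  }
+}
+```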
+
+---
+
+# Part D: Data Validation & Repair
+
+The operation log integrates with PFAPI's validation and repair system to prevent data corruption and automatically recover from invalid states.
+
+## D.1 Validation Architecture
+
+Four validation checkpoints ensure data integrity throughout the operation lifecycle:
+
+| Checkpoint | Location | When | Action on Failure |
+| ---------- | ----------------------------------- | ------------------------- | ------------------------------------------ |
+| **A** | `operation-log.effects.ts` | Before IndexedDB write | Reject operation, log error, show snackbar |
+| **B** | `operation-log-hydrator.service.ts` | After loading snapshot | Attempt repair, create REPAIR op |
+| **C** | `operation-log-hydrator.service.ts` | After replaying tail ops | Attempt repair, create REPAIR op |
+| **D** | `operation-log-sync.service.ts` | After applying remote ops | Attempt repair, create REPAIR op |
+
+## D.2 REPAIR Operation Type
+
+When validation fails at checkpoints B, C, or D, the system attempts automatic repair using PFAPI's `dataRepair()` function. If repair succeeds, a REPAIR operation is created:
+
+```typescript
+// 'REPAIR' extends the OpType union defined in operation.types.ts (see A.2)
+type OpType =
+  // ... existing types
+  | 'REPAIR'; // Auto-repair operation with full repaired state
+
+interface RepairPayload {
+ appDataComplete: AppDataCompleteNew; // Full repaired state
+ repairSummary: RepairSummary; // What was fixed
+}
+
+interface RepairSummary {
+ entityStateFixed: number; // Fixed ids/entities array sync
+ orphanedEntitiesRestored: number; // Tasks restored from archive
+ invalidReferencesRemoved: number; // Non-existent project/tag IDs removed
+ relationshipsFixed: number; // Project/tag ID consistency
+ structureRepaired: number; // Menu tree, inbox project creation
+ typeErrorsFixed: number; // Typia errors auto-fixed
+}
+```
+
+### REPAIR Operation Behavior
+
+- **During replay**: REPAIR operations load state directly (like SyncImport), skipping prior operations
+- **User notification**: Shows snackbar with count of issues fixed
+- **Audit trail**: REPAIR operations are visible in the operation log for debugging
+
+## D.3 Checkpoint A: Payload Validation
+
+Before writing to IndexedDB, operation payloads are validated in `validate-operation-payload.ts`:
+
+```typescript
+validateOperationPayload(op: Operation): PayloadValidationResult {
+ // 1. Structural validation - payload must be object
+ // 2. OpType-specific validation:
+ // - CREATE: entity with valid 'id' field required
+ // - UPDATE: id + changes, or entity with id required
+ // - DELETE: entityId/entityIds required
+ // - MOVE: ids array required
+ // - BATCH: non-empty payload required
+ // - SYNC_IMPORT/BACKUP_IMPORT: appDataComplete structure required
+ // - REPAIR: skip (internally generated)
+}
+```
+
+This validation is **intentionally lenient** - it checks structural requirements rather than deep entity validation. Full Typia validation happens at state checkpoints.
+
+## D.4 Checkpoints B & C: Hydration Validation
+
+During hydration, state is validated at two points:
+
+```
+App Startup
+ │
+ ▼
+Load snapshot from state_cache
+ │
+ ├──► CHECKPOINT B: Validate snapshot
+ │ │
+ │ └──► If invalid: repair + create REPAIR op
+ │
+ ▼
+Dispatch loadAllData(snapshot)
+ │
+ ▼
+Replay tail operations
+ │
+ └──► CHECKPOINT C: Validate current state
+ │
+ └──► If invalid: repair + create REPAIR op + dispatch repaired state
+```
+
+### Implementation
+
+```typescript
+// In operation-log-hydrator.service.ts
+private async _validateAndRepairState(state: AppDataCompleteNew): Promise<AppDataCompleteNew> {
+ if (this._isRepairInProgress) return state; // Prevent infinite loops
+
+ const result = this.validateStateService.validateAndRepair(state);
+ if (!result.wasRepaired) return state;
+
+ this._isRepairInProgress = true;
+ try {
+ await this.repairOperationService.createRepairOperation(
+ result.repairedState,
+ result.repairSummary,
+ );
+ return result.repairedState;
+ } finally {
+ this._isRepairInProgress = false;
+ }
+}
+```
+
+## D.5 Checkpoint D: Post-Sync Validation
+
+After applying remote operations, state is validated:
+
+- In `operation-log-sync.service.ts` - after applying non-conflicting ops (when no conflicts)
+- In `conflict-resolution.service.ts` - after resolving all conflicts
+
+This catches:
+
+- State drift from remote operations
+- Corruption introduced during sync
+- Invalid operations from other clients
+
+## D.6 ValidateStateService
+
+Wraps PFAPI's validation and repair functionality:
+
+```typescript
+@Injectable({ providedIn: 'root' })
+export class ValidateStateService {
+ validateState(state: AppDataCompleteNew): StateValidationResult {
+ // 1. Run Typia schema validation
+ const typiaResult = validateAllData(state);
+
+ // 2. Run cross-model relationship validation
+ // NOTE: isRelatedModelDataValid errors are now caught and treated as validation failures
+ // rather than crashing, allowing validateAndRepair to trigger dataRepair.
+ let isRelatedValid = true;
+ try {
+ isRelatedValid = isRelatedModelDataValid(state);
+ } catch (e) {
+ PFLog.warn(
+ 'isRelatedModelDataValid threw an error, treating as validation failure',
+ e,
+ );
+ isRelatedValid = false;
+ }
+
+    return {
+      // typiaResult shape assumed: { success, errors } as in Typia's IValidation
+      isValid: typiaResult.success && isRelatedValid,
+      typiaErrors: typiaResult.success ? [] : typiaResult.errors,
+      crossModelError: !isRelatedValid
+        ? 'isRelatedModelDataValid failed or threw'
+        : undefined,
+    };
+ }
+
+ validateAndRepair(state: AppDataCompleteNew): ValidateAndRepairResult {
+ // 1. Validate
+ // 2. If invalid: run dataRepair()
+ // 3. Re-validate repaired state
+ // 4. Return repaired state + summary
+ }
+}
+```
+
+## D.7 RepairOperationService
+
+Creates REPAIR operations and notifies the user:
+
+```typescript
+@Injectable({ providedIn: 'root' })
+export class RepairOperationService {
+ async createRepairOperation(
+ repairedState: AppDataCompleteNew,
+ repairSummary: RepairSummary,
+  ): Promise<void> {
+ // 1. Create REPAIR operation with repaired state + summary
+ // 2. Append to operation log
+ // 3. Save state cache snapshot
+ // 4. Show notification to user
+ }
+
+ static createEmptyRepairSummary(): RepairSummary {
+ return {
+ entityStateFixed: 0,
+ orphanedEntitiesRestored: 0,
+ invalidReferencesRemoved: 0,
+ relationshipsFixed: 0,
+ structureRepaired: 0,
+ typeErrorsFixed: 0,
+ };
+ }
+}
+```
+
+---
+
+# Edge Cases & Missing Considerations
+
+This section documents known edge cases and areas requiring further design or implementation.
+
+## Storage & Resource Limits
+
+### IndexedDB Quota Exhaustion
+
+**Status:** Not Handled
+
+When IndexedDB storage quota is exceeded:
+
+- **Current behavior**: Write fails, error thrown
+- **Desired behavior**: Graceful degradation with user notification
+- **Proposed solution** (sketched after this list):
+ 1. Catch `QuotaExceededError` in `OperationLogStore`
+ 2. Trigger emergency compaction (delete old synced ops regardless of retention window)
+ 3. If still failing, show user dialog with options: clear old data, export backup, or sync and clear
+
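+A sketch of steps 1-2 (`QuotaExceededError` is the real DOMException name; the compaction option is an assumption):
+
+```typescript
+// Hypothetical quota handling around the IndexedDB append.
+try {
+  await this.opLogStore.appendOperation(op);
+} catch (e) {
+  if (e instanceof DOMException && e.name === 'QuotaExceededError') {
+    // Emergency compaction: drop old synced ops regardless of retention window.
+    await this.compactionService.compact({ ignoreRetentionWindow: true }); // option assumed
+    await this.opLogStore.appendOperation(op); // retry once
+  } else {
+    throw e;
+  }
+}
+```
+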
+### Compaction Trigger Coordination
+
+**Status:** Implemented ✅
+
+The 500-ops compaction trigger uses a persistent counter stored in `state_cache.compactionCounter`:
+
+- Counter is shared across tabs via IndexedDB
+- Counter persists across app restarts
+- Counter is reset after successful compaction
+- Web Locks still prevent concurrent compaction execution
+
+## Data Integrity Edge Cases
+
+### Genesis Migration with Partial Data
+
+**Status:** Not Fully Defined
+
+What if data exists in both `pf` AND `SUP_OPS` databases?
+
+- **Scenario**: Interrupted migration, partial data in each store
+- **Current behavior**: If `SUP_OPS.state_cache` exists, use it; ignore `pf`
+- **Risk**: May lose newer data that was written to `pf` after partial migration
+- **Proposed solution**: Compare timestamps, merge if necessary, or prompt user
+
+### Compaction During Active Sync
+
+**Status:** Handled via Locks
+
+- Compaction acquires `sp_op_log_compact` lock
+- Sync operations use separate locks
+- **Verified safe**: Compaction only deletes ops with `syncedAt` set, so unsynced ops from active sync are preserved
+
+---
+
+# Implementation Status
+
+## Part A: Local Persistence
+
+### Complete ✅
+
+- SUP_OPS IndexedDB store (ops + state_cache)
+- NgRx effect capture with isPersistent pattern
+- Snapshot + tail replay hydration
+- Multi-tab BroadcastChannel coordination
+- Web Locks + localStorage fallback
+- Genesis migration from legacy data
+- Compaction with 7-day retention window
+- Disaster recovery from legacy 'pf' database
+- Schema migration service infrastructure (no migrations defined yet)
+- Persistent action metadata on all model actions
+- Rollback notification on persistence failure (shows snackbar with reload action)
+- Hydration optimizations (skip replay for SyncImport, save snapshot after >10 ops replayed)
+- **Migration safety backup (A.7.12)** - Creates backup before migration, restores on failure
+- **Tail ops migration (A.7.13)** - Migrates operations during hydration before replay
+- **Unified migration interface (A.7.15)** - `SchemaMigration` includes both `migrateState` and optional `migrateOperation`
+- **Persistent compaction counter** - Counter stored in `state_cache`, shared across tabs/restarts
+- **`syncedAt` index** - Index on ops store for faster `getUnsynced()` queries
+
+### Not Implemented ⚠️
+
+| Item | Section | Risk if Missing |
+| ------------------------------- | ------- | ---------------------------------------- |
+| **Conflict-aware op migration** | A.7.11 | Conflicts may compare mismatched schemas |
+
+> **Note**: A.7.11 (conflict-aware migration) is only needed when Part C (Server Sync) is implemented. The local migration system (A.7.12, A.7.13, A.7.15) is complete.
+
+## Part B: Legacy Sync Bridge
+
+### Complete ✅
+
+- `PfapiStoreDelegateService` (reads all NgRx models for sync)
+- META_MODEL vector clock update (B.2)
+- Sync download persistence via `hydrateFromRemoteSync()` (B.3)
+- All models in NgRx (no hybrid persistence)
+- Skip META_MODEL update during sync (prevents lock errors)
+
+## Part C: Server Sync
+
+### Complete ✅
+
+- Operation sync protocol interface (`OperationSyncCapable`)
+- `OperationLogSyncService` (orchestration, processRemoteOps, detectConflicts)
+- `OperationLogUploadService` (API upload + file-based fallback)
+- `OperationLogDownloadService` (API download + file-based fallback)
+- Entity-level conflict detection (vector clock comparisons)
+- `ConflictResolutionService` (UI presentation + apply resolutions)
+- `DependencyResolverService` (extract/check dependencies)
+- `OperationApplierService` (retry queue + topological sort)
+- Rejected operation tracking (`rejectedAt` field)
+
+> **Clarification**: Part C describes the server-based sync system. The core is fully implemented; it is currently used by the new sync providers (when configured) alongside the legacy sync bridge.
+
+## Part D: Validation & Repair
+
+### Complete ✅
+
+- Payload validation at write (Checkpoint A - structural validation before IndexedDB write)
+- State validation during hydration (Checkpoints B & C - Typia + cross-model validation)
+- Post-sync validation (Checkpoint D - validation after applying remote ops)
+- REPAIR operation type (auto-repair with full state + repair summary)
+- ValidateStateService (wraps PFAPI validation + repair)
+- RepairOperationService (creates REPAIR ops, user notification)
+- User notification on repair (snackbar with issue count)
+
+## Future Enhancements 🔮
+
+| Component | Description | Priority |
+| -------------- | ------------------------------------------ | -------- |
+| Auto-merge | Automatic merge for non-conflicting fields | Low |
+| Undo/Redo | Leverage op-log for undo history | Low |
+| Quota handling | Graceful degradation on storage exhaustion | Medium |
+
+> **Recently Completed:** `syncedAt` index (for faster getUnsynced()) and persistent compaction counter (tracks ops across tabs/restarts) are now implemented.
+
+---
+
+# File Reference
+
+```
+src/app/core/persistence/operation-log/
+├── operation.types.ts # Type definitions (Operation, OpType, EntityType)
+├── operation-log.const.ts # Constants
+├── operation-log.effects.ts # Action capture + META_MODEL bridge
+├── operation-converter.util.ts # Op ↔ Action conversion
+├── persistent-action.interface.ts # PersistentAction type + isPersistentAction guard
+├── entity-key.util.ts # Entity key generation utilities
+├── store/
+│ ├── operation-log-store.service.ts # SUP_OPS IndexedDB wrapper
+│ ├── operation-log-hydrator.service.ts # Startup hydration
+│ ├── operation-log-compaction.service.ts # Snapshot + cleanup
+│ ├── operation-log-migration.service.ts # Genesis migration from legacy
+│ └── schema-migration.service.ts # State schema migrations
+├── sync/
+│ ├── operation-log-sync.service.ts # Upload/download operations (Part C)
+│ ├── lock.service.ts # Cross-tab locking (Web Locks + fallback)
+│ ├── multi-tab-coordinator.service.ts # BroadcastChannel coordination
+│ ├── dependency-resolver.service.ts # Extract/check operation dependencies
+│ └── conflict-resolution.service.ts # Conflict UI presentation
+└── processing/
+ ├── operation-applier.service.ts # Apply ops to store with dependency handling
+ ├── validate-state.service.ts # Typia + cross-model validation wrapper
+ ├── validate-operation-payload.ts # Checkpoint A - payload validation
+ └── repair-operation.service.ts # REPAIR operation creation + notification
+
+src/app/pfapi/
+├── pfapi-store-delegate.service.ts # Reads NgRx for sync (Part B)
+└── pfapi.service.ts # Sync orchestration
+```
+
+---
+
+# References
+
+- [Execution Plan](./operation-log-execution-plan.md) - Implementation tasks
+- [PFAPI Architecture](./pfapi-sync-persistence-architecture.md) - Legacy sync system
+- [Server Sync Architecture](./server-sync-architecture.md) - Server-based sync details
diff --git a/src/app/core/persistence/operation-log/docs/pfapi-sync-persistence-architecture.md b/src/app/core/persistence/operation-log/docs/pfapi-sync-persistence-architecture.md
new file mode 100644
index 000000000..57af7a35a
--- /dev/null
+++ b/src/app/core/persistence/operation-log/docs/pfapi-sync-persistence-architecture.md
@@ -0,0 +1,860 @@
+# PFAPI Sync and Persistence Architecture
+
+This document describes the architecture and implementation of the persistence and synchronization system (PFAPI) in Super Productivity.
+
+## Overview
+
+PFAPI (Persistence Framework API) is a comprehensive system for:
+
+1. **Local Persistence**: Storing application data in IndexedDB
+2. **Cross-Device Synchronization**: Syncing data across devices via multiple cloud providers
+3. **Conflict Detection**: Using vector clocks for distributed conflict detection
+4. **Data Validation & Migration**: Ensuring data integrity across versions
+
+## Architecture Layers
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Angular Application │
+│ (Components & Services) │
+└────────────────────────────┬────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ PfapiService (Angular) │
+│ - Injectable wrapper around Pfapi │
+│ - Exposes RxJS Observables for UI integration │
+│ - Manages sync provider activation │
+└────────────────────────────┬────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Pfapi (Core) │
+│ - Main orchestrator for all persistence operations │
+│ - Coordinates Database, Models, Sync, and Migration │
+└────────────────────────────┬────────────────────────────────────┘
+ │
+ ┌────────────────────┼────────────────────┐
+ │ │ │
+ ▼ ▼ ▼
+┌───────────────┐ ┌───────────────┐ ┌───────────────┐
+│ Database │ │ SyncService │ │ Migration │
+│ (IndexedDB) │ │ (Orchestrator)│ │ Service │
+└───────────────┘ └───────┬───────┘ └───────────────┘
+ │
+ ┌────────────┼────────────┐
+ │ │ │
+ ▼ ▼ ▼
+ ┌──────────┐ ┌───────────┐ ┌───────────┐
+ │ Meta │ │ Model │ │ Encrypt/ │
+ │ Sync │ │ Sync │ │ Compress │
+ └──────────┘ └───────────┘ └───────────┘
+ │ │
+ └────────────┼────────────┐
+ │ │
+ ▼ ▼
+ ┌───────────────────────────┐
+ │ SyncProvider Interface │
+ └───────────────┬───────────┘
+ │
+ ┌───────────────────────────┼───────────────────────────┐
+ │ │ │
+ ▼ ▼ ▼
+┌───────────────┐ ┌───────────────┐ ┌───────────────┐
+│ Dropbox │ │ WebDAV │ │ Local File │
+└───────────────┘ └───────────────┘ └───────────────┘
+```
+
+## Directory Structure
+
+```
+src/app/pfapi/
+├── pfapi.service.ts # Angular service wrapper
+├── pfapi-config.ts # Model and provider configuration
+├── pfapi-helper.ts # RxJS integration helpers
+├── api/
+│ ├── pfapi.ts # Main API class
+│ ├── pfapi.model.ts # Type definitions
+│ ├── pfapi.const.ts # Enums and constants
+│ ├── db/ # Database abstraction
+│ │ ├── database.ts # Database wrapper with locking
+│ │ ├── database-adapter.model.ts
+│ │ └── indexed-db-adapter.ts # IndexedDB implementation
+│ ├── model-ctrl/ # Model controllers
+│ │ ├── model-ctrl.ts # Generic model controller
+│ │ └── meta-model-ctrl.ts # Metadata controller
+│ ├── sync/ # Sync orchestration
+│ │ ├── sync.service.ts # Main sync orchestrator
+│ │ ├── meta-sync.service.ts # Metadata sync
+│ │ ├── model-sync.service.ts # Model sync
+│ │ ├── sync-provider.interface.ts
+│ │ ├── encrypt-and-compress-handler.service.ts
+│ │ └── providers/ # Provider implementations
+│ ├── migration/ # Data migration
+│ ├── util/ # Utilities (vector-clock, etc.)
+│ └── errors/ # Custom error types
+├── migrate/ # Cross-model migrations
+├── repair/ # Data repair utilities
+└── validate/ # Validation functions
+```
+
+## Core Components
+
+### 1. Database Layer
+
+#### Database Class (`api/db/database.ts`)
+
+The `Database` class wraps the storage adapter and provides:
+
+- **Locking mechanism**: Prevents concurrent writes during sync
+- **Error handling**: Centralized error management
+- **CRUD operations**: `load`, `save`, `remove`, `loadAll`, `clearDatabase`
+
+```typescript
+class Database {
+  lock(): void; // Prevents writes
+  unlock(): void; // Re-enables writes
+  load<T>(key: string): Promise<T>;
+  save<T>(key: string, data: T, isIgnoreDBLock?: boolean): Promise<void>;
+  remove(key: string): Promise<void>;
+}
+```
+
+The database is locked during sync operations to prevent race conditions.
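+
+To make the contract concrete, here is a self-contained sketch of the lock-around-sync pattern (the `runSync` callback and the error type are illustrative, not the actual implementation):
+
+```typescript
+class DBLockError extends Error {}
+
+class LockableStore {
+  private _isLocked = false;
+  private readonly _data = new Map<string, unknown>();
+
+  lock(): void {
+    this._isLocked = true;
+  }
+
+  unlock(): void {
+    this._isLocked = false;
+  }
+
+  async save(key: string, data: unknown, isIgnoreDBLock = false): Promise<void> {
+    // App writes are rejected while sync holds the lock;
+    // the sync code's own writes opt out via isIgnoreDBLock.
+    if (this._isLocked && !isIgnoreDBLock) {
+      throw new DBLockError(`DB locked, refusing write for "${key}"`);
+    }
+    this._data.set(key, data);
+  }
+}
+
+async function syncWithLock(db: LockableStore, runSync: () => Promise<void>): Promise<void> {
+  db.lock();
+  try {
+    await runSync();
+  } finally {
+    db.unlock(); // always re-enable writes, even if sync throws
+  }
+}
+```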
+
+#### IndexedDB Adapter (`api/db/indexed-db-adapter.ts`)
+
+Implements `DatabaseAdapter` interface using IndexedDB:
+
+- Database name: `'pf'`
+- Main store: `'main'`
+- Uses the `idb` library for async IndexedDB operations
+
+```typescript
+class IndexedDbAdapter implements DatabaseAdapter {
+  async init(): Promise<void>; // Opens/creates database
+  async load<T>(key: string): Promise<T>; // db.get(store, key)
+  async save<T>(key: string, data: T): Promise<void>; // db.put(store, data, key)
+  async remove(key: string): Promise<void>; // db.delete(store, key)
+  async loadAll(): Promise<Record<string, unknown>>; // Returns all entries as object
+  async clearDatabase(): Promise<void>; // db.clear(store)
+}
+```
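+
+For orientation, opening that database with the `idb` library looks roughly like this (a sketch; the real adapter's upgrade logic may differ):
+
+```typescript
+import { openDB, IDBPDatabase } from 'idb';
+
+// Open (or create) the single-store database used by the adapter.
+async function openPfDb(): Promise<IDBPDatabase> {
+  return openDB('pf', 1, {
+    upgrade(db) {
+      // One object store holds every model and system key.
+      if (!db.objectStoreNames.contains('main')) {
+        db.createObjectStore('main');
+      }
+    },
+  });
+}
+
+// Usage mirroring the method comments above:
+async function demo(): Promise<void> {
+  const db = await openPfDb();
+  await db.put('main', { ids: [], entities: {} }, 'task'); // save
+  const task = await db.get('main', 'task'); // load
+  await db.delete('main', 'task'); // remove
+  console.log(task);
+}
+```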
+
+## Local Storage Structure (IndexedDB)
+
+All data is stored in a single IndexedDB database with one object store. Each entry is keyed by a string identifier.
+
+### IndexedDB Keys
+
+#### System Keys
+
+| Key | Content | Description |
+| --------------------- | ------------------------- | ------------------------------------------------------- |
+| `__meta_` | `LocalMeta` | Sync metadata (vector clock, revMap, timestamps) |
+| `__client_id_` | `string` | Unique client identifier (e.g., `"BCL1234567890_12_5"`) |
+| `__sp_cred_Dropbox` | `DropboxPrivateCfg` | Dropbox credentials |
+| `__sp_cred_WebDAV` | `WebdavPrivateCfg` | WebDAV credentials |
+| `__sp_cred_LocalFile` | `LocalFileSyncPrivateCfg` | Local file sync config |
+| `__TMP_BACKUP` | `AllSyncModels` | Temporary backup during imports |
+
+#### Model Keys (all defined in `pfapi-config.ts`)
+
+| Key | Content | Main File | Description |
+| ---------------- | --------------------- | --------- | ----------------------------- |
+| `task` | `TaskState` | Yes | Tasks data (EntityState) |
+| `timeTracking` | `TimeTrackingState` | Yes | Time tracking records |
+| `project` | `ProjectState` | Yes | Projects (EntityState) |
+| `tag` | `TagState` | Yes | Tags (EntityState) |
+| `simpleCounter` | `SimpleCounterState` | Yes | Simple counters (EntityState) |
+| `note` | `NoteState` | Yes | Notes (EntityState) |
+| `taskRepeatCfg` | `TaskRepeatCfgState` | Yes | Recurring task configs |
+| `reminders` | `Reminder[]` | Yes | Reminder array |
+| `planner` | `PlannerState` | Yes | Planner state |
+| `boards` | `BoardsState` | Yes | Kanban boards |
+| `menuTree` | `MenuTreeState` | No | Menu structure |
+| `globalConfig` | `GlobalConfigState` | No | User settings |
+| `issueProvider` | `IssueProviderState` | No | Issue tracker configs |
+| `metric` | `MetricState` | No | Metrics (EntityState) |
+| `improvement` | `ImprovementState` | No | Improvements (EntityState) |
+| `obstruction` | `ObstructionState` | No | Obstructions (EntityState) |
+| `pluginUserData` | `PluginUserDataState` | No | Plugin user data |
+| `pluginMetadata` | `PluginMetaDataState` | No | Plugin metadata |
+| `archiveYoung` | `ArchiveModel` | No | Recent archived tasks |
+| `archiveOld` | `ArchiveModel` | No | Old archived tasks |
+
+### Local Storage Diagram
+
+```
+┌──────────────────────────────────────────────────────────────────┐
+│ IndexedDB: "pf" │
+│ Store: "main" │
+├──────────────────────┬───────────────────────────────────────────┤
+│ Key │ Value │
+├──────────────────────┼───────────────────────────────────────────┤
+│ __meta_ │ { lastUpdate, vectorClock, revMap, ... } │
+│ __client_id_ │ "BCLm1abc123_12_5" │
+│ __sp_cred_Dropbox │ { accessToken, refreshToken, encryptKey } │
+│ __sp_cred_WebDAV │ { url, username, password, encryptKey } │
+├──────────────────────┼───────────────────────────────────────────┤
+│ task │ { ids: [...], entities: {...} } │
+│ project │ { ids: [...], entities: {...} } │
+│ tag │ { ids: [...], entities: {...} } │
+│ note │ { ids: [...], entities: {...} } │
+│ globalConfig │ { misc: {...}, keyboard: {...}, ... } │
+│ timeTracking │ { ... } │
+│ planner │ { ... } │
+│ boards │ { ... } │
+│ archiveYoung │ { task: {...}, timeTracking: {...} } │
+│ archiveOld │ { task: {...}, timeTracking: {...} } │
+│ ... │ ... │
+└──────────────────────┴───────────────────────────────────────────┘
+```
+
+### How Models Are Saved Locally
+
+When a model is saved via `ModelCtrl.save()`:
+
+```typescript
+// 1. Data is validated
+if (modelCfg.validate) {
+  const result = modelCfg.validate(data);
+  if (!result.success && modelCfg.repair) {
+    data = modelCfg.repair(data); // Auto-repair if possible
+  }
+}
+
+// 2. Metadata is updated (if requested via isUpdateRevAndLastUpdate)
+// Always:
+vectorClock = incrementVectorClock(vectorClock, clientId);
+lastUpdate = Date.now();
+
+// Only for NON-main-file models (isMainFileModel: false):
+if (!modelCfg.isMainFileModel) {
+  revMap[modelId] = Date.now().toString();
+}
+// Main file models are tracked via mainModelData in the meta file, not revMap
+
+// 3. Data is saved to IndexedDB
+await db.put('main', data, modelId); // e.g., key='task', value=TaskState
+```
+
+**Important distinction:**
+
+- **Main file models** (`isMainFileModel: true`): Vector clock is incremented, but `revMap` is NOT updated. These models are embedded in `mainModelData` within the meta file.
+- **Separate model files** (`isMainFileModel: false`): Both vector clock and `revMap` are updated. The `revMap` entry tracks the revision of the individual remote file.
+
+### 2. Model Control Layer
+
+#### ModelCtrl (`api/model-ctrl/model-ctrl.ts`)
+
+Generic controller for each data model (tasks, projects, tags, etc.):
+
+```typescript
+class ModelCtrl<MT> {
+  save(
+    data: MT,
+    options?: {
+      isUpdateRevAndLastUpdate: boolean;
+      isIgnoreDBLock?: boolean;
+    },
+  ): Promise<void>;
+
+  load(): Promise<MT>;
+  remove(): Promise<void>;
+}
+```
+
+Key behaviors:
+
+- **Validation on save**: Uses Typia for runtime type checking
+- **Auto-repair**: Attempts to repair invalid data if `repair` function is provided
+- **In-memory caching**: Keeps data in memory for fast reads
+- **Revision tracking**: Updates metadata on save when `isUpdateRevAndLastUpdate` is true
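+
+A typical call site might look like the following sketch (the structural interface stands in for the real class):
+
+```typescript
+interface ModelCtrlLike<MT> {
+  save(data: MT, options?: { isUpdateRevAndLastUpdate: boolean }): Promise<void>;
+  load(): Promise<MT>;
+}
+
+// Persist a user-driven change: bumping revision metadata tells the
+// next sync pass that there is something to upload.
+async function persistChange<MT>(ctrl: ModelCtrlLike<MT>, updated: MT): Promise<MT> {
+  await ctrl.save(updated, { isUpdateRevAndLastUpdate: true });
+  // Subsequent reads are served from the in-memory cache.
+  return ctrl.load();
+}
+```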
+
+#### MetaModelCtrl (`api/model-ctrl/meta-model-ctrl.ts`)
+
+Manages synchronization metadata:
+
+```typescript
+interface LocalMeta {
+  lastUpdate: number; // Timestamp of last local change
+  lastSyncedUpdate: number | null; // Timestamp of last sync
+  metaRev: string | null; // Remote metadata revision
+  vectorClock: VectorClock; // Client-specific clock values
+  lastSyncedVectorClock: VectorClock | null;
+  revMap: RevMap; // Model ID -> revision mapping
+  crossModelVersion: number; // Data schema version
+}
+```
+
+Key responsibilities:
+
+- **Client ID management**: Generates and stores unique client identifiers
+- **Vector clock updates**: Increments on local changes
+- **Revision map tracking**: Tracks which model versions are synced
+
+### 3. Sync Service Layer
+
+#### SyncService (`api/sync/sync.service.ts`)
+
+Main sync orchestrator. The `sync()` method:
+
+1. **Check readiness**: Verify sync provider is configured and authenticated
+2. **Operation log sync**: Upload/download operation logs (new feature)
+3. **Early return check**: If `lastSyncedUpdate === lastUpdate` and meta revision matches, return `InSync`
+4. **Download remote metadata**: Get current remote state
+5. **Determine sync direction**: Compare local and remote states using `getSyncStatusFromMetaFiles`
+6. **Execute sync**: Upload, download, or report conflict
+
+```typescript
+async sync(): Promise<{ status: SyncStatus; conflictData?: ConflictData }>
+```
+
+Possible sync statuses:
+
+- `InSync` - No changes needed
+- `UpdateLocal` - Download needed (remote is newer)
+- `UpdateRemote` - Upload needed (local is newer)
+- `UpdateLocalAll` / `UpdateRemoteAll` - Full sync needed
+- `Conflict` - Concurrent changes detected
+- `NotConfigured` - No sync provider set
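+
+A hedged sketch of how a caller might branch on that result (handler bodies are placeholders, and the statuses are modeled as a string union here):
+
+```typescript
+type SyncStatus =
+  | 'InSync'
+  | 'UpdateLocal'
+  | 'UpdateRemote'
+  | 'UpdateLocalAll'
+  | 'UpdateRemoteAll'
+  | 'Conflict'
+  | 'NotConfigured';
+
+async function handleSyncResult(status: SyncStatus): Promise<void> {
+  switch (status) {
+    case 'InSync':
+      return; // nothing to do
+    case 'UpdateLocal':
+    case 'UpdateLocalAll':
+      // download remote models and overwrite local state
+      break;
+    case 'UpdateRemote':
+    case 'UpdateRemoteAll':
+      // upload local changes and update remote metadata
+      break;
+    case 'Conflict':
+      // surface a dialog so the user chooses local or remote
+      break;
+    case 'NotConfigured':
+      // prompt the user to configure a sync provider
+      break;
+  }
+}
+```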
+
+#### MetaSyncService (`api/sync/meta-sync.service.ts`)
+
+Handles metadata file operations:
+
+- `download()`: Gets remote metadata, checks for locks
+- `upload()`: Uploads metadata with encryption
+- `lock()`: Creates a lock file during multi-file upload
+- `getRev()`: Gets remote metadata revision
+
+#### ModelSyncService (`api/sync/model-sync.service.ts`)
+
+Handles individual model file operations:
+
+- `upload()`: Uploads a model with encryption
+- `download()`: Downloads a model with revision verification
+- `remove()`: Deletes a remote model file
+- `getModelIdsToUpdateFromRevMaps()`: Determines which models need syncing
+
+### 4. Vector Clock System
+
+#### Purpose
+
+Vector clocks provide **causality-based conflict detection** for distributed systems. Unlike simple timestamps:
+
+- They detect **concurrent changes** (true conflicts)
+- They preserve **happened-before relationships**
+- They work without synchronized clocks
+
+#### Implementation (`api/util/vector-clock.ts`)
+
+```typescript
+interface VectorClock {
+  [clientId: string]: number; // Maps client ID to update count
+}
+
+enum VectorClockComparison {
+  EQUAL, // Same state
+  LESS_THAN, // A happened before B
+  GREATER_THAN, // B happened before A
+  CONCURRENT, // True conflict - both changed independently
+}
+```
+
+Key operations:
+
+- `incrementVectorClock(clock, clientId)` - Increment on local change
+- `mergeVectorClocks(a, b)` - Take max of each component
+- `compareVectorClocks(a, b)` - Determine relationship
+- `hasVectorClockChanges(current, reference)` - Check for local changes
+- `limitVectorClockSize(clock, clientId)` - Prune to max 50 clients
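+
+These operations are small enough to sketch in full. The following re-declares the types from above so the snippet is self-contained; the real utilities additionally handle size limiting and malformed clocks:
+
+```typescript
+interface VectorClock {
+  [clientId: string]: number;
+}
+
+enum VectorClockComparison {
+  EQUAL,
+  LESS_THAN,
+  GREATER_THAN,
+  CONCURRENT,
+}
+
+// Increment this client's component on every local change.
+function incrementVectorClock(clock: VectorClock, clientId: string): VectorClock {
+  return { ...clock, [clientId]: (clock[clientId] ?? 0) + 1 };
+}
+
+// Merge = component-wise max; applied after a successful download.
+function mergeVectorClocks(a: VectorClock, b: VectorClock): VectorClock {
+  const merged: VectorClock = { ...a };
+  for (const [id, v] of Object.entries(b)) {
+    merged[id] = Math.max(merged[id] ?? 0, v);
+  }
+  return merged;
+}
+
+// Compare every component present in either clock.
+function compareVectorClocks(a: VectorClock, b: VectorClock): VectorClockComparison {
+  let aAhead = false;
+  let bAhead = false;
+  for (const id of new Set([...Object.keys(a), ...Object.keys(b)])) {
+    if ((a[id] ?? 0) > (b[id] ?? 0)) aAhead = true;
+    if ((b[id] ?? 0) > (a[id] ?? 0)) bAhead = true;
+  }
+  if (aAhead && bAhead) return VectorClockComparison.CONCURRENT;
+  if (aAhead) return VectorClockComparison.GREATER_THAN;
+  if (bAhead) return VectorClockComparison.LESS_THAN;
+  return VectorClockComparison.EQUAL;
+}
+
+// Example (matches the conflict diagram later in this document):
+// compareVectorClocks({ A: 5, B: 3 }, { A: 4, B: 5 }) === CONCURRENT
+```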
+
+#### Sync Status Determination (`api/util/get-sync-status-from-meta-files.ts`)
+
+```typescript
+function getSyncStatusFromMetaFiles(remote: RemoteMeta, local: LocalMeta) {
+  // 1. Check for empty local/remote
+  // 2. Compare vector clocks
+  // 3. Return appropriate SyncStatus
+}
+```
+
+The algorithm (simplified; the actual implementation handles more nuances):
+
+1. **Empty data checks:**
+
+ - If remote has no data (`isRemoteDataEmpty`), return `UpdateRemoteAll`
+ - If local has no data (`isLocalDataEmpty`), return `UpdateLocalAll`
+
+2. **Vector clock validation:**
+
+ - If either local or remote lacks a vector clock, return `Conflict` with reason `NoLastSync`
+ - Both `vectorClock` and `lastSyncedVectorClock` must be present
+
+3. **Change detection using `hasVectorClockChanges`:**
+
+ - Local changes: Compare current `vectorClock` vs `lastSyncedVectorClock`
+ - Remote changes: Compare remote `vectorClock` vs local `lastSyncedVectorClock`
+
+4. **Sync status determination:**
+ - No local changes + no remote changes -> `InSync`
+ - Local changes only -> `UpdateRemote`
+ - Remote changes only -> `UpdateLocal`
+ - Both have changes -> `Conflict` with reason `BothNewerLastSync`
+
+**Note:** The actual implementation also handles edge cases like minimal-update bootstrap scenarios and validates that clocks are properly initialized.
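+
+Condensed into code, steps 3 and 4 look roughly like this (a sketch; the empty-data and missing-clock checks from steps 1 and 2 are omitted):
+
+```typescript
+type VC = { [clientId: string]: number };
+
+// A clock has changes relative to a reference if any component advanced.
+function hasVectorClockChanges(current: VC, reference: VC | null): boolean {
+  if (!reference) return true;
+  return Object.keys(current).some((id) => (current[id] ?? 0) > (reference[id] ?? 0));
+}
+
+function determineDirection(
+  localClock: VC,
+  remoteClock: VC,
+  lastSyncedClock: VC | null,
+): 'InSync' | 'UpdateRemote' | 'UpdateLocal' | 'Conflict' {
+  const localChanged = hasVectorClockChanges(localClock, lastSyncedClock);
+  const remoteChanged = hasVectorClockChanges(remoteClock, lastSyncedClock);
+
+  if (!localChanged && !remoteChanged) return 'InSync';
+  if (localChanged && !remoteChanged) return 'UpdateRemote';
+  if (!localChanged && remoteChanged) return 'UpdateLocal';
+  return 'Conflict'; // BothNewerLastSync
+}
+```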
+
+### 5. Sync Providers
+
+#### Interface (`api/sync/sync-provider.interface.ts`)
+
+```typescript
+interface SyncProviderServiceInterface<PID> {
+  id: PID;
+  isUploadForcePossible?: boolean;
+  isLimitedToSingleFileSync?: boolean;
+  maxConcurrentRequests: number;
+
+  getFileRev(targetPath: string, localRev: string | null): Promise<{ rev: string }>;
+  downloadFile(targetPath: string): Promise<{ rev: string; dataStr: string }>;
+  uploadFile(
+    targetPath: string,
+    dataStr: string,
+    revToMatch: string | null,
+    isForceOverwrite?: boolean,
+  ): Promise<{ rev: string }>;
+  removeFile(targetPath: string): Promise<void>;
+  listFiles?(targetPath: string): Promise<string[]>;
+  isReady(): Promise<boolean>;
+  setPrivateCfg(privateCfg: unknown): Promise<void>;
+}
+```
+
+#### Available Providers
+
+| Provider | Description | Force Upload | Max Concurrent |
+| ------------- | --------------------------- | ------------ | -------------- |
+| **Dropbox** | OAuth2 PKCE authentication | Yes | 4 |
+| **WebDAV** | Nextcloud, ownCloud, etc. | No | 10 |
+| **LocalFile** | Electron/Android filesystem | No | 10 |
+| **SuperSync** | WebDAV-based custom sync | No | 10 |
+
+### 6. Data Encryption & Compression
+
+#### EncryptAndCompressHandlerService
+
+Handles data transformation before upload/after download:
+
+- **Compression**: Shrinks the serialized payload to reduce transfer size and storage use
+- **Encryption**: AES encryption with user-provided key
+
+Data format prefix: `pf_` indicates processed data.
+
+### 7. Migration System
+
+#### MigrationService (`api/migration/migration.service.ts`)
+
+Handles data schema evolution:
+
+- Checks version on app startup
+- Applies cross-model migrations sequentially in order
+- **Only supports forward (upgrade) migrations** - throws `CanNotMigrateMajorDownError` if data version is higher than code version (major version mismatch)
+
+```typescript
+interface CrossModelMigrations {
+  [version: number]: (fullData: AllSyncModels) => AllSyncModels;
+}
+```
+
+**Migration behavior:**
+
+- If `dataVersion === codeVersion`: No migration needed
+- If `dataVersion < codeVersion`: Run all migrations from `dataVersion` to `codeVersion`
+- If `dataVersion > codeVersion` (major version differs): Throws error - downgrade not supported
+
+Current version: `4.4` (from `pfapi-config.ts`)
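+
+A minimal sketch of the sequential upgrade loop under these rules (the data shape is simplified to `unknown`):
+
+```typescript
+class CanNotMigrateMajorDownError extends Error {}
+
+type CrossModelMigrationFn = (fullData: unknown) => unknown;
+
+function runMigrations(
+  fullData: unknown,
+  dataVersion: number,
+  codeVersion: number,
+  migrations: { [version: number]: CrossModelMigrationFn },
+): unknown {
+  // Downgrades across major versions are not supported.
+  if (Math.floor(dataVersion) > Math.floor(codeVersion)) {
+    throw new CanNotMigrateMajorDownError();
+  }
+  // Apply every migration above the stored version, lowest first.
+  const pending = Object.keys(migrations)
+    .map(Number)
+    .filter((v) => v > dataVersion && v <= codeVersion)
+    .sort((a, b) => a - b);
+  return pending.reduce((data, v) => migrations[v](data), fullData);
+}
+```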
+
+### 8. Validation & Repair
+
+#### Validation
+
+Uses **Typia** for runtime type validation:
+
+- Each model can define a `validate` function
+- Returns `IValidation` with success flag and errors
+
+#### Repair
+
+Auto-repair system for corrupted data:
+
+- Each model can define a `repair` function
+- Applied when validation fails
+- Falls back to error if repair fails
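+
+As an illustration, a model config entry might wire a Typia validator and a repair function like this (a sketch; `TagState` is simplified here and the real configs live in `pfapi-config.ts`):
+
+```typescript
+import typia, { IValidation } from 'typia';
+
+interface TagState {
+  ids: string[];
+  entities: Record<string, { id: string; title: string }>;
+}
+
+// Typia generates this runtime validator from the compile-time type.
+const validateTagState = typia.createValidate<TagState>();
+
+const tagModelCfg = {
+  validate: (data: unknown): IValidation<TagState> => validateTagState(data),
+  // One possible repair: drop ids that point to no entity.
+  repair: (data: any): TagState => ({
+    ids: (data?.ids ?? []).filter((id: string) => !!data?.entities?.[id]),
+    entities: data?.entities ?? {},
+  }),
+};
+```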
+
+## Sync Flow Diagrams
+
+### Normal Sync Flow
+
+```
+┌─────────┐ ┌─────────┐ ┌─────────┐
+│ Device A│ │ Remote │ │ Device B│
+└────┬────┘ └────┬────┘ └────┬────┘
+ │ │ │
+ │ 1. sync() │ │
+ ├────────────────►│ │
+ │ │ │
+ │ 2. download │ │
+ │ metadata │ │
+ │◄────────────────┤ │
+ │ │ │
+ │ 3. compare │ │
+ │ vector clocks │ │
+ │ │ │
+ │ 4. upload │ │
+ │ changes │ │
+ ├────────────────►│ │
+ │ │ │
+ │ │ 5. sync() │
+ │ │◄────────────────┤
+ │ │ │
+ │ │ 6. download │
+ │ │ metadata │
+ │ ├────────────────►│
+ │ │ │
+ │ │ 7. download │
+ │ │ changed │
+ │ │ models │
+ │ ├────────────────►│
+```
+
+### Conflict Detection Flow
+
+```
+┌─────────┐ ┌─────────┐
+│ Device A│ │ Device B│
+│ VC: {A:5, B:3} │ VC: {A:4, B:5}
+└────┬────┘ └────┬────┘
+ │ │
+ │ Both made changes offline │
+ │ │
+ │ ┌─────────────────────────┼───────────────────────────┐
+ │ │ Compare: CONCURRENT │ │
+ │ │ A has A:5 (higher) │ B has B:5 (higher) │
+ │ │ Neither dominates │ │
+ │ └─────────────────────────┴───────────────────────────┘
+ │ │
+ │ Conflict! │
+ │ User must choose which │
+ │ version to keep │
+```
+
+### Multi-File Upload with Locking
+
+```
+┌─────────┐ ┌─────────┐
+│ Client │ │ Remote │
+└────┬────┘ └────┬────┘
+ │ │
+ │ 1. Create lock │
+ │ (upload lock │
+ │ content) │
+ ├────────────────►│
+ │ │
+ │ 2. Upload │
+ │ model A │
+ ├────────────────►│
+ │ │
+ │ 3. Upload │
+ │ model B │
+ ├────────────────►│
+ │ │
+ │ 4. Upload │
+ │ metadata │
+ │ (replaces lock)│
+ ├────────────────►│
+ │ │
+ │ Lock released │
+```
+
+## Remote Storage Structure
+
+The remote storage (Dropbox, WebDAV, local folder) contains multiple files. The structure is designed to optimize sync performance by separating frequently-changed small data from large archives.
+
+### Remote Files Overview
+
+```
+/ (or /DEV/ in development)
+├── __meta_ # Metadata file (REQUIRED - always synced first)
+├── globalConfig # User settings
+├── menuTree # Menu structure
+├── issueProvider # Issue tracker configurations
+├── metric # Metrics data
+├── improvement # Improvement entries
+├── obstruction # Obstruction entries
+├── pluginUserData # Plugin user data
+├── pluginMetadata # Plugin metadata
+├── archiveYoung # Recent archived tasks (can be large)
+└── archiveOld # Old archived tasks (can be very large)
+```
+
+### The Meta File (`__meta_`)
+
+The meta file is the **central coordination file** for sync. It contains:
+
+1. **Sync metadata** (vector clock, timestamps, version)
+2. **Revision map** (`revMap`) - tracks which revision each model file has
+3. **Main file model data** - frequently-accessed data embedded directly
+
+```typescript
+interface RemoteMeta {
+  // Sync coordination
+  lastUpdate: number; // When data was last changed
+  crossModelVersion: number; // Schema version (e.g., 4.4)
+  vectorClock: VectorClock; // For conflict detection
+  revMap: RevMap; // Model ID -> revision string
+
+  // Embedded data (main file models)
+  mainModelData: {
+    task: TaskState;
+    project: ProjectState;
+    tag: TagState;
+    note: NoteState;
+    timeTracking: TimeTrackingState;
+    simpleCounter: SimpleCounterState;
+    taskRepeatCfg: TaskRepeatCfgState;
+    reminders: Reminder[];
+    planner: PlannerState;
+    boards: BoardsState;
+  };
+
+  // For single-file sync providers
+  isFullData?: boolean; // If true, all data is in this file
+}
+```
+
+### Main File Models vs Separate Model Files
+
+Models are categorized into two types:
+
+#### Main File Models (`isMainFileModel: true`)
+
+These are embedded in the `__meta_` file's `mainModelData` field:
+
+| Model | Reason |
+| --------------- | ------------------------------------- |
+| `task` | Frequently accessed, relatively small |
+| `project` | Core data, always needed |
+| `tag` | Small, frequently referenced |
+| `note` | Often viewed together with tasks |
+| `timeTracking` | Frequently updated |
+| `simpleCounter` | Small, frequently updated |
+| `taskRepeatCfg` | Needed for task creation |
+| `reminders` | Small array, time-critical |
+| `planner` | Viewed on app startup |
+| `boards` | Part of main UI |
+
+**Benefits:**
+
+- Single HTTP request to get all core data
+- Atomic update of related models
+- Faster initial sync
+
+#### Separate Model Files (`isMainFileModel: false` or undefined)
+
+These are stored as individual files:
+
+| Model | Reason |
+| -------------------------------------- | ------------------------------------------- |
+| `globalConfig` | User-specific, rarely synced |
+| `menuTree` | UI state, not critical |
+| `issueProvider` | Contains credentials, separate for security |
+| `metric`, `improvement`, `obstruction` | Historical data, can grow large |
+| `archiveYoung` | Can be large, changes infrequently |
+| `archiveOld` | Very large, rarely accessed |
+| `pluginUserData`, `pluginMetadata` | Plugin-specific, isolated |
+
+**Benefits:**
+
+- Only download what changed (via `revMap` comparison)
+- Large files (archives) don't slow down regular sync
+- Can sync individual models independently
+
+### RevMap: Tracking Model Versions
+
+The `revMap` tracks which version of each separate model file is on the remote:
+
+```typescript
+interface RevMap {
+  [modelId: string]: string; // Model ID -> revision/timestamp
+}
+
+// Example
+const exampleRevMap: RevMap = {
+  globalConfig: '1701234567890',
+  menuTree: '1701234567891',
+  archiveYoung: '1701234500000',
+  archiveOld: '1701200000000',
+  // ... (main file models NOT included - they're in mainModelData)
+};
+```
+
+When syncing:
+
+1. Download `__meta_` file
+2. Compare remote `revMap` with local `revMap`
+3. Only download model files where revision differs
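+
+Step 2 reduces to a key-wise diff of the two maps, as in this sketch (the real `getModelIdsToUpdateFromRevMaps` also reports models to upload or remove):
+
+```typescript
+interface RevMap {
+  [modelId: string]: string;
+}
+
+// Which separate model files need downloading?
+function getModelIdsToDownload(remote: RevMap, local: RevMap): string[] {
+  return Object.keys(remote).filter((modelId) => remote[modelId] !== local[modelId]);
+}
+
+// Example:
+// remote: { archiveYoung: "300", globalConfig: "250" }
+// local:  { archiveYoung: "100", globalConfig: "250" }
+// -> ["archiveYoung"]
+```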
+
+### Upload Flow
+
+```
+┌─────────────────────────────────────────────────────────────────────────┐
+│ UPLOAD FLOW │
+└─────────────────────────────────────────────────────────────────────────┘
+
+1. Determine what changed (compare local/remote revMaps)
+ local.revMap: { archiveYoung: "100", globalConfig: "200" }
+ remote.revMap: { archiveYoung: "100", globalConfig: "150" }
+ → globalConfig needs upload
+
+2. For multi-file upload, create lock:
+ Upload to __meta_: "SYNC_IN_PROGRESS__BCLm1abc123_12_5"
+
+3. Upload changed model files:
+ Upload to globalConfig: { encrypted/compressed data }
+ → Get new revision: "250"
+
+4. Upload metadata (replaces lock):
+ Upload to __meta_: {
+ lastUpdate: 1701234567890,
+ vectorClock: { "BCLm1abc123_12_5": 42 },
+ revMap: { archiveYoung: "100", globalConfig: "250" },
+ mainModelData: { task: {...}, project: {...}, ... }
+ }
+```
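+
+The same sequence as a sketch against the provider interface (return shapes as shown in the interface above; `buildMetaStr` is a hypothetical serializer for the metadata file):
+
+```typescript
+interface UploadProvider {
+  uploadFile(
+    targetPath: string,
+    dataStr: string,
+    revToMatch: string | null,
+  ): Promise<{ rev: string }>;
+}
+
+async function uploadChanged(
+  provider: UploadProvider,
+  clientId: string,
+  metaRev: string | null,
+  changedModels: { id: string; dataStr: string; localRev: string | null }[],
+  buildMetaStr: (newRevs: Record<string, string>) => string,
+): Promise<void> {
+  // 1. Lock: other clients downloading __meta_ now see sync-in-progress.
+  await provider.uploadFile('__meta_', `SYNC_IN_PROGRESS__${clientId}`, metaRev);
+
+  // 2./3. Upload each changed model file, collecting new revisions.
+  const newRevs: Record<string, string> = {};
+  for (const m of changedModels) {
+    const { rev } = await provider.uploadFile(m.id, m.dataStr, m.localRev);
+    newRevs[m.id] = rev;
+  }
+
+  // 4. Replace the lock with real metadata, releasing it in one write.
+  await provider.uploadFile('__meta_', buildMetaStr(newRevs), null);
+}
+```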
+
+### Download Flow
+
+```
+┌─────────────────────────────────────────────────────────────────────────┐
+│ DOWNLOAD FLOW │
+└─────────────────────────────────────────────────────────────────────────┘
+
+1. Download __meta_ file
+ → Get mainModelData (task, project, tag, etc.)
+ → Get revMap for separate files
+
+2. Compare revMaps:
+ remote.revMap: { archiveYoung: "300", globalConfig: "250" }
+ local.revMap: { archiveYoung: "100", globalConfig: "250" }
+ → archiveYoung needs download
+
+3. Download changed model files (parallel with load balancing):
+ Download archiveYoung → decrypt/decompress → save locally
+
+4. Update local metadata:
+ - Save all mainModelData to IndexedDB
+ - Save downloaded models to IndexedDB
+ - Update local revMap to match remote
+ - Merge vector clocks
+ - Set lastSyncedUpdate = lastUpdate
+```
+
+### Single-File Sync Mode
+
+Some providers (or configurations) use `isLimitedToSingleFileSync: true`. In this mode:
+
+- **All data** is stored in the `__meta_` file
+- `mainModelData` contains ALL models, not just main file models
+- `isFullData: true` flag is set
+- No separate model files are created
+- Simpler but less efficient for large datasets
+
+### File Content Format
+
+All files are stored as JSON strings with optional encryption/compression:
+
+```
+Raw:        { "ids": [...], "entities": {...} }
+  ↓ (if compression enabled)
+Compressed: <compressed string>
+  ↓ (if encryption enabled)
+Encrypted:  <encrypted string>
+  ↓
+Prefixed:   "pf_" + <flags> + "__" + <payload>
+```
+
+The `pf_` prefix indicates the data has been processed and needs decryption/decompression.
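+
+Reading a file therefore means peeling this envelope first. A sketch, assuming a flag segment sits between `pf_` and `__` (the exact layout may differ):
+
+```typescript
+// Unwrap the "pf_" envelope; plain data passes through untouched.
+function unwrapEnvelope(raw: string): { flags: string; payload: string } | { plain: string } {
+  if (!raw.startsWith('pf_')) {
+    return { plain: raw }; // unprocessed/legacy content
+  }
+  const rest = raw.slice('pf_'.length);
+  const sep = rest.indexOf('__');
+  if (sep === -1) {
+    throw new Error('Malformed pf_ envelope');
+  }
+  return {
+    flags: rest.slice(0, sep), // e.g. encryption/compression markers
+    payload: rest.slice(sep + 2), // decrypt/decompress according to flags
+  };
+}
+```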
+
+## Data Model Configurations
+
+From `pfapi-config.ts`:
+
+| Model | Main File | Description |
+| ---------------- | --------- | ---------------------- |
+| `task` | Yes | Tasks data |
+| `timeTracking` | Yes | Time tracking records |
+| `project` | Yes | Projects |
+| `tag` | Yes | Tags |
+| `simpleCounter` | Yes | Simple Counters |
+| `note` | Yes | Notes |
+| `taskRepeatCfg` | Yes | Recurring task configs |
+| `reminders` | Yes | Reminders |
+| `planner` | Yes | Planner data |
+| `boards` | Yes | Kanban boards |
+| `menuTree` | No | Menu structure |
+| `globalConfig` | No | User settings |
+| `issueProvider` | No | Issue tracker configs |
+| `metric` | No | Metrics data |
+| `improvement` | No | Metric improvements |
+| `obstruction` | No | Metric obstructions |
+| `pluginUserData` | No | Plugin user data |
+| `pluginMetadata` | No | Plugin metadata |
+| `archiveYoung` | No | Recent archive |
+| `archiveOld` | No | Old archive |
+
+**Main file models** are stored in the metadata file itself for faster sync of frequently-accessed data.
+
+## Error Handling
+
+Custom error types in `api/errors/errors.ts`:
+
+- **API Errors**: `NoRevAPIError`, `RemoteFileNotFoundAPIError`, `AuthFailSPError`
+- **Sync Errors**: `LockPresentError`, `LockFromLocalClientPresentError`, `UnknownSyncStateError`
+- **Data Errors**: `DataValidationFailedError`, `ModelValidationError`, `DataRepairNotPossibleError`
+
+## Event System
+
+```typescript
+type PfapiEvents =
+  | 'syncDone' // Sync completed
+  | 'syncStart' // Sync starting
+  | 'syncError' // Sync failed
+  | 'syncStatusChange' // Status changed
+  | 'metaModelChange' // Metadata updated
+  | 'providerChange' // Provider switched
+  | 'providerReady' // Provider authenticated
+  | 'providerPrivateCfgChange' // Provider credentials updated
+  | 'onBeforeUpdateLocal'; // About to download changes
+```
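+
+For illustration, a consumer might wire these up as follows (the `on`-style emitter shape is an assumption, not the confirmed API):
+
+```typescript
+type SyncLifecycleEvent = 'syncStart' | 'syncDone' | 'syncError';
+
+// Drive a simple busy indicator from the sync lifecycle.
+function wireSyncIndicator(ev: {
+  on(name: SyncLifecycleEvent, cb: (payload?: unknown) => void): void;
+}): void {
+  ev.on('syncStart', () => console.log('sync: spinner on'));
+  ev.on('syncDone', () => console.log('sync: spinner off'));
+  ev.on('syncError', (err) => console.error('sync failed', err));
+}
+```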
+
+## Security Considerations
+
+1. **Encryption**: Optional AES encryption with user-provided key
+2. **No tracking**: All data stays local unless explicitly synced
+3. **Credential storage**: Provider credentials stored in IndexedDB with prefix `__sp_cred_`
+4. **OAuth security**: Dropbox uses PKCE flow
+
+## Key Design Decisions
+
+1. **Vector clocks over timestamps**: More reliable conflict detection in distributed systems
+2. **Main file models**: Frequently accessed data bundled with metadata for faster sync
+3. **Database locking**: Prevents corruption during sync operations
+4. **Adapter pattern**: Easy to add new storage backends
+5. **Provider abstraction**: Consistent interface across Dropbox, WebDAV, local files
+6. **Typia validation**: Runtime type safety without heavy dependencies
+
+## Future Considerations
+
+The system has been extended with **Operation Log Sync** for more granular synchronization at the operation level rather than full model replacement. See `operation-log-architecture.md` for details.