mirror of
https://github.com/johannesjo/super-productivity.git
synced 2026-01-23 02:36:05 +00:00
docs: fix outdated file paths and types in diagrams
- Fix FileBasedSyncData type: remove non-existent lastSeq, add clientId
- Fix file paths: op-log/processing → op-log/apply
- Fix file paths: features/time-tracking → features/archive
- Fix file path: super-sync not supersync
- Fix vector-clock path: now in op-log/sync/
- Remove non-existent state-capture.meta-reducer.ts reference
- Remove pfapi-migration.service.ts (no longer exists)

docs: remove outdated .bak file references from diagrams

The backup file (sync-data.json.bak) is no longer created during upload. It's only deleted as cleanup from legacy implementations.

docs: add sync comparison and simple sync flow diagrams

- Add 07-supersync-vs-file-based.md comparing the two sync approaches
- Add 08-sync-flow-explained.md with step-by-step sync explanation
- Remove consolidated unified-oplog-sync-diagrams.md
- Update diagrams README with new entries

docs(sync): reorganize diagrams into subfolder and update for unified architecture

- Create docs/sync-and-op-log/diagrams/ with topic-based diagram files
- Remove outdated PFAPI Legacy Bridge references from diagrams
- Update archive diagrams to use generic "Archive Database" naming
- Fix file paths from sync/providers/ to sync-providers/
- Update quick-reference Area 12 to show unified file-based sync
- Update README to reference new diagram locations

docs: update architecture docs to reflect PFAPI elimination

- Delete obsolete PFAPI documentation:
  - docs/sync-and-op-log/pfapi-sync-persistence-architecture.md
  - docs/sync-and-op-log/pfapi-sync-overview.md
  - docs/plans/pfapi-elimination-status.md
- Update sync-and-op-log/README.md:
  - Describe unified operation log architecture
  - Document file-based sync (Part B) and server sync (Part C)
  - Update file structure to reflect sync-providers location
- Update operation-log-architecture.md:
  - Rewrite Part B from "Legacy Sync Bridge" to "File-Based Sync"
  - Remove all PFAPI code examples and references
  - Update IndexedDB structure diagram (single SUP_OPS database)
  - Update architecture overview to show current provider structure
  - Add notes about PFAPI elimination (January 2026)
- Mark completed implementation plans:
  - replace-pfapi-with-oplog-plan.md - marked as COMPLETED
  - file-based-oplog-sync-implementation-plan.md - marked as COMPLETED

Also includes fix for file-based sync gap detection to handle snapshot replacement (when "Use Local" is chosen in conflict resolution).
This commit is contained in:
parent
2b5fafccae
commit
9f0adbb95c
18 changed files with 2105 additions and 1779 deletions
@@ -1,22 +1,32 @@

# Implementation Plan: Unified Op-Log Sync for File-Based Providers

## Goal

> **STATUS: COMPLETED (January 2026)**
>
> This plan has been fully implemented. PFAPI has been completely eliminated.
> All sync providers now use the unified operation log system.
>
> **Current Implementation:**
>
> - File-based adapter: `src/app/op-log/sync-providers/file-based/file-based-sync-adapter.service.ts`
> - Sync providers: `src/app/op-log/sync-providers/`

## Original Goal

Replace PFAPI's model-per-file sync with operation-log sync for ALL providers (WebDAV, Dropbox, LocalFile), enabling full PFAPI deprecation and reducing codebase complexity.

## Background (Historical)

**State Before Implementation:**

- PFAPI (~13,200 LOC): Model-level sync for WebDAV/Dropbox/LocalFile
- Op-Log (~23,000 LOC, 85% generic): Operation-level sync for SuperSync only
- Two parallel systems with duplicate concepts

**Final State (Achieved):**

- Single op-log sync system for ALL providers
- File-based providers use simplified single-file approach
- PFAPI completely deleted (~83 files, 2.0 MB removed)

---
@@ -1,144 +0,0 @@

# PFAPI Elimination - Current Status

## Goal

Delete the entire `src/app/pfapi/` directory (~83 files, 2.0 MB) by moving necessary code into sync/ or core/, removing the legacy abstraction layer.

## Completed Phases

### Phase 1: Delete Dead Code ✅

- Deleted empty migration system
- Deleted custom Observable/Event system
- Deleted PFAPI migration service

### Phase 2: Refactor ClientIdService ✅

- Modified `src/app/core/util/client-id.service.ts` to use direct IndexedDB
- Removed PfapiService dependency

### Phase 3: Move Sync Providers ✅

- Created `src/app/sync/providers/` structure
- Moved SuperSync, Dropbox, WebDAV, LocalFile providers
- Moved encryption/compression utilities
- Created `sync-exports.ts` barrel for backward compatibility

### Phase 4: Transform PfapiService → SyncService ✅

- Renamed to `src/app/sync/sync.service.ts`
- Added direct methods to replace `pf.*` accessors:
  - `sync()`, `clearDatabase()`, `loadGlobalConfig()`
  - `getSyncProviderById()`, `setPrivateCfgForSyncProvider()`
  - `forceUploadLocalState()`, `forceDownloadRemoteState()`
  - `isSyncInProgress` getter

### Phase 5: Move Validation & Config ✅

- Moved validation/repair files to `src/app/sync/validation/`
- Moved model config to `src/app/sync/model-config.ts`
- Moved types to `src/app/sync/sync.types.ts`

### Phase 6: Delete PFAPI Core ✅

- Deleted entire `src/app/pfapi/` directory
- Fixed task-archive.service.ts to use ArchiveDbAdapter
- Fixed time-tracking.service.ts to use ArchiveDbAdapter
- Fixed user-profile.service.ts
- Fixed file-imex.component.ts

## Phase 7: In Progress - Fix Remaining `pf.*` References

### Files Still Needing Fixes

These files still have `pf.*` references that need to be replaced with direct service methods:

1. **`src/app/imex/sync/sync-wrapper.service.ts`**
   - `pf.metaModel.setVectorClockFromBridge()`
   - `pf.metaModel.load()`
   - `pf.ev.emit('syncStatusChange')`

2. **`src/app/imex/sync/sync-config.service.ts`**
   - `pf.getSyncProviderById()`
   - `pf.getActiveSyncProvider()`

3. **`src/app/imex/sync/sync-safety-backup.service.ts`**
   - Multiple `pf.*` calls

4. **`src/app/imex/sync/dropbox/store/dropbox.effects.ts`**
   - `currentProviderPrivateCfg$` observable type issues

5. **`src/app/imex/sync/super-sync-restore.service.ts`**
   - `pf.getActiveSyncProvider()`

6. **`src/app/imex/sync/encryption-password-change.service.ts`**
   - `pf.getActiveSyncProvider()`

7. **Op-log files** (various `pf.*` references)
   - `operation-log-hydrator.service.ts`
   - Others in `src/app/op-log/`

### Missing Error Exports

Add to `src/app/sync/sync-exports.ts`:

- `CanNotMigrateMajorDownError`
- `LockPresentError`
- `NoRemoteModelFile`
- `PotentialCorsError`
- `RevMismatchForModelError`
- `SyncInvalidTimeValuesError`

### Type Issues

- `currentProviderPrivateCfg$` observable returns `{}` type instead of proper provider config type
- Need to fix typing in `sync.service.ts` or create proper type union

## Next Steps

1. Run `ng build --no-watch --configuration=development` to get current error list
2. Fix each file's `pf.*` references by:
   - Using existing PfapiService methods where available
   - Adding new methods to PfapiService if needed
   - For `pf.ev.emit()` calls, using RxJS Subject emissions
3. Add missing error class exports to `sync-exports.ts`
4. Fix type issues with observable returns
5. Run full test suite: `npm test`, `npm run e2e:supersync`, `npm run e2e:webdav`
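The step above about replacing `pf.ev.emit('syncStatusChange')` with Subject emissions can be sketched as follows. To keep the example self-contained, a minimal `MiniSubject` stands in for RxJS `Subject`; the status values and the `syncStatusChange$` name are illustrative, not actual project APIs.

```typescript
// Hypothetical sketch: replacing an event-emitter call such as
// `pf.ev.emit('syncStatusChange')` with Subject-style emissions.
// MiniSubject is a dependency-free stand-in for RxJS `Subject`.
type Listener<T> = (value: T) => void;

class MiniSubject<T> {
  private listeners: Listener<T>[] = [];

  subscribe(fn: Listener<T>): () => void {
    this.listeners.push(fn);
    // Return an unsubscribe function, like RxJS Subscription.unsubscribe()
    return () => {
      this.listeners = this.listeners.filter((l) => l !== fn);
    };
  }

  next(value: T): void {
    this.listeners.forEach((fn) => fn(value));
  }
}

// Before: pf.ev.emit('syncStatusChange')
// After: a typed stream owned by the sync service (illustrative shape).
const syncStatusChange$ = new MiniSubject<'IN_SYNC' | 'SYNCING' | 'ERROR'>();

const seen: string[] = [];
const unsubscribe = syncStatusChange$.subscribe((s) => seen.push(s));
syncStatusChange$.next('SYNCING');
syncStatusChange$.next('IN_SYNC');
unsubscribe();
syncStatusChange$.next('ERROR'); // not received after unsubscribe
```

With real RxJS, `MiniSubject` would simply be `new Subject<SyncStatus>()` with the same call sites.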
## Key Design Decisions

1. **No backward compat for old PFAPI format** - Users on old format need fresh sync
2. **Preserve OAuth tokens** - Use SAME DB name (`pf`) and key format (`PRIVATE_CFG__<id>`)
3. **Preserve client ID** - Use SAME DB name (`pf`) and key (`CLIENT_ID`)
4. **Keep legacy PBKDF2 decryption** - For reading old encrypted data
5. **Use ArchiveDbAdapter** - For direct archive persistence (not through pfapiService.m)
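Decisions 2 and 3 pin down an exact storage contract. A minimal sketch of that contract, with hypothetical helper names (only the DB name `pf` and the key formats `PRIVATE_CFG__<id>` / `CLIENT_ID` come from the decisions above; the provider ids are examples):

```typescript
// Sketch of the key-compatibility contract from decisions 2 and 3:
// OAuth tokens stay in IndexedDB database `pf` under `PRIVATE_CFG__<id>`,
// and the client ID stays in the same database under `CLIENT_ID`.
// Helper names below are illustrative, not actual project APIs.
const LEGACY_DB_NAME = 'pf';
const CLIENT_ID_KEY = 'CLIENT_ID';

const privateCfgKey = (providerId: string): string =>
  `PRIVATE_CFG__${providerId}`;

// Example: keys a migrated ClientIdService / credential store must keep
// reading so existing users are not logged out (provider ids assumed).
const keysToPreserve = [
  CLIENT_ID_KEY,
  privateCfgKey('Dropbox'),
  privateCfgKey('WebDAV'),
];
```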
## Files Already Fixed

- `src/app/features/time-tracking/task-archive.service.ts` - Uses ArchiveDbAdapter
- `src/app/features/time-tracking/time-tracking.service.ts` - Uses ArchiveDbAdapter
- `src/app/features/user-profile/user-profile.service.ts` - Direct service methods
- `src/app/imex/file-imex/file-imex.component.ts` - loadCompleteBackup(true)
- `src/app/imex/local-backup/local-backup.service.ts` - getAllSyncModelDataFromStore()

## Commands to Run

```bash
# Check current build errors
ng build --no-watch --configuration=development

# Check individual file
npm run checkFile <filepath>

# Run tests
npm test
npm run e2e:supersync
npm run e2e:webdav
npm run e2e
```
@@ -1,54 +1,50 @@

# Operation Log Documentation

**Last Updated:** January 2026

This directory contains the architectural documentation for Super Productivity's Operation Log system - an event-sourced persistence and synchronization layer that handles ALL sync providers (SuperSync, WebDAV, Dropbox, LocalFile).

## Quick Start

| If you want to...                   | Read this                                                                      |
| ----------------------------------- | ------------------------------------------------------------------------------ |
| Understand the overall architecture | [operation-log-architecture.md](./operation-log-architecture.md)               |
| See visual diagrams                 | [diagrams/](./diagrams/) (split by topic)                                      |
| Learn the design rules              | [operation-rules.md](./operation-rules.md)                                     |
| Understand file-based sync          | [diagrams/04-file-based-sync.md](./diagrams/04-file-based-sync.md)             |
| Understand SuperSync encryption     | [supersync-encryption-architecture.md](./supersync-encryption-architecture.md) |

## Documentation Overview

### Core Documentation

| Document                                                         | Description                                                                                                                                                                         | Status |
| ---------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ |
| [operation-log-architecture.md](./operation-log-architecture.md) | Comprehensive architecture reference covering Parts A-F: Local Persistence, File-Based Sync, Server Sync, Validation & Repair, Smart Archive Handling, and Atomic State Consistency | Active |
| [diagrams/](./diagrams/)                                         | Mermaid diagrams split by topic (local persistence, server sync, file-based sync, etc.)                                                                                             | Active |
| [operation-rules.md](./operation-rules.md)                       | Design rules and guidelines for the operation log store and operations                                                                                                              | Active |

### Sync Architecture

| Document                                                                       | Description                                                           | Status      |
| ------------------------------------------------------------------------------ | --------------------------------------------------------------------- | ----------- |
| [diagrams/04-file-based-sync.md](./diagrams/04-file-based-sync.md)             | File-based sync with single sync-data.json (WebDAV/Dropbox/LocalFile) | Implemented |
| [diagrams/02-server-sync.md](./diagrams/02-server-sync.md)                     | SuperSync server sync architecture                                    | Implemented |
| [supersync-encryption-architecture.md](./supersync-encryption-architecture.md) | End-to-end encryption for SuperSync (AES-256-GCM + Argon2id)          | Implemented |

### Historical / Completed Plans

| Document                                                                               | Description                                              | Status                 |
| -------------------------------------------------------------------------------------- | -------------------------------------------------------- | ---------------------- |
| [replace-pfapi-with-oplog-plan.md](./long-term-plans/replace-pfapi-with-oplog-plan.md) | Plan to unify sync by replacing PFAPI with operation log | Completed (Jan 2026)   |
| [e2e-encryption-plan.md](./long-term-plans/e2e-encryption-plan.md)                     | Original E2EE design (see supersync-encryption for impl) | Implemented (Dec 2025) |

## Architecture at a Glance

The Operation Log system is the **single sync system** for all providers:

```
User Action
     │
     ▼
NgRx Store
```

@@ -59,20 +55,28 @@ The Operation Log system serves four distinct purposes:

```
OpLogEffects │ Other Effects
     │             │
     ├──► SUP_OPS ◄┘
     │    (Local Persistence - IndexedDB)
     │
     └──► Sync Providers
           ├── SuperSync (operation-based, real-time)
           ├── WebDAV (file-based, single-file snapshot)
           ├── Dropbox (file-based, single-file snapshot)
           └── LocalFile (file-based, single-file snapshot)
```

### Sync Provider Types

| Provider Type    | Providers                  | How It Works                                                  |
| ---------------- | -------------------------- | ------------------------------------------------------------- |
| **Server-based** | SuperSync                  | Individual operations uploaded/downloaded via HTTP API        |
| **File-based**   | WebDAV, Dropbox, LocalFile | Single `sync-data.json` file with state snapshot + recent ops |
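For the file-based row above, the single `sync-data.json` payload can be sketched like this. Per the commit message, the real `FileBasedSyncData` type has a `clientId` and no `lastSeq`; all remaining field names here are assumptions for illustration, not the actual type in `file-based-sync.types.ts`.

```typescript
// Hedged sketch of a single-file sync payload for WebDAV/Dropbox/LocalFile.
// Only `clientId` (present) and the absence of `lastSeq` are confirmed by
// the commit message; the other field names are illustrative.
interface SyncOperation {
  id: string; // e.g. UUIDv7
  entityType: string;
  payload: unknown;
}

interface FileBasedSyncData {
  clientId: string; // id of the client that wrote the file
  snapshot: Record<string, unknown>; // full state snapshot
  recentOps: SyncOperation[]; // embedded operations buffer
}

// Minimal shape guard a reader might apply before trusting the file.
function isFileBasedSyncData(v: unknown): v is FileBasedSyncData {
  if (typeof v !== 'object' || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.clientId === 'string' &&
    typeof o.snapshot === 'object' &&
    o.snapshot !== null &&
    Array.isArray(o.recentOps)
  );
}
```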
### The Core Parts

| Part                       | Purpose                     | Description                                                                   |
| -------------------------- | --------------------------- | ----------------------------------------------------------------------------- |
| **A. Local Persistence**   | Fast writes, crash recovery | Operations stored in IndexedDB (`SUP_OPS`), with snapshots for fast hydration |
| **B. File-Based Sync**     | WebDAV/Dropbox/LocalFile    | Single-file sync with state snapshot and embedded operations buffer           |
| **C. Server Sync**         | Operation-based sync        | Upload/download individual operations via SuperSync server                    |
| **D. Validation & Repair** | Data integrity              | Checkpoint validation with automatic repair and REPAIR operations             |

@@ -111,6 +115,34 @@ private _actions$ = inject(LOCAL_ACTIONS); // Excludes remote operations

This prevents duplicate side effects when syncing operations from other clients.
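The `LOCAL_ACTIONS` idea above can be sketched as a filtering predicate: effects subscribe to a stream that excludes actions replayed from remote operations, so side effects run only once across clients. The marker property name (`isRemote`) is an assumption for illustration.

```typescript
// Sketch of the LOCAL_ACTIONS filter: exclude actions that were replayed
// from remote operations. The `isRemote` marker is a hypothetical name
// for whatever flag the operation applier sets on replayed actions.
interface AppAction {
  type: string;
  isRemote?: boolean; // hypothetical marker set by the operation applier
}

const isLocalAction = (a: AppAction): boolean => a.isRemote !== true;

// A LOCAL_ACTIONS-style stream would be: actions$.pipe(filter(isLocalAction))
const allActions: AppAction[] = [
  { type: 'addTask' },
  { type: 'addTask', isRemote: true }, // arrived via sync: skip side effects
  { type: 'deleteTask' },
];

const localOnly = allActions.filter(isLocalAction);
```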
## Key Files

### Sync Providers

```
src/app/op-log/sync-providers/
├── super-sync/                            # SuperSync server provider
├── file-based/                            # File-based providers
│   ├── file-based-sync-adapter.service.ts # Unified adapter for file providers
│   ├── file-based-sync.types.ts           # FileBasedSyncData types
│   ├── webdav/                            # WebDAV provider
│   ├── dropbox/                           # Dropbox provider
│   └── local-file/                        # Local file sync provider
├── provider-manager.service.ts            # Provider activation/management
├── wrapped-provider.service.ts            # Provider wrapper with encryption
└── credential-store.service.ts            # OAuth/credential storage
```

### Core Operation Log

```
src/app/op-log/
├── core/        # Core types and operations
├── persistence/ # IndexedDB storage
├── sync/        # Sync orchestration
└── validation/  # Data validation and repair
```

## Related Documentation

| Location | Content |

@@ -121,14 +153,15 @@ This prevents duplicate side effects when syncing operations from other clients.

## Implementation Status

| Component                    | Status                                           |
| ---------------------------- | ------------------------------------------------ |
| Local Persistence (Part A)   | Complete                                         |
| File-Based Sync (Part B)     | Complete (WebDAV, Dropbox, LocalFile)            |
| Server Sync (Part C)         | Complete (SuperSync)                             |
| Validation & Repair (Part D) | Complete                                         |
| End-to-End Encryption        | Complete (AES-256-GCM + Argon2id)                |
| PFAPI Elimination            | Complete (Jan 2026)                              |
| Cross-version Sync (A.7.11)  | Documented (not yet implemented)                 |
| Schema Migrations            | Infrastructure ready (no migrations defined yet) |

See [operation-log-architecture.md#implementation-status](./operation-log-architecture.md#implementation-status) for detailed status.
docs/sync-and-op-log/diagrams/01-local-persistence.md (Normal file, 105 lines)

@@ -0,0 +1,105 @@
# Local Persistence Architecture

**Last Updated:** January 2026
**Status:** Implemented

This diagram illustrates how user actions flow through the system, how they are persisted to IndexedDB (`SUP_OPS`), and how the system hydrates on startup.
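The startup hydration described here follows a snapshot-plus-tail pattern: load the latest snapshot, then replay only the operations logged after it. A minimal sketch, with a hypothetical state shape and op format standing in for the real NgRx reducers and `OperationApplier`:

```typescript
// Sketch of hydration: start from the latest state_cache snapshot, then
// replay the tail of ops written after it. State/op shapes are hypothetical.
interface Op {
  type: 'addTask' | 'removeTask';
  taskId: string;
}

type State = { taskIds: string[] };

// Deterministic reducer standing in for NgRx reducers + OperationApplier.
function apply(state: State, op: Op): State {
  switch (op.type) {
    case 'addTask':
      return { taskIds: [...state.taskIds, op.taskId] };
    case 'removeTask':
      return { taskIds: state.taskIds.filter((id) => id !== op.taskId) };
    default:
      return state;
  }
}

function hydrate(snapshot: State, tailOps: Op[]): State {
  // 1. Start from the snapshot, 2. replay ops logged after it.
  return tailOps.reduce(apply, snapshot);
}

const state = hydrate({ taskIds: ['a'] }, [
  { type: 'addTask', taskId: 'b' },
  { type: 'removeTask', taskId: 'a' },
]);
```

Because replay is deterministic, every client that loads the same snapshot and tail arrives at the same state.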
## Operation Log Architecture

```mermaid
graph TD
    %% Styles
    classDef storage fill:#f9f,stroke:#333,stroke-width:2px,color:black;
    classDef process fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:black;
    classDef trigger fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px,color:black;
    classDef archive fill:#e8eaf6,stroke:#3949ab,stroke-width:2px,color:black;

    User((User / UI)) -->|Dispatch Action| NgRx["NgRx Store <br/> Runtime Source of Truth<br/><sub>*.effects.ts / *.reducer.ts</sub>"]

    subgraph "Write Path (Runtime)"
        NgRx -->|Action Stream| OpEffects["OperationLogEffects<br/><sub>operation-log.effects.ts</sub>"]

        OpEffects -->|1. Check isPersistent| Filter{"Is Persistent?<br/><sub>persistent-action.interface.ts</sub>"}
        Filter -- No --> Ignore[Ignore / UI Only]
        Filter -- Yes --> Transform["Transform to Operation<br/>UUIDv7, Timestamp, VectorClock<br/><sub>operation-converter.util.ts</sub>"]

        Transform -->|2. Validate| PayloadValid{"Payload<br/>Valid?<br/><sub>processing/validate-operation-payload.ts</sub>"}
        PayloadValid -- No --> ErrorSnack[Show Error Snackbar]
        PayloadValid -- Yes --> DBWrite
    end

    subgraph "Persistence Layer (IndexedDB: SUP_OPS)"
        DBWrite["Write to SUP_OPS<br/><sub>store/operation-log-store.service.ts</sub>"]:::storage

        DBWrite -->|Append| OpsTable["Table: ops<br/>The Event Log<br/><sub>IndexedDB</sub>"]:::storage
        DBWrite -->|Update| StateCache["Table: state_cache<br/>Snapshots<br/><sub>IndexedDB</sub>"]:::storage
    end

    subgraph "Archive Storage (IndexedDB)"
        ArchiveWrite["ArchiveService<br/><sub>time-tracking/archive.service.ts</sub>"]:::archive
        ArchiveWrite -->|Write BEFORE dispatch| ArchiveYoung["archiveYoung<br/>━━━━━━━━━━━━━━━<br/>• task: TaskArchive<br/>• timeTracking: State<br/>━━━━━━━━━━━━━━━<br/><sub>Tasks < 21 days old</sub>"]:::archive
        ArchiveYoung -->|"flushYoungToOld action<br/>(every ~14 days)"| ArchiveOld["archiveOld<br/>━━━━━━━━━━━━━━━<br/>• task: TaskArchive<br/>• timeTracking: State<br/>━━━━━━━━━━━━━━━<br/><sub>Tasks > 21 days old</sub>"]:::archive
    end

    User -->|Archive Tasks| ArchiveWrite
    NgRx -.->|moveToArchive action<br/>AFTER archive write| OpEffects

    subgraph "Compaction System"
        OpsTable -->|Count > 500| CompactionTrig{"Compaction<br/>Trigger<br/><sub>operation-log.effects.ts</sub>"}:::trigger
        CompactionTrig -->|Yes| Compactor["CompactionService<br/><sub>store/operation-log-compaction.service.ts</sub>"]:::process
        Compactor -->|Read State| NgRx
        Compactor -->|Save Snapshot| StateCache
        Compactor -->|Delete Old Ops| OpsTable
    end

    subgraph "Read Path (Hydration)"
        Startup((App Startup)) --> Hydrator["OperationLogHydrator<br/><sub>store/operation-log-hydrator.service.ts</sub>"]:::process
        Hydrator -->|1. Load| StateCache

        StateCache -->|Check| Schema{"Schema<br/>Version?<br/><sub>store/schema-migration.service.ts</sub>"}
        Schema -- Old --> Migrator["SchemaMigrationService<br/><sub>store/schema-migration.service.ts</sub>"]:::process
        Migrator -->|Transform State| MigratedState
        Schema -- Current --> CurrentState

        CurrentState -->|Load State| StoreInit[Init NgRx State]
        MigratedState -->|Load State| StoreInit

        Hydrator -->|2. Load Tail| OpsTable
        OpsTable -->|Replay Ops| Replayer["OperationApplier<br/><sub>processing/operation-applier.service.ts</sub>"]:::process
        Replayer -->|Dispatch| NgRx
    end

    subgraph "Single Instance + Sync Locking"
        Startup2((App Startup)) -->|BroadcastChannel| SingleCheck{"Already<br/>Open?<br/><sub>startup.service.ts</sub>"}
        SingleCheck -- Yes --> Block[Block New Tab]
        SingleCheck -- No --> Allow[Allow]

        DBWrite -.->|Critical ops use| WebLocks["Web Locks API<br/><sub>sync/lock.service.ts</sub>"]
    end

    class OpsTable,StateCache storage;
    class ArchiveWrite,ArchiveYoung,ArchiveOld,TimeTracking archive;
```
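The compaction trigger in the diagram (ops count > 500 → save snapshot → delete covered ops) can be sketched as a pure decision function. The types and function names are illustrative, not the actual `CompactionService` API:

```typescript
// Sketch of the compaction decision: once the ops table exceeds a
// threshold (500 in the diagram), the current state is saved as a
// snapshot and the ops it covers are deleted. Types are illustrative.
interface LoggedOp {
  seq: number;
}

interface CompactionResult {
  snapshotSavedAtSeq: number; // snapshot covers ops up to this seq
  remainingOps: LoggedOp[]; // ops newer than the snapshot survive
}

function maybeCompact(
  ops: LoggedOp[],
  currentStateSeq: number,
  threshold = 500,
): CompactionResult | null {
  if (ops.length <= threshold) return null; // nothing to do yet
  // Save a snapshot of current state, then drop ops it already covers.
  return {
    snapshotSavedAtSeq: currentStateSeq,
    remainingOps: ops.filter((o) => o.seq > currentStateSeq),
  };
}

const ops = Array.from({ length: 600 }, (_, i) => ({ seq: i + 1 }));
const result = maybeCompact(ops, 600);
```

Keeping compaction as a pure function over (ops, state) makes the trigger easy to test independently of IndexedDB.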
## Archive Data Flow Notes

- **Archive writes happen BEFORE dispatch**: When a user archives tasks, `ArchiveService` writes to IndexedDB first, then dispatches the `moveToArchive` action. This ensures data is safely stored before state updates.
- **ArchiveModel structure**: Each archive tier stores `{ task: TaskArchive, timeTracking: TimeTrackingState, lastTimeTrackingFlush: number }`. Both archived Task entities AND their time tracking data are stored together.
- **Two-tier archive**: Recent tasks go to `archiveYoung` (tasks < 21 days old). Older tasks are flushed to `archiveOld` via `flushYoungToOld` action (checked every ~14 days when archiving tasks).
- **Flush mechanism**: `flushYoungToOld` is a persistent action that:
  1. Triggers when `lastTimeTrackingFlush > 14 days` during `moveTasksToArchiveAndFlushArchiveIfDue()`
  2. Moves tasks older than 21 days from `archiveYoung.task` to `archiveOld.task`
  3. Syncs via operation log so all clients execute the same flush deterministically
- **Not in NgRx state**: Archive data is stored directly in IndexedDB, not in the NgRx store. Only the operations (`moveToArchive`, `flushYoungToOld`) are logged for sync.
- **Sync handling**: On remote clients, `ArchiveOperationHandler` writes archive data AFTER receiving the operation (see [archive-operations.md](./06-archive-operations.md)).
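The flush mechanism above reduces to two deterministic decisions: is a flush due (last flush more than ~14 days ago), and which tasks move (archived more than 21 days ago). A minimal sketch, with illustrative names and an assumed `archivedAt` timestamp field:

```typescript
// Sketch of the flushYoungToOld decision described above. The thresholds
// (14-day flush interval, 21-day age cutoff) come from the notes; the
// ArchivedTask shape and function names are illustrative.
const DAY_MS = 24 * 60 * 60 * 1000;

interface ArchivedTask {
  id: string;
  archivedAt: number; // epoch ms (assumed field name)
}

function isFlushDue(lastTimeTrackingFlush: number, now: number): boolean {
  return now - lastTimeTrackingFlush > 14 * DAY_MS;
}

function splitYoungOld(
  young: ArchivedTask[],
  now: number,
): { keepYoung: ArchivedTask[]; moveToOld: ArchivedTask[] } {
  const cutoff = now - 21 * DAY_MS;
  return {
    keepYoung: young.filter((t) => t.archivedAt >= cutoff),
    moveToOld: young.filter((t) => t.archivedAt < cutoff),
  };
}

const now = Date.UTC(2026, 0, 23);
const { keepYoung, moveToOld } = splitYoungOld(
  [
    { id: 'recent', archivedAt: now - 5 * DAY_MS },
    { id: 'stale', archivedAt: now - 30 * DAY_MS },
  ],
  now,
);
```

Because both decisions depend only on timestamps carried in the synced operation, every client that replays `flushYoungToOld` computes the same split.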
## Key Files

| File                                                   | Purpose                                |
| ------------------------------------------------------ | -------------------------------------- |
| `op-log/effects/operation-log.effects.ts`              | Captures actions and writes operations |
| `op-log/store/operation-log-store.service.ts`          | IndexedDB wrapper for SUP_OPS          |
| `op-log/persistence/operation-log-hydrator.service.ts` | Startup hydration                      |
| `op-log/processing/operation-applier.service.ts`       | Replays operations to NgRx             |
| `features/time-tracking/archive.service.ts`            | Archive write logic                    |

docs/sync-and-op-log/diagrams/02-server-sync.md (Normal file, 327 lines)

@@ -0,0 +1,327 @@
# Server Sync Architecture (SuperSync)

**Last Updated:** January 2026
**Status:** Implemented

This diagram shows the complete sync architecture for SuperSync: client-side flow, server API endpoints, PostgreSQL database operations, and server-side processing.
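Both the client-side conflict management and the server-side conflict detection rest on vector-clock comparison. A minimal sketch whose outcomes (GREATER_THAN / LESS_THAN / CONCURRENT) match the labels used in the diagram; the function and type names are illustrative:

```typescript
// Sketch of vector-clock comparison for conflict detection. The result
// names mirror the diagram (GREATER_THAN → accept, LESS_THAN → reject,
// CONCURRENT → conflict resolution); EQUAL is included for completeness.
type VectorClock = Record<string, number>; // clientId -> counter

type ClockOrder = 'EQUAL' | 'GREATER_THAN' | 'LESS_THAN' | 'CONCURRENT';

function compareVectorClocks(a: VectorClock, b: VectorClock): ClockOrder {
  let aAhead = false;
  let bAhead = false;
  // Inspect every client id appearing in either clock; missing entries count as 0.
  const ids = Object.keys({ ...a, ...b });
  for (const id of ids) {
    const av = a[id] ?? 0;
    const bv = b[id] ?? 0;
    if (av > bv) aAhead = true;
    if (bv > av) bAhead = true;
  }
  if (aAhead && bAhead) return 'CONCURRENT'; // real conflict: LWW kicks in
  if (aAhead) return 'GREATER_THAN'; // a strictly newer
  if (bAhead) return 'LESS_THAN'; // a strictly older
  return 'EQUAL';
}
```

CONCURRENT is the only case that needs last-write-wins timestamp comparison; the other orderings are causally unambiguous.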
## Master Architecture Diagram

```mermaid
graph TB
    %% Styles
    classDef client fill:#fff,stroke:#333,stroke-width:2px,color:black;
    classDef api fill:#e3f2fd,stroke:#1565c0,stroke-width:2px,color:black;
    classDef db fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px,color:black;
    classDef conflict fill:#ffebee,stroke:#c62828,stroke-width:2px,color:black;
    classDef validation fill:#fff3e0,stroke:#ef6c00,stroke-width:2px,color:black;

    %% CLIENT SIDE
    subgraph Client["CLIENT (Angular)"]
        direction TB

        subgraph SyncLoop["Sync Loop"]
            Scheduler((Scheduler)) -->|Interval| SyncService["OperationLogSyncService"]
            SyncService -->|1. Get lastSyncedSeq| LocalMeta["SUP_OPS IndexedDB"]
        end

        subgraph DownloadFlow["Download Flow"]
            SyncService -->|"2. GET /api/sync/ops?sinceSeq=N"| DownAPI
            DownAPI -->|Response| GapCheck{Gap Detected?}
            GapCheck -- "Yes + Empty Server" --> ServerMigration["Server Migration:<br/>Create SYNC_IMPORT"]
            GapCheck -- "Yes + Has Ops" --> ResetSeq["Reset sinceSeq=0<br/>Re-download all"]
            GapCheck -- No --> FreshCheck{Fresh Client?}
            ResetSeq --> FreshCheck
            FreshCheck -- "Yes + Has Ops" --> ConfirmDialog["Confirmation Dialog"]
            FreshCheck -- No --> FilterApplied
            ConfirmDialog -- Confirmed --> FilterApplied{Already Applied?}
            ConfirmDialog -- Cancelled --> SkipDownload[Skip]
            FilterApplied -- Yes --> Discard[Discard]
            FilterApplied -- No --> ConflictDet
        end

        subgraph ConflictMgmt["Conflict Management (LWW Auto-Resolution)"]
            ConflictDet{"Compare<br/>Vector Clocks"}:::conflict
            ConflictDet -- Sequential --> ApplyRemote
            ConflictDet -- Concurrent --> AutoCheck{"Auto-Resolve?"}

            AutoCheck -- "Both DELETE or<br/>Identical payload" --> AutoResolve["Auto: Keep Remote"]
            AutoCheck -- "Real conflict" --> LWWResolve["LWW: Compare<br/>Timestamps"]:::conflict

            AutoResolve --> MarkRejected
            LWWResolve -- "Remote newer<br/>or tie" --> MarkRejected[Mark Local Rejected]:::conflict
            LWWResolve -- "Local newer" --> LocalWins["Create Update Op<br/>with local state"]:::conflict
            LocalWins --> RejectBoth[Mark both rejected]
            RejectBoth --> CreateNewOp[New op syncs local state]
            MarkRejected --> ApplyRemote
        end

        subgraph Application["Application & Validation"]
            ApplyRemote -->|Dispatch| NgRx["NgRx Store"]
            NgRx --> Validator{Valid State?}
            Validator -- Yes --> SyncDone((Done))
            Validator -- No --> Repair["Auto-Repair"]:::conflict
            Repair --> NgRx
        end

        subgraph UploadFlow["Upload Flow"]
            LocalMeta -->|Get Unsynced| PendingOps[Pending Ops]
            PendingOps --> FreshUploadCheck{Fresh Client?}
            FreshUploadCheck -- Yes --> BlockUpload["Block Upload<br/>(must download first)"]
            FreshUploadCheck -- No --> FilterRejected{Rejected?}
            FilterRejected -- Yes --> SkipRejected[Skip]
            FilterRejected -- No --> ClassifyOp{Op Type?}

            ClassifyOp -- "SYNC_IMPORT<br/>BACKUP_IMPORT<br/>REPAIR" --> SnapshotAPI
            ClassifyOp -- "CRT/UPD/DEL/MOV/BATCH" --> OpsAPI

            OpsAPI -->|Response with<br/>piggybackedOps| ProcessPiggybacked["Process Piggybacked<br/>(→ Conflict Detection)"]
            ProcessPiggybacked --> ConflictDet
        end
    end

    %% SERVER API LAYER
    subgraph Server["SERVER (Fastify + Node.js)"]
        direction TB

        subgraph APIEndpoints["API Endpoints"]
            DownAPI["GET /api/sync/ops<br/>━━━━━━━━━━━━━━━<br/>Download operations<br/>Query: sinceSeq, limit"]:::api
            OpsAPI["POST /api/sync/ops<br/>━━━━━━━━━━━━━━━<br/>Upload operations<br/>Body: ops[], clientId"]:::api
            SnapshotAPI["POST /api/sync/snapshot<br/>━━━━━━━━━━━━━━━<br/>Upload full state<br/>Body: state, reason"]:::api
            GetSnapshotAPI["GET /api/sync/snapshot<br/>━━━━━━━━━━━━━━━<br/>Get full state"]:::api
            StatusAPI["GET /api/sync/status<br/>━━━━━━━━━━━━━━━<br/>Check sync status"]:::api
            RestoreAPI["GET /api/sync/restore/:seq<br/>━━━━━━━━━━━━━━━<br/>Restore to point"]:::api
        end

        subgraph ServerProcessing["Server-Side Processing (SyncService)"]
            direction TB

            subgraph Validation["1. Validation"]
                V1["Validate op.id, opType"]
                V2["Validate entityType allowlist"]
                V3["Sanitize vectorClock"]
                V4["Check payload size"]
                V5["Check timestamp drift"]
            end

            subgraph ConflictCheck["2. Conflict Detection"]
                C1["Find latest op for entity"]
                C2["Compare vector clocks"]
                C3{Result?}
                C3 -- GREATER_THAN --> C4[Accept]
                C3 -- CONCURRENT --> C5[Reject]
                C3 -- LESS_THAN --> C6[Reject]
            end

            subgraph Persist["3. Persistence (REPEATABLE_READ)"]
                P1["Increment lastSeq"]
                P2["Re-check conflict"]
                P3["INSERT operation"]
                P4{DEL op?}
                P4 -- Yes --> P5["UPSERT tombstone"]
|
||||
P4 -- No --> P6[Skip]
|
||||
P7["UPSERT sync_device"]
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
%% POSTGRESQL DATABASE
|
||||
subgraph PostgreSQL["POSTGRESQL DATABASE"]
|
||||
direction TB
|
||||
|
||||
OpsTable[("operations<br/>━━━━━━━━━━━━━━━<br/>id, serverSeq<br/>opType, entityType<br/>entityId, payload<br/>vectorClock<br/>clientTimestamp")]:::db
|
||||
|
||||
SyncState[("user_sync_state<br/>━━━━━━━━━━━━━━━<br/>lastSeq<br/>snapshotData<br/>lastSnapshotSeq")]:::db
|
||||
|
||||
Devices[("sync_devices<br/>━━━━━━━━━━━━━━━<br/>clientId<br/>lastSeenAt<br/>lastAckedSeq")]:::db
|
||||
|
||||
Tombstones[("tombstones<br/>━━━━━━━━━━━━━━━<br/>entityType<br/>entityId<br/>deletedAt")]:::db
|
||||
end
|
||||
|
||||
%% CONNECTIONS: API -> Processing
|
||||
OpsAPI --> V1
|
||||
SnapshotAPI --> V1
|
||||
V1 --> V2 --> V3 --> V4 --> V5
|
||||
V5 --> C1 --> C2 --> C3
|
||||
C4 --> P1 --> P2 --> P3 --> P4
|
||||
P5 --> P7
|
||||
P6 --> P7
|
||||
|
||||
%% CONNECTIONS: Processing -> Database
|
||||
P1 -.->|"UPDATE"| SyncState
|
||||
P3 -.->|"INSERT"| OpsTable
|
||||
P5 -.->|"UPSERT"| Tombstones
|
||||
P7 -.->|"UPSERT"| Devices
|
||||
|
||||
%% CONNECTIONS: Read endpoints -> Database
|
||||
DownAPI -.->|"SELECT ops > sinceSeq"| OpsTable
|
||||
DownAPI -.->|"SELECT lastSeq"| SyncState
|
||||
GetSnapshotAPI -.->|"SELECT snapshot"| SyncState
|
||||
GetSnapshotAPI -.->|"SELECT (replay)"| OpsTable
|
||||
StatusAPI -.->|"SELECT"| SyncState
|
||||
StatusAPI -.->|"COUNT"| Devices
|
||||
RestoreAPI -.->|"SELECT (replay)"| OpsTable
|
||||
|
||||
%% Subgraph styles
|
||||
style Validation fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
|
||||
style ConflictCheck fill:#ffebee,stroke:#c62828,stroke-width:2px
|
||||
style Persist fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
|
||||
style PostgreSQL fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
|
||||
style APIEndpoints fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
|
||||
```

## Quick Reference Tables

### API Endpoints

| Endpoint | Method | Purpose | DB Operations |
| -------------------------- | ------ | ------------------------------- | -------------------------------------------------------------------- |
| `/api/sync/ops` | POST | Upload operations | INSERT ops, UPDATE lastSeq, UPSERT device, UPSERT tombstone (if DEL) |
| `/api/sync/ops?sinceSeq=N` | GET | Download operations | SELECT ops, SELECT lastSeq, find latest snapshot (skip optimization) |
| `/api/sync/snapshot` | POST | Upload full state (SYNC_IMPORT) | Same as POST /ops + UPDATE snapshot cache |
| `/api/sync/snapshot` | GET | Get full state | SELECT snapshot (or replay ops if stale) |
| `/api/sync/status` | GET | Check sync status | SELECT lastSeq, COUNT devices |
| `/api/sync/restore-points` | GET | List restore points | SELECT ops (filter SYNC_IMPORT, BACKUP_IMPORT, REPAIR) |
| `/api/sync/restore/:seq` | GET | Restore to specific point | SELECT ops, replay to targetSeq |

### PostgreSQL Tables

| Table | Purpose | Key Columns |
| ----------------- | ------------------------------------------ | ------------------------------------------------------- |
| `operations` | Event log (append-only) | id, serverSeq, opType, entityType, payload, vectorClock |
| `user_sync_state` | Per-user metadata + cached snapshot | lastSeq, snapshotData, lastSnapshotSeq |
| `sync_devices` | Device tracking | clientId, lastSeenAt, lastAckedSeq |
| `tombstones` | Deleted entity tracking (30-day retention) | entityType, entityId, deletedAt, expiresAt |

### Key Implementation Details

- **Transaction Isolation**: `REPEATABLE_READ` prevents phantom reads during conflict detection
- **Double Conflict Check**: Before AND after sequence allocation (race condition guard)
- **Idempotency**: Duplicate op IDs rejected with `DUPLICATE_OPERATION` error
- **Gzip Support**: Both upload and download support `Content-Encoding: gzip` for bandwidth savings
- **Rate Limiting**: Per-user limits (100 uploads/min, 200 downloads/min)
- **Auto-Resolve Conflicts (Identical)**: Identical conflicts (both DELETE, or same payload) auto-resolved as "remote" without user intervention
- **LWW Conflict Resolution**: Real conflicts are automatically resolved using Last-Write-Wins (timestamp comparison)
- **Fresh Client Safety**: Clients with no history blocked from uploading; confirmation dialog shown before accepting first remote data
- **Piggybacked Ops**: Upload response includes new remote ops → processed immediately to trigger conflict detection
- **Gap Detection**: Server returns `gapDetected: true` when client sinceSeq is invalid → client resets to seq=0 and re-downloads all ops
- **Server Migration**: Gap + empty server (no ops) → client creates SYNC_IMPORT to seed new server
- **Snapshot Skip Optimization**: Server skips pre-snapshot operations when `sinceSeq < latestSnapshotSeq`
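
The snapshot-skip decision from the last bullet can be sketched in a few lines. This is an illustrative sketch only; `planDownload` and its field names are hypothetical, not the actual server code.

```typescript
// Hypothetical sketch of the snapshot-skip optimization: when the client's
// sinceSeq predates the latest cached snapshot, replaying every historical op
// is wasted work, so the server serves the snapshot plus only the ops after it.
interface DownloadPlan {
  sendSnapshot: boolean;
  opsAfterSeq: number; // exclusive lower bound for the ops query
}

function planDownload(sinceSeq: number, latestSnapshotSeq: number): DownloadPlan {
  if (sinceSeq < latestSnapshotSeq) {
    // Client is behind the snapshot: skip all pre-snapshot operations.
    return { sendSnapshot: true, opsAfterSeq: latestSnapshotSeq };
  }
  // Client is at or past the snapshot: plain incremental download.
  return { sendSnapshot: false, opsAfterSeq: sinceSeq };
}
```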

## Full-State Operations via Snapshot Endpoint

Full-state operations (BackupImport, Repair, SyncImport) contain the entire application state and can exceed the regular `/api/sync/ops` body size limit (~30MB). These operations are routed through the `/api/sync/snapshot` endpoint instead.
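
The routing decision can be sketched as follows. The `OpType` values and endpoint paths come from this document; the enum shape and the `endpointFor` helper are illustrative, not the actual upload service API.

```typescript
// Sketch of routing ops by type: full-state ops can exceed the ~30MB body
// limit of /api/sync/ops, so they go to the dedicated snapshot endpoint.
enum OpType {
  CRT = 'CRT',
  UPD = 'UPD',
  DEL = 'DEL',
  MOV = 'MOV',
  BATCH = 'BATCH',
  SyncImport = 'SYNC_IMPORT',
  BackupImport = 'BACKUP_IMPORT',
  Repair = 'REPAIR',
}

const FULL_STATE_OPS: ReadonlySet<OpType> = new Set([
  OpType.SyncImport,
  OpType.BackupImport,
  OpType.Repair,
]);

function endpointFor(opType: OpType): string {
  return FULL_STATE_OPS.has(opType) ? '/api/sync/snapshot' : '/api/sync/ops';
}
```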

```mermaid
flowchart TB
subgraph "Upload Decision Flow"
GetUnsynced[Get Unsynced Operations<br/>from IndexedDB]
Classify{Classify by OpType}

GetUnsynced --> Classify

subgraph FullStateOps["Full-State Operations"]
SyncImport[OpType.SyncImport]
BackupImport[OpType.BackupImport]
Repair[OpType.Repair]
end

subgraph RegularOps["Regular Operations"]
CRT[OpType.CRT]
UPD[OpType.UPD]
DEL[OpType.DEL]
MOV[OpType.MOV]
BATCH[OpType.BATCH]
end

Classify --> FullStateOps
Classify --> RegularOps

FullStateOps --> SnapshotPath
RegularOps --> OpsPath

subgraph SnapshotPath["Snapshot Endpoint Path"]
MapReason["Map OpType to reason:<br/>SyncImport → 'initial'<br/>BackupImport → 'recovery'<br/>Repair → 'recovery'"]
Encrypt1{E2E Encryption<br/>Enabled?}
EncryptPayload[Encrypt state payload]
UploadSnapshot["POST /api/sync/snapshot<br/>{state, clientId, reason,<br/>vectorClock, schemaVersion}"]
end

subgraph OpsPath["Ops Endpoint Path"]
Encrypt2{E2E Encryption<br/>Enabled?}
EncryptOps[Encrypt operation payloads]
Batch[Batch up to 100 ops]
UploadOps["POST /api/sync/ops<br/>{ops[], clientId, lastKnownSeq}"]
end

MapReason --> Encrypt1
Encrypt1 -- Yes --> EncryptPayload
Encrypt1 -- No --> UploadSnapshot
EncryptPayload --> UploadSnapshot

Encrypt2 -- Yes --> EncryptOps
Encrypt2 -- No --> Batch
EncryptOps --> Batch
Batch --> UploadOps
end

UploadSnapshot --> MarkSynced[Mark Operation as Synced]
UploadOps --> MarkSynced

style FullStateOps fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
style RegularOps fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
style SnapshotPath fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
style OpsPath fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
```

## Gap Detection

Gap detection identifies situations where the client cannot reliably sync incrementally and must take corrective action.

### The Four Gap Cases

| Case | Condition | Meaning | Typical Cause |
| ---- | --------------------------------- | ----------------------------------- | -------------------------------------- |
| 1 | `sinceSeq > 0 && latestSeq === 0` | Client has history, server is empty | Server was reset/migrated |
| 2 | `sinceSeq > latestSeq` | Client is ahead of server | Server DB restored from old backup |
| 3 | `sinceSeq < minSeq - 1` | Requested ops were purged | Retention policy deleted old ops |
| 4 | `firstOpSeq > sinceSeq + 1` | Gap in sequence numbers | Database corruption or manual deletion |
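
Assuming the server has the four quantities used in the table available, the checks can be sketched as a single predicate. The interface and function names here are hypothetical, not the actual server implementation.

```typescript
// Sketch of the four gap cases from the table above.
interface GapCheckInput {
  sinceSeq: number;    // client's last known server sequence
  latestSeq: number;   // server's newest sequence (0 = empty server)
  minSeq: number;      // oldest op still retained on the server
  firstOpSeq?: number; // first op that would be returned, if any
}

function isGapDetected(c: GapCheckInput): boolean {
  // Case 1: client has history but the server was reset/migrated.
  if (c.sinceSeq > 0 && c.latestSeq === 0) return true;
  // Case 2: client is ahead of the server (restored from old backup).
  if (c.sinceSeq > c.latestSeq) return true;
  // Case 3: the requested ops were purged by retention.
  if (c.sinceSeq < c.minSeq - 1) return true;
  // Case 4: a hole in the sequence numbers.
  if (c.firstOpSeq !== undefined && c.firstOpSeq > c.sinceSeq + 1) return true;
  return false;
}
```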

### Client-Side Handling

```mermaid
flowchart TD
Download["Download ops from server"]
GapCheck{gapDetected?}
Reset["Reset sinceSeq = 0<br/>Clear accumulated ops"]
ReDownload["Re-download from beginning"]
HasReset{Already reset<br/>this session?}
ServerEmpty{Server empty?<br/>latestSeq === 0}
Migration["Server Migration:<br/>Create SYNC_IMPORT<br/>with full local state"]
Continue["Process downloaded ops normally"]

Download --> GapCheck
GapCheck -->|Yes| HasReset
HasReset -->|No| Reset
Reset --> ReDownload
ReDownload --> GapCheck
HasReset -->|Yes| ServerEmpty
GapCheck -->|No| Continue
ServerEmpty -->|Yes| Migration
ServerEmpty -->|No| Continue
Migration --> Continue

style Migration fill:#fff3e0,stroke:#e65100,stroke-width:2px
style Reset fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
```

## Key Files

| File | Purpose |
| ------------------------------------------------------- | ------------------------------- |
| `src/app/op-log/sync/operation-log-sync.service.ts` | Main sync orchestration |
| `src/app/op-log/sync/operation-log-upload.service.ts` | Upload logic |
| `src/app/op-log/sync/operation-log-download.service.ts` | Download logic |
| `src/app/op-log/sync/conflict-resolution.service.ts` | LWW conflict resolution |
| `src/app/op-log/sync/server-migration.service.ts` | Server migration (empty server) |
| `packages/super-sync-server/src/sync/` | Server-side sync implementation |

docs/sync-and-op-log/diagrams/03-conflict-resolution.md (new file, 310 lines)
@@ -0,0 +1,310 @@

# Conflict Resolution & SYNC_IMPORT Filtering

**Last Updated:** January 2026
**Status:** Implemented

This document covers LWW (Last-Write-Wins) conflict auto-resolution and SYNC_IMPORT filtering with clean slate semantics.

## LWW (Last-Write-Wins) Conflict Auto-Resolution

When two clients make concurrent changes to the same entity, a conflict occurs. Rather than interrupting the user with a dialog, the system automatically resolves conflicts using **Last-Write-Wins (LWW)** based on operation timestamps.

### What is a Conflict?

A conflict occurs when vector clock comparison returns `CONCURRENT` - meaning neither operation "happened before" the other. They represent independent, simultaneous edits.

```mermaid
flowchart TD
subgraph Detection["Conflict Detection (Vector Clocks)"]
Download[Download remote ops] --> Compare{Compare Vector Clocks}

Compare -->|"LESS_THAN<br/>(remote is older)"| Discard["Discard remote<br/>(already have it)"]
Compare -->|"GREATER_THAN<br/>(remote is newer)"| Apply["Apply remote<br/>(sequential update)"]
Compare -->|"CONCURRENT<br/>(independent edits)"| Conflict["⚠️ CONFLICT<br/>Both changed same entity"]
end

subgraph Example["Example: Concurrent Edits"]
direction LR
ClientA["Client A<br/>Clock: {A:5, B:3}<br/>Marks task done"]
ClientB["Client B<br/>Clock: {A:4, B:4}<br/>Renames task"]

ClientA -.->|"Neither dominates"| Concurrent["CONCURRENT<br/>A has more A,<br/>B has more B"]
ClientB -.-> Concurrent
end

Conflict --> Resolution["LWW Resolution"]

style Conflict fill:#ffebee,stroke:#c62828,stroke-width:2px
style Concurrent fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
```

### LWW Resolution Algorithm

The winner is determined by comparing the **maximum timestamp** from each operation's vector clock. The operation with the later timestamp wins. Ties go to remote (to ensure convergence).
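
A minimal sketch of this rule, assuming (per the implementation details table later in this document) that the relevant timestamp is the maximum value in each operation's vector clock; the function names are illustrative:

```typescript
// LWW sketch: the winner is the op whose vector clock holds the later maximum
// timestamp. Ties go to remote so that every client converges on the same result.
type VectorClock = Record<string, number>;

const maxTimestamp = (clock: VectorClock): number =>
  Math.max(...Object.values(clock));

function lwwWinner(local: VectorClock, remote: VectorClock): 'local' | 'remote' {
  // Strictly greater is required: a tie resolves to remote for convergence.
  return maxTimestamp(local) > maxTimestamp(remote) ? 'local' : 'remote';
}
```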

```mermaid
flowchart TD
subgraph Input["Conflicting Operations"]
Local["LOCAL Operation<br/>━━━━━━━━━━━━━━━<br/>vectorClock: {A:5, B:3}<br/>timestamps: [1702900000, 1702899000]<br/>maxTimestamp: 1702900000"]
Remote["REMOTE Operation<br/>━━━━━━━━━━━━━━━<br/>vectorClock: {A:4, B:4}<br/>timestamps: [1702898000, 1702901000]<br/>maxTimestamp: 1702901000"]
end

subgraph Algorithm["LWW Comparison"]
GetMax["Extract max timestamp<br/>from each vector clock"]
Compare{"Compare<br/>Timestamps"}

GetMax --> Compare

Compare -->|"Local > Remote"| LocalWins["🏆 LOCAL WINS<br/>Local state preserved<br/>Create UPDATE op to sync"]
Compare -->|"Remote > Local<br/>OR tie"| RemoteWins["🏆 REMOTE WINS<br/>Apply remote state<br/>Reject local op"]
end

Local --> GetMax
Remote --> GetMax

subgraph Outcome["Resolution Outcome"]
LocalWins --> CreateOp["Create new UPDATE operation<br/>with current entity state<br/>+ merged vector clock"]
RemoteWins --> MarkRejected["Mark local op as rejected<br/>Apply remote op"]

CreateOp --> Sync["New op syncs to server<br/>Other clients receive update"]
MarkRejected --> Apply["Remote state applied<br/>User sees change"]
end

style LocalWins fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
style RemoteWins fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
style CreateOp fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
```

### Two Possible Outcomes

```mermaid
flowchart LR
subgraph RemoteWinsPath["REMOTE WINS (more common)"]
direction TB
RW1["Remote timestamp >= Local timestamp"]
RW2["Mark local op as REJECTED"]
RW3["Apply remote operation"]
RW4["Local change is overwritten"]

RW1 --> RW2 --> RW3 --> RW4
end

subgraph LocalWinsPath["LOCAL WINS (less common)"]
direction TB
LW1["Local timestamp > Remote timestamp"]
LW2["Mark BOTH ops as rejected"]
LW3["Keep current local state"]
LW4["Create NEW update operation<br/>with merged vector clock"]
LW5["New op syncs to server"]
LW6["Other clients receive<br/>local state as update"]

LW1 --> LW2 --> LW3 --> LW4 --> LW5 --> LW6
end

style RemoteWinsPath fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
style LocalWinsPath fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
```

### Complete LWW Flow

```mermaid
sequenceDiagram
participant A as Client A
participant S as Server
participant B as Client B

Note over A,B: Both start with Task "Buy milk"

A->>A: User marks task done (T=100)
B->>B: User renames to "Buy oat milk" (T=105)

Note over A,B: Both go offline, then reconnect

B->>S: Upload: Rename op (T=105)
S-->>B: OK (serverSeq=50)

A->>S: Upload: Done op (T=100)
S-->>A: Rejected (CONCURRENT with seq=50)
S-->>A: Piggybacked: Rename op from B

Note over A: Conflict detected!<br/>Local: Done (T=100)<br/>Remote: Rename (T=105)

A->>A: LWW: Remote wins (105 > 100)
A->>A: Mark local op REJECTED
A->>A: Apply remote (rename)
A->>A: Show snackbar notification

Note over A: Task is now "Buy oat milk"<br/>(not done - A's change lost)

A->>S: Sync (download only)
B->>S: Sync
S-->>B: No new ops

Note over A,B: ✅ Both clients converged<br/>Task: "Buy oat milk" (not done)
```

### User Notification

```mermaid
flowchart LR
subgraph Resolution["After LWW Resolution"]
Resolved["Conflicts resolved"]
end

subgraph Notification["User Notification"]
Snack["📋 Snackbar<br/>━━━━━━━━━━━━━━━<br/>'X conflicts were<br/>auto-resolved'<br/>━━━━━━━━━━━━━━━<br/>Non-blocking<br/>Auto-dismisses"]
end

subgraph Backup["Safety Net"]
BackupCreated["💾 Safety Backup<br/>━━━━━━━━━━━━━━━<br/>Created BEFORE resolution<br/>User can restore if needed"]
end

Resolution --> Notification
Resolution --> Backup

style Snack fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
style BackupCreated fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
```

### Key Implementation Details

| Aspect | Implementation |
| ---------------------- | --------------------------------------------------------------------------- |
| **Timestamp Source** | `Math.max(...Object.values(vectorClock))` - max timestamp from vector clock |
| **Tie Breaker** | Remote wins (ensures convergence across all clients) |
| **Safety Backup** | Created via `BackupService` before any resolution |
| **Local Win Update** | New `OpType.UPD` operation created with merged vector clock |
| **Vector Clock Merge** | `mergeVectorClocks(localClock, remoteClock)` for local-win ops |
| **Entity State** | Retrieved from NgRx store via entity-specific selectors |
| **Notification** | Non-blocking snackbar showing count of resolved conflicts |

---

## SYNC_IMPORT Filtering with Clean Slate Semantics

When a SYNC_IMPORT or BACKUP_IMPORT operation is received, it represents an explicit user action to restore **all clients** to a specific point in time. Operations created without knowledge of the import are filtered out using vector clock comparison.

### The Problem: Stale Operations After Import

```mermaid
sequenceDiagram
participant A as Client A
participant S as Server
participant B as Client B

Note over A,B: Both start synced

A->>A: Create Op1, Op2 (offline)

Note over B: Client B does SYNC_IMPORT<br/>(restores from backup)

B->>S: Upload SYNC_IMPORT

Note over A: Client A comes online

A->>S: Upload Op1, Op2
A->>A: Download SYNC_IMPORT

Note over A: Problem: Op1, Op2 reference<br/>entities that were WIPED by import
```

### The Solution: Clean Slate Semantics

SYNC_IMPORT/BACKUP_IMPORT are explicit user actions to restore to a specific state. **ALL operations without knowledge of the import are dropped** - this ensures a true "restore to point in time" semantic.

We use **vector clock comparison** (not UUIDv7 timestamps) because vector clocks track **causality** ("did the client know about the import?") rather than wall-clock time (which can be affected by clock drift).
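
A sketch of the resulting filter, with the vector-clock comparator passed in. The type and function names here are illustrative, not the actual `SyncImportFilterService` API:

```typescript
// Clean-slate filter sketch: an op survives only if its vector clock shows it
// was created with knowledge of the import (GREATER_THAN) or shares the
// import's causal history (EQUAL). Dominated or concurrent ops are dropped.
type VectorClock = Record<string, number>;
type Comparison = 'GREATER_THAN' | 'EQUAL' | 'LESS_THAN' | 'CONCURRENT';

interface Op {
  id: string;
  vectorClock: VectorClock;
}

function filterAfterImport(
  ops: Op[],
  importClock: VectorClock,
  compare: (a: VectorClock, b: VectorClock) => Comparison,
): Op[] {
  return ops.filter((op) => {
    const rel = compare(op.vectorClock, importClock);
    return rel === 'GREATER_THAN' || rel === 'EQUAL';
  });
}
```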

```mermaid
flowchart TD
subgraph Input["Remote Operations Received"]
Ops["Op1, Op2, SYNC_IMPORT, Op3, Op4"]
end

subgraph Filter["SyncImportFilterService"]
FindImport["Find latest SYNC_IMPORT<br/>(in batch or local store)"]
Compare["Compare each op's vector clock<br/>against import's vector clock"]
end

subgraph Results["Vector Clock Comparison"]
GT["GREATER_THAN<br/>Op created AFTER seeing import"]
EQ["EQUAL<br/>Same causal history"]
LT["LESS_THAN<br/>Op dominated by import"]
CC["CONCURRENT<br/>Op created WITHOUT<br/>knowledge of import"]
end

subgraph Outcome["Outcome"]
Keep["✅ KEEP"]
Drop["❌ DROP"]
end

Input --> FindImport
FindImport --> Compare
Compare --> GT
Compare --> EQ
Compare --> LT
Compare --> CC

GT --> Keep
EQ --> Keep
LT --> Drop
CC --> Drop

style GT fill:#c8e6c9,stroke:#2e7d32
style EQ fill:#c8e6c9,stroke:#2e7d32
style LT fill:#ffcdd2,stroke:#c62828
style CC fill:#ffcdd2,stroke:#c62828
style Keep fill:#e8f5e9,stroke:#2e7d32
style Drop fill:#ffebee,stroke:#c62828
```

### Vector Clock Comparison Results

| Comparison | Meaning | Action |
| -------------- | -------------------------------------- | -------------------------- |
| `GREATER_THAN` | Op created after seeing import | ✅ Keep (has knowledge) |
| `EQUAL` | Same causal history as import | ✅ Keep |
| `LESS_THAN` | Op dominated by import | ❌ Drop (already captured) |
| `CONCURRENT` | Op created without knowledge of import | ❌ Drop (clean slate) |

### Why Vector Clocks Instead of UUIDv7?

Vector clocks track **causality** - whether a client "knew about" the import when it created an operation. UUIDv7 timestamps only track wall-clock time, which is unreliable due to clock drift between devices. An operation created 5 seconds after an import (by timestamp) may still reference entities that no longer exist if the client hadn't seen the import yet.
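
The causality comparison itself is the standard vector-clock algorithm. A minimal sketch, with the function name assumed rather than copied from `vector-clock.service.ts`:

```typescript
// Standard vector-clock comparison: a clock "dominates" another when it is
// ahead on at least one client counter and behind on none. If each clock is
// ahead somewhere, the two operations are causally independent (CONCURRENT).
type VectorClock = Record<string, number>;
type Comparison = 'GREATER_THAN' | 'LESS_THAN' | 'EQUAL' | 'CONCURRENT';

function compareVectorClocks(a: VectorClock, b: VectorClock): Comparison {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  let aAhead = false;
  let bAhead = false;
  for (const k of keys) {
    const av = a[k] ?? 0; // missing entry counts as 0
    const bv = b[k] ?? 0;
    if (av > bv) aAhead = true;
    if (av < bv) bAhead = true;
  }
  if (aAhead && bAhead) return 'CONCURRENT';
  if (aAhead) return 'GREATER_THAN';
  if (bAhead) return 'LESS_THAN';
  return 'EQUAL';
}
```

With this definition the diagram's example holds: `{A:2, B:3}` vs `{A:3}` is `CONCURRENT`, so the op is filtered regardless of any clock drift.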

```mermaid
flowchart LR
subgraph UUIDv7["❌ UUIDv7 Approach (Previous)"]
direction TB
U1["Client B's clock is 2 hours AHEAD"]
U2["B creates op at REAL time 10:00"]
U3["UUIDv7 timestamp = 12:00<br/>(wrong due to clock drift)"]
U4["SYNC_IMPORT at 11:00"]
U5["Filter check: 12:00 > 11:00"]
U6["🐛 NOT FILTERED!<br/>Old op applied, corrupts state"]

U1 --> U2 --> U3 --> U4 --> U5 --> U6
end

subgraph VectorClock["✅ Vector Clock Approach (Current)"]
direction TB
V1["Client B's clock is 2 hours AHEAD"]
V2["B creates op (offline)"]
V3["op.vectorClock = {A: 2, B: 3}<br/>(wall-clock time irrelevant)"]
V4["SYNC_IMPORT.vectorClock = {A: 3}"]
V5["Compare: {A:2,B:3} vs {A:3}<br/>Result: CONCURRENT"]
V6["✅ FILTERED!<br/>Op created without knowledge of import"]

V1 --> V2 --> V3 --> V4 --> V5 --> V6
end

style U6 fill:#ffcccc
style V6 fill:#ccffcc
```

## Key Files

| File | Purpose |
| ------------------------------------------------------- | --------------------------------- |
| `src/app/op-log/sync/conflict-resolution.service.ts` | LWW conflict auto-resolution |
| `src/app/op-log/sync/sync-import-filter.service.ts` | SYNC_IMPORT filtering logic |
| `src/app/op-log/sync/operation-log-download.service.ts` | Download and apply remote ops |
| `src/app/op-log/sync/vector-clock.service.ts` | Vector clock comparison utilities |
|
||||
|
|
@ -1,40 +1,30 @@
|
|||
# Unified Op-Log Sync Architecture Diagrams
|
||||
# File-Based Sync Architecture
|
||||
|
||||
**Status:** Implemented (Phase 4 Testing Complete)
|
||||
**Related:** [Implementation Plan](../ai/file-based-oplog-sync-implementation-plan.md)
|
||||
**Last Updated:** January 2026
|
||||
**Status:** Implemented
|
||||
|
||||
This document contains Mermaid diagrams explaining the unified operation-log sync architecture for file-based providers (WebDAV, Dropbox, LocalFile).
|
||||
This document contains diagrams explaining the unified operation-log sync architecture for file-based providers (WebDAV, Dropbox, LocalFile).
|
||||
|
||||
## Table of Contents
|
||||
## Overview
|
||||
|
||||
1. [Remote Storage Structure](#1-remote-storage-structure) - What gets stored on providers
|
||||
2. [Architecture Overview](#2-architecture-overview) - System components and flow
|
||||
3. [TypeScript Types](#3-typescript-types) - Data structure definitions
|
||||
4. [Sync Flow](#4-sync-flow-content-based-optimistic-locking) - Upload/download sequence
|
||||
5. [Conflict Resolution](#5-conflict-resolution-two-clients-syncing-simultaneously) - How conflicts are handled
|
||||
6. [Migration Flow](#6-migration-flow-pfapi-to-op-log) - PFAPI to op-log migration
|
||||
7. [Archive Data Flow](#7-archive-data-flow-via-op-log) - How archive operations sync
|
||||
8. [FlushYoungToOld](#8-flushyoungtoold-operation) - Archive compaction
|
||||
9. [Complete System Flow](#9-complete-system-flow) - End-to-end overview
|
||||
File-based sync uses a single `sync-data.json` file that contains:
|
||||
|
||||
---
|
||||
|
||||
## 1. Remote Storage Structure
|
||||
|
||||
Shows what data is stored on file-based sync providers (WebDAV, Dropbox, LocalFile).
|
||||
- Full application state snapshot
|
||||
- Recent operations buffer (last 200 ops)
|
||||
- Vector clock for conflict detection
|
||||
- Archive data for late-joining clients
|
||||
|
||||
```mermaid
|
||||
flowchart TB
|
||||
subgraph Remote["Remote Storage (WebDAV/Dropbox/LocalFile)"]
|
||||
subgraph Folder["/superProductivity/"]
|
||||
SyncFile["sync-data.json<br/>━━━━━━━━━━━━━━━━━━━<br/>Encrypted + Compressed"]
|
||||
BackupFile["sync-data.json.bak<br/>━━━━━━━━━━━━━━━━━━━<br/>Previous version"]
|
||||
end
|
||||
end
|
||||
|
||||
subgraph Contents["sync-data.json Contents"]
|
||||
direction TB
|
||||
Meta["📋 Metadata<br/>• version: 2<br/>• syncVersion: N (locking)<br/>• schemaVersion<br/>• lastModified<br/>• checksum"]
|
||||
Meta["📋 Metadata<br/>• version: 2<br/>• syncVersion: N (locking)<br/>• schemaVersion<br/>• lastModified<br/>• clientId<br/>• checksum"]
|
||||
|
||||
VClock["🕐 Vector Clock<br/>• {clientA: 42, clientB: 17}<br/>• Tracks causality"]
|
||||
|
||||
|
|
@ -46,7 +36,6 @@ flowchart TB
|
|||
end
|
||||
|
||||
SyncFile --> Contents
|
||||
SyncFile -.->|"Replaced on<br/>successful upload"| BackupFile
|
||||
|
||||
style SyncFile fill:#fff3e0,stroke:#e65100,stroke-width:2px
|
||||
style State fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
|
||||
|
|
@ -54,21 +43,19 @@ flowchart TB
|
|||
style Ops fill:#e1f5fe,stroke:#01579b,stroke-width:2px
|
||||
```
|
||||
|
||||
**Why single file instead of separate snapshot + ops files?**
|
||||
### Why Single File Instead of Separate Snapshot + Ops Files?
|
||||
|
||||
| Single File (chosen) | Two Files (considered) |
|
||||
| ------------------------------ | ------------------------------ |
|
||||
| ✅ Atomic: all or nothing | ❌ Partial upload risk |
|
||||
| ✅ One version to track | ❌ Version coordination |
|
||||
| ✅ Simple conflict resolution | ❌ Two places to handle |
|
||||
| ✅ Easy recovery | ❌ Inconsistent state possible |
|
||||
| ❌ Upload full state each time | ✅ Often just ops |
|
||||
| Single File (chosen) | Two Files (considered) |
|
||||
| --------------------------- | --------------------------- |
|
||||
| Atomic: all or nothing | Partial upload risk |
|
||||
| One version to track | Version coordination |
|
||||
| Simple conflict resolution | Two places to handle |
|
||||
| Easy recovery | Inconsistent state possible |
|
||||
| Upload full state each time | Often just ops |
|
||||
|
||||
The bandwidth cost is acceptable: state compresses well (~90%), and sync is infrequent.
|
||||
|
||||
---
|
||||
|
||||
## 2. Architecture Overview
|
||||
## Architecture Overview
|
||||
|
||||
Shows how `FileBasedSyncAdapter` integrates into the existing op-log system, implementing `OperationSyncCapable` using file operations.
|
||||
|
||||
|
|
@ -99,7 +86,6 @@ flowchart TB
|
|||
|
||||
subgraph RemoteStorage["Remote Storage"]
|
||||
SyncFile["sync-data.json<br/>━━━━━━━━━━━━━━━<br/>• syncVersion<br/>• state snapshot<br/>• recentOps (200)<br/>• vectorClock"]
|
||||
Backup["sync-data.json.bak"]
|
||||
end
|
||||
|
||||
NgRx --> OpLogEffects
|
||||
|
|
@ -118,18 +104,13 @@ flowchart TB
|
|||
WebDAV --> SyncFile
|
||||
Dropbox --> SyncFile
|
||||
LocalFile --> SyncFile
|
||||
SyncFile -.-> Backup
|
||||
|
||||
style FileAdapter fill:#e1f5fe,stroke:#01579b,stroke-width:2px
|
||||
style SyncFile fill:#fff3e0,stroke:#e65100,stroke-width:2px
|
||||
style OpLogStore fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 3. TypeScript Types
|
||||
|
||||
The TypeScript interfaces for the sync data structures.
|
||||
## TypeScript Types
|
||||
|
||||
```mermaid
|
||||
classDiagram
|
||||
|
|
@ -138,8 +119,8 @@ classDiagram
|
|||
+number syncVersion
|
||||
+number schemaVersion
|
||||
+VectorClock vectorClock
|
||||
+number lastSeq
|
||||
+number lastModified
|
||||
+string clientId
|
||||
+AppDataComplete state
|
||||
+ArchiveModel archiveYoung
|
||||
+ArchiveModel archiveOld
|
||||
|
|
@ -189,14 +170,7 @@ classDiagram
|
|||
CompactOperation --> VectorClock : vectorClock
|
||||
```
|
||||
|
||||
**Key files:**
|
||||
|
||||
- Types: `src/app/op-log/sync/providers/file-based/file-based-sync.types.ts`
|
||||
- Adapter: `src/app/op-log/sync/providers/file-based/file-based-sync-adapter.service.ts`
|
||||
|
||||
---
|
||||
|
||||
## 4. Sync Flow (Content-Based Optimistic Locking with Piggybacking)
|
||||
## Sync Flow (Content-Based Optimistic Locking with Piggybacking)
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
|
|
@ -240,7 +214,6 @@ sequenceDiagram
|
|||
Adapter->>Adapter: Set syncVersion = M+1
|
||||
Adapter->>Adapter: Find piggybacked ops<br/>(ops from other clients we haven't seen)
|
||||
|
||||
Adapter->>Provider: uploadFile("sync-data.json.bak", currentData)
|
||||
Adapter->>Provider: uploadFile("sync-data.json", newData)
|
||||
Provider->>Remote: PUT
|
||||
Remote-->>Provider: Success
|
||||
|
|
@ -253,7 +226,7 @@ sequenceDiagram
|
|||
end
|
||||
```
|
||||
|
||||
**Key Insight: Piggybacking**
|
||||
### Key Insight: Piggybacking
|
||||
|
||||
Instead of throwing an error on version mismatch, the adapter:
|
||||
|
||||
|
|
@ -263,9 +236,7 @@ Instead of throwing an error on version mismatch, the adapter:
|
|||
|
||||
This ensures no ops are missed, even when clients sync concurrently.

---

## Conflict Resolution (Two Clients Syncing Simultaneously)

```mermaid
sequenceDiagram
A->>A: Apply TaskY → both clients have both tasks
```

### How Piggybacking Resolves Conflicts

| Step | What Happens |
| ---- | ------------ |

If both A and B modified the same task, the piggybacked ops flow through `ConflictResolutionService`, which uses vector clocks and timestamps to determine the winner.
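The vector-clock comparison behind this resolution is the standard causality check. The sketch below shows the textbook algorithm, not the project's actual implementation; when two clocks are concurrent, a last-write-wins tiebreak on timestamps is then applied.

```typescript
// Standard vector-clock comparison (sketch, not the project's code).
type VClock = Record<string, number>;

function compareClocks(
  a: VClock,
  b: VClock,
): 'BEFORE' | 'AFTER' | 'CONCURRENT' | 'EQUAL' {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  let aGreater = false;
  let bGreater = false;
  for (const k of keys) {
    const av = a[k] ?? 0;
    const bv = b[k] ?? 0;
    if (av > bv) aGreater = true;
    if (bv > av) bGreater = true;
  }
  if (aGreater && bGreater) return 'CONCURRENT'; // needs LWW tiebreak
  if (aGreater) return 'AFTER';
  if (bGreater) return 'BEFORE';
  return 'EQUAL';
}
```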

---

## First-Sync Conflict Handling

When a client with local data syncs for the first time to a remote that already has data, a conflict dialog is shown:

```mermaid
flowchart TD
    Start[First sync attempt] --> Download[Download sync-data.json]
    Download --> HasLocal{Has local data?}
    HasLocal -->|No| Apply[Apply remote state]
    HasLocal -->|Yes| HasRemote{Remote has data?}
    HasRemote -->|No| Upload[Upload local state]
    HasRemote -->|Yes| Dialog[Show conflict dialog]

    Dialog --> UseLocal[User chooses: Use Local]
    Dialog --> UseRemote[User chooses: Use Remote]

    UseLocal --> CreateImport[Create SYNC_IMPORT<br/>with local state]
    CreateImport --> UploadImport[Upload to remote]

    UseRemote --> ApplyRemote[Apply remote state<br/>Discard local]

    style CreateImport fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style Dialog fill:#fff3e0,stroke:#e65100,stroke-width:2px
```
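The branch structure of the flowchart above reduces to a tiny pure function; the names below are illustrative, not the real service API.

```typescript
// The first-sync decision tree as a pure function (illustrative names).
type FirstSyncDecision = 'APPLY_REMOTE' | 'UPLOAD_LOCAL' | 'SHOW_CONFLICT_DIALOG';

function decideFirstSync(
  hasLocalData: boolean,
  remoteHasData: boolean,
): FirstSyncDecision {
  if (!hasLocalData) return 'APPLY_REMOTE'; // nothing local to lose
  if (!remoteHasData) return 'UPLOAD_LOCAL'; // remote is empty
  return 'SHOW_CONFLICT_DIALOG'; // both sides have data → let the user pick
}
```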

---

## Archive Data Flow via Op-Log

Archive operations sync via the operation log. `ArchiveOperationHandler` writes archive data to IndexedDB on both local and remote clients.

```mermaid
sequenceDiagram
    participant User as User
    participant Store as NgRx Store
    participant LocalHandler as ArchiveOperationHandler<br/>(LOCAL_ACTIONS)
    participant Archive as Archive IndexedDB
    participant OpLog as SUP_OPS
    participant Sync as sync-data.json
    participant Remote as Remote Client
    participant RemoteHandler as ArchiveOperationHandler<br/>(via OperationApplier)

    Note over User,RemoteHandler: ═══ LOCAL CLIENT ═══

    User->>Store: Archive completed tasks
    Store->>Store: Dispatch moveToArchive

    par State update
        Store->>Store: Reducer removes from active
    and Archive write (local)
        Store->>LocalHandler: LOCAL_ACTIONS stream
        LocalHandler->>Archive: Write to archiveYoung
    and Operation capture
        Store->>OpLog: Append moveToArchive op
    end

    Note over User,RemoteHandler: ═══ SYNC ═══

    OpLog->>Sync: Upload ops

    Note over User,RemoteHandler: ═══ REMOTE CLIENT ═══

    Remote->>Sync: Download ops
    Remote->>Remote: OperationApplierService
    Remote->>Store: Dispatch moveToArchive<br/>(isRemote: true)

    Note right of Store: LOCAL_ACTIONS filtered<br/>(effect skipped)

    Remote->>RemoteHandler: Explicit call
    RemoteHandler->>Archive: Write to archiveYoung
```

---

## FlushYoungToOld Operation

The `flushYoungToOld` operation moves old tasks from `archiveYoung` to `archiveOld`. Using the same timestamp ensures deterministic results on all clients.

```mermaid
flowchart LR
    subgraph Local["Local Client"]
        Trigger["Trigger:<br/>lastFlush > 14 days"]
        Action["Dispatch flushYoungToOld<br/>(timestamp: T)"]
        LocalSort["sortTimeTracking...<br/>(cutoff: T - 21 days)"]
        LocalYoung["archiveYoung"]
        LocalOld["archiveOld"]
    end

    subgraph Sync["Sync"]
        SyncFile["sync-data.json"]
    end

    subgraph Remote["Remote Client"]
        RemoteApply["Apply flushYoungToOld"]
        RemoteSort["sortTimeTracking...<br/>(same cutoff!)"]
        RemoteYoung["archiveYoung"]
        RemoteOld["archiveOld"]
    end

    Trigger --> Action
    Action --> LocalSort
    LocalSort --> LocalYoung
    LocalSort --> LocalOld
    Action --> SyncFile
    SyncFile --> RemoteApply
    RemoteApply --> RemoteSort
    RemoteSort --> RemoteYoung
    RemoteSort --> RemoteOld

    Note["Using same timestamp<br/>= deterministic results"]

    style Note fill:#fff9c4,stroke:#f57f17
```
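The determinism hinges on every client deriving the cutoff from the operation's timestamp rather than its own clock. A minimal sketch (task shape and function name are illustrative):

```typescript
// Deterministic flush sketch: the cutoff comes from the op's timestamp,
// never from the local clock, so every client splits identically.
const DAY_MS = 24 * 60 * 60 * 1000;

interface ArchivedTask {
  id: string;
  doneOn: number; // epoch ms
}

function splitYoungOld(
  archiveYoung: ArchivedTask[],
  opTimestamp: number,
): { keepInYoung: ArchivedTask[]; moveToOld: ArchivedTask[] } {
  const cutoff = opTimestamp - 21 * DAY_MS;
  return {
    keepInYoung: archiveYoung.filter((t) => t.doneOn >= cutoff),
    moveToOld: archiveYoung.filter((t) => t.doneOn < cutoff),
  };
}
```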

---

## Complete System Flow

```mermaid
flowchart TB
    style OpStore fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
```

---

## Key Points

1. **Single Sync File**: All data in `sync-data.json` - state snapshot + recent ops + vector clock
   - `_expectedSyncVersions`: Tracks the file's syncVersion (for version-mismatch detection)
   - `_localSeqCounters`: Tracks ops we've processed (updated via `setLastServerSeq`)
5. **Archive via Op-Log**: Archive operations sync; `ArchiveOperationHandler` writes the data
6. **Deterministic Replay**: Same operation + same timestamp = same result everywhere

## Implementation Files

| File                                                                               | Purpose                        |
| ---------------------------------------------------------------------------------- | ------------------------------ |
| `src/app/op-log/sync-providers/file-based/file-based-sync-adapter.service.ts`      | Main adapter (~800 LOC)        |
| `src/app/op-log/sync-providers/file-based/file-based-sync.types.ts`                | TypeScript types and constants |
| `src/app/op-log/sync-providers/file-based/file-based-sync-adapter.service.spec.ts` | Unit tests                     |

---

**New file:** `docs/sync-and-op-log/diagrams/05-meta-reducers.md` (191 lines)

# Atomic State Consistency (Meta-Reducer Pattern)

**Last Updated:** January 2026
**Status:** Implemented

This document illustrates how meta-reducers ensure atomic state changes across multiple entities, preventing inconsistency during sync.

## Meta-Reducer Flow for Multi-Entity Operations

```mermaid
flowchart TD
    subgraph UserAction["User Action (e.g., Delete Tag)"]
        Action[deleteTag action]
    end

    subgraph MetaReducers["Meta-Reducer Chain (Atomic)"]
        Capture["stateCaptureMetaReducer<br/>━━━━━━━━━━━━━━━<br/>Captures before-state"]
        TagMeta["tagSharedMetaReducer<br/>━━━━━━━━━━━━━━━<br/>• Remove tag from tasks<br/>• Delete orphaned tasks<br/>• Clean TaskRepeatCfgs<br/>• Clean TimeTracking"]
        OtherMeta["Other meta-reducers<br/>━━━━━━━━━━━━━━━<br/>Pass through"]
    end

    subgraph FeatureReducers["Feature Reducers"]
        TagReducer["tag.reducer<br/>━━━━━━━━━━━━━━━<br/>Delete tag entity"]
    end

    subgraph Effects["Effects Layer"]
        OpEffect["OperationLogEffects<br/>━━━━━━━━━━━━━━━<br/>• Compute state diff<br/>• Create single Operation<br/>• with entityChanges[]"]
    end

    subgraph Result["Single Atomic Operation"]
        Op["Operation {<br/> opType: 'DEL',<br/> entityType: 'TAG',<br/> entityChanges: [<br/> {TAG, delete},<br/> {TASK, update}x3,<br/> {TASK_REPEAT_CFG, delete}<br/> ]<br/>}"]
    end

    Action --> Capture
    Capture --> TagMeta
    TagMeta --> OtherMeta
    OtherMeta --> FeatureReducers
    FeatureReducers --> OpEffect
    OpEffect --> Result

    style UserAction fill:#fff,stroke:#333,stroke-width:2px
    style MetaReducers fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style FeatureReducers fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style Effects fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
    style Result fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
```

## Why Meta-Reducers vs Effects

```mermaid
flowchart LR
    subgraph Problem["❌ Effects Pattern (Non-Atomic)"]
        direction TB
        A1[deleteTag action] --> E1[tag.reducer]
        E1 --> A2[effect: removeTagFromTasks]
        A2 --> E2[task.reducer]
        E2 --> A3[effect: cleanTaskRepeatCfgs]
        A3 --> E3[taskRepeatCfg.reducer]

        Note1["Each action = separate operation<br/>Sync may deliver partially<br/>→ Inconsistent state"]
    end

    subgraph Solution["✅ Meta-Reducer Pattern (Atomic)"]
        direction TB
        B1[deleteTag action] --> M1[tagSharedMetaReducer]
        M1 --> M2["All changes in one pass:<br/>• tasks updated<br/>• repeatCfgs cleaned<br/>• tag deleted"]
        M2 --> R1[Single reduced state]

        Note2["One action = one operation<br/>All changes sync together<br/>→ Consistent state"]
    end

    style Problem fill:#ffebee,stroke:#c62828,stroke-width:2px
    style Solution fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
```
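A minimal sketch of the atomic pattern, assuming a simplified state shape: all cleanup for `deleteTag` happens in a single reducer pass, so no intermediate state is ever observable. The state shape, action, and reducer wiring below are illustrative, not the project's actual definitions.

```typescript
// Minimal meta-reducer sketch (NgRx style, illustrative state shape):
// all deleteTag cleanup happens in one pass before feature reducers run.
interface Task {
  id: string;
  tagIds: string[];
}
interface AppState {
  tags: Record<string, { id: string }>;
  tasks: Record<string, Task>;
}
interface Action {
  type: string;
  tagId?: string;
}
type Reducer = (state: AppState, action: Action) => AppState;

const tagSharedMetaReducerSketch =
  (reducer: Reducer): Reducer =>
  (state, action) => {
    if (action.type === 'deleteTag' && action.tagId !== undefined) {
      const tagId = action.tagId;
      // Remove the tag entity and strip its id from every task, atomically.
      const { [tagId]: _removed, ...tags } = state.tags;
      const tasks = Object.fromEntries(
        Object.entries(state.tasks).map(([id, t]) => [
          id,
          { ...t, tagIds: t.tagIds.filter((tid) => tid !== tagId) },
        ]),
      );
      state = { ...state, tags, tasks };
    }
    return reducer(state, action); // feature reducers see the cleaned state
  };
```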

## State Change Detection

The `StateChangeCaptureService` computes entity changes by comparing before and after states:

```mermaid
flowchart TD
    subgraph Before["Before State (captured by meta-reducer)"]
        B1["tasks: {t1, t2, t3}"]
        B2["tags: {tag1, tag2}"]
        B3["taskRepeatCfgs: {cfg1}"]
    end

    subgraph After["After State (post-reducer)"]
        A1["tasks: {t1', t2', t3}"]
        A2["tags: {tag2}"]
        A3["taskRepeatCfgs: {}"]
    end

    subgraph Diff["State Diff Computation"]
        D1["Compare entity collections"]
        D2["Identify: created, updated, deleted"]
    end

    subgraph Changes["Entity Changes"]
        C1["TAG tag1: DELETED"]
        C2["TASK t1: UPDATED (tagId removed)"]
        C3["TASK t2: UPDATED (tagId removed)"]
        C4["TASK_REPEAT_CFG cfg1: DELETED"]
    end

    Before --> Diff
    After --> Diff
    Diff --> Changes

    style Before fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
    style After fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style Diff fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style Changes fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
```
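The diff step can be sketched as a dictionary comparison. This is illustrative only: the real service likely uses cheaper equality checks than JSON stringification and carries before/after snapshots in each change.

```typescript
// Sketch of the before/after entity diff (illustrative, simplified equality).
type ChangeType = 'CREATED' | 'UPDATED' | 'DELETED';
interface EntityChangeSketch {
  entityId: string;
  changeType: ChangeType;
}

function diffEntities(
  before: Record<string, unknown>,
  after: Record<string, unknown>,
): EntityChangeSketch[] {
  const changes: EntityChangeSketch[] = [];
  for (const id of Object.keys(before)) {
    if (!(id in after)) {
      changes.push({ entityId: id, changeType: 'DELETED' });
    } else if (JSON.stringify(before[id]) !== JSON.stringify(after[id])) {
      changes.push({ entityId: id, changeType: 'UPDATED' });
    }
  }
  for (const id of Object.keys(after)) {
    if (!(id in before)) changes.push({ entityId: id, changeType: 'CREATED' });
  }
  return changes;
}
```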

## Multi-Entity Operations That Use Meta-Reducers

| Action              | Entities Affected                                             | Meta-Reducer               |
| ------------------- | ------------------------------------------------------------- | -------------------------- |
| `deleteTag`         | Tag, Tasks (remove tagId), TaskRepeatCfgs, TimeTracking       | `tagSharedMetaReducer`     |
| `deleteTags`        | Tags, Tasks, TaskRepeatCfgs, TimeTracking                     | `tagSharedMetaReducer`     |
| `deleteProject`     | Project, Tasks (cascade delete), TaskRepeatCfgs, TimeTracking | `projectSharedMetaReducer` |
| `convertToMainTask` | Parent task, Child task, Sub-tasks                            | `taskSharedMetaReducer`    |
| `moveTaskUp/Down`   | Multiple tasks (reorder)                                      | `taskSharedMetaReducer`    |

## Operation Structure with Entity Changes

```mermaid
classDiagram
    class Operation {
        +string id
        +string clientId
        +OpType opType
        +EntityType entityType
        +string entityId
        +VectorClock vectorClock
        +number timestamp
        +EntityChange[] entityChanges
    }

    class EntityChange {
        +EntityType entityType
        +string entityId
        +ChangeType changeType
        +unknown beforeState
        +unknown afterState
    }

    class ChangeType {
        <<enumeration>>
        CREATED
        UPDATED
        DELETED
    }

    Operation --> EntityChange : contains 0..*
    EntityChange --> ChangeType : has
```

## Sync Replay: All-or-Nothing

When remote operations are applied, all entity changes are replayed atomically:

```mermaid
sequenceDiagram
    participant Remote as Remote Op
    participant Applier as OperationApplierService
    participant Store as NgRx Store
    participant State as Final State

    Remote->>Applier: Operation with entityChanges[]

    loop For each entityChange
        Applier->>Applier: Convert to action
        Applier->>Store: dispatch(action)
    end

    Note over Store: All changes applied<br/>in single reducer pass

    Store->>State: Consistent state

    Note over State: Either ALL changes applied<br/>or NONE (transaction semantics)
```

## Key Files

| File                                                                          | Purpose                           |
| ----------------------------------------------------------------------------- | --------------------------------- |
| `src/app/root-store/meta/task-shared-meta-reducers/`                          | Task-related multi-entity changes |
| `src/app/root-store/meta/task-shared-meta-reducers/tag-shared.reducer.ts`     | Tag deletion with cleanup         |
| `src/app/root-store/meta/task-shared-meta-reducers/project-shared.reducer.ts` | Project deletion with cleanup     |

---

**New file:** `docs/sync-and-op-log/diagrams/06-archive-operations.md` (172 lines)

# Archive Operations & Side Effects

**Last Updated:** January 2026
**Status:** Implemented

This section documents how archive-related side effects are handled, establishing the general rule that **effects should never run for remote operations**.

## The General Rule: Effects Only for Local Actions

```mermaid
flowchart TD
    subgraph Rule["🔒 GENERAL RULE"]
        R1["All NgRx effects MUST use LOCAL_ACTIONS"]
        R2["Effects should NEVER run for remote operations"]
        R3["Side effects for remote ops are handled<br/>explicitly by OperationApplierService"]
    end

    subgraph Why["Why This Matters"]
        W1["• Prevents duplicate side effects"]
        W2["• Makes sync behavior predictable"]
        W3["• Side effects happen exactly once<br/>(on originating client)"]
        W4["• Receiving clients only update state"]
    end

    Rule --> Why

    style Rule fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px
    style Why fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
```
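Conceptually, a LOCAL_ACTIONS stream is just the action stream with remote-replayed actions filtered out. A minimal sketch, assuming actions carry an `isRemote` marker in their `meta` field (the exact field name and stream wiring in the codebase may differ):

```typescript
// Sketch: filter remote-replayed actions out of an effect's action stream.
interface SyncAwareAction {
  type: string;
  meta?: { isRemote?: boolean };
}

// An action is "local" unless it was dispatched as part of remote-op replay.
const isLocalAction = (a: SyncAwareAction): boolean => a.meta?.isRemote !== true;

function localActions(actions: SyncAwareAction[]): SyncAwareAction[] {
  return actions.filter(isLocalAction);
}
```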

## Dual-Database Architecture

Super Productivity uses **two separate IndexedDB databases** for persistence:

```mermaid
flowchart TB
    subgraph Browser["Browser IndexedDB"]
        subgraph SUPOPS["SUP_OPS Database (Operation Log)"]
            direction TB
            OpsTable["ops table<br/>━━━━━━━━━━━━━━━<br/>Operation event log<br/>UUIDv7, vectorClock, payload"]
            StateCache["state_cache table<br/>━━━━━━━━━━━━━━━<br/>NgRx state snapshots<br/>for fast hydration"]
        end

        subgraph ArchiveDB["Archive Database"]
            direction TB
            ArchiveYoung["archiveYoung<br/>━━━━━━━━━━━━━━━<br/>ArchiveModel:<br/>• task: TaskArchive<br/>• timeTracking: State<br/>━━━━━━━━━━━━━━━<br/>Tasks < 21 days old"]
            ArchiveOld["archiveOld<br/>━━━━━━━━━━━━━━━<br/>ArchiveModel:<br/>• task: TaskArchive<br/>• timeTracking: State<br/>━━━━━━━━━━━━━━━<br/>Tasks > 21 days old"]
        end
    end

    subgraph Writers["What Writes Where"]
        OpLog["OperationLogStoreService"] -->|ops, snapshots| SUPOPS
        Archive["ArchiveService<br/>ArchiveOperationHandler"] -->|"ArchiveModel:<br/>tasks + time tracking"| ArchiveDB
    end

    style SUPOPS fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style ArchiveDB fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
    style Writers fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
```

**Key Points:**

| Database   | Purpose                        | Written By                                  |
| ---------- | ------------------------------ | ------------------------------------------- |
| `SUP_OPS`  | Operation log (event sourcing) | `OperationLogStoreService`                  |
| Archive DB | Archive data, time tracking    | `ArchiveService`, `ArchiveOperationHandler` |
## Archive Operations Flow

Archive data is stored in a separate IndexedDB database, **not** in NgRx state or the operation log. This requires special handling through a **unified** `ArchiveOperationHandler`:

- **Local operations**: `ArchiveOperationHandlerEffects` routes through `ArchiveOperationHandler` (using LOCAL_ACTIONS)
- **Remote operations**: `OperationApplierService` calls `ArchiveOperationHandler` directly after dispatch

Both paths use the same handler to ensure consistent behavior.

```mermaid
flowchart TD
    subgraph LocalOp["LOCAL Operation (User Action)"]
        L1[User archives tasks] --> L2["ArchiveService writes<br/>to IndexedDB<br/>BEFORE dispatch"]
        L2 --> L3[Dispatch moveToArchive]
        L3 --> L4[Meta-reducers update NgRx state]
        L4 --> L5[ArchiveOperationHandlerEffects<br/>via LOCAL_ACTIONS]
        L5 --> L6["ArchiveOperationHandler<br/>.handleOperation<br/>(skips - already written)"]
        L4 --> L7[OperationLogEffects<br/>creates operation in SUP_OPS]
    end

    subgraph RemoteOp["REMOTE Operation (Sync)"]
        R1[Download operation<br/>from sync] --> R2[OperationApplierService<br/>dispatches action]
        R2 --> R3[Meta-reducers update NgRx state]
        R3 --> R4["ArchiveOperationHandler<br/>.handleOperation"]
        R4 --> R5["Write to IndexedDB<br/>(archiveYoung/archiveOld)"]

        NoEffect["❌ Regular effects DON'T run<br/>(action has meta.isRemote=true)"]
    end

    subgraph Storage["Storage Layer"]
        ArchiveDB[("Archive IndexedDB<br/>archiveYoung<br/>archiveOld")]
        SUPOPS_DB[("SUP_OPS IndexedDB<br/>ops table")]
    end

    L2 --> ArchiveDB
    L7 --> SUPOPS_DB
    R5 --> ArchiveDB
    SUPOPS_DB -.->|"Sync downloads ops"| R1

    style LocalOp fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style RemoteOp fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style NoEffect fill:#ffebee,stroke:#c62828,stroke-width:2px
    style ArchiveDB fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
    style SUPOPS_DB fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
```

## ArchiveOperationHandler Integration

The `OperationApplierService` uses a **fail-fast** approach: if hard dependencies are missing, it throws `SyncStateCorruptedError` rather than attempting complex retry logic. This triggers a full re-sync, which is safer than partial recovery.

```mermaid
flowchart TD
    subgraph OperationApplierService["OperationApplierService (Fail-Fast)"]
        OA1[Receive operation] --> OA2{Check hard<br/>dependencies}
        OA2 -->|Missing| OA_ERR["throw SyncStateCorruptedError<br/>(triggers full re-sync)"]
        OA2 -->|OK| OA3[convertOpToAction]
        OA3 --> OA4["store.dispatch(action)<br/>with meta.isRemote=true"]
        OA4 --> OA5["archiveOperationHandler<br/>.handleOperation(action)"]
    end

    subgraph Handler["ArchiveOperationHandler"]
        H1{Action Type?}
        H1 -->|moveToArchive| H2[Write tasks to<br/>archiveYoung<br/>REMOTE ONLY]
        H1 -->|restoreTask| H3[Delete task from<br/>archive]
        H1 -->|flushYoungToOld| H4[Move old tasks<br/>Young → Old]
        H1 -->|deleteProject| H5[Remove tasks<br/>for project +<br/>cleanup time tracking]
        H1 -->|deleteTag/deleteTags| H6[Remove tag<br/>from tasks +<br/>cleanup time tracking]
        H1 -->|deleteTaskRepeatCfg| H7[Remove repeatCfgId<br/>from tasks]
        H1 -->|deleteIssueProvider| H8[Unlink issue data<br/>from tasks]
        H1 -->|deleteIssueProviders| H8b[Unlink multiple<br/>issue providers]
        H1 -->|other| H9[No-op]
    end

    OA5 --> H1

    style OperationApplierService fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style Handler fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
    style OA_ERR fill:#ffcdd2,stroke:#c62828,stroke-width:2px
```

**Why Fail-Fast?**

The server guarantees operations arrive in sequence order, and delete operations are atomic via meta-reducers. If dependencies are missing, something is fundamentally wrong with the sync state. A full re-sync is safer than attempting partial recovery with potential inconsistencies.

## Archive Operations Summary

| Operation              | Local Handling                                                         | Remote Handling                                              |
| ---------------------- | ---------------------------------------------------------------------- | ------------------------------------------------------------ |
| `moveToArchive`        | ArchiveService writes BEFORE dispatch; handler skips (no double-write) | ArchiveOperationHandler writes AFTER dispatch                |
| `restoreTask`          | ArchiveOperationHandlerEffects → ArchiveOperationHandler               | ArchiveOperationHandler removes from archive                 |
| `flushYoungToOld`      | ArchiveOperationHandlerEffects → ArchiveOperationHandler               | ArchiveOperationHandler executes flush                       |
| `deleteProject`        | ArchiveOperationHandlerEffects → ArchiveOperationHandler               | ArchiveOperationHandler removes tasks + cleans time tracking |
| `deleteTag/deleteTags` | ArchiveOperationHandlerEffects → ArchiveOperationHandler               | ArchiveOperationHandler removes tags + cleans time tracking  |
| `deleteTaskRepeatCfg`  | ArchiveOperationHandlerEffects → ArchiveOperationHandler               | ArchiveOperationHandler removes repeatCfgId from tasks       |
| `deleteIssueProvider`  | ArchiveOperationHandlerEffects → ArchiveOperationHandler               | ArchiveOperationHandler unlinks issue data                   |
## Key Files

| File                                                        | Purpose                                                             |
| ----------------------------------------------------------- | ------------------------------------------------------------------- |
| `src/app/op-log/apply/archive-operation-handler.service.ts` | **Unified** handler for all archive side effects (local AND remote) |
| `src/app/op-log/apply/archive-operation-handler.effects.ts` | Routes local actions to ArchiveOperationHandler via LOCAL_ACTIONS   |
| `src/app/op-log/apply/operation-applier.service.ts`         | Calls ArchiveOperationHandler after dispatching remote operations   |
| `src/app/features/archive/archive.service.ts`               | Local archive write logic (moveToArchive writes BEFORE dispatch)    |
| `src/app/features/archive/task-archive.service.ts`          | Archive CRUD operations                                             |

---

**New file:** `docs/sync-and-op-log/diagrams/07-supersync-vs-file-based.md` (264 lines)

# SuperSync vs File-Based Sync Comparison

**Last Updated:** January 2026
**Status:** Implemented

This document compares the two sync provider architectures: SuperSync (server-based) and File-Based (WebDAV/Dropbox/LocalFile).

## High-Level Architecture Comparison

```mermaid
flowchart TB
    subgraph Client["Client Application"]
        NgRx["NgRx Store"]
        OpLog["Operation Log<br/>(SUP_OPS IndexedDB)"]
        SyncService["OperationLogSyncService"]
    end

    subgraph SuperSyncPath["SuperSync Path"]
        SSAdapter["SuperSyncProvider"]
        SSApi["REST API"]
        SSPG["PostgreSQL"]
    end

    subgraph FileBasedPath["File-Based Path"]
        FBAdapter["FileBasedSyncAdapter"]

        subgraph Providers["File Providers"]
            WebDAV["WebDAV"]
            Dropbox["Dropbox"]
            LocalFile["LocalFile"]
        end

        SyncFile["sync-data.json"]
    end

    NgRx --> OpLog
    OpLog --> SyncService

    SyncService --> SSAdapter
    SyncService --> FBAdapter

    SSAdapter --> SSApi
    SSApi --> SSPG

    FBAdapter --> WebDAV
    FBAdapter --> Dropbox
    FBAdapter --> LocalFile

    WebDAV --> SyncFile
    Dropbox --> SyncFile
    LocalFile --> SyncFile

    style SuperSyncPath fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style FileBasedPath fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style Client fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
```

## Feature Comparison

| Feature                | SuperSync                 | File-Based                        |
| ---------------------- | ------------------------- | --------------------------------- |
| **Storage**            | PostgreSQL database       | Single JSON file                  |
| **Operations**         | Stored individually       | Buffered (last 200)               |
| **State Snapshot**     | Not stored (derived)      | Included in sync file             |
| **Archive Data**       | Via operation replay      | Embedded in sync file             |
| **Conflict Detection** | Server sequence numbers   | syncVersion counter               |
| **Gap Detection**      | Server validates sequence | Client-side via lastSeq           |
| **Concurrency**        | Server handles locks      | Optimistic locking + piggybacking |
| **Bandwidth**          | Delta ops only            | Full state + recent ops           |
| **Late Joiners**       | Full replay from server   | State snapshot in file            |
## Data Storage Comparison

```mermaid
flowchart TB
    subgraph SuperSync["SuperSync Storage"]
        direction TB
        PG["PostgreSQL Database"]

        subgraph Tables["Tables"]
            OpsTable["operations<br/>━━━━━━━━━━━━━━━<br/>id, client_id, seq<br/>action_type, payload<br/>vector_clock, timestamp"]
            ClientsTable["clients<br/>━━━━━━━━━━━━━━━<br/>client_id, last_seq<br/>created_at"]
        end

        PG --> Tables
    end

    subgraph FileBased["File-Based Storage"]
        direction TB
        SyncFile["sync-data.json"]

        subgraph Contents["Contents"]
            Meta["Metadata<br/>━━━━━━━━━━━━━━━<br/>version, syncVersion<br/>lastModified, checksum"]
            State["State Snapshot<br/>━━━━━━━━━━━━━━━<br/>Full AppDataComplete"]
            Archive["Archive Data<br/>━━━━━━━━━━━━━━━<br/>archiveYoung, archiveOld"]
            RecentOps["Recent Ops (200)<br/>━━━━━━━━━━━━━━━<br/>CompactOperation[]"]
        end

        SyncFile --> Contents
    end

    style SuperSync fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style FileBased fill:#fff3e0,stroke:#e65100,stroke-width:2px
```

## Sync Flow Comparison

### Download Flow

```mermaid
sequenceDiagram
    participant Client
    participant SS as SuperSync Server
    participant FB as File Provider

    rect rgb(227, 242, 253)
        Note over Client,SS: SuperSync Download
        Client->>SS: GET /ops?since={lastSeq}
        SS->>SS: Query ops WHERE seq > lastSeq
        SS-->>Client: {ops: [...], lastSeq: N}
        Note over Client: Only receives new ops<br/>Bandwidth efficient
    end

    rect rgb(255, 243, 224)
        Note over Client,FB: File-Based Download
        Client->>FB: downloadFile("sync-data.json")
        FB-->>Client: {state, recentOps, syncVersion}
        Client->>Client: Filter ops by lastProcessedSeq
        Note over Client: Downloads full file<br/>Filters locally
    end
```
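The client-side filter step in the file-based path can be sketched as follows; the op shape and function name are illustrative.

```typescript
// Sketch: after downloading the whole file, apply only ops beyond what
// this client has already processed, in sequence order.
interface SeqOp {
  seq: number;
  payload: string;
}

function selectNewOps(fileOps: SeqOp[], lastProcessedSeq: number): SeqOp[] {
  return fileOps
    .filter((op) => op.seq > lastProcessedSeq)
    .sort((a, b) => a.seq - b.seq);
}
```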

### Upload Flow

```mermaid
sequenceDiagram
    participant Client
    participant SS as SuperSync Server
    participant FB as File Provider

    rect rgb(227, 242, 253)
        Note over Client,SS: SuperSync Upload
        Client->>SS: POST /ops {ops: [...], lastKnownSeq}
        SS->>SS: Validate sequence continuity
        alt Gap detected
            SS-->>Client: 409 Conflict + missing ops
        else No gap
            SS->>SS: Insert ops, assign seq numbers
            SS-->>Client: 200 OK {assignedSeqs}
        end
    end

    rect rgb(255, 243, 224)
        Note over Client,FB: File-Based Upload
        Client->>FB: downloadFile (get current state)
        FB-->>Client: {syncVersion: N, recentOps}
        Client->>Client: Merge local ops + file ops
        Client->>Client: Find piggybacked ops
        Client->>Client: Set syncVersion = N+1
        Client->>FB: uploadFile(merged data)
        FB-->>Client: Success
        Note over Client: Returns piggybacked ops<br/>for immediate processing
    end
```
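The file-based upload round can be sketched end to end. The provider interface and shapes below are illustrative stand-ins for the real `FileBasedSyncAdapter` internals (which are asynchronous and carry full operation payloads).

```typescript
// Sketch of one file-based upload round: download, detect piggybacked ops,
// merge, bump syncVersion, upload (illustrative synchronous store).
interface SyncFile {
  syncVersion: number;
  recentOps: string[];
}
interface FileStore {
  download(): SyncFile;
  upload(data: SyncFile): void;
}

function uploadRound(
  store: FileStore,
  localOps: string[],
  alreadySeen: Set<string>,
): string[] {
  const current = store.download();
  // Ops other clients wrote since our last download come back as piggybacked.
  const piggybacked = current.recentOps.filter((op) => !alreadySeen.has(op));
  store.upload({
    syncVersion: current.syncVersion + 1,
    recentOps: [...current.recentOps, ...localOps],
  });
  return piggybacked;
}
```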

## Conflict Handling Comparison

```mermaid
flowchart TB
    subgraph SuperSync["SuperSync Conflict Handling"]
        SS1["Client uploads ops"]
        SS2{"Server checks<br/>sequence gap?"}
        SS3["Gap: Return 409<br/>+ missing ops"]
        SS4["No gap: Accept ops"]
        SS5["Client downloads<br/>missing ops"]
        SS6["LWW resolution<br/>on client"]

        SS1 --> SS2
        SS2 -->|Yes| SS3
        SS2 -->|No| SS4
        SS3 --> SS5
        SS5 --> SS6
    end

    subgraph FileBased["File-Based Conflict Handling"]
        FB1["Client downloads file"]
        FB2{"syncVersion<br/>changed?"}
        FB3["Version match:<br/>Clean upload"]
        FB4["Version changed:<br/>Piggybacking"]
        FB5["Merge all ops"]
        FB6["Upload merged file"]
        FB7["Return piggybacked ops"]
        FB8["LWW resolution<br/>on client"]

        FB1 --> FB2
        FB2 -->|No| FB3
        FB2 -->|Yes| FB4
        FB4 --> FB5
        FB5 --> FB6
        FB6 --> FB7
        FB7 --> FB8
    end

    style SuperSync fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style FileBased fill:#fff3e0,stroke:#e65100,stroke-width:2px
```
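
The server-side continuity check behind the 409 path can be sketched as a pure function. The names here are hypothetical; the real SuperSync server logic is richer.

```typescript
// Simplified sketch: the client claims the last seq it knows; if the
// server is ahead, there is a gap and the client must download the
// missing ops before its upload is accepted.
function checkSequence(
  serverLastSeq: number,
  clientLastKnownSeq: number,
): { status: 'ok' } | { status: 'conflict'; missingFrom: number; missingTo: number } {
  if (clientLastKnownSeq < serverLastSeq) {
    // Client is behind: ops (clientLastKnownSeq, serverLastSeq] are missing.
    return {
      status: 'conflict',
      missingFrom: clientLastKnownSeq + 1,
      missingTo: serverLastSeq,
    };
  }
  return { status: 'ok' };
}

const gap = checkSequence(10, 7);
// gap => { status: 'conflict', missingFrom: 8, missingTo: 10 }
```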

## When to Use Each

```mermaid
flowchart TD
    Start["Choose Sync Provider"] --> Q1{Need real-time<br/>multi-device sync?}

    Q1 -->|Yes| Q2{Have SuperSync<br/>account?}
    Q1 -->|No| FileBased

    Q2 -->|Yes| SuperSync
    Q2 -->|No| Q3{Have cloud<br/>storage?}

    Q3 -->|WebDAV/Dropbox| FileBased
    Q3 -->|No| LocalFile

    SuperSync["SuperSync<br/>━━━━━━━━━━━━━━━<br/>• Real-time sync<br/>• Efficient bandwidth<br/>• Server-managed gaps<br/>• Best for active teams"]

    FileBased["File-Based Sync<br/>━━━━━━━━━━━━━━━<br/>• Uses existing storage<br/>• No additional account<br/>• Self-hosted option<br/>• Good for individuals"]

    LocalFile["Local File<br/>━━━━━━━━━━━━━━━<br/>• Manual sync<br/>• Full control<br/>• Backup purposes"]

    style SuperSync fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style FileBased fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style LocalFile fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
```

## Implementation Details

### Shared Infrastructure

Both providers implement the `OperationSyncCapable` interface and use:

| Component                   | Purpose                               |
| --------------------------- | ------------------------------------- |
| `OperationLogSyncService`   | Orchestrates sync timing and triggers |
| `ConflictResolutionService` | LWW resolution for concurrent edits   |
| `VectorClockService`        | Causality tracking for all operations |
| `OperationApplierService`   | Applies remote ops to NgRx state      |
| `ArchiveOperationHandler`   | Handles archive side effects          |
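
To show how one orchestrator can drive both transports, here is a hedged sketch of what an operation-sync-capable provider might look like. The real `OperationSyncCapable` interface in the codebase is async and richer; the method names below are illustrative only.

```typescript
interface Op {
  id: string;
}

// Hypothetical minimal shape; synchronous here for brevity.
interface OperationSyncCapableSketch {
  uploadOps(ops: Op[]): void;
  downloadNewOps(sinceIndex: number): Op[];
}

// Minimal in-memory implementation: the shared orchestration code can treat
// a REST transport (SuperSync) and a file transport the same way.
class InMemoryProvider implements OperationSyncCapableSketch {
  private stored: Op[] = [];

  uploadOps(ops: Op[]): void {
    this.stored.push(...ops);
  }

  downloadNewOps(sinceIndex: number): Op[] {
    return this.stored.slice(sinceIndex);
  }
}

const provider = new InMemoryProvider();
provider.uploadOps([{ id: 'a' }, { id: 'b' }]);
// provider.downloadNewOps(1) returns only the ops after index 1
```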

### Provider-Specific Components

| SuperSync                       | File-Based                       |
| ------------------------------- | -------------------------------- |
| `SuperSyncProvider`             | `FileBasedSyncAdapter`           |
| REST API client                 | File provider abstraction        |
| Server-side sequence management | Client-side syncVersion tracking |
| Gap detection via HTTP 409      | Piggybacking on version mismatch |

## Key Files

| File                                                 | Purpose                           |
| ---------------------------------------------------- | --------------------------------- |
| `src/app/op-log/sync-providers/super-sync/`          | SuperSync provider implementation |
| `src/app/op-log/sync-providers/file-based/`          | File-based adapter and types      |
| `src/app/op-log/sync/operation-log-sync.service.ts`  | Shared sync orchestration         |
| `src/app/op-log/sync/conflict-resolution.service.ts` | LWW conflict resolution           |

---

`docs/sync-and-op-log/diagrams/08-sync-flow-explained.md` (new file, 274 lines)

# Sync Flow Explained

**Last Updated:** January 2026
**Status:** Implemented

This document explains how synchronization works in simple terms.

## The Big Picture

When you make changes on one device, those changes need to reach your other devices. Here's how it works:

```
┌─────────────────────────────────────────────────────────────────────┐
│                            YOUR CHANGE                              │
│                                                                     │
│   Phone                     Cloud                     Desktop       │
│  ┌─────┐                   ┌─────┐                   ┌─────┐        │
│  │ You │ ──UPLOAD──►       │     │  ──DOWNLOAD──►    │     │        │
│  │edit │                   │sync │                   │sees │        │
│  │task │                   │data │                   │edit │        │
│  └─────┘                   └─────┘                   └─────┘        │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

## Step-by-Step: What Happens When You Edit a Task

### Step 1: You Make a Change

```
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   You click "Mark task as done"                                     │
│                                                                     │
│   ┌──────────────────────────────────────┐                          │
│   │  Your Device                         │                          │
│   │                                      │                          │
│   │  Task: "Buy milk"                    │                          │
│   │  Status: Not Done ──► Done ✓         │                          │
│   │                                      │                          │
│   └──────────────────────────────────────┘                          │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

### Step 2: An "Operation" is Created

The app doesn't sync the whole task. It syncs _what changed_:

```
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   Operation Created:                                                │
│   ┌──────────────────────────────────────┐                          │
│   │                                      │                          │
│   │  Type:   UPDATE                      │                          │
│   │  Entity: TASK                        │                          │
│   │  ID:     task-abc-123                │                          │
│   │  Change: isDone = true               │                          │
│   │  When:   2026-01-08 14:30:00         │                          │
│   │  Who:    your-device-id              │                          │
│   │                                      │                          │
│   └──────────────────────────────────────┘                          │
│                                                                     │
│   This gets saved locally in IndexedDB                              │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```
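
The operation record in the box above can be sketched as a TypeScript type. The field names are illustrative, simplified from the diagram, not the exact application types.

```typescript
// Hedged sketch of an operation record: only the delta is stored/synced,
// never the whole task.
type OpType = 'CREATE' | 'UPDATE' | 'DELETE';

interface OperationSketch {
  type: OpType;
  entity: 'TASK';
  id: string;
  change: Record<string, unknown>; // just the fields that changed
  when: number; // epoch ms timestamp
  who: string; // device/client id that made the change
}

const op: OperationSketch = {
  type: 'UPDATE',
  entity: 'TASK',
  id: 'task-abc-123',
  change: { isDone: true },
  when: Date.parse('2026-01-08T14:30:00Z'),
  who: 'your-device-id',
};
```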

### Step 3: Upload to Cloud

When sync triggers (automatically or manually):

```
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   Your Device                                  Cloud                │
│   ┌────────────┐                          ┌────────────┐            │
│   │            │                          │            │            │
│   │ Operations │ ────── UPLOAD ─────────► │  Stored    │            │
│   │ to sync:   │                          │            │            │
│   │ • task ✓   │                          │  • task ✓  │            │
│   │            │                          │            │            │
│   └────────────┘                          └────────────┘            │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

### Step 4: Other Devices Download

Your other devices periodically check for new operations:

```
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   Cloud                                   Other Device              │
│   ┌────────────┐                          ┌────────────┐            │
│   │            │                          │            │            │
│   │  Stored    │ ────── DOWNLOAD ───────► │  Applies   │            │
│   │            │                          │  changes   │            │
│   │  • task ✓  │                          │  • task ✓  │            │
│   │            │                          │            │            │
│   └────────────┘                          └────────────┘            │
│                                                                     │
│   Now both devices show the task as done!                           │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

## What About Conflicts?

When two devices change the same thing at the same time:

```
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   Phone (offline)                        Desktop (offline)          │
│   ┌────────────────┐                     ┌────────────────┐         │
│   │                │                     │                │         │
│   │ Task: Buy milk │                     │ Task: Buy milk │         │
│   │                │                     │                │         │
│   │ You rename to: │                     │ You mark as:   │         │
│   │ "Buy oat milk" │                     │ "Done ✓"       │         │
│   │                │                     │                │         │
│   │ Time: 2:30 PM  │                     │ Time: 2:35 PM  │         │
│   │                │                     │                │         │
│   └────────────────┘                     └────────────────┘         │
│                                                                     │
│   Both go online...  CONFLICT!                                      │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

### Resolution: Last Write Wins

The change made later (by timestamp) wins:

```
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   Phone (2:30 PM)          vs          Desktop (2:35 PM)            │
│   "Buy oat milk"                       "Done ✓"                     │
│                                                                     │
│                            ⬇                                        │
│                                                                     │
│                   Desktop wins (later)                              │
│                                                                     │
│                            ⬇                                        │
│                                                                     │
│   Result on ALL devices:                                            │
│   ┌────────────────────────────────────┐                            │
│   │                                    │                            │
│   │ Task: "Buy milk" (name unchanged)  │                            │
│   │ Status: Done ✓                     │                            │
│   │                                    │                            │
│   └────────────────────────────────────┘                            │
│                                                                     │
│   Note: Phone's rename was lost, but both devices are consistent    │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```
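
The scenario above boils down to comparing timestamps. A minimal Last-Write-Wins sketch (not the real `ConflictResolutionService`, which also uses vector clocks and deterministic tiebreakers):

```typescript
// Minimal LWW sketch: given two concurrent ops, the later timestamp wins.
interface TimedOp {
  when: number; // epoch ms
  change: Record<string, unknown>;
}

function resolveLww(a: TimedOp, b: TimedOp): TimedOp {
  // Ties need a deterministic tiebreaker in practice (e.g. client id);
  // here we simply prefer b on equal timestamps.
  return a.when > b.when ? a : b;
}

const phone = {
  when: Date.parse('2026-01-08T14:30:00Z'), // 2:30 PM rename
  change: { title: 'Buy oat milk' },
};
const desktop = {
  when: Date.parse('2026-01-08T14:35:00Z'), // 2:35 PM mark-as-done
  change: { isDone: true },
};

const winner = resolveLww(phone, desktop);
// winner === desktop: the 2:35 PM change wins
```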

## SuperSync vs File-Based: The Difference

### SuperSync (Server-Based)

```
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   Your Device            Server                  Other Device       │
│   ┌────────┐            ┌────────┐              ┌────────┐          │
│   │        │            │        │              │        │          │
│   │ Upload │ ──op #5──► │ Stores │ ◄──asks───── │ "What's│          │
│   │ op #5  │            │ op #5  │     new?     │  new?" │          │
│   │        │            │        │ ──op #5────► │        │          │
│   └────────┘            └────────┘              └────────┘          │
│                                                                     │
│   Server keeps ALL operations                                       │
│   Devices only download what they're missing                        │
│   Very efficient bandwidth                                          │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

### File-Based (Dropbox/WebDAV)

```
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   Your Device           Cloud File              Other Device        │
│   ┌────────┐            ┌────────┐              ┌────────┐          │
│   │        │            │        │              │        │          │
│   │Download│ ◄───────── │ sync-  │ ───────────► │Download│          │
│   │ whole  │            │ data.  │              │ whole  │          │
│   │ file   │            │ json   │              │ file   │          │
│   │        │ ─────────► │        │ ◄─────────── │        │          │
│   │Upload  │            │(state +│              │Upload  │          │
│   │ whole  │            │ ops)   │              │ whole  │          │
│   │ file   │            │        │              │ file   │          │
│   └────────┘            └────────┘              └────────┘          │
│                                                                     │
│   File contains EVERYTHING:                                         │
│   - Current state (all your data)                                   │
│   - Recent operations (last 200)                                    │
│   - Vector clock (for conflict detection)                           │
│                                                                     │
│   Less efficient, but works with any storage                        │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

## The Complete Sync Cycle

```
┌─────────────────────────────────────────────────────────────────────┐
│                                                                     │
│   1. TRIGGER                                                        │
│      ├── Timer (every few minutes)                                  │
│      ├── App starts                                                 │
│      └── Manual sync button                                         │
│                                                                     │
│                            ▼                                        │
│                                                                     │
│   2. DOWNLOAD FIRST                                                 │
│      ├── Get operations from cloud                                  │
│      ├── Check for conflicts                                        │
│      ├── Apply changes to local state                               │
│      └── Update "last synced" marker                                │
│                                                                     │
│                            ▼                                        │
│                                                                     │
│   3. UPLOAD LOCAL CHANGES                                           │
│      ├── Gather pending operations                                  │
│      ├── Send to cloud                                              │
│      └── Mark as synced                                             │
│                                                                     │
│                            ▼                                        │
│                                                                     │
│   4. DONE                                                           │
│      └── All devices now have same data                             │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```
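
The download-before-upload ordering in the cycle above can be sketched as a tiny orchestrator. All names here are illustrative placeholders, not the real service API.

```typescript
// Sketch of the sync-cycle invariant: remote ops are downloaded and
// applied before any local pending ops are uploaded, so conflicts are
// resolved before this device publishes anything.
interface SyncSteps {
  downloadRemoteOps(): string[];
  applyOps(ops: string[]): void;
  gatherPendingOps(): string[];
  uploadOps(ops: string[]): void;
}

function runSyncCycle(steps: SyncSteps): void {
  // 2. Download first.
  const remote = steps.downloadRemoteOps();
  steps.applyOps(remote);
  // 3. Then upload whatever is still pending locally.
  steps.uploadOps(steps.gatherPendingOps());
}

// Record the call order to demonstrate the invariant.
const calls: string[] = [];
runSyncCycle({
  downloadRemoteOps: () => (calls.push('download'), []),
  applyOps: () => calls.push('apply'),
  gatherPendingOps: () => (calls.push('gather'), []),
  uploadOps: () => calls.push('upload'),
});
// calls === ['download', 'apply', 'gather', 'upload']
```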

## What Gets Synced?

| Synced                  | Not Synced           |
| ----------------------- | -------------------- |
| Tasks                   | Local UI preferences |
| Projects                | Window position      |
| Tags                    | Cached data          |
| Notes                   | Temporary state      |
| Time tracking           |                      |
| Repeat configs          |                      |
| Issue provider settings |                      |

## Key Terms Glossary

| Term             | Meaning                                            |
| ---------------- | -------------------------------------------------- |
| **Operation**    | A record of one change (create, update, delete)    |
| **Vector Clock** | Tracks which device made changes when              |
| **LWW**          | "Last Write Wins" - later timestamp wins conflicts |
| **Piggybacking** | Getting other devices' changes during your upload  |
| **syncVersion**  | Counter that increases with each file update       |
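
The "Vector Clock" entry deserves a small illustration. This is a standard vector-clock comparison sketch, not the codebase's `VectorClockService`: each device tracks a counter per device id, and two clocks are concurrent (a conflict) when neither dominates the other.

```typescript
type VectorClock = Record<string, number>;

function compareClocks(
  a: VectorClock,
  b: VectorClock,
): 'equal' | 'before' | 'after' | 'concurrent' {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  let aAhead = false;
  let bAhead = false;
  for (const k of keys) {
    const av = a[k] ?? 0;
    const bv = b[k] ?? 0;
    if (av > bv) aAhead = true;
    if (bv > av) bAhead = true;
  }
  if (aAhead && bAhead) return 'concurrent'; // neither saw the other's change
  if (aAhead) return 'after';
  if (bAhead) return 'before';
  return 'equal';
}

// Phone and desktop each made an offline change: concurrent => conflict.
const verdict = compareClocks({ phone: 2, desktop: 1 }, { phone: 1, desktop: 2 });
// verdict === 'concurrent'
```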

## Key Files

| File                                                    | Purpose                     |
| ------------------------------------------------------- | --------------------------- |
| `src/app/op-log/sync/operation-log-sync.service.ts`     | Main sync orchestration     |
| `src/app/op-log/sync/operation-log-download.service.ts` | Handles downloading ops     |
| `src/app/op-log/sync/operation-log-upload.service.ts`   | Handles uploading ops       |
| `src/app/op-log/sync/conflict-resolution.service.ts`    | Resolves conflicts with LWW |

---

`docs/sync-and-op-log/diagrams/README.md` (new file, 71 lines)

# Operation Log Architecture Diagrams

**Last Updated:** January 2026

This directory contains visual diagrams explaining the Operation Log sync architecture.

## Diagram Index

| Diagram                                                          | Description                                               | Status      |
| ---------------------------------------------------------------- | --------------------------------------------------------- | ----------- |
| [01-local-persistence.md](./01-local-persistence.md)             | Local IndexedDB persistence, hydration, compaction        | Implemented |
| [02-server-sync.md](./02-server-sync.md)                         | SuperSync server API, PostgreSQL, upload/download flows   | Implemented |
| [03-conflict-resolution.md](./03-conflict-resolution.md)         | LWW auto-resolution, SYNC_IMPORT filtering, vector clocks | Implemented |
| [04-file-based-sync.md](./04-file-based-sync.md)                 | WebDAV/Dropbox/LocalFile sync via single sync-data.json   | Implemented |
| [05-meta-reducers.md](./05-meta-reducers.md)                     | Atomic multi-entity operations, state consistency         | Implemented |
| [06-archive-operations.md](./06-archive-operations.md)           | Archive side effects, dual-database architecture          | Implemented |
| [07-supersync-vs-file-based.md](./07-supersync-vs-file-based.md) | Comparison of SuperSync and file-based sync providers     | Implemented |
| [08-sync-flow-explained.md](./08-sync-flow-explained.md)         | Simple explanation of how sync works                      | Implemented |

## Quick Navigation

### By Topic

**Getting Started:**

- Start with [01-local-persistence.md](./01-local-persistence.md) to understand how data is stored locally
- Then [04-file-based-sync.md](./04-file-based-sync.md) or [02-server-sync.md](./02-server-sync.md), depending on your sync provider

**Understanding Conflicts:**

- [03-conflict-resolution.md](./03-conflict-resolution.md) explains how concurrent edits are resolved

**Advanced Topics:**

- [05-meta-reducers.md](./05-meta-reducers.md) for atomic multi-entity operations
- [06-archive-operations.md](./06-archive-operations.md) for archive-specific handling

**Comparisons & Overviews:**

- [07-supersync-vs-file-based.md](./07-supersync-vs-file-based.md) compares the two sync approaches
- [08-sync-flow-explained.md](./08-sync-flow-explained.md) gives a simple step-by-step explanation of sync

### By Sync Provider

| Provider  | Primary Diagram                                  |
| --------- | ------------------------------------------------ |
| SuperSync | [02-server-sync.md](./02-server-sync.md)         |
| WebDAV    | [04-file-based-sync.md](./04-file-based-sync.md) |
| Dropbox   | [04-file-based-sync.md](./04-file-based-sync.md) |
| LocalFile | [04-file-based-sync.md](./04-file-based-sync.md) |

## Related Documentation

| Document                                                             | Description                          |
| -------------------------------------------------------------------- | ------------------------------------ |
| [../operation-log-architecture.md](../operation-log-architecture.md) | Comprehensive architecture reference |
| [../operation-rules.md](../operation-rules.md)                       | Design rules and guidelines          |
| [../vector-clocks.md](../vector-clocks.md)                           | Vector clock implementation details  |
| [../quick-reference.md](../quick-reference.md)                       | Quick lookup for common patterns     |

## Diagram Conventions

All diagrams use Mermaid syntax and follow these conventions:

| Color              | Meaning                                       |
| ------------------ | --------------------------------------------- |
| Green (`#e8f5e9`)  | Success paths, valid states, local operations |
| Blue (`#e3f2fd`)   | Server/API operations, remote operations      |
| Orange (`#fff3e0`) | Storage, file operations, warnings            |
| Red (`#ffebee`)    | Errors, conflicts, filtered operations        |
| Purple (`#f3e5f5`) | Results, outputs, final states                |

---

# Plan: Replace PFAPI with Operation Log Sync for All Providers

> **STATUS: COMPLETED (January 2026)**
>
> This plan has been fully implemented. The entire `src/app/pfapi/` directory has been deleted.
> All sync providers now use the unified operation log system via `FileBasedSyncAdapter`.
>
> **Current Implementation:**
>
> - Sync providers: `src/app/op-log/sync-providers/`
> - File-based adapter: `src/app/op-log/sync-providers/file-based/file-based-sync-adapter.service.ts`
> - Server migration: `src/app/op-log/sync/server-migration.service.ts`

---

## Original Goal

Simplify the codebase by removing PFAPI's model-by-model sync and using operation logs exclusively for **all sync providers** (WebDAV, Dropbox, LocalFile). Migration required for existing users; old PFAPI files kept as backup.

## What Was Implemented

### Phase 1: Enable Operation Log Sync (All Providers) - DONE

All providers now use operation log sync:

- WebDAV: `src/app/op-log/sync-providers/file-based/webdav/`
- Dropbox: `src/app/op-log/sync-providers/file-based/dropbox/`
- LocalFile: `src/app/op-log/sync-providers/file-based/local-file/`
- SuperSync: `src/app/op-log/sync-providers/super-sync/`

### Phase 2: Migration Logic - DONE

Migration from the legacy PFAPI format is handled by `ServerMigrationService`:

- Checks for an existing PFAPI metadata file on the remote
- Downloads the full state and creates a SYNC_IMPORT operation
- Uploads the initial snapshot via the operation log

### Phase 3: PFAPI Code Removal - DONE

The entire `src/app/pfapi/` directory has been deleted (~83 files, 2.0 MB).

What was kept (moved to op-log):

- Provider implementations (WebDAV, Dropbox, LocalFile)
- Encryption/compression utilities
- Auth flows

### Phase 4: Testing & Cleanup - DONE

- Multi-device sync scenarios tested via E2E tests
- Migration testing completed
- Large operation log handling verified
- All tests pass

---

## Final Architecture

```
src/app/op-log/
├── sync-providers/
│   ├── super-sync/                    # Server-based sync
│   ├── file-based/                    # File-based providers
│   │   ├── file-based-sync-adapter.service.ts
│   │   ├── webdav/
│   │   ├── dropbox/
│   │   └── local-file/
│   ├── provider-manager.service.ts
│   └── wrapped-provider.service.ts
├── sync/
│   ├── operation-log-sync.service.ts
│   └── server-migration.service.ts
└── ...
```

## Key Decisions Made

- Single-file sync format (`sync-data.json`) with state snapshot + recent ops
- Content-based optimistic locking via `syncVersion` counter
- Piggybacking mechanism for concurrent sync handling
- Server migration service handles legacy PFAPI data migration

---

**Status:** Parts A, B, C, D Complete (single-version; cross-version sync A.7.11 documented, not implemented)
**Branch:** `feat/operation-logs`
**Last Updated:** January 8, 2026

> **Note:** As of January 2026, the legacy PFAPI system has been completely eliminated.
> All sync providers (SuperSync, WebDAV, Dropbox, LocalFile) now use the unified operation log system.

---

---

This is efficient and precise.

- _Conflict:_ If Device B _also_ made a change and is at "Version 2", it knows "Wait, we both changed Version 1 at the same time!" -> **Conflict Detected**.
- **Resolution:** The user is shown a dialog to pick the winner. The loser isn't deleted; it's marked as "Rejected" in the log but kept for history.

**B. "File-Based Sync" (Dropbox, WebDAV, Local File)**
This uses a single-file approach with embedded operations.

- File-based providers sync a single `sync-data.json` file containing: full state snapshot + recent operations buffer
- When syncing, the system downloads the remote file, merges any new operations, and uploads the combined state
- Conflict detection uses vector clocks - if two clients sync concurrently, the "piggybacking" mechanism ensures no operations are lost
- This provides entity-level conflict resolution (vs old model-level "last write wins")
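
The `sync-data.json` shape described above can be sketched as a type. The field names follow the docs (state snapshot, recent ops buffer, `syncVersion`, vector clock, `clientId`); the exact `FileBasedSyncData` type lives in `src/app/op-log/sync-providers/file-based/`, so treat this as an approximation.

```typescript
// Hedged sketch of the single sync file's contents.
interface FileBasedSyncDataSketch {
  state: Record<string, unknown>; // full state snapshot
  recentOps: { id: string; clock: Record<string, number> }[]; // recent operations buffer
  syncVersion: number; // optimistic-lock counter, bumped on every upload
  vectorClock: Record<string, number>; // for conflict detection
  clientId: string; // id of the client that wrote this file version
}

const file: FileBasedSyncDataSketch = {
  state: { task: {}, project: {} },
  recentOps: [{ id: 'op-1', clock: { 'client-a': 1 } }],
  syncVersion: 1,
  vectorClock: { 'client-a': 1 },
  clientId: 'client-a',
};
```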

### 4. Safety & Self-Healing

---

The Operation Log serves **four distinct purposes**:

| Purpose                    | Description                                       | Status                        |
| -------------------------- | ------------------------------------------------- | ----------------------------- |
| **A. Local Persistence**   | Fast writes, crash recovery, event sourcing       | Complete ✅                   |
| **B. File-Based Sync**     | Single-file sync for WebDAV/Dropbox/LocalFile     | Complete ✅                   |
| **C. Server Sync**         | Upload/download individual operations (SuperSync) | Complete ✅ (single-version)¹ |
| **D. Validation & Repair** | Prevent corruption, auto-repair invalid state     | Complete ✅                   |

> ¹ **Cross-version sync limitation**: Part C is complete for clients on the same schema version. Cross-version sync (A.7.11) is not yet implemented—see [A.7.11 Conflict-Aware Migration](#a711-conflict-aware-migration-strategy) for guardrails.

> **✅ Migration Ready**: Migration safety (A.7.12), tail ops consistency (A.7.13), and unified migration interface (A.7.15) are now implemented. The system is ready for schema migrations when `CURRENT_SCHEMA_VERSION > 1`.

This document is structured around these four purposes. Most complexity lives in **Part A** (local persistence). **Part B** handles file-based sync via the `FileBasedSyncAdapter`. **Part C** handles operation-based sync with the SuperSync server. **Part D** integrates validation and automatic repair.

```
┌───────────────────────────────────────────────────────────────────┐
...
├──► SUP_OPS ◄──────┘
│    (Local Persistence - Part A)
│
└──► Sync Providers
     ├── SuperSync (Part C - operation-based)
     └── WebDAV/Dropbox/LocalFile (Part B - file-based)
```

---

---

```typescript
interface StateCache {
  // ...
}
```

### IndexedDB Structure

```
┌─────────────────────────────────────────────────────────────────────┐
│                              IndexedDB                              │
├─────────────────────────────────────────────────────────────────────┤
│  'SUP_OPS' database (Operation Log)                                 │
│                                                                     │
│   ┌──────────────────────────────────────────────────────────┐      │
│   │  ops (event log)  - Append-only operation log            │      │
│   │  state_cache      - Periodic state snapshots             │      │
│   │  meta             - Vector clocks, sync state            │      │
│   │  archive_young    - Recent archived tasks                │      │
│   │  archive_old      - Old archived tasks                   │      │
│   └──────────────────────────────────────────────────────────┘      │
│                                                                     │
│   ALL model data persisted here                                     │
└─────────────────────────────────────────────────────────────────────┘
```

**Key insight:** All application data is persisted in the `SUP_OPS` database via the operation log system.
||||
## A.2 Write Path
|
||||
|
||||
|
|
@@ -309,16 +312,16 @@ Two optimizations speed up hydration:

### Genesis Migration

On first startup (SUP_OPS empty), the system initializes with default state:

```typescript
async createGenesisSnapshot(): Promise<void> {
  // Initialize with default state or migrate from legacy if present
  const initialState = await this.getInitialState();

  // Create initial snapshot
  await this.opLogStore.saveStateCache({
    state: initialState,
    lastAppliedOpSeq: 0,
    vectorClock: {},
    compactedAt: Date.now(),
  });
}
```

For users upgrading from legacy formats, `ServerMigrationService` handles the migration during first sync.

## A.4 Compaction

### Purpose

@@ -537,13 +542,13 @@ async hydrateStore(): Promise<void> {
}

private async attemptRecovery(): Promise<void> {
  // 1. Try backup from state cache
  const backupState = await this.tryLoadBackupSnapshot();
  if (backupState) {
    await this.recoverFromBackup(backupState);
    return;
  }
  // 2. Try remote sync (triggers ServerMigrationService if needed)
  // 3. Show error to user
}
```

@@ -760,19 +765,20 @@ interface SchemaMigration {

**Version Mismatch Handling:** Remote data too new → prompt user to update the app. Remote data too old → show an error; manual intervention may be needed.

### A.7.10 Legacy Data Migration

> **Note:** The legacy PFAPI system has been removed (January 2026). This section documents historical migration paths.

For users upgrading from older versions (pre-operation-log), the `ServerMigrationService` handles migration:

1. On first sync, it detects legacy remote data format
2. Downloads the full state from the legacy format
3. Creates a `SYNC_IMPORT` operation with the imported state
4. Uploads the new format to the sync provider

**Key file:** `src/app/op-log/sync/server-migration.service.ts`

All future schema changes should use the **Schema Migration** system (A.7) described above.
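The detection step can be sketched as a small pure check. This is an illustrative sketch only — the actual logic lives in `ServerMigrationService`, and the `version: 2` marker is borrowed from the `FileBasedSyncData` envelope described in Part B; the function name is hypothetical:

```typescript
// Remote data without the operation-log envelope marker (version: 2)
// is treated as legacy-format data that needs a one-time migration.
function needsLegacyMigration(remoteRaw: unknown): boolean {
  if (remoteRaw === null || typeof remoteRaw !== 'object') {
    return false; // nothing on the remote yet - fresh start, no migration
  }
  const version = (remoteRaw as { version?: unknown }).version;
  return version !== 2;
}
```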

### A.7.6 Implemented Safety Features

@@ -1001,188 +1007,139 @@ Before releasing any migration:

---

# Part B: File-Based Sync

File-based sync providers (WebDAV, Dropbox, LocalFile) use a single-file approach via the `FileBasedSyncAdapter`.

## B.1 How File-Based Sync Works

```
Sync Triggered (WebDAV/Dropbox/LocalFile)
        │
        ▼
FileBasedSyncAdapter.downloadOps()
        │
        └──► Downloads sync-data.json from remote
               │
               ├──► Contains: state snapshot + recent ops buffer
               │
               └──► Compares vector clocks for conflict detection
        │
        ▼
Process new ops, merge state
        │
        ▼
FileBasedSyncAdapter.uploadOps()
        │
        └──► Upload merged state + ops
```

**Key file:** `src/app/op-log/sync-providers/file-based/file-based-sync-adapter.service.ts`

## B.2 FileBasedSyncData Format

```typescript
interface FileBasedSyncData {
  version: 2;
  schemaVersion: number;
  vectorClock: VectorClock;
  syncVersion: number; // Content-based optimistic locking
  clientId: string;
  lastModified: number;

  // Full state snapshot (~95% of file size)
  state: AppDataComplete;

  // Recent operations for conflict detection (last 200, ~5% of file)
  recentOps: CompactOperation[];

  // Checksum for integrity verification
  checksum?: string;
}
```
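The `vectorClock` field drives conflict detection. As a minimal sketch of the comparison (not the project's actual implementation, which lives under `op-log/sync/`; names here are illustrative):

```typescript
type VectorClock = Record<string, number>;

type ClockOrder = 'EQUAL' | 'LOCAL_AHEAD' | 'REMOTE_AHEAD' | 'CONCURRENT';

// Compare two vector clocks component-wise. CONCURRENT means each side has
// at least one counter the other hasn't seen -> a true conflict.
function compareVectorClocks(local: VectorClock, remote: VectorClock): ClockOrder {
  let localAhead = false;
  let remoteAhead = false;
  for (const id of [...Object.keys(local), ...Object.keys(remote)]) {
    const l = local[id] ?? 0;
    const r = remote[id] ?? 0;
    if (l > r) localAhead = true;
    if (r > l) remoteAhead = true;
  }
  if (localAhead && remoteAhead) return 'CONCURRENT';
  if (localAhead) return 'LOCAL_AHEAD';
  if (remoteAhead) return 'REMOTE_AHEAD';
  return 'EQUAL';
}
```

Only `CONCURRENT` requires the piggybacking merge described below; the other three outcomes map directly to no-op, upload, or download.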
## B.3 Piggybacking Mechanism

When two clients sync concurrently, the adapter uses "piggybacking" to ensure no operations are lost:

1. Client A uploads state (syncVersion 1 → 2)
2. Client B tries to upload, detects version mismatch
3. Client B downloads A's changes, finds ops it hasn't seen
4. Client B merges A's ops into its state, uploads (syncVersion 2 → 3)
5. Both clients end up with all operations

```typescript
// In FileBasedSyncAdapter.uploadOps()
const remote = await this._downloadRemoteData(provider);
if (remote && remote.syncVersion !== expectedSyncVersion) {
  // Another client synced - find ops we haven't processed
  const newOps = remote.recentOps.filter((op) => op.seq > lastProcessedSeq);
  // Return these as "piggybacked" ops for the caller to process
  return { localOps, newOps };
}
```
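The optimistic-locking decision above can be modeled as a pure function. This is a simplified sketch — `seq`, `syncVersion`, and the piggybacked-ops idea come from the snippet above, but the function name and return shape are illustrative, not the adapter's real API:

```typescript
interface CompactOp {
  seq: number;
  entityId: string;
}

interface RemoteFile {
  syncVersion: number;
  recentOps: CompactOp[];
}

// Decide what to do on upload: if the remote syncVersion moved since we last
// read it, another client synced in between - pick up (piggyback) the ops we
// haven't processed yet instead of blindly overwriting them.
function planUpload(
  remote: RemoteFile | null,
  expectedSyncVersion: number,
  lastProcessedSeq: number,
): { canUpload: boolean; piggybackedOps: CompactOp[] } {
  if (!remote || remote.syncVersion === expectedSyncVersion) {
    return { canUpload: true, piggybackedOps: [] };
  }
  return {
    canUpload: false, // merge the piggybacked ops first, then retry the upload
    piggybackedOps: remote.recentOps.filter((op) => op.seq > lastProcessedSeq),
  };
}
```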
## B.4 Sync Download Persistence

When remote data is downloaded, the sync system creates a SYNC_IMPORT operation:

```typescript
async hydrateFromRemoteSync(downloadedMainModelData?: Record<string, unknown>): Promise<void> {
  // 1. Create SYNC_IMPORT operation with downloaded state
  const op: Operation = {
    id: uuidv7(),
    opType: 'SYNC_IMPORT',
    entityType: 'ALL',
    payload: downloadedMainModelData,
    // ...
  };
  await this.opLogStore.append(op, 'remote');

  // 2. Force snapshot for crash safety
  await this.opLogStore.saveStateCache({
    state: downloadedMainModelData,
    lastAppliedOpSeq: lastSeq,
    // ...
  });

  // 3. Dispatch to NgRx
  this.store.dispatch(loadAllData({ appDataComplete: downloadedMainModelData }));
}
```

### loadAllData Variants

```typescript
interface LoadAllDataMeta {
  isHydration?: boolean; // From SUP_OPS startup - skip logging
  isRemoteSync?: boolean; // From sync download - create import op
  isBackupImport?: boolean; // From file import - create import op
}
```

| Source               | Create Op?          | Force Snapshot? |
| -------------------- | ------------------- | --------------- |
| Hydration (startup)  | No                  | No              |
| Remote sync download | Yes (SYNC_IMPORT)   | Yes             |
| Backup file import   | Yes (BACKUP_IMPORT) | Yes             |
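The table above can be expressed as a small decision helper. This is an illustrative sketch of the dispatch policy only — `persistencePolicy` is a hypothetical name, not the actual effect code:

```typescript
interface LoadAllDataMeta {
  isHydration?: boolean;
  isRemoteSync?: boolean;
  isBackupImport?: boolean;
}

// Map the loadAllData source flags to the persistence policy from the table.
function persistencePolicy(meta: LoadAllDataMeta): {
  createOp: 'NONE' | 'SYNC_IMPORT' | 'BACKUP_IMPORT';
  forceSnapshot: boolean;
} {
  if (meta.isRemoteSync) return { createOp: 'SYNC_IMPORT', forceSnapshot: true };
  if (meta.isBackupImport) return { createOp: 'BACKUP_IMPORT', forceSnapshot: true };
  return { createOp: 'NONE', forceSnapshot: false }; // hydration at startup
}
```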
## B.5 Archive Data Handling

Archive data (`archiveYoung`, `archiveOld`) is included in the state snapshot for file-based sync.
Archives are written directly to IndexedDB via `ArchiveDbAdapter` (bypassing the operation log for performance).

### Why Archives Bypass Operation Log

1. **Size**: Archived tasks can grow to tens of thousands of entries over years
2. **Frequency**: Archive updates are rare (only when archiving tasks or flushing old data)
3. **Sync needs**: Archives sync as part of the state snapshot, but don't need operation-level granularity

### Archive Write Path

```
Archive Operation (e.g., archiving a completed task)
        │
        ├──► 1. Update archive directly via ArchiveDbAdapter
        │
        └──► 2. On next sync, archive is included in state snapshot
```

**Key files:**

- `src/app/op-log/archive/archive-db-adapter.service.ts`
- `src/app/op-log/archive/archive-operation-handler.service.ts`

---

@@ -1190,14 +1147,15 @@

For server-based sync, the operation log IS the sync mechanism. Individual operations are uploaded/downloaded rather than full state snapshots.

## C.1 How Server Sync Differs from File-Based

| Aspect              | File-Based Sync (Part B)     | Server Sync (Part C)  |
| ------------------- | ---------------------------- | --------------------- |
| What syncs          | State snapshot + recent ops  | Individual operations |
| Conflict detection  | Vector clock on snapshot     | Entity-level per-op   |
| Transport           | Single file (sync-data.json) | HTTP API              |
| Op-log role         | Builds snapshot from ops     | IS the sync           |
| `syncedAt` tracking | Not needed                   | Required              |

## C.2 Operation Sync Protocol

@@ -1556,7 +1514,7 @@ See [operation-log-architecture-diagrams.md](./operation-log-architecture-diagrams.md)

# Part D: Data Validation & Repair

The operation log includes comprehensive validation and automatic repair to prevent data corruption and recover from invalid states.

## D.1 Validation Architecture

@@ -1571,7 +1529,7 @@ Four validation checkpoints ensure data integrity throughout the operation lifecycle.

## D.2 REPAIR Operation Type

When validation fails at checkpoints B, C, or D, the system attempts automatic repair using the `dataRepair()` function. If repair succeeds, a REPAIR operation is created:

```typescript
enum OpType {
@@ -1683,7 +1641,7 @@ This catches:

## D.6 ValidateStateService

Wraps validation and repair functionality using Typia and cross-model validation:

```typescript
@Injectable({ providedIn: 'root' })
@@ -2226,7 +2184,7 @@ When adding new entities or relationships:

- State validation during hydration (Checkpoints B & C - Typia + cross-model validation)
- Post-sync validation (Checkpoint D - validation after applying remote ops)
- REPAIR operation type (auto-repair with full state + repair summary)
- ValidateStateService (wraps Typia validation + data repair)
- RepairOperationService (creates REPAIR ops, user notification)
- User notification on repair (snackbar with issue count)

@@ -2322,9 +2280,19 @@

src/app/features/work-context/store/
├── work-context-meta.actions.ts         # Move actions (moveTaskInTodayList, etc.)
└── work-context-meta.helper.ts          # Anchor-based positioning helpers

src/app/op-log/sync-providers/
├── super-sync/                          # SuperSync server provider
│   ├── super-sync.ts                    # Server-based sync implementation
│   └── super-sync.model.ts              # SuperSync types
├── file-based/                          # File-based providers (Part B)
│   ├── file-based-sync-adapter.service.ts  # Unified adapter for file providers
│   ├── file-based-sync.types.ts         # FileBasedSyncData types
│   ├── webdav/                          # WebDAV provider
│   ├── dropbox/                         # Dropbox provider
│   └── local-file/                      # Local file sync provider
├── provider-manager.service.ts          # Provider activation/management
├── wrapped-provider.service.ts          # Provider wrapper with encryption
└── credential-store.service.ts          # OAuth/credential storage

e2e/
├── tests/sync/supersync.spec.ts         # E2E SuperSync tests (Playwright)

@@ -2336,8 +2304,8 @@

# References

- [Operation Rules](./operation-rules.md) - Payload and validation rules
- [SuperSync Encryption](./supersync-encryption-architecture.md) - End-to-end encryption implementation
- [Hybrid Manifest Architecture](./long-term-plans/hybrid-manifest-architecture.md) - File-based sync optimization
- [Vector Clocks](./vector-clocks.md) - Vector clock implementation details
- [File-Based Sync Implementation](../ai/file-based-oplog-sync-implementation-plan.md) - Historical implementation plan
|
|
|||
|
|
@ -1,156 +0,0 @@
|
|||
# Sync System Overview (PFAPI)
|
||||
|
||||
**Last Updated:** December 2025
|
||||
|
||||
This directory contains the **legacy PFAPI** synchronization implementation for Super Productivity. This system enables data sync across devices through file-based providers (Dropbox, WebDAV, Local File).
|
||||
|
||||
> **Note:** Super Productivity now has **two sync systems** running in parallel:
|
||||
>
|
||||
> 1. **PFAPI Sync** (this directory) - File-based sync via Dropbox/WebDAV
|
||||
> 2. **Operation Log Sync** - Operation-based sync via SuperSync Server
|
||||
>
|
||||
> See [Operation Log Architecture](/docs/sync-and-op-log/operation-log-architecture.md) for the newer operation-based system.
|
||||
|
||||
## Key Components
|
||||
|
||||
### Core Services
|
||||
|
||||
- **`sync.service.ts`** - Main orchestrator for sync operations
|
||||
- **`meta-sync.service.ts`** - Handles sync metadata file operations
|
||||
- **`model-sync.service.ts`** - Manages individual model synchronization
|
||||
- **`conflict-handler.service.ts`** - User interface for conflict resolution
|
||||
|
||||
### Sync Providers
|
||||
|
||||
Located in `sync-providers/`:
|
||||
|
||||
- Dropbox
|
||||
- WebDAV
|
||||
- Local File System
|
||||
|
||||
### Sync Algorithm
|
||||
|
||||
The sync system uses vector clocks for accurate conflict detection:
|
||||
|
||||
1. **Physical Timestamps** (`lastUpdate`) - For ordering events
|
||||
2. **Vector Clocks** (`vectorClock`) - For accurate causality tracking and conflict detection
|
||||
3. **Sync State** (`lastSyncedUpdate`, `lastSyncedVectorClock`) - To track last successful sync
|
||||
|
||||
## How Sync Works
|
||||
|
||||
### 1. Change Detection
|
||||
|
||||
When a user modifies data:
|
||||
|
||||
```typescript
|
||||
// In meta-model-ctrl.ts
|
||||
lastUpdate = Date.now();
|
||||
vectorClock[clientId] = vectorClock[clientId] + 1;
|
||||
```
|
||||
|
||||
### 2. Sync Status Determination
|
||||
|
||||
The system compares local and remote metadata to determine:
|
||||
|
||||
- **InSync**: No changes needed
|
||||
- **UpdateLocal**: Download remote changes
|
||||
- **UpdateRemote**: Upload local changes
|
||||
- **Conflict**: Both have changes (requires user resolution)
|
||||
|
||||
### 3. Conflict Detection
|
||||
|
||||
Uses vector clocks for accurate detection:
|
||||
|
||||
```typescript
|
||||
const comparison = compareVectorClocks(localVector, remoteVector);
|
||||
if (comparison === VectorClockComparison.CONCURRENT) {
|
||||
// True conflict - changes were made independently
|
||||
}
|
||||
```
|
||||
|
||||
### 4. Data Transfer
|
||||
|
||||
- **Upload**: Sends changed models and updated metadata
|
||||
- **Download**: Retrieves and merges remote changes
|
||||
- **Conflict Resolution**: User chooses which version to keep
|
||||
|
||||
## Key Files
|
||||
|
||||
### Metadata Structure
|
||||
|
||||
```typescript
|
||||
interface LocalMeta {
|
||||
lastUpdate: number; // Physical timestamp
|
||||
lastSyncedUpdate: number; // Last synced timestamp
|
||||
vectorClock?: VectorClock; // Causality tracking
|
||||
lastSyncedVectorClock?: VectorClock; // Last synced vector clock
|
||||
revMap: RevMap; // Model revision map
|
||||
crossModelVersion: number; // Schema version
|
||||
}
|
||||
```
|
||||
|
||||
### Important Considerations
|
||||
|
||||
1. **Vector Clocks**: Each client maintains its own counter for accurate causality tracking
|
||||
2. **Backwards Compatibility**: Supports migration from older versions
|
||||
3. **Conflict Minimization**: Vector clocks eliminate false conflicts
|
||||
4. **Atomic Operations**: Meta file serves as transaction coordinator
|
||||
|
||||
## Common Sync Scenarios
|
||||
|
||||
### Scenario 1: Simple Update
|
||||
|
||||
1. Device A makes changes
|
||||
2. Device A uploads to cloud
|
||||
3. Device B downloads changes
|
||||
4. Both devices now in sync
|
||||
|
||||
### Scenario 2: Conflict Resolution
|
||||
|
||||
1. Device A and B both make changes
|
||||
2. Device A syncs first
|
||||
3. Device B detects conflict
|
||||
4. User chooses which version to keep
|
||||
5. Chosen version propagates to all devices
|
||||
|
||||
### Scenario 3: Multiple Devices
|
||||
|
||||
1. Devices A, B, C all synced
|
||||
2. Device A makes changes while offline
|
||||
3. Device B makes different changes
|
||||
4. Device C acts as intermediary
|
||||
5. Vector clocks ensure proper ordering
|
||||
|
||||
## Debugging Sync Issues
|
||||
|
||||
1. Enable verbose logging in `pfapi/api/util/log.ts`
|
||||
2. Check vector clock states in sync status
|
||||
3. Verify client IDs are stable
|
||||
4. Ensure metadata files are properly updated
|
||||
|
||||
## Integration with Operation Log
|
||||
|
||||
When using file-based sync (Dropbox, WebDAV), the Operation Log system maintains compatibility through:
|
||||
|
||||
1. **Vector Clock Updates**: `VectorClockFacadeService` updates the PFAPI meta-model's vector clock when operations are persisted locally
|
||||
2. **State Source**: PFAPI reads current state from NgRx store (not from operation log IndexedDB)
|
||||
3. **Bridge Effect**: `updateModelVectorClock$` in `operation-log.effects.ts` ensures clocks stay in sync
|
||||
|
||||
This allows users to:
|
||||
|
||||
- Use file-based sync (Dropbox/WebDAV) while benefiting from Operation Log's local persistence
|
||||
- Migrate between sync providers without data loss
|
||||
|
||||
## Future Direction
|
||||
|
||||
The PFAPI sync system is **stable but not receiving new features**. New sync features are being developed in the Operation Log system:
|
||||
|
||||
- ✅ Entity-level conflict resolution (Operation Log)
|
||||
- ✅ Incremental sync (Operation Log)
|
||||
- 📋 Planned: Deprecate file-based sync in favor of Operation Log with file fallback
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Vector Clocks](./vector-clocks.md) - Conflict detection implementation
|
||||
- [Operation Log Architecture](/docs/sync-and-op-log/operation-log-architecture.md) - Newer operation-based sync
|
||||
- [Hybrid Manifest Architecture](/docs/sync-and-op-log/long-term-plans/hybrid-manifest-architecture.md) - File-based Operation Log sync
|
||||
|
|
@ -1,860 +0,0 @@
|
|||
# PFAPI Sync and Persistence Architecture
|
||||
|
||||
This document describes the architecture and implementation of the persistence and synchronization system (PFAPI) in Super Productivity.
|
||||
|
||||
## Overview
|
||||
|
||||
PFAPI (Persistence Framework API) is a comprehensive system for:
|
||||
|
||||
1. **Local Persistence**: Storing application data in IndexedDB
|
||||
2. **Cross-Device Synchronization**: Syncing data across devices via multiple cloud providers
|
||||
3. **Conflict Detection**: Using vector clocks for distributed conflict detection
|
||||
4. **Data Validation & Migration**: Ensuring data integrity across versions
|
||||
|
||||
## Architecture Layers
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Angular Application │
|
||||
│ (Components & Services) │
|
||||
└────────────────────────────┬────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ PfapiService (Angular) │
|
||||
│ - Injectable wrapper around Pfapi │
|
||||
│ - Exposes RxJS Observables for UI integration │
|
||||
│ - Manages sync provider activation │
|
||||
└────────────────────────────┬────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Pfapi (Core) │
|
||||
│ - Main orchestrator for all persistence operations │
|
||||
│ - Coordinates Database, Models, Sync, and Migration │
|
||||
└────────────────────────────┬────────────────────────────────────┘
|
||||
│
|
||||
┌────────────────────┼────────────────────┐
|
||||
│ │ │
|
||||
▼ ▼ ▼
|
||||
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
|
||||
│ Database │ │ SyncService │ │ Migration │
|
||||
│ (IndexedDB) │ │ (Orchestrator)│ │ Service │
|
||||
└───────────────┘ └───────┬───────┘ └───────────────┘
|
||||
│
|
||||
┌────────────┼────────────┐
|
||||
│ │ │
|
||||
▼ ▼ ▼
|
||||
┌──────────┐ ┌───────────┐ ┌───────────┐
|
||||
│ Meta │ │ Model │ │ Encrypt/ │
|
||||
│ Sync │ │ Sync │ │ Compress │
|
||||
└──────────┘ └───────────┘ └───────────┘
|
||||
│ │
|
||||
└────────────┼────────────┐
|
||||
│ │
|
||||
▼ ▼
|
||||
┌───────────────────────────┐
|
||||
│ SyncProvider Interface │
|
||||
└───────────────┬───────────┘
|
||||
│
|
||||
┌───────────────────────────┼───────────────────────────┐
|
||||
│ │ │
|
||||
▼ ▼ ▼
|
||||
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
|
||||
│ Dropbox │ │ WebDAV │ │ Local File │
|
||||
└───────────────┘ └───────────────┘ └───────────────┘
|
||||
```
|
||||
|
||||
## Directory Structure
|
||||
|
||||
```
|
||||
src/app/pfapi/
|
||||
├── pfapi.service.ts # Angular service wrapper
|
||||
├── pfapi-config.ts # Model and provider configuration
|
||||
├── pfapi-helper.ts # RxJS integration helpers
|
||||
├── api/
|
||||
│ ├── pfapi.ts # Main API class
|
||||
│ ├── pfapi.model.ts # Type definitions
|
||||
│ ├── pfapi.const.ts # Enums and constants
|
||||
│ ├── db/ # Database abstraction
|
||||
│ │ ├── database.ts # Database wrapper with locking
|
||||
│ │ ├── database-adapter.model.ts
|
||||
│ │ └── indexed-db-adapter.ts # IndexedDB implementation
|
||||
│ ├── model-ctrl/ # Model controllers
|
||||
│ │ ├── model-ctrl.ts # Generic model controller
|
||||
│ │ └── meta-model-ctrl.ts # Metadata controller
|
||||
│ ├── sync/ # Sync orchestration
|
||||
│ │ ├── sync.service.ts # Main sync orchestrator
|
||||
│ │ ├── meta-sync.service.ts # Metadata sync
|
||||
│ │ ├── model-sync.service.ts # Model sync
|
||||
│ │ ├── sync-provider.interface.ts
|
||||
│ │ ├── encrypt-and-compress-handler.service.ts
|
||||
│ │ └── providers/ # Provider implementations
|
||||
│ ├── migration/ # Data migration
|
||||
│ ├── util/ # Utilities (vector-clock, etc.)
|
||||
│ └── errors/ # Custom error types
|
||||
├── migrate/ # Cross-model migrations
|
||||
├── repair/ # Data repair utilities
|
||||
└── validate/ # Validation functions
|
||||
```
|
||||
|
||||
## Core Components
|
||||
|
||||
### 1. Database Layer
|
||||
|
||||
#### Database Class (`api/db/database.ts`)
|
||||
|
||||
The `Database` class wraps the storage adapter and provides:
|
||||
|
||||
- **Locking mechanism**: Prevents concurrent writes during sync
|
||||
- **Error handling**: Centralized error management
|
||||
- **CRUD operations**: `load`, `save`, `remove`, `loadAll`, `clearDatabase`
|
||||
|
||||
```typescript
|
||||
class Database {
|
||||
lock(): void; // Prevents writes
|
||||
unlock(): void; // Re-enables writes
|
||||
load<T>(key: string): Promise<T>;
|
||||
save<T>(key: string, data: T, isIgnoreDBLock?: boolean): Promise<void>;
|
||||
remove(key: string): Promise<unknown>;
|
||||
}
|
||||
```
|
||||
|
||||
The database is locked during sync operations to prevent race conditions.
|
||||
|
||||
#### IndexedDB Adapter (`api/db/indexed-db-adapter.ts`)
|
||||
|
||||
Implements `DatabaseAdapter` interface using IndexedDB:
|
||||
|
||||
- Database name: `'pf'`
|
||||
- Main store: `'main'`
|
||||
- Uses the `idb` library for async IndexedDB operations
|
||||
|
||||
```typescript
|
||||
class IndexedDbAdapter implements DatabaseAdapter {
|
||||
async init(): Promise<IDBPDatabase>; // Opens/creates database
|
||||
async load<T>(key: string): Promise<T>; // db.get(store, key)
|
||||
async save<T>(key: string, data: T): Promise<void>; // db.put(store, data, key)
|
||||
async remove(key: string): Promise<unknown>; // db.delete(store, key)
|
||||
async loadAll<A>(): Promise<A>; // Returns all entries as object
|
||||
async clearDatabase(): Promise<void>; // db.clear(store)
|
||||
}
|
||||
```
## Local Storage Structure (IndexedDB)

All data is stored in a single IndexedDB database with one object store. Each entry is keyed by a string identifier.

### IndexedDB Keys

#### System Keys

| Key                   | Content                   | Description                                             |
| --------------------- | ------------------------- | ------------------------------------------------------- |
| `__meta_`             | `LocalMeta`               | Sync metadata (vector clock, revMap, timestamps)        |
| `__client_id_`        | `string`                  | Unique client identifier (e.g., `"BCL1234567890_12_5"`) |
| `__sp_cred_Dropbox`   | `DropboxPrivateCfg`       | Dropbox credentials                                     |
| `__sp_cred_WebDAV`    | `WebdavPrivateCfg`        | WebDAV credentials                                      |
| `__sp_cred_LocalFile` | `LocalFileSyncPrivateCfg` | Local file sync config                                  |
| `__TMP_BACKUP`        | `AllSyncModels`           | Temporary backup during imports                         |
#### Model Keys (all defined in `pfapi-config.ts`)

| Key              | Content               | Main File | Description                   |
| ---------------- | --------------------- | --------- | ----------------------------- |
| `task`           | `TaskState`           | Yes       | Tasks data (EntityState)      |
| `timeTracking`   | `TimeTrackingState`   | Yes       | Time tracking records         |
| `project`        | `ProjectState`        | Yes       | Projects (EntityState)        |
| `tag`            | `TagState`            | Yes       | Tags (EntityState)            |
| `simpleCounter`  | `SimpleCounterState`  | Yes       | Simple counters (EntityState) |
| `note`           | `NoteState`           | Yes       | Notes (EntityState)           |
| `taskRepeatCfg`  | `TaskRepeatCfgState`  | Yes       | Recurring task configs        |
| `reminders`      | `Reminder[]`          | Yes       | Reminder array                |
| `planner`        | `PlannerState`        | Yes       | Planner state                 |
| `boards`         | `BoardsState`         | Yes       | Kanban boards                 |
| `menuTree`       | `MenuTreeState`       | No        | Menu structure                |
| `globalConfig`   | `GlobalConfigState`   | No        | User settings                 |
| `issueProvider`  | `IssueProviderState`  | No        | Issue tracker configs         |
| `metric`         | `MetricState`         | No        | Metrics (EntityState)         |
| `improvement`    | `ImprovementState`    | No        | Improvements (EntityState)    |
| `obstruction`    | `ObstructionState`    | No        | Obstructions (EntityState)    |
| `pluginUserData` | `PluginUserDataState` | No        | Plugin user data              |
| `pluginMetadata` | `PluginMetaDataState` | No        | Plugin metadata               |
| `archiveYoung`   | `ArchiveModel`        | No        | Recent archived tasks         |
| `archiveOld`     | `ArchiveModel`        | No        | Old archived tasks            |
### Local Storage Diagram

```
┌──────────────────────────────────────────────────────────────────┐
│                         IndexedDB: "pf"                          │
│                          Store: "main"                           │
├──────────────────────┬───────────────────────────────────────────┤
│ Key                  │ Value                                     │
├──────────────────────┼───────────────────────────────────────────┤
│ __meta_              │ { lastUpdate, vectorClock, revMap, ... }  │
│ __client_id_         │ "BCLm1abc123_12_5"                        │
│ __sp_cred_Dropbox    │ { accessToken, refreshToken, encryptKey } │
│ __sp_cred_WebDAV     │ { url, username, password, encryptKey }   │
├──────────────────────┼───────────────────────────────────────────┤
│ task                 │ { ids: [...], entities: {...} }           │
│ project              │ { ids: [...], entities: {...} }           │
│ tag                  │ { ids: [...], entities: {...} }           │
│ note                 │ { ids: [...], entities: {...} }           │
│ globalConfig         │ { misc: {...}, keyboard: {...}, ... }     │
│ timeTracking         │ { ... }                                   │
│ planner              │ { ... }                                   │
│ boards               │ { ... }                                   │
│ archiveYoung         │ { task: {...}, timeTracking: {...} }      │
│ archiveOld           │ { task: {...}, timeTracking: {...} }      │
│ ...                  │ ...                                       │
└──────────────────────┴───────────────────────────────────────────┘
```
### How Models Are Saved Locally

When a model is saved via `ModelCtrl.save()`:

```typescript
// 1. Data is validated
if (modelCfg.validate) {
  const result = modelCfg.validate(data);
  if (!result.success && modelCfg.repair) {
    data = modelCfg.repair(data); // Auto-repair if possible
  }
}

// 2. Metadata is updated (if requested via isUpdateRevAndLastUpdate)
// Always:
vectorClock = incrementVectorClock(vectorClock, clientId);
lastUpdate = Date.now();

// Only for NON-main-file models (isMainFileModel: false):
if (!modelCfg.isMainFileModel) {
  revMap[modelId] = Date.now().toString();
}
// Main file models are tracked via mainModelData in the meta file, not revMap

// 3. Data is saved to IndexedDB
await db.put('main', data, modelId); // e.g., key='task', value=TaskState
```

**Important distinction:**

- **Main file models** (`isMainFileModel: true`): Vector clock is incremented, but `revMap` is NOT updated. These models are embedded in `mainModelData` within the meta file.
- **Separate model files** (`isMainFileModel: false`): Both vector clock and `revMap` are updated. The `revMap` entry tracks the revision of the individual remote file.
### 2. Model Control Layer

#### ModelCtrl (`api/model-ctrl/model-ctrl.ts`)

Generic controller for each data model (tasks, projects, tags, etc.):

```typescript
class ModelCtrl<MT extends ModelBase> {
  save(
    data: MT,
    options?: {
      isUpdateRevAndLastUpdate: boolean;
      isIgnoreDBLock?: boolean;
    },
  ): Promise<unknown>;

  load(): Promise<MT>;
  remove(): Promise<unknown>;
}
```
|
||||
|
||||
Key behaviors:
|
||||
|
||||
- **Validation on save**: Uses Typia for runtime type checking
|
||||
- **Auto-repair**: Attempts to repair invalid data if `repair` function is provided
|
||||
- **In-memory caching**: Keeps data in memory for fast reads
|
||||
- **Revision tracking**: Updates metadata on save when `isUpdateRevAndLastUpdate` is true
|
||||
|
||||
#### MetaModelCtrl (`api/model-ctrl/meta-model-ctrl.ts`)
|
||||
|
||||
Manages synchronization metadata:
|
||||
|
||||
```typescript
|
||||
interface LocalMeta {
|
||||
lastUpdate: number; // Timestamp of last local change
|
||||
lastSyncedUpdate: number | null; // Timestamp of last sync
|
||||
metaRev: string | null; // Remote metadata revision
|
||||
vectorClock: VectorClock; // Client-specific clock values
|
||||
lastSyncedVectorClock: VectorClock | null;
|
||||
revMap: RevMap; // Model ID -> revision mapping
|
||||
crossModelVersion: number; // Data schema version
|
||||
}
|
||||
```
Key responsibilities:

- **Client ID management**: Generates and stores unique client identifiers
- **Vector clock updates**: Increments on local changes
- **Revision map tracking**: Tracks which model versions are synced

### 3. Sync Service Layer

#### SyncService (`api/sync/sync.service.ts`)

Main sync orchestrator. The `sync()` method:

1. **Check readiness**: Verify the sync provider is configured and authenticated
2. **Operation log sync**: Upload/download operation logs
3. **Early return check**: If `lastSyncedUpdate === lastUpdate` and the meta revision matches, return `InSync`
4. **Download remote metadata**: Get the current remote state
5. **Determine sync direction**: Compare local and remote states using `getSyncStatusFromMetaFiles`
6. **Execute sync**: Upload, download, or report a conflict

```typescript
async sync(): Promise<{ status: SyncStatus; conflictData?: ConflictData }>
```

Possible sync statuses:

- `InSync` - No changes needed
- `UpdateLocal` - Download needed (remote is newer)
- `UpdateRemote` - Upload needed (local is newer)
- `UpdateLocalAll` / `UpdateRemoteAll` - Full sync needed
- `Conflict` - Concurrent changes detected
- `NotConfigured` - No sync provider set
#### MetaSyncService (`api/sync/meta-sync.service.ts`)

Handles metadata file operations:

- `download()`: Gets remote metadata, checks for locks
- `upload()`: Uploads metadata with encryption
- `lock()`: Creates a lock file during multi-file upload
- `getRev()`: Gets the remote metadata revision

#### ModelSyncService (`api/sync/model-sync.service.ts`)

Handles individual model file operations:

- `upload()`: Uploads a model with encryption
- `download()`: Downloads a model with revision verification
- `remove()`: Deletes a remote model file
- `getModelIdsToUpdateFromRevMaps()`: Determines which models need syncing
### 4. Vector Clock System

#### Purpose

Vector clocks provide **causality-based conflict detection** for distributed systems. Unlike simple timestamps:

- They detect **concurrent changes** (true conflicts)
- They preserve **happened-before relationships**
- They work without synchronized clocks

#### Implementation (`api/util/vector-clock.ts`)

```typescript
interface VectorClock {
  [clientId: string]: number; // Maps client ID to update count
}

enum VectorClockComparison {
  EQUAL, // Same state
  LESS_THAN, // A happened before B
  GREATER_THAN, // B happened before A
  CONCURRENT, // True conflict - both changed independently
}
```

Key operations:

- `incrementVectorClock(clock, clientId)` - Increment on local change
- `mergeVectorClocks(a, b)` - Take the max of each component
- `compareVectorClocks(a, b)` - Determine the relationship
- `hasVectorClockChanges(current, reference)` - Check for local changes
- `limitVectorClockSize(clock, clientId)` - Prune to a maximum of 50 clients
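The core comparison and merge logic can be sketched as follows. This is an illustrative reimplementation of the semantics described above, not the code from `vector-clock.ts`:

```typescript
// Illustrative sketch of the vector clock operations listed above.
type VectorClock = Record<string, number>;

enum Cmp {
  EQUAL,
  LESS_THAN,
  GREATER_THAN,
  CONCURRENT,
}

const incrementVectorClock = (c: VectorClock, clientId: string): VectorClock => ({
  ...c,
  [clientId]: (c[clientId] ?? 0) + 1,
});

// Component-wise max: the merged clock dominates both inputs.
const mergeVectorClocks = (a: VectorClock, b: VectorClock): VectorClock => {
  const out: VectorClock = { ...a };
  for (const [k, v] of Object.entries(b)) {
    out[k] = Math.max(out[k] ?? 0, v);
  }
  return out;
};

const compareVectorClocks = (a: VectorClock, b: VectorClock): Cmp => {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  let aGreater = false;
  let bGreater = false;
  for (const k of keys) {
    const av = a[k] ?? 0;
    const bv = b[k] ?? 0;
    if (av > bv) aGreater = true;
    if (bv > av) bGreater = true;
  }
  if (aGreater && bGreater) return Cmp.CONCURRENT; // true conflict
  if (aGreater) return Cmp.GREATER_THAN;
  if (bGreater) return Cmp.LESS_THAN;
  return Cmp.EQUAL;
};
```

For the example from the conflict diagram below, `{A:5, B:3}` vs `{A:4, B:5}` compares as `CONCURRENT`, because each side is ahead in one component.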
#### Sync Status Determination (`api/util/get-sync-status-from-meta-files.ts`)

```typescript
function getSyncStatusFromMetaFiles(remote: RemoteMeta, local: LocalMeta) {
  // 1. Check for empty local/remote
  // 2. Compare vector clocks
  // 3. Return appropriate SyncStatus
}
```

The algorithm (simplified - the actual implementation has more nuances):

1. **Empty data checks:**

   - If remote has no data (`isRemoteDataEmpty`), return `UpdateRemoteAll`
   - If local has no data (`isLocalDataEmpty`), return `UpdateLocalAll`

2. **Vector clock validation:**

   - If either local or remote lacks a vector clock, return `Conflict` with reason `NoLastSync`
   - Both `vectorClock` and `lastSyncedVectorClock` must be present

3. **Change detection using `hasVectorClockChanges`:**

   - Local changes: Compare the current `vectorClock` vs `lastSyncedVectorClock`
   - Remote changes: Compare the remote `vectorClock` vs the local `lastSyncedVectorClock`

4. **Sync status determination:**

   - No local changes + no remote changes -> `InSync`
   - Local changes only -> `UpdateRemote`
   - Remote changes only -> `UpdateLocal`
   - Both have changes -> `Conflict` with reason `BothNewerLastSync`

**Note:** The actual implementation also handles edge cases like minimal-update bootstrap scenarios and validates that clocks are properly initialized.
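Steps 3 and 4 can be sketched as a small pure function. This is a simplified model of the logic, assuming the `hasVectorClockChanges` semantics described above; it omits the empty-data and bootstrap edge cases:

```typescript
// Simplified sketch of steps 3-4 above; the real implementation in
// get-sync-status-from-meta-files.ts handles additional edge cases.
type VectorClock = Record<string, number>;

// True if any component of `current` is ahead of `reference`.
const hasVectorClockChanges = (
  current: VectorClock,
  reference: VectorClock,
): boolean => Object.entries(current).some(([id, n]) => n > (reference[id] ?? 0));

type SyncStatus = 'InSync' | 'UpdateRemote' | 'UpdateLocal' | 'Conflict';

const determineStatus = (
  localClock: VectorClock,
  remoteClock: VectorClock,
  lastSyncedClock: VectorClock,
): SyncStatus => {
  const localChanged = hasVectorClockChanges(localClock, lastSyncedClock);
  const remoteChanged = hasVectorClockChanges(remoteClock, lastSyncedClock);
  if (localChanged && remoteChanged) return 'Conflict'; // BothNewerLastSync
  if (localChanged) return 'UpdateRemote';
  if (remoteChanged) return 'UpdateLocal';
  return 'InSync';
};
```

Note that both sides are compared against the *last synced* clock, not against each other: that is what distinguishes "remote moved on while we were offline" from a true concurrent edit.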
|
||||
|
||||
### 5. Sync Providers
|
||||
|
||||
#### Interface (`api/sync/sync-provider.interface.ts`)
|
||||
|
||||
```typescript
|
||||
interface SyncProviderServiceInterface<PID extends SyncProviderId> {
|
||||
id: PID;
|
||||
isUploadForcePossible?: boolean;
|
||||
isLimitedToSingleFileSync?: boolean;
|
||||
maxConcurrentRequests: number;
|
||||
|
||||
getFileRev(targetPath: string, localRev: string | null): Promise<FileRevResponse>;
|
||||
downloadFile(targetPath: string): Promise<FileDownloadResponse>;
|
||||
uploadFile(
|
||||
targetPath: string,
|
||||
dataStr: string,
|
||||
revToMatch: string | null,
|
||||
isForceOverwrite?: boolean,
|
||||
): Promise<FileRevResponse>;
|
||||
removeFile(targetPath: string): Promise<void>;
|
||||
listFiles?(targetPath: string): Promise<string[]>;
|
||||
isReady(): Promise<boolean>;
|
||||
setPrivateCfg(privateCfg): Promise<void>;
|
||||
}
|
||||
```
|
||||
|
||||
#### Available Providers
|
||||
|
||||
| Provider | Description | Force Upload | Max Concurrent |
|
||||
| ------------- | --------------------------- | ------------ | -------------- |
|
||||
| **Dropbox** | OAuth2 PKCE authentication | Yes | 4 |
|
||||
| **WebDAV** | Nextcloud, ownCloud, etc. | No | 10 |
|
||||
| **LocalFile** | Electron/Android filesystem | No | 10 |
|
||||
| **SuperSync** | WebDAV-based custom sync | No | 10 |
|
||||
|
||||
### 6. Data Encryption & Compression

#### EncryptAndCompressHandlerService

Handles data transformation before upload/after download:

- **Compression**: Uses compression algorithms to reduce data size
- **Encryption**: AES encryption with a user-provided key

Data format prefix: `pf_` indicates processed data.
### 7. Migration System

#### MigrationService (`api/migration/migration.service.ts`)

Handles data schema evolution:

- Checks the version on app startup
- Applies cross-model migrations sequentially in order
- **Only supports forward (upgrade) migrations** - throws `CanNotMigrateMajorDownError` if the data version is higher than the code version (major version mismatch)

```typescript
interface CrossModelMigrations {
  [version: number]: (fullData) => transformedData;
}
```

**Migration behavior:**

- If `dataVersion === codeVersion`: No migration needed
- If `dataVersion < codeVersion`: Run all migrations from `dataVersion` to `codeVersion`
- If `dataVersion > codeVersion` (major version differs): Throws an error - downgrade not supported

Current version: `4.4` (from `pfapi-config.ts`)
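The sequential forward-migration behavior can be sketched as below. The `migrate` function and its major-version check are illustrative, modeling only the behavior described above:

```typescript
// Hedged sketch of sequential forward migration; names are illustrative.
type CrossModelMigrations = Record<number, (data: unknown) => unknown>;

class CanNotMigrateMajorDownError extends Error {}

const migrate = (
  data: unknown,
  dataVersion: number,
  codeVersion: number,
  migrations: CrossModelMigrations,
): unknown => {
  if (dataVersion === codeVersion) return data; // nothing to do
  if (Math.floor(dataVersion) > Math.floor(codeVersion)) {
    // Major version of the data is newer than the code: downgrade unsupported.
    throw new CanNotMigrateMajorDownError();
  }
  // Apply every migration step above dataVersion up to codeVersion, in order.
  return Object.keys(migrations)
    .map(Number)
    .filter((v) => v > dataVersion && v <= codeVersion)
    .sort((a, b) => a - b)
    .reduce((d, v) => migrations[v](d), data);
};
```

Each migration function receives the full cross-model data, so a single step can move fields between models atomically.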
### 8. Validation & Repair

#### Validation

Uses **Typia** for runtime type validation:

- Each model can define a `validate` function
- Returns `IValidation<T>` with a success flag and errors

#### Repair

Auto-repair system for corrupted data:

- Each model can define a `repair` function
- Applied when validation fails
- Falls back to an error if repair fails
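The validate-then-repair fallback can be sketched as a small helper. The config shape here is simplified for illustration; the real model configs use Typia-generated validators:

```typescript
// Illustrative validate-then-repair flow, mirroring the behavior above.
// The ModelCfg shape is simplified for this sketch.
interface ValidationResult {
  success: boolean;
}
interface ModelCfg<T> {
  validate?: (data: T) => ValidationResult;
  repair?: (data: T) => T;
}

class DataRepairNotPossibleError extends Error {}

const validateOrRepair = <T>(cfg: ModelCfg<T>, data: T): T => {
  if (!cfg.validate) return data; // no validation configured
  if (cfg.validate(data).success) return data;
  if (cfg.repair) {
    const repaired = cfg.repair(data);
    // The repaired data must itself pass validation.
    if (cfg.validate(repaired).success) return repaired;
  }
  throw new DataRepairNotPossibleError();
};
```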
## Sync Flow Diagrams

### Normal Sync Flow

```
┌─────────┐         ┌─────────┐         ┌─────────┐
│ Device A│         │ Remote  │         │ Device B│
└────┬────┘         └────┬────┘         └────┬────┘
     │                   │                   │
     │ 1. sync()         │                   │
     ├──────────────────►│                   │
     │                   │                   │
     │ 2. download       │                   │
     │    metadata       │                   │
     │◄──────────────────┤                   │
     │                   │                   │
     │ 3. compare        │                   │
     │    vector clocks  │                   │
     │                   │                   │
     │ 4. upload         │                   │
     │    changes        │                   │
     ├──────────────────►│                   │
     │                   │                   │
     │                   │ 5. sync()         │
     │                   │◄──────────────────┤
     │                   │                   │
     │                   │ 6. download       │
     │                   │    metadata       │
     │                   ├──────────────────►│
     │                   │                   │
     │                   │ 7. download       │
     │                   │    changed        │
     │                   │    models         │
     │                   ├──────────────────►│
```
### Conflict Detection Flow

```
┌─────────┐                               ┌─────────┐
│ Device A│                               │ Device B│
│ VC: {A:5, B:3}                          │ VC: {A:4, B:5}
└────┬────┘                               └────┬────┘
     │                                         │
     │ Both made changes offline               │
     │                                         │
     │  ┌──────────────────────────────────────┴──────────┐
     │  │ Compare: CONCURRENT                             │
     │  │ A has A:5 (higher)  │  B has B:5 (higher)       │
     │  │ Neither dominates                               │
     │  └──────────────────────────────────────┬──────────┘
     │                                         │
     │ Conflict!                               │
     │ User must choose which                  │
     │ version to keep                         │
```
### Multi-File Upload with Locking

```
┌─────────┐         ┌─────────┐
│ Client  │         │ Remote  │
└────┬────┘         └────┬────┘
     │                   │
     │ 1. Create lock    │
     │    (upload lock   │
     │     content)      │
     ├──────────────────►│
     │                   │
     │ 2. Upload         │
     │    model A        │
     ├──────────────────►│
     │                   │
     │ 3. Upload         │
     │    model B        │
     ├──────────────────►│
     │                   │
     │ 4. Upload         │
     │    metadata       │
     │    (replaces lock)│
     ├──────────────────►│
     │                   │
     │ Lock released     │
```
## Remote Storage Structure

The remote storage (Dropbox, WebDAV, local folder) contains multiple files. The structure is designed to optimize sync performance by separating frequently-changed small data from large archives.

### Remote Files Overview

```
/ (or /DEV/ in development)
├── __meta_           # Metadata file (REQUIRED - always synced first)
├── globalConfig      # User settings
├── menuTree          # Menu structure
├── issueProvider     # Issue tracker configurations
├── metric            # Metrics data
├── improvement       # Improvement entries
├── obstruction       # Obstruction entries
├── pluginUserData    # Plugin user data
├── pluginMetadata    # Plugin metadata
├── archiveYoung      # Recent archived tasks (can be large)
└── archiveOld        # Old archived tasks (can be very large)
```
|
||||
|
||||
### The Meta File (`__meta_`)
|
||||
|
||||
The meta file is the **central coordination file** for sync. It contains:
|
||||
|
||||
1. **Sync metadata** (vector clock, timestamps, version)
|
||||
2. **Revision map** (`revMap`) - tracks which revision each model file has
|
||||
3. **Main file model data** - frequently-accessed data embedded directly
|
||||
|
||||
```typescript
|
||||
interface RemoteMeta {
|
||||
// Sync coordination
|
||||
lastUpdate: number; // When data was last changed
|
||||
crossModelVersion: number; // Schema version (e.g., 4.4)
|
||||
vectorClock: VectorClock; // For conflict detection
|
||||
revMap: RevMap; // Model ID -> revision string
|
||||
|
||||
// Embedded data (main file models)
|
||||
mainModelData: {
|
||||
task: TaskState;
|
||||
project: ProjectState;
|
||||
tag: TagState;
|
||||
note: NoteState;
|
||||
timeTracking: TimeTrackingState;
|
||||
simpleCounter: SimpleCounterState;
|
||||
taskRepeatCfg: TaskRepeatCfgState;
|
||||
reminders: Reminder[];
|
||||
planner: PlannerState;
|
||||
boards: BoardsState;
|
||||
};
|
||||
|
||||
// For single-file sync providers
|
||||
isFullData?: boolean; // If true, all data is in this file
|
||||
}
|
||||
```
### Main File Models vs Separate Model Files

Models are categorized into two types:

#### Main File Models (`isMainFileModel: true`)

These are embedded in the `__meta_` file's `mainModelData` field:

| Model           | Reason                                |
| --------------- | ------------------------------------- |
| `task`          | Frequently accessed, relatively small |
| `project`       | Core data, always needed              |
| `tag`           | Small, frequently referenced          |
| `note`          | Often viewed together with tasks      |
| `timeTracking`  | Frequently updated                    |
| `simpleCounter` | Small, frequently updated             |
| `taskRepeatCfg` | Needed for task creation              |
| `reminders`     | Small array, time-critical            |
| `planner`       | Viewed on app startup                 |
| `boards`        | Part of main UI                       |

**Benefits:**

- Single HTTP request to get all core data
- Atomic update of related models
- Faster initial sync
#### Separate Model Files (`isMainFileModel: false` or undefined)

These are stored as individual files:

| Model                                  | Reason                                      |
| -------------------------------------- | ------------------------------------------- |
| `globalConfig`                         | User-specific, rarely synced                |
| `menuTree`                             | UI state, not critical                      |
| `issueProvider`                        | Contains credentials, separate for security |
| `metric`, `improvement`, `obstruction` | Historical data, can grow large             |
| `archiveYoung`                         | Can be large, changes infrequently          |
| `archiveOld`                           | Very large, rarely accessed                 |
| `pluginUserData`, `pluginMetadata`     | Plugin-specific, isolated                   |

**Benefits:**

- Only download what changed (via `revMap` comparison)
- Large files (archives) don't slow down regular sync
- Can sync individual models independently
### RevMap: Tracking Model Versions

The `revMap` tracks which version of each separate model file is on the remote:

```typescript
interface RevMap {
  [modelId: string]: string; // Model ID -> revision/timestamp
}

// Example
{
  "globalConfig": "1701234567890",
  "menuTree": "1701234567891",
  "archiveYoung": "1701234500000",
  "archiveOld": "1701200000000",
  // ... (main file models NOT included - they're in mainModelData)
}
```

When syncing:

1. Download the `__meta_` file
2. Compare the remote `revMap` with the local `revMap`
3. Only download model files where the revision differs
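The comparison in steps 2-3 reduces to a key-by-key revision diff. A minimal sketch (same idea as `getModelIdsToUpdateFromRevMaps`, though the real function also considers deletions and other details):

```typescript
// Minimal revMap diff: which separate model files need downloading.
type RevMap = Record<string, string>;

const getModelIdsToDownload = (remote: RevMap, local: RevMap): string[] =>
  Object.keys(remote).filter((id) => remote[id] !== local[id]);
```

Because revisions are opaque strings, only equality matters; there is no ordering assumption between revisions.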
### Upload Flow

```
┌─────────────────────────────────────────────────────────────────────────┐
│                               UPLOAD FLOW                               │
└─────────────────────────────────────────────────────────────────────────┘

1. Determine what changed (compare local/remote revMaps)
   local.revMap:  { archiveYoung: "100", globalConfig: "200" }
   remote.revMap: { archiveYoung: "100", globalConfig: "150" }
   → globalConfig needs upload

2. For multi-file upload, create lock:
   Upload to __meta_: "SYNC_IN_PROGRESS__BCLm1abc123_12_5"

3. Upload changed model files:
   Upload to globalConfig: { encrypted/compressed data }
   → Get new revision: "250"

4. Upload metadata (replaces lock):
   Upload to __meta_: {
     lastUpdate: 1701234567890,
     vectorClock: { "BCLm1abc123_12_5": 42 },
     revMap: { archiveYoung: "100", globalConfig: "250" },
     mainModelData: { task: {...}, project: {...}, ... }
   }
```
### Download Flow

```
┌─────────────────────────────────────────────────────────────────────────┐
│                              DOWNLOAD FLOW                              │
└─────────────────────────────────────────────────────────────────────────┘

1. Download __meta_ file
   → Get mainModelData (task, project, tag, etc.)
   → Get revMap for separate files

2. Compare revMaps:
   remote.revMap: { archiveYoung: "300", globalConfig: "250" }
   local.revMap:  { archiveYoung: "100", globalConfig: "250" }
   → archiveYoung needs download

3. Download changed model files (parallel with load balancing):
   Download archiveYoung → decrypt/decompress → save locally

4. Update local metadata:
   - Save all mainModelData to IndexedDB
   - Save downloaded models to IndexedDB
   - Update local revMap to match remote
   - Merge vector clocks
   - Set lastSyncedUpdate = lastUpdate
```
### Single-File Sync Mode

Some providers (or configurations) use `isLimitedToSingleFileSync: true`. In this mode:

- **All data** is stored in the `__meta_` file
- `mainModelData` contains ALL models, not just main file models
- The `isFullData: true` flag is set
- No separate model files are created
- Simpler, but less efficient for large datasets
### File Content Format
|
||||
|
||||
All files are stored as JSON strings with optional encryption/compression:
|
||||
|
||||
```
|
||||
Raw: { "ids": [...], "entities": {...} }
|
||||
↓ (if compression enabled)
|
||||
Compressed: <binary compressed data>
|
||||
↓ (if encryption enabled)
|
||||
Encrypted: <AES encrypted data>
|
||||
↓
|
||||
Prefixed: "pf_" + <cross_model_version> + "__" + <base64 encoded data>
|
||||
```
|
||||
|
||||
The `pf_` prefix indicates the data has been processed and needs decryption/decompression.
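The container format above can be modeled with a wrap/parse pair. This is a hedged sketch of the described prefix scheme; the exact delimiter handling in the real encrypt/compress handler may differ:

```typescript
// Sketch of the "pf_<version>__<payload>" container format described above.
const wrapPayload = (crossModelVersion: number, base64Data: string): string =>
  `pf_${crossModelVersion}__${base64Data}`;

const parsePayload = (
  raw: string,
): { crossModelVersion: number; base64Data: string } | null => {
  // No pf_ prefix means plain, unprocessed JSON.
  const match = raw.match(/^pf_([\d.]+)__([\s\S]*)$/);
  if (!match) return null;
  return { crossModelVersion: Number(match[1]), base64Data: match[2] };
};
```

Embedding the `crossModelVersion` in the prefix lets a client decide whether migration is needed before it spends time decrypting and decompressing the payload.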
## Data Model Configurations

From `pfapi-config.ts`:

| Model            | Main File | Description            |
| ---------------- | --------- | ---------------------- |
| `task`           | Yes       | Tasks data             |
| `timeTracking`   | Yes       | Time tracking records  |
| `project`        | Yes       | Projects               |
| `tag`            | Yes       | Tags                   |
| `simpleCounter`  | Yes       | Simple counters        |
| `note`           | Yes       | Notes                  |
| `taskRepeatCfg`  | Yes       | Recurring task configs |
| `reminders`      | Yes       | Reminders              |
| `planner`        | Yes       | Planner data           |
| `boards`         | Yes       | Kanban boards          |
| `menuTree`       | No        | Menu structure         |
| `globalConfig`   | No        | User settings          |
| `issueProvider`  | No        | Issue tracker configs  |
| `metric`         | No        | Metrics data           |
| `improvement`    | No        | Metric improvements    |
| `obstruction`    | No        | Metric obstructions    |
| `pluginUserData` | No        | Plugin user data       |
| `pluginMetadata` | No        | Plugin metadata        |
| `archiveYoung`   | No        | Recent archive         |
| `archiveOld`     | No        | Old archive            |

**Main file models** are stored in the metadata file itself for faster sync of frequently-accessed data.
## Error Handling

Custom error types in `api/errors/errors.ts`:

- **API Errors**: `NoRevAPIError`, `RemoteFileNotFoundAPIError`, `AuthFailSPError`
- **Sync Errors**: `LockPresentError`, `LockFromLocalClientPresentError`, `UnknownSyncStateError`
- **Data Errors**: `DataValidationFailedError`, `ModelValidationError`, `DataRepairNotPossibleError`

## Event System

```typescript
type PfapiEvents =
  | 'syncDone' // Sync completed
  | 'syncStart' // Sync starting
  | 'syncError' // Sync failed
  | 'syncStatusChange' // Status changed
  | 'metaModelChange' // Metadata updated
  | 'providerChange' // Provider switched
  | 'providerReady' // Provider authenticated
  | 'providerPrivateCfgChange' // Provider credentials updated
  | 'onBeforeUpdateLocal'; // About to download changes
```
## Security Considerations

1. **Encryption**: Optional AES encryption with a user-provided key
2. **No tracking**: All data stays local unless explicitly synced
3. **Credential storage**: Provider credentials are stored in IndexedDB with the prefix `__sp_cred_`
4. **OAuth security**: Dropbox uses the PKCE flow

## Key Design Decisions

1. **Vector clocks over timestamps**: More reliable conflict detection in distributed systems
2. **Main file models**: Frequently accessed data bundled with metadata for faster sync
3. **Database locking**: Prevents corruption during sync operations
4. **Adapter pattern**: Easy to add new storage backends
5. **Provider abstraction**: Consistent interface across Dropbox, WebDAV, local files
6. **Typia validation**: Runtime type safety without heavy dependencies

## Future Considerations

The system has been extended with **Operation Log Sync** for more granular synchronization at the operation level rather than full model replacement. See `operation-log-architecture.md` for details.
End-to-end encryption ensures the server never sees plaintext data.

---

## Area 12: Unified File-Based Sync

All sync providers (WebDAV, Dropbox, LocalFile, SuperSync) now use the unified operation log system.

```
┌─────────────────────────────────────────────────────────────────┐
│                   UNIFIED SYNC ARCHITECTURE                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │               OperationLogSyncService                   │    │
│  └─────────────────────────────────────────────────────────┘    │
│                │                                                │
│  ┌─────────────┴─────────────┐                                  │
│  ▼                           ▼                                  │
│ ┌─────────────────────┐ ┌────────────────────────────────┐      │
│ │  FILE-BASED SYNC    │ │     OPERATION-BASED SYNC       │      │
│ │ FileBasedSyncAdapter│ │      SuperSyncProvider         │      │
│ │ (OperationSyncable) │ │      (OperationSyncable)       │      │
│ │                     │ │                                │      │
│ │ ├─ uploadOps()      │ │ ├─ uploadOps()                 │      │
│ │ ├─ downloadOps()    │ │ ├─ downloadOps()               │      │
│ │ └─ uploadSnapshot() │ │ └─ uploadSnapshot()            │      │
│ └─────────────────────┘ └────────────────────────────────┘      │
│     │                        │                                  │
│  ┌───────┬──┴────┬─────────┐ │                                  │
│  ▼       ▼       ▼         ▼ ▼                                  │
│ WebDAV Dropbox LocalFile ... SuperSync Server                   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**File-Based Sync Model (Unified):**

```
Remote Storage (WebDAV/Dropbox folder):

/superProductivity/
└── sync-data.json    ← Single file with:
                        • Full state snapshot
                        • Recent ops buffer (200)
                        • Vector clock
                        • Archive data
```

**All Providers Now Use Same Interface:**

| Aspect        | File-Based (WebDAV/Dropbox/LocalFile) | SuperSync             |
| ------------- | ------------------------------------- | --------------------- |
| Granularity   | Individual operations                 | Individual operations |
| Conflict Unit | Single entity                         | Single entity         |
| Resolution    | Automatic (LWW)                       | Automatic (LWW)       |
| Storage       | Single sync-data.json                 | PostgreSQL            |
| History       | Recent 200 ops                        | Full op log           |

**Key Files:**

- `op-log/sync-providers/file-based/file-based-sync-adapter.service.ts` - File-based adapter
- `op-log/sync-providers/file-based/file-based-sync.types.ts` - Types for sync-data.json
- `op-log/sync-providers/super-sync/super-sync.ts` - SuperSync provider
- `op-log/sync/operation-log-sync.service.ts` - Main sync orchestration

---