mirror of https://github.com/johannesjo/super-productivity.git
synced 2026-01-23 02:36:05 +00:00

docs: cleanup

parent 642f8d3984 · commit f928e6b18e
5 changed files with 0 additions and 1149 deletions
@ -1,102 +0,0 @@

# Archive Write Points Documentation

**Last Updated:** December 8, 2025

This document tracks all locations where archive storage (IndexedDB) is written to, ensuring archive integrity and preventing unexpected writes.

## Single Source of Truth: ArchiveOperationHandler

All archive write operations MUST go through `ArchiveOperationHandler`. This handler is the **unified, centralized** point for all archive storage operations, used by both local and remote operations.

### File Location

`src/app/core/persistence/operation-log/processing/archive-operation-handler.service.ts`
### Entry Points

Both local and remote operations flow through the same handler:

1. **Local Operations**: `ArchiveOperationHandlerEffects` (using `LOCAL_ACTIONS`) → `ArchiveOperationHandler.handleOperation()`
2. **Remote Operations**: `OperationApplierService` → `ArchiveOperationHandler.handleOperation()`

This unified architecture ensures:

- Identical behavior for local and remote operations
- A single place to add new archive-affecting operations
- No duplicate code between local effects and remote handling
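A minimal sketch of the dispatch pattern such a centralized handler implies; the method names mirror the table below, but the action-type strings, the `isRemote` flag, and the class shape shown here are illustrative assumptions, not the actual implementation:

```typescript
// Hypothetical sketch only: the real handler lives in
// archive-operation-handler.service.ts.
import { Action } from '@ngrx/store';

export class ArchiveOperationHandlerSketch {
  async handleOperation(action: Action, isRemote: boolean): Promise<void> {
    switch (action.type) {
      case '[TaskShared] moveToArchive':
        return this._handleMoveToArchive(action, isRemote);
      case '[TaskShared] restoreTask':
        return this._handleRestoreTask(action);
      // ...one case per row of the table below
      default:
        return; // not an archive-affecting action: never touch the archive
    }
  }

  private async _handleMoveToArchive(_action: Action, _isRemote: boolean): Promise<void> {
    /* write tasks to archiveYoung (IndexedDB) */
  }

  private async _handleRestoreTask(_action: Action): Promise<void> {
    /* delete task from archive */
  }
}
```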
## Archive-Affecting Actions

The following actions trigger archive writes (defined in `ARCHIVE_AFFECTING_ACTION_TYPES`):

| Action | Handler Method | Archive Operation |
| --- | --- | --- |
| `TaskSharedActions.moveToArchive` | `_handleMoveToArchive` | Write tasks to archiveYoung (remote only) |
| `TaskSharedActions.restoreTask` | `_handleRestoreTask` | Delete task from archive |
| `flushYoungToOld` | `_handleFlushYoungToOld` | Move old tasks from archiveYoung to archiveOld |
| `TaskSharedActions.deleteProject` | `_handleDeleteProject` | Remove all tasks for the deleted project |
| `deleteTag` / `deleteTags` | `_handleDeleteTags` | Remove tag references, delete orphaned tasks |
| `TaskSharedActions.deleteTaskRepeatCfg` | `_handleDeleteTaskRepeatCfg` | Remove repeatCfgId from tasks |
| `TaskSharedActions.deleteIssueProvider` | `_handleDeleteIssueProvider` | Unlink issue data from tasks |
| `IssueProviderActions.deleteIssueProviders` | `_handleDeleteIssueProviders` | Unlink issue data from multiple tasks |
## Special Case: moveToArchive

`moveToArchive` is handled differently depending on the operation's origin:

- **Local operations**: The archive is written BEFORE the action is dispatched, by `ArchiveService.moveToArchive()`. The handler skips local operations to avoid double-writes.
- **Remote operations**: The archive is written AFTER the action is dispatched, by `ArchiveOperationHandler._handleMoveToArchive()`.
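A minimal sketch of that guard, assuming the handler receives an origin flag (the flag and function names are illustrative):

```typescript
// Hypothetical sketch: skip local moveToArchive operations, because
// ArchiveService.moveToArchive() has already written the archive.
async function handleMoveToArchive(tasks: unknown[], isRemote: boolean): Promise<void> {
  if (!isRemote) {
    return; // local write already happened before dispatch - avoid a double-write
  }
  // remote operation: write tasks to archiveYoung here
}
```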
## Operation Flow

```
LOCAL OPERATION FLOW:
User Action → Action Dispatched → Reducer → ArchiveOperationHandlerEffects
                                                ↓
                            ArchiveOperationHandler.handleOperation()
                                                ↓
                                  Archive Written (IndexedDB)

REMOTE OPERATION FLOW:
OperationApplierService → store.dispatch(action) → Reducer
                                                ↓
                            ArchiveOperationHandler.handleOperation()
                                                ↓
                                  Archive Written (IndexedDB)
```
## Files That Should NOT Write to Archive

The following effect files previously contained archive write logic and have been deprecated:

| Deprecated File | Replacement |
| --- | --- |
| `archive.effects.ts` | `ArchiveOperationHandlerEffects` |
| `project.effects.ts` (archive portion) | `ArchiveOperationHandlerEffects` |
| `tag.effects.ts` (archive portion) | `ArchiveOperationHandlerEffects` |
| `task-repeat-cfg.effects.ts` (archive portion) | `ArchiveOperationHandlerEffects` |
| `unlink-all-tasks-on-provider-deletion.effects.ts` | `ArchiveOperationHandlerEffects` |
## Verification Checklist

When adding new archive-affecting operations:

1. [ ] Add the action type to `ARCHIVE_AFFECTING_ACTION_TYPES` in `archive-operation-handler.service.ts`
2. [ ] Implement a handler method in `ArchiveOperationHandler`
3. [ ] Add test cases in `archive-operation-handler.service.spec.ts`
4. [ ] Update this documentation
5. [ ] Ensure there are no direct archive writes in effects (use `ArchiveOperationHandler`)
## How to Find All Archive Writes

To verify that no unexpected archive writes exist:

```bash
# Search for direct PfapiService archive writes
grep -r "archiveYoung.save\|archiveOld.save" src/app --include="*.ts" | grep -v "archive-operation-handler\|archive.service"

# Search for TaskArchiveService archive-modifying methods
grep -r "removeAllArchiveTasksForProject\|removeTagsFromAllTasks\|removeRepeatCfgFromArchiveTasks\|unlinkIssueProviderFromArchiveTasks\|deleteTasks" src/app --include="*.ts" | grep -v "archive-operation-handler\|\.spec\.ts"
```

Expected result: all matches should be in `ArchiveOperationHandler`, `ArchiveService`, or `TaskArchiveService`.
@ -1,57 +0,0 @@

# Sync Documentation (AI Reference)

This directory contains research, planning, and historical documentation related to the sync system. These documents were created during the development of the Operation Log architecture.

## Canonical Documentation

For the current, authoritative documentation, see:

**[/src/app/core/persistence/operation-log/docs/](/src/app/core/persistence/operation-log/docs/)**

That directory contains:

- `operation-log-architecture.md` - Main architecture reference
- `operation-log-architecture-diagrams.md` - Visual diagrams
- `operation-rules.md` - Design rules
- `hybrid-manifest-architecture.md` - File-based sync optimization
- `pfapi-sync-persistence-architecture.md` - Legacy PFAPI sync

## Documents in This Directory

### Active/Useful

| Document | Purpose |
| --- | --- |
| [operation-log-sync-best-practices.md](./operation-log-sync-best-practices.md) | Industry best practices for op-log sync servers |
| [operation-log-best-practises2.md](./operation-log-best-practises2.md) | Research from Figma, Linear, Replicache |
| [server-sync-architecture.md](./server-sync-architecture.md) | SuperSync server design considerations |

### Historical (Reference Only)

These documents were created during planning and may be outdated:

| Document | Purpose | Status |
| --- | --- | --- |
| [operation-log-implementation-review.md](./operation-log-implementation-review.md) | Code review findings | 📋 Mostly resolved |
| [operation-log-integration-testing-plan.md](./operation-log-integration-testing-plan.md) | Test planning | ✅ Tests implemented |
| [replace-pfapi-with-oplog-plan.md](./replace-pfapi-with-oplog-plan.md) | Migration planning | ✅ Complete |
| [supersync-e2e-test-plan.md](./supersync-e2e-test-plan.md) | E2E test scenarios | ✅ Tests implemented |
| [synthesized-delta-vs-oplog.md](./synthesized-delta-vs-oplog.md) | Architecture comparison | 📋 Historical |
| [synthesized-delta-sync-analysis.md](./synthesized-delta-sync-analysis.md) | Delta sync analysis | 📋 Historical |

### Redirects

These documents have moved to canonical locations:

| Document | Redirects To |
| --- | --- |
| [pfapi-sync-persistence-architecture.md](./pfapi-sync-persistence-architecture.md) | `/src/app/core/persistence/operation-log/docs/` |
| [hybrid-manifest-architecture.md](./hybrid-manifest-architecture.md) | `/src/app/core/persistence/operation-log/docs/` |

## Related Documentation

| Location | Content |
| --- | --- |
| `/docs/sync/vector-clocks.md` | Vector clock implementation |
| `/packages/super-sync-server/` | SuperSync server code and docs |
| `/src/app/pfapi/api/sync/README.md` | PFAPI sync overview |
@ -1,345 +0,0 @@

# Operation Log Implementation Review

**Date:** 2025-12-08
**Branch:** `feat/operation-logs`
**Last Updated:** December 11, 2025 (code review and test improvements)

This document summarizes findings from a comprehensive review of the operation log implementation, covering bugs, redundancies, architecture issues, and test coverage gaps.

---

## December 11, 2025 Update: Security Review

A comprehensive security review was conducted on December 11, 2025. Several "critical" security issues were initially flagged, but upon verification were found to be **already addressed**:
### Verified Security Measures ✅

| Issue | Status | Location |
| --- | --- | --- |
| **Timing attack on login** | ✅ Already mitigated | `auth.ts:214-219` - uses a dummy hash comparison |
| **Vector clock DoS** | ✅ Already implemented | `sync.types.ts:66-110` - limits to 100 entries, 255 char keys |
| **CSRF protection** | ✅ Not needed | JWT Bearer tokens in the Authorization header, not cookies |

### Verified Reliability Measures ✅

| Issue | Status | Location |
| --- | --- | --- |
| **Error handling in reducers** | ✅ Defensive checks present | `short-syntax-shared.reducer.ts:69-71` - returns early if the task is not found |
| **ClientId recovery** | ✅ Handled upstream | `metaModel.loadClientId()` creates a clientId if missing |

### Tests Added (December 11, 2025)

| Test | File | Description |
| --- | --- | --- |
| 3-way conflict E2E | `supersync-edge-cases.spec.ts` | 3 clients editing the same task concurrently |
| Delete vs. update E2E | `supersync-edge-cases.spec.ts` | One client deletes while another updates |
| Large conflict sets (100+ ops) | `conflict-resolution.service.spec.ts` | Stress test with 50 local + 50 remote ops |
| Multi-entity conflicts | `conflict-resolution.service.spec.ts` | 10 entities with 10 ops each |

---
## Executive Summary

The operation log implementation has a solid architectural foundation with event sourcing, vector clocks, and meta-reducers for atomic operations. However, several issues and test gaps were discovered that should be addressed before production use.

| Category | Critical | High | Medium | Total |
| --- | --- | --- | --- | --- |
| Bugs | 0 (1 fixed) | 3 | 6 | 9 |
| Test Gaps | 3 | 4 | 3 | 10 |

---
## Corrected Analysis (False Positives Removed)

### ~~Repair Mutex Race Condition~~ - FALSE POSITIVE

**Status:** Not a bug under JavaScript's single-threaded execution model.

**Original Claim:** Between promise creation and mutex assignment, another call can pass the `if (!this._repairMutex)` check.

**Why It's Not a Bug:** JavaScript is single-threaded. The code at lines 700-722:

```typescript
const repairPromise = (async () => { ... })(); // returns immediately
this._repairMutex = repairPromise; // assigned synchronously on the next line
```

There is no `await` between these lines, so the event loop cannot yield. No other code can execute between promise creation and mutex assignment.

**Optional Improvement:** Setting the mutex before creating the promise would be valid defensive coding for clarity, but the current implementation is not buggy.

---

### ~~Cache Not Invalidated on Append~~ - FALSE POSITIVE

**Status:** Not a bug - the cache self-invalidates.

**Original Claim:** `append()` doesn't invalidate `_appliedOpIdsCache`, causing stale data.

**Actual Implementation:**

```typescript
async getAppliedOpIds(): Promise<Set<string>> {
  const currentLastSeq = await this.getLastSeq();
  // Cache is valid only if lastSeq hasn't changed.
  if (this._appliedOpIdsCache && this._cacheLastSeq === currentLastSeq) {
    return new Set(this._appliedOpIdsCache);
  }
  // Rebuild the cache if the sequence has changed.
  ...
}
```

When new ops are appended, `lastSeq` changes (it auto-increments), so the cache automatically invalidates because `currentLastSeq !== _cacheLastSeq`.

**Conclusion:** No bug - the sequence-based invalidation works correctly.

---
## Critical Issues (Fix Immediately)

### 1. ~~incrementCompactionCounter() Bug - Compaction Never Triggers~~ - FIXED

**File:** `src/app/core/persistence/operation-log/store/operation-log-store.service.ts:434-439`
**Status:** Fixed on 2025-12-08

**Original Issue:** When no state cache existed, the method returned 1 without persisting it, so the compaction threshold was never reached.

**Fix Applied:**

```typescript
if (!cache) {
  // No state cache yet - create one with the counter starting at 1.
  await store.put({ id: 'current', compactionCounter: 1 });
  await tx.done;
  return 1;
}
```

---
## High Priority Issues

### 2. ~~Missing Subtask Cascade in Tag Deletion~~ - FALSE POSITIVE

**File:** `src/app/root-store/meta/task-shared-meta-reducers/tag-shared.reducer.ts:85-116`
**Status:** Re-evaluated on 2025-12-08 - NOT A BUG

**Original Claim:** Subtasks of surviving parents that lose all tags become orphaned.

**Why It's Not a Bug:** A subtask with a surviving parent is still accessible through the parent. The current logic correctly:

- only marks tasks as orphaned if they have no tags, no project, AND no parent
- deletes subtasks when their parent is orphaned

### 3. ~~Snapshot Saved Before Validation~~ - FIXED

**File:** `src/app/core/persistence/operation-log/store/operation-log-hydrator.service.ts:166-179`
**Status:** Fixed on 2025-12-08

**Original Issue:** The snapshot was saved before validation ran. If validation found corruption, the snapshot was stale.

**Fix Applied:** Moved validation (CHECKPOINT C) BEFORE saving the snapshot, in both the tail-replay and full-replay code paths.

### 4. No Error Handling in applyShortSyntax Sub-Functions

**File:** `src/app/root-store/meta/task-shared-meta-reducers/short-syntax-shared.reducer.ts:53-126`

Multiple state mutations (`moveTaskToProject`, `handleScheduleWithTime`, `handlePlanForDay`) run without a try-catch. If any of them throws mid-operation, the state is left partially updated. A defensive pattern is sketched below.
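A minimal sketch of that defensive pattern, assuming a reducer-style `(state) => state` step signature; the helper name and the steps array are hypothetical, not the actual code:

```typescript
// Hypothetical sketch: wrap the whole multi-step mutation so a throw in any
// sub-step keeps the previous state intact instead of a half-applied one.
const applyShortSyntaxSafely = <S>(
  state: S,
  steps: Array<(s: S) => S>, // e.g. moveTaskToProject, handleScheduleWithTime, handlePlanForDay
): S => {
  try {
    // Each step returns a new state; only the fully reduced result is kept.
    return steps.reduce((s, step) => step(s), state);
  } catch (e) {
    console.error('applyShortSyntax failed; keeping previous state', e);
    return state; // all-or-nothing: discard partial updates
  }
};
```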
### 5. Missing Recovery When clientId Load Fails

**File:** `src/app/core/persistence/operation-log/operation-log.effects.ts:70-76`

If `loadClientId()` fails or returns an empty value, the operation is lost; there is no retry mechanism.

---
## Medium Priority Issues

### 6. TOCTOU Race in Compaction

**File:** `src/app/core/persistence/operation-log/store/operation-log-compaction.service.ts:66-93`

There is a time-of-check-to-time-of-use gap between `getLastSeq()` and `deleteOpsWhere()`: new ops written between these calls could theoretically cause issues. A conventional mitigation is sketched below.
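A minimal sketch of that mitigation, bounding the deletion by the sequence number observed at check time; the store interface shown here is a hypothetical stand-in, not the service's real signatures:

```typescript
// Hypothetical sketch: ops appended after getLastSeq() have a higher seq and
// therefore can never match the deletion predicate.
const compactUpTo = async (store: {
  getLastSeq: () => Promise<number>;
  deleteOpsWhere: (pred: (op: { seq: number }) => boolean) => Promise<void>;
}): Promise<void> => {
  const lastSeqAtCheck = await store.getLastSeq();
  await store.deleteOpsWhere((op) => op.seq <= lastSeqAtCheck);
};
```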
### 7. Orphan Detection Doesn't Verify Project Exists

**File:** `src/app/root-store/meta/task-shared-meta-reducers/tag-shared.reducer.ts:105-108`

```typescript
if (newTagIds.length === 0 && !task.projectId && !task.parentId) {
```

This checks `!task.projectId` but does not verify that the project still exists (it could have been deleted in the same batch).

### 8. State Capture ID Non-Deterministic

**File:** `src/app/core/persistence/operation-log/processing/state-change-capture.service.ts:354-360`

Hashing `JSON.stringify(action)` is non-deterministic with respect to object property order, which can cause capture ID mismatches. An order-independent alternative is sketched below.
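A minimal sketch of the standard fix, serializing with recursively sorted keys so the same logical action always hashes identically; the function name is hypothetical:

```typescript
// Hypothetical sketch: deterministic JSON serialization via recursive key sort.
const stableStringify = (value: unknown): string => {
  if (Array.isArray(value)) {
    return `[${value.map(stableStringify).join(',')}]`;
  }
  if (value !== null && typeof value === 'object') {
    const obj = value as Record<string, unknown>;
    const entries = Object.keys(obj)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${stableStringify(obj[k])}`);
    return `{${entries.join(',')}}`;
  }
  return JSON.stringify(value); // primitives (note: undefined is not valid JSON)
};
```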
### 9. Stale Capture Cleanup Only on New Capture

**File:** `src/app/core/persistence/operation-log/processing/state-change-capture.service.ts:231`

`cleanupStaleCaptures()` only runs when a new capture is created. If operations are infrequent, stale captures accumulate.

### 10. TimeTracking Archive Not Cleaned Atomically

**File:** `src/app/root-store/meta/task-shared-meta-reducers/tag-shared.reducer.ts:188-210`

Current time tracking is cleaned in the meta-reducer, while archives are handled asynchronously in an effect. This can cause sync inconsistency.

### 11. LOCAL_ACTIONS Filter Edge Case

**File:** `src/app/util/local-actions.token.ts:37`

```typescript
return actions$.pipe(filter((action: Action) => !(action as any).meta?.isRemote));
```

If an action has no `meta` field, it passes the filter (`meta?.isRemote` is `undefined`, which is falsy). Remote operations dispatched without an explicit `isRemote: true` are therefore treated as local and could trigger effects twice; a stricter variant is sketched below.
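One way to make the intent explicit, assuming actions could instead be tagged positively as local; `meta.isLocal` is a hypothetical field, not the current API, and adopting it would require tagging every local action:

```typescript
// Hypothetical sketch: require an explicit local marker instead of relying on
// the absence of a remote one, so untagged remote actions cannot leak through.
import { filter } from 'rxjs/operators';
import { Action } from '@ngrx/store';

interface MaybeTaggedAction extends Action {
  meta?: { isRemote?: boolean; isLocal?: boolean };
}

const onlyLocalActions = filter(
  (action: MaybeTaggedAction) => action.meta?.isLocal === true,
);
```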
---

## Test Coverage Gaps

### Critical Missing Tests

| Gap | Scenario | Impact | Status |
| --- | --- | --- | --- |
| ~~**Race conditions**~~ | Concurrent append + compaction | Data loss | ✅ Tested (compaction.service.spec.ts - 6 race condition tests) |
| **Error recovery** | Quota exceeded mid-write | State corruption | ✅ Tested (8 tests in operation-log.effects.spec.ts) |
| **Multi-tab coordination** | Tab A append + Tab B compact | Lock deadlock | Covered by lock service tests |

### Missing Scenarios

1. ~~**Concurrent operations:** no test for append during compaction~~ → ✅ Added race condition tests
2. ~~**Quota exceeded:** emergency compaction path never tested~~ → ✅ Extensively tested (8 test cases)
3. ~~**Schema migration:** version mismatch during hydration~~ → ✅ Added 6 version mismatch tests (hydrator.service.spec.ts)
4. ~~**3-way conflicts:** only 2-way conflict resolution tested~~ → ✅ Added E2E test (supersync-edge-cases.spec.ts)
5. ~~**Conflict on deleted entity:** update arrives for a locally deleted entity~~ → ✅ Added E2E test (supersync-edge-cases.spec.ts)
6. ~~**Very large conflict sets:** 100+ ops on the same entity~~ → ✅ Added unit tests (conflict-resolution.service.spec.ts)
7. ~~**Download retry:** network failure mid-pagination (2 tests skipped with `pending()`)~~ → ✅ Fixed (4 retry tests now passing)

### Test Infrastructure Issues

- ~~`sync-scenarios.integration.spec.ts:18-32`: all SimulatedClients share the same IndexedDB (not true isolation)~~ → ✅ Documented as an intentional design choice, with a clear explanation of the logical isolation strategy (see `simulated-client.helper.ts`)
- ~~`operation-log-download.service.spec.ts:360-373`: 2 tests marked `pending()` due to timeouts~~ → ✅ Fixed with proper retry testing (30s timeouts)
- Benchmark tests disabled (`xdescribe`) - intentional, for manual performance testing only

---
## Architecture Observations

### Well-Designed Aspects

1. **Event sourcing model** with vector clocks for conflict detection
2. **Meta-reducer pattern** for atomic multi-entity operations
3. **Snapshot + tail replay** for fast hydration
4. **LOCAL_ACTIONS token** for filtering remote operations out of effects
5. **Sequence-based cache invalidation** for `_appliedOpIdsCache`

### Potential Redundancies

1. **Dual compaction triggers:** both counter-based and time-based cleanup
2. **Multiple validation checkpoints:** A, B, C, D with similar logic
3. **Lock service fallback:** both Web Locks API and localStorage locking are maintained

### Known Limitations (Documented)

- Cross-version sync (A.7.11) is not implemented
- Archive data bypasses the operation log intentionally
- Schema version is currently fixed at v1

---
## Implementation Plan

### Phase 1: Fix Critical Bug

#### 1.1 Fix incrementCompactionCounter() Bug

**File:** `src/app/core/persistence/operation-log/store/operation-log-store.service.ts`
**Lines:** 428-447

### Phase 2: Fix High Priority Issues

#### 2.1 Fix Snapshot Timing in Hydrator

**File:** `src/app/core/persistence/operation-log/store/operation-log-hydrator.service.ts`
**Lines:** 167-176

**Fix:** Move validation (Checkpoint C) BEFORE saving the snapshot.

#### 2.2 Add Subtask Orphan Re-detection

**File:** `src/app/root-store/meta/task-shared-meta-reducers/tag-shared.reducer.ts`
**Lines:** after 116

**Fix:** After removing tags from tasks, run orphan detection on the subtasks of surviving parents.

### Phase 3: Add Missing Tests

#### 3.1 Race Condition Tests

**File:** `src/app/core/persistence/operation-log/integration/race-conditions.integration.spec.ts` (new)

Tests to add:

- Concurrent append + compaction
- Concurrent upload + download
- Multi-tab lock acquisition
- Rapid local ops during sync

#### 3.2 Error Recovery Tests

**File:** `src/app/core/persistence/operation-log/integration/error-recovery.integration.spec.ts` (new)

Tests to add:

- Quota exceeded mid-write
- Network failure during download pagination
- Corrupted operation in the log
- Schema mismatch during hydration

#### 3.3 Fix Pending Download Tests

**File:** `src/app/core/persistence/operation-log/sync/operation-log-download.service.spec.ts`
**Lines:** 360-373

**Fix:** Either mock timers properly or increase the test timeout.

### Phase 4: Verify & Run Tests

1. Run `npm run checkFile` on all modified files
2. Run existing tests: `npm test`
3. Run the new integration tests
4. Verify no regressions

---

## Files Reference

| Category | Files |
| --- | --- |
| **Core effects** | `operation-log.effects.ts` |
| **Store** | `operation-log-store.service.ts`, `operation-log-hydrator.service.ts`, `operation-log-compaction.service.ts` |
| **State capture** | `state-change-capture.service.ts`, `state-capture.meta-reducer.ts` |
| **Sync** | `operation-log-sync.service.ts`, `conflict-resolution.service.ts`, `vector-clock.service.ts` |
| **Meta-reducers** | `planner-shared.reducer.ts`, `short-syntax-shared.reducer.ts`, `tag-shared.reducer.ts`, `task-shared-crud.reducer.ts` |
| **Filter** | `local-actions.token.ts` |

---

## Files to Modify

| File | Changes |
| --- | --- |
| `operation-log-store.service.ts` | Fix compaction counter |
| `operation-log-hydrator.service.ts` | Fix snapshot timing |
| `tag-shared.reducer.ts` | Add subtask orphan detection |
| `operation-log-download.service.spec.ts` | Fix pending tests |
| (new) `race-conditions.integration.spec.ts` | Add race condition tests |
| (new) `error-recovery.integration.spec.ts` | Add error recovery tests |
@ -1,153 +0,0 @@

# Integration Test Concept for Operation Log

## Implementation Status: COMPLETE

All integration tests have been implemented and pass:

- **16 tests** in `multi-client-sync.integration.spec.ts`
- **17 tests** in `state-consistency.integration.spec.ts`

## Recommendation Summary

**Use Karma-based integration tests** (not Playwright E2E) for the following reasons:

- Karma runs in ChromeHeadless with full IndexedDB support
- The existing `operation-log-store.service.spec.ts` already demonstrates real IndexedDB testing
- Service-level tests are faster and more deterministic than full E2E
- Better control over vector clocks, clientIds, and operation timing
## File Structure

```
src/app/core/persistence/operation-log/
  integration/
    helpers/
      test-client.helper.ts         # Simulated client with its own clientId/vectorClock
      operation-factory.helper.ts   # Factories for creating test operations
      state-assertions.helper.ts    # Assertion helpers for state verification
    multi-client-sync.integration.spec.ts
    state-consistency.integration.spec.ts
```
## Key Test Utilities

### TestClient Helper

Simulates a client with its own `clientId` and vector clock:

```typescript
class TestClient {
  private vectorClock: VectorClock = {};

  constructor(readonly clientId: string) {}

  createOperation(params: Partial<Operation>): Operation {
    // Initialize the entry on first use; a bare `++` on undefined yields NaN.
    this.vectorClock[this.clientId] = (this.vectorClock[this.clientId] ?? 0) + 1;
    return {
      ...params,
      clientId: this.clientId,
      vectorClock: { ...this.vectorClock },
    } as Operation;
  }

  mergeRemoteClock(remoteClock: VectorClock): void {
    this.vectorClock = mergeVectorClocks(this.vectorClock, remoteClock);
  }
}
```
### Operation Factories

Extend the existing `createTestOperation` pattern from `operation-log-store.service.spec.ts:12-24`:

```typescript
const createTaskOperation = (client: TestClient, taskId: string, opType: OpType, payload): Operation
const createProjectOperation = (client: TestClient, projectId: string, opType: OpType, payload): Operation
```

A possible implementation of the first factory is sketched below.
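A minimal sketch under the assumption that `Operation` carries the fields shown; the id scheme, `entityType` value, and payload shape are illustrative, not the actual types:

```typescript
// Hypothetical sketch: build a task operation via the TestClient so the
// vector clock is incremented consistently for every created op.
const createTaskOperation = (
  client: TestClient,
  taskId: string,
  opType: OpType,
  payload: Record<string, unknown>,
): Operation =>
  client.createOperation({
    id: `op-${taskId}-${Date.now()}`, // predictable enough for tests
    entityType: 'TASK',
    entityId: taskId,
    opType,
    payload,
  });
```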
## Test Scenarios

### Category 1: Multi-Client Sync (multi-client-sync.integration.spec.ts)

| Scenario | Description |
| --- | --- |
| **1.1 Non-conflicting concurrent edits** | Client A modifies task-1, Client B modifies task-2. Both merge cleanly. |
| **1.2 Conflict detection (same entity)** | Clients A and B both edit task-1 without seeing each other's changes. Conflict detected. |
| **1.3 Three-client vector clock merge** | A -> B -> C chain: verify the merged clock contains all three clients' knowledge |
| **1.4 Fresh client sync** | A new client with empty state receives all remote operations |
| **1.5 Stale operation rejection** | Remote sends an operation with an older vector clock than local - it should be skipped |
| **1.6 Duplicate operation handling** | The same operation ID received twice - stored only once |

### Category 2: State Consistency (state-consistency.integration.spec.ts)

| Scenario | Description |
| --- | --- |
| **2.1 Full replay produces correct state** | A Create -> Update -> Update sequence rebuilds the expected state |
| **2.2 Snapshot + tail equals full replay** | State from snapshot + tail ops equals state from a full replay |
| **2.3 Delete operation handling** | A Create -> Delete sequence results in the entity not existing |
| **2.4 Operation order independence** | The same non-conflicting ops in different arrival orders produce the same final state |
| **2.5 Compaction preserves state** | State after compaction matches state before |
## Implementation Approach

### Phase 1: Test Infrastructure

1. Create the `integration/` directory structure
2. Implement the `TestClient` helper class
3. Implement the operation factory helpers
4. Add state assertion utilities

### Phase 2: Multi-Client Sync Tests

1. Two-client non-conflicting operations
2. Two-client conflict detection
3. Three-client vector clock convergence
4. Fresh client sync scenario

### Phase 3: State Consistency Tests

1. Full operation replay
2. Snapshot + tail replay equivalence
3. Delete operation handling
4. Compaction state preservation
## Test Setup Pattern

```typescript
describe('Multi-Client Sync Integration', () => {
  let storeService: OperationLogStoreService;

  beforeEach(async () => {
    TestBed.configureTestingModule({
      providers: [OperationLogStoreService, VectorClockService],
    });
    storeService = TestBed.inject(OperationLogStoreService);
    await storeService.init();
    await storeService._clearAllDataForTesting(); // ensures test isolation
  });

  it('should merge non-conflicting ops from two clients', async () => {
    const clientA = new TestClient('client-a');
    const clientB = new TestClient('client-b');

    const opA = createTaskOperation(clientA, 'task-1', OpType.Create, { title: 'A' });
    const opB = createTaskOperation(clientB, 'task-2', OpType.Create, { title: 'B' });

    await storeService.append(opA, 'local');
    await storeService.append(opB, 'remote');

    const ops = await storeService.getOpsAfterSeq(0);
    expect(ops.length).toBe(2);
  });
});
```
## Critical Files

- `src/app/core/persistence/operation-log/store/operation-log-store.service.ts` - core storage with `_clearAllDataForTesting()`
- `src/app/core/persistence/operation-log/store/operation-log-store.service.spec.ts` - pattern to follow (lines 12-24 for `createTestOperation`)
- `src/app/core/persistence/operation-log/operation.types.ts` - Operation, OpType, EntityType types
- `src/app/pfapi/api/util/vector-clock.ts` - `compareVectorClocks`, `mergeVectorClocks` utilities
- `src/app/core/persistence/operation-log/sync/vector-clock.service.ts` - vector clock management
## Determinism Strategy

1. **Isolated database:** `_clearAllDataForTesting()` before each test
2. **Controlled UUIDs:** the test helper generates predictable IDs (`test-uuid-${counter}`)
3. **Controlled time:** use `jasmine.clock().mockDate()` for timestamp determinism (see the sketch below)
4. **Sequential execution:** integration tests run sequentially (no parallel spec execution)
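A minimal sketch of point 3, using Jasmine's standard mock clock API; the date value is illustrative:

```typescript
// Freeze time so operation timestamps are deterministic across runs.
beforeEach(() => {
  jasmine.clock().install();
  jasmine.clock().mockDate(new Date('2025-01-01T00:00:00Z'));
});

afterEach(() => {
  jasmine.clock().uninstall();
});
```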
@ -1,492 +0,0 @@

# SuperSync E2E Test Plan

## Status: ✅ IMPLEMENTED

**Last Updated:** December 11, 2025

This document describes the full end-to-end tests for operation-log sync between multiple browser clients using the real super-sync-server. These tests complement the existing Karma-based integration tests by testing the complete sync flow, including network, authentication, and UI.

### Implementation Status

| Component | Status |
| --- | --- |
| Test mode server endpoints | ✅ Implemented |
| Docker Compose setup | ✅ Implemented |
| SuperSyncPage page object | ✅ Implemented |
| Helper utilities | ✅ Implemented |
| Basic sync tests | ✅ Implemented |
| Conflict resolution tests | ✅ Implemented |
| Edge case tests | ✅ Implemented (Dec 11, 2025) |
### Test Files

| File | Description |
| --- | --- |
| `e2e/tests/sync/supersync.spec.ts` | Core sync scenarios (create, bidirectional, basic conflict) |
| `e2e/tests/sync/supersync-advanced.spec.ts` | Advanced scenarios (delete conflicts, time tracking) |
| `e2e/tests/sync/supersync-edge-cases.spec.ts` | Edge cases (3-way conflicts, delete vs. update, offline bursts) |
| `e2e/tests/sync/supersync-models.spec.ts` | Model-specific sync (projects, tags) |

### Running Tests

```bash
# Start the server and run all supersync tests
npm run e2e:supersync

# Run a specific test file
npm run e2e:playwright:file e2e/tests/sync/supersync-edge-cases.spec.ts
```

---
## Overview

Test sync between two independent browser clients using:

- The real super-sync-server running in Docker
- Two Playwright browser contexts (isolated IndexedDB, localStorage)
- The same user account configured on both clients
- Real HTTP requests to the sync server

## Multi-Client Simulation

Playwright's **Browser Contexts** provide complete isolation:

```
┌─────────────┐            ┌─────────────────┐            ┌─────────────┐
│  Context A  │            │    SuperSync    │            │  Context B  │
│ (Client A)  │            │     Server      │            │ (Client B)  │
│             │            │                 │            │             │
│ IndexedDB A │──upload──▶│    SQLite DB    │◀──upload──│ IndexedDB B │
│ clientId: X │            │    (shared)     │            │ clientId: Y │
│             │◀─download─│                 │─download─▶│             │
└─────────────┘            └─────────────────┘            └─────────────┘
```

Each context has its own:

- IndexedDB (the SUP_OPS store with operations)
- localStorage (a unique clientId generated on first load)
- Cookies/session

The server is the only shared component - exactly like real multi-device sync.

---
## Implementation Steps

### Step 1: Add Test Mode to Server

**File:** `packages/super-sync-server/src/config.ts`

```typescript
export interface ServerConfig {
  // ... existing fields ...
  testMode?: {
    enabled: boolean;
    autoVerifyUsers: boolean;
  };
}
```

Environment variable: `TEST_MODE=true`
### Step 2: Add Test-Only Endpoints

**File:** `packages/super-sync-server/src/server.ts` (or a new `test-routes.ts`)

Only available when `TEST_MODE=true`:

```typescript
// Register + auto-verify + return JWT in one call
POST /api/test/create-user
  Body: { email: string, password: string }
  Response: { token: string, userId: number }

// Wipe all test data
POST /api/test/cleanup
  Response: { cleaned: true }
```
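A minimal sketch of how such guarded routes might look, assuming an Express-style server with JSON body parsing installed; the user-store and JWT helpers are hypothetical placeholders, not the server's real API:

```typescript
// Hypothetical sketch: register test-only routes behind the TEST_MODE flag.
import express from 'express';

// Placeholders for the real user store and JWT helpers (hypothetical):
declare function createVerifiedUser(email: string, password: string): Promise<number>;
declare function issueJwt(userId: number): string;
declare function wipeAllTestData(): Promise<void>;

export const registerTestRoutes = (app: express.Express): void => {
  if (process.env.TEST_MODE !== 'true') {
    return; // never expose these endpoints outside of test mode
  }

  app.post('/api/test/create-user', async (req, res) => {
    const { email, password } = req.body;
    const userId = await createVerifiedUser(email, password); // register + auto-verify
    res.json({ token: issueJwt(userId), userId });
  });

  app.post('/api/test/cleanup', async (_req, res) => {
    await wipeAllTestData(); // e.g. truncate the in-memory test database
    res.json({ cleaned: true });
  });
};
```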
### Step 3: Create Dockerfile

**File:** `packages/super-sync-server/Dockerfile.test`

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 1900
CMD ["node", "dist/index.js"]
```
### Step 4: Docker Compose Service

**File:** `docker-compose.yaml`

```yaml
services:
  supersync:
    build:
      context: ./packages/super-sync-server
      dockerfile: Dockerfile.test
    ports:
      - '1900:1900'
    environment:
      - PORT=1900
      - TEST_MODE=true
      - JWT_SECRET=e2e-test-secret-minimum-32-chars-long
      - DATA_DIR=/data
    tmpfs:
      - /data # in-memory SQLite for test isolation
```
### Step 5: NPM Scripts

**File:** `package.json`

```json
{
  "scripts": {
    "e2e:supersync": "docker compose up -d supersync && npm run e2e -- --grep @supersync; docker compose down supersync"
  }
}
```
### Step 6: SuperSync Page Object

**File:** `e2e/pages/supersync.page.ts`

```typescript
import { type Page, type Locator } from '@playwright/test';
import { BasePage } from './base.page';

export class SuperSyncPage extends BasePage {
  readonly syncBtn: Locator;
  readonly providerSelect: Locator;
  readonly baseUrlInput: Locator;
  readonly accessTokenInput: Locator;
  readonly saveBtn: Locator;
  readonly syncSpinner: Locator;
  readonly syncCheckIcon: Locator;

  constructor(page: Page) {
    super(page);
    this.syncBtn = page.locator('button.sync-btn');
    this.providerSelect = page.locator('formly-field-mat-select mat-select');
    this.baseUrlInput = page.locator('.e2e-baseUrl input');
    this.accessTokenInput = page.locator('.e2e-accessToken textarea');
    this.saveBtn = page.locator('mat-dialog-actions button[mat-stroked-button]');
    this.syncSpinner = page.locator('.sync-btn mat-icon.spin');
    this.syncCheckIcon = page.locator('.sync-btn mat-icon.sync-state-ico');
  }

  async setupSuperSync(config: { baseUrl: string; accessToken: string }): Promise<void> {
    await this.syncBtn.click();
    await this.providerSelect.waitFor({ state: 'visible' });
    await this.providerSelect.click();

    const superSyncOption = this.page.locator('mat-option').filter({ hasText: 'SuperSync' });
    await superSyncOption.waitFor({ state: 'visible' });
    await superSyncOption.click();

    await this.baseUrlInput.waitFor({ state: 'visible' });
    await this.baseUrlInput.fill(config.baseUrl);
    await this.accessTokenInput.fill(config.accessToken);
    await this.saveBtn.click();
  }

  async triggerSync(): Promise<void> {
    await this.syncBtn.click();
    // Wait for either the spinner (sync running) or the check icon (already done).
    await Promise.race([
      this.syncSpinner.waitFor({ state: 'visible', timeout: 1000 }).catch(() => {}),
      this.syncCheckIcon.waitFor({ state: 'visible', timeout: 1000 }).catch(() => {}),
    ]);
  }

  async waitForSyncComplete(): Promise<void> {
    await this.syncSpinner.waitFor({ state: 'hidden', timeout: 30000 });
    await this.syncCheckIcon.waitFor({ state: 'visible' });
  }
}
```
### Step 7: Test Helpers

**File:** `e2e/utils/supersync-helpers.ts`

```typescript
const SUPERSYNC_BASE_URL = 'http://localhost:1900';

export async function createTestUser(
  testId: string,
): Promise<{ email: string; token: string }> {
  const response = await fetch(`${SUPERSYNC_BASE_URL}/api/test/create-user`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      email: `test-${testId}@e2e.local`,
      password: 'TestPassword123!',
    }),
  });
  if (!response.ok) throw new Error(`Failed to create test user: ${response.status}`);
  return response.json();
}

export async function cleanupTestData(): Promise<void> {
  await fetch(`${SUPERSYNC_BASE_URL}/api/test/cleanup`, { method: 'POST' });
}

export function getSuperSyncConfig(token: string) {
  return { baseUrl: SUPERSYNC_BASE_URL, accessToken: token };
}
```
### Step 8: Extend Test Fixtures

**File:** `e2e/fixtures/test.fixture.ts`

Add `testRunId` for unique test isolation:

```typescript
testRunId: async ({}, use, testInfo) => {
  const runId = `${Date.now()}-${testInfo.workerIndex}`;
  await use(runId);
},
```

---
## Test Scenarios

### File: `e2e/tests/sync/supersync.spec.ts`

```typescript
import { test, expect } from '../../fixtures/test.fixture';
import { SuperSyncPage } from '../../pages/supersync.page';
import { WorkViewPage } from '../../pages/work-view.page';
import { createTestUser, getSuperSyncConfig } from '../../utils/supersync-helpers';
import { waitForAppReady } from '../../utils/waits';

test.describe('@supersync SuperSync E2E', () => {
  test('basic sync: Client A creates task, Client B sees it', async ({
    browser,
    baseURL,
    testRunId,
  }) => {
    // 1. Create shared test user
    const user = await createTestUser(testRunId);
    const syncConfig = getSuperSyncConfig(user.token);

    // 2. Set up Client A
    const contextA = await browser.newContext({ storageState: undefined, baseURL });
    const pageA = await contextA.newPage();
    await pageA.goto('/');
    await waitForAppReady(pageA);
    const syncA = new SuperSyncPage(pageA);
    const workA = new WorkViewPage(pageA, `A-${testRunId}`);

    // 3. Configure sync on Client A
    await syncA.setupSuperSync(syncConfig);

    // 4. Create a task on Client A
    const taskName = `Task-${testRunId}-from-A`;
    await workA.addTask(taskName);

    // 5. Sync Client A (upload)
    await syncA.triggerSync();
    await syncA.waitForSyncComplete();

    // 6. Set up Client B
    const contextB = await browser.newContext({ storageState: undefined, baseURL });
    const pageB = await contextB.newPage();
    await pageB.goto('/');
    await waitForAppReady(pageB);
    const syncB = new SuperSyncPage(pageB);

    // 7. Configure sync on Client B (same account)
    await syncB.setupSuperSync(syncConfig);

    // 8. Sync Client B (download)
    await syncB.triggerSync();
    await syncB.waitForSyncComplete();

    // 9. Verify Client B has the task
    await expect(pageB.locator(`task:has-text("${taskName}")`)).toBeVisible();

    // Cleanup
    await contextA.close();
    await contextB.close();
  });

  test('bidirectional: both clients create tasks', async ({
    browser,
    baseURL,
    testRunId,
  }) => {
    const user = await createTestUser(testRunId);
    const syncConfig = getSuperSyncConfig(user.token);

    // Set up both clients
    const contextA = await browser.newContext({ storageState: undefined, baseURL });
    const pageA = await contextA.newPage();
    await pageA.goto('/');
    await waitForAppReady(pageA);
    const syncA = new SuperSyncPage(pageA);
    const workA = new WorkViewPage(pageA, `A-${testRunId}`);
    await syncA.setupSuperSync(syncConfig);

    const contextB = await browser.newContext({ storageState: undefined, baseURL });
    const pageB = await contextB.newPage();
    await pageB.goto('/');
    await waitForAppReady(pageB);
    const syncB = new SuperSyncPage(pageB);
    const workB = new WorkViewPage(pageB, `B-${testRunId}`);
    await syncB.setupSuperSync(syncConfig);

    // Both create tasks
    const taskFromA = `Task-${testRunId}-from-A`;
    const taskFromB = `Task-${testRunId}-from-B`;
    await workA.addTask(taskFromA);
    await workB.addTask(taskFromB);

    // Client A syncs (uploads)
    await syncA.triggerSync();
    await syncA.waitForSyncComplete();

    // Client B syncs (uploads + downloads)
    await syncB.triggerSync();
    await syncB.waitForSyncComplete();

    // Client A syncs again (downloads B's task)
    await syncA.triggerSync();
    await syncA.waitForSyncComplete();

    // Verify both see both tasks
    await expect(pageA.locator(`task:has-text("${taskFromA}")`)).toBeVisible();
    await expect(pageA.locator(`task:has-text("${taskFromB}")`)).toBeVisible();
    await expect(pageB.locator(`task:has-text("${taskFromA}")`)).toBeVisible();
    await expect(pageB.locator(`task:has-text("${taskFromB}")`)).toBeVisible();

    await contextA.close();
    await contextB.close();
  });

  test('conflict: both edit same task', async ({ browser, baseURL, testRunId }) => {
    const user = await createTestUser(testRunId);
    const syncConfig = getSuperSyncConfig(user.token);

    // Set up Client A, create a task, sync
    const contextA = await browser.newContext({ storageState: undefined, baseURL });
    const pageA = await contextA.newPage();
    await pageA.goto('/');
    await waitForAppReady(pageA);
    const syncA = new SuperSyncPage(pageA);
    const workA = new WorkViewPage(pageA, `A-${testRunId}`);
    await syncA.setupSuperSync(syncConfig);

    const taskName = `Shared-${testRunId}`;
    await workA.addTask(taskName);
    await syncA.triggerSync();
    await syncA.waitForSyncComplete();

    // Set up Client B, sync to get the task
    const contextB = await browser.newContext({ storageState: undefined, baseURL });
    const pageB = await contextB.newPage();
    await pageB.goto('/');
    await waitForAppReady(pageB);
    const syncB = new SuperSyncPage(pageB);
    await syncB.setupSuperSync(syncConfig);
    await syncB.triggerSync();
    await syncB.waitForSyncComplete();

    // Both have the task now
    await expect(pageB.locator(`task:has-text("${taskName}")`)).toBeVisible();

    // Client A marks it done (offline)
    const taskA = pageA.locator(`task:has-text("${taskName}")`);
    await taskA.locator('.mat-mdc-checkbox').click();

    // Client B changes something else (offline)
    // ... (could add notes, change the estimate, etc.)

    // Client A syncs
    await syncA.triggerSync();
    await syncA.waitForSyncComplete();

    // Client B syncs - may detect a conflict
    await syncB.triggerSync();
    await syncB.waitForSyncComplete();

    // Final sync to converge
    await syncA.triggerSync();
    await syncA.waitForSyncComplete();

    // Verify consistent state (task counts should match)
    const countA = await pageA.locator('task').count();
    const countB = await pageB.locator('task').count();
    expect(countA).toBe(countB);

    await contextA.close();
    await contextB.close();
  });
});
```
---
## Data Isolation Strategy

- **Unique test user per test:** `test-{timestamp}-{workerIndex}@e2e.local`
- **Task prefixes:** `A-{testRunId}-TaskName` for clear identification
- **In-memory SQLite:** the server uses a `tmpfs` mount, so no data persists between runs
- **Isolated browser contexts:** each client has its own IndexedDB

---

## Running the Tests

```bash
# Start the server and run all supersync tests
npm run e2e:supersync

# Manual execution (server must be running)
docker compose up -d supersync
npm run e2e:playwright:file e2e/tests/sync/supersync.spec.ts
docker compose down supersync
```

---

## Files to Create/Modify

| File | Action |
| --- | --- |
| `packages/super-sync-server/src/config.ts` | Add testMode config |
| `packages/super-sync-server/src/auth.ts` | Add test mode bypass |
| `packages/super-sync-server/src/server.ts` | Add test-only routes |
| `packages/super-sync-server/Dockerfile.test` | Create |
| `docker-compose.yaml` | Add supersync service |
| `package.json` | Add e2e:supersync script |
| `e2e/pages/supersync.page.ts` | Create |
| `e2e/utils/supersync-helpers.ts` | Create |
| `e2e/fixtures/test.fixture.ts` | Add testRunId |
| `e2e/tests/sync/supersync.spec.ts` | Create |

---

## Comparison with Integration Tests

| Aspect | Karma Integration Tests | Playwright E2E Tests |
| --- | --- | --- |
| Location | `src/app/.../integration/` | `e2e/tests/sync/` |
| Server | Mocked/simulated | Real super-sync-server |
| Browser | ChromeHeadless (single) | Multiple contexts |
| Speed | Fast (~seconds) | Slower (~minutes) |
| Purpose | Logic verification | Full flow validation |
| When to run | Every PR | Before release / manual |

Both test types are valuable: integration tests catch logic bugs quickly, while E2E tests catch integration issues in the full stack.