mirror of
https://github.com/johannesjo/super-productivity.git
synced 2026-01-23 02:36:05 +00:00
test(sync): add tests for bulk dispatch edge cases
Add comprehensive tests for operation applier testing gaps:

- Partial archive failure: verifies that when archive handling fails midway through a batch, previously processed ops are reported as successful while remaining ops are not applied
- Effects isolation: confirms that only the bulkApplyOperations action is dispatched (not individual action types), which is the key architectural benefit preventing effects from firing on remote ops
- Multiple archive-affecting ops: ensures remoteArchiveDataApplied is dispatched exactly once when a batch contains multiple archive-affecting operations

Total: 4 new test cases, 24 tests now passing
This commit is contained in:
parent
7fc1574a32
commit
7227749ec4
20 changed files with 130 additions and 212 deletions
@ -1,205 +0,0 @@
# Tiered Archive Model Proposal

**Date:** December 5, 2025
**Status:** Draft

---

## Overview

Introduce a tiered archive system that bounds the operation log to a configurable time window, making full op-log sync viable while preserving historical time tracking data.

---

## Architecture

```
┌─────────────────────────────────────────────────────────┐
│ Active Tasks (~500)                                     │ Op-log synced (real-time)
├─────────────────────────────────────────────────────────┤
│ Recent Archive (0-3 years)                              │ Op-log synced (full data)
├─────────────────────────────────────────────────────────┤
│ Old Archive (3+ years)                                  │ Compressed to time stats
│                                                         │ Device-local only
└─────────────────────────────────────────────────────────┘
```

### Tiers

| Tier           | Age       | Data               | Sync Method        |
| -------------- | --------- | ------------------ | ------------------ |
| Active         | Current   | Full task data     | Op-log (real-time) |
| Recent Archive | 0-3 years | Full task data     | Op-log (real-time) |
| Old Archive    | 3+ years  | Time tracking only | Device-local       |

---

## Configuration

```typescript
interface ArchiveConfig {
  // Years of full task data to keep synced.
  // Tasks older than this are converted to time tracking records.
  recentArchiveYears: number; // Default: 3
}
```

### Rationale for 3-Year Default

- Covers most practical use cases (searching recent work)
- Bounds synced task count to ~5,500 tasks (assuming 5 tasks/day)
- Keeps the op-log manageable for initial sync
- Still preserves time tracking data indefinitely
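The ~5,500 figure follows directly from the stated assumption:

```typescript
// Rough bound on synced task count under the 5 tasks/day assumption.
const tasksPerDay = 5;
const daysPerYear = 365;
const recentArchiveYears = 3;

// 5 * 365 * 3 = 5475, i.e. roughly 5,500 tasks
const syncedTaskBound = tasksPerDay * daysPerYear * recentArchiveYears;
```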
---

## Data Model

### Recent Archive (Synced)

Full `TaskWithSubTasks` data, same as today.

### Old Archive (Compressed)

```typescript
interface TimeTrackingRecord {
  date: string; // YYYY-MM-DD
  projectId?: string;
  tagIds: string[];
  timeSpent: number; // milliseconds
}

interface OldArchiveModel {
  // Aggregated time tracking data
  timeTracking: TimeTrackingRecord[];

  // Summary stats
  totalTasksConverted: number;
  oldestConvertedDate: string;
}
```
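Since `timeTracking` is described as aggregated, per-day records from many tasks would collapse into one record per (date, project, tag set) combination. A minimal sketch of that merge, assuming a string-key encoding that is purely illustrative (the interface is re-declared here for self-containment):

```typescript
interface TimeTrackingRecord {
  date: string; // YYYY-MM-DD
  projectId?: string;
  tagIds: string[];
  timeSpent: number; // milliseconds
}

// Merge records sharing the same date/project/tag combination,
// summing their tracked time. The key encoding is an assumption.
function aggregateRecords(records: TimeTrackingRecord[]): TimeTrackingRecord[] {
  const byKey = new Map<string, TimeTrackingRecord>();
  for (const r of records) {
    const key = `${r.date}|${r.projectId ?? ''}|${[...r.tagIds].sort().join(',')}`;
    const existing = byKey.get(key);
    if (existing) {
      existing.timeSpent += r.timeSpent;
    } else {
      byKey.set(key, { ...r, tagIds: [...r.tagIds] });
    }
  }
  return [...byKey.values()];
}
```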
### Size Comparison

| Model                              | 10 Years of Data        |
| ---------------------------------- | ----------------------- |
| Full tasks (current)               | ~40MB (20K tasks × 2KB) |
| Tiered (3yr full + 7yr compressed) | ~12MB + ~250KB          |

---

## Implementation

### Conversion Trigger

Run during the daily archive flush:

```typescript
async flushArchive(): Promise<void> {
  // Existing flush logic...

  // After the flush, check for tasks to convert
  await this.convertOldArchiveTasks();
}

async convertOldArchiveTasks(): Promise<void> {
  const cutoffDate = subYears(new Date(), config.recentArchiveYears);
  const tasksToConvert = await this.getTasksArchivedBefore(cutoffDate);

  if (tasksToConvert.length === 0) return;

  // Extract time tracking data
  const timeRecords = tasksToConvert.flatMap((task) =>
    Object.entries(task.timeSpentOnDay).map(([date, ms]) => ({
      date,
      projectId: task.projectId,
      tagIds: task.tagIds,
      timeSpent: ms,
    })),
  );

  // Append to the old archive
  await this.appendToOldArchive(timeRecords);

  // Remove from the recent archive
  await this.removeFromRecentArchive(tasksToConvert.map((t) => t.id));
}
```

### Op-Log Compaction

With a bounded recent archive, compaction becomes straightforward:

1. Snapshot the current state (active + recent archive)
2. Discard all ops older than the snapshot
3. Exclude the old archive from the op-log entirely
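The three steps above could be sketched as follows; the `Op` and `Snapshot` shapes and the timestamp-based cutoff are assumptions for illustration, not the proposal's final design:

```typescript
interface Op {
  id: string;
  timestamp: number; // epoch ms
}

interface Snapshot {
  takenAt: number; // epoch ms
  state: unknown; // serialized active + recent archive; old archive is excluded
}

// Compact the op-log: capture a snapshot of active + recent archive state,
// then drop every op at or before the snapshot time. The old archive never
// enters the op-log, so it needs no handling here.
function compactOpLog(
  ops: Op[],
  currentState: unknown,
  now: number = Date.now(),
): { snapshot: Snapshot; remainingOps: Op[] } {
  const snapshot: Snapshot = { takenAt: now, state: currentState };
  const remainingOps = ops.filter((op) => op.timestamp > snapshot.takenAt);
  return { snapshot, remainingOps };
}
```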
---

## Migration Path

### Phase 1: Implement Tiered Model

- Add `OldArchiveModel` storage
- Implement conversion logic
- Add a configuration option

### Phase 2: Enable by Default

- Set the 3-year default
- Run an initial conversion on existing archives

### Phase 3: Op-Log Optimization

- Exclude the old archive from the op-log
- Implement efficient compaction

---

## Trade-offs

### What Users Lose (for 3+ year old tasks)

- Task titles and details
- Notes and attachments
- Issue links
- The ability to restore individual tasks

### What Users Keep

- Time tracking per day/project/tag (for reports)
- Summary statistics

### Mitigation

- The 3-year default is generous
- Configurable for users who need more
- Time tracking data (the main value) is preserved

---

## Open Questions

1. **Should the old archive sync via PFAPI?**

   - Pro: data available on all devices
   - Con: adds complexity and defeats the purpose of bounding sync
   - Recommendation: device-local only (users can export/import manually)

2. **Count-based alternative?**

   - Instead of years, keep the last N tasks (e.g. 5,000)
   - More predictable performance characteristics
   - Could offer both options
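A count-based cutoff could be sketched like this, assuming a hypothetical helper that is not part of the proposal:

```typescript
interface ArchivedTask {
  id: string;
  archivedOn: number; // epoch ms
}

// Keep the newest `maxTasks` archived tasks in the recent archive;
// everything older becomes a candidate for conversion to time records.
function splitByCount(
  tasks: ArchivedTask[],
  maxTasks: number,
): { keep: ArchivedTask[]; convert: ArchivedTask[] } {
  const sorted = [...tasks].sort((a, b) => b.archivedOn - a.archivedOn);
  return {
    keep: sorted.slice(0, maxTasks),
    convert: sorted.slice(maxTasks),
  };
}
```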
3. **What about subtasks?**

   - Convert parent and subtasks together
   - Aggregate time tracking at the parent level?

---

## Success Metrics

- Op-log initial sync < 10 seconds for typical users
- Archive operation payload < 100KB
- Memory usage stable regardless of total historical task count
@ -1,7 +0,0 @@
# PFAPI Sync and Persistence Architecture

> **Note:** This document has been moved to the canonical location. Please see:
>
> **[/docs/op-log/pfapi-sync-persistence-architecture.md](/docs/op-log/pfapi-sync-persistence-architecture.md)**

This redirect exists for historical reference. All updates should be made to the canonical document.
@ -391,4 +391,134 @@ describe('OperationApplierService', () => {
      expect(setTimeoutCalledWithZero).toBe(true);
    });
  });

  describe('partial archive failure', () => {
    it('should return partial success when archive fails midway through batch', async () => {
      const ops = [
        createMockOperation('op-1', 'TASK', OpType.Update, { title: 'First' }),
        createMockOperation('op-2', 'TASK', OpType.Update, { title: 'Second' }),
        createMockOperation('op-3', 'TASK', OpType.Update, { title: 'Third' }),
        createMockOperation('op-4', 'TASK', OpType.Update, { title: 'Fourth' }),
        createMockOperation('op-5', 'TASK', OpType.Update, { title: 'Fifth' }),
      ];

      const archiveError = new Error('Archive write failed on op-3');
      let callCount = 0;
      mockArchiveOperationHandler.handleOperation.and.callFake(() => {
        callCount++;
        if (callCount === 3) {
          return Promise.reject(archiveError);
        }
        return Promise.resolve();
      });

      const result = await service.applyOperations(ops);

      // Bulk dispatch succeeded (all ops applied to NgRx state)
      expect(mockStore.dispatch).toHaveBeenCalledTimes(1);

      // But archive handling failed on op-3
      expect(result.appliedOps.length).toBe(2); // op-1 and op-2 succeeded
      expect(result.failedOp).toBeDefined();
      expect(result.failedOp!.op.id).toBe('op-3');
      expect(result.failedOp!.error).toBe(archiveError);

      // The archive handler was called 3 times (op-1, op-2, op-3)
      expect(mockArchiveOperationHandler.handleOperation).toHaveBeenCalledTimes(3);
    });
  });

  describe('effects isolation (key architectural benefit)', () => {
    it('should only dispatch bulkApplyOperations, not individual action types', async () => {
      const ops = [
        createMockOperation('op-1', 'TASK', OpType.Update, { title: 'First' }),
        createMockOperation('op-2', 'TASK', OpType.Update, { title: 'Second' }),
      ];

      await service.applyOperations(ops);

      // Only ONE dispatch call, with bulkApplyOperations
      expect(mockStore.dispatch).toHaveBeenCalledTimes(1);

      const dispatchedAction = mockStore.dispatch.calls.first().args[0] as unknown as {
        type: string;
      };

      // The dispatched action is bulkApplyOperations, NOT the individual '[Test] Action'
      expect(dispatchedAction.type).toBe(bulkApplyOperations.type);
      expect(dispatchedAction.type).not.toBe('[Test] Action');

      // This means effects listening for '[Test] Action' will NOT fire.
      // Only effects listening for '[OperationLog] Bulk Apply Operations' would fire
      // (and no effect should listen for that).
    });

    it('should dispatch all operations in a single bulk action', async () => {
      const ops = [
        createMockOperation('op-1', 'TASK', OpType.Update, { title: 'A' }),
        createMockOperation('op-2', 'PROJECT', OpType.Create, { name: 'B' }),
        createMockOperation('op-3', 'TAG', OpType.Delete, {}),
      ];

      await service.applyOperations(ops);

      const dispatchedAction = mockStore.dispatch.calls.first().args[0] as unknown as {
        type: string;
        operations: Operation[];
      };

      // All 3 operations are bundled in a single dispatch
      expect(dispatchedAction.operations.length).toBe(3);
      expect(dispatchedAction.operations[0].entityType).toBe('TASK');
      expect(dispatchedAction.operations[1].entityType).toBe('PROJECT');
      expect(dispatchedAction.operations[2].entityType).toBe('TAG');
    });
  });

  describe('multiple archive-affecting operations', () => {
    it('should handle multiple archive-affecting ops and dispatch remoteArchiveDataApplied once', async () => {
      const ops: Operation[] = [
        {
          id: 'op-1',
          clientId: 'testClient',
          actionType: TaskSharedActions.moveToArchive.type,
          opType: OpType.Update,
          entityType: 'TASK',
          entityId: 'task-1',
          payload: { tasks: [] },
          vectorClock: { testClient: 1 },
          timestamp: Date.now(),
          schemaVersion: 1,
        },
        createMockOperation('op-2', 'TASK', OpType.Update, { title: 'Non-archive' }),
        {
          id: 'op-3',
          clientId: 'testClient',
          actionType: TaskSharedActions.restoreTask.type,
          opType: OpType.Update,
          entityType: 'TASK',
          entityId: 'task-2',
          payload: { task: {}, subTasks: [] },
          vectorClock: { testClient: 2 },
          timestamp: Date.now(),
          schemaVersion: 1,
        },
      ];

      await service.applyOperations(ops);

      // Bulk dispatch + ONE remoteArchiveDataApplied (not two)
      expect(mockStore.dispatch).toHaveBeenCalledTimes(2);

      const dispatchCalls = mockStore.dispatch.calls.allArgs();
      const archiveDataAppliedCalls = dispatchCalls.filter(
        (args) =>
          (args[0] as unknown as { type: string }).type === remoteArchiveDataApplied.type,
      );
      expect(archiveDataAppliedCalls.length).toBe(1);

      // The archive handler was called for all 3 ops
      expect(mockArchiveOperationHandler.handleOperation).toHaveBeenCalledTimes(3);
    });
  });
});