mirror of
https://github.com/johannesjo/super-productivity.git
synced 2026-01-23 02:36:05 +00:00
refactor(sync): unify archive operation handling
Consolidate all archive-affecting operations into ArchiveOperationHandler to eliminate duplicate code and create a single source of truth.

Changes:
- Add ArchiveOperationHandlerEffects for local operations
- Rename handleRemoteOperation to handleOperation
- Add isArchiveAffectingAction helper for action filtering
- Make isIgnoreDBLock conditional on action.meta.isRemote
- Remove archive logic from individual effect files
- Update feature-stores.module.ts to use new effect
- Add archive write points documentation

This ensures archive operations (moveToArchive, restoreTask, flushYoungToOld, deleteProject, deleteTag, deleteTaskRepeatCfg, deleteIssueProvider) all go through one handler for both local and remote operations.
This commit is contained in:
parent
9005d68d82
commit
5ffc109943
15 changed files with 448 additions and 1040 deletions
92	docs/ai/archive-write-points.md	Normal file

@@ -0,0 +1,92 @@
# Archive Write Points Documentation

This document tracks all locations where archive storage (IndexedDB) is written to, ensuring archive integrity and preventing unexpected writes.

## Single Source of Truth: ArchiveOperationHandler

All archive write operations MUST go through `ArchiveOperationHandler`. This handler is the centralized point for all archive storage operations.

### File Location

`src/app/core/persistence/operation-log/processing/archive-operation-handler.service.ts`

### Entry Points

1. **Local Operations**: `ArchiveOperationHandlerEffects` → `ArchiveOperationHandler.handleOperation()`
2. **Remote Operations**: `OperationApplierService` → `ArchiveOperationHandler.handleOperation()`

## Archive-Affecting Actions

The following actions trigger archive writes (defined in `ARCHIVE_AFFECTING_ACTION_TYPES`):

| Action                                      | Handler Method                | Archive Operation                              |
| ------------------------------------------- | ----------------------------- | ---------------------------------------------- |
| `TaskSharedActions.moveToArchive`           | `_handleMoveToArchive`        | Write tasks to archiveYoung (remote only)      |
| `TaskSharedActions.restoreTask`             | `_handleRestoreTask`          | Delete task from archive                       |
| `flushYoungToOld`                           | `_handleFlushYoungToOld`      | Move old tasks from archiveYoung to archiveOld |
| `TaskSharedActions.deleteProject`           | `_handleDeleteProject`        | Remove all tasks for deleted project           |
| `deleteTag` / `deleteTags`                  | `_handleDeleteTags`           | Remove tag references, delete orphaned tasks   |
| `TaskSharedActions.deleteTaskRepeatCfg`     | `_handleDeleteTaskRepeatCfg`  | Remove repeatCfgId from tasks                  |
| `TaskSharedActions.deleteIssueProvider`     | `_handleDeleteIssueProvider`  | Unlink issue data from tasks                   |
| `IssueProviderActions.deleteIssueProviders` | `_handleDeleteIssueProviders` | Unlink issue data from multiple tasks          |
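The filtering contract behind this table can be sketched in isolation. This is a condensed illustration only: the action-type strings below are invented stand-ins, since the real list is built from the NgRx action creators in `archive-operation-handler.service.ts`.

```typescript
// Condensed sketch of the action filtering described above.
// The action-type strings are invented stand-ins; the real constants
// come from the NgRx action creators listed in the table.
interface Action {
  type: string;
}

const ARCHIVE_AFFECTING_ACTION_TYPES: string[] = [
  '[TaskShared] moveToArchive',
  '[TaskShared] restoreTask',
  '[Archive] flushYoungToOld',
  '[TaskShared] deleteProject',
];

const isArchiveAffectingAction = (action: Action): boolean =>
  ARCHIVE_AFFECTING_ACTION_TYPES.includes(action.type);

// Only matching actions are forwarded to handleOperation(); everything
// else is ignored by the archive layer.
console.log(isArchiveAffectingAction({ type: '[TaskShared] moveToArchive' })); // true
console.log(isArchiveAffectingAction({ type: '[Task] updateTask' })); // false
```

A simple string-list membership check keeps the filter cheap enough to run on every dispatched action.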

## Special Case: moveToArchive

For `moveToArchive`, there's a special handling distinction:

- **Local operations**: Archive is written BEFORE the action is dispatched by `ArchiveService.moveToArchive()`. The handler skips local operations to avoid double-writes.
- **Remote operations**: Archive is written AFTER the action is dispatched by `ArchiveOperationHandler._handleMoveToArchive()`.

## Operation Flow

```
LOCAL OPERATION FLOW:
User Action → Action Dispatched → Reducer → ArchiveOperationHandlerEffects
                                                     ↓
                                ArchiveOperationHandler.handleOperation()
                                                     ↓
                                     Archive Written (IndexedDB)

REMOTE OPERATION FLOW:
OperationApplierService → store.dispatch(action) → Reducer
                                                     ↓
                                ArchiveOperationHandler.handleOperation()
                                                     ↓
                                     Archive Written (IndexedDB)
```
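The local/remote split for `moveToArchive` can be sketched as follows. This is a minimal illustration with simplified hypothetical types; the real guard lives in `_handleMoveToArchive` and writes through the PFAPI archive services rather than an in-memory array.

```typescript
// Simplified sketch of the moveToArchive guard described above.
interface PersistentAction {
  type: string;
  meta: { isRemote?: boolean };
  tasks: string[];
}

// Stand-in for the IndexedDB archive storage.
const archiveWrites: string[][] = [];

const handleMoveToArchive = async (action: PersistentAction): Promise<void> => {
  if (!action.meta.isRemote) {
    // Local: ArchiveService.moveToArchive() already wrote the archive
    // before the action was dispatched, so skip to avoid a double write.
    return;
  }
  archiveWrites.push(action.tasks);
};
```

With this guard, a local dispatch followed by the same operation arriving via sync writes the archive exactly once.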

## Files That Should NOT Write to Archive

The following effect files previously contained archive write logic and have been deprecated:

| Deprecated File                                    | Replacement                    |
| -------------------------------------------------- | ------------------------------ |
| `archive.effects.ts`                               | ArchiveOperationHandlerEffects |
| `project.effects.ts` (archive portion)             | ArchiveOperationHandlerEffects |
| `tag.effects.ts` (archive portion)                 | ArchiveOperationHandlerEffects |
| `task-repeat-cfg.effects.ts` (archive portion)     | ArchiveOperationHandlerEffects |
| `unlink-all-tasks-on-provider-deletion.effects.ts` | ArchiveOperationHandlerEffects |

## Verification Checklist

When adding new archive-affecting operations:

1. [ ] Add action type to `ARCHIVE_AFFECTING_ACTION_TYPES` in `archive-operation-handler.service.ts`
2. [ ] Implement handler method in `ArchiveOperationHandler`
3. [ ] Add test cases in `archive-operation-handler.service.spec.ts`
4. [ ] Update this documentation
5. [ ] Ensure no direct archive writes in effects (use ArchiveOperationHandler)
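Steps 1 and 2 of the checklist amount to extending one list and one switch statement. The sketch below illustrates the shape of that change with an invented `[Note] archiveNote` action; neither the action nor the handler method exists in the codebase, and the real switch dispatches on NgRx action-creator `.type` constants.

```typescript
// Hypothetical sketch of checklist steps 1-2: registering a new
// archive-affecting action. '[Note] archiveNote' is invented for
// illustration and does not exist in the codebase.
const ARCHIVE_AFFECTING_ACTION_TYPES: string[] = [
  '[TaskShared] moveToArchive',
  '[Note] archiveNote', // step 1: add the new action type to the list
];

interface PersistentAction {
  type: string;
}

class ArchiveOperationHandlerSketch {
  lastHandled = '';

  async handleOperation(action: PersistentAction): Promise<void> {
    switch (action.type) {
      case '[TaskShared] moveToArchive':
        this.lastHandled = 'moveToArchive'; // existing handling elided
        break;
      case '[Note] archiveNote':
        await this._handleArchiveNote(action); // step 2: new handler method
        break;
      default:
        break; // non-archive actions are a no-op
    }
  }

  private async _handleArchiveNote(action: PersistentAction): Promise<void> {
    this.lastHandled = action.type; // the actual archive write is elided
  }
}
```

Steps 3-5 (spec coverage, documentation, and the grep audit below) then confirm the new write point stays inside the handler.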

## How to Find All Archive Writes

To verify no unexpected archive writes exist:

```bash
# Search for direct PfapiService archive writes
grep -r "archiveYoung.save\|archiveOld.save" src/app --include="*.ts" | grep -v "archive-operation-handler\|archive.service"

# Search for TaskArchiveService archive-modifying methods
grep -r "removeAllArchiveTasksForProject\|removeTagsFromAllTasks\|removeRepeatCfgFromArchiveTasks\|unlinkIssueProviderFromArchiveTasks\|deleteTasks" src/app --include="*.ts" | grep -v "archive-operation-handler\|\.spec\.ts"
```

Expected results: All matches should be in `ArchiveOperationHandler`, `ArchiveService`, or `TaskArchiveService`.

@@ -839,11 +839,12 @@ flowchart TD

```mermaid
flowchart TD
subgraph OperationApplierService["OperationApplierService"]
OA1[Receive operation] --> OA2[Check dependencies]
OA2 --> OA3[convertOpToAction]
subgraph OperationApplierService["OperationApplierService (Fail-Fast)"]
OA1[Receive operation] --> OA2{Check hard<br/>dependencies}
OA2 -->|Missing| OA_ERR["throw SyncStateCorruptedError<br/>(triggers full re-sync)"]
OA2 -->|OK| OA3[convertOpToAction]
OA3 --> OA4["store.dispatch(action)<br/>with meta.isRemote=true"]
OA4 --> OA5["archiveOperationHandler<br/>.handleRemoteOperation(action)"]
OA4 --> OA5["archiveOperationHandler<br/>.handleOperation(action)"]
end

subgraph Handler["ArchiveOperationHandler"]

@@ -851,13 +852,18 @@ flowchart TD
H1 -->|moveToArchive| H2[Write tasks to<br/>archiveYoung]
H1 -->|restoreTask| H3[Delete task from<br/>archive]
H1 -->|flushYoungToOld| H4[Move old tasks<br/>Young → Old]
H1 -->|other| H5[No-op]
H1 -->|deleteProject| H5[Remove tasks<br/>for project]
H1 -->|deleteTag| H6[Remove tag<br/>from tasks]
H1 -->|deleteTaskRepeatCfg| H7[Remove repeatCfgId<br/>from tasks]
H1 -->|deleteIssueProvider| H8[Unlink issue data<br/>from tasks]
H1 -->|other| H9[No-op]
end

OA5 --> H1

style OperationApplierService fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
style Handler fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
style OA_ERR fill:#ffcdd2,stroke:#c62828,stroke-width:2px
```

### 8.4 Archive Operations Summary
@@ -0,0 +1,76 @@
import { inject, Injectable } from '@angular/core';
import { createEffect } from '@ngrx/effects';
import { LOCAL_ACTIONS } from '../../../../util/local-actions.token';
import { concatMap, filter } from 'rxjs/operators';
import {
  ArchiveOperationHandler,
  isArchiveAffectingAction,
} from './archive-operation-handler.service';
import { devError } from '../../../../util/dev-error';

/**
 * Unified effect for all archive-affecting operations.
 *
 * This effect routes all local archive-affecting actions through ArchiveOperationHandler,
 * making it the SINGLE SOURCE OF TRUTH for archive storage operations.
 *
 * ## How It Works
 *
 * - Uses LOCAL_ACTIONS to only process local user actions (not remote sync operations)
 * - Filters actions using isArchiveAffectingAction() helper
 * - Delegates all archive logic to ArchiveOperationHandler.handleOperation()
 *
 * ## Why This Architecture
 *
 * Previously, archive operations were scattered across multiple effect files:
 * - archive.effects.ts - flushYoungToOld, restoreTask
 * - project.effects.ts - deleteProject archive cleanup
 * - tag.effects.ts - deleteTag/deleteTags archive cleanup
 * - task-repeat-cfg.effects.ts - deleteTaskRepeatCfg archive cleanup
 * - unlink-all-tasks-on-provider-deletion.effects.ts - deleteIssueProvider cleanup
 *
 * This unified effect consolidates all archive handling to eliminate duplicate code
 * between local effects and remote operation handling (OperationApplierService).
 *
 * ## Operation Flow
 *
 * LOCAL OPERATIONS:
 * Action dispatched -> Reducers -> This effect -> ArchiveOperationHandler.handleOperation()
 *
 * REMOTE OPERATIONS:
 * OperationApplierService -> dispatch() -> Reducers -> ArchiveOperationHandler.handleOperation()
 *                                                                  ^
 *                                                (explicit call, bypasses this effect)
 *
 * @see ArchiveOperationHandler for list of supported archive-affecting actions
 * @see isArchiveAffectingAction for the action filtering logic
 */
@Injectable()
export class ArchiveOperationHandlerEffects {
  private _actions$ = inject(LOCAL_ACTIONS);
  private _archiveOperationHandler = inject(ArchiveOperationHandler);

  /**
   * Processes all archive-affecting local actions through the central handler.
   *
   * Uses concatMap to ensure operations are processed sequentially, which is
   * important for archive consistency (e.g., deleteProject shouldn't overlap
   * with other archive writes).
   */
  handleArchiveOperations$ = createEffect(
    () =>
      this._actions$.pipe(
        filter(isArchiveAffectingAction),
        concatMap(async (action) => {
          try {
            await this._archiveOperationHandler.handleOperation(action);
          } catch (e) {
            // Archive operations failing is serious but not critical to app function.
            // Log the error but don't crash the effect stream.
            devError(e);
          }
        }),
      ),
    { dispatch: false },
  );
}

@@ -109,7 +109,7 @@ describe('ArchiveOperationHandler', () => {
service = TestBed.inject(ArchiveOperationHandler);
});

describe('handleRemoteOperation', () => {
describe('handleOperation', () => {
describe('moveToArchive action', () => {
it('should write tasks to archive for remote sync', async () => {
const tasks = [

@@ -124,7 +124,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(mockArchiveService.writeTasksToArchiveForRemoteSync).toHaveBeenCalledWith(
tasks,

@@ -143,7 +143,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(mockArchiveService.writeTasksToArchiveForRemoteSync).toHaveBeenCalledWith(
[],

@@ -159,7 +159,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(mockTaskArchiveService.deleteTasks).not.toHaveBeenCalled();
});

@@ -178,7 +178,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(mockTaskArchiveService.deleteTasks).toHaveBeenCalledWith(
['task-1', 'subtask-1', 'subtask-2'],

@@ -196,7 +196,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(mockTaskArchiveService.deleteTasks).toHaveBeenCalledWith(['task-1'], {
isIgnoreDBLock: true,

@@ -213,7 +213,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockArchiveService.writeTasksToArchiveForRemoteSync,

@@ -236,7 +236,7 @@ describe('ArchiveOperationHandler', () => {
// The action should be recognized and try to load archive
// (may fail due to sorting, but we verify it attempted to process)
try {
await service.handleRemoteOperation(action);
await service.handleOperation(action);
} catch {
// Expected - sort function returns undefined in tests
}

@@ -257,7 +257,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockTaskArchiveService.removeAllArchiveTasksForProject,

@@ -276,7 +276,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockArchiveService.writeTasksToArchiveForRemoteSync,

@@ -293,7 +293,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(mockTaskArchiveService.removeTagsFromAllTasks).toHaveBeenCalledWith([
'tag-1',

@@ -310,7 +310,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(mockTaskArchiveService.removeTagsFromAllTasks).toHaveBeenCalledWith([
'tag-1',

@@ -336,7 +336,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockArchiveService.writeTasksToArchiveForRemoteSync,

@@ -357,7 +357,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockTaskArchiveService.removeRepeatCfgFromArchiveTasks,

@@ -372,7 +372,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockArchiveService.writeTasksToArchiveForRemoteSync,

@@ -394,7 +394,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockTaskArchiveService.unlinkIssueProviderFromArchiveTasks,

@@ -409,7 +409,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockArchiveService.writeTasksToArchiveForRemoteSync,

@@ -433,7 +433,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockTaskArchiveService.unlinkIssueProviderFromArchiveTasks,

@@ -456,7 +456,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockArchiveService.writeTasksToArchiveForRemoteSync,

@@ -479,7 +479,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleOperation(action);

expect(
mockArchiveService.writeTasksToArchiveForRemoteSync,

@@ -504,7 +504,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as PersistentAction;

await expectAsync(service.handleRemoteOperation(action)).toBeResolved();
await expectAsync(service.handleOperation(action)).toBeResolved();
});
});

@@ -517,8 +517,8 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleRemoteOperation(action);
await service.handleOperation(action);
await service.handleOperation(action);

expect(mockArchiveService.writeTasksToArchiveForRemoteSync).toHaveBeenCalledTimes(
2,

@@ -534,8 +534,8 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await service.handleRemoteOperation(action);
await service.handleRemoteOperation(action);
await service.handleOperation(action);
await service.handleOperation(action);

expect(mockTaskArchiveService.deleteTasks).toHaveBeenCalledTimes(2);
});

@@ -555,7 +555,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await expectAsync(service.handleRemoteOperation(action)).toBeRejectedWith(error);
await expectAsync(service.handleOperation(action)).toBeRejectedWith(error);
});

it('should propagate errors from deleteTasks', async () => {

@@ -570,7 +570,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await expectAsync(service.handleRemoteOperation(action)).toBeRejectedWith(error);
await expectAsync(service.handleOperation(action)).toBeRejectedWith(error);
});

it('should propagate errors from archive load in flushYoungToOld', async () => {

@@ -586,7 +586,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await expectAsync(service.handleRemoteOperation(action)).toBeRejectedWithError(
await expectAsync(service.handleOperation(action)).toBeRejectedWithError(
'Load failed',
);
});

@@ -605,7 +605,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await expectAsync(service.handleRemoteOperation(action)).toBeRejectedWith(error);
await expectAsync(service.handleOperation(action)).toBeRejectedWith(error);
});

it('should propagate errors from removeTagsFromAllTasks', async () => {

@@ -620,7 +620,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await expectAsync(service.handleRemoteOperation(action)).toBeRejectedWith(error);
await expectAsync(service.handleOperation(action)).toBeRejectedWith(error);
});

it('should propagate errors from removeRepeatCfgFromArchiveTasks', async () => {

@@ -636,7 +636,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await expectAsync(service.handleRemoteOperation(action)).toBeRejectedWith(error);
await expectAsync(service.handleOperation(action)).toBeRejectedWith(error);
});

it('should propagate errors from unlinkIssueProviderFromArchiveTasks for single provider', async () => {

@@ -652,7 +652,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await expectAsync(service.handleRemoteOperation(action)).toBeRejectedWith(error);
await expectAsync(service.handleOperation(action)).toBeRejectedWith(error);
});

it('should propagate errors from unlinkIssueProviderFromArchiveTasks for multiple providers', async () => {

@@ -667,7 +667,7 @@ describe('ArchiveOperationHandler', () => {
meta: { isPersistent: true, isRemote: true },
} as unknown as PersistentAction;

await expectAsync(service.handleRemoteOperation(action)).toBeRejectedWith(error);
await expectAsync(service.handleOperation(action)).toBeRejectedWith(error);
});
});
});

@@ -1,4 +1,5 @@
import { inject, Injectable, Injector } from '@angular/core';
import { Action } from '@ngrx/store';
import { PersistentAction } from '../persistent-action.interface';
import { TaskSharedActions } from '../../../../root-store/meta/task-shared.actions';
import { flushYoungToOld } from '../../../../features/time-tracking/store/archive.actions';

@@ -14,21 +15,49 @@ import { TimeTrackingService } from '../../../../features/time-tracking/time-tra
import { IssueProviderActions } from '../../../../features/issue/store/issue-provider.actions';

/**
 * Handles archive-specific side effects for REMOTE operations.
 * Action types that affect archive storage and require special handling.
 */
const ARCHIVE_AFFECTING_ACTION_TYPES: string[] = [
  TaskSharedActions.moveToArchive.type,
  TaskSharedActions.restoreTask.type,
  flushYoungToOld.type,
  TaskSharedActions.deleteProject.type,
  deleteTag.type,
  deleteTags.type,
  TaskSharedActions.deleteTaskRepeatCfg.type,
  TaskSharedActions.deleteIssueProvider.type,
  IssueProviderActions.deleteIssueProviders.type,
];

/**
 * Helper function to check if an action affects archive storage.
 * Used by ArchiveOperationHandlerEffects to filter actions.
 */
export const isArchiveAffectingAction = (action: Action): action is PersistentAction => {
  return ARCHIVE_AFFECTING_ACTION_TYPES.includes(action.type);
};

/**
 * Centralized handler for all archive-specific side effects.
 *
 * This service is called by OperationApplierService AFTER dispatching remote operations.
 * It ensures that archive storage (IndexedDB) is updated to match the NgRx state changes.
 * This is the SINGLE SOURCE OF TRUTH for archive storage operations. All code paths
 * that need to update archive storage (IndexedDB) should go through this handler.
 *
 * ## Why This Exists
 * ## Entry Points
 *
 * Archive data is stored in IndexedDB (via PFAPI), not in NgRx state. When remote operations
 * are applied, we need to update the archive storage to maintain consistency. This cannot
 * be done in effects because:
 * 1. **Local operations**: Called by ArchiveOperationHandlerEffects after action dispatch
 * 2. **Remote operations**: Called by OperationApplierService after applying remote operations
 *
 * 1. Effects should only run for LOCAL_ACTIONS (local user actions)
 * 2. Running effects for remote operations would cause side effects to happen twice
 *    (once on original client, once on receiving client)
 * 3. The OperationApplierService has full control over when side effects happen
 * ## Why This Architecture
 *
 * Archive data is stored in IndexedDB (via PFAPI), not in NgRx state. Previously,
 * archive operations were duplicated across multiple effect files. This handler
 * consolidates all archive logic to:
 *
 * 1. Eliminate duplicate code between local effects and remote operation handling
 * 2. Make it easy to add new archive-affecting operations (update one switch statement)
 * 3. Ensure consistent behavior between local and remote operations
 * 4. Provide a clear audit point for all archive writes
 *
 * ## Operations Handled
 *

@@ -42,9 +71,9 @@ import { IssueProviderActions } from '../../../../features/issue/store/issue-pro
 *
 * ## Important Notes
 *
 * - Uses `isIgnoreDBLock: true` because this runs during sync processing when PFAPI
 *   has the database locked
 * - For remote operations, uses `isIgnoreDBLock: true` because sync processing has the DB locked
 * - All operations are idempotent - safe to run multiple times
 * - Use `isArchiveAffectingAction()` helper to check if an action needs archive handling
 */
@Injectable({
  providedIn: 'root',

@@ -61,12 +90,16 @@ export class ArchiveOperationHandler {
  private _getTimeTrackingService = lazyInject(this._injector, TimeTrackingService);

  /**
   * Process a remote operation and handle any archive-related side effects.
   * Process an action and handle any archive-related side effects.
   *
   * @param action The action that was dispatched (already has meta.isRemote = true)
   * This method handles both local and remote operations. For remote operations
   * (action.meta.isRemote === true), it uses isIgnoreDBLock: true because sync
   * processing has the database locked.
   *
   * @param action The action that was dispatched
   * @returns Promise that resolves when archive operations are complete
   */
  async handleRemoteOperation(action: PersistentAction): Promise<void> {
  async handleOperation(action: PersistentAction): Promise<void> {
    switch (action.type) {
      case TaskSharedActions.moveToArchive.type:
        await this._handleMoveToArchive(action);

@@ -105,32 +138,41 @@ export class ArchiveOperationHandler {

  /**
   * Writes archived tasks to archiveYoung storage.
   * Called when receiving a remote moveToArchive operation.
   * REMOTE ONLY: For local operations, archive is written BEFORE action dispatch
   * by ArchiveService.moveToArchive(), so we skip here to avoid double-writes.
   */
  private async _handleMoveToArchive(action: PersistentAction): Promise<void> {
    if (!action.meta.isRemote) {
      return; // Local: already written by ArchiveService before dispatch
    }
    const tasks = (action as ReturnType<typeof TaskSharedActions.moveToArchive>).tasks;
    await this._getArchiveService().writeTasksToArchiveForRemoteSync(tasks);
  }

  /**
   * Removes a restored task from archive storage.
   * Called when receiving a remote restoreTask operation.
   * Called for both local and remote restoreTask operations.
   */
  private async _handleRestoreTask(action: PersistentAction): Promise<void> {
    const task = (action as ReturnType<typeof TaskSharedActions.restoreTask>).task;
    const taskIds = [task.id, ...task.subTaskIds];
    await this._getTaskArchiveService().deleteTasks(taskIds, { isIgnoreDBLock: true });
    const isRemote = !!action.meta?.isRemote;
    await this._getTaskArchiveService().deleteTasks(
      taskIds,
      isRemote ? { isIgnoreDBLock: true } : {},
    );
  }

  /**
   * Executes the flush from archiveYoung to archiveOld.
   * Called when receiving a remote flushYoungToOld operation.
   * Called for both local and remote flushYoungToOld operations.
   *
   * This operation is deterministic - given the same timestamp and archive state,
   * it will produce the same result on all clients.
   */
  private async _handleFlushYoungToOld(action: PersistentAction): Promise<void> {
    const timestamp = (action as ReturnType<typeof flushYoungToOld>).timestamp;
    const isRemote = !!action.meta?.isRemote;
    const pfapi = this._getPfapiService();

    const archiveYoung = await pfapi.m.archiveYoung.load();

@@ -150,7 +192,7 @@ export class ArchiveOperationHandler {
      },
      {
        isUpdateRevAndLastUpdate: true,
        isIgnoreDBLock: true,
        isIgnoreDBLock: isRemote ? true : undefined,
      },
    );

@@ -161,12 +203,12 @@ export class ArchiveOperationHandler {
      },
      {
        isUpdateRevAndLastUpdate: true,
        isIgnoreDBLock: true,
        isIgnoreDBLock: isRemote ? true : undefined,
      },
    );

    Log.log(
      '______________________\nFLUSHED ALL FROM ARCHIVE YOUNG TO OLD (via remote op handler)\n_______________________',
      `______________________\nFLUSHED ALL FROM ARCHIVE YOUNG TO OLD (via ${isRemote ? 'remote' : 'local'} op handler)\n_______________________`,
    );
  }
|
|||
|
|
@@ -6,15 +6,13 @@ import {
  OperationDependency,
} from '../sync/dependency-resolver.service';
import { Operation, OpType, EntityType } from '../operation.types';
import { MAX_DEPENDENCY_RETRY_ATTEMPTS } from '../operation-log.const';
import { SnackService } from '../../../snack/snack.service';
import { ArchiveOperationHandler } from './archive-operation-handler.service';
import { SyncStateCorruptedError } from '../sync-state-corrupted.error';

describe('OperationApplierService', () => {
  let service: OperationApplierService;
  let mockStore: jasmine.SpyObj<Store>;
  let mockDependencyResolver: jasmine.SpyObj<DependencyResolverService>;
  let mockSnackService: jasmine.SpyObj<SnackService>;
  let mockArchiveOperationHandler: jasmine.SpyObj<ArchiveOperationHandler>;

  const createMockOperation = (
@@ -42,9 +40,8 @@ describe('OperationApplierService', () => {
      'extractDependencies',
      'checkDependencies',
    ]);
    mockSnackService = jasmine.createSpyObj('SnackService', ['open']);
    mockArchiveOperationHandler = jasmine.createSpyObj('ArchiveOperationHandler', [
-     'handleRemoteOperation',
+     'handleOperation',
    ]);

    // Default: no dependencies

@@ -52,14 +49,13 @@ describe('OperationApplierService', () => {
    mockDependencyResolver.checkDependencies.and.returnValue(
      Promise.resolve({ missing: [] }),
    );
-   mockArchiveOperationHandler.handleRemoteOperation.and.returnValue(Promise.resolve());
+   mockArchiveOperationHandler.handleOperation.and.returnValue(Promise.resolve());

    TestBed.configureTestingModule({
      providers: [
        OperationApplierService,
        { provide: Store, useValue: mockStore },
        { provide: DependencyResolverService, useValue: mockDependencyResolver },
        { provide: SnackService, useValue: mockSnackService },
        { provide: ArchiveOperationHandler, useValue: mockArchiveOperationHandler },
      ],
    });
@@ -86,19 +82,40 @@ describe('OperationApplierService', () => {
    );
  });

- it('should dispatch actions for multiple operations', async () => {
+ it('should dispatch actions for multiple operations in order', async () => {
    const ops = [
-     createMockOperation('op-1'),
-     createMockOperation('op-2'),
-     createMockOperation('op-3'),
+     createMockOperation('op-1', 'TASK', OpType.Update, { title: 'First' }),
+     createMockOperation('op-2', 'TASK', OpType.Update, { title: 'Second' }),
+     createMockOperation('op-3', 'TASK', OpType.Update, { title: 'Third' }),
    ];

    await service.applyOperations(ops);

    expect(mockStore.dispatch).toHaveBeenCalledTimes(3);

+   const calls = mockStore.dispatch.calls.all();
+   expect((calls[0].args[0] as any).title).toBe('First');
+   expect((calls[1].args[0] as any).title).toBe('Second');
+   expect((calls[2].args[0] as any).title).toBe('Third');
  });

- it('should not dispatch action for operation with missing hard dependency', async () => {
+ it('should handle empty operations array', async () => {
+   await service.applyOperations([]);
+
+   expect(mockStore.dispatch).not.toHaveBeenCalled();
+ });
+
+ it('should call archiveOperationHandler after dispatching', async () => {
+   const op = createMockOperation('op-1');
+
+   await service.applyOperations([op]);
+
+   expect(mockArchiveOperationHandler.handleOperation).toHaveBeenCalledTimes(1);
+ });
+});
+
+describe('dependency handling', () => {
+ it('should throw SyncStateCorruptedError for operation with missing hard dependency', async () => {
    const op = createMockOperation('op-1', 'TASK', OpType.Create, {
      parentId: 'parent-123',
    });
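The ordering these tests pin down — dispatch to the store first, archive side effects second — can be sketched as a tiny loop. `Dispatcher`, `ArchiveHandler`, and `applyActionsSketch` are illustrative stand-ins, not the project's real Store or ArchiveOperationHandler types:

```typescript
// Minimal sketch of the apply loop: the NgRx state must be updated via
// dispatch before the archive handler runs, because archive side effects
// read the updated state.
interface Dispatcher {
  dispatch(action: unknown): void;
}
interface ArchiveHandler {
  handleOperation(action: unknown): Promise<void>;
}

async function applyActionsSketch(
  actions: unknown[],
  store: Dispatcher,
  archive: ArchiveHandler,
): Promise<void> {
  for (const action of actions) {
    store.dispatch(action); // state update first
    await archive.handleOperation(action); // archive side effects second
  }
}
```

Awaiting the handler inside the loop keeps operations strictly sequential, matching the in-order dispatch assertion above.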
|
@@ -115,10 +132,40 @@ describe('OperationApplierService', () => {
|
|||
Promise.resolve({ missing: [parentDep] }),
|
||||
);
|
||||
|
||||
await service.applyOperations([op]);
|
||||
await expectAsync(service.applyOperations([op])).toBeRejectedWithError(
|
||||
SyncStateCorruptedError,
|
||||
);
|
||||
|
||||
expect(mockStore.dispatch).not.toHaveBeenCalled();
|
||||
expect(service.getPendingCount()).toBe(1);
|
||||
});
|
||||
|
||||
it('should include operation details in SyncStateCorruptedError', async () => {
|
||||
const op = createMockOperation('op-1', 'TASK', OpType.Create, {
|
||||
parentId: 'parent-123',
|
||||
});
|
||||
|
||||
const parentDep: OperationDependency = {
|
||||
entityType: 'TASK',
|
||||
entityId: 'parent-123',
|
||||
mustExist: true,
|
||||
relation: 'parent',
|
||||
};
|
||||
|
||||
mockDependencyResolver.extractDependencies.and.returnValue([parentDep]);
|
||||
mockDependencyResolver.checkDependencies.and.returnValue(
|
||||
Promise.resolve({ missing: [parentDep] }),
|
||||
);
|
||||
|
||||
try {
|
||||
await service.applyOperations([op]);
|
||||
fail('Expected SyncStateCorruptedError to be thrown');
|
||||
} catch (e) {
|
||||
expect(e).toBeInstanceOf(SyncStateCorruptedError);
|
||||
const error = e as SyncStateCorruptedError;
|
||||
expect(error.context.opId).toBe('op-1');
|
||||
expect(error.context.actionType).toBe('[Test] Action');
|
||||
expect(error.context.missingDependencies).toContain('TASK:parent-123');
|
||||
}
|
||||
});
|
||||
|
||||
it('should dispatch action for operation with missing soft dependency', async () => {
|
||||
|
|
@@ -143,329 +190,42 @@ describe('OperationApplierService', () => {
|
|||
// Soft dependency doesn't block application
|
||||
expect(mockStore.dispatch).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
});
|
||||
|
||||
describe('dependency sorting', () => {
|
||||
it('should apply parent before child when both are in same batch', async () => {
|
||||
const parentOp = createMockOperation(
|
||||
'parent-op',
|
||||
'TASK',
|
||||
OpType.Create,
|
||||
{},
|
||||
'parent-task',
|
||||
);
|
||||
const childOp = createMockOperation(
|
||||
'child-op',
|
||||
'TASK',
|
||||
OpType.Create,
|
||||
{ parentId: 'parent-task' },
|
||||
'child-task',
|
||||
);
|
||||
it('should throw on first operation with missing hard deps in a batch', async () => {
|
||||
const op1 = createMockOperation('op-1', 'TASK', OpType.Update, { title: 'OK' });
|
||||
const op2 = createMockOperation('op-2', 'TASK', OpType.Create, {
|
||||
parentId: 'missing',
|
||||
});
|
||||
const op3 = createMockOperation('op-3', 'TASK', OpType.Update, { title: 'Never' });
|
||||
|
||||
// Parent dependency for child
|
||||
const parentDep: OperationDependency = {
|
||||
entityType: 'TASK',
|
||||
entityId: 'parent-task',
|
||||
entityId: 'missing',
|
||||
mustExist: true,
|
||||
relation: 'parent',
|
||||
};
|
||||
|
||||
mockDependencyResolver.extractDependencies.and.callFake((op: Operation) => {
|
||||
if (op.id === 'child-op') {
|
||||
return [parentDep];
|
||||
}
|
||||
if (op.id === 'op-2') return [parentDep];
|
||||
return [];
|
||||
});
|
||||
|
||||
// First call - parent doesn't exist yet, second call - parent exists
|
||||
let parentExists = false;
|
||||
mockDependencyResolver.checkDependencies.and.callFake(
|
||||
async (deps: OperationDependency[]) => {
|
||||
const missing = deps.filter(
|
||||
(d) => d.entityId === 'parent-task' && !parentExists,
|
||||
);
|
||||
return { missing };
|
||||
},
|
||||
);
|
||||
|
||||
// When parent op is dispatched, parent "exists" for subsequent checks
|
||||
mockStore.dispatch.and.callFake((() => {
|
||||
parentExists = true;
|
||||
}) as any);
|
||||
|
||||
// Apply in reverse order to test sorting
|
||||
await service.applyOperations([childOp, parentOp]);
|
||||
|
||||
// Both should be dispatched (parent first due to sorting)
|
||||
expect(mockStore.dispatch).toHaveBeenCalledTimes(2);
|
||||
|
||||
// Verify order: parent dispatched before child
|
||||
const calls = mockStore.dispatch.calls.all();
|
||||
expect((calls[0].args[0] as any).meta.entityId).toBe('parent-task');
|
||||
expect((calls[1].args[0] as any).meta.entityId).toBe('child-task');
|
||||
});
|
||||
|
||||
it('should handle single operation without sorting', async () => {
|
||||
const op = createMockOperation('op-1');
|
||||
|
||||
await service.applyOperations([op]);
|
||||
|
||||
expect(mockStore.dispatch).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
it('should handle empty operations array', async () => {
|
||||
await service.applyOperations([]);
|
||||
|
||||
expect(mockStore.dispatch).not.toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('retry mechanism', () => {
|
||||
it('should retry pending operations on next applyOperations call', async () => {
|
||||
const op1 = createMockOperation('op-1', 'TASK', OpType.Create, {
|
||||
parentId: 'parent-123',
|
||||
});
|
||||
const op2 = createMockOperation('op-2', 'TASK', OpType.Create, {}, 'parent-123');
|
||||
|
||||
const parentDep: OperationDependency = {
|
||||
entityType: 'TASK',
|
||||
entityId: 'parent-123',
|
||||
mustExist: true,
|
||||
relation: 'parent',
|
||||
};
|
||||
|
||||
mockDependencyResolver.extractDependencies.and.callFake((op: Operation) => {
|
||||
if (op.id === 'op-1') return [parentDep];
|
||||
return [];
|
||||
});
|
||||
|
||||
// First call: parent doesn't exist
|
||||
let parentExists = false;
|
||||
mockDependencyResolver.checkDependencies.and.callFake(
|
||||
async (deps: OperationDependency[]) => {
|
||||
if (deps.some((d) => d.entityId === 'parent-123') && !parentExists) {
|
||||
return { missing: deps.filter((d) => d.entityId === 'parent-123') };
|
||||
if (deps.some((d) => d.entityId === 'missing')) {
|
||||
return { missing: deps };
|
||||
}
|
||||
return { missing: [] };
|
||||
},
|
||||
);
|
||||
|
||||
// Apply op1 first - will be queued
|
||||
await service.applyOperations([op1]);
|
||||
expect(mockStore.dispatch).not.toHaveBeenCalled();
|
||||
expect(service.getPendingCount()).toBe(1);
|
||||
|
||||
// Now apply op2 (creates parent)
|
||||
mockStore.dispatch.and.callFake((() => {
|
||||
parentExists = true;
|
||||
}) as any);
|
||||
|
||||
await service.applyOperations([op2]);
|
||||
|
||||
// Both should now be dispatched (op2 + op1 from retry)
|
||||
expect(mockStore.dispatch).toHaveBeenCalledTimes(2);
|
||||
expect(service.getPendingCount()).toBe(0);
|
||||
});
|
||||
|
||||
it('should move operation to permanently failed after max retries', async () => {
|
||||
const op = createMockOperation('op-1', 'TASK', OpType.Create, {
|
||||
parentId: 'missing-parent',
|
||||
});
|
||||
|
||||
const parentDep: OperationDependency = {
|
||||
entityType: 'TASK',
|
||||
entityId: 'missing-parent',
|
||||
mustExist: true,
|
||||
relation: 'parent',
|
||||
};
|
||||
|
||||
mockDependencyResolver.extractDependencies.and.returnValue([parentDep]);
|
||||
mockDependencyResolver.checkDependencies.and.returnValue(
|
||||
Promise.resolve({ missing: [parentDep] }),
|
||||
await expectAsync(service.applyOperations([op1, op2, op3])).toBeRejectedWithError(
|
||||
SyncStateCorruptedError,
|
||||
);
|
||||
|
||||
// Apply multiple times to exhaust retries
|
||||
for (let i = 0; i <= MAX_DEPENDENCY_RETRY_ATTEMPTS; i++) {
|
||||
await service.applyOperations([]);
|
||||
if (i === 0) {
|
||||
// First real application
|
||||
await service.applyOperations([op]);
|
||||
}
|
||||
}
|
||||
|
||||
expect(service.getFailedCount()).toBe(1);
|
||||
expect(service.getPendingCount()).toBe(0);
|
||||
// First operation should have been dispatched before we hit the error
|
||||
expect(mockStore.dispatch).toHaveBeenCalledTimes(1);
|
||||
expect((mockStore.dispatch.calls.first().args[0] as any).title).toBe('OK');
|
||||
});
|
||||
});
|
||||
|
||||
describe('getPendingCount', () => {
|
||||
it('should return 0 when no pending operations', () => {
|
||||
expect(service.getPendingCount()).toBe(0);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getFailedCount', () => {
|
||||
it('should return 0 when no failed operations', () => {
|
||||
expect(service.getFailedCount()).toBe(0);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getFailedOperations', () => {
|
||||
it('should return empty array when no failed operations', () => {
|
||||
expect(service.getFailedOperations()).toEqual([]);
|
||||
});
|
||||
|
||||
it('should return copy of failed operations', () => {
|
||||
const failed1 = service.getFailedOperations();
|
||||
const failed2 = service.getFailedOperations();
|
||||
|
||||
expect(failed1).not.toBe(failed2);
|
||||
expect(failed1).toEqual(failed2);
|
||||
});
|
||||
});
|
||||
|
||||
describe('clearFailedOperations', () => {
|
||||
it('should clear all failed operations', async () => {
|
||||
const op = createMockOperation('op-1', 'TASK', OpType.Create, {
|
||||
parentId: 'missing',
|
||||
});
|
||||
|
||||
const parentDep: OperationDependency = {
|
||||
entityType: 'TASK',
|
||||
entityId: 'missing',
|
||||
mustExist: true,
|
||||
relation: 'parent',
|
||||
};
|
||||
|
||||
mockDependencyResolver.extractDependencies.and.returnValue([parentDep]);
|
||||
mockDependencyResolver.checkDependencies.and.returnValue(
|
||||
Promise.resolve({ missing: [parentDep] }),
|
||||
);
|
||||
|
||||
// Exhaust retries to create a failed operation
|
||||
await service.applyOperations([op]);
|
||||
for (let i = 0; i < MAX_DEPENDENCY_RETRY_ATTEMPTS; i++) {
|
||||
await service.applyOperations([]);
|
||||
}
|
||||
|
||||
expect(service.getFailedCount()).toBe(1);
|
||||
|
||||
service.clearFailedOperations();
|
||||
|
||||
expect(service.getFailedCount()).toBe(0);
|
||||
});
|
||||
});
|
||||
|
||||
describe('pruneOldFailedOperations', () => {
|
||||
it('should remove operations older than maxAgeMs', async () => {
|
||||
const oldOp = createMockOperation('old-op', 'TASK', OpType.Create, {
|
||||
parentId: 'missing',
|
||||
});
|
||||
// Override timestamp to be old (8 days ago)
|
||||
const eightDaysMs = 8 * 24 * 60 * 60 * 1000;
|
||||
(oldOp as any).timestamp = Date.now() - eightDaysMs;
|
||||
|
||||
const recentOp = createMockOperation('recent-op', 'TASK', OpType.Create, {
|
||||
parentId: 'missing',
|
||||
});
|
||||
|
||||
const parentDep: OperationDependency = {
|
||||
entityType: 'TASK',
|
||||
entityId: 'missing',
|
||||
mustExist: true,
|
||||
relation: 'parent',
|
||||
};
|
||||
|
||||
mockDependencyResolver.extractDependencies.and.returnValue([parentDep]);
|
||||
mockDependencyResolver.checkDependencies.and.returnValue(
|
||||
Promise.resolve({ missing: [parentDep] }),
|
||||
);
|
||||
|
||||
// Create two failed operations
|
||||
await service.applyOperations([oldOp]);
|
||||
for (let i = 0; i < MAX_DEPENDENCY_RETRY_ATTEMPTS; i++) {
|
||||
await service.applyOperations([]);
|
||||
}
|
||||
|
||||
await service.applyOperations([recentOp]);
|
||||
for (let i = 0; i < MAX_DEPENDENCY_RETRY_ATTEMPTS; i++) {
|
||||
await service.applyOperations([]);
|
||||
}
|
||||
|
||||
expect(service.getFailedCount()).toBe(2);
|
||||
|
||||
// Prune with 7-day default
|
||||
const pruned = service.pruneOldFailedOperations();
|
||||
|
||||
expect(pruned).toBe(1);
|
||||
expect(service.getFailedCount()).toBe(1);
|
||||
expect(service.getFailedOperations()[0].op.id).toBe('recent-op');
|
||||
});
|
||||
|
||||
it('should return 0 when no operations are old enough to prune', async () => {
|
||||
const op = createMockOperation('op-1', 'TASK', OpType.Create, {
|
||||
parentId: 'missing',
|
||||
});
|
||||
|
||||
const parentDep: OperationDependency = {
|
||||
entityType: 'TASK',
|
||||
entityId: 'missing',
|
||||
mustExist: true,
|
||||
relation: 'parent',
|
||||
};
|
||||
|
||||
mockDependencyResolver.extractDependencies.and.returnValue([parentDep]);
|
||||
mockDependencyResolver.checkDependencies.and.returnValue(
|
||||
Promise.resolve({ missing: [parentDep] }),
|
||||
);
|
||||
|
||||
// Create a failed operation
|
||||
await service.applyOperations([op]);
|
||||
for (let i = 0; i < MAX_DEPENDENCY_RETRY_ATTEMPTS; i++) {
|
||||
await service.applyOperations([]);
|
||||
}
|
||||
|
||||
const pruned = service.pruneOldFailedOperations();
|
||||
|
||||
expect(pruned).toBe(0);
|
||||
expect(service.getFailedCount()).toBe(1);
|
||||
});
|
||||
});
|
||||
|
||||
describe('snack notification', () => {
|
||||
it('should show error snack on first failed operation', async () => {
|
||||
const op = createMockOperation('op-1', 'TASK', OpType.Create, {
|
||||
parentId: 'missing',
|
||||
});
|
||||
|
||||
const parentDep: OperationDependency = {
|
||||
entityType: 'TASK',
|
||||
entityId: 'missing',
|
||||
mustExist: true,
|
||||
relation: 'parent',
|
||||
};
|
||||
|
||||
mockDependencyResolver.extractDependencies.and.returnValue([parentDep]);
|
||||
mockDependencyResolver.checkDependencies.and.returnValue(
|
||||
Promise.resolve({ missing: [parentDep] }),
|
||||
);
|
||||
|
||||
// Exhaust retries to create a failed operation
|
||||
await service.applyOperations([op]);
|
||||
for (let i = 0; i < MAX_DEPENDENCY_RETRY_ATTEMPTS; i++) {
|
||||
await service.applyOperations([]);
|
||||
}
|
||||
|
||||
expect(mockSnackService.open).toHaveBeenCalledTimes(1);
|
||||
expect(mockSnackService.open).toHaveBeenCalledWith(
|
||||
jasmine.objectContaining({
|
||||
type: 'ERROR',
|
||||
}),
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
// NOTE: Dependency sorting tests have been moved to sort-operations-by-dependency.util.spec.ts
|
||||
// The sorting functionality is currently disabled in the service (see applyOperations comment)
|
||||
// but the utility is kept for potential future use.
|
||||
});
|
||||
|
|
|
|||
|
|
@@ -4,32 +4,32 @@ import { Operation } from '../operation.types';
import { convertOpToAction } from '../operation-converter.util';
import { DependencyResolverService } from '../sync/dependency-resolver.service';
import { OpLog } from '../../../log';
import {
  MAX_DEPENDENCY_RETRY_ATTEMPTS,
  MAX_FAILED_OPS_SIZE,
  MAX_PENDING_QUEUE_SIZE,
} from '../operation-log.const';
import { SnackService } from '../../../snack/snack.service';
import { T } from '../../../../t.const';
import { ArchiveOperationHandler } from './archive-operation-handler.service';

-/**
- * Interface for tracking pending operations that failed due to missing dependencies.
- */
-interface PendingOperation {
-  op: Operation;
-  retryCount: number;
-  missingDeps: string[];
-}
+import { SyncStateCorruptedError } from '../sync-state-corrupted.error';

/**
 * Service responsible for applying operations to the local NgRx store.
 * It handles dependency resolution and dispatches actions, ensuring that
 * operations are applied in a causally correct order.
 *
- * Operations with missing hard dependencies are queued for retry.
- * If dependencies still aren't resolved after MAX_RETRY_ATTEMPTS,
- * the operation is logged as permanently failed for debugging.
+ * ## Design Philosophy: Fail Fast, Re-sync Clean
+ *
+ * This service uses a deliberately simple approach to dependency handling:
+ * - If all dependencies are satisfied → apply the operation
+ * - If hard dependencies are missing → throw SyncStateCorruptedError
+ *
+ * We do NOT attempt complex retry logic because:
+ * 1. The sync server guarantees operations arrive in sequence order
+ * 2. Delete operations are atomic via meta-reducers (no separate cleanup operations)
+ * 3. If dependencies are missing, something is fundamentally wrong with sync state
+ * 4. A full re-sync is safer than partial recovery with potential inconsistencies
+ *
+ * ### What Happens on Failure?
+ *
+ * When this service throws SyncStateCorruptedError:
+ * 1. The caller (OperationLogSyncService) catches the error
+ * 2. The operations are marked as failed in the operation log
+ * 3. The user is notified and can trigger a full re-sync
+ *
+ * This approach trades occasional re-syncs for guaranteed correctness.
 */
@Injectable({
  providedIn: 'root',
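The fail-fast rule described in the doc comment above can be sketched as a standalone check. This is a hedged sketch: the real `SyncStateCorruptedError` lives in `sync-state-corrupted.error.ts`, and the local class below only mirrors the context shape the spec asserts on.

```typescript
// Shape of a dependency as the resolver reports it.
interface MissingDep {
  entityType: string;
  entityId: string;
  mustExist: boolean;
}

// Local stand-in for the real error class (same context shape).
class SyncStateCorruptedErrorSketch extends Error {
  constructor(
    message: string,
    public context: { opId: string; actionType: string; missingDependencies: string[] },
  ) {
    super(message);
  }
}

function assertNoMissingHardDeps(
  opId: string,
  actionType: string,
  missing: MissingDep[],
): void {
  // Soft dependencies (mustExist: false) never block application.
  const hard = missing.filter((d) => d.mustExist);
  if (hard.length > 0) {
    const ids = hard.map((d) => `${d.entityType}:${d.entityId}`);
    throw new SyncStateCorruptedErrorSketch(
      `Operation ${opId} cannot be applied: missing hard dependencies [${ids.join(', ')}].`,
      { opId, actionType, missingDependencies: ids },
    );
  }
}
```

Throwing rather than queuing is the whole design: a missing hard dependency means the sync state is already corrupt, and the caller escalates to a full re-sync instead of attempting partial recovery.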
|
@@ -37,168 +37,65 @@ interface PendingOperation {
|
|||
export class OperationApplierService {
|
||||
private store = inject(Store);
|
||||
private dependencyResolver = inject(DependencyResolverService);
|
||||
private snackService = inject(SnackService);
|
||||
private archiveOperationHandler = inject(ArchiveOperationHandler);
|
||||
|
||||
/**
|
||||
* Queue of operations that failed due to missing dependencies.
|
||||
* These are retried when new operations are applied that might resolve them.
|
||||
* Apply operations to the NgRx store.
|
||||
* Operations are applied in order. If any operation has missing hard dependencies,
|
||||
* a SyncStateCorruptedError is thrown immediately.
|
||||
*
|
||||
* @throws SyncStateCorruptedError if any operation has missing hard dependencies
|
||||
*/
|
||||
private pendingQueue: PendingOperation[] = [];
|
||||
|
||||
/**
|
||||
* Operations that have permanently failed after exhausting retry attempts.
|
||||
* Kept for debugging and potential manual recovery.
|
||||
*/
|
||||
private permanentlyFailedOps: PendingOperation[] = [];
|
||||
|
||||
async applyOperations(ops: Operation[]): Promise<void> {
|
||||
if (ops.length === 0) {
|
||||
return;
|
||||
}
|
||||
|
||||
OpLog.normal(
|
||||
'OperationApplierService: Applying operations:',
|
||||
ops.map((op) => op.id),
|
||||
);
|
||||
|
||||
// NOTE: Dependency sorting is currently disabled.
|
||||
// The sync server guarantees operations arrive in sequence order, and delete
|
||||
// operations are atomic via meta-reducers (no separate cleanup operations).
|
||||
// If ordering issues arise, re-enable using sortOperationsByDependency from:
|
||||
// ./sort-operations-by-dependency.util.ts
|
||||
for (const op of ops) {
|
||||
await this._tryApplyOperation({ op, retryCount: 0, missingDeps: [] });
|
||||
await this._applyOperation(op);
|
||||
}
|
||||
|
||||
// After applying new operations, retry any pending operations
|
||||
// (new ops might have resolved their dependencies)
|
||||
await this._retryPendingOperations();
|
||||
|
||||
OpLog.normal('OperationApplierService: Finished applying operations.');
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns the count of operations currently pending in the retry queue.
|
||||
* Apply a single operation, checking dependencies first.
|
||||
* Throws SyncStateCorruptedError if hard dependencies are missing.
|
||||
*/
|
||||
getPendingCount(): number {
|
||||
return this.pendingQueue.length;
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns the count of operations that permanently failed.
|
||||
*/
|
||||
getFailedCount(): number {
|
||||
return this.permanentlyFailedOps.length;
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns a copy of the permanently failed operations for debugging.
|
||||
*/
|
||||
getFailedOperations(): PendingOperation[] {
|
||||
return [...this.permanentlyFailedOps];
|
||||
}
|
||||
|
||||
/**
|
||||
* Clears permanently failed operations (e.g., after user acknowledgment).
|
||||
*/
|
||||
clearFailedOperations(): void {
|
||||
this.permanentlyFailedOps = [];
|
||||
}
|
||||
|
||||
  /**
   * Clears old failed operations that are beyond useful debugging.
   * Called periodically (e.g., on app startup or after sync) to prevent memory growth.
   *
   * @param maxAgeMs - Maximum age of failed operations to keep (default: 7 days)
   * @returns Number of operations pruned
   */
  pruneOldFailedOperations(maxAgeMs: number = 7 * 24 * 60 * 60 * 1000): number {
    const cutoff = Date.now() - maxAgeMs;
    const before = this.permanentlyFailedOps.length;

    this.permanentlyFailedOps = this.permanentlyFailedOps.filter(
      (pending) => pending.op.timestamp > cutoff,
    );

    const pruned = before - this.permanentlyFailedOps.length;
    if (pruned > 0) {
      OpLog.normal(`OperationApplierService: Pruned ${pruned} old failed operations`);
    }

    return pruned;
  }
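The pruning above is a plain timestamp filter. Extracted as a pure function with an injectable clock it becomes trivially testable; `FailedOpSketch` and `pruneOldSketch` are illustrative names, not part of the service:

```typescript
// Age-based pruning as a pure function; `now` is a parameter so the
// seven-day default cutoff can be exercised deterministically.
interface FailedOpSketch {
  op: { id: string; timestamp: number };
}

const pruneOldSketch = (
  failed: FailedOpSketch[],
  maxAgeMs: number = 7 * 24 * 60 * 60 * 1000,
  now: number = Date.now(),
): FailedOpSketch[] => failed.filter((p) => p.op.timestamp > now - maxAgeMs);
```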
|
||||
|
||||
/**
|
||||
* Attempts to apply a single operation, handling dependency failures.
|
||||
*/
|
||||
private async _tryApplyOperation(pending: PendingOperation): Promise<boolean> {
|
||||
const { op, retryCount } = pending;
|
||||
private async _applyOperation(op: Operation): Promise<void> {
|
||||
const deps = this.dependencyResolver.extractDependencies(op);
|
||||
const { missing } = await this.dependencyResolver.checkDependencies(deps);
|
||||
|
||||
// Check for hard dependencies
|
||||
// Only hard dependencies block application
|
||||
const missingHardDeps = missing.filter((dep) => dep.mustExist);
|
||||
|
||||
if (missingHardDeps.length > 0) {
|
||||
const missingDepIds = missingHardDeps.map((d) => d.entityId);
|
||||
const missingDepIds = missingHardDeps.map((d) => `${d.entityType}:${d.entityId}`);
|
||||
|
||||
// Check if we should mark as failed (max retries or queue at capacity)
|
||||
const shouldMarkFailed =
|
||||
retryCount >= MAX_DEPENDENCY_RETRY_ATTEMPTS ||
|
||||
this.pendingQueue.length >= MAX_PENDING_QUEUE_SIZE;
|
||||
|
||||
if (shouldMarkFailed) {
|
||||
// Log reason for failure
|
||||
if (this.pendingQueue.length >= MAX_PENDING_QUEUE_SIZE) {
|
||||
OpLog.warn(
|
||||
'OperationApplierService: Pending queue at capacity, marking operation as failed immediately.',
|
||||
{ opId: op.id, queueSize: this.pendingQueue.length },
|
||||
);
|
||||
} else {
|
||||
OpLog.err(
|
||||
'OperationApplierService: Operation permanently failed after max retries.',
|
||||
{
|
||||
opId: op.id,
|
||||
actionType: op.actionType,
|
||||
missingDeps: missingDepIds,
|
||||
retryCount,
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
// Ensure failed ops list doesn't exceed capacity
|
||||
if (this.permanentlyFailedOps.length >= MAX_FAILED_OPS_SIZE) {
|
||||
OpLog.warn(
|
||||
'OperationApplierService: Failed ops queue at capacity, dropping oldest.',
|
||||
);
|
||||
this.permanentlyFailedOps.shift();
|
||||
}
|
||||
|
||||
this.permanentlyFailedOps.push({
|
||||
op,
|
||||
retryCount,
|
||||
OpLog.err(
|
||||
'OperationApplierService: Operation has missing hard dependencies. ' +
|
||||
'This indicates corrupted sync state - a full re-sync is required.',
|
||||
{
|
||||
opId: op.id,
|
||||
actionType: op.actionType,
|
||||
missingDeps: missingDepIds,
|
||||
});
|
||||
|
||||
// Notify user about failed operation (only once per batch to avoid spam)
|
||||
// Check if this is the first failed op or if we just hit a threshold
|
||||
if (this.permanentlyFailedOps.length === 1) {
|
||||
this.snackService.open({
|
||||
type: 'ERROR',
|
||||
msg: T.F.SYNC.S.OPERATION_PERMANENTLY_FAILED,
|
||||
});
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
// Queue for retry
|
||||
OpLog.warn(
|
||||
'OperationApplierService: Queuing operation for retry due to missing dependencies.',
|
||||
{ opId: op.id, missingDeps: missingDepIds, retryCount: retryCount + 1 },
|
||||
},
|
||||
);
|
||||
|
||||
throw new SyncStateCorruptedError(
|
||||
`Operation ${op.id} cannot be applied: missing hard dependencies [${missingDepIds.join(', ')}]. ` +
|
||||
'This indicates corrupted sync state. A full re-sync is required to restore consistency.',
|
||||
{
|
||||
opId: op.id,
|
||||
actionType: op.actionType,
|
||||
missingDependencies: missingDepIds,
|
||||
},
|
||||
);
|
||||
this.pendingQueue.push({
|
||||
op,
|
||||
retryCount: retryCount + 1,
|
||||
missingDeps: missingDepIds,
|
||||
});
|
||||
return false;
|
||||
}
|
||||
|
||||
// Dependencies satisfied, apply the operation
|
||||
|
|
@@ -213,32 +110,6 @@ export class OperationApplierService {
|
|||
|
||||
// Handle archive-specific side effects for remote operations
|
||||
// This is called AFTER dispatch because the NgRx state must be updated first
|
||||
await this.archiveOperationHandler.handleRemoteOperation(action);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Retries all pending operations once.
|
||||
* Operations that still fail are kept in the queue for the next cycle.
|
||||
*/
|
||||
private async _retryPendingOperations(): Promise<void> {
|
||||
if (this.pendingQueue.length === 0) {
|
||||
return;
|
||||
}
|
||||
|
||||
OpLog.normal(
|
||||
'OperationApplierService: Retrying pending operations:',
|
||||
this.pendingQueue.length,
|
||||
);
|
||||
|
||||
// Take all pending ops and clear the queue
|
||||
const toRetry = [...this.pendingQueue];
|
||||
this.pendingQueue = [];
|
||||
|
||||
// Try to apply each one (they may re-queue themselves if still missing deps)
|
||||
for (const pending of toRetry) {
|
||||
await this._tryApplyOperation(pending);
|
||||
}
|
||||
await this.archiveOperationHandler.handleOperation(action);
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -1,193 +0,0 @@
import { generateCaptureId, simpleHash } from './operation-capture.util';
import { PersistentAction } from '../persistent-action.interface';
import { EntityType, OpType } from '../operation.types';

describe('operation-capture.util', () => {
  describe('simpleHash', () => {
    it('should return consistent hash for same input', () => {
      const input = 'test string';
      const hash1 = simpleHash(input);
      const hash2 = simpleHash(input);

      expect(hash1).toBe(hash2);
    });

    it('should return different hashes for different inputs', () => {
      const hash1 = simpleHash('input1');
      const hash2 = simpleHash('input2');

      expect(hash1).not.toBe(hash2);
    });

    it('should return base-36 encoded string', () => {
      const hash = simpleHash('test');

      // Base-36 uses 0-9 and a-z
      expect(hash).toMatch(/^[0-9a-z]+$/);
    });

    it('should handle empty string', () => {
      const hash = simpleHash('');

      expect(hash).toBe('0');
    });

    it('should handle unicode characters', () => {
      const hash = simpleHash('日本語 🎉 emoji');

      expect(hash).toMatch(/^[0-9a-z]+$/);
    });

    it('should handle very long strings', () => {
      const longString = 'x'.repeat(10000);
      const hash = simpleHash(longString);

      expect(hash).toMatch(/^[0-9a-z]+$/);
    });

    it('should produce different hashes for similar strings', () => {
      const hash1 = simpleHash('abc');
      const hash2 = simpleHash('abd');

      expect(hash1).not.toBe(hash2);
    });
  });
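The expectations above (deterministic, base-36 output, `''` hashing to `'0'`) are consistent with a simple 31-based rolling hash. The following is an assumed sketch matching those properties, not necessarily the project's exact `simpleHash` implementation:

```typescript
// Assumed sketch of a simpleHash-style function: accumulates a 32-bit
// rolling hash and renders it in base 36, so output matches /^[0-9a-z]+$/
// and the empty string yields '0'.
const simpleHashSketch = (str: string): string => {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    hash = (hash * 31 + str.charCodeAt(i)) | 0; // clamp to 32-bit int range
  }
  return Math.abs(hash).toString(36);
};
```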
|
||||
|
||||
describe('generateCaptureId', () => {
|
||||
const createMockAction = (
|
||||
overrides: Partial<PersistentAction> = {},
|
||||
): PersistentAction =>
|
||||
({
|
||||
type: '[TaskShared] Update Task',
|
||||
meta: {
|
||||
isPersistent: true,
|
||||
entityType: 'TASK' as EntityType,
|
||||
entityId: 'task-123',
|
||||
opType: OpType.Update,
|
||||
},
|
||||
...overrides,
|
||||
}) as PersistentAction;
|
||||
|
||||
it('should include action type in capture ID', () => {
|
||||
const action = createMockAction({ type: '[Custom] Action Type' });
|
||||
const captureId = generateCaptureId(action);
|
||||
|
||||
expect(captureId).toContain('[Custom] Action Type');
|
||||
});
|
||||
|
||||
it('should include entityId in capture ID', () => {
|
||||
const action = createMockAction({
|
||||
meta: {
|
||||
isPersistent: true,
|
||||
entityType: 'TASK' as EntityType,
|
||||
entityId: 'unique-entity-id',
|
||||
opType: OpType.Update,
|
||||
},
|
||||
});
|
||||
const captureId = generateCaptureId(action);
|
||||
|
||||
expect(captureId).toContain('unique-entity-id');
|
||||
});
|
||||
|
||||
it('should include entityIds joined by comma for bulk actions', () => {
|
||||
const action = createMockAction({
|
||||
meta: {
|
||||
isPersistent: true,
|
||||
entityType: 'TASK' as EntityType,
|
||||
entityIds: ['task-1', 'task-2', 'task-3'],
|
||||
opType: OpType.Update,
|
||||
isBulk: true,
|
||||
},
|
||||
});
|
||||
const captureId = generateCaptureId(action);
|
||||
|
||||
expect(captureId).toContain('task-1,task-2,task-3');
|
||||
});
|
||||
|
||||
it('should use "no-id" when no entityId or entityIds present', () => {
|
||||
const action = createMockAction({
|
||||
meta: {
|
||||
isPersistent: true,
|
||||
entityType: 'GLOBAL_CONFIG' as EntityType,
|
||||
opType: OpType.Update,
|
||||
},
|
||||
});
|
||||
const captureId = generateCaptureId(action);
|
||||
|
||||
expect(captureId).toContain('no-id');
|
||||
});
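The capture-ID tests around this point (single `entityId`, bulk `entityIds` joined by comma, fallback to `'no-id'`) imply an entity-key resolution like the following sketch; `CaptureMetaSketch` and `entityKeySketch` are illustrative names:

```typescript
// Resolution order implied by the tests: prefer entityId, fall back to
// entityIds joined by comma, otherwise the literal 'no-id'.
interface CaptureMetaSketch {
  entityId?: string;
  entityIds?: string[];
}

const entityKeySketch = (meta: CaptureMetaSketch): string =>
  meta.entityId ?? (meta.entityIds ? meta.entityIds.join(',') : 'no-id');
```

This key forms the middle segment of the `type:entityKey:hash` capture-ID format the later tests assert on.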
|
||||
|
||||
it('should generate consistent ID for same action', () => {
|
||||
const action = createMockAction();
|
||||
const captureId1 = generateCaptureId(action);
|
||||
const captureId2 = generateCaptureId(action);
|
||||
|
||||
expect(captureId1).toBe(captureId2);
|
||||
});
|
||||
|
||||
it('should generate different IDs for different actions', () => {
|
||||
const action1 = createMockAction({
|
||||
meta: {
|
||||
isPersistent: true,
|
||||
entityType: 'TASK' as EntityType,
|
||||
entityId: 'task-1',
|
||||
opType: OpType.Update,
|
||||
},
|
||||
});
|
||||
const action2 = createMockAction({
|
||||
meta: {
|
||||
isPersistent: true,
|
||||
entityType: 'TASK' as EntityType,
|
||||
entityId: 'task-2',
|
||||
opType: OpType.Update,
|
||||
},
|
||||
});
|
||||
|
||||
const captureId1 = generateCaptureId(action1);
|
||||
const captureId2 = generateCaptureId(action2);
|
||||
|
||||
expect(captureId1).not.toBe(captureId2);
|
||||
});
|
||||
|
||||
it('should generate different IDs for same entityId with different payloads', () => {
|
||||
const action1 = createMockAction({
|
||||
task: { id: 'task-1', changes: { title: 'Title A' } },
|
||||
} as Partial<PersistentAction>);
|
||||
const action2 = createMockAction({
|
||||
task: { id: 'task-1', changes: { title: 'Title B' } },
|
||||
} as Partial<PersistentAction>);
|
||||
|
||||
const captureId1 = generateCaptureId(action1);
|
||||
const captureId2 = generateCaptureId(action2);
|
||||
|
||||
expect(captureId1).not.toBe(captureId2);
|
||||
});
|
||||
|
||||
it('should have format: type:entityKey:hash', () => {
|
||||
const action = createMockAction();
|
||||
const captureId = generateCaptureId(action);
|
||||
|
||||
const parts = captureId.split(':');
|
||||
expect(parts.length).toBe(3);
|
||||
expect(parts[0]).toBe('[TaskShared] Update Task');
|
||||
expect(parts[1]).toBe('task-123');
|
||||
expect(parts[2]).toMatch(/^[0-9a-z]+$/); // hash
|
||||
});
|
||||
|
||||
it('should prefer entityId over entityIds when both present', () => {
|
||||
const action = createMockAction({
|
||||
meta: {
|
||||
isPersistent: true,
|
||||
entityType: 'TASK' as EntityType,
|
||||
entityId: 'single-id',
|
||||
entityIds: ['bulk-1', 'bulk-2'],
|
||||
opType: OpType.Update,
|
||||
},
|
||||
});
|
||||
const captureId = generateCaptureId(action);
|
||||
|
||||
expect(captureId).toContain('single-id');
|
||||
expect(captureId).not.toContain('bulk-1');
|
||||
});
|
||||
});
|
||||
});
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
import { PersistentAction } from '../persistent-action.interface';

/**
 * Generates a unique capture ID for correlating meta-reducer and effect.
 * Uses action type + entity info + hash for uniqueness.
 *
 * @param action The persistent action to generate an ID for
 * @returns A unique string ID for this specific action instance
 */
export const generateCaptureId = (action: PersistentAction): string => {
  const entityKey = action.meta.entityId || action.meta.entityIds?.join(',') || 'no-id';
  const actionHash = simpleHash(JSON.stringify(action));
  return `${action.type}:${entityKey}:${actionHash}`;
};

/**
 * Simple hash function for generating unique IDs.
 * Produces a base-36 encoded 32-bit hash.
 *
 * @param str The string to hash
 * @returns A base-36 encoded hash string
 */
export const simpleHash = (str: string): string => {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    const char = str.charCodeAt(i);
    hash = (hash << 5) - hash + char;
    hash = hash & hash; // Convert to 32-bit integer
  }
  return Math.abs(hash).toString(36);
};
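For illustration, the capture-ID scheme tested above can be exercised standalone. This is a minimal sketch: the simplified `Action` shape is an assumption for demonstration purposes (the real `PersistentAction` carries more metadata such as `opType` and `entityType`).

```typescript
// Minimal standalone sketch of the capture-ID scheme. The Action shape here is
// simplified for illustration; it is NOT the real PersistentAction interface.
type Meta = { entityId?: string; entityIds?: string[] };
type Action = { type: string; meta: Meta };

// Same base-36 32-bit hash as the util above.
const simpleHash = (str: string): string => {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    hash = (hash << 5) - hash + str.charCodeAt(i);
    hash = hash & hash; // keep it a 32-bit integer
  }
  return Math.abs(hash).toString(36);
};

// type:entityKey:hash — entityId wins over entityIds, falling back to 'no-id'.
const generateCaptureId = (action: Action): string => {
  const entityKey = action.meta.entityId || action.meta.entityIds?.join(',') || 'no-id';
  return `${action.type}:${entityKey}:${simpleHash(JSON.stringify(action))}`;
};

const id = generateCaptureId({
  type: '[TaskShared] Update Task',
  meta: { entityId: 'task-123' },
});
console.log(id.split(':')[1]); // 'task-123'
```

Because the hash covers the whole serialized action, two updates to the same entity with different payloads still get distinct capture IDs, which is what the correlation between meta-reducer and effect relies on.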
@@ -1,82 +1,15 @@
import { inject, Injectable } from '@angular/core';
import { createEffect, ofType } from '@ngrx/effects';
import { LOCAL_ACTIONS } from '../../../util/local-actions.token';
import { Task, TaskCopy } from '../../tasks/task.model';
import { IssueProviderActions } from './issue-provider.actions';
import { tap } from 'rxjs/operators';
import { Observable } from 'rxjs';
import { Update } from '@ngrx/entity/src/models';
import { TaskArchiveService } from '../../time-tracking/task-archive.service';
import { IssueLog } from '../../../core/log';
import { TaskSharedActions } from '../../../root-store/meta/task-shared.actions';

const UNLINKED_PARTIAL_TASK: Partial<TaskCopy> = {
  issueId: undefined,
  issueProviderId: undefined,
  issueType: undefined,
  issueWasUpdated: undefined,
  issueLastUpdated: undefined,
  issueAttachmentNr: undefined,
  issueTimeTracked: undefined,
  issuePoints: undefined,
} as const;

/**
 * Handles unlinking tasks from deleted issue providers.
 * @deprecated This file is no longer used. Issue provider deletion archive cleanup has
 * been consolidated into ArchiveOperationHandlerEffects for unified handling of both
 * local and remote operations.
 *
 * Regular tasks are handled atomically by the meta-reducer (issue-provider-shared.reducer.ts)
 * to ensure proper sync. This effect only handles archive tasks which are stored separately.
 * @see ArchiveOperationHandler._handleDeleteIssueProvider
 * @see ArchiveOperationHandler._handleDeleteIssueProviders
 * @see ArchiveOperationHandlerEffects at:
 * src/app/core/persistence/operation-log/processing/archive-operation-handler.effects.ts
 *
 * All archive-affecting operations are now routed through ArchiveOperationHandler,
 * which is the SINGLE SOURCE OF TRUTH for archive storage operations.
 *
 * This file is kept temporarily for reference and should be deleted in a future cleanup.
 */
@Injectable()
export class UnlinkAllTasksOnProviderDeletionEffects {
  private _actions$ = inject(LOCAL_ACTIONS);
  private _taskArchiveService = inject(TaskArchiveService);

  /**
   * Unlink archive tasks when an issue provider is deleted via TaskSharedActions.
   * Regular tasks are handled by the meta-reducer for atomic sync.
   */
  unlinkArchiveTasksOnProviderDeletion$: Observable<unknown> = createEffect(
    () =>
      this._actions$.pipe(
        ofType(TaskSharedActions.deleteIssueProvider),
        tap(({ issueProviderId }) => this._unlinkArchiveTasks(issueProviderId)),
      ),
    { dispatch: false },
  );

  /**
   * Handle bulk deletion of issue providers (used by sync/import).
   * This still needs to unlink both regular and archive tasks since bulk deletions
   * don't go through the new TaskSharedActions flow.
   */
  unlinkAllTasksOnProviderDeletions$: Observable<unknown> = createEffect(
    () =>
      this._actions$.pipe(
        ofType(IssueProviderActions.deleteIssueProviders),
        tap((v) => v.ids.forEach((id) => this._unlinkArchiveTasks(id))),
      ),
    { dispatch: false },
  );

  private async _unlinkArchiveTasks(issueProviderId: string): Promise<void> {
    const archiveTasks = await this._taskArchiveService.load();
    const archiveTaskUpdates: Update<TaskCopy>[] = Object.values(archiveTasks.entities)
      .filter(
        (task): task is Task =>
          task !== undefined && task.issueProviderId === issueProviderId,
      )
      .map((task) => ({
        id: task.id,
        changes: UNLINKED_PARTIAL_TASK,
      }));

    if (archiveTaskUpdates.length > 0) {
      await this._taskArchiveService.updateTasks(archiveTaskUpdates);
      IssueLog.log('unlinkArchiveTasksOnProviderDeletion$', {
        issueProviderId,
        archiveTaskUpdates,
      });
    }
  }
}
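The filter-and-map in `_unlinkArchiveTasks` above can be shown in isolation. This is a sketch with simplified shapes (the real code uses `@ngrx/entity`'s `Update<TaskCopy>` and resets the full `UNLINKED_PARTIAL_TASK` field set, not just `issueProviderId`).

```typescript
// Sketch of the unlink mapping with simplified shapes; the real implementation
// operates on the full TaskCopy model and UNLINKED_PARTIAL_TASK shown above.
interface ArchivedTask {
  id: string;
  issueProviderId?: string;
}

// Build one update per archived task that still points at the deleted provider.
const buildUnlinkUpdates = (
  entities: Record<string, ArchivedTask | undefined>,
  issueProviderId: string,
): { id: string; changes: Partial<ArchivedTask> }[] =>
  Object.values(entities)
    .filter(
      (t): t is ArchivedTask => t !== undefined && t.issueProviderId === issueProviderId,
    )
    .map((t) => ({ id: t.id, changes: { issueProviderId: undefined } }));

const updates = buildUnlinkUpdates(
  { a: { id: 'a', issueProviderId: 'p1' }, b: { id: 'b' } },
  'p1',
);
console.log(updates.length); // 1
```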
@@ -13,31 +13,29 @@ import { GlobalConfigService } from '../../config/global-config.service';
 import { T } from '../../../t.const';
 import { Project } from '../project.model';
 import { Observable } from 'rxjs';
-import { TaskArchiveService } from '../../time-tracking/task-archive.service';
-import { TimeTrackingService } from '../../time-tracking/time-tracking.service';
 
 @Injectable()
 export class ProjectEffects {
   private _actions$ = inject(LOCAL_ACTIONS);
   private _snackService = inject(SnackService);
-  private _taskArchiveService = inject(TaskArchiveService);
   private _globalConfigService = inject(GlobalConfigService);
-  private _timeTrackingService = inject(TimeTrackingService);
 
+  /**
+   * Handles non-archive cleanup when a project is deleted.
+   *
+   * NOTE: Archive cleanup (task removal, time tracking cleanup) is now handled by
+   * ArchiveOperationHandler, which is the single source of truth for archive operations.
+   *
+   * @see ArchiveOperationHandler._handleDeleteProject
+   */
   deleteProjectRelatedData: Observable<unknown> = createEffect(
     () =>
       this._actions$.pipe(
         ofType(TaskSharedActions.deleteProject),
-        tap(async ({ projectId }) => {
-          // NOTE: we also do stuff on a reducer level (probably better to handle on this level @TODO refactor)
-          const id = projectId;
-          this._taskArchiveService.removeAllArchiveTasksForProject(id);
-          // Reminders are now stored on tasks (task.remindAt), so they're deleted with the tasks
-          this._timeTrackingService.cleanupDataEverywhereForProject(id);
-
-          // we also might need to account for this unlikely but very nasty scenario
+        tap(({ projectId }) => {
+          // Clear defaultProjectId if the deleted project was the default
           const cfg = this._globalConfigService.cfg();
-          if (cfg && id === cfg.misc.defaultProjectId) {
+          if (cfg && projectId === cfg.misc.defaultProjectId) {
             this._globalConfigService.updateSection('misc', { defaultProjectId: null });
           }
         }),

@@ -2,7 +2,6 @@ import { inject, Injectable } from '@angular/core';
import { createEffect, ofType } from '@ngrx/effects';
import { LOCAL_ACTIONS } from '../../../util/local-actions.token';
import {
  concatMap,
  distinctUntilChanged,
  filter,
  first,

@@ -22,8 +21,6 @@ import { WorkContextType } from '../../work-context/work-context.model';
import { WorkContextService } from '../../work-context/work-context.service';
import { Router } from '@angular/router';
import { TODAY_TAG } from '../tag.const';
import { TaskArchiveService } from '../../time-tracking/task-archive.service';
import { TimeTrackingService } from '../../time-tracking/time-tracking.service';
import { fastArrayCompare } from '../../../util/fast-array-compare';
import { getDbDateStr } from '../../../util/get-db-date-str';
import { TranslateService } from '@ngx-translate/core';

@@ -31,7 +28,6 @@ import { PlannerService } from '../../planner/planner.service';
import { selectAllTasksDueToday } from '../../planner/store/planner.selectors';
import { TaskSharedActions } from '../../../root-store/meta/task-shared.actions';
import { Log } from '../../../core/log';
import { devError } from '../../../util/dev-error';

@Injectable()
export class TagEffects {

@@ -41,8 +37,6 @@ export class TagEffects {
  private _tagService = inject(TagService);
  private _workContextService = inject(WorkContextService);
  private _router = inject(Router);
  private _taskArchiveService = inject(TaskArchiveService);
  private _timeTrackingService = inject(TimeTrackingService);
  private _translateService = inject(TranslateService);
  private _plannerService = inject(PlannerService);

@@ -87,48 +81,14 @@ export class TagEffects {
  );

  /**
   * Handles async cleanup when tags are deleted.
   * Redirects to Today tag if the current tag is deleted.
   *
   * NOTE: Most tag deletion cleanup is now handled atomically in the meta-reducer
   * (tag-shared.reducer.ts), including:
   * - Removing tag references from tasks
   * - Deleting orphaned tasks (no project, no tags, no parent)
   * - Cleaning up task repeat configs
   * - Cleaning up current time tracking state
   * NOTE: Archive cleanup (removing tags from archived tasks, cleaning up time tracking)
   * is now handled by ArchiveOperationHandler, which is the single source of truth for
   * archive operations.
   *
   * This effect only handles async operations that can't be in a reducer:
   * - Archive cleanup (IndexedDB, not NgRx state)
   * - Archive time tracking cleanup (IndexedDB, not NgRx state)
   * @see ArchiveOperationHandler._handleDeleteTags
   */
  deleteTagRelatedData: Observable<unknown> = createEffect(
    () =>
      this._actions$.pipe(
        ofType(deleteTag, deleteTags),
        map((a: any) => (a.ids ? a.ids : [a.id])),
        concatMap(async (tagIdsToRemove: string[]) => {
          try {
            // Remove tags from archived tasks (async - IndexedDB, not NgRx state)
            await this._taskArchiveService.removeTagsFromAllTasks(tagIdsToRemove);

            // Clean up time tracking data in archives (async - IndexedDB)
            // Note: Current time tracking state is handled in the meta-reducer
            for (const tagId of tagIdsToRemove) {
              await this._timeTrackingService.cleanupArchiveDataForTag(tagId);
            }
          } catch (e) {
            // Archive cleanup is non-critical - log and notify but don't break
            devError('[TagEffects] Failed to clean up tag archive data');
            this._snackService.open({
              type: 'ERROR',
              msg: T.F.TAG.S.ARCHIVE_CLEANUP_FAILED,
            });
          }
          return tagIdsToRemove;
        }),
      ),
    { dispatch: false },
  );

  redirectIfCurrentTagIsDeleted: Observable<unknown> = createEffect(
    () =>
      this._actions$.pipe(

@@ -110,20 +110,9 @@ export class TaskRepeatCfgEffects {
     ),
   );
 
-  /**
-   * Unlink archive tasks when a task repeat config is deleted.
-   * Regular tasks are handled by the meta-reducer for atomic sync.
-   */
-  removeConfigIdFromTaskArchiveTasks$ = createEffect(
-    () =>
-      this._localActions$.pipe(
-        ofType(TaskSharedActions.deleteTaskRepeatCfg),
-        tap(({ taskRepeatCfgId }) => {
-          this._taskArchiveService.removeRepeatCfgFromArchiveTasks(taskRepeatCfgId);
-        }),
-      ),
-    { dispatch: false },
-  );
+  // NOTE: Archive cleanup for deleteTaskRepeatCfg is now handled by
+  // ArchiveOperationHandler._handleDeleteTaskRepeatCfg, which is the single
+  // source of truth for archive operations.
 
   updateTaskAfterMakingItRepeatable$ = createEffect(
     () =>

@@ -1,102 +1,12 @@
import { Injectable, inject } from '@angular/core';
import { createEffect, ofType } from '@ngrx/effects';
import { LOCAL_ACTIONS } from '../../../util/local-actions.token';
import { tap } from 'rxjs/operators';
import { flushYoungToOld } from './archive.actions';
import { PfapiService } from '../../../pfapi/pfapi.service';
import { sortTimeTrackingAndTasksFromArchiveYoungToOld } from '../sort-data-to-flush';
import { ARCHIVE_TASK_YOUNG_TO_OLD_THRESHOLD } from '../archive.service';
import { Log } from '../../../core/log';
import { TaskSharedActions } from '../../../root-store/meta/task-shared.actions';
import { TaskArchiveService } from '../task-archive.service';
import { Task } from '../../tasks/task.model';

/**
 * Effects for archive-related side effects triggered by LOCAL user actions.
 * @deprecated This file is no longer used. Archive effects have been consolidated into
 * ArchiveOperationHandlerEffects for unified handling of both local and remote operations.
 *
 * IMPORTANT: All effects in this file use LOCAL_ACTIONS, meaning they only run
 * for local user actions, NOT for remote operations received via sync.
 * @see ArchiveOperationHandlerEffects at:
 * src/app/core/persistence/operation-log/processing/archive-operation-handler.effects.ts
 *
 * Remote archive operations are handled by ArchiveOperationHandler, which is
 * called directly by OperationApplierService. This ensures that:
 * 1. Effects never run during sync replay (preventing duplicate side effects)
 * 2. Archive state is explicitly managed in a predictable way
 * 3. The general rule "effects should never run for remote operations" is followed
 * All archive-affecting operations are now routed through ArchiveOperationHandler,
 * which is the SINGLE SOURCE OF TRUTH for archive storage operations.
 *
 * For moveToArchive, local operations are handled by ArchiveService BEFORE
 * dispatching the action, so there's no effect needed for that case.
 * This file is kept temporarily for reference and should be deleted in a future cleanup.
 */
@Injectable()
export class ArchiveEffects {
  private _actions$ = inject(LOCAL_ACTIONS);
  private _pfapiService = inject(PfapiService);
  private _taskArchiveService = inject(TaskArchiveService);

  /**
   * Handles the flushYoungToOld action by executing the actual flush operation.
   * Only runs for LOCAL dispatches - remote flushes are handled by ArchiveOperationHandler.
   *
   * The flush operation:
   * 1. Loads archiveYoung and archiveOld
   * 2. Moves old tasks and time tracking data from Young to Old
   * 3. Saves both archives
   */
  flushYoungToOld$ = createEffect(
    () =>
      this._actions$.pipe(
        ofType(flushYoungToOld),
        tap(async ({ timestamp }) => {
          const now = timestamp;

          const archiveYoung = await this._pfapiService.m.archiveYoung.load();
          const archiveOld = await this._pfapiService.m.archiveOld.load();

          const newSorted = sortTimeTrackingAndTasksFromArchiveYoungToOld({
            archiveYoung,
            archiveOld,
            threshold: ARCHIVE_TASK_YOUNG_TO_OLD_THRESHOLD,
            now,
          });

          await this._pfapiService.m.archiveYoung.save(
            {
              ...newSorted.archiveYoung,
              lastTimeTrackingFlush: now,
            },
            { isUpdateRevAndLastUpdate: true },
          );

          await this._pfapiService.m.archiveOld.save(
            {
              ...newSorted.archiveOld,
              lastTimeTrackingFlush: now,
            },
            { isUpdateRevAndLastUpdate: true },
          );

          Log.log(
            '______________________\nFLUSHED ALL FROM ARCHIVE YOUNG TO OLD (via local action)\n_______________________',
          );
        }),
      ),
    { dispatch: false },
  );

  /**
   * When a task is restored from archive via LOCAL user action, remove it from
   * archive storage. Remote restoreTask operations are handled by ArchiveOperationHandler.
   */
  removeFromArchiveForRestoreTask$ = createEffect(
    () =>
      this._actions$.pipe(
        ofType(TaskSharedActions.restoreTask),
        tap(({ task }) => this._removeFromArchive(task)),
      ),
    { dispatch: false },
  );

  private async _removeFromArchive(task: Task): Promise<void> {
    const taskIds = [task.id, ...task.subTaskIds];
    await this._taskArchiveService.deleteTasks(taskIds);
  }
}
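The young→old flush performed above can be reduced to a threshold split. This is a hedged sketch only: it assumes archived tasks carry a `doneOn` timestamp, while the real `sortTimeTrackingAndTasksFromArchiveYoungToOld` also migrates the accompanying time-tracking data between the two archives.

```typescript
// Sketch of the young→old flush idea. ArchivedTask and the doneOn field are
// assumptions for illustration; the real helper handles time tracking as well.
interface ArchivedTask {
  id: string;
  doneOn: number; // completion timestamp (ms)
}

// Tasks older than `threshold` (relative to `now`) move from young to old.
const splitYoungToOld = (
  young: ArchivedTask[],
  old: ArchivedTask[],
  threshold: number,
  now: number,
): { young: ArchivedTask[]; old: ArchivedTask[] } => {
  const keep = young.filter((t) => now - t.doneOn < threshold);
  const move = young.filter((t) => now - t.doneOn >= threshold);
  return { young: keep, old: [...old, ...move] };
};
```

Both resulting archives are then persisted in one go (with `lastTimeTrackingFlush` stamped), so a crash between the two saves is the main consistency concern the single handler has to care about.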
@@ -21,7 +21,6 @@ import { IdleEffects } from '../features/idle/store/idle.effects';
 import { issueProvidersFeature } from '../features/issue/store/issue-provider.reducer';
 import { PollToBacklogEffects } from '../features/issue/store/poll-to-backlog.effects';
 import { PollIssueUpdatesEffects } from '../features/issue/store/poll-issue-updates.effects';
-import { UnlinkAllTasksOnProviderDeletionEffects } from '../features/issue/store/unlink-all-tasks-on-provider-deletion.effects';
 import {
   METRIC_FEATURE_NAME,
   metricReducer,

@@ -76,7 +75,7 @@ import { ReminderCountdownEffects } from '../features/reminder/store/reminder-co
 import { SyncEffects } from '../imex/sync/sync.effects';
 import { boardsFeature } from '../features/boards/store/boards.reducer';
 import { timeTrackingFeature } from '../features/time-tracking/store/time-tracking.reducer';
-import { ArchiveEffects } from '../features/time-tracking/store/archive.effects';
+import { ArchiveOperationHandlerEffects } from '../core/persistence/operation-log/processing/archive-operation-handler.effects';
 import { plannerFeature } from '../features/planner/store/planner.reducer';
 import { PlannerEffects } from '../features/planner/store/planner.effects';
 import { AppStateEffects } from './app-state/app-state.effects';

@@ -126,11 +125,7 @@ import {
     EffectsModule.forFeature([IdleEffects]),
 
     StoreModule.forFeature(issueProvidersFeature),
-    EffectsModule.forFeature([
-      PollToBacklogEffects,
-      PollIssueUpdatesEffects,
-      UnlinkAllTasksOnProviderDeletionEffects,
-    ]),
+    EffectsModule.forFeature([PollToBacklogEffects, PollIssueUpdatesEffects]),
 
     StoreModule.forFeature(METRIC_FEATURE_NAME, metricReducer),
 

@@ -168,7 +163,7 @@ import {
     StoreModule.forFeature(boardsFeature),
 
     StoreModule.forFeature(timeTrackingFeature),
-    EffectsModule.forFeature([ArchiveEffects]),
+    EffectsModule.forFeature([ArchiveOperationHandlerEffects]),
 
     StoreModule.forFeature(plannerFeature),
     EffectsModule.forFeature([PlannerEffects]),