Compare commits

..

66 commits

Author SHA1 Message Date
SergeantPanda
8521df94ad
Merge pull request #868 from DawtCom:main
Move caching to client to remove burden on dispatch server
2026-01-18 17:26:49 -06:00
DawtCom
c970cfcf9a Move caching to client to remove burden on dispatch server 2026-01-18 00:49:17 -06:00
SergeantPanda
fe60c4f3bc Enhancement: Update frontend tests workflow to ensure proper triggering on push and pull request events only when frontend code changes.
2026-01-17 18:30:13 -06:00
SergeantPanda
7cf7aecdf2
Merge pull request #857 from Dispatcharr/dev
Dev
2026-01-15 09:05:06 -06:00
SergeantPanda
54644df9a3 Test Fix: Fixed SettingsUtils frontend tests for new grouped settings architecture. Updated test suite to properly verify grouped JSON settings (stream_settings, dvr_settings, etc.) instead of individual CharField settings, including tests for type conversions, array-to-CSV transformations, and special handling of proxy_settings and network_access. Frontend tests GitHub workflow now uses Node.js 24 (matching Dockerfile) and runs on both main and dev branch pushes and pull requests for comprehensive CI coverage.
2026-01-15 08:55:38 -06:00
SergeantPanda
38fa0fe99d Bug Fix: Fixed NumPy baseline detection in Docker entrypoint. Now calls numpy.show_config() directly with case-insensitive grep instead of incorrectly wrapping the output.
2026-01-14 17:10:06 -06:00
SergeantPanda
a772f5c353 changelog: Update missed close on 0.17.0 changelog.
2026-01-13 17:03:30 -06:00
GitHub Actions
da186bcb9d Release v0.17.0
2026-01-13 22:48:13 +00:00
SergeantPanda
75df00e329
Merge pull request #849 from Dispatcharr/dev
Version 0.17.0
2026-01-13 16:45:33 -06:00
SergeantPanda
d0ed682b3d Bug Fix: Fixed bulk channel profile membership update endpoint silently ignoring channels without existing membership records. The endpoint now creates missing memberships automatically (matching single-channel endpoint behavior), validates that all channel IDs exist before processing, and provides detailed response feedback including counts of updated vs. created memberships. Added comprehensive Swagger documentation with request/response schemas.
2026-01-13 15:43:44 -06:00
SergeantPanda
60955a39c7 changelog: Update changelog for settings refactor. 2026-01-13 14:35:01 -06:00
SergeantPanda
6c15ae940d changelog: Update changelog for PR 837 2026-01-13 14:30:25 -06:00
SergeantPanda
516d0e02aa
Merge pull request #837 from mdellavo/bulk-update-fix 2026-01-13 14:26:04 -06:00
Marc DellaVolpe
6607cef5d4 fix bulk update error on unmatched first entry, add tests to cover bulk update api 2026-01-13 15:05:40 -05:00
SergeantPanda
2f9b544519
Merge pull request #848 from Dispatcharr/settings-refactor
Refactor CoreSettings to use JSONField for value storage and update r…
2026-01-13 13:34:50 -06:00
SergeantPanda
36967c10ce Refactor CoreSettings to use JSONField for value storage and update related logic for proper type handling. Adjusted serializers and forms to accommodate new data structure, ensuring seamless integration across the application. 2026-01-13 12:18:34 -06:00
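The JSONField refactor above can be illustrated with a minimal sketch. The setting keys and value shapes below are hypothetical, not Dispatcharr's actual schema; the point is that a CharField forces every value through string coercion, while a JSONField preserves native types across serialization.

```python
import json

# Legacy shape (CharField): every value is a string, callers coerce manually.
legacy_row = {"key": "retention_days", "value": "30"}
retention = int(legacy_row["value"])  # per-call-site cast

# JSONField shape (hypothetical grouped setting): native types are stored.
json_row = {
    "key": "stream_settings",
    "value": {"buffer_size": 188, "auto_reconnect": True},
}

# Round-tripping through JSON keeps ints and booleans intact,
# so serializers no longer need per-field type handling.
restored = json.loads(json.dumps(json_row["value"]))
```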
SergeantPanda
4bfdd15b37 Bug Fix: Fixed PostgreSQL backup restore not completely cleaning database before restoration. The restore process now drops and recreates the entire public schema before running pg_restore, ensuring a truly clean restore that removes all tables, functions, and other objects not present in the backup file. This prevents leftover database objects from persisting when restoring backups from older branches or versions. Added --no-owner flag to pg_restore to avoid role permission errors when the backup was created by a different PostgreSQL user.
2026-01-12 16:38:20 -06:00
SergeantPanda
2a3d0db670 Enhancement: Loading feedback for all confirmation dialogs: Extended visual loading indicators across all confirmation dialogs throughout the application. Delete, cleanup, and bulk operation dialogs now show an animated dots loader and disabled state during async operations, providing consistent user feedback for backups (restore/delete), channels, EPGs, logos, VOD logos, M3U accounts, streams, users, groups, filters, profiles, batch operations, and network access changes.
2026-01-12 13:53:44 -06:00
SergeantPanda
43636a84d0 Enhancement: Added visual loading indicator to the backup restore confirmation dialog. When clicking "Restore", the button now displays an animated dots loader and becomes disabled, providing clear feedback that the restore operation is in progress. 2026-01-12 13:22:24 -06:00
SergeantPanda
6d5d16d667 Enhancement: Add check for existing NumPy baseline support before reinstalling legacy NumPy to avoid unnecessary installations. 2026-01-12 12:29:54 -06:00
SergeantPanda
f821dabe8e Enhancement: Users can now rename existing channel profiles and create duplicates with automatic channel membership cloning. Each profile action (edit, duplicate, delete) is available in the profile dropdown for quick access. 2026-01-12 11:29:33 -06:00
SergeantPanda
564dceb210 Bug fix: Fixed TV Guide loading overlay not disappearing after navigating from DVR page. The fetchRecordings() function in the channels store was setting isLoading: true on start but never resetting it to false on successful completion, causing the Guide page's loading overlay to remain visible indefinitely when accessed after the DVR page.
2026-01-11 20:51:26 -06:00
SergeantPanda
2e9280cf59
Merge pull request #834 from justinforlenza/stream-profile-argument-parse
Fix: Support quoted stream profile arguments
2026-01-11 20:27:15 -06:00
SergeantPanda
7594ba0a08 changelog: Update changelog for command line parsing PR. 2026-01-11 20:24:59 -06:00
Justin
e8d949db86 replaced standard str.split() with shlex.split() 2026-01-11 20:40:18 -05:00
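The `shlex.split()` change above can be shown with a small sketch. The parameter string is a hypothetical stream-profile argument line, not one taken from the repository; it demonstrates why naive whitespace splitting breaks quoted arguments.

```python
import shlex

# Hypothetical stream-profile parameters with a quoted value
# containing embedded spaces.
params = '-user_agent "VLC/3.0.18 LibVLC/3.0.18" -i {streamUrl}'

# Naive whitespace splitting tears the quoted value apart:
params.split()
# ['-user_agent', '"VLC/3.0.18', 'LibVLC/3.0.18"', '-i', '{streamUrl}']

# shlex honors shell-style quoting and keeps it as one argument:
shlex.split(params)
# ['-user_agent', 'VLC/3.0.18 LibVLC/3.0.18', '-i', '{streamUrl}']
```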
SergeantPanda
a9a433bc5b changelog: Update changelog for test PR submitted. 2026-01-11 19:24:18 -06:00
SergeantPanda
e72e0215cb
Merge pull request #841 from nick4810/tests/frontend-unit-tests
Tests/frontend unit tests
2026-01-11 19:20:37 -06:00
SergeantPanda
b8374fcc68 Refactor/Enhancement: Refactored channel numbering dialogs into a unified CreateChannelModal component that now includes channel profile selection alongside channel number assignment for both single and bulk channel creation. Users can choose to add channels to all profiles, no profiles, or specific profiles with mutual exclusivity between special options ("All Profiles", "None") and specific profile selections. Profile selection defaults to the current table filter for intuitive workflow. 2026-01-11 19:05:07 -06:00
SergeantPanda
6b873be3cf Bug Fix: Fixed bulk and manual channel creation not refreshing channel profile memberships in the UI for all connected clients. WebSocket channels_created event now calls fetchChannelProfiles() to ensure profile membership updates are reflected in real-time for all users without requiring a page refresh. 2026-01-11 17:51:00 -06:00
SergeantPanda
edfa497203 Enhancement: Channel Profile membership control for manual channel creation and bulk operations: Extended the existing channel_profile_ids parameter from POST /api/channels/from-stream/ to also support POST /api/channels/ (manual creation) and bulk creation tasks with the same flexible semantics:
- Omitted parameter (default): Channels are added to ALL profiles (preserves backward compatibility)
- Empty array `[]`: Channels are added to NO profiles
- Sentinel value `[0]`: Channels are added to ALL profiles (explicit)
- Specific IDs `[1, 2, ...]`: Channels are added only to the specified profiles

This allows API consumers to control profile membership across all channel creation methods without requiring all channels to be added to every profile by default.
2026-01-11 17:31:15 -06:00
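The `channel_profile_ids` semantics described above can be sketched as a small resolver. This is an illustrative helper under assumed names, not Dispatcharr's actual implementation; only the documented omitted/empty/`[0]`/specific-IDs behavior is taken from the commit message.

```python
ALL_PROFILES_SENTINEL = 0  # documented sentinel meaning "all profiles"

def resolve_profile_ids(channel_profile_ids, all_profile_ids):
    """Return the profile IDs a newly created channel should join."""
    if channel_profile_ids is None:
        # Parameter omitted: add to ALL profiles (backward compatible).
        return list(all_profile_ids)
    if channel_profile_ids == []:
        # Empty array: add to NO profiles.
        return []
    if channel_profile_ids == [ALL_PROFILES_SENTINEL]:
        # Explicit sentinel [0]: add to ALL profiles.
        return list(all_profile_ids)
    # Specific IDs: keep only ones that actually exist.
    known = set(all_profile_ids)
    return [pid for pid in channel_profile_ids if pid in known]
```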
Nick Sandstrom
0242eb69ee Updated tests for mocked regex 2026-01-10 20:22:36 -08:00
Nick Sandstrom
93f74c9d91 Squashed commit of the following:
commit df18a89d0562edc8fd8fb5bc4cac702aefb5272c
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sat Jan 10 19:18:23 2026 -0800

    Updated tests

commit 90240344b89717fbad0e16fe209dbf00c567b1a8
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Jan 4 03:18:41 2026 -0800

    Updated tests

commit 525b7cb32bc8d235613706d6795795a0177ea24b
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Jan 4 03:18:31 2026 -0800

    Extracted component and util logic

commit e54ea2c3173c0ce3cfb0a2d70d76fdd0a66accc8
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 31 11:55:40 2025 -0800

    Updated tests

commit 5cbe164cb9818d8eab607af037da5faee2c1556f
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 31 11:55:14 2025 -0800

    Minor changes

    Exporting UiSettingsForm as default
    Reverted admin level type check

commit f9ab0d2a06091a2eed3ee6f34268c81bfd746f1e
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 30 23:31:29 2025 -0800

    Extracted component and util logic

commit a705a4db4a32d0851d087a984111837a0a83f722
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Dec 28 00:47:29 2025 -0800

    Updated tests

commit a72c6720a3980d0f279edf050b6b51eaae11cdbd
Merge: e8dcab6f 43525ca3
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Dec 28 00:04:24 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit e8dcab6f832570cb986f114cfa574db4994b3aab
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sat Dec 27 22:35:59 2025 -0800

    Updated tests

commit 0fd230503844fba0c418ab0a03c46dc878697a55
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sat Dec 27 22:35:53 2025 -0800

    Added plugins store

commit d987f2de72272f24e26b1ed5bc04bb5c83033868
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sat Dec 27 22:35:43 2025 -0800

    Extracted component and util logic

commit 5a3138370a468a99c9f1ed0a36709a173656d809
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 24 23:13:07 2025 -0800

    Lazy-loading button modals

commit ac6945b5b55e0e16d050d4412a20c82f19250c4b
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 24 22:41:51 2025 -0800

    Extracted notification util

commit befe159fc06b67ee415f7498b5400fee0dc82528
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 24 22:28:12 2025 -0800

    Extracted component and util logic

commit ec10a3a4200a0c94cae29691a9fe06e5c4317bb7
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 24 22:22:09 2025 -0800

    Updated tests

commit c1c7214c8589c0ce7645ea24418d9dd978ac8c1f
Merge: eba6dce7 9c9cbab9
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 23 12:41:25 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit eba6dce786495e352d4696030500db41d028036e
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Dec 21 10:12:19 2025 -0800

    Updated style props

commit 2024b0b267b849a5f100e5543b9188e8ad6dd3d9
Merge: b3700956 1029eb5b
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Dec 21 09:27:21 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit b3700956a4c2f473f1e977826f9537d27ea018ae
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Thu Dec 18 07:45:36 2025 -0800

    Reverted Channels change

commit 137cbb02473b7f2f41488601e3b64e5ff45ac656
Merge: 644ed001 2a0df81c
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 17 13:36:05 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit 644ed00196c41eaa44df1b98236b7e5cc3124d82
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 17 13:29:13 2025 -0800

    Updated tests

commit c62d1bd0534aa19be99b8f87232ba872420111a0
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 14:12:31 2025 -0800

    Updated tests

commit 0cc0ee31d5ad84c59d8eba9fc4424f118f5e0ee2
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 13:44:55 2025 -0800

    Extracted component and util logic

commit 25d1b112af250b5ccebb1006511bff8e4387fc76
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 13:44:11 2025 -0800

    Added correct import for Text component

commit d8a04c6c09edf158220d3073939c9fb60069745c
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 13:43:55 2025 -0800

    Fixed component syntax

commit 59e35d3a4d0da8ed8476560cedacadf76162ea43
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 13:43:39 2025 -0800

    Fixed cache_url fallback

commit d2a170d2efd3d2b0e6078c9eebeb8dcea237be3b
Merge: b8f7e435 6c1b0f9a
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 12:00:45 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit b8f7e4358a23f2e3a902929b57ab7a7d115241c5
Merge: 5b12c68a d97f0c90
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Mon Dec 15 07:42:06 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit 5b12c68ab8ce429adc8d1355632aa411007d365b
Merge: eff58126 c63cb75b
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Mon Dec 8 16:56:14 2025 -0800

    Merge branch 'enhancement/unit-tests' into stage

commit eff58126fb6aba4ebe9a0c67eee65773bffb8ae9
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Mon Dec 8 16:49:43 2025 -0800

    Update .gitignore

commit c63cb75b8cad204d48a392a28d8a5bdf8c270496
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Mon Dec 8 16:28:03 2025 -0800

    Added unit tests for pages

commit 75306a6181ddeb2eaeb306387ba2b44c7fcfd5e3
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Mon Dec 8 16:27:19 2025 -0800

    Added Actions workflow
2026-01-10 19:36:23 -08:00
Nick Sandstrom
e2e6f61dee Merge remote-tracking branch 'upstream/dev' into tests/frontend-unit-tests 2026-01-10 19:35:40 -08:00
SergeantPanda
719a975210 Enhancement: Visual stale indicators for streams and groups: Added is_stale field to Stream and both is_stale and last_seen fields to ChannelGroupM3UAccount models to track items in their retention grace period. Stale groups display with orange buttons and a warning tooltip, while stale streams show with a red background color matching the visual treatment of empty channels.
2026-01-09 14:57:07 -06:00
SergeantPanda
a84553d15c Enhancement: Stale status indicators for streams and groups: Added is_stale field to both Stream and ChannelGroupM3UAccount models to track items in their grace period (seen in previous refresh but not current). 2026-01-09 13:53:01 -06:00
SergeantPanda
cc9d38212e Enhancement: Groups now follow the same stale retention logic as streams, using the account's stale_stream_days setting. Groups that temporarily disappear from an M3U source are retained for the configured retention period instead of being immediately deleted, preserving user settings and preventing data loss when providers temporarily remove/re-add groups. (Closes #809) 2026-01-09 12:03:55 -06:00
SergeantPanda
caf56a59f3 Bug Fix: Fixed manual channel creation not adding channels to channel profiles. Manually created channels are now added to the selected profile if one is active, or to all profiles if "All" is selected, matching the behavior of channels created from streams.
2026-01-09 10:41:04 -06:00
SergeantPanda
ba5aa861e3 Bug Fix: Fixed Channel Profile filter incorrectly applying profile membership filtering even when "Show Disabled" was enabled, preventing all channels from being displayed. Profile filter now only applies when hiding disabled channels. (Fixes #825) 2026-01-09 10:26:09 -06:00
SergeantPanda
312fa11cfb More cleanup of base image.
2026-01-08 14:53:25 -06:00
SergeantPanda
ad334347a9 More cleanup of base image.
2026-01-08 14:52:58 -06:00
SergeantPanda
74a9d3d0cb
Merge pull request #823 from patchy8736/uwsgi-socket-timeout
Bug Fix: Add socket-timeout to uWSGI production config to prevent VOD stream timeouts/streams vanishing from stats page
2026-01-08 13:36:18 -06:00
SergeantPanda
fa6315de33 changelog: Fix VOD streams disappearing from stats page during playback by updating uWSGI config to prevent premature cleanup 2026-01-08 13:35:38 -06:00
SergeantPanda
d6c1a2369b Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/patchy8736/823 2026-01-08 13:27:42 -06:00
SergeantPanda
72d9125c36
Merge pull request #811 from nick4810/enhancement/component-cleanup
Extracted component and util logic
2026-01-08 13:07:41 -06:00
SergeantPanda
6e74c370cb changelog: Document refactor of Stats and VOD pages for improved readability and maintainability 2026-01-08 13:06:30 -06:00
SergeantPanda
10447f8c86 Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/nick4810/811 2026-01-08 11:50:39 -06:00
SergeantPanda
1a2d39de91 Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into dev 2026-01-08 10:28:15 -06:00
SergeantPanda
f389420251 Add optional legacy NumPy support for older CPUs in Docker configurations 2026-01-08 10:27:58 -06:00
SergeantPanda
3f6eff96fc changelog: Update changelog for USE_LEGACY_NUMPY support 2026-01-08 10:26:08 -06:00
SergeantPanda
02faa1a4a7
Merge pull request #827 from Dispatcharr/numpy-none-baseline
Enhance Docker setup for legacy NumPy support and streamline installa…
2026-01-08 10:04:58 -06:00
SergeantPanda
c5a3a2af81 Enhance Docker setup for legacy NumPy support and streamline installation process
2026-01-08 10:02:29 -06:00
SergeantPanda
01370e8892 Bug fix: Fixed duplicate key constraint violations by treating TMDB/IMDB ID values of 0 or '0' as invalid (some providers use this to indicate "no ID"), converting them to NULL to prevent multiple items from incorrectly sharing the same ID. (Fixes #813)
2026-01-07 16:38:09 -06:00
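The ID normalization above can be sketched as a tiny helper. The function name is hypothetical; the rule it encodes (treat `0` or `'0'` as "no ID" and store NULL) is from the commit message.

```python
def normalize_provider_id(raw):
    """Map provider 'no ID' markers (0, '0', None) to None.

    Some providers send 0/'0' for TMDB/IMDB IDs to mean "no ID";
    storing those verbatim makes unrelated items collide on a
    unique column, so they are converted to None (NULL) instead.
    """
    if raw in (None, 0, '0'):
        return None
    return str(raw)
```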
SergeantPanda
8cbb55c44b Bug Fix: Fixed Channels table EPG column showing "Not Assigned" on initial load for users with large EPG datasets. Added tvgsLoaded flag to EPG store to track when EPG data has finished loading, ensuring the table waits for EPG data before displaying. EPG cells now show animated skeleton placeholders while loading instead of incorrectly showing "Not Assigned". (Fixes #810) 2026-01-07 16:08:53 -06:00
patchy8736
0441dd7b7e Bug Fix: Add socket-timeout to uWSGI production config to prevent VOD stream timeouts during client buffering
The production uWSGI configuration (docker/uwsgi.ini) was missing the socket-timeout directive, causing it to default to 4 seconds. When clients (e.g., VLC) buffer VOD streams and temporarily stop reading from the HTTP socket, uWSGI's write operations timeout after 4 seconds, triggering premature stream cleanup and causing VOD streams to disappear from the stats page.

The fix adds socket-timeout = 600 to match the existing http-timeout = 600 value, giving uWSGI sufficient time to wait for clients to resume reading from buffered sockets. This prevents:
- uwsgi_response_write_body_do() TIMEOUT !!! errors in logs
- GeneratorExit exceptions and premature stream cleanup
- VOD streams vanishing from the stats page when clients buffer

The debug config already had socket-timeout = 3600, which is why the issue wasn't observed in debug mode. This fix aligns production behavior with the debug config while maintaining the production-appropriate 10-minute timeout duration.
2026-01-07 14:10:17 +01:00
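The fix described above amounts to one directive in the production uWSGI config. A minimal sketch of the relevant section of docker/uwsgi.ini, assuming the existing http-timeout value quoted in the commit message:

```ini
; docker/uwsgi.ini (relevant directives only)
http-timeout   = 600
socket-timeout = 600  ; previously unset, so uWSGI defaulted to 4 seconds
```

With both timeouts aligned at 600 seconds, uWSGI waits for a buffering client to resume reading instead of aborting the response write after the 4-second default.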
SergeantPanda
30d093a2d3 Fixed bulk_create and bulk_update errors during VOD content refresh by pre-checking object existence with optimized bulk queries (3 queries total instead of N per batch) before creating new objects. This ensures all movie/series objects have primary keys before relation operations, preventing "prohibited to prevent data loss due to unsaved related object" errors. (Fixes #813)
2026-01-06 16:12:50 -06:00
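The pre-check pattern above can be sketched generically: one bulk "which keys already exist?" lookup partitions a batch into creates and updates, so every object has a primary key before relation rows are written. Names and item shapes here are illustrative, not the actual Django models.

```python
def partition_for_bulk(items, existing_keys):
    """Split incoming items into (to_create, to_update) by natural key.

    `existing_keys` comes from a single bulk query (e.g. values_list
    over the incoming keys), replacing a per-row existence check.
    """
    existing = set(existing_keys)
    to_create = [it for it in items if it['tmdb_id'] not in existing]
    to_update = [it for it in items if it['tmdb_id'] in existing]
    return to_create, to_update
```

After `bulk_create(to_create)` and `bulk_update(to_update)`, all rows are saved and relation operations can safely reference their primary keys.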
SergeantPanda
518c93c398 Enhance Docker setup for legacy NumPy support and streamline installation process 2026-01-06 14:07:37 -06:00
SergeantPanda
cc09c89156
Merge pull request #812 from Dispatcharr/React-Hooke-Form
Refactor forms to use react-hook-form and Yup validation
2026-01-04 21:03:10 -06:00
Nick Sandstrom
21c0758cc9 Extracted component and util logic 2026-01-04 18:51:09 -08:00
SergeantPanda
f664910bf4 changelog: Add RHF and removeTrailingZeros changes. 2026-01-04 20:49:39 -06:00
SergeantPanda
bc19bf8629 Remove "removeTrailingZeros" prop from the Channel Edit Form 2026-01-04 20:45:52 -06:00
SergeantPanda
16bbc1d875 Refactor forms to use react-hook-form and Yup for validation
- Replaced Formik with react-hook-form in Logo, M3UGroupFilter, M3UProfile, Stream, StreamProfile, and UserAgent components.
- Integrated Yup for schema validation in all updated forms.
- Updated form submission logic to accommodate new form handling methods.
- Adjusted state management and error handling to align with react-hook-form's API.
- Ensured compatibility with existing functionality while improving code readability and maintainability.
2026-01-04 20:40:16 -06:00
SergeantPanda
9612a67412 Change: VOD upstream read timeout reduced from 30 seconds to 10 seconds to minimize lock hold time when clients disconnect during connection phase
2026-01-04 15:21:22 -06:00
SergeantPanda
4e65ffd113 Bug fix: Fixed VOD profile connection count not being decremented when stream connection fails (timeout, 404, etc.), preventing profiles from reaching capacity limits and rejecting valid stream requests 2026-01-04 15:00:08 -06:00
SergeantPanda
6031885537 Bug Fix: M3UMovieRelation.get_stream_url() and M3UEpisodeRelation.get_stream_url() to use XC client's _normalize_url() method instead of simple rstrip('/'). This properly handles malformed M3U account URLs (e.g., containing /player_api.php or query parameters) before constructing VOD stream endpoints, matching behavior of live channel URL building. (Closes #722) 2026-01-04 14:36:03 -06:00
SergeantPanda
8ae1a98a3b Bug Fix: Fixed onboarding message appearing in the Channels Table when filtered results are empty. The onboarding message now only displays when there are no channels created at all, not when channels exist but are filtered out by current filters. 2026-01-04 14:05:30 -06:00
SergeantPanda
48bdcfbd65 Bug fix: Release workflow Docker tagging: Fixed issue where latest and version tags (e.g., 0.16.0) were creating separate manifests instead of pointing to the same image digest, which caused old latest tags to become orphaned/untagged after new releases. Now creates a single multi-arch manifest with both tags, maintaining proper tag relationships and download statistics visibility on GitHub. 2026-01-04 12:05:01 -06:00
125 changed files with 15489 additions and 3403 deletions

.github/workflows/frontend-tests.yml (new file, 41 additions)

@@ -0,0 +1,41 @@
name: Frontend Tests

on:
  push:
    branches: [main, dev]
    paths:
      - 'frontend/**'
      - '.github/workflows/frontend-tests.yml'
  pull_request:
    branches: [main, dev]
    paths:
      - 'frontend/**'
      - '.github/workflows/frontend-tests.yml'

jobs:
  test:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./frontend
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '24'
          cache: 'npm'
          cache-dependency-path: './frontend/package-lock.json'
      - name: Install dependencies
        run: npm ci
      # - name: Run linter
      #   run: npm run lint
      - name: Run tests
        run: npm test


@@ -184,13 +184,13 @@ jobs:
echo "Creating multi-arch manifest for ${OWNER}/${REPO}"
# GitHub Container Registry manifests
# latest tag
# Create one manifest with both latest and version tags
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=latest" \
--annotation "index:org.opencontainers.image.version=${VERSION}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
@@ -200,9 +200,11 @@ jobs:
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=Dispatcharr version: ${VERSION} Build date: ${TIMESTAMP}" \
--tag ghcr.io/${OWNER}/${REPO}:latest \
ghcr.io/${OWNER}/${REPO}:latest-amd64 ghcr.io/${OWNER}/${REPO}:latest-arm64
--tag ghcr.io/${OWNER}/${REPO}:${VERSION} \
ghcr.io/${OWNER}/${REPO}:${VERSION}-amd64 ghcr.io/${OWNER}/${REPO}:${VERSION}-arm64
# version tag
# Docker Hub manifests
# Create one manifest with both latest and version tags
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
@@ -217,43 +219,7 @@ jobs:
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=Dispatcharr version: ${VERSION} Build date: ${TIMESTAMP}" \
--tag ghcr.io/${OWNER}/${REPO}:${VERSION} \
ghcr.io/${OWNER}/${REPO}:${VERSION}-amd64 ghcr.io/${OWNER}/${REPO}:${VERSION}-arm64
# Docker Hub manifests
# latest tag
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=latest" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=Dispatcharr version: ${VERSION} Build date: ${TIMESTAMP}" \
--tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:latest \
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:latest-amd64 docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:latest-arm64
# version tag
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${VERSION}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=Dispatcharr version: ${VERSION} Build date: ${TIMESTAMP}" \
--tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION} \
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION}-amd64 docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION}-arm64

3
.gitignore vendored
View file

@@ -18,4 +18,5 @@ dump.rdb
debugpy*
uwsgi.sock
package-lock.json
models
models
.idea

View file

@@ -7,6 +7,61 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Changed
- Frontend tests GitHub workflow now uses Node.js 24 (matching Dockerfile) and runs on both `main` and `dev` branch pushes and pull requests for comprehensive CI coverage.
### Fixed
- Fixed NumPy baseline detection in Docker entrypoint. Now calls `numpy.show_config()` directly with case-insensitive grep instead of incorrectly wrapping the output.
- Fixed SettingsUtils frontend tests for new grouped settings architecture. Updated test suite to properly verify grouped JSON settings (stream_settings, dvr_settings, etc.) instead of individual CharField settings, including tests for type conversions, array-to-CSV transformations, and special handling of proxy_settings and network_access.
## [0.17.0] - 2026-01-13
### Added
- Loading feedback for all confirmation dialogs: Extended visual loading indicators across all confirmation dialogs throughout the application. Delete, cleanup, and bulk operation dialogs now show an animated dots loader and disabled state during async operations, providing consistent user feedback for backups (restore/delete), channels, EPGs, logos, VOD logos, M3U accounts, streams, users, groups, filters, profiles, batch operations, and network access changes.
- Channel profile edit and duplicate functionality: Users can now rename existing channel profiles and create duplicates with automatic channel membership cloning. Each profile action (edit, duplicate, delete) is available in the profile dropdown for quick access.
- ProfileModal component extracted for improved code organization and maintainability of channel profile management operations.
- Frontend unit tests for pages and utilities: Added comprehensive unit test coverage for frontend components within pages/ and JS files within utils/, along with a GitHub Actions workflow (`frontend-tests.yml`) to automatically run tests on commits and pull requests - Thanks [@nick4810](https://github.com/nick4810)
- Channel Profile membership control for manual channel creation and bulk operations: Extended the existing `channel_profile_ids` parameter from `POST /api/channels/from-stream/` to also support `POST /api/channels/` (manual creation) and bulk creation tasks with the same flexible semantics:
- Omitted parameter (default): Channels are added to ALL profiles (preserves backward compatibility)
- Empty array `[]`: Channels are added to NO profiles
- Sentinel value `[0]`: Channels are added to ALL profiles (explicit)
- Specific IDs `[1, 2, ...]`: Channels are added only to the specified profiles
This allows API consumers to control profile membership across all channel creation methods without requiring all channels to be added to every profile by default.
- Channel profile selection in creation modal: Users can now choose which profiles to add channels to when creating channels from streams (both single and bulk operations). Options include adding to all profiles, no profiles, or specific profiles with mutual exclusivity between special options ("All Profiles", "None") and specific profile selections. Profile selection defaults to the current table filter for intuitive workflow.
- Group retention policy for M3U accounts: Groups now follow the same stale retention logic as streams, using the account's `stale_stream_days` setting. Groups that temporarily disappear from an M3U source are retained for the configured retention period instead of being immediately deleted, preserving user settings and preventing data loss when providers temporarily remove/re-add groups. (Closes #809)
- Visual stale indicators for streams and groups: Added `is_stale` field to Stream and both `is_stale` and `last_seen` fields to ChannelGroupM3UAccount models to track items in their retention grace period. Stale groups display with orange buttons and a warning tooltip, while stale streams show with a red background color matching the visual treatment of empty channels.
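The `channel_profile_ids` semantics listed in the Added entry above can be condensed into a small resolver. This is only an illustrative sketch of the documented behavior; `resolve_profile_ids` and `ALL_PROFILE_IDS` are hypothetical names, not part of the Dispatcharr codebase:

```python
# Hypothetical sketch of the documented channel_profile_ids semantics.
ALL_PROFILE_IDS = [1, 2, 3]  # stand-in for "every existing profile"

def resolve_profile_ids(channel_profile_ids):
    """Map the request parameter to the concrete profile IDs a new channel joins."""
    if channel_profile_ids is None:              # omitted -> ALL profiles (default)
        return list(ALL_PROFILE_IDS)
    if not isinstance(channel_profile_ids, list):
        channel_profile_ids = [channel_profile_ids]  # single ID normalized to a list
    if len(channel_profile_ids) == 0:            # [] -> NO profiles
        return []
    if 0 in channel_profile_ids:                 # sentinel [0] -> ALL profiles (explicit)
        return list(ALL_PROFILE_IDS)
    return list(channel_profile_ids)             # only the specified profiles
```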
### Changed
- Settings architecture refactored to use grouped JSON storage: Migrated from individual CharField settings to grouped JSONField settings for improved performance, maintainability, and type safety. Settings are now organized into logical groups (stream_settings, dvr_settings, backup_settings, system_settings, proxy_settings, network_access) with automatic migration handling. Backend provides helper methods (`get_stream_settings()`, `get_default_user_agent_id()`, etc.) for easy access. Frontend simplified by removing complex key mapping logic and standardizing on underscore-based field names throughout.
- Docker setup enhanced for legacy CPU support: Added `USE_LEGACY_NUMPY` environment variable to enable custom-built NumPy with no CPU baseline, allowing Dispatcharr to run on older CPUs (circa 2009) that lack support for newer baseline CPU features. When set to `true`, the entrypoint script will install the legacy NumPy build instead of the standard distribution. (Fixes #805)
- VOD upstream read timeout reduced from 30 seconds to 10 seconds to minimize lock hold time when clients disconnect during connection phase
- Form management refactored across application: Migrated Channel, Stream, M3U Profile, Stream Profile, Logo, and User Agent forms from Formik to React Hook Form (RHF) with Yup validation for improved form handling, better validation feedback, and enhanced code maintainability
- Stats and VOD pages refactored for clearer separation of concerns: extracted Stream/VOD connection cards (StreamConnectionCard, VodConnectionCard, VODCard, SeriesCard), moved page logic into dedicated utils, and lazy-loaded heavy components with ErrorBoundary fallbacks to improve readability and maintainability - Thanks [@nick4810](https://github.com/nick4810)
- Channel creation modal refactored: Extracted and unified the channel numbering dialogs from StreamsTable into a dedicated CreateChannelModal component that handles both single and bulk channel creation with a cleaner, more maintainable implementation and integrated profile selection controls.
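The grouped-JSON read pattern behind the settings refactor described above (stored values win, missing or malformed values fall back to defaults, with type coercion) can be sketched in plain Python. The `DEFAULTS` keys here are illustrative, not the project's full schema:

```python
# Hedged sketch: reading one grouped-settings JSON value with fallback to defaults.
DEFAULTS = {
    "schedule_enabled": True,
    "schedule_frequency": "daily",
    "schedule_time": "03:00",
    "retention_count": 3,
}

def read_group(stored_value):
    """Merge a stored grouped-JSON value over DEFAULTS, tolerating corruption."""
    stored = stored_value if isinstance(stored_value, dict) else {}
    merged = {**DEFAULTS, **stored}
    merged["retention_count"] = int(merged["retention_count"])    # coerce numeric type
    merged["schedule_enabled"] = bool(merged["schedule_enabled"])  # coerce boolean type
    return merged
```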
### Fixed
- Fixed bulk channel profile membership update endpoint silently ignoring channels without existing membership records. The endpoint now creates missing memberships automatically (matching single-channel endpoint behavior), validates that all channel IDs exist before processing, and provides detailed response feedback including counts of updated vs. created memberships. Added comprehensive Swagger documentation with request/response schemas.
- Fixed bulk channel edit endpoint crashing with `ValueError: Field names must be given to bulk_update()` when the first channel in the update list had no actual field changes. The endpoint now collects all unique field names from all channels being updated instead of only looking at the first channel, properly handling cases where different channels update different fields or when some channels have no changes - Thanks [@mdellavo](https://github.com/mdellavo) (Fixes #804)
- Fixed PostgreSQL backup restore not completely cleaning database before restoration. The restore process now drops and recreates the entire `public` schema before running `pg_restore`, ensuring a truly clean restore that removes all tables, functions, and other objects not present in the backup file. This prevents leftover database objects from persisting when restoring backups from older branches or versions. Added `--no-owner` flag to `pg_restore` to avoid role permission errors when the backup was created by a different PostgreSQL user.
- Fixed TV Guide loading overlay not disappearing after navigating from DVR page. The `fetchRecordings()` function in the channels store was setting `isLoading: true` on start but never resetting it to `false` on successful completion, causing the Guide page's loading overlay to remain visible indefinitely when accessed after the DVR page.
- Fixed stream profile parameters not properly handling quoted arguments. Switched from basic `.split()` to `shlex.split()` for parsing command-line parameters, allowing proper handling of multi-word arguments in quotes (e.g., OAuth tokens in HTTP headers like `"--twitch-api-header=Authorization=OAuth token123"`). This ensures external streaming tools like Streamlink and FFmpeg receive correctly formatted arguments when using stream profiles with complex parameters - Thanks [@justinforlenza](https://github.com/justinforlenza) (Fixes #833)
- Fixed bulk and manual channel creation not refreshing channel profile memberships in the UI for all connected clients. WebSocket `channels_created` event now calls `fetchChannelProfiles()` to ensure profile membership updates are reflected in real-time for all users without requiring a page refresh.
- Fixed Channel Profile filter incorrectly applying profile membership filtering even when "Show Disabled" was enabled, preventing all channels from being displayed. Profile filter now only applies when hiding disabled channels. (Fixes #825)
- Fixed manual channel creation not adding channels to channel profiles. Manually created channels are now added to the selected profile if one is active, or to all profiles if "All" is selected, matching the behavior of channels created from streams.
- Fixed VOD streams disappearing from stats page during playback by adding `socket-timeout = 600` to production uWSGI config. The missing directive caused uWSGI to use its default 4-second timeout, triggering premature cleanup when clients buffered content. Now matches the existing `http-timeout = 600` value and prevents timeout errors during normal client buffering - Thanks [@patchy8736](https://github.com/patchy8736)
- Fixed Channels table EPG column showing "Not Assigned" on initial load for users with large EPG datasets. Added `tvgsLoaded` flag to EPG store to track when EPG data has finished loading, ensuring the table waits for EPG data before displaying. EPG cells now show animated skeleton placeholders while loading instead of incorrectly showing "Not Assigned". (Fixes #810)
- Fixed VOD profile connection count not being decremented when stream connection fails (timeout, 404, etc.), preventing profiles from reaching capacity limits and rejecting valid stream requests
- Fixed React warning in Channel form by removing invalid `removeTrailingZeros` prop from NumberInput component
- Release workflow Docker tagging: Fixed issue where `latest` and version tags (e.g., `0.16.0`) were creating separate manifests instead of pointing to the same image digest, which caused old `latest` tags to become orphaned/untagged after new releases. Now creates a single multi-arch manifest with both tags, maintaining proper tag relationships and download statistics visibility on GitHub.
- Fixed onboarding message appearing in the Channels Table when filtered results are empty. The onboarding message now only displays when there are no channels created at all, not when channels exist but are filtered out by current filters.
- Fixed `M3UMovieRelation.get_stream_url()` and `M3UEpisodeRelation.get_stream_url()` to use XC client's `_normalize_url()` method instead of simple `rstrip('/')`. This properly handles malformed M3U account URLs (e.g., containing `/player_api.php` or query parameters) before constructing VOD stream endpoints, matching behavior of live channel URL building. (Closes #722)
- Fixed bulk_create and bulk_update errors during VOD content refresh by pre-checking object existence with optimized bulk queries (3 queries total instead of N per batch) before creating new objects. This ensures all movie/series objects have primary keys before relation operations, preventing "prohibited to prevent data loss due to unsaved related object" errors. Additionally fixed duplicate key constraint violations by treating TMDB/IMDB ID values of `0` or `'0'` as invalid (some providers use this to indicate "no ID"), converting them to NULL to prevent multiple items from incorrectly sharing the same ID. (Fixes #813)
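The quoted-argument fix described in the stream profile entry above (`shlex.split()` versus basic `.split()`) is easy to demonstrate; the parameter string below is a made-up example in the spirit of the OAuth-header case:

```python
import shlex

# A stream profile parameter string with a quoted multi-word argument.
params = '-headers "Authorization=OAuth token123" -loglevel info'

naive = params.split()        # breaks the quoted value into two tokens
proper = shlex.split(params)  # keeps "Authorization=OAuth token123" intact
```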
## [0.16.0] - 2026-01-04
### Added

View file

@ -9,60 +9,47 @@ logger = logging.getLogger(__name__)
BACKUP_SCHEDULE_TASK_NAME = "backup-scheduled-task"
SETTING_KEYS = {
"enabled": "backup_schedule_enabled",
"frequency": "backup_schedule_frequency",
"time": "backup_schedule_time",
"day_of_week": "backup_schedule_day_of_week",
"retention_count": "backup_retention_count",
"cron_expression": "backup_schedule_cron_expression",
}
DEFAULTS = {
"enabled": True,
"frequency": "daily",
"time": "03:00",
"day_of_week": 0, # Sunday
"schedule_enabled": True,
"schedule_frequency": "daily",
"schedule_time": "03:00",
"schedule_day_of_week": 0, # Sunday
"retention_count": 3,
"cron_expression": "",
"schedule_cron_expression": "",
}
def _get_setting(key: str, default=None):
"""Get a backup setting from CoreSettings."""
def _get_backup_settings():
"""Get all backup settings from CoreSettings grouped JSON."""
try:
setting = CoreSettings.objects.get(key=SETTING_KEYS[key])
value = setting.value
if key == "enabled":
return value.lower() == "true"
elif key in ("day_of_week", "retention_count"):
return int(value)
return value
settings_obj = CoreSettings.objects.get(key="backup_settings")
return settings_obj.value if isinstance(settings_obj.value, dict) else DEFAULTS.copy()
except CoreSettings.DoesNotExist:
return default if default is not None else DEFAULTS.get(key)
return DEFAULTS.copy()
def _set_setting(key: str, value) -> None:
"""Set a backup setting in CoreSettings."""
str_value = str(value).lower() if isinstance(value, bool) else str(value)
CoreSettings.objects.update_or_create(
key=SETTING_KEYS[key],
defaults={
"name": f"Backup {key.replace('_', ' ').title()}",
"value": str_value,
},
def _update_backup_settings(updates: dict) -> None:
"""Update backup settings in the grouped JSON."""
obj, created = CoreSettings.objects.get_or_create(
key="backup_settings",
defaults={"name": "Backup Settings", "value": DEFAULTS.copy()}
)
current = obj.value if isinstance(obj.value, dict) else {}
current.update(updates)
obj.value = current
obj.save()
def get_schedule_settings() -> dict:
"""Get all backup schedule settings."""
settings = _get_backup_settings()
return {
"enabled": _get_setting("enabled"),
"frequency": _get_setting("frequency"),
"time": _get_setting("time"),
"day_of_week": _get_setting("day_of_week"),
"retention_count": _get_setting("retention_count"),
"cron_expression": _get_setting("cron_expression"),
"enabled": bool(settings.get("schedule_enabled", DEFAULTS["schedule_enabled"])),
"frequency": str(settings.get("schedule_frequency", DEFAULTS["schedule_frequency"])),
"time": str(settings.get("schedule_time", DEFAULTS["schedule_time"])),
"day_of_week": int(settings.get("schedule_day_of_week", DEFAULTS["schedule_day_of_week"])),
"retention_count": int(settings.get("retention_count", DEFAULTS["retention_count"])),
"cron_expression": str(settings.get("schedule_cron_expression", DEFAULTS["schedule_cron_expression"])),
}
@@ -90,10 +77,22 @@ def update_schedule_settings(data: dict) -> dict:
if count < 0:
raise ValueError("retention_count must be >= 0")
# Update settings
for key in ("enabled", "frequency", "time", "day_of_week", "retention_count", "cron_expression"):
if key in data:
_set_setting(key, data[key])
# Update settings with proper key names
updates = {}
if "enabled" in data:
updates["schedule_enabled"] = bool(data["enabled"])
if "frequency" in data:
updates["schedule_frequency"] = str(data["frequency"])
if "time" in data:
updates["schedule_time"] = str(data["time"])
if "day_of_week" in data:
updates["schedule_day_of_week"] = int(data["day_of_week"])
if "retention_count" in data:
updates["retention_count"] = int(data["retention_count"])
if "cron_expression" in data:
updates["schedule_cron_expression"] = str(data["cron_expression"])
_update_backup_settings(updates)
# Sync the periodic task
_sync_periodic_task()

View file

@@ -72,17 +72,47 @@ def _dump_postgresql(output_file: Path) -> None:
logger.debug(f"pg_dump output: {result.stderr}")
def _clean_postgresql_schema() -> None:
"""Drop and recreate the public schema to ensure a completely clean restore."""
logger.info("[PG_CLEAN] Dropping and recreating public schema...")
# Commands to drop and recreate schema
sql_commands = "DROP SCHEMA IF EXISTS public CASCADE; CREATE SCHEMA public; GRANT ALL ON SCHEMA public TO public;"
cmd = [
"psql",
*_get_pg_args(),
"-c", sql_commands,
]
result = subprocess.run(
cmd,
env=_get_pg_env(),
capture_output=True,
text=True,
)
if result.returncode != 0:
logger.error(f"[PG_CLEAN] Failed to clean schema: {result.stderr}")
raise RuntimeError(f"Failed to clean PostgreSQL schema: {result.stderr}")
logger.info("[PG_CLEAN] Schema cleaned successfully")
def _restore_postgresql(dump_file: Path) -> None:
"""Restore PostgreSQL database using pg_restore."""
logger.info("[PG_RESTORE] Starting pg_restore...")
logger.info(f"[PG_RESTORE] Dump file: {dump_file}")
# Drop and recreate schema to ensure a completely clean restore
_clean_postgresql_schema()
pg_args = _get_pg_args()
logger.info(f"[PG_RESTORE] Connection args: {pg_args}")
cmd = [
"pg_restore",
"--clean", # Clean (drop) database objects before recreating
"--no-owner", # Skip ownership commands (we already created schema)
*pg_args,
"-v", # Verbose
str(dump_file),

View file

@@ -9,7 +9,8 @@ from drf_yasg import openapi
from django.shortcuts import get_object_or_404, get_list_or_404
from django.db import transaction
from django.db.models import Q
import os, json, requests, logging
import os, json, requests, logging, mimetypes
from django.utils.http import http_date
from urllib.parse import unquote
from apps.accounts.permissions import (
Authenticated,
@@ -130,6 +131,8 @@ class StreamViewSet(viewsets.ModelViewSet):
ordering = ["-name"]
def get_permissions(self):
if self.action == "duplicate":
return [IsAdmin()]
try:
return [perm() for perm in permission_classes_by_action[self.action]]
except KeyError:
@@ -236,12 +239,8 @@ class ChannelGroupViewSet(viewsets.ModelViewSet):
return [Authenticated()]
def get_queryset(self):
"""Add annotation for association counts"""
from django.db.models import Count
return ChannelGroup.objects.annotate(
channel_count=Count('channels', distinct=True),
m3u_account_count=Count('m3u_accounts', distinct=True)
)
"""Return channel groups with prefetched relations for efficient counting"""
return ChannelGroup.objects.prefetch_related('channels', 'm3u_accounts').all()
def update(self, request, *args, **kwargs):
"""Override update to check M3U associations"""
@@ -277,15 +276,20 @@ class ChannelGroupViewSet(viewsets.ModelViewSet):
@action(detail=False, methods=["post"], url_path="cleanup")
def cleanup_unused_groups(self, request):
"""Delete all channel groups with no channels or M3U account associations"""
from django.db.models import Count
from django.db.models import Q, Exists, OuterRef
# Find groups with no channels and no M3U account associations using Exists subqueries
from .models import Channel, ChannelGroupM3UAccount
has_channels = Channel.objects.filter(channel_group_id=OuterRef('pk'))
has_accounts = ChannelGroupM3UAccount.objects.filter(channel_group_id=OuterRef('pk'))
# Find groups with no channels and no M3U account associations
unused_groups = ChannelGroup.objects.annotate(
channel_count=Count('channels', distinct=True),
m3u_account_count=Count('m3u_accounts', distinct=True)
has_channels=Exists(has_channels),
has_accounts=Exists(has_accounts)
).filter(
channel_count=0,
m3u_account_count=0
has_channels=False,
has_accounts=False
)
deleted_count = unused_groups.count()
@@ -386,6 +390,72 @@ class ChannelViewSet(viewsets.ModelViewSet):
ordering_fields = ["channel_number", "name", "channel_group__name"]
ordering = ["-channel_number"]
def create(self, request, *args, **kwargs):
"""Override create to handle channel profile membership"""
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
with transaction.atomic():
channel = serializer.save()
# Handle channel profile membership
# Semantics:
# - Omitted (None): add to ALL profiles (backward compatible default)
# - Empty array []: add to NO profiles
# - Sentinel [0] or 0: add to ALL profiles (explicit)
# - [1,2,...]: add to specified profile IDs only
channel_profile_ids = request.data.get("channel_profile_ids")
if channel_profile_ids is not None:
# Normalize single ID to array
if not isinstance(channel_profile_ids, list):
channel_profile_ids = [channel_profile_ids]
# Determine action based on semantics
if channel_profile_ids is None:
# Omitted -> add to all profiles (backward compatible)
profiles = ChannelProfile.objects.all()
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(channel_profile=profile, channel=channel, enabled=True)
for profile in profiles
])
elif isinstance(channel_profile_ids, list) and len(channel_profile_ids) == 0:
# Empty array -> add to no profiles
pass
elif isinstance(channel_profile_ids, list) and 0 in channel_profile_ids:
# Sentinel 0 -> add to all profiles (explicit)
profiles = ChannelProfile.objects.all()
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(channel_profile=profile, channel=channel, enabled=True)
for profile in profiles
])
else:
# Specific profile IDs
try:
channel_profiles = ChannelProfile.objects.filter(id__in=channel_profile_ids)
if len(channel_profiles) != len(channel_profile_ids):
missing_ids = set(channel_profile_ids) - set(channel_profiles.values_list('id', flat=True))
return Response(
{"error": f"Channel profiles with IDs {list(missing_ids)} not found"},
status=status.HTTP_400_BAD_REQUEST,
)
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(
channel_profile=profile,
channel=channel,
enabled=True
)
for profile in channel_profiles
])
except Exception as e:
return Response(
{"error": f"Error creating profile memberships: {str(e)}"},
status=status.HTTP_400_BAD_REQUEST,
)
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
def get_permissions(self):
if self.action in [
"edit_bulk",
@@ -431,10 +501,15 @@ class ChannelViewSet(viewsets.ModelViewSet):
if channel_profile_id:
try:
profile_id_int = int(channel_profile_id)
filters["channelprofilemembership__channel_profile_id"] = profile_id_int
if show_disabled_param is None:
# Show only enabled channels: channels that have a membership
# record for this profile with enabled=True
# Default is DISABLED (channels without membership are hidden)
filters["channelprofilemembership__channel_profile_id"] = profile_id_int
filters["channelprofilemembership__enabled"] = True
# If show_disabled is True, show all channels (no filtering needed)
except (ValueError, TypeError):
# Ignore invalid profile id values
pass
@@ -546,11 +621,18 @@ class ChannelViewSet(viewsets.ModelViewSet):
# Single bulk_update query instead of individual saves
channels_to_update = [channel for channel, _ in validated_updates]
if channels_to_update:
Channel.objects.bulk_update(
channels_to_update,
fields=list(validated_updates[0][1].keys()),
batch_size=100
)
# Collect all unique field names from all updates
all_fields = set()
for _, validated_data in validated_updates:
all_fields.update(validated_data.keys())
# Only call bulk_update if there are fields to update
if all_fields:
Channel.objects.bulk_update(
channels_to_update,
fields=list(all_fields),
batch_size=100
)
# Return the updated objects (already in memory)
serialized_channels = ChannelSerializer(
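In isolation, the field-union pattern from the hunk above (collect every field changed by any channel before calling `bulk_update`, instead of trusting the first channel's fields) looks like this; a plain-Python sketch with no Django dependency, and `union_of_update_fields` is an illustrative name:

```python
def union_of_update_fields(validated_updates):
    """Gather every field changed by any item so bulk_update gets the full list.

    validated_updates is a list of (object, changed_fields_dict) pairs; an
    empty result means there is nothing to update and bulk_update is skipped.
    """
    all_fields = set()
    for _obj, validated_data in validated_updates:
        all_fields.update(validated_data.keys())
    return sorted(all_fields)
```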
@@ -735,7 +817,7 @@ class ChannelViewSet(viewsets.ModelViewSet):
"channel_profile_ids": openapi.Schema(
type=openapi.TYPE_ARRAY,
items=openapi.Items(type=openapi.TYPE_INTEGER),
description="(Optional) Channel profile ID(s) to add the channel to. Can be a single ID or array of IDs. If not provided, channel is added to all profiles."
description="(Optional) Channel profile ID(s). Behavior: omitted = add to ALL profiles (default); empty array [] = add to NO profiles; [0] = add to ALL profiles (explicit); [1,2,...] = add only to specified profiles."
),
},
),
@@ -828,14 +910,37 @@ class ChannelViewSet(viewsets.ModelViewSet):
channel.streams.add(stream)
# Handle channel profile membership
# Semantics:
# - Omitted (None): add to ALL profiles (backward compatible default)
# - Empty array []: add to NO profiles
# - Sentinel [0] or 0: add to ALL profiles (explicit)
# - [1,2,...]: add to specified profile IDs only
channel_profile_ids = request.data.get("channel_profile_ids")
if channel_profile_ids is not None:
# Normalize single ID to array
if not isinstance(channel_profile_ids, list):
channel_profile_ids = [channel_profile_ids]
if channel_profile_ids:
# Add channel only to the specified profiles
# Determine action based on semantics
if channel_profile_ids is None:
# Omitted -> add to all profiles (backward compatible)
profiles = ChannelProfile.objects.all()
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(channel_profile=profile, channel=channel, enabled=True)
for profile in profiles
])
elif isinstance(channel_profile_ids, list) and len(channel_profile_ids) == 0:
# Empty array -> add to no profiles
pass
elif isinstance(channel_profile_ids, list) and 0 in channel_profile_ids:
# Sentinel 0 -> add to all profiles (explicit)
profiles = ChannelProfile.objects.all()
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(channel_profile=profile, channel=channel, enabled=True)
for profile in profiles
])
else:
# Specific profile IDs
try:
channel_profiles = ChannelProfile.objects.filter(id__in=channel_profile_ids)
if len(channel_profiles) != len(channel_profile_ids):
@@ -858,13 +963,6 @@ class ChannelViewSet(viewsets.ModelViewSet):
{"error": f"Error creating profile memberships: {str(e)}"},
status=status.HTTP_400_BAD_REQUEST,
)
else:
# Default behavior: add to all profiles
profiles = ChannelProfile.objects.all()
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(channel_profile=profile, channel=channel, enabled=True)
for profile in profiles
])
# Send WebSocket notification for single channel creation
from core.utils import send_websocket_update
@@ -897,7 +995,7 @@ class ChannelViewSet(viewsets.ModelViewSet):
"channel_profile_ids": openapi.Schema(
type=openapi.TYPE_ARRAY,
items=openapi.Items(type=openapi.TYPE_INTEGER),
description="(Optional) Channel profile ID(s) to add the channels to. If not provided, channels are added to all profiles."
description="(Optional) Channel profile ID(s). Behavior: omitted = add to ALL profiles (default); empty array [] = add to NO profiles; [0] = add to ALL profiles (explicit); [1,2,...] = add only to specified profiles."
),
"starting_channel_number": openapi.Schema(
type=openapi.TYPE_INTEGER,
@@ -1556,11 +1654,10 @@ class LogoViewSet(viewsets.ModelViewSet):
"""Streams the logo file, whether it's local or remote."""
logo = self.get_object()
logo_url = logo.url
if logo_url.startswith("/data"): # Local file
if not os.path.exists(logo_url):
raise Http404("Image not found")
stat = os.stat(logo_url)
# Get proper mime type (first item of the tuple)
content_type, _ = mimetypes.guess_type(logo_url)
if not content_type:
@@ -1570,6 +1667,8 @@ class LogoViewSet(viewsets.ModelViewSet):
response = StreamingHttpResponse(
open(logo_url, "rb"), content_type=content_type
)
response["Cache-Control"] = "public, max-age=14400" # Cache in browser for 4 hours
response["Last-Modified"] = http_date(stat.st_mtime)
response["Content-Disposition"] = 'inline; filename="{}"'.format(
os.path.basename(logo_url)
)
@@ -1609,6 +1708,10 @@ class LogoViewSet(viewsets.ModelViewSet):
remote_response.iter_content(chunk_size=8192),
content_type=content_type,
)
if remote_response.headers.get("Cache-Control"):
response["Cache-Control"] = remote_response.headers.get("Cache-Control")
if remote_response.headers.get("Last-Modified"):
response["Last-Modified"] = remote_response.headers.get("Last-Modified")
response["Content-Disposition"] = 'inline; filename="{}"'.format(
os.path.basename(logo_url)
)
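The header forwarding above can be sketched as a standalone helper. This is a minimal illustration using plain dicts; the hypothetical `forward_cache_headers` name and the dict-based signature are assumptions, since the real code works on Django/requests response objects:

```python
def forward_cache_headers(upstream_headers, response_headers,
                          fallback_cache_control="public, max-age=14400"):
    # Prefer the upstream provider's caching policy when it sends one;
    # otherwise fall back to a local default (4 hours, as in the diff above).
    cache_control = upstream_headers.get("Cache-Control")
    response_headers["Cache-Control"] = cache_control or fallback_cache_control
    # Last-Modified is only meaningful if the upstream actually supplied it.
    last_modified = upstream_headers.get("Last-Modified")
    if last_modified:
        response_headers["Last-Modified"] = last_modified
    return response_headers
```

Forwarding the upstream values (rather than always stamping a local policy) lets browsers revalidate remote logos on the provider's schedule.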
@@ -1640,11 +1743,58 @@ class ChannelProfileViewSet(viewsets.ModelViewSet):
return self.request.user.channel_profiles.all()
def get_permissions(self):
if self.action == "duplicate":
return [IsAdmin()]
try:
return [perm() for perm in permission_classes_by_action[self.action]]
except KeyError:
return [Authenticated()]
@action(detail=True, methods=["post"], url_path="duplicate", permission_classes=[IsAdmin])
def duplicate(self, request, pk=None):
requested_name = str(request.data.get("name", "")).strip()
if not requested_name:
return Response(
{"detail": "Name is required to duplicate a profile."},
status=status.HTTP_400_BAD_REQUEST,
)
if ChannelProfile.objects.filter(name=requested_name).exists():
return Response(
{"detail": "A channel profile with this name already exists."},
status=status.HTTP_400_BAD_REQUEST,
)
source_profile = self.get_object()
with transaction.atomic():
new_profile = ChannelProfile.objects.create(name=requested_name)
source_memberships = ChannelProfileMembership.objects.filter(
channel_profile=source_profile
)
source_enabled_map = {
membership.channel_id: membership.enabled
for membership in source_memberships
}
new_memberships = list(
ChannelProfileMembership.objects.filter(channel_profile=new_profile)
)
for membership in new_memberships:
membership.enabled = source_enabled_map.get(
membership.channel_id, False
)
if new_memberships:
ChannelProfileMembership.objects.bulk_update(
new_memberships, ["enabled"]
)
serializer = self.get_serializer(new_profile)
return Response(serializer.data, status=status.HTTP_201_CREATED)
class GetChannelStreamsAPIView(APIView):
def get_permissions(self):
@@ -1701,6 +1851,30 @@ class BulkUpdateChannelMembershipAPIView(APIView):
except KeyError:
return [Authenticated()]
@swagger_auto_schema(
operation_description="Bulk enable or disable channels for a specific profile. Creates membership records if they don't exist.",
request_body=BulkChannelProfileMembershipSerializer,
responses={
200: openapi.Response(
description="Channels updated successfully",
schema=openapi.Schema(
type=openapi.TYPE_OBJECT,
properties={
"status": openapi.Schema(type=openapi.TYPE_STRING, example="success"),
"updated": openapi.Schema(type=openapi.TYPE_INTEGER, description="Number of channels updated"),
"created": openapi.Schema(type=openapi.TYPE_INTEGER, description="Number of new memberships created"),
"invalid_channels": openapi.Schema(
type=openapi.TYPE_ARRAY,
items=openapi.Schema(type=openapi.TYPE_INTEGER),
description="List of channel IDs that don't exist"
),
},
),
),
400: "Invalid request data",
404: "Profile not found",
},
)
def patch(self, request, profile_id):
"""Bulk enable or disable channels for a specific profile"""
# Get the channel profile
@@ -1713,21 +1887,67 @@ class BulkUpdateChannelMembershipAPIView(APIView):
updates = serializer.validated_data["channels"]
channel_ids = [entry["channel_id"] for entry in updates]
memberships = ChannelProfileMembership.objects.filter(
# Validate that all channels exist
existing_channels = set(
Channel.objects.filter(id__in=channel_ids).values_list("id", flat=True)
)
invalid_channels = [cid for cid in channel_ids if cid not in existing_channels]
if invalid_channels:
return Response(
{
"error": "Some channels do not exist",
"invalid_channels": invalid_channels,
},
status=status.HTTP_400_BAD_REQUEST,
)
# Get existing memberships
existing_memberships = ChannelProfileMembership.objects.filter(
channel_profile=channel_profile, channel_id__in=channel_ids
)
membership_dict = {m.channel_id: m for m in existing_memberships}
membership_dict = {m.channel.id: m for m in memberships}
# Prepare lists for bulk operations
memberships_to_update = []
memberships_to_create = []
for entry in updates:
channel_id = entry["channel_id"]
enabled_status = entry["enabled"]
if channel_id in membership_dict:
# Update existing membership
membership_dict[channel_id].enabled = enabled_status
memberships_to_update.append(membership_dict[channel_id])
else:
# Create new membership
memberships_to_create.append(
ChannelProfileMembership(
channel_profile=channel_profile,
channel_id=channel_id,
enabled=enabled_status,
)
)
ChannelProfileMembership.objects.bulk_update(memberships, ["enabled"])
# Perform bulk operations
with transaction.atomic():
if memberships_to_update:
ChannelProfileMembership.objects.bulk_update(
memberships_to_update, ["enabled"]
)
if memberships_to_create:
ChannelProfileMembership.objects.bulk_create(memberships_to_create)
return Response({"status": "success"}, status=status.HTTP_200_OK)
return Response(
{
"status": "success",
"updated": len(memberships_to_update),
"created": len(memberships_to_create),
"invalid_channels": [],
},
status=status.HTTP_200_OK,
)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
@@ -1773,7 +1993,7 @@ class RecordingViewSet(viewsets.ModelViewSet):
def get_permissions(self):
# Allow unauthenticated playback of recording files (like other streaming endpoints)
if getattr(self, 'action', None) == 'file':
if self.action == 'file':
return [AllowAny()]
try:
return [perm() for perm in permission_classes_by_action[self.action]]


@@ -0,0 +1,29 @@
# Generated by Django 5.2.9 on 2026-01-09 18:19
import datetime
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('dispatcharr_channels', '0030_alter_stream_url'),
]
operations = [
migrations.AddField(
model_name='channelgroupm3uaccount',
name='is_stale',
field=models.BooleanField(db_index=True, default=False, help_text='Whether this group relationship is stale (not seen in recent refresh, pending deletion)'),
),
migrations.AddField(
model_name='channelgroupm3uaccount',
name='last_seen',
field=models.DateTimeField(db_index=True, default=datetime.datetime.now, help_text='Last time this group was seen in the M3U source during a refresh'),
),
migrations.AddField(
model_name='stream',
name='is_stale',
field=models.BooleanField(db_index=True, default=False, help_text='Whether this stream is stale (not seen in recent refresh, pending deletion)'),
),
]


@@ -94,6 +94,11 @@ class Stream(models.Model):
db_index=True,
)
last_seen = models.DateTimeField(db_index=True, default=datetime.now)
is_stale = models.BooleanField(
default=False,
db_index=True,
help_text="Whether this stream is stale (not seen in recent refresh, pending deletion)"
)
custom_properties = models.JSONField(default=dict, blank=True, null=True)
# Stream statistics fields
@@ -589,6 +594,16 @@ class ChannelGroupM3UAccount(models.Model):
blank=True,
help_text='Starting channel number for auto-created channels in this group'
)
last_seen = models.DateTimeField(
default=datetime.now,
db_index=True,
help_text='Last time this group was seen in the M3U source during a refresh'
)
is_stale = models.BooleanField(
default=False,
db_index=True,
help_text='Whether this group relationship is stale (not seen in recent refresh, pending deletion)'
)
class Meta:
unique_together = ("channel_group", "m3u_account")


@@ -119,6 +119,7 @@ class StreamSerializer(serializers.ModelSerializer):
"current_viewers",
"updated_at",
"last_seen",
"is_stale",
"stream_profile_id",
"is_custom",
"channel_group",
@@ -155,7 +156,7 @@ class ChannelGroupM3UAccountSerializer(serializers.ModelSerializer):
class Meta:
model = ChannelGroupM3UAccount
fields = ["m3u_accounts", "channel_group", "enabled", "auto_channel_sync", "auto_sync_channel_start", "custom_properties"]
fields = ["m3u_accounts", "channel_group", "enabled", "auto_channel_sync", "auto_sync_channel_start", "custom_properties", "is_stale", "last_seen"]
def to_representation(self, instance):
data = super().to_representation(instance)
@@ -179,8 +180,8 @@ class ChannelGroupM3UAccountSerializer(serializers.ModelSerializer):
# Channel Group
#
class ChannelGroupSerializer(serializers.ModelSerializer):
channel_count = serializers.IntegerField(read_only=True)
m3u_account_count = serializers.IntegerField(read_only=True)
channel_count = serializers.SerializerMethodField()
m3u_account_count = serializers.SerializerMethodField()
m3u_accounts = ChannelGroupM3UAccountSerializer(
many=True,
read_only=True
@@ -190,6 +191,14 @@ class ChannelGroupSerializer(serializers.ModelSerializer):
model = ChannelGroup
fields = ["id", "name", "channel_count", "m3u_account_count", "m3u_accounts"]
def get_channel_count(self, obj):
"""Get count of channels in this group"""
return obj.channels.count()
def get_m3u_account_count(self, obj):
"""Get count of M3U accounts associated with this group"""
return obj.m3u_accounts.count()
class ChannelProfileSerializer(serializers.ModelSerializer):
channels = serializers.SerializerMethodField()


@@ -2679,7 +2679,38 @@ def bulk_create_channels_from_streams(self, stream_ids, channel_profile_ids=None
)
# Handle channel profile membership
if profile_ids:
# Semantics:
# - None: add to ALL profiles (backward compatible default)
# - Empty array []: add to NO profiles
# - Sentinel [0] or 0 in array: add to ALL profiles (explicit)
# - [1,2,...]: add to specified profile IDs only
if profile_ids is None:
# Omitted -> add to all profiles (backward compatible)
all_profiles = ChannelProfile.objects.all()
channel_profile_memberships.extend([
ChannelProfileMembership(
channel_profile=profile,
channel=channel,
enabled=True
)
for profile in all_profiles
])
elif isinstance(profile_ids, list) and len(profile_ids) == 0:
# Empty array -> add to no profiles
pass
elif isinstance(profile_ids, list) and 0 in profile_ids:
# Sentinel 0 -> add to all profiles (explicit)
all_profiles = ChannelProfile.objects.all()
channel_profile_memberships.extend([
ChannelProfileMembership(
channel_profile=profile,
channel=channel,
enabled=True
)
for profile in all_profiles
])
else:
# Specific profile IDs
try:
specific_profiles = ChannelProfile.objects.filter(id__in=profile_ids)
channel_profile_memberships.extend([
@@ -2695,17 +2726,6 @@ def bulk_create_channels_from_streams(self, stream_ids, channel_profile_ids=None
'channel_id': channel.id,
'error': f'Failed to add to profiles: {str(e)}'
})
else:
# Add to all profiles by default
all_profiles = ChannelProfile.objects.all()
channel_profile_memberships.extend([
ChannelProfileMembership(
channel_profile=profile,
channel=channel,
enabled=True
)
for profile in all_profiles
])
# Bulk update channels with logos
if update:
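The four-way `channel_profile_ids` semantics in the comments above can be isolated into a small pure function. This sketch is a hypothetical `resolve_profile_ids` helper for illustration only, operating on plain ID lists rather than `ChannelProfile` querysets:

```python
def resolve_profile_ids(profile_ids, all_profile_ids):
    # None: parameter omitted -> all profiles (backward-compatible default)
    if profile_ids is None:
        return list(all_profile_ids)
    # []: explicitly empty -> no profiles
    if not profile_ids:
        return []
    # Sentinel 0 anywhere in the list -> all profiles (explicit)
    if 0 in profile_ids:
        return list(all_profile_ids)
    # Otherwise: only the requested IDs that actually exist
    valid = set(all_profile_ids)
    return [pid for pid in profile_ids if pid in valid]
```

Separating "omitted" (`None`) from "empty" (`[]`) is the key design choice: it keeps old clients on the add-to-all default while letting new clients opt out entirely.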


@@ -0,0 +1,211 @@
from django.test import TestCase
from django.contrib.auth import get_user_model
from rest_framework.test import APIClient
from rest_framework import status
from apps.channels.models import Channel, ChannelGroup
User = get_user_model()
class ChannelBulkEditAPITests(TestCase):
def setUp(self):
# Create a test admin user (user_level >= 10) and authenticate
self.user = User.objects.create_user(username="testuser", password="testpass123")
self.user.user_level = 10 # Set admin level
self.user.save()
self.client = APIClient()
self.client.force_authenticate(user=self.user)
self.bulk_edit_url = "/api/channels/channels/edit/bulk/"
# Create test channel group
self.group1 = ChannelGroup.objects.create(name="Test Group 1")
self.group2 = ChannelGroup.objects.create(name="Test Group 2")
# Create test channels
self.channel1 = Channel.objects.create(
channel_number=1.0,
name="Channel 1",
tvg_id="channel1",
channel_group=self.group1
)
self.channel2 = Channel.objects.create(
channel_number=2.0,
name="Channel 2",
tvg_id="channel2",
channel_group=self.group1
)
self.channel3 = Channel.objects.create(
channel_number=3.0,
name="Channel 3",
tvg_id="channel3"
)
def test_bulk_edit_success(self):
"""Test successful bulk update of multiple channels"""
data = [
{"id": self.channel1.id, "name": "Updated Channel 1"},
{"id": self.channel2.id, "name": "Updated Channel 2", "channel_number": 22.0},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["message"], "Successfully updated 2 channels")
self.assertEqual(len(response.data["channels"]), 2)
# Verify database changes
self.channel1.refresh_from_db()
self.channel2.refresh_from_db()
self.assertEqual(self.channel1.name, "Updated Channel 1")
self.assertEqual(self.channel2.name, "Updated Channel 2")
self.assertEqual(self.channel2.channel_number, 22.0)
def test_bulk_edit_with_empty_validated_data_first(self):
"""
Test the bug fix: when first channel has empty validated_data.
This was causing: ValueError: Field names must be given to bulk_update()
"""
# Create a channel with data that will be "unchanged" (empty validated_data)
# We'll send the same data it already has
data = [
# First channel: no actual changes (this would create empty validated_data)
{"id": self.channel1.id},
# Second channel: has changes
{"id": self.channel2.id, "name": "Updated Channel 2"},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
# Should not crash with ValueError
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["message"], "Successfully updated 2 channels")
# Verify the channel with changes was updated
self.channel2.refresh_from_db()
self.assertEqual(self.channel2.name, "Updated Channel 2")
def test_bulk_edit_all_empty_updates(self):
"""Test when all channels have empty updates (no actual changes)"""
data = [
{"id": self.channel1.id},
{"id": self.channel2.id},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
# Should succeed without calling bulk_update
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["message"], "Successfully updated 2 channels")
def test_bulk_edit_mixed_fields(self):
"""Test bulk update where different channels update different fields"""
data = [
{"id": self.channel1.id, "name": "New Name 1"},
{"id": self.channel2.id, "channel_number": 99.0},
{"id": self.channel3.id, "tvg_id": "new_tvg_id", "name": "New Name 3"},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["message"], "Successfully updated 3 channels")
# Verify all updates
self.channel1.refresh_from_db()
self.channel2.refresh_from_db()
self.channel3.refresh_from_db()
self.assertEqual(self.channel1.name, "New Name 1")
self.assertEqual(self.channel2.channel_number, 99.0)
self.assertEqual(self.channel3.tvg_id, "new_tvg_id")
self.assertEqual(self.channel3.name, "New Name 3")
def test_bulk_edit_with_channel_group(self):
"""Test bulk update with channel_group_id changes"""
data = [
{"id": self.channel1.id, "channel_group_id": self.group2.id},
{"id": self.channel3.id, "channel_group_id": self.group1.id},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
# Verify group changes
self.channel1.refresh_from_db()
self.channel3.refresh_from_db()
self.assertEqual(self.channel1.channel_group, self.group2)
self.assertEqual(self.channel3.channel_group, self.group1)
def test_bulk_edit_nonexistent_channel(self):
"""Test bulk update with a channel that doesn't exist"""
nonexistent_id = 99999
data = [
{"id": nonexistent_id, "name": "Should Fail"},
{"id": self.channel1.id, "name": "Should Still Update"},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
# Should return 400 with errors
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn("errors", response.data)
self.assertEqual(len(response.data["errors"]), 1)
self.assertEqual(response.data["errors"][0]["channel_id"], nonexistent_id)
self.assertEqual(response.data["errors"][0]["error"], "Channel not found")
# The valid channel should still be updated
self.assertEqual(response.data["updated_count"], 1)
def test_bulk_edit_validation_error(self):
"""Test bulk update with invalid data (validation error)"""
data = [
{"id": self.channel1.id, "channel_number": "invalid_number"},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
# Should return 400 with validation errors
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn("errors", response.data)
self.assertEqual(len(response.data["errors"]), 1)
self.assertIn("channel_number", response.data["errors"][0]["errors"])
def test_bulk_edit_empty_channel_updates(self):
"""Test bulk update with empty list"""
data = []
response = self.client.patch(self.bulk_edit_url, data, format="json")
# Empty list is accepted and returns success with 0 updates
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["message"], "Successfully updated 0 channels")
def test_bulk_edit_missing_channel_updates(self):
"""Test bulk update without proper format (dict instead of list)"""
data = {"channel_updates": {}}
response = self.client.patch(self.bulk_edit_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.data["error"], "Expected a list of channel updates")
def test_bulk_edit_preserves_other_fields(self):
"""Test that bulk update only changes specified fields"""
original_channel_number = self.channel1.channel_number
original_tvg_id = self.channel1.tvg_id
data = [
{"id": self.channel1.id, "name": "Only Name Changed"},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
# Verify only name changed, other fields preserved
self.channel1.refresh_from_db()
self.assertEqual(self.channel1.name, "Only Name Changed")
self.assertEqual(self.channel1.channel_number, original_channel_number)
self.assertEqual(self.channel1.tvg_id, original_tvg_id)


@@ -286,11 +286,12 @@ def fetch_xmltv(source):
logger.info(f"Fetching XMLTV data from source: {source.name}")
try:
# Get default user agent from settings
default_user_agent_setting = CoreSettings.objects.filter(key='default-user-agent').first()
stream_settings = CoreSettings.get_stream_settings()
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0" # Fallback default
if default_user_agent_setting and default_user_agent_setting.value:
default_user_agent_id = stream_settings.get('default_user_agent')
if default_user_agent_id:
try:
user_agent_obj = UserAgent.objects.filter(id=int(default_user_agent_setting.value)).first()
user_agent_obj = UserAgent.objects.filter(id=int(default_user_agent_id)).first()
if user_agent_obj and user_agent_obj.user_agent:
user_agent = user_agent_obj.user_agent
logger.debug(f"Using default user agent: {user_agent}")
@@ -1714,12 +1715,13 @@ def fetch_schedules_direct(source):
logger.info(f"Fetching Schedules Direct data from source: {source.name}")
try:
# Get default user agent from settings
default_user_agent_setting = CoreSettings.objects.filter(key='default-user-agent').first()
stream_settings = CoreSettings.get_stream_settings()
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0" # Fallback default
default_user_agent_id = stream_settings.get('default_user_agent')
if default_user_agent_setting and default_user_agent_setting.value:
if default_user_agent_id:
try:
user_agent_obj = UserAgent.objects.filter(id=int(default_user_agent_setting.value)).first()
user_agent_obj = UserAgent.objects.filter(id=int(default_user_agent_id)).first()
if user_agent_obj and user_agent_obj.user_agent:
user_agent = user_agent_obj.user_agent
logger.debug(f"Using default user agent: {user_agent}")
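The grouped-settings lookup above follows a common fallback pattern: read an ID from the settings dict, resolve it, and fall back to a hard-coded user agent on any failure. A minimal sketch, assuming a hypothetical `resolve_user_agent` helper and an injected `lookup` callable in place of the actual `UserAgent` ORM query:

```python
def resolve_user_agent(stream_settings, lookup, fallback="Mozilla/5.0"):
    # stream_settings: grouped settings dict, e.g. {"default_user_agent": "3"}
    ua_id = stream_settings.get("default_user_agent")
    if ua_id:
        try:
            # lookup stands in for a DB fetch by primary key; it may return None
            user_agent = lookup(int(ua_id))
            if user_agent:
                return user_agent
        except (TypeError, ValueError):
            pass  # malformed ID: fall through to the fallback
    return fallback
```

Because every failure mode (missing key, bad ID, missing row) converges on the fallback, the fetch never aborts just because the settings are incomplete.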


@@ -513,7 +513,19 @@ def check_field_lengths(streams_to_create):
@shared_task
def process_groups(account, groups):
def process_groups(account, groups, scan_start_time=None):
"""Process groups and update their relationships with the M3U account.
Args:
account: M3UAccount instance
groups: Dict of group names to custom properties
scan_start_time: Timestamp when the scan started (for consistent last_seen marking)
"""
# Use scan_start_time if provided, otherwise current time
# This ensures consistency with stream processing and cleanup logic
if scan_start_time is None:
scan_start_time = timezone.now()
existing_groups = {
group.name: group
for group in ChannelGroup.objects.filter(name__in=groups.keys())
@@ -553,24 +565,8 @@ def process_groups(account, groups):
).select_related('channel_group')
}
# Get ALL existing relationships for this account to identify orphaned ones
all_existing_relationships = {
rel.channel_group.name: rel
for rel in ChannelGroupM3UAccount.objects.filter(
m3u_account=account
).select_related('channel_group')
}
relations_to_create = []
relations_to_update = []
relations_to_delete = []
# Find orphaned relationships (groups that no longer exist in the source)
current_group_names = set(groups.keys())
for group_name, rel in all_existing_relationships.items():
if group_name not in current_group_names:
relations_to_delete.append(rel)
logger.debug(f"Marking relationship for deletion: group '{group_name}' no longer exists in source for account {account.id}")
for group in all_group_objs:
custom_props = groups.get(group.name, {})
@@ -597,9 +593,15 @@ def process_groups(account, groups):
del updated_custom_props["xc_id"]
existing_rel.custom_properties = updated_custom_props
existing_rel.last_seen = scan_start_time
existing_rel.is_stale = False
relations_to_update.append(existing_rel)
logger.debug(f"Updated xc_id for group '{group.name}' from '{existing_xc_id}' to '{new_xc_id}' - account {account.id}")
else:
# Update last_seen even if xc_id hasn't changed
existing_rel.last_seen = scan_start_time
existing_rel.is_stale = False
relations_to_update.append(existing_rel)
logger.debug(f"xc_id unchanged for group '{group.name}' - account {account.id}")
else:
# Create new relationship - this group is new to this M3U account
@@ -613,6 +615,8 @@ def process_groups(account, groups):
m3u_account=account,
custom_properties=custom_props,
enabled=auto_enable_new_groups_live,
last_seen=scan_start_time,
is_stale=False,
)
)
@@ -623,15 +627,38 @@ def process_groups(account, groups):
# Bulk update existing relationships
if relations_to_update:
ChannelGroupM3UAccount.objects.bulk_update(relations_to_update, ['custom_properties'])
logger.info(f"Updated {len(relations_to_update)} existing group relationships with new xc_id values for account {account.id}")
ChannelGroupM3UAccount.objects.bulk_update(relations_to_update, ['custom_properties', 'last_seen', 'is_stale'])
logger.info(f"Updated {len(relations_to_update)} existing group relationships for account {account.id}")
# Delete orphaned relationships
if relations_to_delete:
ChannelGroupM3UAccount.objects.filter(
id__in=[rel.id for rel in relations_to_delete]
).delete()
logger.info(f"Deleted {len(relations_to_delete)} orphaned group relationships for account {account.id}: {[rel.channel_group.name for rel in relations_to_delete]}")
def cleanup_stale_group_relationships(account, scan_start_time):
"""
Remove group relationships that haven't been seen since the stale retention period.
This follows the same logic as stream cleanup for consistency.
"""
# Calculate cutoff date for stale group relationships
stale_cutoff = scan_start_time - timezone.timedelta(days=account.stale_stream_days)
logger.info(
f"Removing group relationships not seen since {stale_cutoff} for M3U account {account.id}"
)
# Find stale relationships
stale_relationships = ChannelGroupM3UAccount.objects.filter(
m3u_account=account,
last_seen__lt=stale_cutoff
).select_related('channel_group')
relations_to_delete = list(stale_relationships)
deleted_count = len(relations_to_delete)
if deleted_count > 0:
logger.info(
f"Found {deleted_count} stale group relationships for account {account.id}: "
f"{[rel.channel_group.name for rel in relations_to_delete]}"
)
# Delete the stale relationships
stale_relationships.delete()
# Check if any of the deleted relationships left groups with no remaining associations
orphaned_group_ids = []
@@ -656,6 +683,10 @@ def process_groups(account, groups):
deleted_groups = list(ChannelGroup.objects.filter(id__in=orphaned_group_ids).values_list('name', flat=True))
ChannelGroup.objects.filter(id__in=orphaned_group_ids).delete()
logger.info(f"Deleted {len(orphaned_group_ids)} orphaned groups that had no remaining associations: {deleted_groups}")
else:
logger.debug(f"No stale group relationships found for account {account.id}")
return deleted_count
def collect_xc_streams(account_id, enabled_groups):
@@ -803,6 +834,7 @@ def process_xc_category_direct(account_id, batch, groups, hash_keys):
"channel_group_id": int(group_id),
"stream_hash": stream_hash,
"custom_properties": stream,
"is_stale": False,
}
if stream_hash not in stream_hashes:
@@ -838,10 +870,12 @@ def process_xc_category_direct(account_id, batch, groups, hash_keys):
setattr(obj, key, value)
obj.last_seen = timezone.now()
obj.updated_at = timezone.now() # Update timestamp only for changed streams
obj.is_stale = False
streams_to_update.append(obj)
else:
# Always update last_seen, even if nothing else changed
obj.last_seen = timezone.now()
obj.is_stale = False
# Don't update updated_at for unchanged streams
streams_to_update.append(obj)
@@ -852,6 +886,7 @@ def process_xc_category_direct(account_id, batch, groups, hash_keys):
stream_props["updated_at"] = (
timezone.now()
) # Set initial updated_at for new streams
stream_props["is_stale"] = False
streams_to_create.append(Stream(**stream_props))
try:
@@ -863,7 +898,7 @@ def process_xc_category_direct(account_id, batch, groups, hash_keys):
# Simplified bulk update for better performance
Stream.objects.bulk_update(
streams_to_update,
['name', 'url', 'logo_url', 'tvg_id', 'custom_properties', 'last_seen', 'updated_at'],
['name', 'url', 'logo_url', 'tvg_id', 'custom_properties', 'last_seen', 'updated_at', 'is_stale'],
batch_size=150 # Smaller batch size for XC processing
)
@@ -976,6 +1011,7 @@ def process_m3u_batch_direct(account_id, batch, groups, hash_keys):
"channel_group_id": int(groups.get(group_title)),
"stream_hash": stream_hash,
"custom_properties": stream_info["attributes"],
"is_stale": False,
}
if stream_hash not in stream_hashes:
@@ -1015,11 +1051,15 @@ def process_m3u_batch_direct(account_id, batch, groups, hash_keys):
obj.custom_properties = stream_props["custom_properties"]
obj.updated_at = timezone.now()
# Always mark as not stale since we saw it in this refresh
obj.is_stale = False
streams_to_update.append(obj)
else:
# New stream
stream_props["last_seen"] = timezone.now()
stream_props["updated_at"] = timezone.now()
stream_props["is_stale"] = False
streams_to_create.append(Stream(**stream_props))
try:
@@ -1031,7 +1071,7 @@ def process_m3u_batch_direct(account_id, batch, groups, hash_keys):
# Update all streams in a single bulk operation
Stream.objects.bulk_update(
streams_to_update,
['name', 'url', 'logo_url', 'tvg_id', 'custom_properties', 'last_seen', 'updated_at'],
['name', 'url', 'logo_url', 'tvg_id', 'custom_properties', 'last_seen', 'updated_at', 'is_stale'],
batch_size=200
)
except Exception as e:
@@ -1092,7 +1132,15 @@ def cleanup_streams(account_id, scan_start_time=timezone.now):
@shared_task
def refresh_m3u_groups(account_id, use_cache=False, full_refresh=False):
def refresh_m3u_groups(account_id, use_cache=False, full_refresh=False, scan_start_time=None):
"""Refresh M3U groups for an account.
Args:
account_id: ID of the M3U account
use_cache: Whether to use cached M3U file
full_refresh: Whether this is part of a full refresh
scan_start_time: Timestamp when the scan started (for consistent last_seen marking)
"""
if not acquire_task_lock("refresh_m3u_account_groups", account_id):
return f"Task already running for account_id={account_id}.", None
@@ -1419,7 +1467,7 @@ def refresh_m3u_groups(account_id, use_cache=False, full_refresh=False):
send_m3u_update(account_id, "processing_groups", 0)
process_groups(account, groups)
process_groups(account, groups, scan_start_time)
release_task_lock("refresh_m3u_account_groups", account_id)
@@ -2526,7 +2574,7 @@ def refresh_single_m3u_account(account_id):
if not extinf_data:
try:
logger.info(f"Calling refresh_m3u_groups for account {account_id}")
result = refresh_m3u_groups(account_id, full_refresh=True)
result = refresh_m3u_groups(account_id, full_refresh=True, scan_start_time=refresh_start_timestamp)
logger.trace(f"refresh_m3u_groups result: {result}")
# Check for completely empty result or missing groups
@@ -2806,9 +2854,26 @@ def refresh_single_m3u_account(account_id):
id=-1
).exists() # This will never find anything but ensures DB sync
# Mark streams that weren't seen in this refresh as stale (pending deletion)
stale_stream_count = Stream.objects.filter(
m3u_account=account,
last_seen__lt=refresh_start_timestamp
).update(is_stale=True)
logger.info(f"Marked {stale_stream_count} streams as stale for account {account_id}")
# Mark group relationships that weren't seen in this refresh as stale (pending deletion)
stale_group_count = ChannelGroupM3UAccount.objects.filter(
m3u_account=account,
last_seen__lt=refresh_start_timestamp
).update(is_stale=True)
logger.info(f"Marked {stale_group_count} group relationships as stale for account {account_id}")
# Now run cleanup
streams_deleted = cleanup_streams(account_id, refresh_start_timestamp)
# Cleanup stale group relationships (follows same retention policy as streams)
cleanup_stale_group_relationships(account, refresh_start_timestamp)
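The two-phase retention policy above (mark anything unseen in this refresh as stale, then delete anything unseen for longer than `stale_stream_days`) can be sketched with plain datetimes. A hypothetical `apply_stale_policy` over dict records is used here for illustration; the real code operates via queryset `update()` and `delete()`:

```python
from datetime import datetime, timedelta

def apply_stale_policy(records, refresh_start, stale_days):
    cutoff = refresh_start - timedelta(days=stale_days)
    kept = []
    for rec in records:
        if rec["last_seen"] < cutoff:
            continue  # unseen past the retention window: delete
        # Unseen in this refresh but still within the window: mark stale only
        rec["is_stale"] = rec["last_seen"] < refresh_start
        kept.append(rec)
    return kept
```

Using the refresh's own start timestamp (rather than `timezone.now()` at each step) keeps the stale marking consistent across streams and group relationships processed at different points in the run.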
# Run auto channel sync after successful refresh
auto_sync_message = ""
try:


@@ -7,7 +7,6 @@ from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_http_methods
from apps.epg.models import ProgramData
from apps.accounts.models import User
from core.models import CoreSettings, NETWORK_ACCESS
from dispatcharr.utils import network_access_allowed
from django.utils import timezone as django_timezone
from django.shortcuts import get_object_or_404


@@ -357,12 +357,12 @@ class RedisBackedVODConnection:
logger.info(f"[{self.session_id}] Making request #{state.request_count} to {'final' if state.final_url else 'original'} URL")
# Make request
# Make request (10s connect, 10s read timeout - keeps lock time reasonable if client disconnects)
response = self.local_session.get(
target_url,
headers=headers,
stream=True,
timeout=(10, 30),
timeout=(10, 10),
allow_redirects=allow_redirects
)
response.raise_for_status()
@@ -712,6 +712,10 @@ class MultiWorkerVODConnectionManager:
content_name = content_obj.name if hasattr(content_obj, 'name') else str(content_obj)
client_id = session_id
# Track whether we incremented profile connections (for cleanup on error)
profile_connections_incremented = False
redis_connection = None
logger.info(f"[{client_id}] Worker {self.worker_id} - Redis-backed streaming request for {content_type} {content_name}")
try:
@@ -802,6 +806,7 @@ class MultiWorkerVODConnectionManager:
# Increment profile connections after successful connection creation
self._increment_profile_connections(m3u_profile)
profile_connections_incremented = True
logger.info(f"[{client_id}] Worker {self.worker_id} - Created consolidated connection with session metadata")
else:
@@ -1024,6 +1029,19 @@ class MultiWorkerVODConnectionManager:
except Exception as e:
logger.error(f"[{client_id}] Worker {self.worker_id} - Error in Redis-backed stream_content_with_session: {e}", exc_info=True)
# Decrement profile connections if we incremented them but failed before streaming started
if profile_connections_incremented:
logger.info(f"[{client_id}] Connection error occurred after profile increment - decrementing profile connections")
self._decrement_profile_connections(m3u_profile.id)
# Also clean up the Redis connection state since we won't be using it
if redis_connection:
try:
redis_connection.cleanup(connection_manager=self, current_worker_id=self.worker_id)
except Exception as cleanup_error:
logger.error(f"[{client_id}] Error during cleanup after connection failure: {cleanup_error}")
return HttpResponse(f"Streaming error: {str(e)}", status=500)
def _apply_timeshift_parameters(self, original_url, utc_start=None, utc_end=None, offset=None):


@ -245,10 +245,13 @@ class M3UMovieRelation(models.Model):
"""Get the full stream URL for this movie from this provider"""
# Build URL dynamically for XtreamCodes accounts
if self.m3u_account.account_type == 'XC':
server_url = self.m3u_account.server_url.rstrip('/')
from core.xtream_codes import Client as XCClient
# Use XC client's URL normalization to handle malformed URLs
# (e.g., URLs with /player_api.php or query parameters)
normalized_url = XCClient(self.m3u_account.server_url, '', '')._normalize_url(self.m3u_account.server_url)
username = self.m3u_account.username
password = self.m3u_account.password
return f"{server_url}/movie/{username}/{password}/{self.stream_id}.{self.container_extension or 'mp4'}"
return f"{normalized_url}/movie/{username}/{password}/{self.stream_id}.{self.container_extension or 'mp4'}"
else:
# For other account types, we would need another way to build URLs
return None
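The diff calls `XCClient._normalize_url`, whose body is not shown here. A hypothetical sketch of what such a normalizer might do, per the comment about `/player_api.php` paths and query parameters:

```python
from urllib.parse import urlparse

def normalize_server_url(raw_url):
    # Hypothetical sketch - the real XCClient._normalize_url is not part of
    # this diff. Reduce a possibly malformed server URL (one that includes
    # /player_api.php or query parameters) to scheme://host[:port].
    parsed = urlparse(raw_url.strip())
    scheme = parsed.scheme or "http"
    return f"{scheme}://{parsed.netloc}"
```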
@ -285,10 +288,12 @@ class M3UEpisodeRelation(models.Model):
if self.m3u_account.account_type == 'XC':
# For XtreamCodes accounts, build the URL dynamically
server_url = self.m3u_account.server_url.rstrip('/')
# Use XC client's URL normalization to handle malformed URLs
# (e.g., URLs with /player_api.php or query parameters)
normalized_url = XtreamCodesClient(self.m3u_account.server_url, '', '')._normalize_url(self.m3u_account.server_url)
username = self.m3u_account.username
password = self.m3u_account.password
return f"{server_url}/series/{username}/{password}/{self.stream_id}.{self.container_extension or 'mp4'}"
return f"{normalized_url}/series/{username}/{password}/{self.stream_id}.{self.container_extension or 'mp4'}"
else:
# We might support non-XC accounts in the future
# For now, return None


@ -410,10 +410,10 @@ def process_movie_batch(account, batch, categories, relations, scan_start_time=N
tmdb_id = movie_data.get('tmdb_id') or movie_data.get('tmdb')
imdb_id = movie_data.get('imdb_id') or movie_data.get('imdb')
# Clean empty string IDs
if tmdb_id == '':
# Clean empty string IDs and zero values (some providers use 0 to indicate no ID)
if tmdb_id == '' or tmdb_id == 0 or tmdb_id == '0':
tmdb_id = None
if imdb_id == '':
if imdb_id == '' or imdb_id == 0 or imdb_id == '0':
imdb_id = None
# Create a unique key for this movie (priority: TMDB > IMDB > name+year)
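The ID cleanup plus the TMDB > IMDB > name+year priority can be sketched as two small pure functions (names are illustrative, not from the codebase):

```python
def clean_provider_id(value):
    # Some providers send '', 0, or '0' to mean "no ID"; treat all as missing.
    return None if value in ('', 0, '0') else value

def movie_key(tmdb_id, imdb_id, name, year):
    # Dedup key with the priority described above: TMDB > IMDB > name+year.
    tmdb_id, imdb_id = clean_provider_id(tmdb_id), clean_provider_id(imdb_id)
    if tmdb_id:
        return ('tmdb', tmdb_id)
    if imdb_id:
        return ('imdb', imdb_id)
    return ('name_year', name, year)
```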
@ -614,26 +614,41 @@ def process_movie_batch(account, batch, categories, relations, scan_start_time=N
# First, create new movies and get their IDs
created_movies = {}
if movies_to_create:
Movie.objects.bulk_create(movies_to_create, ignore_conflicts=True)
# Bulk query to check which movies already exist
tmdb_ids = [m.tmdb_id for m in movies_to_create if m.tmdb_id]
imdb_ids = [m.imdb_id for m in movies_to_create if m.imdb_id]
name_year_pairs = [(m.name, m.year) for m in movies_to_create if not m.tmdb_id and not m.imdb_id]
# Get the newly created movies with their IDs
# We need to re-fetch them to get the primary keys
existing_by_tmdb = {m.tmdb_id: m for m in Movie.objects.filter(tmdb_id__in=tmdb_ids)} if tmdb_ids else {}
existing_by_imdb = {m.imdb_id: m for m in Movie.objects.filter(imdb_id__in=imdb_ids)} if imdb_ids else {}
existing_by_name_year = {}
if name_year_pairs:
for movie in Movie.objects.filter(tmdb_id__isnull=True, imdb_id__isnull=True):
key = (movie.name, movie.year)
if key in name_year_pairs:
existing_by_name_year[key] = movie
# Check each movie against the bulk query results
movies_actually_created = []
for movie in movies_to_create:
# Find the movie by its unique identifiers
if movie.tmdb_id:
db_movie = Movie.objects.filter(tmdb_id=movie.tmdb_id).first()
elif movie.imdb_id:
db_movie = Movie.objects.filter(imdb_id=movie.imdb_id).first()
else:
db_movie = Movie.objects.filter(
name=movie.name,
year=movie.year,
tmdb_id__isnull=True,
imdb_id__isnull=True
).first()
existing = None
if movie.tmdb_id and movie.tmdb_id in existing_by_tmdb:
existing = existing_by_tmdb[movie.tmdb_id]
elif movie.imdb_id and movie.imdb_id in existing_by_imdb:
existing = existing_by_imdb[movie.imdb_id]
elif not movie.tmdb_id and not movie.imdb_id:
existing = existing_by_name_year.get((movie.name, movie.year))
if db_movie:
created_movies[id(movie)] = db_movie
if existing:
created_movies[id(movie)] = existing
else:
movies_actually_created.append(movie)
created_movies[id(movie)] = movie
# Bulk create only movies that don't exist
if movies_actually_created:
Movie.objects.bulk_create(movies_actually_created)
# Update existing movies
if movies_to_update:
@ -649,12 +664,16 @@ def process_movie_batch(account, batch, categories, relations, scan_start_time=N
movie.logo = movie._logo_to_update
movie.save(update_fields=['logo'])
# Update relations to reference the correct movie objects
# Update relations to reference the correct movie objects (with PKs)
for relation in relations_to_create:
if id(relation.movie) in created_movies:
relation.movie = created_movies[id(relation.movie)]
# Handle relations
for relation in relations_to_update:
if id(relation.movie) in created_movies:
relation.movie = created_movies[id(relation.movie)]
# All movies now have PKs, safe to bulk create/update relations
if relations_to_create:
M3UMovieRelation.objects.bulk_create(relations_to_create, ignore_conflicts=True)
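The change above replaces per-object `.filter(...).first()` lookups with three bulk queries and in-memory resolution. The core partition step, stripped of the ORM, looks roughly like this (a sketch under the assumption that `existing_index` was built from one bulk query):

```python
def split_existing_and_new(candidates, existing_index, key_fn):
    """Resolve each candidate against a pre-fetched index instead of
    issuing one database query per object.

    Returns (resolved, to_create): resolved maps id(candidate) to the
    object to reference (an existing row, or the candidate itself), and
    to_create lists candidates that still need a bulk insert.
    """
    resolved, to_create = {}, []
    for obj in candidates:
        existing = existing_index.get(key_fn(obj))
        if existing is not None:
            resolved[id(obj)] = existing
        else:
            to_create.append(obj)
            resolved[id(obj)] = obj
    return resolved, to_create
```

Relations are then repointed via the `resolved` mapping before their own bulk create, so every referenced object has a primary key.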
@ -724,10 +743,10 @@ def process_series_batch(account, batch, categories, relations, scan_start_time=
tmdb_id = series_data.get('tmdb') or series_data.get('tmdb_id')
imdb_id = series_data.get('imdb') or series_data.get('imdb_id')
# Clean empty string IDs
if tmdb_id == '':
# Clean empty string IDs and zero values (some providers use 0 to indicate no ID)
if tmdb_id == '' or tmdb_id == 0 or tmdb_id == '0':
tmdb_id = None
if imdb_id == '':
if imdb_id == '' or imdb_id == 0 or imdb_id == '0':
imdb_id = None
# Create a unique key for this series (priority: TMDB > IMDB > name+year)
@ -945,26 +964,41 @@ def process_series_batch(account, batch, categories, relations, scan_start_time=
# First, create new series and get their IDs
created_series = {}
if series_to_create:
Series.objects.bulk_create(series_to_create, ignore_conflicts=True)
# Bulk query to check which series already exist
tmdb_ids = [s.tmdb_id for s in series_to_create if s.tmdb_id]
imdb_ids = [s.imdb_id for s in series_to_create if s.imdb_id]
name_year_pairs = [(s.name, s.year) for s in series_to_create if not s.tmdb_id and not s.imdb_id]
# Get the newly created series with their IDs
# We need to re-fetch them to get the primary keys
existing_by_tmdb = {s.tmdb_id: s for s in Series.objects.filter(tmdb_id__in=tmdb_ids)} if tmdb_ids else {}
existing_by_imdb = {s.imdb_id: s for s in Series.objects.filter(imdb_id__in=imdb_ids)} if imdb_ids else {}
existing_by_name_year = {}
if name_year_pairs:
for series in Series.objects.filter(tmdb_id__isnull=True, imdb_id__isnull=True):
key = (series.name, series.year)
if key in name_year_pairs:
existing_by_name_year[key] = series
# Check each series against the bulk query results
series_actually_created = []
for series in series_to_create:
# Find the series by its unique identifiers
if series.tmdb_id:
db_series = Series.objects.filter(tmdb_id=series.tmdb_id).first()
elif series.imdb_id:
db_series = Series.objects.filter(imdb_id=series.imdb_id).first()
else:
db_series = Series.objects.filter(
name=series.name,
year=series.year,
tmdb_id__isnull=True,
imdb_id__isnull=True
).first()
existing = None
if series.tmdb_id and series.tmdb_id in existing_by_tmdb:
existing = existing_by_tmdb[series.tmdb_id]
elif series.imdb_id and series.imdb_id in existing_by_imdb:
existing = existing_by_imdb[series.imdb_id]
elif not series.tmdb_id and not series.imdb_id:
existing = existing_by_name_year.get((series.name, series.year))
if db_series:
created_series[id(series)] = db_series
if existing:
created_series[id(series)] = existing
else:
series_actually_created.append(series)
created_series[id(series)] = series
# Bulk create only series that don't exist
if series_actually_created:
Series.objects.bulk_create(series_actually_created)
# Update existing series
if series_to_update:
@ -980,12 +1014,16 @@ def process_series_batch(account, batch, categories, relations, scan_start_time=
series.logo = series._logo_to_update
series.save(update_fields=['logo'])
# Update relations to reference the correct series objects
# Update relations to reference the correct series objects (with PKs)
for relation in relations_to_create:
if id(relation.series) in created_series:
relation.series = created_series[id(relation.series)]
# Handle relations
for relation in relations_to_update:
if id(relation.series) in created_series:
relation.series = created_series[id(relation.series)]
# All series now have PKs, safe to bulk create/update relations
if relations_to_create:
M3USeriesRelation.objects.bulk_create(relations_to_create, ignore_conflicts=True)


@ -15,8 +15,9 @@ from .models import (
UserAgent,
StreamProfile,
CoreSettings,
STREAM_HASH_KEY,
NETWORK_ACCESS,
STREAM_SETTINGS_KEY,
DVR_SETTINGS_KEY,
NETWORK_ACCESS_KEY,
PROXY_SETTINGS_KEY,
)
from .serializers import (
@ -68,16 +69,28 @@ class CoreSettingsViewSet(viewsets.ModelViewSet):
def update(self, request, *args, **kwargs):
instance = self.get_object()
old_value = instance.value
response = super().update(request, *args, **kwargs)
if instance.key == STREAM_HASH_KEY:
if instance.value != request.data["value"]:
rehash_streams.delay(request.data["value"].split(","))
# If DVR pre/post offsets changed, reschedule upcoming recordings
try:
from core.models import DVR_PRE_OFFSET_MINUTES_KEY, DVR_POST_OFFSET_MINUTES_KEY
if instance.key in (DVR_PRE_OFFSET_MINUTES_KEY, DVR_POST_OFFSET_MINUTES_KEY):
if instance.value != request.data.get("value"):
# If stream settings changed and m3u_hash_key is different, rehash streams
if instance.key == STREAM_SETTINGS_KEY:
new_value = request.data.get("value", {})
if isinstance(new_value, dict) and isinstance(old_value, dict):
old_hash = old_value.get("m3u_hash_key", "")
new_hash = new_value.get("m3u_hash_key", "")
if old_hash != new_hash:
hash_keys = new_hash.split(",") if isinstance(new_hash, str) else new_hash
rehash_streams.delay(hash_keys)
# If DVR settings changed and pre/post offsets are different, reschedule upcoming recordings
if instance.key == DVR_SETTINGS_KEY:
new_value = request.data.get("value", {})
if isinstance(new_value, dict) and isinstance(old_value, dict):
old_pre = old_value.get("pre_offset_minutes")
new_pre = new_value.get("pre_offset_minutes")
old_post = old_value.get("post_offset_minutes")
new_post = new_value.get("post_offset_minutes")
if old_pre != new_pre or old_post != new_post:
try:
# Prefer async task if Celery is available
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change
@ -86,24 +99,23 @@ class CoreSettingsViewSet(viewsets.ModelViewSet):
# Fallback to synchronous implementation
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change_impl
reschedule_upcoming_recordings_for_offset_change_impl()
except Exception:
pass
return response
def create(self, request, *args, **kwargs):
response = super().create(request, *args, **kwargs)
# If creating DVR pre/post offset settings, also reschedule upcoming recordings
# If creating DVR settings with offset values, reschedule upcoming recordings
try:
key = request.data.get("key")
from core.models import DVR_PRE_OFFSET_MINUTES_KEY, DVR_POST_OFFSET_MINUTES_KEY
if key in (DVR_PRE_OFFSET_MINUTES_KEY, DVR_POST_OFFSET_MINUTES_KEY):
try:
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change
reschedule_upcoming_recordings_for_offset_change.delay()
except Exception:
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change_impl
reschedule_upcoming_recordings_for_offset_change_impl()
if key == DVR_SETTINGS_KEY:
value = request.data.get("value", {})
if isinstance(value, dict) and ("pre_offset_minutes" in value or "post_offset_minutes" in value):
try:
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change
reschedule_upcoming_recordings_for_offset_change.delay()
except Exception:
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change_impl
reschedule_upcoming_recordings_for_offset_change_impl()
except Exception:
pass
return response
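Both handlers now compare fields inside grouped JSON values before firing side effects (rehash, reschedule). That comparison pattern can be factored as a small helper (illustrative only, not part of the diff):

```python
def changed_fields(old, new, fields):
    # If either value is not a dict (legacy data), treat every field as
    # changed so side effects still fire rather than being silently skipped.
    if not isinstance(old, dict) or not isinstance(new, dict):
        return set(fields)
    return {f for f in fields if old.get(f) != new.get(f)}
```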
@ -111,13 +123,13 @@ class CoreSettingsViewSet(viewsets.ModelViewSet):
def check(self, request, *args, **kwargs):
data = request.data
if data.get("key") == NETWORK_ACCESS:
if data.get("key") == NETWORK_ACCESS_KEY:
client_ip = ipaddress.ip_address(get_client_ip(request))
in_network = {}
invalid = []
value = json.loads(data.get("value", "{}"))
value = data.get("value", {})
for key, val in value.items():
in_network[key] = []
cidrs = val.split(",")
@ -142,7 +154,7 @@ class CoreSettingsViewSet(viewsets.ModelViewSet):
},
status=status.HTTP_200_OK,
)
response_data = {
**in_network,
"client_ip": str(client_ip)
@ -161,8 +173,8 @@ class ProxySettingsViewSet(viewsets.ViewSet):
"""Get or create the proxy settings CoreSettings entry"""
try:
settings_obj = CoreSettings.objects.get(key=PROXY_SETTINGS_KEY)
settings_data = json.loads(settings_obj.value)
except (CoreSettings.DoesNotExist, json.JSONDecodeError):
settings_data = settings_obj.value
except CoreSettings.DoesNotExist:
# Create default settings
settings_data = {
"buffering_timeout": 15,
@ -175,7 +187,7 @@ class ProxySettingsViewSet(viewsets.ViewSet):
key=PROXY_SETTINGS_KEY,
defaults={
"name": "Proxy Settings",
"value": json.dumps(settings_data)
"value": settings_data
}
)
return settings_obj, settings_data
@ -197,8 +209,8 @@ class ProxySettingsViewSet(viewsets.ViewSet):
serializer = ProxySettingsSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
# Update the JSON data
settings_obj.value = json.dumps(serializer.validated_data)
# Update the JSON data - store as dict directly
settings_obj.value = serializer.validated_data
settings_obj.save()
return Response(serializer.validated_data)
@ -213,8 +225,8 @@ class ProxySettingsViewSet(viewsets.ViewSet):
serializer = ProxySettingsSerializer(data=updated_data)
serializer.is_valid(raise_exception=True)
# Update the JSON data
settings_obj.value = json.dumps(serializer.validated_data)
# Update the JSON data - store as dict directly
settings_obj.value = serializer.validated_data
settings_obj.save()
return Response(serializer.validated_data)
@ -332,8 +344,8 @@ def rehash_streams_endpoint(request):
"""Trigger the rehash streams task"""
try:
# Get the current hash keys from settings
hash_key_setting = CoreSettings.objects.get(key=STREAM_HASH_KEY)
hash_keys = hash_key_setting.value.split(",")
hash_key = CoreSettings.get_m3u_hash_key()
hash_keys = hash_key.split(",") if isinstance(hash_key, str) else hash_key
# Queue the rehash task
task = rehash_streams.delay(hash_keys)
@ -344,10 +356,10 @@ def rehash_streams_endpoint(request):
"task_id": task.id
}, status=status.HTTP_200_OK)
except CoreSettings.DoesNotExist:
except Exception as e:
return Response({
"success": False,
"message": "Hash key settings not found"
"message": f"Error triggering rehash: {str(e)}"
}, status=status.HTTP_400_BAD_REQUEST)
except Exception as e:


@ -1,13 +1,13 @@
# your_app/management/commands/update_column.py
from django.core.management.base import BaseCommand
from core.models import CoreSettings, NETWORK_ACCESS
from core.models import CoreSettings, NETWORK_ACCESS_KEY
class Command(BaseCommand):
help = "Reset network access settings"
def handle(self, *args, **options):
setting = CoreSettings.objects.get(key=NETWORK_ACCESS)
setting.value = "{}"
setting = CoreSettings.objects.get(key=NETWORK_ACCESS_KEY)
setting.value = {}
setting.save()


@ -0,0 +1,267 @@
# Generated migration to change CoreSettings value field to JSONField and consolidate settings
import json
from django.db import migrations, models
def convert_string_to_json(apps, schema_editor):
"""Convert existing string values to appropriate JSON types before changing column type"""
CoreSettings = apps.get_model("core", "CoreSettings")
for setting in CoreSettings.objects.all():
value = setting.value
if not value:
# Empty/null values become an empty JSON string
setting.value = json.dumps("")
setting.save(update_fields=['value'])
continue
# Try to parse as JSON if it looks like JSON (objects/arrays)
if value.startswith('{') or value.startswith('['):
try:
parsed = json.loads(value)
# Store as JSON string temporarily (column is still CharField)
setting.value = json.dumps(parsed)
setting.save(update_fields=['value'])
continue
except (json.JSONDecodeError, ValueError):
pass
# Try to parse as number
try:
# Check if it's an integer
if '.' not in value and value.lstrip('-').isdigit():
setting.value = json.dumps(int(value))
setting.save(update_fields=['value'])
continue
# Check if it's a float
float_val = float(value)
setting.value = json.dumps(float_val)
setting.save(update_fields=['value'])
continue
except (ValueError, AttributeError):
pass
# Check for booleans
if value.lower() in ('true', 'false', '1', '0', 'yes', 'no', 'on', 'off'):
bool_val = value.lower() in ('true', '1', 'yes', 'on')
setting.value = json.dumps(bool_val)
setting.save(update_fields=['value'])
continue
# Default: store as JSON string
setting.value = json.dumps(value)
setting.save(update_fields=['value'])
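The type-inference order above matters: JSON objects/arrays first, then numbers, then booleans, then plain strings (so `'1'` becomes the integer `1`, never `True`). A pure-function sketch mirroring that order, minus the `json.dumps` round-trip the CharField column requires:

```python
import json

def infer_json_value(raw):
    """Sketch of the migration's type inference for legacy string values."""
    if not raw:
        return ""
    if raw[0] in '{[':
        try:
            return json.loads(raw)
        except ValueError:
            pass
    try:
        if '.' not in raw and raw.lstrip('-').isdigit():
            return int(raw)
        return float(raw)
    except ValueError:
        pass
    if raw.lower() in ('true', 'false', '1', '0', 'yes', 'no', 'on', 'off'):
        return raw.lower() in ('true', '1', 'yes', 'on')
    return raw
```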
def consolidate_settings(apps, schema_editor):
"""Consolidate individual setting rows into grouped JSON objects."""
CoreSettings = apps.get_model("core", "CoreSettings")
# Helper to get setting value
def get_value(key, default=None):
try:
obj = CoreSettings.objects.get(key=key)
return obj.value if obj.value is not None else default
except CoreSettings.DoesNotExist:
return default
# STREAM SETTINGS
stream_settings = {
"default_user_agent": get_value("default-user-agent"),
"default_stream_profile": get_value("default-stream-profile"),
"m3u_hash_key": get_value("m3u-hash-key", ""),
"preferred_region": get_value("preferred-region"),
"auto_import_mapped_files": get_value("auto-import-mapped-files"),
}
CoreSettings.objects.update_or_create(
key="stream_settings",
defaults={"name": "Stream Settings", "value": stream_settings}
)
# DVR SETTINGS
dvr_settings = {
"tv_template": get_value("dvr-tv-template", "TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv"),
"movie_template": get_value("dvr-movie-template", "Movies/{title} ({year}).mkv"),
"tv_fallback_dir": get_value("dvr-tv-fallback-dir", "TV_Shows"),
"tv_fallback_template": get_value("dvr-tv-fallback-template", "TV_Shows/{show}/{start}.mkv"),
"movie_fallback_template": get_value("dvr-movie-fallback-template", "Movies/{start}.mkv"),
"comskip_enabled": bool(get_value("dvr-comskip-enabled", False)),
"comskip_custom_path": get_value("dvr-comskip-custom-path", ""),
"pre_offset_minutes": int(get_value("dvr-pre-offset-minutes", 0) or 0),
"post_offset_minutes": int(get_value("dvr-post-offset-minutes", 0) or 0),
"series_rules": get_value("dvr-series-rules", []),
}
CoreSettings.objects.update_or_create(
key="dvr_settings",
defaults={"name": "DVR Settings", "value": dvr_settings}
)
# BACKUP SETTINGS - using underscore keys (not dashes)
backup_settings = {
"schedule_enabled": get_value("backup_schedule_enabled") if get_value("backup_schedule_enabled") is not None else True,
"schedule_frequency": get_value("backup_schedule_frequency") or "daily",
"schedule_time": get_value("backup_schedule_time") or "03:00",
"schedule_day_of_week": get_value("backup_schedule_day_of_week") if get_value("backup_schedule_day_of_week") is not None else 0,
"retention_count": get_value("backup_retention_count") if get_value("backup_retention_count") is not None else 3,
"schedule_cron_expression": get_value("backup_schedule_cron_expression") or "",
}
CoreSettings.objects.update_or_create(
key="backup_settings",
defaults={"name": "Backup Settings", "value": backup_settings}
)
# SYSTEM SETTINGS
system_settings = {
"time_zone": get_value("system-time-zone", "UTC"),
"max_system_events": int(get_value("max-system-events", 100) or 100),
}
CoreSettings.objects.update_or_create(
key="system_settings",
defaults={"name": "System Settings", "value": system_settings}
)
# Rename proxy-settings to proxy_settings (if it exists with old name)
try:
old_proxy = CoreSettings.objects.get(key="proxy-settings")
old_proxy.key = "proxy_settings"
old_proxy.save()
except CoreSettings.DoesNotExist:
pass
# Ensure proxy_settings exists with defaults if not present
proxy_obj, proxy_created = CoreSettings.objects.get_or_create(
key="proxy_settings",
defaults={
"name": "Proxy Settings",
"value": {
"buffering_timeout": 15,
"buffering_speed": 1.0,
"redis_chunk_ttl": 60,
"channel_shutdown_delay": 0,
"channel_init_grace_period": 5,
}
}
)
# Rename network-access to network_access (if it exists with old name)
try:
old_network = CoreSettings.objects.get(key="network-access")
old_network.key = "network_access"
old_network.save()
except CoreSettings.DoesNotExist:
pass
# Ensure network_access exists with defaults if not present
network_obj, network_created = CoreSettings.objects.get_or_create(
key="network_access",
defaults={
"name": "Network Access",
"value": {}
}
)
# Delete old individual setting rows (keep only the new grouped settings)
grouped_keys = ["stream_settings", "dvr_settings", "backup_settings", "system_settings", "proxy_settings", "network_access"]
CoreSettings.objects.exclude(key__in=grouped_keys).delete()
def reverse_migration(apps, schema_editor):
"""Reverse migration: split grouped settings and convert JSON back to strings"""
CoreSettings = apps.get_model("core", "CoreSettings")
# Helper to create individual setting
def create_setting(key, name, value):
# Convert value back to string representation for CharField
if isinstance(value, str):
str_value = value
elif isinstance(value, bool):
str_value = "true" if value else "false"
elif isinstance(value, (int, float)):
str_value = str(value)
elif isinstance(value, (dict, list)):
str_value = json.dumps(value)
elif value is None:
str_value = ""
else:
str_value = str(value)
CoreSettings.objects.update_or_create(
key=key,
defaults={"name": name, "value": str_value}
)
# Split stream_settings
try:
stream = CoreSettings.objects.get(key="stream_settings")
if isinstance(stream.value, dict):
create_setting("default_user_agent", "Default User Agent", stream.value.get("default_user_agent"))
create_setting("default_stream_profile", "Default Stream Profile", stream.value.get("default_stream_profile"))
create_setting("stream_hash_key", "Stream Hash Key", stream.value.get("m3u_hash_key", ""))
create_setting("preferred_region", "Preferred Region", stream.value.get("preferred_region"))
create_setting("auto_import_mapped_files", "Auto Import Mapped Files", stream.value.get("auto_import_mapped_files"))
stream.delete()
except CoreSettings.DoesNotExist:
pass
# Split dvr_settings
try:
dvr = CoreSettings.objects.get(key="dvr_settings")
if isinstance(dvr.value, dict):
create_setting("dvr_tv_template", "DVR TV Template", dvr.value.get("tv_template", "TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv"))
create_setting("dvr_movie_template", "DVR Movie Template", dvr.value.get("movie_template", "Movies/{title} ({year}).mkv"))
create_setting("dvr_tv_fallback_dir", "DVR TV Fallback Dir", dvr.value.get("tv_fallback_dir", "TV_Shows"))
create_setting("dvr_tv_fallback_template", "DVR TV Fallback Template", dvr.value.get("tv_fallback_template", "TV_Shows/{show}/{start}.mkv"))
create_setting("dvr_movie_fallback_template", "DVR Movie Fallback Template", dvr.value.get("movie_fallback_template", "Movies/{start}.mkv"))
create_setting("dvr_comskip_enabled", "DVR Comskip Enabled", dvr.value.get("comskip_enabled", False))
create_setting("dvr_comskip_custom_path", "DVR Comskip Custom Path", dvr.value.get("comskip_custom_path", ""))
create_setting("dvr_pre_offset_minutes", "DVR Pre Offset Minutes", dvr.value.get("pre_offset_minutes", 0))
create_setting("dvr_post_offset_minutes", "DVR Post Offset Minutes", dvr.value.get("post_offset_minutes", 0))
create_setting("dvr_series_rules", "DVR Series Rules", dvr.value.get("series_rules", []))
dvr.delete()
except CoreSettings.DoesNotExist:
pass
# Split backup_settings
try:
backup = CoreSettings.objects.get(key="backup_settings")
if isinstance(backup.value, dict):
create_setting("backup_schedule_enabled", "Backup Schedule Enabled", backup.value.get("schedule_enabled", False))
create_setting("backup_schedule_frequency", "Backup Schedule Frequency", backup.value.get("schedule_frequency", "weekly"))
create_setting("backup_schedule_time", "Backup Schedule Time", backup.value.get("schedule_time", "02:00"))
create_setting("backup_schedule_day_of_week", "Backup Schedule Day of Week", backup.value.get("schedule_day_of_week", 0))
create_setting("backup_retention_count", "Backup Retention Count", backup.value.get("retention_count", 7))
create_setting("backup_schedule_cron_expression", "Backup Schedule Cron Expression", backup.value.get("schedule_cron_expression", ""))
backup.delete()
except CoreSettings.DoesNotExist:
pass
# Split system_settings
try:
system = CoreSettings.objects.get(key="system_settings")
if isinstance(system.value, dict):
create_setting("system_time_zone", "System Time Zone", system.value.get("time_zone", "UTC"))
create_setting("max_system_events", "Max System Events", system.value.get("max_system_events", 100))
system.delete()
except CoreSettings.DoesNotExist:
pass
class Migration(migrations.Migration):
dependencies = [
('core', '0019_add_vlc_stream_profile'),
]
operations = [
# First, convert all data to valid JSON strings while column is still CharField
migrations.RunPython(convert_string_to_json, migrations.RunPython.noop),
# Then change the field type to JSONField
migrations.AlterField(
model_name='coresettings',
name='value',
field=models.JSONField(blank=True, default=dict),
),
# Finally, consolidate individual settings into grouped JSON objects
migrations.RunPython(consolidate_settings, reverse_migration),
]


@ -1,4 +1,7 @@
# core/models.py
from shlex import split as shlex_split
from django.conf import settings
from django.db import models
from django.utils.text import slugify
@ -133,7 +136,7 @@ class StreamProfile(models.Model):
# Split the command and iterate through each part to apply replacements
cmd = [self.command] + [
self._replace_in_part(part, replacements)
for part in self.parameters.split()
for part in shlex_split(self.parameters) # use shlex to handle quoted strings
]
return cmd
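The switch from `str.split` to `shlex.split` matters whenever a profile parameter contains a quoted value with spaces. A quick comparison (the ffmpeg-style parameters are illustrative):

```python
from shlex import split as shlex_split

params = '-i {streamurl} -metadata title="My Channel" -c copy'

naive_tokens = params.split()       # breaks the quoted value in two tokens
shlex_tokens = shlex_split(params)  # keeps 'title=My Channel' as one argument
```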
@ -145,24 +148,13 @@ class StreamProfile(models.Model):
return part
DEFAULT_USER_AGENT_KEY = slugify("Default User-Agent")
DEFAULT_STREAM_PROFILE_KEY = slugify("Default Stream Profile")
STREAM_HASH_KEY = slugify("M3U Hash Key")
PREFERRED_REGION_KEY = slugify("Preferred Region")
AUTO_IMPORT_MAPPED_FILES = slugify("Auto-Import Mapped Files")
NETWORK_ACCESS = slugify("Network Access")
PROXY_SETTINGS_KEY = slugify("Proxy Settings")
DVR_TV_TEMPLATE_KEY = slugify("DVR TV Template")
DVR_MOVIE_TEMPLATE_KEY = slugify("DVR Movie Template")
DVR_SERIES_RULES_KEY = slugify("DVR Series Rules")
DVR_TV_FALLBACK_DIR_KEY = slugify("DVR TV Fallback Dir")
DVR_TV_FALLBACK_TEMPLATE_KEY = slugify("DVR TV Fallback Template")
DVR_MOVIE_FALLBACK_TEMPLATE_KEY = slugify("DVR Movie Fallback Template")
DVR_COMSKIP_ENABLED_KEY = slugify("DVR Comskip Enabled")
DVR_COMSKIP_CUSTOM_PATH_KEY = slugify("DVR Comskip Custom Path")
DVR_PRE_OFFSET_MINUTES_KEY = slugify("DVR Pre-Offset Minutes")
DVR_POST_OFFSET_MINUTES_KEY = slugify("DVR Post-Offset Minutes")
SYSTEM_TIME_ZONE_KEY = slugify("System Time Zone")
# Setting group keys
STREAM_SETTINGS_KEY = "stream_settings"
DVR_SETTINGS_KEY = "dvr_settings"
BACKUP_SETTINGS_KEY = "backup_settings"
PROXY_SETTINGS_KEY = "proxy_settings"
NETWORK_ACCESS_KEY = "network_access"
SYSTEM_SETTINGS_KEY = "system_settings"
class CoreSettings(models.Model):
@ -173,208 +165,166 @@ class CoreSettings(models.Model):
name = models.CharField(
max_length=255,
)
value = models.CharField(
max_length=255,
value = models.JSONField(
default=dict,
blank=True,
)
def __str__(self):
return "Core Settings"
# Helper methods to get/set grouped settings
@classmethod
def _get_group(cls, key, defaults=None):
"""Get a settings group, returning defaults if not found."""
try:
return cls.objects.get(key=key).value or (defaults or {})
except cls.DoesNotExist:
return defaults or {}
@classmethod
def _update_group(cls, key, name, updates):
"""Update specific fields in a settings group."""
obj, created = cls.objects.get_or_create(
key=key,
defaults={"name": name, "value": {}}
)
current = obj.value if isinstance(obj.value, dict) else {}
current.update(updates)
obj.value = current
obj.save()
return current
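The `_update_group` merge semantics (create if absent, tolerate a non-dict stored value, shallow-merge updates) can be shown with a plain-dict analogue, no ORM required:

```python
def update_group(store, key, updates):
    # In-memory analogue of CoreSettings._update_group: merge `updates` into
    # the dict stored under `key`, replacing a missing or non-dict value.
    current = store.get(key)
    if not isinstance(current, dict):
        current = {}
    current.update(updates)
    store[key] = current
    return current
```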
# Stream Settings
@classmethod
def get_stream_settings(cls):
"""Get all stream-related settings."""
return cls._get_group(STREAM_SETTINGS_KEY, {
"default_user_agent": None,
"default_stream_profile": None,
"m3u_hash_key": "",
"preferred_region": None,
"auto_import_mapped_files": None,
})
@classmethod
def get_default_user_agent_id(cls):
"""Retrieve a system profile by name (or return None if not found)."""
return cls.objects.get(key=DEFAULT_USER_AGENT_KEY).value
return cls.get_stream_settings().get("default_user_agent")
@classmethod
def get_default_stream_profile_id(cls):
return cls.objects.get(key=DEFAULT_STREAM_PROFILE_KEY).value
return cls.get_stream_settings().get("default_stream_profile")
@classmethod
def get_m3u_hash_key(cls):
return cls.objects.get(key=STREAM_HASH_KEY).value
return cls.get_stream_settings().get("m3u_hash_key", "")
@classmethod
def get_preferred_region(cls):
"""Retrieve the preferred region setting (or return None if not found)."""
try:
return cls.objects.get(key=PREFERRED_REGION_KEY).value
except cls.DoesNotExist:
return None
return cls.get_stream_settings().get("preferred_region")
@classmethod
def get_auto_import_mapped_files(cls):
"""Retrieve the preferred region setting (or return None if not found)."""
try:
return cls.objects.get(key=AUTO_IMPORT_MAPPED_FILES).value
except cls.DoesNotExist:
return None
return cls.get_stream_settings().get("auto_import_mapped_files")
# DVR Settings
@classmethod
def get_proxy_settings(cls):
"""Retrieve proxy settings as dict (or return defaults if not found)."""
try:
import json
settings_json = cls.objects.get(key=PROXY_SETTINGS_KEY).value
return json.loads(settings_json)
except (cls.DoesNotExist, json.JSONDecodeError):
# Return defaults if not found or invalid JSON
return {
"buffering_timeout": 15,
"buffering_speed": 1.0,
"redis_chunk_ttl": 60,
"channel_shutdown_delay": 0,
"channel_init_grace_period": 5,
}
def get_dvr_settings(cls):
"""Get all DVR-related settings."""
return cls._get_group(DVR_SETTINGS_KEY, {
"tv_template": "TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv",
"movie_template": "Movies/{title} ({year}).mkv",
"tv_fallback_dir": "TV_Shows",
"tv_fallback_template": "TV_Shows/{show}/{start}.mkv",
"movie_fallback_template": "Movies/{start}.mkv",
"comskip_enabled": False,
"comskip_custom_path": "",
"pre_offset_minutes": 0,
"post_offset_minutes": 0,
"series_rules": [],
})
@classmethod
def get_dvr_tv_template(cls):
try:
return cls.objects.get(key=DVR_TV_TEMPLATE_KEY).value
except cls.DoesNotExist:
# Default: relative to recordings root (/data/recordings)
return "TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv"
return cls.get_dvr_settings().get("tv_template", "TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv")
@classmethod
def get_dvr_movie_template(cls):
try:
return cls.objects.get(key=DVR_MOVIE_TEMPLATE_KEY).value
except cls.DoesNotExist:
return "Movies/{title} ({year}).mkv"
return cls.get_dvr_settings().get("movie_template", "Movies/{title} ({year}).mkv")
@classmethod
def get_dvr_tv_fallback_dir(cls):
"""Folder name to use when a TV episode has no season/episode information.
Defaults to 'TV_Shows' to match existing behavior but can be overridden in settings.
"""
try:
return cls.objects.get(key=DVR_TV_FALLBACK_DIR_KEY).value or "TV_Shows"
except cls.DoesNotExist:
return "TV_Shows"
return cls.get_dvr_settings().get("tv_fallback_dir", "TV_Shows")
@classmethod
def get_dvr_tv_fallback_template(cls):
"""Full path template used when season/episode are missing for a TV airing."""
return cls.get_dvr_settings().get("tv_fallback_template", "TV_Shows/{show}/{start}.mkv")
@classmethod
def get_dvr_movie_fallback_template(cls):
"""Full path template used when movie metadata is incomplete."""
return cls.get_dvr_settings().get("movie_fallback_template", "Movies/{start}.mkv")
@classmethod
def get_dvr_comskip_enabled(cls):
"""Return True when comskip post-processing is enabled."""
return bool(cls.get_dvr_settings().get("comskip_enabled", False))
@classmethod
def get_dvr_comskip_custom_path(cls):
"""Return configured comskip.ini path or empty string if unset."""
return cls.get_dvr_settings().get("comskip_custom_path", "")
@classmethod
def set_dvr_comskip_custom_path(cls, path: str | None):
"""Persist the comskip.ini path setting, normalizing nulls to empty string."""
value = (path or "").strip()
cls._update_group(DVR_SETTINGS_KEY, "DVR Settings", {"comskip_custom_path": value})
return value
@classmethod
def get_dvr_pre_offset_minutes(cls):
"""Minutes to start recording before scheduled start (default 0)."""
return int(cls.get_dvr_settings().get("pre_offset_minutes", 0) or 0)
@classmethod
def get_dvr_post_offset_minutes(cls):
"""Minutes to stop recording after scheduled end (default 0)."""
return int(cls.get_dvr_settings().get("post_offset_minutes", 0) or 0)
@classmethod
def get_dvr_series_rules(cls):
"""Return list of series recording rules. Each: {tvg_id, title, mode: 'all'|'new'}"""
return cls.get_dvr_settings().get("series_rules", [])
@classmethod
def set_dvr_series_rules(cls, rules):
cls._update_group(DVR_SETTINGS_KEY, "DVR Settings", {"series_rules": rules})
return rules
# Proxy Settings
@classmethod
def get_proxy_settings(cls):
"""Get proxy settings."""
return cls._get_group(PROXY_SETTINGS_KEY, {
"buffering_timeout": 15,
"buffering_speed": 1.0,
"redis_chunk_ttl": 60,
"channel_shutdown_delay": 0,
"channel_init_grace_period": 5,
})
# System Settings
@classmethod
def get_system_settings(cls):
"""Get all system-related settings."""
return cls._get_group(SYSTEM_SETTINGS_KEY, {
"time_zone": getattr(settings, "TIME_ZONE", "UTC") or "UTC",
"max_system_events": 100,
})
@classmethod
def get_system_time_zone(cls):
return cls.get_system_settings().get("time_zone") or getattr(settings, "TIME_ZONE", "UTC") or "UTC"
@classmethod
def set_system_time_zone(cls, tz_name: str | None):
value = (tz_name or "").strip() or getattr(settings, "TIME_ZONE", "UTC") or "UTC"
cls._update_group(SYSTEM_SETTINGS_KEY, "System Settings", {"time_zone": value})
return value
class SystemEvent(models.Model):

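The grouped-settings pattern above stores related options as one JSON document per key and merges the stored values over a defaults dict. The `_get_group` helper itself is not shown in this diff, so the following is a hedged, standalone sketch (plain Python dict standing in for the Django ORM lookup) of the assumed read-with-defaults behavior:

```python
import json

# Simulated settings table: one JSON document per group key (hypothetical data).
_DB = {"dvr_settings": json.dumps({"comskip_enabled": True, "pre_offset_minutes": 2})}


def get_group(key, defaults):
    """Return the stored group merged over defaults; fall back to defaults on any error."""
    try:
        stored = json.loads(_DB[key])
        if not isinstance(stored, dict):
            return dict(defaults)
        # Stored values win; missing keys come from defaults.
        return {**defaults, **stored}
    except (KeyError, json.JSONDecodeError):
        return dict(defaults)


dvr = get_group(
    "dvr_settings",
    {"comskip_enabled": False, "pre_offset_minutes": 0, "series_rules": []},
)
print(dvr)
```

This is why each getter above can be a one-liner: unknown or partially-populated groups still yield every expected key.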
View file

@ -3,7 +3,7 @@ import json
import ipaddress
from rest_framework import serializers
from .models import CoreSettings, UserAgent, StreamProfile, NETWORK_ACCESS
from .models import CoreSettings, UserAgent, StreamProfile, NETWORK_ACCESS_KEY
class UserAgentSerializer(serializers.ModelSerializer):
@ -40,10 +40,10 @@ class CoreSettingsSerializer(serializers.ModelSerializer):
fields = "__all__"
def update(self, instance, validated_data):
if instance.key == NETWORK_ACCESS:
if instance.key == NETWORK_ACCESS_KEY:
errors = False
invalid = {}
value = json.loads(validated_data.get("value"))
value = validated_data.get("value")
for key, val in value.items():
cidrs = val.split(",")
for cidr in cidrs:

View file

@ -417,8 +417,12 @@ def log_system_event(event_type, channel_id=None, channel_name=None, **details):
# Get max events from settings (default 100)
try:
max_events_setting = CoreSettings.objects.filter(key='max-system-events').first()
max_events = int(max_events_setting.value) if max_events_setting else 100
from .models import CoreSettings
system_settings = CoreSettings.objects.filter(key='system_settings').first()
if system_settings and isinstance(system_settings.value, dict):
max_events = int(system_settings.value.get('max_system_events', 100))
else:
max_events = 100
except Exception:
max_events = 100

View file

@ -1,5 +1,6 @@
# core/views.py
import os
from shlex import split as shlex_split
import sys
import subprocess
import logging
@ -131,7 +132,7 @@ def stream_view(request, channel_uuid):
stream_profile = channel.stream_profile
if not stream_profile:
logger.error("No stream profile set for channel ID=%s, using default", channel.id)
stream_profile = StreamProfile.objects.get(id=CoreSettings.objects.get(key="default-stream-profile").value)
stream_profile = StreamProfile.objects.get(id=CoreSettings.get_default_stream_profile_id())
logger.debug("Stream profile used: %s", stream_profile.name)
@ -144,7 +145,7 @@ def stream_view(request, channel_uuid):
logger.debug("Formatted parameters: %s", parameters)
# Build the final command.
cmd = [stream_profile.command] + parameters.split()
cmd = [stream_profile.command] + shlex_split(parameters)
logger.debug("Executing command: %s", cmd)
try:

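The switch from `parameters.split()` to `shlex_split(parameters)` above matters whenever a stream profile's parameter string contains quoted arguments with embedded spaces. A quick illustration (the parameter string is a made-up example, not from the source):

```python
from shlex import split as shlex_split

parameters = '-i "http://example.com/stream" -metadata title="My Channel"'

# Naive whitespace split tears quoted arguments apart and keeps the quote chars.
naive = parameters.split()

# shlex honors shell-style quoting, so "My Channel" stays one argv entry.
safe = shlex_split(parameters)
print(safe)
```

With the naive split, the subprocess would receive `title="My` and `Channel"` as two separate arguments.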
View file

@ -3,7 +3,7 @@ import json
import ipaddress
from django.http import JsonResponse
from django.core.exceptions import ValidationError
from core.models import CoreSettings, NETWORK_ACCESS
from core.models import CoreSettings, NETWORK_ACCESS_KEY
def json_error_response(message, status=400):
@ -39,7 +39,10 @@ def get_client_ip(request):
def network_access_allowed(request, settings_key):
network_access = json.loads(CoreSettings.objects.get(key=NETWORK_ACCESS).value)
try:
network_access = CoreSettings.objects.get(key=NETWORK_ACCESS_KEY).value
except CoreSettings.DoesNotExist:
network_access = {}
cidrs = (
network_access[settings_key].split(",")

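The network-access check above splits a comma-separated CIDR list per settings key. The matching step it implies can be sketched with the standard `ipaddress` module; the function name here is hypothetical, not the project's actual helper:

```python
import ipaddress


def ip_allowed(client_ip, cidr_csv):
    """Return True if client_ip falls inside any network in the comma-separated list."""
    ip = ipaddress.ip_address(client_ip)
    for cidr in cidr_csv.split(","):
        cidr = cidr.strip()
        if not cidr:
            continue
        # strict=False tolerates host bits set in the network spec (e.g. 192.168.1.5/24).
        if ip in ipaddress.ip_network(cidr, strict=False):
            return True
    return False


print(ip_allowed("192.168.1.20", "10.0.0.0/8, 192.168.1.0/24"))  # True
```

Note the empty-dict fallback added in the diff: with no stored `network_access` setting, the lookup of `settings_key` must also be guarded or every request would raise `KeyError`.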
View file

@ -4,27 +4,44 @@ ENV DEBIAN_FRONTEND=noninteractive
ENV VIRTUAL_ENV=/dispatcharrpy
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# --- Install Python 3.13 and system dependencies ---
# --- Install Python 3.13 and build dependencies ---
# Note: Hardware acceleration (VA-API, VDPAU, NVENC) already included in base ffmpeg image
RUN apt-get update && apt-get install --no-install-recommends -y \
ca-certificates software-properties-common gnupg2 curl wget \
&& add-apt-repository ppa:deadsnakes/ppa \
&& apt-get update \
&& apt-get install --no-install-recommends -y \
python3.13 python3.13-dev python3.13-venv \
python3.13 python3.13-dev python3.13-venv libpython3.13 \
python-is-python3 python3-pip \
libpcre3 libpcre3-dev libpq-dev procps \
build-essential gcc pciutils \
libpcre3 libpcre3-dev libpq-dev procps pciutils \
nginx streamlink comskip \
vlc-bin vlc-plugin-base \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
build-essential gcc g++ gfortran libopenblas-dev libopenblas0 ninja-build
# --- Create Python virtual environment ---
RUN python3.13 -m venv $VIRTUAL_ENV && $VIRTUAL_ENV/bin/pip install --upgrade pip
# --- Install Python dependencies ---
COPY requirements.txt /tmp/requirements.txt
RUN $VIRTUAL_ENV/bin/pip install --no-cache-dir -r /tmp/requirements.txt && rm /tmp/requirements.txt
RUN $VIRTUAL_ENV/bin/pip install --no-cache-dir -r /tmp/requirements.txt && \
rm /tmp/requirements.txt
# --- Build legacy NumPy wheel for old hardware (store for runtime switching) ---
RUN $VIRTUAL_ENV/bin/pip install --no-cache-dir build && \
cd /tmp && \
$VIRTUAL_ENV/bin/pip download --no-binary numpy --no-deps numpy && \
tar -xzf numpy-*.tar.gz && \
cd numpy-*/ && \
$VIRTUAL_ENV/bin/python -m build --wheel -Csetup-args=-Dcpu-baseline="none" -Csetup-args=-Dcpu-dispatch="none" && \
mv dist/*.whl /opt/ && \
cd / && rm -rf /tmp/numpy-* /tmp/*.tar.gz && \
$VIRTUAL_ENV/bin/pip uninstall -y build
# --- Clean up build dependencies to reduce image size ---
RUN apt-get remove -y build-essential gcc g++ gfortran libopenblas-dev libpcre3-dev python3.13-dev ninja-build && \
apt-get autoremove -y --purge && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /root/.cache /tmp/*
# --- Set up Redis 7.x ---
RUN curl -fsSL https://packages.redis.io/gpg | gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg && \

View file

@ -14,6 +14,10 @@ services:
- REDIS_HOST=localhost
- CELERY_BROKER_URL=redis://localhost:6379/0
- DISPATCHARR_LOG_LEVEL=info
# Legacy CPU Support (Optional)
# Uncomment to enable legacy NumPy build for older CPUs (circa 2009)
# that lack support for newer baseline CPU features
#- USE_LEGACY_NUMPY=true
# Process Priority Configuration (Optional)
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
# Negative values require cap_add: SYS_NICE (uncomment below)

View file

@ -18,6 +18,10 @@ services:
- REDIS_HOST=localhost
- CELERY_BROKER_URL=redis://localhost:6379/0
- DISPATCHARR_LOG_LEVEL=trace
# Legacy CPU Support (Optional)
# Uncomment to enable legacy NumPy build for older CPUs (circa 2009)
# that lack support for newer baseline CPU features
#- USE_LEGACY_NUMPY=true
# Process Priority Configuration (Optional)
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
# Negative values require cap_add: SYS_NICE (uncomment below)

View file

@ -17,6 +17,10 @@ services:
- REDIS_HOST=localhost
- CELERY_BROKER_URL=redis://localhost:6379/0
- DISPATCHARR_LOG_LEVEL=debug
# Legacy CPU Support (Optional)
# Uncomment to enable legacy NumPy build for older CPUs (circa 2009)
# that lack support for newer baseline CPU features
#- USE_LEGACY_NUMPY=true
# Process Priority Configuration (Optional)
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
# Negative values require cap_add: SYS_NICE (uncomment below)

View file

@ -17,6 +17,10 @@ services:
- REDIS_HOST=redis
- CELERY_BROKER_URL=redis://redis:6379/0
- DISPATCHARR_LOG_LEVEL=info
# Legacy CPU Support (Optional)
# Uncomment to enable legacy NumPy build for older CPUs (circa 2009)
# that lack support for newer baseline CPU features
#- USE_LEGACY_NUMPY=true
# Process Priority Configuration (Optional)
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
# Negative values require cap_add: SYS_NICE (uncomment below)

View file

@ -27,6 +27,18 @@ echo_with_timestamp() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1"
}
# --- NumPy version switching for legacy hardware ---
if [ "$USE_LEGACY_NUMPY" = "true" ]; then
# Check if NumPy was compiled with baseline support
if /dispatcharrpy/bin/python -c "import numpy; numpy.show_config()" 2>&1 | grep -qi "baseline"; then
echo_with_timestamp "🔧 Switching to legacy NumPy (no CPU baseline)..."
/dispatcharrpy/bin/pip install --no-cache-dir --force-reinstall --no-deps /opt/numpy-*.whl
echo_with_timestamp "✅ Legacy NumPy installed"
else
echo_with_timestamp "✅ Legacy NumPy (no baseline) already installed, skipping reinstallation"
fi
fi
# Set PostgreSQL environment variables
export POSTGRES_DB=${POSTGRES_DB:-dispatcharr}
export POSTGRES_USER=${POSTGRES_USER:-dispatch}

View file

@ -37,6 +37,7 @@ http-keepalive = 1
buffer-size = 65536 # Increase buffer for large payloads
post-buffering = 4096 # Reduce buffering for real-time streaming
http-timeout = 600 # Prevent disconnects from long streams
socket-timeout = 600 # Prevent write timeouts when client buffers
lazy-apps = true # Improve memory efficiency
# Async mode (use gevent for high concurrency)
@ -58,4 +59,4 @@ logformat-strftime = true
log-date = %%Y-%%m-%%d %%H:%%M:%%S,000
# Use formatted time with environment variable for log level
log-format = %(ftime) $(DISPATCHARR_LOG_LEVEL) uwsgi.requests Worker ID: %(wid) %(method) %(status) %(uri) %(msecs)ms
log-buffering = 1024 # Add buffer size limit for logging
log-buffering = 1024 # Add buffer size limit for logging

View file

@ -12,6 +12,7 @@
"@dnd-kit/modifiers": "^9.0.0",
"@dnd-kit/sortable": "^10.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@hookform/resolvers": "^5.2.2",
"@mantine/charts": "~8.0.1",
"@mantine/core": "~8.0.1",
"@mantine/dates": "~8.0.1",
@ -22,13 +23,13 @@
"@tanstack/react-table": "^8.21.2",
"allotment": "^1.20.4",
"dayjs": "^1.11.13",
"formik": "^2.4.6",
"hls.js": "^1.5.20",
"lucide-react": "^0.511.0",
"mpegts.js": "^1.8.0",
"react": "^19.1.0",
"react-dom": "^19.1.0",
"react-draggable": "^4.4.6",
"react-hook-form": "^7.70.0",
"react-pro-sidebar": "^1.1.0",
"react-router-dom": "^7.3.0",
"react-virtualized": "^9.22.6",
@ -1248,6 +1249,18 @@
"integrity": "sha512-aGTxbpbg8/b5JfU1HXSrbH3wXZuLPJcNEcZQFMxLs3oSzgtVu6nFPkbbGGUvBcUjKV2YyB9Wxxabo+HEH9tcRQ==",
"license": "MIT"
},
"node_modules/@hookform/resolvers": {
"version": "5.2.2",
"resolved": "https://registry.npmjs.org/@hookform/resolvers/-/resolvers-5.2.2.tgz",
"integrity": "sha512-A/IxlMLShx3KjV/HeTcTfaMxdwy690+L/ZADoeaTltLx+CVuzkeVIPuybK3jrRfw7YZnmdKsVVHAlEPIAEUNlA==",
"license": "MIT",
"dependencies": {
"@standard-schema/utils": "^0.3.0"
},
"peerDependencies": {
"react-hook-form": "^7.55.0"
}
},
"node_modules/@humanfs/core": {
"version": "0.19.1",
"resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz",
@ -1776,6 +1789,12 @@
"win32"
]
},
"node_modules/@standard-schema/utils": {
"version": "0.3.0",
"resolved": "https://registry.npmjs.org/@standard-schema/utils/-/utils-0.3.0.tgz",
"integrity": "sha512-e7Mew686owMaPJVNNLs55PUvgz371nKgwsc4vxE49zsODpJEnxgxRo2y/OKrqueavXgZNMDVj3DdHFlaSAeU8g==",
"license": "MIT"
},
"node_modules/@swc/core": {
"name": "@swc/wasm",
"version": "1.13.20",
@ -2008,18 +2027,6 @@
"dev": true,
"license": "MIT"
},
"node_modules/@types/hoist-non-react-statics": {
"version": "3.3.7",
"resolved": "https://registry.npmjs.org/@types/hoist-non-react-statics/-/hoist-non-react-statics-3.3.7.tgz",
"integrity": "sha512-PQTyIulDkIDro8P+IHbKCsw7U2xxBYflVzW/FgWdCAePD9xGSidgA76/GeJ6lBKoblyhf9pBY763gbrN+1dI8g==",
"license": "MIT",
"dependencies": {
"hoist-non-react-statics": "^3.3.0"
},
"peerDependencies": {
"@types/react": "*"
}
},
"node_modules/@types/json-schema": {
"version": "7.0.15",
"resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz",
@ -2037,6 +2044,7 @@
"version": "19.2.7",
"resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.7.tgz",
"integrity": "sha512-MWtvHrGZLFttgeEj28VXHxpmwYbor/ATPYbBfSFZEIRK0ecCFLl2Qo55z52Hss+UV9CRN7trSeq1zbgx7YDWWg==",
"devOptional": true,
"license": "MIT",
"dependencies": {
"csstype": "^3.2.2"
@ -2833,15 +2841,6 @@
"dev": true,
"license": "MIT"
},
"node_modules/deepmerge": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/deepmerge/-/deepmerge-2.2.1.tgz",
"integrity": "sha512-R9hc1Xa/NOBi9WRVUWg19rl1UB7Tt4kuPd+thNJgFZoxXsTz7ncaPaeIm+40oSGuP33DfMb4sZt1QIGiJzC4EA==",
"license": "MIT",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/dequal": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/dequal/-/dequal-2.0.3.tgz",
@ -3288,31 +3287,6 @@
"dev": true,
"license": "ISC"
},
"node_modules/formik": {
"version": "2.4.9",
"resolved": "https://registry.npmjs.org/formik/-/formik-2.4.9.tgz",
"integrity": "sha512-5nI94BMnlFDdQRBY4Sz39WkhxajZJ57Fzs8wVbtsQlm5ScKIR1QLYqv/ultBnobObtlUyxpxoLodpixrsf36Og==",
"funding": [
{
"type": "individual",
"url": "https://opencollective.com/formik"
}
],
"license": "Apache-2.0",
"dependencies": {
"@types/hoist-non-react-statics": "^3.3.1",
"deepmerge": "^2.1.1",
"hoist-non-react-statics": "^3.3.0",
"lodash": "^4.17.21",
"lodash-es": "^4.17.21",
"react-fast-compare": "^2.0.1",
"tiny-warning": "^1.0.2",
"tslib": "^2.0.0"
},
"peerDependencies": {
"react": ">=16.8.0"
}
},
"node_modules/fsevents": {
"version": "2.3.3",
"resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz",
@ -3751,12 +3725,6 @@
"integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==",
"license": "MIT"
},
"node_modules/lodash-es": {
"version": "4.17.22",
"resolved": "https://registry.npmjs.org/lodash-es/-/lodash-es-4.17.22.tgz",
"integrity": "sha512-XEawp1t0gxSi9x01glktRZ5HDy0HXqrM0x5pXQM98EaI0NxO6jVM7omDOxsuEo5UIASAnm2bRp1Jt/e0a2XU8Q==",
"license": "MIT"
},
"node_modules/lodash.clamp": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/lodash.clamp/-/lodash.clamp-4.0.3.tgz",
@ -4334,11 +4302,21 @@
"react": ">= 16.8 || 18.0.0"
}
},
"node_modules/react-fast-compare": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/react-fast-compare/-/react-fast-compare-2.0.4.tgz",
"integrity": "sha512-suNP+J1VU1MWFKcyt7RtjiSWUjvidmQSlqu+eHslq+342xCbGTYmC0mEhPCOHxlW0CywylOC1u2DFAT+bv4dBw==",
"license": "MIT"
"node_modules/react-hook-form": {
"version": "7.70.0",
"resolved": "https://registry.npmjs.org/react-hook-form/-/react-hook-form-7.70.0.tgz",
"integrity": "sha512-COOMajS4FI3Wuwrs3GPpi/Jeef/5W1DRR84Yl5/ShlT3dKVFUfoGiEZ/QE6Uw8P4T2/CLJdcTVYKvWBMQTEpvw==",
"license": "MIT",
"engines": {
"node": ">=18.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/react-hook-form"
},
"peerDependencies": {
"react": "^16.8.0 || ^17 || ^18 || ^19"
}
},
"node_modules/react-is": {
"version": "16.13.1",
@ -4923,12 +4901,6 @@
"integrity": "sha512-+FbBPE1o9QAYvviau/qC5SE3caw21q3xkvWKBtja5vgqOWIHHJ3ioaq1VPfn/Szqctz2bU/oYeKd9/z5BL+PVg==",
"license": "MIT"
},
"node_modules/tiny-warning": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/tiny-warning/-/tiny-warning-1.0.3.tgz",
"integrity": "sha512-lBN9zLN/oAf68o3zNXYrdCt1kP8WsiGW8Oo2ka41b2IM5JL/S1CTyX1rW0mb/zSuJun0ZUrDxx4sqvYS2FWzPA==",
"license": "MIT"
},
"node_modules/tinybench": {
"version": "2.9.0",
"resolved": "https://registry.npmjs.org/tinybench/-/tinybench-2.9.0.tgz",

View file

@ -23,11 +23,12 @@
"@mantine/form": "~8.0.1",
"@mantine/hooks": "~8.0.1",
"@mantine/notifications": "~8.0.1",
"@hookform/resolvers": "^5.2.2",
"@tanstack/react-table": "^8.21.2",
"allotment": "^1.20.4",
"dayjs": "^1.11.13",
"formik": "^2.4.6",
"hls.js": "^1.5.20",
"react-hook-form": "^7.70.0",
"lucide-react": "^0.511.0",
"mpegts.js": "^1.8.0",
"react": "^19.1.0",

View file

@ -756,6 +756,7 @@ export const WebsocketProvider = ({ children }) => {
try {
await API.requeryChannels();
await useChannelsStore.getState().fetchChannels();
await fetchChannelProfiles();
console.log('Channels refreshed after bulk creation');
} catch (error) {
console.error(

View file

@ -336,6 +336,15 @@ export default class API {
delete channelData.channel_number;
}
// Add channel profile IDs based on current selection
const selectedProfileId = useChannelsStore.getState().selectedProfileId;
if (selectedProfileId && selectedProfileId !== '0') {
// Specific profile selected - add only to that profile
channelData.channel_profile_ids = [parseInt(selectedProfileId)];
}
// If selectedProfileId is '0' or not set, don't include channel_profile_ids
// which will trigger the backend's default behavior of adding to all profiles
if (channel.logo_file) {
// Must send FormData for file upload
body = new FormData();
@ -2112,6 +2121,24 @@ export default class API {
}
}
static async duplicateChannelProfile(id, name) {
try {
const response = await request(
`${host}/api/channels/profiles/${id}/duplicate/`,
{
method: 'POST',
body: { name },
}
);
useChannelsStore.getState().addProfile(response);
return response;
} catch (e) {
errorNotification(`Failed to duplicate channel profile ${id}`, e);
}
}
static async deleteChannelProfile(id) {
try {
await request(`${host}/api/channels/profiles/${id}/`, {

View file

@ -16,6 +16,7 @@ import useWarningsStore from '../store/warnings';
* @param {string} props.actionKey - Unique key for this type of action (used for suppression)
* @param {Function} props.onSuppressChange - Called when "don't show again" option changes
* @param {string} [props.size='md'] - Size of the modal
* @param {boolean} [props.loading=false] - Whether the confirm button should show loading state
*/
const ConfirmationDialog = ({
opened,
@ -31,6 +32,7 @@ const ConfirmationDialog = ({
zIndex = 1000,
showDeleteFileOption = false,
deleteFileLabel = 'Also delete files from disk',
loading = false,
}) => {
const suppressWarning = useWarningsStore((s) => s.suppressWarning);
const isWarningSuppressed = useWarningsStore((s) => s.isWarningSuppressed);
@ -93,10 +95,16 @@ const ConfirmationDialog = ({
)}
<Group justify="flex-end">
<Button variant="outline" onClick={handleClose}>
<Button variant="outline" onClick={handleClose} disabled={loading}>
{cancelLabel}
</Button>
<Button color="red" onClick={handleConfirm}>
<Button
color="red"
onClick={handleConfirm}
loading={loading}
disabled={loading}
loaderProps={{ type: 'dots' }}
>
{confirmLabel}
</Button>
</Group>

View file

@ -34,7 +34,13 @@ import useLocalStorage from '../../hooks/useLocalStorage';
import useWarningsStore from '../../store/warnings';
import { CustomTable, useTable } from '../tables/CustomTable';
const RowActions = ({ row, handleDownload, handleRestoreClick, handleDeleteClick, downloading }) => {
const RowActions = ({
row,
handleDownload,
handleRestoreClick,
handleDeleteClick,
downloading,
}) => {
return (
<Flex gap={4} wrap="nowrap">
<Tooltip label="Download">
@ -98,7 +104,6 @@ function to24Hour(time12, period) {
return `${String(hours24).padStart(2, '0')}:${String(minutes).padStart(2, '0')}`;
}
// Get default timezone (same as Settings page)
function getDefaultTimeZone() {
try {
@ -116,35 +121,60 @@ function validateCronExpression(expression) {
const parts = expression.trim().split(/\s+/);
if (parts.length !== 5) {
return { valid: false, error: 'Cron expression must have exactly 5 parts: minute hour day month weekday' };
return {
valid: false,
error:
'Cron expression must have exactly 5 parts: minute hour day month weekday',
};
}
const [minute, hour, dayOfMonth, month, dayOfWeek] = parts;
// Validate each part (allowing *, */N steps, ranges, lists, steps)
// Supports: *, */2, 5, 1-5, 1-5/2, 1,3,5, etc.
const cronPartRegex = /^(\*\/\d+|\*|\d+(-\d+)?(\/\d+)?(,\d+(-\d+)?(\/\d+)?)*)$/;
const cronPartRegex =
/^(\*\/\d+|\*|\d+(-\d+)?(\/\d+)?(,\d+(-\d+)?(\/\d+)?)*)$/;
if (!cronPartRegex.test(minute)) {
return { valid: false, error: 'Invalid minute field (0-59, *, or cron syntax)' };
return {
valid: false,
error: 'Invalid minute field (0-59, *, or cron syntax)',
};
}
if (!cronPartRegex.test(hour)) {
return { valid: false, error: 'Invalid hour field (0-23, *, or cron syntax)' };
return {
valid: false,
error: 'Invalid hour field (0-23, *, or cron syntax)',
};
}
if (!cronPartRegex.test(dayOfMonth)) {
return { valid: false, error: 'Invalid day field (1-31, *, or cron syntax)' };
return {
valid: false,
error: 'Invalid day field (1-31, *, or cron syntax)',
};
}
if (!cronPartRegex.test(month)) {
return { valid: false, error: 'Invalid month field (1-12, *, or cron syntax)' };
return {
valid: false,
error: 'Invalid month field (1-12, *, or cron syntax)',
};
}
if (!cronPartRegex.test(dayOfWeek)) {
return { valid: false, error: 'Invalid weekday field (0-6, *, or cron syntax)' };
return {
valid: false,
error: 'Invalid weekday field (0-6, *, or cron syntax)',
};
}
// Additional range validation for numeric values
const validateRange = (value, min, max, name) => {
// Skip if it's * or contains special characters
if (value === '*' || value.includes('/') || value.includes('-') || value.includes(',')) {
if (
value === '*' ||
value.includes('/') ||
value.includes('-') ||
value.includes(',')
) {
return null;
}
const num = parseInt(value, 10);
@ -200,6 +230,8 @@ export default function BackupManager() {
const [restoreConfirmOpen, setRestoreConfirmOpen] = useState(false);
const [deleteConfirmOpen, setDeleteConfirmOpen] = useState(false);
const [selectedBackup, setSelectedBackup] = useState(null);
const [restoring, setRestoring] = useState(false);
const [deleting, setDeleting] = useState(false);
// Read user's preferences from settings
const [timeFormat] = useLocalStorage('time-format', '12h');
@ -482,6 +514,7 @@ export default function BackupManager() {
};
const handleDeleteConfirm = async () => {
setDeleting(true);
try {
await API.deleteBackup(selectedBackup.name);
notifications.show({
@ -497,6 +530,7 @@ export default function BackupManager() {
color: 'red',
});
} finally {
setDeleting(false);
setDeleteConfirmOpen(false);
setSelectedBackup(null);
}
@ -508,11 +542,13 @@ export default function BackupManager() {
};
const handleRestoreConfirm = async () => {
setRestoring(true);
try {
await API.restoreBackup(selectedBackup.name);
notifications.show({
title: 'Success',
message: 'Backup restored successfully. You may need to refresh the page.',
message:
'Backup restored successfully. You may need to refresh the page.',
color: 'green',
});
setTimeout(() => window.location.reload(), 2000);
@ -523,6 +559,7 @@ export default function BackupManager() {
color: 'red',
});
} finally {
setRestoring(false);
setRestoreConfirmOpen(false);
setSelectedBackup(null);
}
@ -555,16 +592,22 @@ export default function BackupManager() {
{/* Schedule Settings */}
<Stack gap="sm">
<Group justify="space-between">
<Text size="sm" fw={500}>Scheduled Backups</Text>
<Text size="sm" fw={500}>
Scheduled Backups
</Text>
<Switch
checked={schedule.enabled}
onChange={(e) => handleScheduleChange('enabled', e.currentTarget.checked)}
onChange={(e) =>
handleScheduleChange('enabled', e.currentTarget.checked)
}
label={schedule.enabled ? 'Enabled' : 'Disabled'}
/>
</Group>
<Group justify="space-between">
<Text size="sm" fw={500}>Advanced (Cron Expression)</Text>
<Text size="sm" fw={500}>
Advanced (Cron Expression)
</Text>
<Switch
checked={advancedMode}
onChange={(e) => setAdvancedMode(e.currentTarget.checked)}
@ -584,18 +627,24 @@ export default function BackupManager() {
<TextInput
label="Cron Expression"
value={schedule.cron_expression}
onChange={(e) => handleScheduleChange('cron_expression', e.currentTarget.value)}
onChange={(e) =>
handleScheduleChange(
'cron_expression',
e.currentTarget.value
)
}
placeholder="0 3 * * *"
description="Format: minute hour day month weekday (e.g., '0 3 * * *' = 3:00 AM daily)"
disabled={!schedule.enabled}
error={cronError}
/>
<Text size="xs" c="dimmed">
Examples: <br />
<code>0 3 * * *</code> - Every day at 3:00 AM<br />
<code>0 2 * * 0</code> - Every Sunday at 2:00 AM<br />
<code>0 */6 * * *</code> - Every 6 hours<br />
<code>30 14 1 * *</code> - 1st of every month at 2:30 PM
Examples: <br /> <code>0 3 * * *</code> - Every day at 3:00
AM
<br /> <code>0 2 * * 0</code> - Every Sunday at 2:00 AM
<br /> <code>0 */6 * * *</code> - Every 6 hours
<br /> <code>30 14 1 * *</code> - 1st of every month at
2:30 PM
</Text>
</Stack>
<Group grow align="flex-end">
@ -603,7 +652,9 @@ export default function BackupManager() {
label="Retention"
description="0 = keep all"
value={schedule.retention_count}
onChange={(value) => handleScheduleChange('retention_count', value || 0)}
onChange={(value) =>
handleScheduleChange('retention_count', value || 0)
}
min={0}
disabled={!schedule.enabled}
/>
@ -623,7 +674,9 @@ export default function BackupManager() {
<Select
label="Frequency"
value={schedule.frequency}
onChange={(value) => handleScheduleChange('frequency', value)}
onChange={(value) =>
handleScheduleChange('frequency', value)
}
data={[
{ value: 'daily', label: 'Daily' },
{ value: 'weekly', label: 'Weekly' },
@ -634,7 +687,9 @@ export default function BackupManager() {
<Select
label="Day"
value={String(schedule.day_of_week)}
onChange={(value) => handleScheduleChange('day_of_week', parseInt(value, 10))}
onChange={(value) =>
handleScheduleChange('day_of_week', parseInt(value, 10))
}
data={DAYS_OF_WEEK}
disabled={!schedule.enabled}
/>
@ -645,7 +700,9 @@ export default function BackupManager() {
label="Hour"
value={displayTime ? displayTime.split(':')[0] : '12'}
onChange={(value) => {
const minute = displayTime ? displayTime.split(':')[1] : '00';
const minute = displayTime
? displayTime.split(':')[1]
: '00';
handleTimeChange12h(`${value}:${minute}`, null);
}}
data={Array.from({ length: 12 }, (_, i) => ({
@ -659,7 +716,9 @@ export default function BackupManager() {
label="Minute"
value={displayTime ? displayTime.split(':')[1] : '00'}
onChange={(value) => {
const hour = displayTime ? displayTime.split(':')[0] : '12';
const hour = displayTime
? displayTime.split(':')[0]
: '12';
handleTimeChange12h(`${hour}:${value}`, null);
}}
data={Array.from({ length: 60 }, (_, i) => ({
@ -684,9 +743,13 @@ export default function BackupManager() {
<>
<Select
label="Hour"
value={schedule.time ? schedule.time.split(':')[0] : '00'}
value={
schedule.time ? schedule.time.split(':')[0] : '00'
}
onChange={(value) => {
const minute = schedule.time ? schedule.time.split(':')[1] : '00';
const minute = schedule.time
? schedule.time.split(':')[1]
: '00';
handleTimeChange24h(`${value}:${minute}`);
}}
data={Array.from({ length: 24 }, (_, i) => ({
@ -698,9 +761,13 @@ export default function BackupManager() {
/>
<Select
label="Minute"
value={schedule.time ? schedule.time.split(':')[1] : '00'}
value={
schedule.time ? schedule.time.split(':')[1] : '00'
}
onChange={(value) => {
const hour = schedule.time ? schedule.time.split(':')[0] : '00';
const hour = schedule.time
? schedule.time.split(':')[0]
: '00';
handleTimeChange24h(`${hour}:${value}`);
}}
data={Array.from({ length: 60 }, (_, i) => ({
@ -718,7 +785,9 @@ export default function BackupManager() {
label="Retention"
description="0 = keep all"
value={schedule.retention_count}
onChange={(value) => handleScheduleChange('retention_count', value || 0)}
onChange={(value) =>
handleScheduleChange('retention_count', value || 0)
}
min={0}
disabled={!schedule.enabled}
/>
@ -737,7 +806,8 @@ export default function BackupManager() {
{/* Timezone info - only show in simple mode */}
{!advancedMode && schedule.enabled && schedule.time && (
<Text size="xs" c="dimmed" mt="xs">
System Timezone: {userTimezone} Backup will run at {schedule.time} {userTimezone}
System Timezone: {userTimezone} Backup will run at{' '}
{schedule.time} {userTimezone}
</Text>
)}
</>
@ -861,7 +931,11 @@ export default function BackupManager() {
>
Cancel
</Button>
<Button onClick={handleUploadSubmit} disabled={!uploadFile} variant="default">
<Button
onClick={handleUploadSubmit}
disabled={!uploadFile}
variant="default"
>
Upload
</Button>
</Group>
@@ -881,6 +955,7 @@ export default function BackupManager() {
cancelLabel="Cancel"
actionKey="restore-backup"
onSuppressChange={suppressWarning}
loading={restoring}
/>
<ConfirmationDialog
@@ -896,6 +971,7 @@ export default function BackupManager() {
cancelLabel="Cancel"
actionKey="delete-backup"
onSuppressChange={suppressWarning}
loading={deleting}
/>
</Stack>
);

@@ -0,0 +1,85 @@
import {
Badge,
Box,
Card,
CardSection,
Group,
Image,
Stack,
Text,
} from '@mantine/core';
import { Calendar, Play, Star } from 'lucide-react';
import React from 'react';
const SeriesCard = ({ series, onClick }) => {
return (
<Card
shadow="sm"
padding="md"
radius="md"
withBorder
style={{ cursor: 'pointer', backgroundColor: '#27272A' }}
onClick={() => onClick(series)}
>
<CardSection>
<Box pos="relative" h={300}>
{series.logo?.url ? (
<Image
src={series.logo.url}
height={300}
alt={series.name}
fit="contain"
/>
) : (
<Box
style={{
backgroundColor: '#404040',
alignItems: 'center',
justifyContent: 'center',
}}
h={300}
display="flex"
>
<Play size={48} color="#666" />
</Box>
)}
{/* Add Series badge in the same position as Movie badge */}
<Badge pos="absolute" bottom={8} left={8} color="purple">
Series
</Badge>
</Box>
</CardSection>
<Stack spacing={8} mt="md">
<Text weight={500}>{series.name}</Text>
<Group spacing={16}>
{series.year && (
<Group spacing={4}>
<Calendar size={14} color="#666" />
<Text size="xs" c="dimmed">
{series.year}
</Text>
</Group>
)}
{series.rating && (
<Group spacing={4}>
<Star size={14} color="#666" />
<Text size="xs" c="dimmed">
{series.rating}
</Text>
</Group>
)}
</Group>
{series.genre && (
<Text size="xs" c="dimmed" lineClamp={1}>
{series.genre}
</Text>
)}
</Stack>
</Card>
);
};
export default SeriesCard;

@@ -0,0 +1,613 @@
import { useLocation } from 'react-router-dom';
import React, { useEffect, useMemo, useState } from 'react';
import useLocalStorage from '../../hooks/useLocalStorage.jsx';
import usePlaylistsStore from '../../store/playlists.jsx';
import useSettingsStore from '../../store/settings.jsx';
import {
ActionIcon,
Badge,
Box,
Card,
Center,
Flex,
Group,
Select,
Stack,
Text,
Tooltip,
} from '@mantine/core';
import {
Gauge,
HardDriveDownload,
HardDriveUpload,
SquareX,
Timer,
Users,
Video,
} from 'lucide-react';
import { toFriendlyDuration } from '../../utils/dateTimeUtils.js';
import { CustomTable, useTable } from '../tables/CustomTable/index.jsx';
import { TableHelper } from '../../helpers/index.jsx';
import logo from '../../images/logo.png';
import { formatBytes, formatSpeed } from '../../utils/networkUtils.js';
import { showNotification } from '../../utils/notificationUtils.js';
import {
connectedAccessor,
durationAccessor,
getBufferingSpeedThreshold,
getChannelStreams,
getLogoUrl,
getM3uAccountsMap,
getMatchingStreamByUrl,
getSelectedStream,
getStartDate,
getStreamOptions,
getStreamsByIds,
switchStream,
} from '../../utils/cards/StreamConnectionCardUtils.js';
// Create a separate component for each channel card to properly handle the hook
const StreamConnectionCard = ({
channel,
clients,
stopClient,
stopChannel,
logos,
channelsByUUID,
}) => {
const location = useLocation();
const [availableStreams, setAvailableStreams] = useState([]);
const [isLoadingStreams, setIsLoadingStreams] = useState(false);
const [activeStreamId, setActiveStreamId] = useState(null);
const [currentM3UProfile, setCurrentM3UProfile] = useState(null); // Add state for current M3U profile
const [data, setData] = useState([]);
const [previewedStream, setPreviewedStream] = useState(null);
// Get M3U account data from the playlists store
const m3uAccounts = usePlaylistsStore((s) => s.playlists);
// Get settings for speed threshold
const settings = useSettingsStore((s) => s.settings);
// Get Date-format from localStorage
const [dateFormatSetting] = useLocalStorage('date-format', 'mdy');
const dateFormat = dateFormatSetting === 'mdy' ? 'MM/DD' : 'DD/MM';
const [tableSize] = useLocalStorage('table-size', 'default');
// Create a map of M3U account IDs to names for quick lookup
const m3uAccountsMap = useMemo(() => {
return getM3uAccountsMap(m3uAccounts);
}, [m3uAccounts]);
// Update M3U profile information when channel data changes
useEffect(() => {
// If the channel data includes M3U profile information, update our state
if (channel.m3u_profile || channel.m3u_profile_name) {
setCurrentM3UProfile({
name:
channel.m3u_profile?.name ||
channel.m3u_profile_name ||
'Default M3U',
});
}
}, [channel.m3u_profile, channel.m3u_profile_name, channel.stream_id]);
// Fetch available streams for this channel
useEffect(() => {
const fetchStreams = async () => {
setIsLoadingStreams(true);
try {
// Get channel ID from UUID
const channelId = channelsByUUID[channel.channel_id];
if (channelId) {
const streamData = await getChannelStreams(channelId);
// Use streams in the order returned by the API without sorting
setAvailableStreams(streamData);
// If we have a channel URL, try to find the matching stream
if (channel.url && streamData.length > 0) {
// Try to find matching stream based on URL
const matchingStream = getMatchingStreamByUrl(
streamData,
channel.url
);
if (matchingStream) {
setActiveStreamId(matchingStream.id.toString());
// If the stream has M3U profile info, save it
if (matchingStream.m3u_profile) {
setCurrentM3UProfile(matchingStream.m3u_profile);
}
}
}
}
} catch (error) {
console.error('Error fetching streams:', error);
} finally {
setIsLoadingStreams(false);
}
};
fetchStreams();
}, [channel.channel_id, channel.url, channelsByUUID]);
useEffect(() => {
setData(
clients
.filter((client) => client.channel.channel_id === channel.channel_id)
.map((client) => ({
id: client.client_id,
...client,
}))
);
}, [clients, channel.channel_id]);
const renderHeaderCell = (header) => {
switch (header.id) {
default:
return (
<Group>
<Text size="sm" name={header.id}>
{header.column.columnDef.header}
</Text>
</Group>
);
}
};
const renderBodyCell = ({ cell, row }) => {
switch (cell.column.id) {
case 'actions':
return (
<Box sx={{ justifyContent: 'right' }}>
<Center>
<Tooltip label="Disconnect client">
<ActionIcon
size="sm"
variant="transparent"
color="red.9"
onClick={() =>
stopClient(
row.original.channel.uuid,
row.original.client_id
)
}
>
<SquareX size="18" />
</ActionIcon>
</Tooltip>
</Center>
</Box>
);
}
};
const checkStreamsAfterChange = (streamId) => {
return async () => {
try {
const channelId = channelsByUUID[channel.channel_id];
if (channelId) {
const updatedStreamData = await getChannelStreams(channelId);
console.log('Channel streams after switch:', updatedStreamData);
// Update current stream information with fresh data
const updatedStream = getSelectedStream(updatedStreamData, streamId);
if (updatedStream?.m3u_profile) {
setCurrentM3UProfile(updatedStream.m3u_profile);
}
}
} catch (error) {
console.error('Error checking streams after switch:', error);
}
};
};
// Handle stream switching
const handleStreamChange = async (streamId) => {
try {
console.log('Switching to stream ID:', streamId);
// Find the selected stream in availableStreams for debugging
const selectedStream = getSelectedStream(availableStreams, streamId);
console.log('Selected stream details:', selectedStream);
// Make sure we're passing the correct ID to the API
const response = await switchStream(channel, streamId);
console.log('Stream switch API response:', response);
// Update the local active stream ID immediately
setActiveStreamId(streamId);
// Update M3U profile information if available in the response
if (response?.m3u_profile) {
setCurrentM3UProfile(response.m3u_profile);
} else if (selectedStream && selectedStream.m3u_profile) {
// Fallback to the profile from the selected stream
setCurrentM3UProfile(selectedStream.m3u_profile);
}
// Show detailed notification with stream name
showNotification({
title: 'Stream switching',
message: `Switching to "${selectedStream?.name}" for ${channel.name}`,
color: 'blue.5',
});
// After a short delay, fetch streams again to confirm the switch
setTimeout(checkStreamsAfterChange(streamId), 2000);
} catch (error) {
console.error('Stream switch error:', error);
showNotification({
title: 'Error switching stream',
message: error.toString(),
color: 'red.5',
});
}
};
const clientsColumns = useMemo(
() => [
{
id: 'expand',
size: 20,
},
{
header: 'IP Address',
accessorKey: 'ip_address',
},
// Updated Connected column with tooltip
{
id: 'connected',
header: 'Connected',
accessorFn: connectedAccessor(dateFormat),
cell: ({ cell }) => (
<Tooltip
label={
cell.getValue() !== 'Unknown'
? `Connected at ${cell.getValue()}`
: 'Unknown connection time'
}
>
<Text size="xs">{cell.getValue()}</Text>
</Tooltip>
),
},
// Update Duration column with tooltip showing exact seconds
{
id: 'duration',
header: 'Duration',
accessorFn: durationAccessor(),
cell: ({ cell, row }) => {
const exactDuration =
row.original.connected_since || row.original.connection_duration;
return (
<Tooltip
label={
exactDuration
? `${exactDuration.toFixed(1)} seconds`
: 'Unknown duration'
}
>
<Text size="xs">{cell.getValue()}</Text>
</Tooltip>
);
},
},
{
id: 'actions',
header: 'Actions',
size: tableSize === 'compact' ? 75 : 100,
},
],
[dateFormat, tableSize]
);
const channelClientsTable = useTable({
...TableHelper.defaultProperties,
columns: clientsColumns,
data,
allRowIds: data.map((client) => client.id),
tableCellProps: () => ({
padding: 4,
borderColor: '#444',
color: '#E0E0E0',
fontSize: '0.85rem',
}),
headerCellRenderFns: {
ip_address: renderHeaderCell,
connected: renderHeaderCell,
duration: renderHeaderCell,
actions: renderHeaderCell,
},
bodyCellRenderFns: {
actions: renderBodyCell,
},
getExpandedRowHeight: (row) => {
return 20 + 28 * row.original.streams.length;
},
expandedRowRenderer: ({ row }) => {
return (
<Box p="xs">
<Group spacing="xs" align="flex-start">
<Text size="xs" fw={500} color="dimmed">
User Agent:
</Text>
<Text size="xs">{row.original.user_agent || 'Unknown'}</Text>
</Group>
</Box>
);
},
mantineExpandButtonProps: ({ row, table }) => ({
size: 'xs',
style: {
transform: row.getIsExpanded() ? 'rotate(180deg)' : 'rotate(-90deg)',
transition: 'transform 0.2s',
},
}),
displayColumnDefOptions: {
'mrt-row-expand': {
size: 15,
header: '',
},
'mrt-row-actions': {
size: 74,
},
},
});
// Get logo URL from the logos object if available
const logoUrl = getLogoUrl(channel.logo_id, logos, previewedStream);
useEffect(() => {
let isMounted = true;
// Only fetch if we have a stream_id and NO channel.name
if (!channel.name && channel.stream_id) {
getStreamsByIds(channel.stream_id).then((streams) => {
if (isMounted && streams && streams.length > 0) {
setPreviewedStream(streams[0]);
}
});
}
return () => {
isMounted = false;
};
}, [channel.name, channel.stream_id]);
const channelName =
channel.name || previewedStream?.name || 'Unnamed Channel';
const uptime = channel.uptime || 0;
const bitrates = channel.bitrates || [];
const totalBytes = channel.total_bytes || 0;
const clientCount = channel.client_count || 0;
const avgBitrate = channel.avg_bitrate || '0 Kbps';
const streamProfileName = channel.stream_profile?.name || 'Unknown Profile';
// Use currentM3UProfile if available, otherwise fall back to channel data
const m3uProfileName =
currentM3UProfile?.name ||
channel.m3u_profile?.name ||
channel.m3u_profile_name ||
'Unknown M3U Profile';
// Create select options for available streams
const streamOptions = getStreamOptions(availableStreams, m3uAccountsMap);
if (location.pathname !== '/stats') {
return <></>;
}
// Safety check - if channel doesn't have required data, don't render
if (!channel || !channel.channel_id) {
return null;
}
return (
<Card
key={channel.channel_id}
shadow="sm"
padding="md"
radius="md"
withBorder
style={{
backgroundColor: '#27272A',
}}
color="#fff"
maw={700}
w={'100%'}
>
<Stack pos="relative">
<Group justify="space-between">
<Box
style={{
alignItems: 'center',
justifyContent: 'center',
}}
w={100}
h={50}
display="flex"
>
<img
src={logoUrl || logo}
style={{
maxWidth: '100%',
maxHeight: '100%',
objectFit: 'contain',
}}
alt="channel logo"
/>
</Box>
<Group>
<Box>
<Tooltip label={getStartDate(uptime)}>
<Center>
<Timer pr={5} />
{toFriendlyDuration(uptime, 'seconds')}
</Center>
</Tooltip>
</Box>
<Center>
<Tooltip label="Stop Channel">
<ActionIcon
variant="transparent"
color="red.9"
onClick={() => stopChannel(channel.channel_id)}
>
<SquareX size="24" />
</ActionIcon>
</Tooltip>
</Center>
</Group>
</Group>
<Flex justify="space-between" align="center">
<Group>
<Text fw={500}>{channelName}</Text>
</Group>
<Tooltip label="Active Stream Profile">
<Group gap={5}>
<Video size="18" />
{streamProfileName}
</Group>
</Tooltip>
</Flex>
{/* Display M3U profile information */}
<Flex justify="flex-end" align="center" mt={-8}>
<Group gap={5}>
<HardDriveUpload size="18" />
<Tooltip label="Current M3U Profile">
<Text size="xs">{m3uProfileName}</Text>
</Tooltip>
</Group>
</Flex>
{/* Add stream selection dropdown */}
{availableStreams.length > 0 && (
<Tooltip label="Switch to another stream source">
<Select
size="xs"
label="Active Stream"
placeholder={
isLoadingStreams ? 'Loading streams...' : 'Select stream'
}
data={streamOptions}
value={activeStreamId || channel.stream_id?.toString() || null}
onChange={handleStreamChange}
disabled={isLoadingStreams}
mt={8}
/>
</Tooltip>
)}
{/* Add stream information badges */}
<Group gap="xs" mt="xs">
{channel.resolution && (
<Tooltip label="Video resolution">
<Badge size="sm" variant="light" color="red">
{channel.resolution}
</Badge>
</Tooltip>
)}
{channel.source_fps && (
<Tooltip label="Source frames per second">
<Badge size="sm" variant="light" color="orange">
{channel.source_fps} FPS
</Badge>
</Tooltip>
)}
{channel.video_codec && (
<Tooltip label="Video codec">
<Badge size="sm" variant="light" color="blue">
{channel.video_codec.toUpperCase()}
</Badge>
</Tooltip>
)}
{channel.audio_codec && (
<Tooltip label="Audio codec">
<Badge size="sm" variant="light" color="pink">
{channel.audio_codec.toUpperCase()}
</Badge>
</Tooltip>
)}
{channel.audio_channels && (
<Tooltip label="Audio channel configuration">
<Badge size="sm" variant="light" color="pink">
{channel.audio_channels}
</Badge>
</Tooltip>
)}
{channel.stream_type && (
<Tooltip label="Stream type">
<Badge size="sm" variant="light" color="cyan">
{channel.stream_type.toUpperCase()}
</Badge>
</Tooltip>
)}
{channel.ffmpeg_speed && (
<Tooltip
label={`Current Speed: ${parseFloat(channel.ffmpeg_speed).toFixed(2)}x`}
>
<Badge
size="sm"
variant="light"
color={
parseFloat(channel.ffmpeg_speed) >=
getBufferingSpeedThreshold(settings['proxy_settings'])
? 'green'
: 'red'
}
>
{parseFloat(channel.ffmpeg_speed).toFixed(2)}x
</Badge>
</Tooltip>
)}
</Group>
<Group justify="space-between">
<Group gap={4}>
<Tooltip
label={`Current bitrate: ${formatSpeed(bitrates.at(-1) || 0)}`}
>
<Group gap={4} style={{ cursor: 'help' }}>
<Gauge pr={5} size="22" />
<Text size="sm">{formatSpeed(bitrates.at(-1) || 0)}</Text>
</Group>
</Tooltip>
</Group>
<Tooltip label={`Average bitrate: ${avgBitrate}`}>
<Text size="sm" style={{ cursor: 'help' }}>
Avg: {avgBitrate}
</Text>
</Tooltip>
<Group gap={4}>
<Tooltip label={`Total transferred: ${formatBytes(totalBytes)}`}>
<Group gap={4} style={{ cursor: 'help' }}>
<HardDriveDownload size="18" />
<Text size="sm">{formatBytes(totalBytes)}</Text>
</Group>
</Tooltip>
</Group>
<Group gap={5}>
<Tooltip
label={`${clientCount} active client${clientCount !== 1 ? 's' : ''}`}
>
<Group gap={4} style={{ cursor: 'help' }}>
<Users size="18" />
<Text size="sm">{clientCount}</Text>
</Group>
</Tooltip>
</Group>
</Group>
<CustomTable table={channelClientsTable} />
</Stack>
</Card>
);
};
export default StreamConnectionCard;

@@ -0,0 +1,143 @@
import {
ActionIcon,
Badge,
Box,
Card,
CardSection,
Group,
Image,
Stack,
Text,
} from '@mantine/core';
import { Calendar, Clock, Play, Star } from 'lucide-react';
import React from 'react';
import {
formatDuration,
getSeasonLabel,
} from '../../utils/cards/VODCardUtils.js';
const VODCard = ({ vod, onClick }) => {
const isEpisode = vod.type === 'episode';
const getDisplayTitle = () => {
if (isEpisode && vod.series) {
return (
<Stack spacing={4}>
<Text size="sm" c="dimmed">
{vod.series.name}
</Text>
<Text weight={500}>
{getSeasonLabel(vod)} - {vod.name}
</Text>
</Stack>
);
}
return <Text weight={500}>{vod.name}</Text>;
};
const handleCardClick = async () => {
// Just pass the basic vod info to the parent handler
onClick(vod);
};
return (
<Card
shadow="sm"
padding="md"
radius="md"
withBorder
style={{ cursor: 'pointer', backgroundColor: '#27272A' }}
onClick={handleCardClick}
>
<CardSection>
<Box pos="relative" h={300}>
{vod.logo?.url ? (
<Image
src={vod.logo.url}
height={300}
alt={vod.name}
fit="contain"
/>
) : (
<Box
style={{
backgroundColor: '#404040',
alignItems: 'center',
justifyContent: 'center',
}}
h={300}
display="flex"
>
<Play size={48} color="#666" />
</Box>
)}
<ActionIcon
style={{
backgroundColor: 'rgba(0,0,0,0.7)',
}}
pos="absolute"
top={8}
right={8}
onClick={(e) => {
e.stopPropagation();
onClick(vod);
}}
>
<Play size={16} color="white" />
</ActionIcon>
<Badge
pos="absolute"
bottom={8}
left={8}
color={isEpisode ? 'blue' : 'green'}
>
{isEpisode ? 'Episode' : 'Movie'}
</Badge>
</Box>
</CardSection>
<Stack spacing={8} mt="md">
{getDisplayTitle()}
<Group spacing={16}>
{vod.year && (
<Group spacing={4}>
<Calendar size={14} color="#666" />
<Text size="xs" c="dimmed">
{vod.year}
</Text>
</Group>
)}
{vod.duration && (
<Group spacing={4}>
<Clock size={14} color="#666" />
<Text size="xs" c="dimmed">
{formatDuration(vod.duration_secs)}
</Text>
</Group>
)}
{vod.rating && (
<Group spacing={4}>
<Star size={14} color="#666" />
<Text size="xs" c="dimmed">
{vod.rating}
</Text>
</Group>
)}
</Group>
{vod.genre && (
<Text size="xs" c="dimmed" lineClamp={1}>
{vod.genre}
</Text>
)}
</Stack>
</Card>
);
};
export default VODCard;

@@ -0,0 +1,422 @@
// Format duration for content length
import useLocalStorage from '../../hooks/useLocalStorage.jsx';
import React, { useCallback, useEffect, useState } from 'react';
import logo from '../../images/logo.png';
import { ActionIcon, Badge, Box, Card, Center, Flex, Group, Progress, Stack, Text, Tooltip } from '@mantine/core';
import { convertToSec, fromNow, toFriendlyDuration } from '../../utils/dateTimeUtils.js';
import { ChevronDown, HardDriveUpload, SquareX, Timer, Video } from 'lucide-react';
import {
calculateConnectionDuration,
calculateConnectionStartTime,
calculateProgress,
formatDuration,
formatTime,
getEpisodeDisplayTitle,
getEpisodeSubtitle,
getMovieDisplayTitle,
getMovieSubtitle,
} from '../../utils/cards/VodConnectionCardUtils.js';
const ClientDetails = ({ connection, connectionStartTime }) => {
return (
<Stack
gap="xs"
style={{
backgroundColor: 'rgba(255, 255, 255, 0.02)',
}}
p={12}
bdrs={6}
bd={'1px solid rgba(255, 255, 255, 0.08)'}
>
{connection.user_agent &&
connection.user_agent !== 'Unknown' && (
<Group gap={8} align="flex-start">
<Text size="xs" fw={500} c="dimmed" miw={80}>
User Agent:
</Text>
<Text size="xs" ff={'monospace'} flex={1}>
{connection.user_agent.length > 100
? `${connection.user_agent.substring(0, 100)}...`
: connection.user_agent}
</Text>
</Group>
)}
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Client ID:
</Text>
<Text size="xs" ff={'monospace'}>
{connection.client_id || 'Unknown'}
</Text>
</Group>
{connection.connected_at && (
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Connected:
</Text>
<Text size="xs">{connectionStartTime}</Text>
</Group>
)}
{connection.duration && connection.duration > 0 && (
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Watch Duration:
</Text>
<Text size="xs">
{toFriendlyDuration(connection.duration, 'seconds')}
</Text>
</Group>
)}
{/* Seek/Position Information */}
{(connection.last_seek_percentage > 0 ||
connection.last_seek_byte > 0) && (
<>
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Last Seek:
</Text>
<Text size="xs">
{connection.last_seek_percentage?.toFixed(1)}%
{connection.total_content_size > 0 && (
<span style={{ color: 'var(--mantine-color-dimmed)' }}>
{' '}
({Math.round(connection.last_seek_byte / (1024 * 1024))}
MB /{' '}
{Math.round(
connection.total_content_size / (1024 * 1024)
)}
MB)
</span>
)}
</Text>
</Group>
{Number(connection.last_seek_timestamp) > 0 && (
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Seek Time:
</Text>
<Text size="xs">
{fromNow(convertToSec(Number(connection.last_seek_timestamp)))}
</Text>
</Group>
)}
</>
)}
{connection.bytes_sent > 0 && (
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Data Sent:
</Text>
<Text size="xs">
{(connection.bytes_sent / (1024 * 1024)).toFixed(1)} MB
</Text>
</Group>
)}
</Stack>
);
};
// Create a VOD Card component similar to ChannelCard
const VodConnectionCard = ({ vodContent, stopVODClient }) => {
const [dateFormatSetting] = useLocalStorage('date-format', 'mdy');
const dateFormat = dateFormatSetting === 'mdy' ? 'MM/DD' : 'DD/MM';
const [isClientExpanded, setIsClientExpanded] = useState(false);
const [, setUpdateTrigger] = useState(0); // Force re-renders for progress updates
// Get metadata from the VOD content
const metadata = vodContent.content_metadata || {};
const contentType = vodContent.content_type;
const isMovie = contentType === 'movie';
const isEpisode = contentType === 'episode';
// Set up timer to update progress every second
useEffect(() => {
const interval = setInterval(() => {
setUpdateTrigger((prev) => prev + 1);
}, 1000);
return () => clearInterval(interval);
}, []);
// Get the individual connection (since we now separate cards per connection)
const connection =
vodContent.individual_connection ||
(vodContent.connections && vodContent.connections[0]);
// Get poster/logo URL
const posterUrl = metadata.logo_url || logo;
// Get display title
const getDisplayTitle = () => {
if (isMovie) {
return getMovieDisplayTitle(vodContent);
} else if (isEpisode) {
return getEpisodeDisplayTitle(metadata);
}
return vodContent.content_name;
};
// Get subtitle info
const getSubtitle = () => {
if (isMovie) {
return getMovieSubtitle(metadata);
} else if (isEpisode) {
return getEpisodeSubtitle(metadata);
}
return [];
};
// Render subtitle
const renderSubtitle = () => {
const subtitleParts = getSubtitle();
if (subtitleParts.length === 0) return null;
return (
<Text size="sm" c="dimmed">
{subtitleParts.join(' • ')}
</Text>
);
};
// Calculate progress percentage and time
const getProgressInfo = useCallback(() => {
return calculateProgress(connection, metadata.duration_secs);
}, [connection, metadata.duration_secs]);
// Calculate duration for connection
const getConnectionDuration = useCallback((connection) => {
return calculateConnectionDuration(connection);
}, []);
// Get connection start time for tooltip
const getConnectionStartTime = useCallback(
(connection) => {
return calculateConnectionStartTime(connection, dateFormat);
},
[dateFormat]
);
return (
<Card
shadow="sm"
padding="md"
radius="md"
withBorder
style={{
backgroundColor: '#27272A',
}}
color="#FFF"
maw={700}
w={'100%'}
>
<Stack pos="relative">
{/* Header with poster and basic info */}
<Group justify="space-between">
<Box
  h={100}
  display="flex"
  style={{
    alignItems: 'center',
    justifyContent: 'center',
  }}
>
<img
src={posterUrl}
style={{
maxWidth: '100%',
maxHeight: '100%',
objectFit: 'contain',
}}
alt="content poster"
/>
</Box>
<Group>
{connection && (
<Tooltip
label={`Connected at ${getConnectionStartTime(connection)}`}
>
<Center>
<Timer pr={5} />
{getConnectionDuration(connection)}
</Center>
</Tooltip>
)}
{connection && stopVODClient && (
<Center>
<Tooltip label="Stop VOD Connection">
<ActionIcon
variant="transparent"
color="red.9"
onClick={() => stopVODClient(connection.client_id)}
>
<SquareX size="24" />
</ActionIcon>
</Tooltip>
</Center>
)}
</Group>
</Group>
{/* Title and type */}
<Flex justify="space-between" align="center">
<Group>
<Text fw={500}>{getDisplayTitle()}</Text>
</Group>
<Tooltip label="Content Type">
<Group gap={5}>
<Video size="18" />
{isMovie ? 'Movie' : 'TV Episode'}
</Group>
</Tooltip>
</Flex>
{/* Display M3U profile information - matching channel card style */}
{connection &&
connection.m3u_profile &&
(connection.m3u_profile.profile_name ||
connection.m3u_profile.account_name) && (
<Flex justify="flex-end" align="flex-start" mt={-8}>
<Group gap={5} align="flex-start">
<HardDriveUpload size="18" mt={2} />
<Stack gap={0}>
<Tooltip label="M3U Account">
<Text size="xs" fw={500}>
{connection.m3u_profile.account_name || 'Unknown Account'}
</Text>
</Tooltip>
<Tooltip label="M3U Profile">
<Text size="xs" c="dimmed">
{connection.m3u_profile.profile_name || 'Default Profile'}
</Text>
</Tooltip>
</Stack>
</Group>
</Flex>
)}
{/* Subtitle/episode info */}
{getSubtitle().length > 0 && (
<Flex justify="flex-start" align="center" mt={-12}>
{renderSubtitle()}
</Flex>
)}
{/* Content information badges - streamlined to avoid duplication */}
<Group gap="xs" mt={-4}>
{metadata.year && (
<Tooltip label="Release Year">
<Badge size="sm" variant="light" color="orange">
{metadata.year}
</Badge>
</Tooltip>
)}
{metadata.duration_secs && (
<Tooltip label="Content Duration">
<Badge size="sm" variant="light" color="blue">
{formatDuration(metadata.duration_secs)}
</Badge>
</Tooltip>
)}
{metadata.rating && (
<Tooltip label="Critic Rating (out of 10)">
<Badge size="sm" variant="light" color="yellow">
{parseFloat(metadata.rating).toFixed(1)}/10
</Badge>
</Tooltip>
)}
</Group>
{/* Progress bar - show current position in content */}
{connection &&
metadata.duration_secs &&
(() => {
const { totalTime, currentTime, percentage } = getProgressInfo();
return totalTime > 0 ? (
<Stack gap="xs" mt="sm">
<Group justify="space-between" align="center">
<Text size="xs" fw={500} c="dimmed">
Progress
</Text>
<Text size="xs" c="dimmed">
{formatTime(currentTime)} /{' '}
{formatTime(totalTime)}
</Text>
</Group>
<Progress
value={percentage}
size="sm"
color="blue"
style={{
backgroundColor: 'rgba(255, 255, 255, 0.1)',
}}
/>
<Text size="xs" c="dimmed" ta="center">
{percentage.toFixed(1)}% watched
</Text>
</Stack>
) : null;
})()}
{/* Client information section - collapsible like channel cards */}
{connection && (
<Stack gap="xs" mt="xs">
{/* Client summary header - always visible */}
<Group
justify="space-between"
align="center"
style={{
cursor: 'pointer',
backgroundColor: 'rgba(255, 255, 255, 0.05)',
}}
p={'8px 12px'}
bdrs={6}
bd={'1px solid rgba(255, 255, 255, 0.1)'}
onClick={() => setIsClientExpanded(!isClientExpanded)}
>
<Group gap={8}>
<Text size="sm" fw={500} color="dimmed">
Client:
</Text>
<Text size="sm" ff={'monospace'}>
{connection.client_ip || 'Unknown IP'}
</Text>
</Group>
<Group gap={8}>
<Text size="xs" color="dimmed">
{isClientExpanded ? 'Hide Details' : 'Show Details'}
</Text>
<ChevronDown
size={16}
style={{
transform: isClientExpanded
? 'rotate(0deg)'
: 'rotate(180deg)',
transition: 'transform 0.2s',
}}
/>
</Group>
</Group>
{/* Expanded client details */}
{isClientExpanded && (
<ClientDetails
connection={connection}
connectionStartTime={getConnectionStartTime(connection)} />
)}
</Stack>
)}
</Stack>
</Card>
);
};
export default VodConnectionCard;

@@ -1,5 +1,6 @@
import React, { useState, useEffect, useRef, useMemo } from 'react';
import { useFormik } from 'formik';
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';
import useChannelsStore from '../../store/channels';
import API from '../../api';
@@ -42,6 +43,11 @@ import useEPGsStore from '../../store/epgs';
import { FixedSizeList as List } from 'react-window';
import { USER_LEVELS, USER_LEVEL_LABELS } from '../../constants';
const validationSchema = Yup.object({
name: Yup.string().required('Name is required'),
channel_group_id: Yup.string().required('Channel group is required'),
});
const ChannelForm = ({ channel = null, isOpen, onClose }) => {
const theme = useMantineTheme();
@@ -100,7 +106,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
const handleLogoSuccess = ({ logo }) => {
if (logo && logo.id) {
formik.setFieldValue('logo_id', logo.id);
setValue('logo_id', logo.id);
ensureLogosLoaded(); // Refresh logos
}
setLogoModalOpen(false);
@@ -124,7 +130,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
if (response.matched) {
// Update the form with the new EPG data
if (response.channel && response.channel.epg_data_id) {
formik.setFieldValue('epg_data_id', response.channel.epg_data_id);
setValue('epg_data_id', response.channel.epg_data_id);
}
notifications.show({
@@ -152,7 +158,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
};
const handleSetNameFromEpg = () => {
const epgDataId = formik.values.epg_data_id;
const epgDataId = watch('epg_data_id');
if (!epgDataId) {
notifications.show({
title: 'No EPG Selected',
@@ -164,7 +170,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
const tvg = tvgsById[epgDataId];
if (tvg && tvg.name) {
formik.setFieldValue('name', tvg.name);
setValue('name', tvg.name);
notifications.show({
title: 'Success',
message: `Channel name set to "${tvg.name}"`,
@@ -180,7 +186,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
};
const handleSetLogoFromEpg = async () => {
const epgDataId = formik.values.epg_data_id;
const epgDataId = watch('epg_data_id');
if (!epgDataId) {
notifications.show({
title: 'No EPG Selected',
@@ -207,7 +213,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
);
if (matchingLogo) {
formik.setFieldValue('logo_id', matchingLogo.id);
setValue('logo_id', matchingLogo.id);
notifications.show({
title: 'Success',
message: `Logo set to "${matchingLogo.name}"`,
@@ -231,7 +237,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
// Create logo by calling the Logo API directly
const newLogo = await API.createLogo(newLogoData);
formik.setFieldValue('logo_id', newLogo.id);
setValue('logo_id', newLogo.id);
notifications.update({
id: 'creating-logo',
@@ -264,7 +270,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
};
const handleSetTvgIdFromEpg = () => {
const epgDataId = formik.values.epg_data_id;
const epgDataId = watch('epg_data_id');
if (!epgDataId) {
notifications.show({
title: 'No EPG Selected',
@@ -276,7 +282,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
const tvg = tvgsById[epgDataId];
if (tvg && tvg.tvg_id) {
formik.setFieldValue('tvg_id', tvg.tvg_id);
setValue('tvg_id', tvg.tvg_id);
notifications.show({
title: 'Success',
message: `TVG-ID set to "${tvg.tvg_id}"`,
@@ -291,130 +297,130 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
}
};
const formik = useFormik({
initialValues: {
name: '',
channel_number: '', // Change from 0 to empty string for consistency
channel_group_id:
Object.keys(channelGroups).length > 0
const defaultValues = useMemo(
() => ({
name: channel?.name || '',
channel_number:
channel?.channel_number !== null &&
channel?.channel_number !== undefined
? channel.channel_number
: '',
channel_group_id: channel?.channel_group_id
? `${channel.channel_group_id}`
: Object.keys(channelGroups).length > 0
? Object.keys(channelGroups)[0]
: '',
stream_profile_id: '0',
tvg_id: '',
tvc_guide_stationid: '',
epg_data_id: '',
logo_id: '',
user_level: '0',
},
validationSchema: Yup.object({
name: Yup.string().required('Name is required'),
channel_group_id: Yup.string().required('Channel group is required'),
stream_profile_id: channel?.stream_profile_id
? `${channel.stream_profile_id}`
: '0',
tvg_id: channel?.tvg_id || '',
tvc_guide_stationid: channel?.tvc_guide_stationid || '',
epg_data_id: channel?.epg_data_id ?? '',
logo_id: channel?.logo_id ? `${channel.logo_id}` : '',
user_level: `${channel?.user_level ?? '0'}`,
}),
onSubmit: async (values, { setSubmitting }) => {
let response;
[channel, channelGroups]
);
try {
const formattedValues = { ...values };
const {
register,
handleSubmit,
setValue,
watch,
reset,
formState: { errors, isSubmitting },
} = useForm({
defaultValues,
resolver: yupResolver(validationSchema),
});
// Convert empty or "0" stream_profile_id to null for the API
if (
!formattedValues.stream_profile_id ||
formattedValues.stream_profile_id === '0'
) {
formattedValues.stream_profile_id = null;
}
const onSubmit = async (values) => {
let response;
// Ensure tvg_id is properly included (no empty strings)
formattedValues.tvg_id = formattedValues.tvg_id || null;
try {
const formattedValues = { ...values };
// Ensure tvc_guide_stationid is properly included (no empty strings)
formattedValues.tvc_guide_stationid =
formattedValues.tvc_guide_stationid || null;
// Convert empty or "0" stream_profile_id to null for the API
if (
!formattedValues.stream_profile_id ||
formattedValues.stream_profile_id === '0'
) {
formattedValues.stream_profile_id = null;
}
if (channel) {
// If there's an EPG to set, use our enhanced endpoint
if (values.epg_data_id !== (channel.epg_data_id ?? '')) {
// Use the special endpoint to set EPG and trigger refresh
const epgResponse = await API.setChannelEPG(
channel.id,
values.epg_data_id
);
// Ensure tvg_id is properly included (no empty strings)
formattedValues.tvg_id = formattedValues.tvg_id || null;
// Remove epg_data_id from values since we've handled it separately
const { epg_data_id, ...otherValues } = formattedValues;
// Ensure tvc_guide_stationid is properly included (no empty strings)
formattedValues.tvc_guide_stationid =
formattedValues.tvc_guide_stationid || null;
// Update other channel fields if needed
if (Object.keys(otherValues).length > 0) {
response = await API.updateChannel({
id: channel.id,
...otherValues,
streams: channelStreams.map((stream) => stream.id),
});
}
} else {
// No EPG change, regular update
if (channel) {
// If there's an EPG to set, use our enhanced endpoint
if (values.epg_data_id !== (channel.epg_data_id ?? '')) {
// Use the special endpoint to set EPG and trigger refresh
const epgResponse = await API.setChannelEPG(
channel.id,
values.epg_data_id
);
// Remove epg_data_id from values since we've handled it separately
const { epg_data_id, ...otherValues } = formattedValues;
// Update other channel fields if needed
if (Object.keys(otherValues).length > 0) {
response = await API.updateChannel({
id: channel.id,
...formattedValues,
...otherValues,
streams: channelStreams.map((stream) => stream.id),
});
}
} else {
// New channel creation - use the standard method
response = await API.addChannel({
// No EPG change, regular update
response = await API.updateChannel({
id: channel.id,
...formattedValues,
streams: channelStreams.map((stream) => stream.id),
});
}
} catch (error) {
console.error('Error saving channel:', error);
} else {
// New channel creation - use the standard method
response = await API.addChannel({
...formattedValues,
streams: channelStreams.map((stream) => stream.id),
});
}
} catch (error) {
console.error('Error saving channel:', error);
}
formik.resetForm();
API.requeryChannels();
reset();
API.requeryChannels();
// Refresh channel profiles to update the membership information
useChannelsStore.getState().fetchChannelProfiles();
// Refresh channel profiles to update the membership information
useChannelsStore.getState().fetchChannelProfiles();
setSubmitting(false);
setTvgFilter('');
setLogoFilter('');
onClose();
},
});
setTvgFilter('');
setLogoFilter('');
onClose();
};
useEffect(() => {
if (channel) {
if (channel.epg_data_id) {
const epgSource = epgs[tvgsById[channel.epg_data_id]?.epg_source];
setSelectedEPG(epgSource ? `${epgSource.id}` : '');
}
reset(defaultValues);
setChannelStreams(channel?.streams || []);
formik.setValues({
name: channel.name || '',
channel_number:
channel.channel_number !== null ? channel.channel_number : '',
channel_group_id: channel.channel_group_id
? `${channel.channel_group_id}`
: '',
stream_profile_id: channel.stream_profile_id
? `${channel.stream_profile_id}`
: '0',
tvg_id: channel.tvg_id || '',
tvc_guide_stationid: channel.tvc_guide_stationid || '',
epg_data_id: channel.epg_data_id ?? '',
logo_id: channel.logo_id ? `${channel.logo_id}` : '',
user_level: `${channel.user_level}`,
});
setChannelStreams(channel.streams || []);
if (channel?.epg_data_id) {
const epgSource = epgs[tvgsById[channel.epg_data_id]?.epg_source];
setSelectedEPG(epgSource ? `${epgSource.id}` : '');
} else {
formik.resetForm();
setSelectedEPG('');
}
if (!channel) {
setTvgFilter('');
setLogoFilter('');
setChannelStreams([]); // Ensure streams are cleared when adding a new channel
}
}, [channel, tvgsById, channelGroups]);
}, [defaultValues, channel, reset, epgs, tvgsById]);
// Memoize logo options to prevent infinite re-renders during background loading
const logoOptions = useMemo(() => {
@@ -431,10 +437,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
// If a new group was created and returned, update the form with it
if (newGroup && newGroup.id) {
// Preserve all current form values while updating just the channel_group_id
formik.setValues({
...formik.values,
channel_group_id: `${newGroup.id}`,
});
setValue('channel_group_id', `${newGroup.id}`);
}
};
@@ -472,7 +475,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
}
styles={{ content: { '--mantine-color-body': '#27272A' } }}
>
<form onSubmit={formik.handleSubmit}>
<form onSubmit={handleSubmit(onSubmit)}>
<Group justify="space-between" align="top">
<Stack gap="5" style={{ flex: 1 }}>
<TextInput
@@ -481,7 +484,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
label={
<Group gap="xs">
<span>Channel Name</span>
{formik.values.epg_data_id && (
{watch('epg_data_id') && (
<Button
size="xs"
variant="transparent"
@@ -495,9 +498,8 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
)}
</Group>
}
value={formik.values.name}
onChange={formik.handleChange}
error={formik.errors.name ? formik.touched.name : ''}
{...register('name')}
error={errors.name?.message}
size="xs"
style={{ flex: 1 }}
/>
@@ -516,8 +518,8 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
label="Channel Group"
readOnly
value={
channelGroups[formik.values.channel_group_id]
? channelGroups[formik.values.channel_group_id].name
channelGroups[watch('channel_group_id')]
? channelGroups[watch('channel_group_id')].name
: ''
}
onClick={() => setGroupPopoverOpened(true)}
@@ -557,7 +559,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
>
<UnstyledButton
onClick={() => {
formik.setFieldValue(
setValue(
'channel_group_id',
filteredGroups[index].id
);
@@ -587,16 +589,12 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
id="channel_group_id"
name="channel_group_id"
label="Channel Group"
value={formik.values.channel_group_id}
value={watch('channel_group_id')}
searchable
onChange={(value) => {
formik.setFieldValue('channel_group_id', value); // Update Formik's state with the new value
setValue('channel_group_id', value);
}}
error={
formik.errors.channel_group_id
? formik.touched.channel_group_id
: ''
}
error={errors.channel_group_id?.message}
data={Object.values(channelGroups).map((option, index) => ({
value: `${option.id}`,
label: option.name,
@@ -622,15 +620,11 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
id="stream_profile_id"
label="Stream Profile"
name="stream_profile_id"
value={formik.values.stream_profile_id}
value={watch('stream_profile_id')}
onChange={(value) => {
formik.setFieldValue('stream_profile_id', value); // Update Formik's state with the new value
setValue('stream_profile_id', value);
}}
error={
formik.errors.stream_profile_id
? formik.touched.stream_profile_id
: ''
}
error={errors.stream_profile_id?.message}
data={[{ value: '0', label: '(use default)' }].concat(
streamProfiles.map((option) => ({
value: `${option.id}`,
@@ -648,13 +642,11 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
value: `${value}`,
};
})}
value={formik.values.user_level}
value={watch('user_level')}
onChange={(value) => {
formik.setFieldValue('user_level', value);
setValue('user_level', value);
}}
error={
formik.errors.user_level ? formik.touched.user_level : ''
}
error={errors.user_level?.message}
/>
</Stack>
@@ -684,7 +676,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
label={
<Group gap="xs">
<span>Logo</span>
{formik.values.epg_data_id && (
{watch('epg_data_id') && (
<Button
size="xs"
variant="transparent"
@@ -699,9 +691,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
</Group>
}
readOnly
value={
channelLogos[formik.values.logo_id]?.name || 'Default'
}
value={channelLogos[watch('logo_id')]?.name || 'Default'}
onClick={() => {
console.log(
'Logo input clicked, setting popover opened to true'
@@ -756,10 +746,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
borderRadius: '4px',
}}
onClick={() => {
formik.setFieldValue(
'logo_id',
filteredLogos[index].id
);
setValue('logo_id', filteredLogos[index].id);
setLogoPopoverOpened(false);
}}
onMouseEnter={(e) => {
@@ -810,7 +797,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
<Stack gap="xs" align="center">
<LazyLogo
logoId={formik.values.logo_id}
logoId={watch('logo_id')}
alt="channel logo"
style={{ height: 40 }}
/>
@@ -833,19 +820,12 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
id="channel_number"
name="channel_number"
label="Channel # (blank to auto-assign)"
value={formik.values.channel_number}
onChange={(value) =>
formik.setFieldValue('channel_number', value)
}
error={
formik.errors.channel_number
? formik.touched.channel_number
: ''
}
value={watch('channel_number')}
onChange={(value) => setValue('channel_number', value)}
error={errors.channel_number?.message}
size="xs"
step={0.1} // Add step prop to allow decimal inputs
precision={1} // Specify decimal precision
removeTrailingZeros // Optional: remove trailing zeros for cleaner display
/>
<TextInput
@@ -854,7 +834,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
label={
<Group gap="xs">
<span>TVG-ID</span>
{formik.values.epg_data_id && (
{watch('epg_data_id') && (
<Button
size="xs"
variant="transparent"
@ -868,9 +848,8 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
)}
</Group>
}
value={formik.values.tvg_id}
onChange={formik.handleChange}
error={formik.errors.tvg_id ? formik.touched.tvg_id : ''}
{...register('tvg_id')}
error={errors.tvg_id?.message}
size="xs"
/>
@@ -878,13 +857,8 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
id="tvc_guide_stationid"
name="tvc_guide_stationid"
label="Gracenote StationId"
value={formik.values.tvc_guide_stationid}
onChange={formik.handleChange}
error={
formik.errors.tvc_guide_stationid
? formik.touched.tvc_guide_stationid
: ''
}
{...register('tvc_guide_stationid')}
error={errors.tvc_guide_stationid?.message}
size="xs"
/>
@@ -904,9 +878,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
<Button
size="xs"
variant="transparent"
onClick={() =>
formik.setFieldValue('epg_data_id', null)
}
onClick={() => setValue('epg_data_id', null)}
>
Use Dummy
</Button>
@@ -933,7 +905,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
}
readOnly
value={(() => {
const tvg = tvgsById[formik.values.epg_data_id];
const tvg = tvgsById[watch('epg_data_id')];
const epgSource = tvg && epgs[tvg.epg_source];
const tvgLabel = tvg ? tvg.name || tvg.id : '';
if (epgSource && tvgLabel) {
@@ -953,7 +925,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
color="white"
onClick={(e) => {
e.stopPropagation();
formik.setFieldValue('epg_data_id', null);
setValue('epg_data_id', null);
}}
title="Clear EPG assignment"
size="small"
@@ -1012,12 +984,9 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
size="xs"
onClick={() => {
if (filteredTvgs[index].id == '0') {
formik.setFieldValue('epg_data_id', null);
setValue('epg_data_id', null);
} else {
formik.setFieldValue(
'epg_data_id',
filteredTvgs[index].id
);
setValue('epg_data_id', filteredTvgs[index].id);
// Also update selectedEPG to match the EPG source of the selected tvg
if (filteredTvgs[index].epg_source) {
setSelectedEPG(
@@ -1047,11 +1016,11 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
<Button
type="submit"
variant="default"
disabled={formik.isSubmitting}
loading={formik.isSubmitting}
disabled={isSubmitting}
loading={isSubmitting}
loaderProps={{ type: 'dots' }}
>
{formik.isSubmitting ? 'Saving...' : 'Submit'}
{isSubmitting ? 'Saving...' : 'Submit'}
</Button>
</Flex>
</form>
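
The value normalization at the top of `onSubmit` (an empty or `'0'` `stream_profile_id` becomes `null`, and empty `tvg_id` / `tvc_guide_stationid` strings become `null`) is pure and can be sketched as a standalone helper. The helper name is illustrative, not something this PR adds:

```javascript
// Hypothetical helper, not part of this PR: mirrors the normalization
// that onSubmit applies before calling the API.
function formatChannelValues(values) {
  const formatted = { ...values };
  // "(use default)" is encoded as '0' in the Select; the API expects null.
  if (!formatted.stream_profile_id || formatted.stream_profile_id === '0') {
    formatted.stream_profile_id = null;
  }
  // Empty strings are sent as null rather than ''.
  formatted.tvg_id = formatted.tvg_id || null;
  formatted.tvc_guide_stationid = formatted.tvc_guide_stationid || null;
  return formatted;
}
```

Keeping this logic in one place means the create and update branches send identically shaped payloads.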

View file

@@ -77,6 +79,9 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
const [confirmSetLogosOpen, setConfirmSetLogosOpen] = useState(false);
const [confirmSetTvgIdsOpen, setConfirmSetTvgIdsOpen] = useState(false);
const [confirmBatchUpdateOpen, setConfirmBatchUpdateOpen] = useState(false);
const [settingNames, setSettingNames] = useState(false);
const [settingLogos, setSettingLogos] = useState(false);
const [settingTvgIds, setSettingTvgIds] = useState(false);
const isWarningSuppressed = useWarningsStore((s) => s.isWarningSuppressed);
const suppressWarning = useWarningsStore((s) => s.suppressWarning);
@@ -328,6 +331,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
};
const executeSetNamesFromEpg = async () => {
setSettingNames(true);
try {
// Start the backend task
await API.setChannelNamesFromEpg(channelIds);
@@ -341,7 +345,6 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
});
// Close the modal since the task is now running in background
setConfirmSetNamesOpen(false);
onClose();
} catch (error) {
console.error('Failed to start EPG name setting task:', error);
@@ -350,6 +353,8 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
message: 'Failed to start EPG name setting task.',
color: 'red',
});
} finally {
setSettingNames(false);
setConfirmSetNamesOpen(false);
}
};
@@ -373,6 +378,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
};
const executeSetLogosFromEpg = async () => {
setSettingLogos(true);
try {
// Start the backend task
await API.setChannelLogosFromEpg(channelIds);
@@ -386,7 +392,6 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
});
// Close the modal since the task is now running in background
setConfirmSetLogosOpen(false);
onClose();
} catch (error) {
console.error('Failed to start EPG logo setting task:', error);
@@ -395,6 +400,8 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
message: 'Failed to start EPG logo setting task.',
color: 'red',
});
} finally {
setSettingLogos(false);
setConfirmSetLogosOpen(false);
}
};
@@ -418,6 +425,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
};
const executeSetTvgIdsFromEpg = async () => {
setSettingTvgIds(true);
try {
// Start the backend task
await API.setChannelTvgIdsFromEpg(channelIds);
@@ -431,7 +439,6 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
});
// Close the modal since the task is now running in background
setConfirmSetTvgIdsOpen(false);
onClose();
} catch (error) {
console.error('Failed to start EPG TVG-ID setting task:', error);
@@ -440,6 +447,8 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
message: 'Failed to start EPG TVG-ID setting task.',
color: 'red',
});
} finally {
setSettingTvgIds(false);
setConfirmSetTvgIdsOpen(false);
}
};
@@ -947,6 +956,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
opened={confirmSetNamesOpen}
onClose={() => setConfirmSetNamesOpen(false)}
onConfirm={executeSetNamesFromEpg}
loading={settingNames}
title="Confirm Set Names from EPG"
message={
<div style={{ whiteSpace: 'pre-line' }}>
@@ -968,6 +978,7 @@ This action cannot be undone.`}
opened={confirmSetLogosOpen}
onClose={() => setConfirmSetLogosOpen(false)}
onConfirm={executeSetLogosFromEpg}
loading={settingLogos}
title="Confirm Set Logos from EPG"
message={
<div style={{ whiteSpace: 'pre-line' }}>
@@ -989,6 +1000,7 @@ This action cannot be undone.`}
opened={confirmSetTvgIdsOpen}
onClose={() => setConfirmSetTvgIdsOpen(false)}
onConfirm={executeSetTvgIdsFromEpg}
loading={settingTvgIds}
title="Confirm Set TVG-IDs from EPG"
message={
<div style={{ whiteSpace: 'pre-line' }}>
@@ -1010,6 +1022,7 @@ This action cannot be undone.`}
opened={confirmBatchUpdateOpen}
onClose={() => setConfirmBatchUpdateOpen(false)}
onConfirm={onSubmit}
loading={isSubmitting}
title="Confirm Batch Update"
message={
<div>

View file

@@ -183,6 +183,7 @@ const GroupManager = React.memo(({ isOpen, onClose }) => {
const [confirmDeleteOpen, setConfirmDeleteOpen] = useState(false);
const [groupToDelete, setGroupToDelete] = useState(null);
const [confirmCleanupOpen, setConfirmCleanupOpen] = useState(false);
const [deletingGroup, setDeletingGroup] = useState(false);
// Memoize the channel groups array to prevent unnecessary re-renders
const channelGroupsArray = useMemo(
@@ -382,6 +383,7 @@ const GroupManager = React.memo(({ isOpen, onClose }) => {
const executeDeleteGroup = useCallback(
async (group) => {
setDeletingGroup(true);
try {
await API.deleteChannelGroup(group.id);
@@ -392,13 +394,14 @@ const GroupManager = React.memo(({ isOpen, onClose }) => {
});
await fetchGroupUsage(); // Refresh usage data
setConfirmDeleteOpen(false);
} catch (error) {
notifications.show({
title: 'Error',
message: 'Failed to delete group',
color: 'red',
});
} finally {
setDeletingGroup(false);
setConfirmDeleteOpen(false);
}
},
@@ -680,6 +683,7 @@ const GroupManager = React.memo(({ isOpen, onClose }) => {
opened={confirmDeleteOpen}
onClose={() => setConfirmDeleteOpen(false)}
onConfirm={() => groupToDelete && executeDeleteGroup(groupToDelete)}
loading={deletingGroup}
title="Confirm Group Deletion"
message={
groupToDelete ? (
@@ -706,6 +710,7 @@ This action cannot be undone.`}
opened={confirmCleanupOpen}
onClose={() => setConfirmCleanupOpen(false)}
onConfirm={executeCleanup}
loading={isCleaningUp}
title="Confirm Group Cleanup"
message={
<div style={{ whiteSpace: 'pre-line' }}>

View file

@@ -263,25 +263,42 @@ const LiveGroupFilter = ({
}}
>
{/* Group Enable/Disable Button */}
<Button
color={group.enabled ? 'green' : 'gray'}
variant="filled"
onClick={() => toggleGroupEnabled(group.channel_group)}
radius="md"
size="xs"
leftSection={
group.enabled ? (
<CircleCheck size={14} />
) : (
<CircleX size={14} />
)
<Tooltip
label={
group.enabled && group.is_stale
? 'This group was not seen in the last M3U refresh and will be deleted after the retention period expires'
: ''
}
fullWidth
disabled={!group.enabled || !group.is_stale}
multiline
w={220}
>
<Text size="xs" truncate>
{group.name}
</Text>
</Button>
<Button
color={
group.enabled
? group.is_stale
? 'orange'
: 'green'
: 'gray'
}
variant="filled"
onClick={() => toggleGroupEnabled(group.channel_group)}
radius="md"
size="xs"
leftSection={
group.enabled ? (
<CircleCheck size={14} />
) : (
<CircleX size={14} />
)
}
fullWidth
>
<Text size="xs" truncate>
{group.name}
</Text>
</Button>
</Tooltip>
{/* Auto Sync Controls */}
<Stack spacing="xs" style={{ '--stack-gap': '4px' }}>

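The new `color` prop above is a nested ternary: gray when the group is disabled, orange when it is enabled but stale, green otherwise. Read as a small pure function (hypothetical name, not in the component):

```javascript
// Illustrative only: the Button color logic from the LiveGroupFilter diff.
function groupButtonColor(group) {
  if (!group.enabled) return 'gray';
  // Stale groups (not seen in the last M3U refresh) get a warning color.
  return group.is_stale ? 'orange' : 'green';
}
```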
View file

@@ -1,5 +1,6 @@
import React, { useState, useEffect } from 'react';
import { useFormik } from 'formik';
import React, { useState, useEffect, useMemo } from 'react';
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';
import {
Modal,
@@ -18,143 +19,148 @@ import { Upload, FileImage, X } from 'lucide-react';
import { notifications } from '@mantine/notifications';
import API from '../../api';
const schema = Yup.object({
name: Yup.string().required('Name is required'),
url: Yup.string()
.required('URL is required')
.test(
'valid-url-or-path',
'Must be a valid URL or local file path',
(value) => {
if (!value) return false;
// Allow local file paths starting with /data/logos/
if (value.startsWith('/data/logos/')) return true;
// Allow valid URLs
try {
new URL(value);
return true;
} catch {
return false;
}
}
),
});
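
The hoisted `valid-url-or-path` test accepts either a managed local path or anything `new URL()` can parse. As a standalone predicate (hypothetical name, extracted here for illustration) it behaves like:

```javascript
// Sketch of the schema's custom test; URL is a global in Node and browsers.
function isValidUrlOrPath(value) {
  if (!value) return false;
  // Locally uploaded logos live under /data/logos/
  if (value.startsWith('/data/logos/')) return true;
  try {
    new URL(value); // throws on anything that is not an absolute URL
    return true;
  } catch {
    return false;
  }
}
```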
const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
const [logoPreview, setLogoPreview] = useState(null);
const [uploading, setUploading] = useState(false);
const [selectedFile, setSelectedFile] = useState(null); // Store selected file
const formik = useFormik({
initialValues: {
name: '',
url: '',
},
validationSchema: Yup.object({
name: Yup.string().required('Name is required'),
url: Yup.string()
.required('URL is required')
.test(
'valid-url-or-path',
'Must be a valid URL or local file path',
(value) => {
if (!value) return false;
// Allow local file paths starting with /data/logos/
if (value.startsWith('/data/logos/')) return true;
// Allow valid URLs
try {
new URL(value);
return true;
} catch {
return false;
}
}
),
const defaultValues = useMemo(
() => ({
name: logo?.name || '',
url: logo?.url || '',
}),
onSubmit: async (values, { setSubmitting }) => {
try {
setUploading(true);
let uploadResponse = null; // Store upload response for later use
[logo]
);
// If we have a selected file, upload it first
if (selectedFile) {
try {
uploadResponse = await API.uploadLogo(selectedFile, values.name);
// Use the uploaded file data instead of form values
values.name = uploadResponse.name;
values.url = uploadResponse.url;
} catch (uploadError) {
let errorMessage = 'Failed to upload logo file';
if (
uploadError.code === 'NETWORK_ERROR' ||
uploadError.message?.includes('timeout')
) {
errorMessage = 'Upload timed out. Please try again.';
} else if (uploadError.status === 413) {
errorMessage = 'File too large. Please choose a smaller file.';
} else if (uploadError.body?.error) {
errorMessage = uploadError.body.error;
}
notifications.show({
title: 'Upload Error',
message: errorMessage,
color: 'red',
});
return; // Don't proceed with creation if upload fails
}
}
// Now create or update the logo with the final values
// Only proceed if we don't already have a logo from file upload
if (logo) {
const updatedLogo = await API.updateLogo(logo.id, values);
notifications.show({
title: 'Success',
message: 'Logo updated successfully',
color: 'green',
});
onSuccess?.({ type: 'update', logo: updatedLogo }); // Call onSuccess for updates
} else if (!selectedFile) {
// Only create a new logo entry if we're not uploading a file
// (file upload already created the logo entry)
const newLogo = await API.createLogo(values);
notifications.show({
title: 'Success',
message: 'Logo created successfully',
color: 'green',
});
onSuccess?.({ type: 'create', logo: newLogo }); // Call onSuccess for creates
} else {
// File was uploaded and logo was already created
notifications.show({
title: 'Success',
message: 'Logo uploaded successfully',
color: 'green',
});
onSuccess?.({ type: 'create', logo: uploadResponse });
}
onClose();
} catch (error) {
let errorMessage = logo
? 'Failed to update logo'
: 'Failed to create logo';
// Handle specific timeout errors
if (
error.code === 'NETWORK_ERROR' ||
error.message?.includes('timeout')
) {
errorMessage = 'Request timed out. Please try again.';
} else if (error.response?.data?.error) {
errorMessage = error.response.data.error;
}
notifications.show({
title: 'Error',
message: errorMessage,
color: 'red',
});
} finally {
setSubmitting(false);
setUploading(false);
}
},
const {
register,
handleSubmit,
formState: { errors, isSubmitting },
reset,
setValue,
watch,
} = useForm({
defaultValues,
resolver: yupResolver(schema),
});
useEffect(() => {
if (logo) {
formik.setValues({
name: logo.name || '',
url: logo.url || '',
const onSubmit = async (values) => {
try {
setUploading(true);
let uploadResponse = null; // Store upload response for later use
// If we have a selected file, upload it first
if (selectedFile) {
try {
uploadResponse = await API.uploadLogo(selectedFile, values.name);
// Use the uploaded file data instead of form values
values.name = uploadResponse.name;
values.url = uploadResponse.url;
} catch (uploadError) {
let errorMessage = 'Failed to upload logo file';
if (
uploadError.code === 'NETWORK_ERROR' ||
uploadError.message?.includes('timeout')
) {
errorMessage = 'Upload timed out. Please try again.';
} else if (uploadError.status === 413) {
errorMessage = 'File too large. Please choose a smaller file.';
} else if (uploadError.body?.error) {
errorMessage = uploadError.body.error;
}
notifications.show({
title: 'Upload Error',
message: errorMessage,
color: 'red',
});
return; // Don't proceed with creation if upload fails
}
}
// Now create or update the logo with the final values
// Only proceed if we don't already have a logo from file upload
if (logo) {
const updatedLogo = await API.updateLogo(logo.id, values);
notifications.show({
title: 'Success',
message: 'Logo updated successfully',
color: 'green',
});
onSuccess?.({ type: 'update', logo: updatedLogo }); // Call onSuccess for updates
} else if (!selectedFile) {
// Only create a new logo entry if we're not uploading a file
// (file upload already created the logo entry)
const newLogo = await API.createLogo(values);
notifications.show({
title: 'Success',
message: 'Logo created successfully',
color: 'green',
});
onSuccess?.({ type: 'create', logo: newLogo }); // Call onSuccess for creates
} else {
// File was uploaded and logo was already created
notifications.show({
title: 'Success',
message: 'Logo uploaded successfully',
color: 'green',
});
onSuccess?.({ type: 'create', logo: uploadResponse });
}
onClose();
} catch (error) {
let errorMessage = logo
? 'Failed to update logo'
: 'Failed to create logo';
// Handle specific timeout errors
if (
error.code === 'NETWORK_ERROR' ||
error.message?.includes('timeout')
) {
errorMessage = 'Request timed out. Please try again.';
} else if (error.response?.data?.error) {
errorMessage = error.response.data.error;
}
notifications.show({
title: 'Error',
message: errorMessage,
color: 'red',
});
setLogoPreview(logo.cache_url);
} else {
formik.resetForm();
setLogoPreview(null);
} finally {
setUploading(false);
}
// Clear any selected file when logo changes
};
useEffect(() => {
reset(defaultValues);
setLogoPreview(logo?.cache_url || null);
setSelectedFile(null);
}, [logo, isOpen]);
}, [defaultValues, logo, reset]);
const handleFileSelect = (files) => {
if (files.length === 0) return;
@@ -180,18 +186,19 @@ const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
setLogoPreview(previewUrl);
// Auto-fill the name field if empty
if (!formik.values.name) {
const currentName = watch('name');
if (!currentName) {
const nameWithoutExtension = file.name.replace(/\.[^/.]+$/, '');
formik.setFieldValue('name', nameWithoutExtension);
setValue('name', nameWithoutExtension);
}
// Set a placeholder URL (will be replaced after upload)
formik.setFieldValue('url', 'file://pending-upload');
setValue('url', 'file://pending-upload');
};
const handleUrlChange = (event) => {
const url = event.target.value;
formik.setFieldValue('url', url);
setValue('url', url);
// Clear any selected file when manually entering URL
if (selectedFile) {
@@ -219,7 +226,7 @@ const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
const filename = pathname.substring(pathname.lastIndexOf('/') + 1);
const nameWithoutExtension = filename.replace(/\.[^/.]+$/, '');
if (nameWithoutExtension) {
formik.setFieldValue('name', nameWithoutExtension);
setValue('name', nameWithoutExtension);
}
} catch (error) {
// If the URL is invalid, do nothing.
@@ -244,7 +251,7 @@ const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
title={logo ? 'Edit Logo' : 'Add Logo'}
size="md"
>
<form onSubmit={formik.handleSubmit}>
<form onSubmit={handleSubmit(onSubmit)}>
<Stack spacing="md">
{/* Logo Preview */}
{logoPreview && (
@@ -338,18 +345,18 @@ const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
<TextInput
label="Logo URL"
placeholder="https://example.com/logo.png"
{...formik.getFieldProps('url')}
{...register('url')}
onChange={handleUrlChange}
onBlur={handleUrlBlur}
error={formik.touched.url && formik.errors.url}
error={errors.url?.message}
disabled={!!selectedFile} // Disable when file is selected
/>
<TextInput
label="Name"
placeholder="Enter logo name"
{...formik.getFieldProps('name')}
error={formik.touched.name && formik.errors.name}
{...register('name')}
error={errors.name?.message}
/>
{selectedFile && (
@@ -363,7 +370,7 @@ const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
<Button variant="light" onClick={onClose}>
Cancel
</Button>
<Button type="submit" loading={formik.isSubmitting || uploading}>
<Button type="submit" loading={isSubmitting || uploading}>
{logo ? 'Update' : 'Create'}
</Button>
</Group>

View file

@@ -151,6 +151,7 @@ const M3UFilters = ({ playlist, isOpen, onClose }) => {
const [deleteTarget, setDeleteTarget] = useState(null);
const [filterToDelete, setFilterToDelete] = useState(null);
const [filters, setFilters] = useState([]);
const [deleting, setDeleting] = useState(false);
const isWarningSuppressed = useWarningsStore((s) => s.isWarningSuppressed);
const suppressWarning = useWarningsStore((s) => s.suppressWarning);
@@ -192,16 +193,17 @@ const M3UFilters = ({ playlist, isOpen, onClose }) => {
const deleteFilter = async (id) => {
if (!playlist || !playlist.id) return;
setDeleting(true);
try {
await API.deleteM3UFilter(playlist.id, id);
setConfirmDeleteOpen(false);
fetchPlaylist(playlist.id);
setFilters(filters.filter((f) => f.id !== id));
} catch (error) {
console.error('Error deleting profile:', error);
} finally {
setDeleting(false);
setConfirmDeleteOpen(false);
}
fetchPlaylist(playlist.id);
setFilters(filters.filter((f) => f.id !== id));
};
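
Several handlers changed in this PR follow the same shape: set a loading flag, run the async task, and clear the flag plus close the confirm dialog in `finally` so the UI recovers even when the task throws. A generic sketch (the wrapper name and signature are ours, not part of the PR):

```javascript
// Hypothetical wrapper illustrating the loading-flag pattern used by
// deleteFilter, executeDeleteGroup, and the executeSet*FromEpg handlers.
async function withLoadingFlag(setLoading, closeDialog, task) {
  setLoading(true);
  try {
    await task();
  } finally {
    // Runs on success *and* failure, so the spinner can never get stuck
    // and the dialog always closes.
    setLoading(false);
    closeDialog();
  }
}
```

Pairing the flag with the `loading` prop now passed to each ConfirmationDialog is what makes the confirm button show a spinner while the backend task starts.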
const closeEditor = (updatedPlaylist = null) => {
@@ -321,6 +323,7 @@ const M3UFilters = ({ playlist, isOpen, onClose }) => {
opened={confirmDeleteOpen}
onClose={() => setConfirmDeleteOpen(false)}
onConfirm={() => deleteFilter(deleteTarget)}
loading={deleting}
title="Confirm Filter Deletion"
message={
filterToDelete ? (

View file

@@ -1,6 +1,5 @@
// Modal.js
import React, { useState, useEffect, forwardRef } from 'react';
import { useFormik } from 'formik';
import * as Yup from 'yup';
import API from '../../api';
import M3UProfiles from './M3UProfiles';

View file

@@ -1,5 +1,6 @@
import React, { useState, useEffect } from 'react';
import { useFormik } from 'formik';
import React, { useState, useEffect, useMemo } from 'react';
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';
import API from '../../api';
import {
@@ -31,6 +32,89 @@ const RegexFormAndView = ({ profile = null, m3u, isOpen, onClose }) => {
const [sampleInput, setSampleInput] = useState('');
const isDefaultProfile = profile?.is_default;
const defaultValues = useMemo(
() => ({
name: profile?.name || '',
max_streams: profile?.max_streams || 0,
search_pattern: profile?.search_pattern || '',
replace_pattern: profile?.replace_pattern || '',
notes: profile?.custom_properties?.notes || '',
}),
[profile]
);
const schema = Yup.object({
name: Yup.string().required('Name is required'),
search_pattern: Yup.string().when([], {
is: () => !isDefaultProfile,
then: (schema) => schema.required('Search pattern is required'),
otherwise: (schema) => schema.notRequired(),
}),
replace_pattern: Yup.string().when([], {
is: () => !isDefaultProfile,
then: (schema) => schema.required('Replace pattern is required'),
otherwise: (schema) => schema.notRequired(),
}),
notes: Yup.string(), // Optional field
});
const {
register,
handleSubmit,
formState: { errors, isSubmitting },
reset,
setValue,
watch,
} = useForm({
defaultValues,
resolver: yupResolver(schema),
});
const onSubmit = async (values) => {
console.log('submitting');
// For default profiles, only send name and custom_properties (notes)
let submitValues;
if (isDefaultProfile) {
submitValues = {
name: values.name,
custom_properties: {
// Preserve existing custom_properties and add/update notes
...(profile?.custom_properties || {}),
notes: values.notes || '',
},
};
} else {
// For regular profiles, send all fields
submitValues = {
name: values.name,
max_streams: values.max_streams,
search_pattern: values.search_pattern,
replace_pattern: values.replace_pattern,
custom_properties: {
// Preserve existing custom_properties and add/update notes
...(profile?.custom_properties || {}),
notes: values.notes || '',
},
};
}
if (profile?.id) {
await API.updateM3UProfile(m3u.id, {
id: profile.id,
...submitValues,
});
} else {
await API.addM3UProfile(m3u.id, submitValues);
}
reset();
// Reset local state to sync with form reset
setSearchPattern('');
setReplacePattern('');
onClose();
};
useEffect(() => {
async function fetchStreamUrl() {
try {
@ -79,99 +163,22 @@ const RegexFormAndView = ({ profile = null, m3u, isOpen, onClose }) => {
}, [searchPattern, replacePattern]);
const onSearchPatternUpdate = (e) => {
formik.handleChange(e);
setSearchPattern(e.target.value);
const value = e.target.value;
setSearchPattern(value);
setValue('search_pattern', value);
};
const onReplacePatternUpdate = (e) => {
formik.handleChange(e);
setReplacePattern(e.target.value);
const value = e.target.value;
setReplacePattern(value);
setValue('replace_pattern', value);
};
const formik = useFormik({
initialValues: {
name: '',
max_streams: 0,
search_pattern: '',
replace_pattern: '',
notes: '',
},
validationSchema: Yup.object({
name: Yup.string().required('Name is required'),
search_pattern: Yup.string().when([], {
is: () => !isDefaultProfile,
then: (schema) => schema.required('Search pattern is required'),
otherwise: (schema) => schema.notRequired(),
}),
replace_pattern: Yup.string().when([], {
is: () => !isDefaultProfile,
then: (schema) => schema.required('Replace pattern is required'),
otherwise: (schema) => schema.notRequired(),
}),
notes: Yup.string(), // Optional field
}),
onSubmit: async (values, { setSubmitting, resetForm }) => {
console.log('submiting');
// For default profiles, only send name and custom_properties (notes)
let submitValues;
if (isDefaultProfile) {
submitValues = {
name: values.name,
custom_properties: {
// Preserve existing custom_properties and add/update notes
...(profile?.custom_properties || {}),
notes: values.notes || '',
},
};
} else {
// For regular profiles, send all fields
submitValues = {
name: values.name,
max_streams: values.max_streams,
search_pattern: values.search_pattern,
replace_pattern: values.replace_pattern,
custom_properties: {
// Preserve existing custom_properties and add/update notes
...(profile?.custom_properties || {}),
notes: values.notes || '',
},
};
}
if (profile?.id) {
await API.updateM3UProfile(m3u.id, {
id: profile.id,
...submitValues,
});
} else {
await API.addM3UProfile(m3u.id, submitValues);
}
resetForm();
// Reset local state to sync with formik reset
setSearchPattern('');
setReplacePattern('');
setSubmitting(false);
onClose();
},
});
useEffect(() => {
if (profile) {
setSearchPattern(profile.search_pattern);
setReplacePattern(profile.replace_pattern);
formik.setValues({
name: profile.name,
max_streams: profile.max_streams,
search_pattern: profile.search_pattern,
replace_pattern: profile.replace_pattern,
notes: profile.custom_properties?.notes || '',
});
} else {
formik.resetForm();
}
}, [profile]); // eslint-disable-line react-hooks/exhaustive-deps
reset(defaultValues);
setSearchPattern(profile?.search_pattern || '');
setReplacePattern(profile?.replace_pattern || '');
}, [defaultValues, profile, reset]);
const handleSampleInputChange = (e) => {
setSampleInput(e.target.value);
@ -212,27 +219,21 @@ const RegexFormAndView = ({ profile = null, m3u, isOpen, onClose }) => {
}
size="lg"
>
<form onSubmit={formik.handleSubmit}>
<form onSubmit={handleSubmit(onSubmit)}>
<TextInput
id="name"
name="name"
label="Name"
value={formik.values.name}
onChange={formik.handleChange}
error={formik.errors.name ? formik.touched.name : ''}
{...register('name')}
error={errors.name?.message}
/>
{/* Only show max streams field for non-default profiles */}
{!isDefaultProfile && (
<NumberInput
id="max_streams"
name="max_streams"
label="Max Streams"
value={formik.values.max_streams}
onChange={(value) =>
formik.setFieldValue('max_streams', value || 0)
}
error={formik.errors.max_streams ? formik.touched.max_streams : ''}
value={watch('max_streams')}
onChange={(value) => setValue('max_streams', value || 0)}
error={errors.max_streams?.message}
min={0}
placeholder="0 = unlimited"
/>
@ -242,40 +243,25 @@ const RegexFormAndView = ({ profile = null, m3u, isOpen, onClose }) => {
{!isDefaultProfile && (
<>
<TextInput
id="search_pattern"
name="search_pattern"
label="Search Pattern (Regex)"
value={searchPattern}
onChange={onSearchPatternUpdate}
error={
formik.errors.search_pattern
? formik.touched.search_pattern
: ''
}
error={errors.search_pattern?.message}
/>
<TextInput
id="replace_pattern"
name="replace_pattern"
label="Replace Pattern"
value={replacePattern}
onChange={onReplacePatternUpdate}
error={
formik.errors.replace_pattern
? formik.touched.replace_pattern
: ''
}
error={errors.replace_pattern?.message}
/>
</>
)}
<Textarea
id="notes"
name="notes"
label="Notes"
placeholder="Add any notes or comments about this profile..."
value={formik.values.notes}
onChange={formik.handleChange}
error={formik.errors.notes ? formik.touched.notes : ''}
{...register('notes')}
error={errors.notes?.message}
minRows={2}
maxRows={4}
autosize
@ -290,9 +276,9 @@ const RegexFormAndView = ({ profile = null, m3u, isOpen, onClose }) => {
>
<Button
type="submit"
disabled={formik.isSubmitting}
disabled={isSubmitting}
size="xs"
style={{ width: formik.isSubmitting ? 'auto' : 'auto' }}
style={{ width: 'auto' }}
>
Submit
</Button>
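The `onSubmit` branching above (default profiles may only update their name and notes, regular profiles submit every field) can be factored into a small pure helper, which makes the restriction easy to unit-test. A minimal sketch; `buildProfilePayload` is an illustrative name, not a function in this diff:

```javascript
// Hypothetical helper mirroring the onSubmit branching above.
// Existing custom_properties are preserved in both cases and
// notes are added/updated on top of them.
function buildProfilePayload(values, profile, isDefaultProfile) {
  const custom_properties = {
    ...(profile?.custom_properties || {}),
    notes: values.notes || '',
  };
  if (isDefaultProfile) {
    // Default profiles: only name and custom_properties are sent.
    return { name: values.name, custom_properties };
  }
  return {
    name: values.name,
    max_streams: values.max_streams,
    search_pattern: values.search_pattern,
    replace_pattern: values.replace_pattern,
    custom_properties,
  };
}
```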


@ -38,6 +38,7 @@ const M3UProfiles = ({ playlist = null, isOpen, onClose }) => {
const [confirmDeleteOpen, setConfirmDeleteOpen] = useState(false);
const [deleteTarget, setDeleteTarget] = useState(null);
const [profileToDelete, setProfileToDelete] = useState(null);
const [deletingProfile, setDeletingProfile] = useState(false);
const [accountInfoOpen, setAccountInfoOpen] = useState(false);
const [selectedProfileForInfo, setSelectedProfileForInfo] = useState(null);
@ -88,11 +89,13 @@ const M3UProfiles = ({ playlist = null, isOpen, onClose }) => {
const executeDeleteProfile = async (id) => {
if (!playlist || !playlist.id) return;
setDeletingProfile(true);
try {
await API.deleteM3UProfile(playlist.id, id);
setConfirmDeleteOpen(false);
} catch (error) {
console.error('Error deleting profile:', error);
} finally {
setDeletingProfile(false);
setConfirmDeleteOpen(false);
}
};
@ -359,6 +362,7 @@ const M3UProfiles = ({ playlist = null, isOpen, onClose }) => {
opened={confirmDeleteOpen}
onClose={() => setConfirmDeleteOpen(false)}
onConfirm={() => executeDeleteProfile(deleteTarget)}
loading={deletingProfile}
title="Confirm Profile Deletion"
message={
profileToDelete ? (


@ -1,108 +1,104 @@
// Modal.js
import React, { useEffect } from 'react';
import { useFormik } from 'formik';
import React, { useEffect, useMemo } from 'react';
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';
import API from '../../api';
import useStreamProfilesStore from '../../store/streamProfiles';
import { Modal, TextInput, Select, Button, Flex } from '@mantine/core';
import useChannelsStore from '../../store/channels';
const schema = Yup.object({
name: Yup.string().required('Name is required'),
url: Yup.string().required('URL is required'),
});
const Stream = ({ stream = null, isOpen, onClose }) => {
const streamProfiles = useStreamProfilesStore((state) => state.profiles);
const channelGroups = useChannelsStore((s) => s.channelGroups);
const formik = useFormik({
initialValues: {
name: '',
url: '',
channel_group: null,
stream_profile_id: '',
},
validationSchema: Yup.object({
name: Yup.string().required('Name is required'),
url: Yup.string().required('URL is required').min(0),
// stream_profile_id: Yup.string().required('Stream profile is required'),
const defaultValues = useMemo(
() => ({
name: stream?.name || '',
url: stream?.url || '',
channel_group: stream?.channel_group
? String(stream.channel_group)
: null,
stream_profile_id: stream?.stream_profile_id
? String(stream.stream_profile_id)
: '',
}),
onSubmit: async (values, { setSubmitting, resetForm }) => {
console.log(values);
[stream]
);
// Convert string IDs back to integers for the API
const payload = {
...values,
channel_group: values.channel_group
? parseInt(values.channel_group, 10)
: null,
stream_profile_id: values.stream_profile_id
? parseInt(values.stream_profile_id, 10)
: null,
};
if (stream?.id) {
await API.updateStream({ id: stream.id, ...payload });
} else {
await API.addStream(payload);
}
resetForm();
setSubmitting(false);
onClose();
},
const {
register,
handleSubmit,
formState: { errors, isSubmitting },
reset,
setValue,
watch,
} = useForm({
defaultValues,
resolver: yupResolver(schema),
});
useEffect(() => {
if (stream) {
formik.setValues({
name: stream.name,
url: stream.url,
// Convert IDs to strings to match Select component values
channel_group: stream.channel_group
? String(stream.channel_group)
: null,
stream_profile_id: stream.stream_profile_id
? String(stream.stream_profile_id)
: '',
});
const onSubmit = async (values) => {
console.log(values);
// Convert string IDs back to integers for the API
const payload = {
...values,
channel_group: values.channel_group
? parseInt(values.channel_group, 10)
: null,
stream_profile_id: values.stream_profile_id
? parseInt(values.stream_profile_id, 10)
: null,
};
if (stream?.id) {
await API.updateStream({ id: stream.id, ...payload });
} else {
formik.resetForm();
await API.addStream(payload);
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [stream]);
reset();
onClose();
};
useEffect(() => {
reset(defaultValues);
}, [defaultValues, reset]);
if (!isOpen) {
return <></>;
}
const channelGroupValue = watch('channel_group');
const streamProfileValue = watch('stream_profile_id');
return (
<Modal opened={isOpen} onClose={onClose} title="Stream" zIndex={10}>
<form onSubmit={formik.handleSubmit}>
<form onSubmit={handleSubmit(onSubmit)}>
<TextInput
id="name"
name="name"
label="Stream Name"
value={formik.values.name}
onChange={formik.handleChange}
error={formik.errors.name}
{...register('name')}
error={errors.name?.message}
/>
<TextInput
id="url"
name="url"
label="Stream URL"
value={formik.values.url}
onChange={formik.handleChange}
error={formik.errors.url}
{...register('url')}
error={errors.url?.message}
/>
<Select
id="channel_group"
name="channel_group"
label="Group"
searchable
value={formik.values.channel_group}
onChange={(value) => {
formik.setFieldValue('channel_group', value); // Update Formik's state with the new value
}}
error={formik.errors.channel_group}
value={channelGroupValue}
onChange={(value) => setValue('channel_group', value)}
error={errors.channel_group?.message}
data={Object.values(channelGroups).map((group) => ({
label: group.name,
value: `${group.id}`,
@ -110,16 +106,12 @@ const Stream = ({ stream = null, isOpen, onClose }) => {
/>
<Select
id="stream_profile_id"
name="stream_profile_id"
label="Stream Profile"
placeholder="Optional"
searchable
value={formik.values.stream_profile_id}
onChange={(value) => {
formik.setFieldValue('stream_profile_id', value); // Update Formik's state with the new value
}}
error={formik.errors.stream_profile_id}
value={streamProfileValue}
onChange={(value) => setValue('stream_profile_id', value)}
error={errors.stream_profile_id?.message}
data={streamProfiles.map((profile) => ({
label: profile.name,
value: `${profile.id}`,
@ -132,7 +124,7 @@ const Stream = ({ stream = null, isOpen, onClose }) => {
type="submit"
variant="contained"
color="primary"
disabled={formik.isSubmitting}
disabled={isSubmitting}
>
Submit
</Button>
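Mantine's Select works with string values, so the `onSubmit` above converts them back to integer IDs (or null) before calling the API. That conversion can be isolated as a pure function; `buildStreamPayload` is a hypothetical name used only for this sketch:

```javascript
// Hypothetical helper matching the ID conversion in onSubmit above.
// Select components yield string values; the API expects integer
// foreign keys, with empty selections normalized to null.
function buildStreamPayload(values) {
  return {
    ...values,
    channel_group: values.channel_group
      ? parseInt(values.channel_group, 10)
      : null,
    stream_profile_id: values.stream_profile_id
      ? parseInt(values.stream_profile_id, 10)
      : null,
  };
}
```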


@ -1,96 +1,91 @@
// Modal.js
import React, { useEffect } from 'react';
import { useFormik } from 'formik';
import React, { useEffect, useMemo } from 'react';
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';
import API from '../../api';
import useUserAgentsStore from '../../store/userAgents';
import { Modal, TextInput, Select, Button, Flex } from '@mantine/core';
const schema = Yup.object({
name: Yup.string().required('Name is required'),
command: Yup.string().required('Command is required'),
parameters: Yup.string().required('Parameters are required'),
});
const StreamProfile = ({ profile = null, isOpen, onClose }) => {
const userAgents = useUserAgentsStore((state) => state.userAgents);
const formik = useFormik({
initialValues: {
name: '',
command: '',
parameters: '',
is_active: true,
user_agent: '',
},
validationSchema: Yup.object({
name: Yup.string().required('Name is required'),
command: Yup.string().required('Command is required'),
parameters: Yup.string().required('Parameters are is required'),
const defaultValues = useMemo(
() => ({
name: profile?.name || '',
command: profile?.command || '',
parameters: profile?.parameters || '',
is_active: profile?.is_active ?? true,
user_agent: profile?.user_agent || '',
}),
onSubmit: async (values, { setSubmitting, resetForm }) => {
if (profile?.id) {
await API.updateStreamProfile({ id: profile.id, ...values });
} else {
await API.addStreamProfile(values);
}
[profile]
);
resetForm();
setSubmitting(false);
onClose();
},
const {
register,
handleSubmit,
formState: { errors, isSubmitting },
reset,
setValue,
watch,
} = useForm({
defaultValues,
resolver: yupResolver(schema),
});
useEffect(() => {
if (profile) {
formik.setValues({
name: profile.name,
command: profile.command,
parameters: profile.parameters,
is_active: profile.is_active,
user_agent: profile.user_agent,
});
const onSubmit = async (values) => {
if (profile?.id) {
await API.updateStreamProfile({ id: profile.id, ...values });
} else {
formik.resetForm();
await API.addStreamProfile(values);
}
}, [profile]);
reset();
onClose();
};
useEffect(() => {
reset(defaultValues);
}, [defaultValues, reset]);
if (!isOpen) {
return <></>;
}
const userAgentValue = watch('user_agent');
return (
<Modal opened={isOpen} onClose={onClose} title="Stream Profile">
<form onSubmit={formik.handleSubmit}>
<form onSubmit={handleSubmit(onSubmit)}>
<TextInput
id="name"
name="name"
label="Name"
value={formik.values.name}
onChange={formik.handleChange}
error={formik.errors.name}
{...register('name')}
error={errors.name?.message}
disabled={profile ? profile.locked : false}
/>
<TextInput
id="command"
name="command"
label="Command"
value={formik.values.command}
onChange={formik.handleChange}
error={formik.errors.command}
{...register('command')}
error={errors.command?.message}
disabled={profile ? profile.locked : false}
/>
<TextInput
id="parameters"
name="parameters"
label="Parameters"
value={formik.values.parameters}
onChange={formik.handleChange}
error={formik.errors.parameters}
{...register('parameters')}
error={errors.parameters?.message}
disabled={profile ? profile.locked : false}
/>
<Select
id="user_agent"
name="user_agent"
label="User-Agent"
value={formik.values.user_agent}
onChange={formik.handleChange}
error={formik.errors.user_agent}
value={userAgentValue}
onChange={(value) => setValue('user_agent', value)}
error={errors.user_agent?.message}
data={userAgents.map((ua) => ({
label: ua.name,
value: `${ua.id}`,
@ -102,7 +97,7 @@ const StreamProfile = ({ profile = null, isOpen, onClose }) => {
type="submit"
variant="contained"
color="primary"
disabled={formik.isSubmitting}
disabled={isSubmitting}
size="small"
>
Submit


@ -1,6 +1,7 @@
// Modal.js
import React, { useEffect } from 'react';
import { useFormik } from 'formik';
import React, { useEffect, useMemo } from 'react';
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';
import API from '../../api';
import {
@ -16,87 +17,82 @@ import {
} from '@mantine/core';
import { NETWORK_ACCESS_OPTIONS } from '../../constants';
const UserAgent = ({ userAgent = null, isOpen, onClose }) => {
const formik = useFormik({
initialValues: {
name: '',
user_agent: '',
description: '',
is_active: true,
},
validationSchema: Yup.object({
name: Yup.string().required('Name is required'),
user_agent: Yup.string().required('User-Agent is required'),
}),
onSubmit: async (values, { setSubmitting, resetForm }) => {
if (userAgent?.id) {
await API.updateUserAgent({ id: userAgent.id, ...values });
} else {
await API.addUserAgent(values);
}
const schema = Yup.object({
name: Yup.string().required('Name is required'),
user_agent: Yup.string().required('User-Agent is required'),
});
resetForm();
setSubmitting(false);
onClose();
},
const UserAgent = ({ userAgent = null, isOpen, onClose }) => {
const defaultValues = useMemo(
() => ({
name: userAgent?.name || '',
user_agent: userAgent?.user_agent || '',
description: userAgent?.description || '',
is_active: userAgent?.is_active ?? true,
}),
[userAgent]
);
const {
register,
handleSubmit,
formState: { errors, isSubmitting },
reset,
setValue,
watch,
} = useForm({
defaultValues,
resolver: yupResolver(schema),
});
useEffect(() => {
if (userAgent) {
formik.setValues({
name: userAgent.name,
user_agent: userAgent.user_agent,
description: userAgent.description,
is_active: userAgent.is_active,
});
const onSubmit = async (values) => {
if (userAgent?.id) {
await API.updateUserAgent({ id: userAgent.id, ...values });
} else {
formik.resetForm();
await API.addUserAgent(values);
}
}, [userAgent]);
reset();
onClose();
};
useEffect(() => {
reset(defaultValues);
}, [defaultValues, reset]);
if (!isOpen) {
return <></>;
}
const isActive = watch('is_active');
return (
<Modal opened={isOpen} onClose={onClose} title="User-Agent">
<form onSubmit={formik.handleSubmit}>
<form onSubmit={handleSubmit(onSubmit)}>
<TextInput
id="name"
name="name"
label="Name"
value={formik.values.name}
onChange={formik.handleChange}
error={formik.touched.name && Boolean(formik.errors.name)}
{...register('name')}
error={errors.name?.message}
/>
<TextInput
id="user_agent"
name="user_agent"
label="User-Agent"
value={formik.values.user_agent}
onChange={formik.handleChange}
error={formik.touched.user_agent && Boolean(formik.errors.user_agent)}
{...register('user_agent')}
error={errors.user_agent?.message}
/>
<TextInput
id="description"
name="description"
label="Description"
value={formik.values.description}
onChange={formik.handleChange}
error={
formik.touched.description && Boolean(formik.errors.description)
}
{...register('description')}
error={errors.description?.message}
/>
<Space h="md" />
<Checkbox
name="is_active"
label="Is Active"
checked={formik.values.is_active}
onChange={formik.handleChange}
checked={isActive}
onChange={(e) => setValue('is_active', e.currentTarget.checked)}
/>
<Flex mih={50} gap="xs" justify="flex-end" align="flex-end">
@ -104,7 +100,7 @@ const UserAgent = ({ userAgent = null, isOpen, onClose }) => {
size="small"
type="submit"
variant="contained"
disabled={formik.isSubmitting}
disabled={isSubmitting}
>
Submit
</Button>


@ -50,9 +50,9 @@ const DvrSettingsForm = React.memo(({ active }) => {
form.setValues(formValues);
if (formValues['dvr-comskip-custom-path']) {
if (formValues['comskip_custom_path']) {
setComskipConfig((prev) => ({
path: formValues['dvr-comskip-custom-path'],
path: formValues['comskip_custom_path'],
exists: prev.exists,
}));
}
@ -69,7 +69,7 @@ const DvrSettingsForm = React.memo(({ active }) => {
exists: Boolean(response.exists),
});
if (response.path) {
form.setFieldValue('dvr-comskip-custom-path', response.path);
form.setFieldValue('comskip_custom_path', response.path);
}
}
} catch (error) {
@ -94,10 +94,10 @@ const DvrSettingsForm = React.memo(({ active }) => {
autoClose: 3000,
color: 'green',
});
form.setFieldValue('dvr-comskip-custom-path', response.path);
form.setFieldValue('comskip_custom_path', response.path);
useSettingsStore.getState().updateSetting({
...(settings['dvr-comskip-custom-path'] || {
key: 'dvr-comskip-custom-path',
...(settings['comskip_custom_path'] || {
key: 'comskip_custom_path',
name: 'DVR Comskip Custom Path',
}),
value: response.path,
@ -137,24 +137,19 @@ const DvrSettingsForm = React.memo(({ active }) => {
)}
<Switch
label="Enable Comskip (remove commercials after recording)"
{...form.getInputProps('dvr-comskip-enabled', {
{...form.getInputProps('comskip_enabled', {
type: 'checkbox',
})}
id={settings['dvr-comskip-enabled']?.id || 'dvr-comskip-enabled'}
name={settings['dvr-comskip-enabled']?.key || 'dvr-comskip-enabled'}
id="comskip_enabled"
name="comskip_enabled"
/>
<TextInput
label="Custom comskip.ini path"
description="Leave blank to use the built-in defaults."
placeholder="/app/docker/comskip.ini"
{...form.getInputProps('dvr-comskip-custom-path')}
id={
settings['dvr-comskip-custom-path']?.id || 'dvr-comskip-custom-path'
}
name={
settings['dvr-comskip-custom-path']?.key ||
'dvr-comskip-custom-path'
}
{...form.getInputProps('comskip_custom_path')}
id="comskip_custom_path"
name="comskip_custom_path"
/>
<Group align="flex-end" gap="sm">
<FileInput
@ -184,71 +179,50 @@ const DvrSettingsForm = React.memo(({ active }) => {
description="Begin recording this many minutes before the scheduled start."
min={0}
step={1}
{...form.getInputProps('dvr-pre-offset-minutes')}
id={
settings['dvr-pre-offset-minutes']?.id || 'dvr-pre-offset-minutes'
}
name={
settings['dvr-pre-offset-minutes']?.key || 'dvr-pre-offset-minutes'
}
{...form.getInputProps('pre_offset_minutes')}
id="pre_offset_minutes"
name="pre_offset_minutes"
/>
<NumberInput
label="End late (minutes)"
description="Continue recording this many minutes after the scheduled end."
min={0}
step={1}
{...form.getInputProps('dvr-post-offset-minutes')}
id={
settings['dvr-post-offset-minutes']?.id || 'dvr-post-offset-minutes'
}
name={
settings['dvr-post-offset-minutes']?.key ||
'dvr-post-offset-minutes'
}
{...form.getInputProps('post_offset_minutes')}
id="post_offset_minutes"
name="post_offset_minutes"
/>
<TextInput
label="TV Path Template"
description="Supports {show}, {season}, {episode}, {sub_title}, {channel}, {year}, {start}, {end}. Use format specifiers like {season:02d}. Relative paths are under your library dir."
placeholder="TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv"
{...form.getInputProps('dvr-tv-template')}
id={settings['dvr-tv-template']?.id || 'dvr-tv-template'}
name={settings['dvr-tv-template']?.key || 'dvr-tv-template'}
{...form.getInputProps('tv_template')}
id="tv_template"
name="tv_template"
/>
<TextInput
label="TV Fallback Template"
description="Template used when an episode has no season/episode. Supports {show}, {start}, {end}, {channel}, {year}."
placeholder="TV_Shows/{show}/{start}.mkv"
{...form.getInputProps('dvr-tv-fallback-template')}
id={
settings['dvr-tv-fallback-template']?.id ||
'dvr-tv-fallback-template'
}
name={
settings['dvr-tv-fallback-template']?.key ||
'dvr-tv-fallback-template'
}
{...form.getInputProps('tv_fallback_template')}
id="tv_fallback_template"
name="tv_fallback_template"
/>
<TextInput
label="Movie Path Template"
description="Supports {title}, {year}, {channel}, {start}, {end}. Relative paths are under your library dir."
placeholder="Movies/{title} ({year}).mkv"
{...form.getInputProps('dvr-movie-template')}
id={settings['dvr-movie-template']?.id || 'dvr-movie-template'}
name={settings['dvr-movie-template']?.key || 'dvr-movie-template'}
{...form.getInputProps('movie_template')}
id="movie_template"
name="movie_template"
/>
<TextInput
label="Movie Fallback Template"
description="Template used when movie metadata is incomplete. Supports {start}, {end}, {channel}."
placeholder="Movies/{start}.mkv"
{...form.getInputProps('dvr-movie-fallback-template')}
id={
settings['dvr-movie-fallback-template']?.id ||
'dvr-movie-fallback-template'
}
name={
settings['dvr-movie-fallback-template']?.key ||
'dvr-movie-fallback-template'
}
{...form.getInputProps('movie_fallback_template')}
id="movie_fallback_template"
name="movie_fallback_template"
/>
<Flex mih={50} gap="xs" justify="flex-end" align="flex-end">
<Button type="submit" variant="default">
@ -260,4 +234,4 @@ const DvrSettingsForm = React.memo(({ active }) => {
);
});
export default DvrSettingsForm;
export default DvrSettingsForm;


@ -20,6 +20,7 @@ const NetworkAccessForm = React.memo(({ active }) => {
const [saved, setSaved] = useState(false);
const [networkAccessConfirmOpen, setNetworkAccessConfirmOpen] =
useState(false);
const [saving, setSaving] = useState(false);
const [netNetworkAccessConfirmCIDRs, setNetNetworkAccessConfirmCIDRs] =
useState([]);
const [clientIpAddress, setClientIpAddress] = useState(null);
@ -31,13 +32,11 @@ const NetworkAccessForm = React.memo(({ active }) => {
});
useEffect(() => {
if(!active) setSaved(false);
if (!active) setSaved(false);
}, [active]);
useEffect(() => {
const networkAccessSettings = JSON.parse(
settings['network-access'].value || '{}'
);
const networkAccessSettings = settings['network_access']?.value || {};
networkAccessForm.setValues(
Object.keys(NETWORK_ACCESS_OPTIONS).reduce((acc, key) => {
acc[key] = networkAccessSettings[key] || '0.0.0.0/0,::/0';
@ -50,8 +49,8 @@ const NetworkAccessForm = React.memo(({ active }) => {
setSaved(false);
setNetworkAccessError(null);
const check = await checkSetting({
...settings['network-access'],
value: JSON.stringify(networkAccessForm.getValues()),
...settings['network_access'],
value: networkAccessForm.getValues(), // Send as object
});
if (check.error && check.message) {
@ -74,19 +73,22 @@ const NetworkAccessForm = React.memo(({ active }) => {
const saveNetworkAccess = async () => {
setSaved(false);
setSaving(true);
try {
await updateSetting({
...settings['network-access'],
value: JSON.stringify(networkAccessForm.getValues()),
...settings['network_access'],
value: networkAccessForm.getValues(), // Send as object
});
setSaved(true);
setNetworkAccessConfirmOpen(false);
} catch (e) {
const errors = {};
for (const key in e.body.value) {
errors[key] = `Invalid CIDR(s): ${e.body.value[key]}`;
}
networkAccessForm.setErrors(errors);
} finally {
setSaving(false);
setNetworkAccessConfirmOpen(false);
}
};
@ -135,6 +137,7 @@ const NetworkAccessForm = React.memo(({ active }) => {
onClose={() => setNetworkAccessConfirmOpen(false)}
onConfirm={saveNetworkAccess}
title={`Confirm Network Access Blocks`}
loading={saving}
message={
<>
<Text>
@ -158,4 +161,4 @@ const NetworkAccessForm = React.memo(({ active }) => {
);
});
export default NetworkAccessForm;
export default NetworkAccessForm;


@ -91,18 +91,13 @@ const ProxySettingsForm = React.memo(({ active }) => {
});
useEffect(() => {
if(!active) setSaved(false);
if (!active) setSaved(false);
}, [active]);
useEffect(() => {
if (settings) {
if (settings['proxy-settings']?.value) {
try {
const proxySettings = JSON.parse(settings['proxy-settings'].value);
proxySettingsForm.setValues(proxySettings);
} catch (error) {
console.error('Error parsing proxy settings:', error);
}
if (settings['proxy_settings']?.value) {
proxySettingsForm.setValues(settings['proxy_settings'].value);
}
}
}, [settings]);
@ -116,8 +111,8 @@ const ProxySettingsForm = React.memo(({ active }) => {
try {
const result = await updateSetting({
...settings['proxy-settings'],
value: JSON.stringify(proxySettingsForm.getValues()),
...settings['proxy_settings'],
value: proxySettingsForm.getValues(), // Send as object
});
// API functions return undefined on error
if (result) {
@ -163,4 +158,4 @@ const ProxySettingsForm = React.memo(({ active }) => {
);
});
export default ProxySettingsForm;
export default ProxySettingsForm;


@ -129,8 +129,11 @@ const StreamSettingsForm = React.memo(({ active }) => {
const values = form.getValues();
const changedSettings = getChangedSettings(values, settings);
const m3uHashKeyChanged =
settings['m3u-hash-key']?.value !== values['m3u-hash-key'].join(',');
// Check if m3u_hash_key changed from the grouped stream_settings
const currentHashKey =
settings['stream_settings']?.value?.m3u_hash_key || '';
const newHashKey = values['m3u_hash_key']?.join(',') || '';
const m3uHashKeyChanged = currentHashKey !== newHashKey;
// If M3U hash key changed, show warning (unless suppressed)
if (m3uHashKeyChanged && !isWarningSuppressed('rehash-streams')) {
@ -161,10 +164,10 @@ const StreamSettingsForm = React.memo(({ active }) => {
)}
<Select
searchable
{...form.getInputProps('default-user-agent')}
id={settings['default-user-agent']?.id || 'default-user-agent'}
name={settings['default-user-agent']?.key || 'default-user-agent'}
label={settings['default-user-agent']?.name || 'Default User Agent'}
{...form.getInputProps('default_user_agent')}
id="default_user_agent"
name="default_user_agent"
label="Default User Agent"
data={userAgents.map((option) => ({
value: `${option.id}`,
label: option.name,
@ -172,16 +175,10 @@ const StreamSettingsForm = React.memo(({ active }) => {
/>
<Select
searchable
{...form.getInputProps('default-stream-profile')}
id={
settings['default-stream-profile']?.id || 'default-stream-profile'
}
name={
settings['default-stream-profile']?.key || 'default-stream-profile'
}
label={
settings['default-stream-profile']?.name || 'Default Stream Profile'
}
{...form.getInputProps('default_stream_profile')}
id="default_stream_profile"
name="default_stream_profile"
label="Default Stream Profile"
data={streamProfiles.map((option) => ({
value: `${option.id}`,
label: option.name,
@ -189,10 +186,10 @@ const StreamSettingsForm = React.memo(({ active }) => {
/>
<Select
searchable
{...form.getInputProps('preferred-region')}
id={settings['preferred-region']?.id || 'preferred-region'}
name={settings['preferred-region']?.key || 'preferred-region'}
label={settings['preferred-region']?.name || 'Preferred Region'}
{...form.getInputProps('preferred_region')}
id="preferred_region"
name="preferred_region"
label="Preferred Region"
data={regionChoices.map((r) => ({
label: r.label,
value: `${r.value}`,
@ -204,19 +201,16 @@ const StreamSettingsForm = React.memo(({ active }) => {
Auto-Import Mapped Files
</Text>
<Switch
{...form.getInputProps('auto-import-mapped-files', {
{...form.getInputProps('auto_import_mapped_files', {
type: 'checkbox',
})}
id={
settings['auto-import-mapped-files']?.id ||
'auto-import-mapped-files'
}
id="auto_import_mapped_files"
/>
</Group>
<MultiSelect
id="m3u-hash-key"
name="m3u-hash-key"
id="m3u_hash_key"
name="m3u_hash_key"
label="M3U Hash Key"
data={[
{
@ -240,7 +234,7 @@ const StreamSettingsForm = React.memo(({ active }) => {
label: 'Group',
},
]}
{...form.getInputProps('m3u-hash-key')}
{...form.getInputProps('m3u_hash_key')}
/>
{rehashSuccess && (
@ -303,4 +297,4 @@ Please ensure you have time to let this complete before proceeding.`}
);
});
export default StreamSettingsForm;
export default StreamSettingsForm;


@ -60,9 +60,9 @@ const SystemSettingsForm = React.memo(({ active }) => {
<NumberInput
label="Maximum System Events"
description="Number of events to retain (minimum: 10, maximum: 1000)"
value={form.values['max-system-events'] || 100}
value={form.values['max_system_events'] || 100}
onChange={(value) => {
form.setFieldValue('max-system-events', value);
form.setFieldValue('max_system_events', value);
}}
min={10}
max={1000}
@ -81,4 +81,4 @@ const SystemSettingsForm = React.memo(({ active }) => {
);
});
export default SystemSettingsForm;
export default SystemSettingsForm;


@ -45,12 +45,11 @@ const UiSettingsForm = React.memo(() => {
useEffect(() => {
if (settings) {
const tzSetting = settings['system-time-zone'];
if (tzSetting?.value) {
const systemSettings = settings['system_settings'];
const tzValue = systemSettings?.value?.time_zone;
if (tzValue) {
timeZoneSyncedRef.current = true;
setTimeZone((prev) =>
prev === tzSetting.value ? prev : tzSetting.value
);
setTimeZone((prev) => (prev === tzValue ? prev : tzValue));
} else if (!timeZoneSyncedRef.current && timeZone) {
timeZoneSyncedRef.current = true;
persistTimeZoneSetting(timeZone);
@ -141,4 +140,4 @@ const UiSettingsForm = React.memo(() => {
);
});
export default UiSettingsForm;
export default UiSettingsForm;


@ -0,0 +1,122 @@
import React from 'react';
import {
Modal,
Stack,
Text,
Radio,
NumberInput,
Checkbox,
Group,
Button,
} from '@mantine/core';
const ChannelNumberingModal = ({
opened,
onClose,
mode,
onModeChange,
numberValue,
onNumberValueChange,
rememberChoice,
onRememberChoiceChange,
onConfirm,
// Props for customizing the modal behavior
isBulk = false,
streamCount = 1,
streamName = '',
}) => {
const title = isBulk
? 'Channel Numbering Options'
: 'Channel Number Assignment';
const confirmLabel = isBulk ? 'Create Channels' : 'Create Channel';
const numberingLabel = isBulk ? 'Numbering Mode' : 'Number Assignment';
// For bulk: use 'custom' mode, for single: use 'specific' mode
const customModeValue = isBulk ? 'custom' : 'specific';
return (
<Modal opened={opened} onClose={onClose} title={title} size="md" centered>
<Stack spacing="md">
<Text size="sm" c="dimmed">
{isBulk
? `Choose how to assign channel numbers to the ${streamCount} selected streams:`
: `Choose how to assign the channel number for "${streamName}":`}
</Text>
<Radio.Group
value={mode}
onChange={onModeChange}
label={numberingLabel}
>
<Stack mt="xs" spacing="xs">
<Radio
value="provider"
label={isBulk ? 'Use Provider Numbers' : 'Use Provider Number'}
description={
isBulk
? 'Use tvg-chno or channel-number from stream metadata, auto-assign for conflicts'
: 'Use tvg-chno or channel-number from stream metadata, auto-assign if not available'
}
/>
<Radio
value="auto"
label={
isBulk ? 'Auto-Assign Sequential' : 'Auto-Assign Next Available'
}
description={
isBulk
? 'Start from the lowest available channel number and increment by 1'
: 'Automatically assign the next available channel number'
}
/>
<Radio
value={customModeValue}
label={
isBulk ? 'Start from Custom Number' : 'Use Specific Number'
}
description={
isBulk
? 'Start sequential numbering from a specific channel number'
: 'Use a specific channel number'
}
/>
</Stack>
</Radio.Group>
{mode === customModeValue && (
<NumberInput
label={isBulk ? 'Starting Channel Number' : 'Channel Number'}
description={
isBulk
? 'Channel numbers will be assigned starting from this number'
: 'The specific channel number to assign'
}
value={numberValue}
onChange={onNumberValueChange}
min={1}
placeholder={
isBulk ? 'Enter starting number...' : 'Enter channel number...'
}
/>
)}
<Checkbox
checked={rememberChoice}
onChange={(event) =>
onRememberChoiceChange(event.currentTarget.checked)
}
label="Remember this choice and don't ask again"
/>
<Group justify="flex-end" mt="md">
<Button variant="default" onClick={onClose}>
Cancel
</Button>
<Button onClick={onConfirm}>{confirmLabel}</Button>
</Group>
</Stack>
</Modal>
);
};
export default ChannelNumberingModal;


@ -0,0 +1,180 @@
import React from 'react';
import {
Modal,
Stack,
Text,
Radio,
NumberInput,
Checkbox,
Group,
Button,
MultiSelect,
Divider,
} from '@mantine/core';
const CreateChannelModal = ({
opened,
onClose,
mode,
onModeChange,
numberValue,
onNumberValueChange,
rememberChoice,
onRememberChoiceChange,
onConfirm,
// Props for customizing the modal behavior
isBulk = false,
streamCount = 1,
streamName = '',
// Channel profile props
selectedProfileIds,
onProfileIdsChange,
channelProfiles = [],
}) => {
const title = isBulk ? 'Create Channels Options' : 'Create Channel';
const confirmLabel = isBulk ? 'Create Channels' : 'Create Channel';
const numberingLabel = isBulk ? 'Numbering Mode' : 'Number Assignment';
// For bulk: use 'custom' mode, for single: use 'specific' mode
const customModeValue = isBulk ? 'custom' : 'specific';
// Convert channel profiles to MultiSelect data format with groups
// Filter out the "All" profile (id '0') and add our own special options
const profileOptions = [
{
group: 'Special',
items: [
{ value: 'all', label: 'All Profiles' },
{ value: 'none', label: 'No Profiles' },
],
},
{
group: 'Profiles',
items: channelProfiles
.filter((profile) => profile.id.toString() !== '0')
.map((profile) => ({
value: profile.id.toString(),
label: profile.name,
})),
},
];
// Handle profile selection with mutual exclusivity
const handleProfileChange = (newValue) => {
const lastSelected = newValue[newValue.length - 1];
// If 'all' or 'none' was just selected, clear everything else and keep only that
if (lastSelected === 'all' || lastSelected === 'none') {
onProfileIdsChange([lastSelected]);
}
// If a specific profile was selected, remove 'all' and 'none'
else if (newValue.includes('all') || newValue.includes('none')) {
onProfileIdsChange(newValue.filter((v) => v !== 'all' && v !== 'none'));
}
// Otherwise just update normally
else {
onProfileIdsChange(newValue);
}
};
return (
<Modal opened={opened} onClose={onClose} title={title} size="md" centered>
<Stack spacing="md">
<Text size="sm" c="dimmed">
{isBulk
? `Configure options for creating ${streamCount} channels from selected streams:`
: `Configure options for creating a channel from "${streamName}":`}
</Text>
<Divider label="Channel Profiles" labelPosition="left" />
<MultiSelect
label="Channel Profiles"
description="Select 'All Profiles' to add to all profiles, 'No Profiles' to not add to any profile, or choose specific profiles"
placeholder="Select profiles..."
data={profileOptions}
value={selectedProfileIds}
onChange={handleProfileChange}
searchable
clearable
/>
<Divider label="Channel Number" labelPosition="left" />
<Radio.Group
value={mode}
onChange={onModeChange}
label={numberingLabel}
>
<Stack mt="xs" spacing="xs">
<Radio
value="provider"
label={isBulk ? 'Use Provider Numbers' : 'Use Provider Number'}
description={
isBulk
? 'Use tvg-chno or channel-number from stream metadata, auto-assign for conflicts'
: 'Use tvg-chno or channel-number from stream metadata, auto-assign if not available'
}
/>
<Radio
value="auto"
label={
isBulk ? 'Auto-Assign Sequential' : 'Auto-Assign Next Available'
}
description={
isBulk
? 'Start from the lowest available channel number and increment by 1'
: 'Automatically assign the next available channel number'
}
/>
<Radio
value={customModeValue}
label={
isBulk ? 'Start from Custom Number' : 'Use Specific Number'
}
description={
isBulk
? 'Start sequential numbering from a specific channel number'
: 'Use a specific channel number'
}
/>
</Stack>
</Radio.Group>
{mode === customModeValue && (
<NumberInput
label={isBulk ? 'Starting Channel Number' : 'Channel Number'}
description={
isBulk
? 'Channel numbers will be assigned starting from this number'
: 'The specific channel number to assign'
}
value={numberValue}
onChange={onNumberValueChange}
min={1}
placeholder={
isBulk ? 'Enter starting number...' : 'Enter channel number...'
}
/>
)}
<Checkbox
checked={rememberChoice}
onChange={(event) =>
onRememberChoiceChange(event.currentTarget.checked)
}
label="Remember this choice and don't ask again"
/>
<Group justify="flex-end" mt="md">
<Button variant="default" onClick={onClose}>
Cancel
</Button>
<Button onClick={onConfirm}>{confirmLabel}</Button>
</Group>
</Stack>
</Modal>
);
};
export default CreateChannelModal;
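The `handleProfileChange` handler above enforces mutual exclusivity between the `'all'` / `'none'` sentinel options and concrete profile IDs. The rule is easiest to see as a pure function over the MultiSelect's next value (a sketch of the same logic, extracted for illustration):

```javascript
// Mirror of handleProfileChange: sentinels ('all'/'none') are exclusive.
// Whichever kind of option was picked last wins.
function nextProfileSelection(newValue) {
  const lastSelected = newValue[newValue.length - 1];
  // A sentinel picked last clears everything else
  if (lastSelected === 'all' || lastSelected === 'none') return [lastSelected];
  // A concrete profile picked last evicts any lingering sentinel
  if (newValue.includes('all') || newValue.includes('none')) {
    return newValue.filter((v) => v !== 'all' && v !== 'none');
  }
  return newValue;
}
```

Keeping this as a pure value-to-value mapping means the component only calls `onProfileIdsChange` with an already-consistent selection, so no downstream consumer has to handle `['all', '3']`.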


@ -0,0 +1,193 @@
import React, { useState, useEffect } from 'react';
import {
Alert,
Box,
Button,
Group,
Modal,
Stack,
Text,
TextInput,
ActionIcon,
Tooltip,
} from '@mantine/core';
import { Copy, SquareMinus, SquarePen } from 'lucide-react';
import API from '../../api';
import { notifications } from '@mantine/notifications';
import useChannelsStore from '../../store/channels';
import { USER_LEVELS } from '../../constants';
const ProfileModal = ({ opened, onClose, mode, profile }) => {
const [profileNameInput, setProfileNameInput] = useState('');
const setSelectedProfileId = useChannelsStore((s) => s.setSelectedProfileId);
useEffect(() => {
if (opened && profile) {
setProfileNameInput(
mode === 'duplicate' ? `${profile.name} Copy` : profile.name
);
}
}, [opened, mode, profile]);
const closeModal = () => {
setProfileNameInput('');
onClose();
};
const submitProfileModal = async () => {
const trimmedName = profileNameInput.trim();
if (!mode || !profile) return;
if (!trimmedName) {
notifications.show({
title: 'Profile name is required',
color: 'red.5',
});
return;
}
if (mode === 'edit') {
if (trimmedName === profile.name) {
closeModal();
return;
}
const updatedProfile = await API.updateChannelProfile({
id: profile.id,
name: trimmedName,
});
if (updatedProfile) {
notifications.show({
title: 'Profile renamed',
message: `${profile.name} → ${trimmedName}`,
color: 'green.5',
});
closeModal();
}
}
if (mode === 'duplicate') {
const duplicatedProfile = await API.duplicateChannelProfile(
profile.id,
trimmedName
);
if (duplicatedProfile) {
notifications.show({
title: 'Profile duplicated',
message: `${profile.name} copied to ${duplicatedProfile.name}`,
color: 'green.5',
});
setSelectedProfileId(`${duplicatedProfile.id}`);
closeModal();
}
}
};
return (
<Modal
opened={opened}
onClose={closeModal}
title={
mode === 'duplicate'
? `Duplicate Profile: ${profile?.name}`
: `Rename Profile: ${profile?.name}`
}
centered
size="sm"
>
<Stack gap="sm">
{mode === 'edit' && (
<Alert color="yellow" title="Warning">
<Text size="sm">
If you have any profile links (M3U, EPG, HDHR) shared with
clients, they will need to be updated after renaming this profile.
</Text>
</Alert>
)}
<TextInput
label="Profile name"
placeholder="Profile name"
value={profileNameInput}
onChange={(event) => setProfileNameInput(event.currentTarget.value)}
data-autofocus
/>
<Group justify="flex-end" gap="xs">
<Button variant="default" size="xs" onClick={closeModal}>
Cancel
</Button>
<Button size="xs" onClick={submitProfileModal}>
{mode === 'duplicate' ? 'Duplicate' : 'Save'}
</Button>
</Group>
</Stack>
</Modal>
);
};
export const renderProfileOption = (
theme,
profiles,
onEditProfile,
onDeleteProfile,
authUser
) => {
return ({ option }) => {
return (
<Group justify="space-between" style={{ width: '100%' }}>
<Box>{option.label}</Box>
{option.value != '0' && (
<Group gap={4} wrap="nowrap">
<Tooltip label="Rename profile">
<ActionIcon
size="xs"
variant="transparent"
color={theme.tailwind.yellow[3]}
onClick={(e) => {
e.stopPropagation();
onEditProfile('edit', option.value);
}}
disabled={authUser.user_level != USER_LEVELS.ADMIN}
>
<SquarePen size={14} />
</ActionIcon>
</Tooltip>
<Tooltip label="Duplicate profile">
<ActionIcon
size="xs"
variant="transparent"
color={theme.tailwind.green[5]}
onClick={(e) => {
e.stopPropagation();
onEditProfile('duplicate', option.value);
}}
disabled={authUser.user_level != USER_LEVELS.ADMIN}
>
<Copy size={14} />
</ActionIcon>
</Tooltip>
<ActionIcon
size="xs"
variant="transparent"
color={theme.tailwind.red[6]}
onClick={(e) => {
e.stopPropagation();
onDeleteProfile(option.value);
}}
disabled={authUser.user_level != USER_LEVELS.ADMIN}
>
<SquareMinus />
</ActionIcon>
</Group>
)}
</Group>
);
};
};
export default ProfileModal;
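`renderProfileOption` above is a curried render prop: the outer call captures stable dependencies (theme, profiles, handlers, auth) once, and returns the `({ option }) => …` function that Mantine's Select invokes per row. A non-JSX sketch of the shape (names and the returned plain object are illustrative only):

```javascript
// Curried render prop: bind dependencies once, render per-option thereafter.
// '0' is the built-in "All" profile and gets no row actions.
const makeRenderOption = (handlers) => ({ option }) => ({
  label: option.label,
  actions: option.value !== '0' ? Object.keys(handlers) : [],
});

const render = makeRenderOption({ edit: () => {}, duplicate: () => {}, remove: () => {} });
```

The payoff is that the per-row function stays tiny and the caller (ChannelTableHeader) decides which handlers exist, rather than the option renderer importing them itself.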


@ -105,6 +105,7 @@ const DraggableRow = ({ row, index }) => {
}}
>
{row.getVisibleCells().map((cell) => {
const isStale = row.original.is_stale;
return (
<Box
className="td"
@ -115,6 +116,9 @@ const DraggableRow = ({ row, index }) => {
? cell.column.getSize()
: undefined,
minWidth: 0,
...(isStale && {
backgroundColor: 'rgba(239, 68, 68, 0.15)',
}),
}}
>
<Flex align="center" style={{ height: '100%' }}>


@ -52,6 +52,7 @@ import {
Select,
NumberInput,
Tooltip,
Skeleton,
} from '@mantine/core';
import { getCoreRowModel, flexRender } from '@tanstack/react-table';
import './table.css';
@ -228,6 +229,7 @@ const ChannelsTable = ({ onReady }) => {
// EPG data lookup
const tvgsById = useEPGsStore((s) => s.tvgsById);
const epgs = useEPGsStore((s) => s.epgs);
const tvgsLoaded = useEPGsStore((s) => s.tvgsLoaded);
const theme = useMantineTheme();
const channelGroups = useChannelsStore((s) => s.channelGroups);
const canEditChannelGroup = useChannelsStore((s) => s.canEditChannelGroup);
@ -314,6 +316,7 @@ const ChannelsTable = ({ onReady }) => {
const [deleteTarget, setDeleteTarget] = useState(null);
const [isBulkDelete, setIsBulkDelete] = useState(false);
const [channelToDelete, setChannelToDelete] = useState(null);
const [deleting, setDeleting] = useState(false);
const hasFetchedData = useRef(false);
@ -431,9 +434,9 @@ const ChannelsTable = ({ onReady }) => {
});
setAllRowIds(ids);
// Signal ready after first successful data fetch
// EPG data is already loaded in initData before this component mounts
if (!hasSignaledReady.current && onReady) {
// Signal ready after first successful data fetch AND EPG data is loaded
// This prevents the EPG column from showing "Not Assigned" while EPG data is still loading
if (!hasSignaledReady.current && onReady && tvgsLoaded) {
hasSignaledReady.current = true;
onReady();
}
@ -445,6 +448,7 @@ const ChannelsTable = ({ onReady }) => {
showDisabled,
selectedProfileId,
showOnlyStreamlessChannels,
tvgsLoaded,
]);
const stopPropagation = useCallback((e) => {
@ -542,9 +546,14 @@ const ChannelsTable = ({ onReady }) => {
};
const executeDeleteChannel = async (id) => {
await API.deleteChannel(id);
API.requeryChannels();
setConfirmDeleteOpen(false);
setDeleting(true);
try {
await API.deleteChannel(id);
API.requeryChannels();
} finally {
setDeleting(false);
setConfirmDeleteOpen(false);
}
};
const deleteChannels = async () => {
@ -559,12 +568,17 @@ const ChannelsTable = ({ onReady }) => {
const executeDeleteChannels = async () => {
setIsLoading(true);
await API.deleteChannels(table.selectedTableIds);
await API.requeryChannels();
setSelectedChannelIds([]);
table.setSelectedTableIds([]);
setIsLoading(false);
setConfirmDeleteOpen(false);
setDeleting(true);
try {
await API.deleteChannels(table.selectedTableIds);
await API.requeryChannels();
setSelectedChannelIds([]);
table.setSelectedTableIds([]);
} finally {
setDeleting(false);
setIsLoading(false);
setConfirmDeleteOpen(false);
}
};
const createRecording = (channel) => {
@ -750,6 +764,19 @@ const ChannelsTable = ({ onReady }) => {
setPaginationString(`${startItem} to ${endItem} of ${totalCount}`);
}, [pagination.pageIndex, pagination.pageSize, totalCount]);
// Signal ready when EPG data finishes loading (if channels were already fetched)
useEffect(() => {
if (
hasFetchedData.current &&
!hasSignaledReady.current &&
onReady &&
tvgsLoaded
) {
hasSignaledReady.current = true;
onReady();
}
}, [tvgsLoaded, onReady]);
const columns = useMemo(
() => [
{
@ -834,6 +861,10 @@ const ChannelsTable = ({ onReady }) => {
const tooltip = epgObj
? `${epgName ? `EPG Name: ${epgName}\n` : ''}${tvgName ? `TVG Name: ${tvgName}\n` : ''}${tvgId ? `TVG-ID: ${tvgId}` : ''}`.trim()
: '';
// If channel has an EPG assignment but tvgsById hasn't loaded yet, show loading
const isEpgDataPending = epgDataId && !epgObj && !tvgsLoaded;
return (
<Box
style={{
@ -856,6 +887,12 @@ const ChannelsTable = ({ onReady }) => {
</Tooltip>
) : epgObj ? (
<span>{epgObj.name}</span>
) : isEpgDataPending ? (
<Skeleton
height={14}
width={(columnSizing.epg || 200) * 0.7}
style={{ borderRadius: 4 }}
/>
) : (
<span style={{ color: '#888' }}>Not Assigned</span>
)}
@ -935,7 +972,7 @@ const ChannelsTable = ({ onReady }) => {
// Note: logos is intentionally excluded - LazyLogo components handle their own logo data
// from the store, so we don't need to recreate columns when logos load.
// eslint-disable-next-line react-hooks/exhaustive-deps
[selectedProfileId, channelGroups, theme]
[selectedProfileId, channelGroups, theme, tvgsById, epgs, tvgsLoaded]
);
const renderHeaderCell = (header) => {
@ -1380,12 +1417,13 @@ const ChannelsTable = ({ onReady }) => {
{/* Table or ghost empty state inside Paper */}
<Box>
{channelsTableLength === 0 && (
<ChannelsTableOnboarding editChannel={editChannel} />
)}
{channelsTableLength === 0 &&
Object.keys(channels).length === 0 && (
<ChannelsTableOnboarding editChannel={editChannel} />
)}
</Box>
{channelsTableLength > 0 && (
{(channelsTableLength > 0 || Object.keys(channels).length > 0) && (
<Box
style={{
display: 'flex',
@ -1471,6 +1509,7 @@ const ChannelsTable = ({ onReady }) => {
? executeDeleteChannels()
: executeDeleteChannel(deleteTarget)
}
loading={deleting}
title={`Confirm ${isBulkDelete ? 'Bulk ' : ''}Channel Deletion`}
message={
isBulkDelete ? (

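The delete handlers above (and the matching ones in the EPG, M3U, and Streams tables) all follow the same shape: raise a `deleting` flag, await the API call, and clear the flag plus close the dialog in `finally` so the confirmation button cannot get stuck if the request throws. A hypothetical distilled helper showing the pattern (not part of the codebase):

```javascript
// Run an async action with a busy flag that is always cleared, and a
// completion callback (e.g. closing the confirm dialog) that always fires,
// even when the action rejects.
async function runWithFlag(setFlag, onDone, action) {
  setFlag(true);
  try {
    return await action();
  } finally {
    setFlag(false);
    onDone();
  }
}
```

Wiring `loading={deleting}` into `ConfirmationDialog`, as the diff does, then gives the user feedback for the whole lifetime of the flag.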

@ -38,6 +38,7 @@ import AssignChannelNumbersForm from '../../forms/AssignChannelNumbers';
import GroupManager from '../../forms/GroupManager';
import ConfirmationDialog from '../../ConfirmationDialog';
import useWarningsStore from '../../../store/warnings';
import ProfileModal, { renderProfileOption } from '../../modals/ProfileModal';
const CreateProfilePopover = React.memo(() => {
const [opened, setOpened] = useState(false);
@ -117,6 +118,12 @@ const ChannelTableHeader = ({
const [confirmDeleteProfileOpen, setConfirmDeleteProfileOpen] =
useState(false);
const [profileToDelete, setProfileToDelete] = useState(null);
const [deletingProfile, setDeletingProfile] = useState(false);
const [profileModalState, setProfileModalState] = useState({
opened: false,
mode: null,
profileId: null,
});
const profiles = useChannelsStore((s) => s.profiles);
const selectedProfileId = useChannelsStore((s) => s.selectedProfileId);
@ -128,6 +135,15 @@ const ChannelTableHeader = ({
setAssignNumbersModalOpen(false);
};
const closeProfileModal = () => {
setProfileModalState({ opened: false, mode: null, profileId: null });
};
const openProfileModal = (mode, profileId) => {
if (!profiles[profileId]) return;
setProfileModalState({ opened: true, mode, profileId });
};
const deleteProfile = async (id) => {
// Get profile details for the confirmation dialog
const profileObj = profiles[id];
@ -142,8 +158,13 @@ const ChannelTableHeader = ({
};
const executeDeleteProfile = async (id) => {
await API.deleteChannelProfile(id);
setConfirmDeleteProfileOpen(false);
setDeletingProfile(true);
try {
await API.deleteChannelProfile(id);
} finally {
setDeletingProfile(false);
setConfirmDeleteProfileOpen(false);
}
};
const matchEpg = async () => {
@ -192,27 +213,13 @@ const ChannelTableHeader = ({
}
};
const renderProfileOption = ({ option, checked }) => {
return (
<Group justify="space-between" style={{ width: '100%' }}>
<Box>{option.label}</Box>
{option.value != '0' && (
<ActionIcon
size="xs"
variant="transparent"
color={theme.tailwind.red[6]}
onClick={(e) => {
e.stopPropagation();
deleteProfile(option.value);
}}
disabled={authUser.user_level != USER_LEVELS.ADMIN}
>
<SquareMinus />
</ActionIcon>
)}
</Group>
);
};
const renderModalOption = renderProfileOption(
theme,
profiles,
openProfileModal,
deleteProfile,
authUser
);
const toggleShowDisabled = () => {
setShowDisabled(!showDisabled);
@ -234,7 +241,8 @@ const ChannelTableHeader = ({
label: profile.name,
value: `${profile.id}`,
}))}
renderOption={renderProfileOption}
renderOption={renderModalOption}
style={{ minWidth: 190 }}
/>
<Tooltip label="Create Profile">
@ -373,6 +381,18 @@ const ChannelTableHeader = ({
</Flex>
</Box>
<ProfileModal
opened={profileModalState.opened}
onClose={closeProfileModal}
mode={profileModalState.mode}
profile={
profileModalState.profileId
? profiles[profileModalState.profileId]
: null
}
onDeleteProfile={deleteProfile}
/>
<AssignChannelNumbersForm
channelIds={selectedTableIds}
isOpen={assignNumbersModalOpen}
@ -388,6 +408,7 @@ const ChannelTableHeader = ({
opened={confirmDeleteProfileOpen}
onClose={() => setConfirmDeleteProfileOpen(false)}
onConfirm={() => executeDeleteProfile(profileToDelete?.id)}
loading={deletingProfile}
title="Confirm Profile Deletion"
message={
profileToDelete ? (


@ -110,6 +110,7 @@ const EPGsTable = () => {
const [deleteTarget, setDeleteTarget] = useState(null);
const [epgToDelete, setEpgToDelete] = useState(null);
const [data, setData] = useState([]);
const [deleting, setDeleting] = useState(false);
const epgs = useEPGsStore((s) => s.epgs);
const refreshProgress = useEPGsStore((s) => s.refreshProgress);
@ -431,10 +432,13 @@ const EPGsTable = () => {
};
const executeDeleteEPG = async (id) => {
setIsLoading(true);
await API.deleteEPG(id);
setIsLoading(false);
setConfirmDeleteOpen(false);
setDeleting(true);
try {
await API.deleteEPG(id);
} finally {
setDeleting(false);
setConfirmDeleteOpen(false);
}
};
const refreshEPG = async (id) => {
@ -688,6 +692,7 @@ const EPGsTable = () => {
opened={confirmDeleteOpen}
onClose={() => setConfirmDeleteOpen(false)}
onConfirm={() => executeDeleteEPG(deleteTarget)}
loading={deleting}
title="Confirm EPG Source Deletion"
message={
epgToDelete ? (


@ -189,12 +189,12 @@ const LogosTable = () => {
color: 'red',
});
} finally {
setIsLoading(false);
setConfirmDeleteOpen(false);
setDeleteTarget(null);
setLogoToDelete(null);
setIsBulkDelete(false);
clearSelections(); // Clear selections
setIsLoading(false);
}
},
[fetchAllLogos, clearSelections]
@ -221,10 +221,10 @@ const LogosTable = () => {
color: 'red',
});
} finally {
setIsLoading(false);
setConfirmDeleteOpen(false);
setIsBulkDelete(false);
clearSelections(); // Clear selections
setIsLoading(false);
}
},
[selectedRows, fetchAllLogos, clearSelections]
@ -805,6 +805,7 @@ const LogosTable = () => {
<ConfirmationDialog
opened={confirmDeleteOpen}
onClose={() => setConfirmDeleteOpen(false)}
loading={isLoading}
onConfirm={(deleteFiles) => {
if (isBulkDelete) {
executeBulkDelete(deleteFiles);
@ -867,6 +868,7 @@ const LogosTable = () => {
<ConfirmationDialog
opened={confirmCleanupOpen}
onClose={() => setConfirmCleanupOpen(false)}
loading={isCleaningUp}
onConfirm={executeCleanupUnused}
title="Cleanup Unused Logos"
message={


@ -140,6 +140,7 @@ const M3UTable = () => {
const [playlistToDelete, setPlaylistToDelete] = useState(null);
const [data, setData] = useState([]);
const [sorting, setSorting] = useState([{ id: 'name', desc: '' }]);
const [deleting, setDeleting] = useState(false);
const playlists = usePlaylistsStore((s) => s.playlists);
const refreshProgress = usePlaylistsStore((s) => s.refreshProgress);
@ -400,9 +401,14 @@ const M3UTable = () => {
const executeDeletePlaylist = async (id) => {
setIsLoading(true);
await API.deletePlaylist(id);
setIsLoading(false);
setConfirmDeleteOpen(false);
setDeleting(true);
try {
await API.deletePlaylist(id);
} finally {
setDeleting(false);
setIsLoading(false);
setConfirmDeleteOpen(false);
}
};
const toggleActive = async (playlist) => {
@ -893,6 +899,7 @@ const M3UTable = () => {
opened={confirmDeleteOpen}
onClose={() => setConfirmDeleteOpen(false)}
onConfirm={() => executeDeletePlaylist(deleteTarget)}
loading={deleting}
title="Confirm M3U Account Deletion"
message={
playlistToDelete ? (


@ -155,7 +155,7 @@ const StreamProfiles = () => {
};
const deleteStreamProfile = async (id) => {
if (id == settings['default-stream-profile'].value) {
if (id == settings.default_stream_profile) {
notifications.show({
title: 'Cannot delete default stream-profile',
color: 'red.5',


@ -58,6 +58,7 @@ import useWarningsStore from '../../store/warnings';
import { CustomTable, useTable } from './CustomTable';
import useLocalStorage from '../../hooks/useLocalStorage';
import ConfirmationDialog from '../ConfirmationDialog';
import CreateChannelModal from '../modals/CreateChannelModal';
const StreamRowActions = ({
theme,
@ -193,25 +194,28 @@ const StreamsTable = ({ onReady }) => {
const [sorting, setSorting] = useState([{ id: 'name', desc: false }]);
const [selectedStreamIds, setSelectedStreamIds] = useState([]);
// Channel numbering modal state
// Channel creation modal state (bulk)
const [channelNumberingModalOpen, setChannelNumberingModalOpen] =
useState(false);
const [numberingMode, setNumberingMode] = useState('provider'); // 'provider', 'auto', or 'custom'
const [customStartNumber, setCustomStartNumber] = useState(1);
const [rememberChoice, setRememberChoice] = useState(false);
const [bulkSelectedProfileIds, setBulkSelectedProfileIds] = useState([]);
// Single channel numbering modal state
// Channel creation modal state (single)
const [singleChannelModalOpen, setSingleChannelModalOpen] = useState(false);
const [singleChannelMode, setSingleChannelMode] = useState('provider'); // 'provider', 'auto', or 'specific'
const [specificChannelNumber, setSpecificChannelNumber] = useState(1);
const [rememberSingleChoice, setRememberSingleChoice] = useState(false);
const [currentStreamForChannel, setCurrentStreamForChannel] = useState(null);
const [singleSelectedProfileIds, setSingleSelectedProfileIds] = useState([]);
// Confirmation dialog state
const [confirmDeleteOpen, setConfirmDeleteOpen] = useState(false);
const [deleteTarget, setDeleteTarget] = useState(null);
const [streamToDelete, setStreamToDelete] = useState(null);
const [isBulkDelete, setIsBulkDelete] = useState(false);
const [deleting, setDeleting] = useState(false);
// const [allRowsSelected, setAllRowsSelected] = useState(false);
@ -260,6 +264,8 @@ const StreamsTable = ({ onReady }) => {
(state) =>
state.channels.find((chan) => chan.id === selectedChannelIds[0])?.streams
);
const channelProfiles = useChannelsStore((s) => s.profiles);
const selectedProfileId = useChannelsStore((s) => s.selectedProfileId);
const env_mode = useSettingsStore((s) => s.environment.env_mode);
const showVideo = useVideoStore((s) => s.showVideo);
const [tableSize, _] = useLocalStorage('table-size', 'default');
@ -462,6 +468,11 @@ const StreamsTable = ({ onReady }) => {
const createChannelsFromStreams = async () => {
if (selectedStreamIds.length === 0) return;
// Set default profile selection based on current profile filter
const defaultProfileIds =
selectedProfileId === '0' ? ['all'] : [selectedProfileId];
setBulkSelectedProfileIds(defaultProfileIds);
// Check if user has suppressed the channel numbering dialog
const actionKey = 'channel-numbering-choice';
if (isWarningSuppressed(actionKey)) {
@ -478,7 +489,10 @@ const StreamsTable = ({ onReady }) => {
? 0
: Number(savedStartNumber);
await executeChannelCreation(startingChannelNumberValue);
await executeChannelCreation(
startingChannelNumberValue,
defaultProfileIds
);
} else {
// Show the modal to let user choose
setChannelNumberingModalOpen(true);
@ -486,15 +500,32 @@ const StreamsTable = ({ onReady }) => {
};
// Separate function to actually execute the channel creation
const executeChannelCreation = async (startingChannelNumberValue) => {
const executeChannelCreation = async (
startingChannelNumberValue,
profileIds = null
) => {
try {
const selectedChannelProfileId =
useChannelsStore.getState().selectedProfileId;
// Convert profile selection: 'all' means all profiles (null), 'none' means no profiles ([]), specific IDs otherwise
let channelProfileIds;
if (profileIds) {
if (profileIds.includes('none')) {
channelProfileIds = [];
} else if (profileIds.includes('all')) {
channelProfileIds = null;
} else {
channelProfileIds = profileIds
.filter((id) => id !== 'all' && id !== 'none')
.map((id) => parseInt(id));
}
} else {
channelProfileIds =
selectedProfileId !== '0' ? [parseInt(selectedProfileId)] : null;
}
// Use the async API for all bulk operations
const response = await API.createChannelsFromStreamsAsync(
selectedStreamIds,
selectedChannelProfileId !== '0' ? [selectedChannelProfileId] : null,
channelProfileIds,
startingChannelNumberValue
);
@ -533,7 +564,10 @@ const StreamsTable = ({ onReady }) => {
: Number(customStartNumber);
setChannelNumberingModalOpen(false);
await executeChannelCreation(startingChannelNumberValue);
await executeChannelCreation(
startingChannelNumberValue,
bulkSelectedProfileIds
);
};
const editStream = async (stream = null) => {
@ -557,12 +591,17 @@ const StreamsTable = ({ onReady }) => {
};
const executeDeleteStream = async (id) => {
await API.deleteStream(id);
fetchData();
// Clear the selection for the deleted stream
setSelectedStreamIds([]);
table.setSelectedTableIds([]);
setConfirmDeleteOpen(false);
setDeleting(true);
try {
await API.deleteStream(id);
fetchData();
// Clear the selection for the deleted stream
setSelectedStreamIds([]);
table.setSelectedTableIds([]);
} finally {
setDeleting(false);
setConfirmDeleteOpen(false);
}
};
const deleteStreams = async () => {
@ -579,12 +618,17 @@ const StreamsTable = ({ onReady }) => {
const executeDeleteStreams = async () => {
setIsLoading(true);
await API.deleteStreams(selectedStreamIds);
setIsLoading(false);
fetchData();
setSelectedStreamIds([]);
table.setSelectedTableIds([]);
setConfirmDeleteOpen(false);
setDeleting(true);
try {
await API.deleteStreams(selectedStreamIds);
fetchData();
setSelectedStreamIds([]);
table.setSelectedTableIds([]);
} finally {
setDeleting(false);
setIsLoading(false);
setConfirmDeleteOpen(false);
}
};
const closeStreamForm = () => {
@ -595,6 +639,11 @@ const StreamsTable = ({ onReady }) => {
// Single channel creation functions
const createChannelFromStream = async (stream) => {
// Set default profile selection based on current profile filter
const defaultProfileIds =
selectedProfileId === '0' ? ['all'] : [selectedProfileId];
setSingleSelectedProfileIds(defaultProfileIds);
// Check if user has suppressed the single channel numbering dialog
const actionKey = 'single-channel-numbering-choice';
if (isWarningSuppressed(actionKey)) {
@ -611,7 +660,11 @@ const StreamsTable = ({ onReady }) => {
? 0
: Number(savedChannelNumber);
await executeSingleChannelCreation(stream, channelNumberValue);
await executeSingleChannelCreation(
stream,
channelNumberValue,
defaultProfileIds
);
} else {
// Show the modal to let user choose
setCurrentStreamForChannel(stream);
@ -620,18 +673,33 @@ const StreamsTable = ({ onReady }) => {
};
// Separate function to actually execute single channel creation
const executeSingleChannelCreation = async (stream, channelNumber = null) => {
const selectedChannelProfileId =
useChannelsStore.getState().selectedProfileId;
const executeSingleChannelCreation = async (
stream,
channelNumber = null,
profileIds = null
) => {
// Convert profile selection: 'all' means all profiles (null), 'none' means no profiles ([]), specific IDs otherwise
let channelProfileIds;
if (profileIds) {
if (profileIds.includes('none')) {
channelProfileIds = [];
} else if (profileIds.includes('all')) {
channelProfileIds = null;
} else {
channelProfileIds = profileIds
.filter((id) => id !== 'all' && id !== 'none')
.map((id) => parseInt(id));
}
} else {
channelProfileIds =
selectedProfileId !== '0' ? [parseInt(selectedProfileId)] : null;
}
await API.createChannelFromStream({
name: stream.name,
channel_number: channelNumber,
stream_id: stream.id,
// Only pass channel_profile_ids if a specific profile is selected (not "All")
...(selectedChannelProfileId !== '0' && {
channel_profile_ids: selectedChannelProfileId,
}),
channel_profile_ids: channelProfileIds,
});
await API.requeryChannels();
const fetchLogos = useChannelsStore.getState().fetchLogos;
@ -663,7 +731,8 @@ const StreamsTable = ({ onReady }) => {
setSingleChannelModalOpen(false);
await executeSingleChannelCreation(
currentStreamForChannel,
channelNumberValue
channelNumberValue,
singleSelectedProfileIds
);
};
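Both `executeChannelCreation` and `executeSingleChannelCreation` above repeat the same conversion from the modal's selection to the API payload: `'none'` means add to no profile (`[]`), `'all'` means let the server add to every profile (`null`), and anything else becomes numeric IDs, with a fallback to the active profile filter when no explicit choice was made. Sketched as one pure function (the name is illustrative; the mapping is the one in the diff):

```javascript
// 'none' -> []   (attach to no profile)
// 'all'  -> null (server default: attach to all profiles)
// else   -> numeric profile IDs; fallback to the active filter ('0' = All)
function toChannelProfileIds(profileIds, selectedProfileId) {
  if (profileIds) {
    if (profileIds.includes('none')) return [];
    if (profileIds.includes('all')) return null;
    return profileIds
      .filter((id) => id !== 'all' && id !== 'none')
      .map((id) => parseInt(id, 10));
  }
  return selectedProfileId !== '0' ? [parseInt(selectedProfileId, 10)] : null;
}
```

Factoring this out would remove the duplication between the bulk and single-channel paths, since both ultimately hand the result to the channel-creation API calls unchanged.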
@ -885,6 +954,14 @@ const StreamsTable = ({ onReady }) => {
bodyCellRenderFns: {
actions: renderBodyCell,
},
getRowStyles: (row) => {
if (row.original.is_stale) {
return {
backgroundColor: 'rgba(239, 68, 68, 0.15)',
};
}
return {};
},
});
/**
@ -1126,145 +1203,41 @@ const StreamsTable = ({ onReady }) => {
onClose={closeStreamForm}
/>
{/* Channel Numbering Modal */}
<Modal
{/* Bulk Channel Creation Modal */}
<CreateChannelModal
opened={channelNumberingModalOpen}
onClose={() => setChannelNumberingModalOpen(false)}
title="Channel Numbering Options"
size="md"
centered
>
<Stack spacing="md">
<Text size="sm" c="dimmed">
Choose how to assign channel numbers to the{' '}
{selectedStreamIds.length} selected streams:
</Text>
mode={numberingMode}
onModeChange={setNumberingMode}
numberValue={customStartNumber}
onNumberValueChange={setCustomStartNumber}
rememberChoice={rememberChoice}
onRememberChoiceChange={setRememberChoice}
onConfirm={handleChannelNumberingConfirm}
isBulk={true}
streamCount={selectedStreamIds.length}
selectedProfileIds={bulkSelectedProfileIds}
onProfileIdsChange={setBulkSelectedProfileIds}
channelProfiles={channelProfiles ? Object.values(channelProfiles) : []}
/>
<Radio.Group
value={numberingMode}
onChange={setNumberingMode}
label="Numbering Mode"
>
<Stack mt="xs" spacing="xs">
<Radio
value="provider"
label="Use Provider Numbers"
description="Use tvg-chno or channel-number from stream metadata, auto-assign for conflicts"
/>
<Radio
value="auto"
label="Auto-Assign Sequential"
description="Start from the lowest available channel number and increment by 1"
/>
<Radio
value="custom"
label="Start from Custom Number"
description="Start sequential numbering from a specific channel number"
/>
</Stack>
</Radio.Group>
{numberingMode === 'custom' && (
<NumberInput
label="Starting Channel Number"
description="Channel numbers will be assigned starting from this number"
value={customStartNumber}
onChange={setCustomStartNumber}
min={1}
placeholder="Enter starting number..."
/>
)}
<Checkbox
checked={rememberChoice}
onChange={(event) => setRememberChoice(event.currentTarget.checked)}
label="Remember this choice and don't ask again"
/>
<Group justify="flex-end" mt="md">
<Button
variant="default"
onClick={() => setChannelNumberingModalOpen(false)}
>
Cancel
</Button>
<Button onClick={handleChannelNumberingConfirm}>
Create Channels
</Button>
</Group>
</Stack>
</Modal>
{/* Single Channel Numbering Modal */}
<Modal
{/* Single Channel Creation Modal */}
<CreateChannelModal
opened={singleChannelModalOpen}
onClose={() => setSingleChannelModalOpen(false)}
title="Channel Number Assignment"
size="md"
centered
>
<Stack spacing="md">
<Text size="sm" c="dimmed">
Choose how to assign the channel number for "
{currentStreamForChannel?.name}":
</Text>
<Radio.Group
value={singleChannelMode}
onChange={setSingleChannelMode}
label="Number Assignment"
>
<Stack mt="xs" spacing="xs">
<Radio
value="provider"
label="Use Provider Number"
description="Use tvg-chno or channel-number from stream metadata, auto-assign if not available"
/>
<Radio
value="auto"
label="Auto-Assign Next Available"
description="Automatically assign the next available channel number"
/>
<Radio
value="specific"
label="Use Specific Number"
description="Use a specific channel number"
/>
</Stack>
</Radio.Group>
{singleChannelMode === 'specific' && (
<NumberInput
label="Channel Number"
description="The specific channel number to assign"
value={specificChannelNumber}
onChange={setSpecificChannelNumber}
min={1}
placeholder="Enter channel number..."
/>
)}
<Checkbox
checked={rememberSingleChoice}
onChange={(event) =>
setRememberSingleChoice(event.currentTarget.checked)
}
label="Remember this choice and don't ask again"
/>
<Group justify="flex-end" mt="md">
<Button
variant="default"
onClick={() => setSingleChannelModalOpen(false)}
>
Cancel
</Button>
<Button onClick={handleSingleChannelNumberingConfirm}>
Create Channel
</Button>
</Group>
</Stack>
</Modal>
mode={singleChannelMode}
onModeChange={setSingleChannelMode}
numberValue={specificChannelNumber}
onNumberValueChange={setSpecificChannelNumber}
rememberChoice={rememberSingleChoice}
onRememberChoiceChange={setRememberSingleChoice}
onConfirm={handleSingleChannelNumberingConfirm}
isBulk={false}
streamName={currentStreamForChannel?.name}
selectedProfileIds={singleSelectedProfileIds}
onProfileIdsChange={setSingleSelectedProfileIds}
channelProfiles={channelProfiles ? Object.values(channelProfiles) : []}
/>
<ConfirmationDialog
opened={confirmDeleteOpen}
@@ -1296,6 +1269,7 @@ This action cannot be undone.`}
cancelLabel="Cancel"
actionKey={isBulkDelete ? 'delete-streams' : 'delete-stream'}
onSuppressChange={suppressWarning}
loading={deleting}
size="md"
/>
</>


@@ -127,7 +127,7 @@ const UserAgentsTable = () => {
const deleteUserAgent = async (ids) => {
if (Array.isArray(ids)) {
if (ids.includes(settings['default-user-agent'].value)) {
if (ids.includes(settings.default_user_agent)) {
notifications.show({
title: 'Cannot delete default user-agent',
color: 'red.5',
@@ -137,7 +137,7 @@ const UserAgentsTable = () => {
await API.deleteUserAgents(ids);
} else {
if (ids == settings['default-user-agent'].value) {
if (ids == settings.default_user_agent) {
notifications.show({
title: 'Cannot delete default user-agent',
color: 'red.5',


@@ -96,6 +96,7 @@ const UsersTable = () => {
const [deleteTarget, setDeleteTarget] = useState(null);
const [userToDelete, setUserToDelete] = useState(null);
const [isLoading, setIsLoading] = useState(false);
const [deleting, setDeleting] = useState(false);
const [visiblePasswords, setVisiblePasswords] = useState({});
/**
@@ -110,9 +111,14 @@ const UsersTable = () => {
const executeDeleteUser = useCallback(async (id) => {
setIsLoading(true);
await API.deleteUser(id);
setIsLoading(false);
setConfirmDeleteOpen(false);
setDeleting(true);
try {
await API.deleteUser(id);
} finally {
setDeleting(false);
setIsLoading(false);
setConfirmDeleteOpen(false);
}
}, []);
const editUser = useCallback(async (user = null) => {
@@ -406,6 +412,7 @@ const UsersTable = () => {
opened={confirmDeleteOpen}
onClose={() => setConfirmDeleteOpen(false)}
onConfirm={() => executeDeleteUser(deleteTarget)}
loading={deleting}
title="Confirm User Deletion"
message={
userToDelete ? (


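The delete handlers updated in `UsersTable` above (and mirrored in `VODLogosTable` below) all apply the same pattern: raise a `deleting` flag, run the async API call inside `try`, and reset the flag in `finally` so the confirmation dialog's loading state can never get stuck when the request fails. A dependency-free sketch of that pattern (the function and parameter names here are illustrative, not taken from the actual components):

```javascript
// Minimal sketch of the try/finally loading-flag pattern used by the
// updated delete handlers. `api.deleteUser` stands in for the real API call;
// `setDeleting` and `closeDialog` stand in for the React state setters.
async function deleteWithLoadingFlag(id, api, setDeleting, closeDialog) {
  setDeleting(true);
  try {
    await api.deleteUser(id); // may throw; the flag is still reset below
  } finally {
    setDeleting(false); // always runs, even when the request fails
    closeDialog();
  }
}
```

The point of the `finally` block is that both the success and failure paths leave the UI in a consistent state, which is exactly what the diffs above add to each table.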
@@ -74,6 +74,7 @@ export default function VODLogosTable() {
const [confirmDeleteOpen, setConfirmDeleteOpen] = useState(false);
const [deleteTarget, setDeleteTarget] = useState(null);
const [confirmCleanupOpen, setConfirmCleanupOpen] = useState(false);
const [deleting, setDeleting] = useState(false);
const [paginationString, setPaginationString] = useState('');
const [isCleaningUp, setIsCleaningUp] = useState(false);
const tableRef = React.useRef(null);
@@ -139,6 +140,7 @@ export default function VODLogosTable() {
}, []);
const handleConfirmDelete = async () => {
setDeleting(true);
try {
if (deleteTarget.length === 1) {
await deleteVODLogo(deleteTarget[0]);
@@ -162,6 +164,7 @@ export default function VODLogosTable() {
color: 'red',
});
} finally {
setDeleting(false);
// Always clear selections and close dialog, even on error
clearSelections();
setConfirmDeleteOpen(false);
@@ -571,6 +574,7 @@ export default function VODLogosTable() {
// pass deleteFiles option through
handleConfirmDelete(deleteFiles);
}}
loading={deleting}
title={
deleteTarget && deleteTarget.length > 1
? 'Delete Multiple Logos'
@@ -633,6 +637,7 @@ export default function VODLogosTable() {
<ConfirmationDialog
opened={confirmCleanupOpen}
onClose={() => setConfirmCleanupOpen(false)}
loading={isCleaningUp}
onConfirm={handleConfirmCleanup}
title="Cleanup Unused Logos"
message={

File diff suppressed because it is too large.


@@ -1,244 +1,31 @@
import React, { useState, useEffect } from 'react';
import React, { Suspense, useEffect, useState } from 'react';
import {
Box,
Button,
Card,
Flex,
Group,
Image,
Text,
Title,
Select,
TextInput,
Pagination,
Badge,
Grid,
GridCol,
Group,
Loader,
Stack,
LoadingOverlay,
Pagination,
SegmentedControl,
ActionIcon,
Select,
Stack,
TextInput,
Title,
} from '@mantine/core';
import { Search, Play, Calendar, Clock, Star } from 'lucide-react';
import { Search } from 'lucide-react';
import { useDisclosure } from '@mantine/hooks';
import useVODStore from '../store/useVODStore';
import SeriesModal from '../components/SeriesModal';
import VODModal from '../components/VODModal';
const formatDuration = (seconds) => {
if (!seconds) return '';
const hours = Math.floor(seconds / 3600);
const mins = Math.floor((seconds % 3600) / 60);
const secs = seconds % 60;
return hours > 0 ? `${hours}h ${mins}m` : `${mins}m ${secs}s`;
};
const VODCard = ({ vod, onClick }) => {
const isEpisode = vod.type === 'episode';
const getDisplayTitle = () => {
if (isEpisode && vod.series) {
const seasonEp =
vod.season_number && vod.episode_number
? `S${vod.season_number.toString().padStart(2, '0')}E${vod.episode_number.toString().padStart(2, '0')}`
: '';
return (
<Stack spacing={4}>
<Text size="sm" color="dimmed">
{vod.series.name}
</Text>
<Text weight={500}>
{seasonEp} - {vod.name}
</Text>
</Stack>
);
}
return <Text weight={500}>{vod.name}</Text>;
};
const handleCardClick = async () => {
// Just pass the basic vod info to the parent handler
onClick(vod);
};
return (
<Card
shadow="sm"
padding="md"
radius="md"
withBorder
style={{ cursor: 'pointer', backgroundColor: '#27272A' }}
onClick={handleCardClick}
>
<Card.Section>
<Box style={{ position: 'relative', height: 300 }}>
{vod.logo?.url ? (
<Image
src={vod.logo.url}
height={300}
alt={vod.name}
fit="contain"
/>
) : (
<Box
style={{
height: 300,
backgroundColor: '#404040',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
}}
>
<Play size={48} color="#666" />
</Box>
)}
<ActionIcon
style={{
position: 'absolute',
top: 8,
right: 8,
backgroundColor: 'rgba(0,0,0,0.7)',
}}
onClick={(e) => {
e.stopPropagation();
onClick(vod);
}}
>
<Play size={16} color="white" />
</ActionIcon>
<Badge
style={{
position: 'absolute',
bottom: 8,
left: 8,
}}
color={isEpisode ? 'blue' : 'green'}
>
{isEpisode ? 'Episode' : 'Movie'}
</Badge>
</Box>
</Card.Section>
<Stack spacing={8} mt="md">
{getDisplayTitle()}
<Group spacing={16}>
{vod.year && (
<Group spacing={4}>
<Calendar size={14} color="#666" />
<Text size="xs" color="dimmed">
{vod.year}
</Text>
</Group>
)}
{vod.duration && (
<Group spacing={4}>
<Clock size={14} color="#666" />
<Text size="xs" color="dimmed">
{formatDuration(vod.duration_secs)}
</Text>
</Group>
)}
{vod.rating && (
<Group spacing={4}>
<Star size={14} color="#666" />
<Text size="xs" color="dimmed">
{vod.rating}
</Text>
</Group>
)}
</Group>
{vod.genre && (
<Text size="xs" color="dimmed" lineClamp={1}>
{vod.genre}
</Text>
)}
</Stack>
</Card>
);
};
const SeriesCard = ({ series, onClick }) => {
return (
<Card
shadow="sm"
padding="md"
radius="md"
withBorder
style={{ cursor: 'pointer', backgroundColor: '#27272A' }}
onClick={() => onClick(series)}
>
<Card.Section>
<Box style={{ position: 'relative', height: 300 }}>
{series.logo?.url ? (
<Image
src={series.logo.url}
height={300}
alt={series.name}
fit="contain"
/>
) : (
<Box
style={{
height: 300,
backgroundColor: '#404040',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
}}
>
<Play size={48} color="#666" />
</Box>
)}
{/* Add Series badge in the same position as Movie badge */}
<Badge
style={{
position: 'absolute',
bottom: 8,
left: 8,
}}
color="purple"
>
Series
</Badge>
</Box>
</Card.Section>
<Stack spacing={8} mt="md">
<Text weight={500}>{series.name}</Text>
<Group spacing={16}>
{series.year && (
<Group spacing={4}>
<Calendar size={14} color="#666" />
<Text size="xs" color="dimmed">
{series.year}
</Text>
</Group>
)}
{series.rating && (
<Group spacing={4}>
<Star size={14} color="#666" />
<Text size="xs" color="dimmed">
{series.rating}
</Text>
</Group>
)}
</Group>
{series.genre && (
<Text size="xs" color="dimmed" lineClamp={1}>
{series.genre}
</Text>
)}
</Stack>
</Card>
);
};
import ErrorBoundary from '../components/ErrorBoundary.jsx';
import {
filterCategoriesToEnabled,
getCategoryOptions,
} from '../utils/pages/VODsUtils.js';
const SeriesModal = React.lazy(() => import('../components/SeriesModal'));
const VODModal = React.lazy(() => import('../components/VODModal'));
const VODCard = React.lazy(() => import('../components/cards/VODCard'));
const SeriesCard = React.lazy(() => import('../components/cards/SeriesCard'));
const MIN_CARD_WIDTH = 260;
const MAX_CARD_WIDTH = 320;
@@ -312,19 +99,7 @@ const VODsPage = () => {
};
useEffect(() => {
// setCategories(allCategories)
setCategories(
Object.keys(allCategories).reduce((acc, key) => {
const enabled = allCategories[key].m3u_accounts.find(
(account) => account.enabled === true
);
if (enabled) {
acc[key] = allCategories[key];
}
return acc;
}, {})
);
setCategories(filterCategoriesToEnabled(allCategories));
}, [allCategories]);
useEffect(() => {
@@ -356,19 +131,7 @@ const VODsPage = () => {
setPage(1);
};
const categoryOptions = [
{ value: '', label: 'All Categories' },
...Object.values(categories)
.filter((cat) => {
if (filters.type === 'movies') return cat.category_type === 'movie';
if (filters.type === 'series') return cat.category_type === 'series';
return true; // 'all' shows all
})
.map((cat) => ({
value: `${cat.name}|${cat.category_type}`,
label: `${cat.name} (${cat.category_type})`,
})),
];
const categoryOptions = getCategoryOptions(categories, filters);
const totalPages = Math.ceil(totalCount / pageSize);
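The two inline blocks removed in the hunks above were extracted into `filterCategoriesToEnabled` and `getCategoryOptions` in `utils/pages/VODsUtils.js`. Reconstructed from the inline code they replaced, the extracted helpers presumably look roughly like this (a sketch; the actual file may differ, and `export` keywords are omitted so the snippet runs standalone):

```javascript
// Sketch of the helpers extracted to VODsUtils.js, reconstructed from the
// inline code removed in the diff above.
function filterCategoriesToEnabled(allCategories) {
  // Keep only categories that have at least one enabled M3U account.
  return Object.keys(allCategories).reduce((acc, key) => {
    const enabled = allCategories[key].m3u_accounts.find(
      (account) => account.enabled === true
    );
    if (enabled) acc[key] = allCategories[key];
    return acc;
  }, {});
}

function getCategoryOptions(categories, filters) {
  // "All Categories" first, then categories matching the active type filter.
  return [
    { value: '', label: 'All Categories' },
    ...Object.values(categories)
      .filter((cat) => {
        if (filters.type === 'movies') return cat.category_type === 'movie';
        if (filters.type === 'series') return cat.category_type === 'series';
        return true; // 'all' shows everything
      })
      .map((cat) => ({
        value: `${cat.name}|${cat.category_type}`,
        label: `${cat.name} (${cat.category_type})`,
      })),
  ];
}
```

Extracting these into a utils module is what makes the new page-level tests further down possible: pure functions over plain objects can be unit-tested without rendering the component.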
@@ -396,7 +159,7 @@ const VODsPage = () => {
icon={<Search size={16} />}
value={filters.search}
onChange={(e) => setFilters({ search: e.target.value })}
style={{ minWidth: 200 }}
miw={200}
/>
<Select
@@ -405,7 +168,7 @@ const VODsPage = () => {
value={filters.category}
onChange={onCategoryChange}
clearable
style={{ minWidth: 150 }}
miw={150}
/>
<Select
@@ -416,7 +179,7 @@ const VODsPage = () => {
value: v,
label: v,
}))}
style={{ width: 110 }}
w={110}
/>
</Group>
@@ -428,23 +191,25 @@ const VODsPage = () => {
) : (
<>
<Grid gutter="md">
{getDisplayData().map((item) => (
<Grid.Col
span={12 / columns}
key={`${item.contentType}_${item.id}`}
style={{
minWidth: MIN_CARD_WIDTH,
maxWidth: MAX_CARD_WIDTH,
margin: '0 auto',
}}
>
{item.contentType === 'series' ? (
<SeriesCard series={item} onClick={handleSeriesClick} />
) : (
<VODCard vod={item} onClick={handleVODCardClick} />
)}
</Grid.Col>
))}
<ErrorBoundary>
<Suspense fallback={<Loader />}>
{getDisplayData().map((item) => (
<GridCol
span={12 / columns}
key={`${item.contentType}_${item.id}`}
miw={MIN_CARD_WIDTH}
maw={MAX_CARD_WIDTH}
m={'0 auto'}
>
{item.contentType === 'series' ? (
<SeriesCard series={item} onClick={handleSeriesClick} />
) : (
<VODCard vod={item} onClick={handleVODCardClick} />
)}
</GridCol>
))}
</Suspense>
</ErrorBoundary>
</Grid>
{/* Pagination */}
@@ -462,18 +227,26 @@ const VODsPage = () => {
</Stack>
{/* Series Episodes Modal */}
<SeriesModal
series={selectedSeries}
opened={seriesModalOpened}
onClose={closeSeriesModal}
/>
<ErrorBoundary>
<Suspense fallback={<LoadingOverlay />}>
<SeriesModal
series={selectedSeries}
opened={seriesModalOpened}
onClose={closeSeriesModal}
/>
</Suspense>
</ErrorBoundary>
{/* VOD Details Modal */}
<VODModal
vod={selectedVOD}
opened={vodModalOpened}
onClose={closeVODModal}
/>
<ErrorBoundary>
<Suspense fallback={<LoadingOverlay />}>
<VODModal
vod={selectedVOD}
opened={vodModalOpened}
onClose={closeVODModal}
/>
</Suspense>
</ErrorBoundary>
</Box>
);
};


@@ -0,0 +1,48 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen, waitFor } from '@testing-library/react';
import useAuthStore from '../../store/auth';
import useLocalStorage from '../../hooks/useLocalStorage';
import ChannelsPage from '../Channels';
vi.mock('../../store/auth');
vi.mock('../../hooks/useLocalStorage');
vi.mock('../../components/tables/ChannelsTable', () => ({
default: () => <div data-testid="channels-table">ChannelsTable</div>
}));
vi.mock('../../components/tables/StreamsTable', () => ({
default: () => <div data-testid="streams-table">StreamsTable</div>
}));
vi.mock('@mantine/core', () => ({
Box: ({ children, ...props }) => <div {...props}>{children}</div>,
}));
vi.mock('allotment', () => ({
Allotment: ({ children }) => <div data-testid="allotment">{children}</div>,
}));
describe('ChannelsPage', () => {
beforeEach(() => {
useLocalStorage.mockReturnValue([[50, 50], vi.fn()]);
});
it('renders nothing when user is not authenticated', () => {
useAuthStore.mockReturnValue({ id: null, user_level: 0 });
const { container } = render(<ChannelsPage />);
expect(container.firstChild).toBeNull();
});
it('renders only ChannelsTable for standard users', () => {
useAuthStore.mockReturnValue({ id: 1, user_level: 1 });
render(<ChannelsPage />);
expect(screen.getByTestId('channels-table')).toBeInTheDocument();
expect(screen.queryByTestId('streams-table')).not.toBeInTheDocument();
});
it('renders split view for higher-level users', async () => {
useAuthStore.mockReturnValue({ id: 1, user_level: 2 });
render(<ChannelsPage />);
expect(screen.getByTestId('channels-table')).toBeInTheDocument();
await waitFor(() =>
expect(screen.getByTestId('streams-table')).toBeInTheDocument()
);
});
});


@@ -0,0 +1,33 @@
import { describe, it, expect, vi } from 'vitest';
import { render, screen } from '@testing-library/react';
import ContentSourcesPage from '../ContentSources';
import useUserAgentsStore from '../../store/userAgents';
vi.mock('../../store/userAgents');
vi.mock('../../components/tables/M3UsTable', () => ({
default: () => <div data-testid="m3us-table">M3UsTable</div>
}));
vi.mock('../../components/tables/EPGsTable', () => ({
default: () => <div data-testid="epgs-table">EPGsTable</div>
}));
vi.mock('@mantine/core', () => ({
Box: ({ children, ...props }) => <div {...props}>{children}</div>,
Stack: ({ children, ...props }) => <div {...props}>{children}</div>,
}));
describe('ContentSourcesPage', () => {
it('renders error on userAgents error', () => {
const errorMessage = 'Failed to load userAgents.';
useUserAgentsStore.mockReturnValue(errorMessage);
render(<ContentSourcesPage />);
const element = screen.getByText(/Something went wrong/i);
expect(element).toBeInTheDocument();
});
it('no error renders tables', () => {
useUserAgentsStore.mockReturnValue(null);
render(<ContentSourcesPage />);
expect(screen.getByTestId('m3us-table')).toBeInTheDocument();
expect(screen.getByTestId('epgs-table')).toBeInTheDocument();
});
});


@@ -0,0 +1,556 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { render, screen, fireEvent } from '@testing-library/react';
import DVRPage from '../DVR';
import dayjs from 'dayjs';
import useChannelsStore from '../../store/channels';
import useSettingsStore from '../../store/settings';
import useVideoStore from '../../store/useVideoStore';
import useLocalStorage from '../../hooks/useLocalStorage';
import {
isAfter,
isBefore,
useTimeHelpers,
} from '../../utils/dateTimeUtils.js';
import { categorizeRecordings } from '../../utils/pages/DVRUtils.js';
import {
getPosterUrl,
getRecordingUrl,
getShowVideoUrl,
} from '../../utils/cards/RecordingCardUtils.js';
vi.mock('../../store/channels');
vi.mock('../../store/settings');
vi.mock('../../store/useVideoStore');
vi.mock('../../hooks/useLocalStorage');
// Mock Mantine components
vi.mock('@mantine/core', () => ({
Box: ({ children }) => <div data-testid="box">{children}</div>,
Container: ({ children }) => <div data-testid="container">{children}</div>,
Title: ({ children, order }) => <h1 data-order={order}>{children}</h1>,
Text: ({ children }) => <p>{children}</p>,
Button: ({ children, onClick, leftSection, loading, ...props }) => (
<button onClick={onClick} disabled={loading} {...props}>
{leftSection}
{children}
</button>
),
Badge: ({ children }) => <span>{children}</span>,
SimpleGrid: ({ children }) => <div data-testid="simple-grid">{children}</div>,
Group: ({ children }) => <div data-testid="group">{children}</div>,
Stack: ({ children }) => <div data-testid="stack">{children}</div>,
Divider: () => <hr data-testid="divider" />,
useMantineTheme: () => ({
tailwind: {
green: { 5: '#22c55e' },
red: { 6: '#dc2626' },
yellow: { 6: '#ca8a04' },
gray: { 6: '#52525b' },
},
}),
}));
// Mock components
vi.mock('../../components/cards/RecordingCard', () => ({
default: ({ recording, onOpenDetails, onOpenRecurring }) => (
<div data-testid={`recording-card-${recording.id}`}>
<span>{recording.custom_properties?.Title || 'Recording'}</span>
<button onClick={() => onOpenDetails(recording)}>Open Details</button>
{recording.custom_properties?.rule && (
<button onClick={() => onOpenRecurring(recording)}>
Open Recurring
</button>
)}
</div>
),
}));
vi.mock('../../components/forms/RecordingDetailsModal', () => ({
default: ({
opened,
onClose,
recording,
onEdit,
onWatchLive,
onWatchRecording,
}) =>
opened ? (
<div data-testid="details-modal">
<div data-testid="modal-title">
{recording?.custom_properties?.Title}
</div>
<button onClick={onClose}>Close Modal</button>
<button onClick={onEdit}>Edit</button>
<button onClick={onWatchLive}>Watch Live</button>
<button onClick={onWatchRecording}>Watch Recording</button>
</div>
) : null,
}));
vi.mock('../../components/forms/RecurringRuleModal', () => ({
default: ({ opened, onClose, ruleId }) =>
opened ? (
<div data-testid="recurring-modal">
<div>Rule ID: {ruleId}</div>
<button onClick={onClose}>Close Recurring</button>
</div>
) : null,
}));
vi.mock('../../components/forms/Recording', () => ({
default: ({ isOpen, onClose, recording }) =>
isOpen ? (
<div data-testid="recording-form">
<div>Recording ID: {recording?.id || 'new'}</div>
<button onClick={onClose}>Close Form</button>
</div>
) : null,
}));
vi.mock('../../components/ErrorBoundary', () => ({
default: ({ children }) => <div data-testid="error-boundary">{children}</div>,
}));
vi.mock('../../utils/dateTimeUtils.js', async (importActual) => {
const actual = await importActual();
return {
...actual,
isBefore: vi.fn(),
isAfter: vi.fn(),
useTimeHelpers: vi.fn(),
};
});
vi.mock('../../utils/cards/RecordingCardUtils.js', () => ({
getPosterUrl: vi.fn(),
getRecordingUrl: vi.fn(),
getShowVideoUrl: vi.fn(),
}));
vi.mock('../../utils/pages/DVRUtils.js', async (importActual) => {
const actual = await importActual();
return {
...actual,
categorizeRecordings: vi.fn(),
};
});
describe('DVRPage', () => {
const mockShowVideo = vi.fn();
const mockFetchRecordings = vi.fn();
const mockFetchChannels = vi.fn();
const mockFetchRecurringRules = vi.fn();
const mockRemoveRecording = vi.fn();
const defaultChannelsState = {
recordings: [],
channels: {},
recurringRules: [],
fetchRecordings: mockFetchRecordings,
fetchChannels: mockFetchChannels,
fetchRecurringRules: mockFetchRecurringRules,
removeRecording: mockRemoveRecording,
};
const defaultSettingsState = {
settings: {
system_settings: { value: { time_zone: 'America/New_York' } },
},
environment: {
env_mode: 'production',
},
};
const defaultVideoState = {
showVideo: mockShowVideo,
};
beforeEach(() => {
vi.clearAllMocks();
vi.useFakeTimers();
const now = new Date('2024-01-15T12:00:00Z');
vi.setSystemTime(now);
isAfter.mockImplementation((a, b) => new Date(a) > new Date(b));
isBefore.mockImplementation((a, b) => new Date(a) < new Date(b));
useTimeHelpers.mockReturnValue({
toUserTime: (dt) => dayjs(dt).tz('America/New_York').toDate(),
userNow: () => dayjs().tz('America/New_York').toDate(),
});
categorizeRecordings.mockImplementation((recordings, toUserTime, now) => {
const inProgress = [];
const upcoming = [];
const completed = [];
recordings.forEach((rec) => {
const start = toUserTime(rec.start_time);
const end = toUserTime(rec.end_time);
if (now >= start && now <= end) inProgress.push(rec);
else if (now < start) upcoming.push(rec);
else completed.push(rec);
});
return { inProgress, upcoming, completed };
});
getPosterUrl.mockImplementation((recording) =>
recording?.id ? `http://poster.url/${recording.id}` : null
);
getRecordingUrl.mockImplementation(
(custom_properties) => custom_properties?.recording_url
);
getShowVideoUrl.mockImplementation((channel) => channel?.stream_url);
useChannelsStore.mockImplementation((selector) => {
return selector ? selector(defaultChannelsState) : defaultChannelsState;
});
useChannelsStore.getState = () => defaultChannelsState;
useSettingsStore.mockImplementation((selector) => {
return selector ? selector(defaultSettingsState) : defaultSettingsState;
});
useSettingsStore.getState = () => defaultSettingsState;
useVideoStore.mockImplementation((selector) => {
return selector ? selector(defaultVideoState) : defaultVideoState;
});
useVideoStore.getState = () => defaultVideoState;
useLocalStorage.mockReturnValue(['America/New_York', vi.fn()]);
});
afterEach(() => {
vi.clearAllMocks();
vi.clearAllTimers(); // Clear pending timers
vi.useRealTimers();
});
describe('Initial Render', () => {
it('renders new recording buttons', () => {
render(<DVRPage />);
expect(screen.getByText('New Recording')).toBeInTheDocument();
});
it('renders empty state when no recordings', () => {
render(<DVRPage />);
expect(screen.getByText('No upcoming recordings.')).toBeInTheDocument();
});
});
describe('Recording Display', () => {
it('displays recordings grouped by date', () => {
const now = dayjs('2024-01-15T12:00:00Z');
const recordings = [
{
id: 1,
channel: 1,
start_time: now.toISOString(),
end_time: now.add(1, 'hour').toISOString(),
custom_properties: { Title: 'Show 1' },
},
{
id: 2,
channel: 1,
start_time: now.add(1, 'day').toISOString(),
end_time: now.add(1, 'day').add(1, 'hour').toISOString(),
custom_properties: { Title: 'Show 2' },
},
];
useChannelsStore.mockImplementation((selector) => {
const state = { ...defaultChannelsState, recordings };
return selector ? selector(state) : state;
});
render(<DVRPage />);
expect(screen.getByTestId('recording-card-1')).toBeInTheDocument();
expect(screen.getByTestId('recording-card-2')).toBeInTheDocument();
});
});
describe('New Recording', () => {
it('opens recording form when new recording button is clicked', async () => {
render(<DVRPage />);
const newButton = screen.getByText('New Recording');
fireEvent.click(newButton);
expect(screen.getByTestId('recording-form')).toBeInTheDocument();
});
it('closes recording form when close is clicked', async () => {
render(<DVRPage />);
const newButton = screen.getByText('New Recording');
fireEvent.click(newButton);
expect(screen.getByTestId('recording-form')).toBeInTheDocument();
const closeButton = screen.getByText('Close Form');
fireEvent.click(closeButton);
expect(screen.queryByTestId('recording-form')).not.toBeInTheDocument();
});
});
describe('Recording Details Modal', () => {
const setupRecording = () => {
const now = dayjs('2024-01-15T12:00:00Z');
const recording = {
id: 1,
channel: 1,
start_time: now.toISOString(),
end_time: now.add(1, 'hour').toISOString(),
custom_properties: { Title: 'Test Show' },
};
useChannelsStore.mockImplementation((selector) => {
const state = {
...defaultChannelsState,
recordings: [recording],
channels: {
1: { id: 1, name: 'Channel 1', stream_url: 'http://stream.url' },
},
};
return selector ? selector(state) : state;
});
return recording;
};
it('opens details modal when recording card is clicked', async () => {
vi.useRealTimers();
setupRecording();
render(<DVRPage />);
const detailsButton = screen.getByText('Open Details');
fireEvent.click(detailsButton);
await screen.findByTestId('details-modal');
expect(screen.getByTestId('modal-title')).toHaveTextContent('Test Show');
});
it('closes details modal when close is clicked', async () => {
vi.useRealTimers();
setupRecording();
render(<DVRPage />);
const detailsButton = screen.getByText('Open Details');
fireEvent.click(detailsButton);
await screen.findByTestId('details-modal');
const closeButton = screen.getByText('Close Modal');
fireEvent.click(closeButton);
expect(screen.queryByTestId('details-modal')).not.toBeInTheDocument();
});
it('opens edit form from details modal', async () => {
vi.useRealTimers();
setupRecording();
render(<DVRPage />);
const detailsButton = screen.getByText('Open Details');
fireEvent.click(detailsButton);
await screen.findByTestId('details-modal');
const editButton = screen.getByText('Edit');
fireEvent.click(editButton);
expect(screen.queryByTestId('details-modal')).not.toBeInTheDocument();
expect(screen.getByTestId('recording-form')).toBeInTheDocument();
});
});
describe('Recurring Rule Modal', () => {
it('opens recurring rule modal when recording has rule', async () => {
const now = dayjs('2024-01-15T12:00:00Z');
const recording = {
id: 1,
channel: 1,
start_time: now.toISOString(),
end_time: now.add(1, 'hour').toISOString(),
custom_properties: {
Title: 'Recurring Show',
rule: { id: 100 },
},
};
useChannelsStore.mockImplementation((selector) => {
const state = {
...defaultChannelsState,
recordings: [recording],
channels: { 1: { id: 1, name: 'Channel 1' } },
};
return selector ? selector(state) : state;
});
render(<DVRPage />);
const recurringButton = screen.getByText('Open Recurring');
fireEvent.click(recurringButton);
expect(screen.getByTestId('recurring-modal')).toBeInTheDocument();
expect(screen.getByText('Rule ID: 100')).toBeInTheDocument();
});
it('closes recurring modal when close is clicked', async () => {
const now = dayjs('2024-01-15T12:00:00Z');
const recording = {
id: 1,
channel: 1,
start_time: now.toISOString(),
end_time: now.add(1, 'hour').toISOString(),
custom_properties: {
Title: 'Recurring Show',
rule: { id: 100 },
},
};
useChannelsStore.mockImplementation((selector) => {
const state = {
...defaultChannelsState,
recordings: [recording],
channels: { 1: { id: 1, name: 'Channel 1' } },
};
return selector ? selector(state) : state;
});
render(<DVRPage />);
const recurringButton = screen.getByText('Open Recurring');
fireEvent.click(recurringButton);
expect(screen.getByTestId('recurring-modal')).toBeInTheDocument();
const closeButton = screen.getByText('Close Recurring');
fireEvent.click(closeButton);
expect(screen.queryByTestId('recurring-modal')).not.toBeInTheDocument();
});
});
describe('Watch Functionality', () => {
it('calls showVideo for watch live on in-progress recording', async () => {
vi.useRealTimers();
const now = dayjs();
const recording = {
id: 1,
channel: 1,
start_time: now.subtract(30, 'minutes').toISOString(),
end_time: now.add(30, 'minutes').toISOString(),
custom_properties: { Title: 'Live Show' },
};
useChannelsStore.mockImplementation((selector) => {
const state = {
...defaultChannelsState,
recordings: [recording],
channels: {
1: { id: 1, name: 'Channel 1', stream_url: 'http://stream.url' },
},
};
return selector ? selector(state) : state;
});
render(<DVRPage />);
const detailsButton = screen.getByText('Open Details');
fireEvent.click(detailsButton);
await screen.findByTestId('details-modal');
const watchLiveButton = screen.getByText('Watch Live');
fireEvent.click(watchLiveButton);
expect(mockShowVideo).toHaveBeenCalledWith(
expect.stringContaining('stream.url'),
'live'
);
});
it('calls showVideo for watch recording on completed recording', async () => {
vi.useRealTimers();
const now = dayjs('2024-01-15T12:00:00Z');
const recording = {
id: 1,
channel: 1,
start_time: now.subtract(2, 'hours').toISOString(),
end_time: now.subtract(1, 'hour').toISOString(),
custom_properties: {
Title: 'Recorded Show',
recording_url: 'http://recording.url/video.mp4',
},
};
useChannelsStore.mockImplementation((selector) => {
const state = {
...defaultChannelsState,
recordings: [recording],
channels: { 1: { id: 1, name: 'Channel 1' } },
};
return selector ? selector(state) : state;
});
render(<DVRPage />);
const detailsButton = screen.getByText('Open Details');
fireEvent.click(detailsButton);
await screen.findByTestId('details-modal');
const watchButton = screen.getByText('Watch Recording');
fireEvent.click(watchButton);
expect(mockShowVideo).toHaveBeenCalledWith(
expect.stringContaining('http://recording.url/video.mp4'),
'vod',
expect.objectContaining({
name: 'Recording',
})
);
});
it('does not call showVideo when recording URL is missing', async () => {
vi.useRealTimers();
const now = dayjs('2024-01-15T12:00:00Z');
const recording = {
id: 1,
channel: 1,
start_time: now.subtract(2, 'hours').toISOString(),
end_time: now.subtract(1, 'hour').toISOString(),
custom_properties: { Title: 'No URL Show' },
};
useChannelsStore.mockImplementation((selector) => {
const state = {
...defaultChannelsState,
recordings: [recording],
channels: { 1: { id: 1, name: 'Channel 1' } },
};
return selector ? selector(state) : state;
});
render(<DVRPage />);
const detailsButton = await screen.findByText('Open Details');
fireEvent.click(detailsButton);
const modal = await screen.findByTestId('details-modal');
expect(modal).toBeInTheDocument();
const watchButton = screen.getByText('Watch Recording');
fireEvent.click(watchButton);
expect(mockShowVideo).not.toHaveBeenCalled();
});
});
});
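A recurring device in the test file above is mocking zustand-style store hooks by replaying the component's selector against a canned state object (`useChannelsStore.mockImplementation((selector) => selector ? selector(state) : state)`), plus stubbing the static `getState` accessor. The same idea, reduced to a dependency-free sketch (names here are illustrative):

```javascript
// Dependency-free sketch of the selector-replay mock used in these tests:
// a fake store hook that, like a zustand hook, accepts an optional selector
// and applies it to a fixed state object.
function makeMockStore(state) {
  const useStore = (selector) => (selector ? selector(state) : state);
  useStore.getState = () => state; // mirror zustand's static accessor
  return useStore;
}

// Components calling useStore((s) => s.recordings) now receive canned data
// without any real store, subscriptions, or React context.
const useChannelsStoreMock = makeMockStore({ recordings: [{ id: 1 }] });
```

This is why the tests can swap in per-test state by calling `mockImplementation` again with a different state object: the hook is just a pure function over that object.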


@@ -0,0 +1,619 @@
import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest';
import {
render,
screen,
waitFor,
fireEvent,
} from '@testing-library/react';
import dayjs from 'dayjs';
import Guide from '../Guide';
import useChannelsStore from '../../store/channels';
import useLogosStore from '../../store/logos';
import useEPGsStore from '../../store/epgs';
import useSettingsStore from '../../store/settings';
import useVideoStore from '../../store/useVideoStore';
import useLocalStorage from '../../hooks/useLocalStorage';
import { showNotification } from '../../utils/notificationUtils.js';
import * as guideUtils from '../guideUtils';
import * as recordingCardUtils from '../../utils/cards/RecordingCardUtils.js';
import * as dateTimeUtils from '../../utils/dateTimeUtils.js';
import userEvent from '@testing-library/user-event';
// Mock dependencies
vi.mock('../../store/channels');
vi.mock('../../store/logos');
vi.mock('../../store/epgs');
vi.mock('../../store/settings');
vi.mock('../../store/useVideoStore');
vi.mock('../../hooks/useLocalStorage');
vi.mock('@mantine/hooks', () => ({
useElementSize: () => ({
ref: vi.fn(),
width: 1200,
height: 800,
}),
}));
vi.mock('@mantine/core', async () => {
const actual = await vi.importActual('@mantine/core');
return {
...actual,
Box: ({ children, style, onClick, className, ref }) => (
<div style={style} onClick={onClick} className={className} ref={ref}>
{children}
</div>
),
Flex: ({ children, direction, justify, align, gap, mb, style }) => (
<div
style={style}
data-direction={direction}
data-justify={justify}
data-align={align}
data-gap={gap}
data-mb={mb}
>
{children}
</div>
),
Group: ({ children, gap, justify }) => (
<div data-gap={gap} data-justify={justify}>
{children}
</div>
),
Title: ({ children, order, size }) => (
<h2 data-order={order} data-size={size}>
{children}
</h2>
),
Text: ({ children, size, c, fw, lineClamp, style, onClick }) => (
<span
data-size={size}
data-color={c}
data-fw={fw}
data-line-clamp={lineClamp}
style={style}
onClick={onClick}
>
{children}
</span>
),
Paper: ({ children, style, onClick }) => (
<div style={style} onClick={onClick}>
{children}
</div>
),
Button: ({ children, onClick, leftSection, variant, size, color, disabled }) => (
<button onClick={onClick} disabled={disabled} data-variant={variant} data-size={size} data-color={color}>
{leftSection}
{children}
</button>
),
TextInput: ({ value, onChange, placeholder, icon, rightSection }) => (
<div>
{icon}
<input type="text" value={value} onChange={onChange} placeholder={placeholder} />
{rightSection}
</div>
),
Select: ({ value, onChange, data, placeholder, clearable }) => (
<select
value={value}
onChange={(e) => onChange?.(e.target.value)}
aria-label={placeholder}
data-clearable={clearable}
>
<option value="">Select...</option>
{data?.map((option) => (
<option key={option.value} value={option.value}>
{option.label}
</option>
))}
</select>
),
ActionIcon: ({ children, onClick, variant, size, color }) => (
<button onClick={onClick} data-variant={variant} data-size={size} data-color={color}>
{children}
</button>
),
Tooltip: ({ children, label }) => <div title={label}>{children}</div>,
LoadingOverlay: ({ visible }) => (visible ? <div>Loading...</div> : null),
};
});
vi.mock('react-window', () => ({
VariableSizeList: ({ children, itemData, itemCount }) => (
<div data-testid="variable-size-list">
{Array.from({ length: Math.min(itemCount, 5) }, (_, i) =>
<div key={i}>
{children({
index: i,
style: {},
data: itemData.filteredChannels[i]
})}
</div>
)}
</div>
),
}));
vi.mock('../../components/GuideRow', () => ({
default: ({ data }) => <div data-testid="guide-row">GuideRow for {data?.name}</div>,
}));
vi.mock('../../components/HourTimeline', () => ({
default: ({ hourTimeline }) => (
<div data-testid="hour-timeline">
{hourTimeline.map((hour, i) => (
<div key={i}>{hour.label}</div>
))}
</div>
),
}));
vi.mock('../../components/forms/ProgramRecordingModal', () => ({
__esModule: true,
default: ({ opened, onClose, program, onRecordOne }) =>
opened ? (
<div data-testid="program-recording-modal">
<div>{program?.title}</div>
<button onClick={onClose}>Close</button>
<button onClick={onRecordOne}>Record One</button>
</div>
) : null,
}));
vi.mock('../../components/forms/SeriesRecordingModal', () => ({
__esModule: true,
default: ({ opened, onClose, rules }) =>
opened ? (
<div data-testid="series-recording-modal">
<div>Series Rules: {rules.length}</div>
<button onClick={onClose}>Close</button>
</div>
) : null,
}));
vi.mock('../guideUtils', async () => {
const actual = await vi.importActual('../guideUtils');
return {
...actual,
fetchPrograms: vi.fn(),
createRecording: vi.fn(),
createSeriesRule: vi.fn(),
evaluateSeriesRule: vi.fn(),
fetchRules: vi.fn(),
filterGuideChannels: vi.fn(),
getGroupOptions: vi.fn(),
getProfileOptions: vi.fn(),
};
});
vi.mock('../../utils/cards/RecordingCardUtils.js', async () => {
const actual = await vi.importActual('../../utils/cards/RecordingCardUtils.js');
return {
...actual,
getShowVideoUrl: vi.fn(),
};
});
vi.mock('../../utils/dateTimeUtils.js', async () => {
const actual = await vi.importActual('../../utils/dateTimeUtils.js');
return {
...actual,
getNow: vi.fn(),
add: vi.fn(),
format: vi.fn(),
initializeTime: vi.fn(),
startOfDay: vi.fn(),
convertToMs: vi.fn(),
useDateTimeFormat: vi.fn(),
};
});
vi.mock('../../utils/notificationUtils.js', () => ({
showNotification: vi.fn(),
}));
describe('Guide', () => {
let mockChannelsState;
let mockShowVideo;
let mockFetchRecordings;
const now = dayjs('2024-01-15T12:00:00Z');
beforeEach(() => {
vi.useFakeTimers();
vi.setSystemTime(new Date('2024-01-15T12:00:00Z'));
mockChannelsState = {
channels: {
'channel-1': {
id: 'channel-1',
uuid: 'uuid-1',
name: 'Test Channel 1',
channel_number: 1,
logo_id: 'logo-1',
stream_url: 'http://stream1.test',
},
'channel-2': {
id: 'channel-2',
uuid: 'uuid-2',
name: 'Test Channel 2',
channel_number: 2,
logo_id: 'logo-2',
stream_url: 'http://stream2.test',
},
},
recordings: [],
channelGroups: {
'group-1': { id: 'group-1', name: 'News', channels: ['channel-1'] },
},
profiles: {
'profile-1': { id: 'profile-1', name: 'HD Profile' },
},
};
mockShowVideo = vi.fn();
mockFetchRecordings = vi.fn().mockResolvedValue([]);
useChannelsStore.mockImplementation((selector) => {
const state = {
...mockChannelsState,
fetchRecordings: mockFetchRecordings,
};
return selector ? selector(state) : state;
});
useLogosStore.mockReturnValue({
'logo-1': { url: 'http://logo1.png' },
'logo-2': { url: 'http://logo2.png' },
});
useEPGsStore.mockImplementation((selector) =>
selector ? selector({ tvgsById: {}, epgs: {} }) : { tvgsById: {}, epgs: {} }
);
useSettingsStore.mockReturnValue('production');
useVideoStore.mockReturnValue(mockShowVideo);
useLocalStorage.mockReturnValue(['12h', vi.fn()]);
dateTimeUtils.getNow.mockReturnValue(now);
dateTimeUtils.format.mockImplementation((date, format) => {
if (format?.includes('dddd')) return 'Monday, 01/15/2024 • 12:00 PM';
return '12:00 PM';
});
dateTimeUtils.initializeTime.mockImplementation(date => date || now);
dateTimeUtils.startOfDay.mockReturnValue(now.startOf('day'));
dateTimeUtils.add.mockImplementation((date, amount, unit) =>
dayjs(date).add(amount, unit)
);
dateTimeUtils.convertToMs.mockImplementation(date => dayjs(date).valueOf());
dateTimeUtils.useDateTimeFormat.mockReturnValue(['12h', 'MM/DD/YYYY']);
guideUtils.fetchPrograms.mockResolvedValue([
{
id: 'prog-1',
tvg_id: 'tvg-1',
title: 'Test Program 1',
description: 'Description 1',
start_time: now.toISOString(),
end_time: now.add(1, 'hour').toISOString(),
programStart: now,
programEnd: now.add(1, 'hour'),
startMs: now.valueOf(),
endMs: now.add(1, 'hour').valueOf(),
isLive: true,
isPast: false,
},
]);
guideUtils.fetchRules.mockResolvedValue([]);
guideUtils.filterGuideChannels.mockImplementation(
(channels) => Object.values(channels)
);
guideUtils.createRecording.mockResolvedValue(undefined);
guideUtils.createSeriesRule.mockResolvedValue(undefined);
guideUtils.evaluateSeriesRule.mockResolvedValue(undefined);
guideUtils.getGroupOptions.mockReturnValue([
{ value: 'all', label: 'All Groups' },
{ value: 'group-1', label: 'News' },
]);
guideUtils.getProfileOptions.mockReturnValue([
{ value: 'all', label: 'All Profiles' },
{ value: 'profile-1', label: 'HD Profile' },
]);
recordingCardUtils.getShowVideoUrl.mockReturnValue('http://video.test');
});
afterEach(() => {
vi.clearAllTimers();
vi.useRealTimers();
});
describe('Rendering', () => {
it('renders the TV Guide title', async () => {
render(<Guide />);
expect(screen.getByText('TV Guide')).toBeInTheDocument();
});
it('displays current time in header', async () => {
render(<Guide />);
expect(screen.getByText(/Monday, 01\/15\/2024/)).toBeInTheDocument();
});
it('renders channel rows when channels are available', async () => {
render(<Guide />);
expect(screen.getAllByTestId('guide-row')).toHaveLength(2);
});
it('shows no channels message when filters exclude all channels', async () => {
guideUtils.filterGuideChannels.mockReturnValue([]);
render(<Guide />);
expect(screen.getByText('No channels match your filters')).toBeInTheDocument();
});
it('displays channel count', async () => {
render(<Guide />);
expect(screen.getByText(/2 channels/)).toBeInTheDocument();
});
});
describe('Search Functionality', () => {
it('updates search query when user types', async () => {
vi.useRealTimers();
render(<Guide />);
const searchInput = screen.getByPlaceholderText('Search channels...');
fireEvent.change(searchInput, { target: { value: 'Test' } });
expect(searchInput).toHaveValue('Test');
});
it('clears search query when clear button is clicked', async () => {
vi.useRealTimers();
const user = userEvent.setup({ delay: null });
render(<Guide />);
const searchInput = screen.getByPlaceholderText('Search channels...');
await user.type(searchInput, 'Test');
expect(searchInput).toHaveValue('Test');
await user.click(screen.getByText('Clear Filters'));
expect(searchInput).toHaveValue('');
});
it('calls filterGuideChannels with search query', async () => {
vi.useRealTimers();
const user = userEvent.setup({ delay: null });
render(<Guide />);
const searchInput = await screen.findByPlaceholderText('Search channels...');
await user.type(searchInput, 'News');
await waitFor(() => {
expect(guideUtils.filterGuideChannels).toHaveBeenCalledWith(
expect.anything(),
'News',
'all',
'all',
expect.anything()
);
});
});
});
describe('Filter Functionality', () => {
it('filters by channel group', async () => {
vi.useRealTimers();
const user = userEvent.setup({ delay: null });
render(<Guide />);
const groupSelect = await screen.findByLabelText('Filter by group');
await user.selectOptions(groupSelect, 'group-1');
await waitFor(() => {
expect(guideUtils.filterGuideChannels).toHaveBeenCalledWith(
expect.anything(),
'',
'group-1',
'all',
expect.anything()
);
});
});
it('filters by profile', async () => {
vi.useRealTimers();
const user = userEvent.setup({ delay: null });
render(<Guide />);
const profileSelect = await screen.findByLabelText('Filter by profile');
await user.selectOptions(profileSelect, 'profile-1');
await waitFor(() => {
expect(guideUtils.filterGuideChannels).toHaveBeenCalledWith(
expect.anything(),
'',
'all',
'profile-1',
expect.anything()
);
});
});
it('clears all filters when Clear Filters is clicked', async () => {
vi.useRealTimers();
const user = userEvent.setup({ delay: null });
render(<Guide />);
// Set some filters
const searchInput = await screen.findByPlaceholderText('Search channels...');
await user.type(searchInput, 'Test');
// Clear them
const clearButton = await screen.findByText('Clear Filters');
await user.click(clearButton);
expect(searchInput).toHaveValue('');
});
});
describe('Recording Functionality', () => {
it('opens Series Rules modal when button is clicked', async () => {
vi.useRealTimers();
const user = userEvent.setup();
render(<Guide />);
const rulesButton = await screen.findByText('Series Rules');
await user.click(rulesButton);
await waitFor(() => {
expect(screen.getByTestId('series-recording-modal')).toBeInTheDocument();
});
vi.useFakeTimers();
vi.setSystemTime(new Date('2024-01-15T12:00:00Z'));
});
it('fetches rules when opening Series Rules modal', async () => {
vi.useRealTimers();
const mockRules = [{ id: 1, title: 'Test Rule' }];
guideUtils.fetchRules.mockResolvedValue(mockRules);
const user = userEvent.setup();
render(<Guide />);
const rulesButton = await screen.findByText('Series Rules');
await user.click(rulesButton);
await waitFor(() => {
expect(guideUtils.fetchRules).toHaveBeenCalled();
});
vi.useFakeTimers();
vi.setSystemTime(new Date('2024-01-15T12:00:00Z'));
});
});
describe('Navigation', () => {
it('scrolls to current time when Jump to current time is clicked', async () => {
vi.useRealTimers();
const user = userEvent.setup({ delay: null });
render(<Guide />);
const jumpButton = await screen.findByTitle('Jump to current time');
await user.click(jumpButton);
// Verify button was clicked (scroll behavior is tested in integration tests)
expect(jumpButton).toBeInTheDocument();
});
});
describe('Time Updates', () => {
it('updates current time every second', async () => {
render(<Guide />);
expect(screen.getByText(/Monday, 01\/15\/2024/)).toBeInTheDocument();
// Advance time by 1 second
vi.advanceTimersByTime(1000);
expect(dateTimeUtils.getNow).toHaveBeenCalled();
});
});
describe('Error Handling', () => {
it('shows notification when no channels are available', async () => {
useChannelsStore.mockImplementation((selector) => {
const state = { channels: {}, recordings: [], channelGroups: {}, profiles: {} };
return selector ? selector(state) : state;
});
render(<Guide />);
expect(showNotification).toHaveBeenCalledWith({
title: 'No channels available',
color: 'red.5',
});
});
});
describe('Watch Functionality', () => {
it('calls showVideo when watch button is clicked on live program', async () => {
vi.useRealTimers();
// Mock a live program
const liveProgram = {
id: 'prog-live',
tvg_id: 'tvg-1',
title: 'Live Show',
description: 'Live Description',
start_time: now.subtract(30, 'minutes').toISOString(),
end_time: now.add(30, 'minutes').toISOString(),
programStart: now.subtract(30, 'minutes'),
programEnd: now.add(30, 'minutes'),
startMs: now.subtract(30, 'minutes').valueOf(),
endMs: now.add(30, 'minutes').valueOf(),
isLive: true,
isPast: false,
};
guideUtils.fetchPrograms.mockResolvedValue([liveProgram]);
render(<Guide />);
await waitFor(() => {
expect(screen.getByText('TV Guide')).toBeInTheDocument();
});
// Implementation depends on how programs are rendered - this is a placeholder
// You would need to find and click the actual watch button in the rendered program
vi.useFakeTimers();
vi.setSystemTime(new Date('2024-01-15T12:00:00Z'));
});
it('does not show watch button for past programs', async () => {
vi.useRealTimers();
const pastProgram = {
id: 'prog-past',
tvg_id: 'tvg-1',
title: 'Past Show',
description: 'Past Description',
start_time: now.subtract(2, 'hours').toISOString(),
end_time: now.subtract(1, 'hour').toISOString(),
programStart: now.subtract(2, 'hours'),
programEnd: now.subtract(1, 'hour'),
startMs: now.subtract(2, 'hours').valueOf(),
endMs: now.subtract(1, 'hour').valueOf(),
isLive: false,
isPast: true,
};
guideUtils.fetchPrograms.mockResolvedValue([pastProgram]);
render(<Guide />);
await waitFor(() => {
expect(screen.getByText('TV Guide')).toBeInTheDocument();
});
vi.useFakeTimers();
vi.setSystemTime(new Date('2024-01-15T12:00:00Z'));
});
});
});


@@ -0,0 +1,37 @@
import { describe, it, expect, vi } from 'vitest';
import { render, screen, waitFor } from '@testing-library/react';
import Login from '../Login';
import useAuthStore from '../../store/auth';
vi.mock('../../store/auth');
vi.mock('../../components/forms/LoginForm', () => ({
default: () => <div data-testid="login-form">LoginForm</div>
}));
vi.mock('../../components/forms/SuperuserForm', () => ({
default: () => <div data-testid="superuser-form">SuperuserForm</div>
}));
vi.mock('@mantine/core', () => ({
Text: ({ children }) => <div>{children}</div>,
}));
describe('Login', () => {
it('renders SuperuserForm when superuser does not exist', async () => {
useAuthStore.mockReturnValue(false);
render(<Login/>);
await waitFor(() => {
expect(screen.getByTestId('superuser-form')).toBeInTheDocument();
});
expect(screen.queryByTestId('login-form')).not.toBeInTheDocument();
});
it('renders LoginForm when superuser exists', () => {
useAuthStore.mockReturnValue(true);
render(<Login/>);
expect(screen.getByTestId('login-form')).toBeInTheDocument();
expect(screen.queryByTestId('superuser-form')).not.toBeInTheDocument();
});
});


@@ -0,0 +1,172 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import LogosPage from '../Logos';
import useLogosStore from '../../store/logos';
import useVODLogosStore from '../../store/vodLogos';
import { showNotification, updateNotification } from '../../utils/notificationUtils.js';
vi.mock('../../store/logos');
vi.mock('../../store/vodLogos');
vi.mock('../../utils/notificationUtils.js', () => ({
showNotification: vi.fn(),
updateNotification: vi.fn(),
}));
vi.mock('../../components/tables/LogosTable', () => ({
default: () => <div data-testid="logos-table">LogosTable</div>
}));
vi.mock('../../components/tables/VODLogosTable', () => ({
default: () => <div data-testid="vod-logos-table">VODLogosTable</div>
}));
vi.mock('@mantine/core', () => {
const tabsComponent = ({ children, value, onChange }) =>
<div data-testid="tabs" data-value={value} onClick={() => onChange('vod')}>{children}</div>;
tabsComponent.List = ({ children }) => <div>{children}</div>;
tabsComponent.Tab = ({ children, value }) => <button data-value={value}>{children}</button>;
return {
Box: ({ children, ...props }) => <div {...props}>{children}</div>,
Flex: ({ children, ...props }) => <div {...props}>{children}</div>,
Text: ({ children, ...props }) => <span {...props}>{children}</span>,
Tabs: tabsComponent,
TabsList: tabsComponent.List,
TabsTab: tabsComponent.Tab,
};
});
describe('LogosPage', () => {
const mockFetchAllLogos = vi.fn();
const mockNeedsAllLogos = vi.fn();
const defaultLogosState = {
fetchAllLogos: mockFetchAllLogos,
needsAllLogos: mockNeedsAllLogos,
logos: { 1: {}, 2: {}, 3: {} },
};
const defaultVODLogosState = {
totalCount: 5,
};
beforeEach(() => {
vi.clearAllMocks();
useLogosStore.mockImplementation((selector) => {
return selector ? selector(defaultLogosState) : defaultLogosState;
});
useLogosStore.getState = () => defaultLogosState;
useVODLogosStore.mockImplementation((selector) => {
return selector ? selector(defaultVODLogosState) : defaultVODLogosState;
});
mockNeedsAllLogos.mockReturnValue(true);
mockFetchAllLogos.mockResolvedValue();
});
it('renders with channel logos tab by default', () => {
render(<LogosPage />);
expect(screen.getByText('Logos')).toBeInTheDocument();
expect(screen.getByTestId('logos-table')).toBeInTheDocument();
expect(screen.queryByTestId('vod-logos-table')).not.toBeInTheDocument();
});
it('displays correct channel logos count', () => {
render(<LogosPage />);
expect(screen.getByText(/\(3 logos\)/i)).toBeInTheDocument();
});
it('displays singular "logo" when count is 1', () => {
useLogosStore.mockImplementation((selector) => {
const state = {
fetchAllLogos: mockFetchAllLogos,
needsAllLogos: mockNeedsAllLogos,
logos: { 1: {} },
};
return selector ? selector(state) : state;
});
render(<LogosPage />);
expect(screen.getByText(/\(1 logo\)/i)).toBeInTheDocument();
});
it('fetches all logos on mount when needed', async () => {
render(<LogosPage />);
await waitFor(() => {
expect(mockNeedsAllLogos).toHaveBeenCalled();
expect(mockFetchAllLogos).toHaveBeenCalled();
});
});
it('does not fetch logos when not needed', async () => {
mockNeedsAllLogos.mockReturnValue(false);
render(<LogosPage />);
await waitFor(() => {
expect(mockNeedsAllLogos).toHaveBeenCalled();
expect(mockFetchAllLogos).not.toHaveBeenCalled();
});
});
it('shows error notification when fetching logos fails', async () => {
const consoleErrorSpy = vi.spyOn(console, 'error').mockImplementation(() => {});
const error = new Error('Failed to fetch');
mockFetchAllLogos.mockRejectedValue(error);
render(<LogosPage />);
await waitFor(() => {
expect(showNotification).toHaveBeenCalledWith({
title: 'Error',
message: 'Failed to load channel logos',
color: 'red',
});
expect(consoleErrorSpy).toHaveBeenCalledWith(
'Failed to load channel logos:',
error
);
});
consoleErrorSpy.mockRestore();
});
it('switches to VOD logos tab when clicked', () => {
const { rerender } = render(<LogosPage />);
expect(screen.getByTestId('logos-table')).toBeInTheDocument();
const tabs = screen.getByTestId('tabs');
fireEvent.click(tabs);
rerender(<LogosPage />);
expect(screen.getByTestId('vod-logos-table')).toBeInTheDocument();
expect(screen.queryByTestId('logos-table')).not.toBeInTheDocument();
});
it('renders both tab options', () => {
render(<LogosPage />);
expect(screen.getByText('Channel Logos')).toBeInTheDocument();
expect(screen.getByText('VOD Logos')).toBeInTheDocument();
});
it('displays zero logos correctly', () => {
useLogosStore.mockImplementation((selector) => {
const state = {
fetchAllLogos: mockFetchAllLogos,
needsAllLogos: mockNeedsAllLogos,
logos: {},
};
return selector ? selector(state) : state;
});
render(<LogosPage />);
expect(screen.getByText(/\(0 logos\)/i)).toBeInTheDocument();
});
});


@@ -0,0 +1,561 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import PluginsPage from '../Plugins';
import { showNotification, updateNotification } from '../../utils/notificationUtils.js';
import {
deletePluginByKey,
importPlugin,
setPluginEnabled,
updatePluginSettings,
} from '../../utils/pages/PluginsUtils';
import { usePluginStore } from '../../store/plugins';
vi.mock('../../store/plugins');
vi.mock('../../utils/pages/PluginsUtils', () => ({
deletePluginByKey: vi.fn(),
importPlugin: vi.fn(),
setPluginEnabled: vi.fn(),
updatePluginSettings: vi.fn(),
runPluginAction: vi.fn(),
}));
vi.mock('../../utils/notificationUtils.js', () => ({
showNotification: vi.fn(),
updateNotification: vi.fn(),
}));
vi.mock('@mantine/core', async () => {
return {
AppShellMain: ({ children }) => <div>{children}</div>,
Box: ({ children, style }) => <div style={style}>{children}</div>,
Stack: ({ children, gap }) => <div data-gap={gap}>{children}</div>,
Group: ({ children, justify, mb }) => (
<div data-justify={justify} data-mb={mb}>
{children}
</div>
),
Alert: ({ children, color, title }) => (
<div data-testid="alert" data-color={color}>
{title && <div>{title}</div>}
{children}
</div>
),
Text: ({ children, size, fw, c }) => (
<span data-size={size} data-fw={fw} data-color={c}>
{children}
</span>
),
Button: ({ children, onClick, leftSection, variant, color, loading, disabled, fullWidth }) => (
<button
onClick={onClick}
disabled={loading || disabled}
data-variant={variant}
data-color={color}
data-full-width={fullWidth}
>
{leftSection}
{children}
</button>
),
Loader: () => <div data-testid="loader">Loading...</div>,
Switch: ({ checked, onChange, label, description }) => (
<label>
<input
type="checkbox"
checked={checked}
onChange={(e) => onChange(e)}
/>
{label}
{description && <span>{description}</span>}
</label>
),
Divider: ({ my }) => <hr data-my={my} />,
ActionIcon: ({ children, onClick, color, variant, title }) => (
<button onClick={onClick} data-color={color} data-variant={variant} title={title}>
{children}
</button>
),
SimpleGrid: ({ children, cols }) => (
<div data-cols={cols}>{children}</div>
),
Modal: ({ opened, onClose, title, children, size, centered }) =>
opened ? (
<div data-testid="modal" data-size={size} data-centered={centered}>
<div data-testid="modal-title">{title}</div>
<button onClick={onClose}>Close Modal</button>
{children}
</div>
) : null,
FileInput: ({ value, onChange, label, placeholder, accept }) => (
<div>
{label && <label>{label}</label>}
<input
type="file"
onChange={(e) => onChange?.(e.target.files[0])}
placeholder={placeholder}
accept={accept}
aria-label={label}
/>
</div>
),
};
});
vi.mock('@mantine/dropzone', () => ({
Dropzone: ({ children, onDrop, accept, maxSize }) => (
<div
data-testid="dropzone"
data-accept={accept}
data-max-size={maxSize}
onClick={() => {
const file = new File(['content'], 'plugin.zip', { type: 'application/zip' });
onDrop([file]);
}}
>
<div>Drop files</div>
{children}
</div>
),
}));
vi.mock('../../components/cards/PluginCard.jsx', () => ({
default: ({ plugin }) => (
<div>
<h2>{plugin.name}</h2>
<p>{plugin.description}</p>
</div>
),
}));
describe('PluginsPage', () => {
const mockPlugins = [
{
key: 'plugin1',
name: 'Test Plugin 1',
description: 'Description 1',
enabled: true,
ever_enabled: true,
},
{
key: 'plugin2',
name: 'Test Plugin 2',
description: 'Description 2',
enabled: false,
ever_enabled: false,
},
];
const mockPluginStoreState = {
plugins: mockPlugins,
loading: false,
fetchPlugins: vi.fn(),
updatePlugin: vi.fn(),
removePlugin: vi.fn(),
invalidatePlugins: vi.fn(),
};
beforeEach(() => {
vi.clearAllMocks();
usePluginStore.mockImplementation((selector) => {
return selector ? selector(mockPluginStoreState) : mockPluginStoreState;
});
usePluginStore.getState = vi.fn(() => mockPluginStoreState);
});
describe('Rendering', () => {
it('renders the page with plugins list', async () => {
render(<PluginsPage />);
await waitFor(() => {
expect(screen.getByText('Plugins')).toBeInTheDocument();
expect(screen.getByText('Test Plugin 1')).toBeInTheDocument();
expect(screen.getByText('Test Plugin 2')).toBeInTheDocument();
});
});
it('renders import button', () => {
render(<PluginsPage />);
expect(screen.getByText('Import Plugin')).toBeInTheDocument();
});
it('renders reload button', () => {
render(<PluginsPage />);
const reloadButton = screen.getByTitle('Reload');
expect(reloadButton).toBeInTheDocument();
});
it('shows loader when loading and no plugins', () => {
const loadingState = { plugins: [], loading: true, fetchPlugins: vi.fn() };
usePluginStore.mockImplementation((selector) => {
return selector ? selector(loadingState) : loadingState;
});
usePluginStore.getState = vi.fn(() => loadingState);
render(<PluginsPage />);
expect(screen.getByTestId('loader')).toBeInTheDocument();
});
it('shows empty state when no plugins', () => {
const emptyState = { plugins: [], loading: false, fetchPlugins: vi.fn() };
usePluginStore.mockImplementation((selector) => {
return selector ? selector(emptyState) : emptyState;
});
usePluginStore.getState = vi.fn(() => emptyState);
render(<PluginsPage />);
expect(screen.getByText(/No plugins found/)).toBeInTheDocument();
});
});
describe('Import Plugin', () => {
it('opens import modal when import button is clicked', () => {
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
expect(screen.getByTestId('modal')).toBeInTheDocument();
expect(screen.getByTestId('modal-title')).toHaveTextContent('Import Plugin');
});
it('shows dropzone and file input in import modal', () => {
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
expect(screen.getByTestId('dropzone')).toBeInTheDocument();
expect(screen.getByPlaceholderText('Select plugin .zip')).toBeInTheDocument();
});
it('closes import modal when close button is clicked', () => {
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
expect(screen.getByTestId('modal')).toBeInTheDocument();
fireEvent.click(screen.getByText('Close Modal'));
expect(screen.queryByTestId('modal')).not.toBeInTheDocument();
});
it('handles file upload via dropzone', async () => {
importPlugin.mockResolvedValue({
success: true,
plugin: { key: 'new-plugin', name: 'New Plugin', description: 'New Description' },
});
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
const dropzone = screen.getByTestId('dropzone');
fireEvent.click(dropzone);
await waitFor(() => {
const uploadButton = screen.getAllByText('Upload').find(btn =>
btn.tagName === 'BUTTON'
);
expect(uploadButton).not.toBeDisabled();
});
});
it('uploads plugin successfully', async () => {
const mockPlugin = {
key: 'new-plugin',
name: 'New Plugin',
description: 'New Description',
ever_enabled: false,
};
importPlugin.mockResolvedValue({
success: true,
plugin: mockPlugin,
});
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
const fileInput = screen.getByPlaceholderText('Select plugin .zip');
const file = new File(['content'], 'plugin.zip', { type: 'application/zip' });
fireEvent.change(fileInput, { target: { files: [file] } });
const uploadButton = screen.getAllByText('Upload').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(uploadButton);
await waitFor(() => {
expect(importPlugin).toHaveBeenCalledWith(file);
expect(showNotification).toHaveBeenCalled();
expect(updateNotification).toHaveBeenCalled();
});
});
it('handles upload failure', async () => {
importPlugin.mockResolvedValue({
success: false,
error: 'Upload failed',
});
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
const fileInput = screen.getByPlaceholderText('Select plugin .zip');
const file = new File(['content'], 'plugin.zip', { type: 'application/zip' });
fireEvent.change(fileInput, { target: { files: [file] } });
const uploadButton = screen.getAllByText('Upload').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(uploadButton);
await waitFor(() => {
expect(updateNotification).toHaveBeenCalledWith(
expect.objectContaining({
color: 'red',
title: 'Import failed',
})
);
});
});
it('shows enable switch after successful import', async () => {
const mockPlugin = {
key: 'new-plugin',
name: 'New Plugin',
description: 'New Description',
ever_enabled: false,
};
importPlugin.mockResolvedValue({
success: true,
plugin: mockPlugin,
});
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
const fileInput = screen.getByPlaceholderText('Select plugin .zip');
const file = new File(['content'], 'plugin.zip', { type: 'application/zip' });
fireEvent.change(fileInput, { target: { files: [file] } });
const uploadButton = screen.getAllByText('Upload').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByText('New Plugin')).toBeInTheDocument();
expect(screen.getByText('Enable now')).toBeInTheDocument();
});
});
it('enables plugin after import when switch is toggled', async () => {
const mockPlugin = {
key: 'new-plugin',
name: 'New Plugin',
description: 'New Description',
ever_enabled: true,
};
importPlugin.mockResolvedValue({
success: true,
plugin: mockPlugin,
});
setPluginEnabled.mockResolvedValue({ success: true });
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
const fileInput = screen.getByPlaceholderText('Select plugin .zip');
const file = new File(['content'], 'plugin.zip', { type: 'application/zip' });
fireEvent.change(fileInput, { target: { files: [file] } });
const uploadButton = screen.getAllByText('Upload').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByText('Enable now')).toBeInTheDocument();
});
const enableSwitch = screen.getByRole('checkbox');
fireEvent.click(enableSwitch);
const enableButton = screen.getAllByText('Enable').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(enableButton);
await waitFor(() => {
expect(setPluginEnabled).toHaveBeenCalledWith('new-plugin', true);
});
});
});
describe('Trust Warning', () => {
it('shows trust warning for untrusted plugins', async () => {
const mockPlugin = {
key: 'new-plugin',
name: 'New Plugin',
description: 'New Description',
ever_enabled: false,
};
importPlugin.mockResolvedValue({
success: true,
plugin: mockPlugin,
});
setPluginEnabled.mockResolvedValue({ success: true, ever_enabled: true });
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
const fileInput = screen.getByPlaceholderText('Select plugin .zip');
const file = new File(['content'], 'plugin.zip', { type: 'application/zip' });
fireEvent.change(fileInput, { target: { files: [file] } });
const uploadButton = screen.getAllByText('Upload').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByText('Enable now')).toBeInTheDocument();
});
const enableSwitch = screen.getByRole('checkbox');
fireEvent.click(enableSwitch);
const enableButton = screen.getAllByText('Enable').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(enableButton);
await waitFor(() => {
expect(screen.getByText('Enable third-party plugins?')).toBeInTheDocument();
});
});
it('enables plugin when trust is confirmed', async () => {
const mockPlugin = {
key: 'new-plugin',
name: 'New Plugin',
description: 'New Description',
ever_enabled: false,
};
importPlugin.mockResolvedValue({
success: true,
plugin: mockPlugin,
});
setPluginEnabled.mockResolvedValue({ success: true, ever_enabled: true });
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
const fileInput = screen.getByPlaceholderText('Select plugin .zip');
const file = new File(['content'], 'plugin.zip', { type: 'application/zip' });
fireEvent.change(fileInput, { target: { files: [file] } });
const uploadButton = screen.getAllByText('Upload').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByText('Enable now')).toBeInTheDocument();
});
const enableSwitch = screen.getByRole('checkbox');
fireEvent.click(enableSwitch);
const enableButton = screen.getAllByText('Enable').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(enableButton);
await waitFor(() => {
expect(screen.getByText('I understand, enable')).toBeInTheDocument();
});
fireEvent.click(screen.getByText('I understand, enable'));
await waitFor(() => {
expect(setPluginEnabled).toHaveBeenCalledWith('new-plugin', true);
});
});
it('cancels enable when trust is denied', async () => {
const mockPlugin = {
key: 'new-plugin',
name: 'New Plugin',
description: 'New Description',
ever_enabled: false,
};
importPlugin.mockResolvedValue({
success: true,
plugin: mockPlugin,
});
render(<PluginsPage />);
fireEvent.click(screen.getByText('Import Plugin'));
const fileInput = screen.getByPlaceholderText('Select plugin .zip');
const file = new File(['content'], 'plugin.zip', { type: 'application/zip' });
fireEvent.change(fileInput, { target: { files: [file] } });
const uploadButton = screen.getAllByText('Upload').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByText('Enable now')).toBeInTheDocument();
});
const enableSwitch = screen.getByRole('checkbox');
fireEvent.click(enableSwitch);
const enableButton = screen.getAllByText('Enable').find(btn =>
btn.tagName === 'BUTTON'
);
fireEvent.click(enableButton);
await waitFor(() => {
const cancelButtons = screen.getAllByText('Cancel');
expect(cancelButtons.length).toBeGreaterThan(0);
});
const cancelButtons = screen.getAllByText('Cancel');
fireEvent.click(cancelButtons[cancelButtons.length - 1]);
await waitFor(() => {
expect(setPluginEnabled).not.toHaveBeenCalled();
});
});
});
describe('Reload', () => {
it('reloads plugins when reload button is clicked', async () => {
const invalidatePlugins = vi.fn();
usePluginStore.getState = vi.fn(() => ({
...mockPluginStoreState,
invalidatePlugins,
}));
render(<PluginsPage />);
const reloadButton = screen.getByTitle('Reload');
fireEvent.click(reloadButton);
await waitFor(() => {
expect(invalidatePlugins).toHaveBeenCalled();
});
});
});
});
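The Upload/Enable flows above repeatedly scan `getAllByText` results for the element whose `tagName` is `BUTTON`. A small helper (sketch only; the name `findButtonByText` is illustrative, not part of this PR) captures that pattern:

```javascript
// Sketch: pick the actual <button> out of a list of text matches,
// as done inline in the tests above. Works on any objects exposing
// a tagName property, so it is testable without a DOM.
function findButtonByText(matches) {
  return matches.find((el) => el.tagName === 'BUTTON');
}

// Usage in a test (hypothetical):
// fireEvent.click(findButtonByText(screen.getAllByText('Upload')));
```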

View file

@@ -0,0 +1,208 @@
import {
render,
screen,
waitFor,
} from '@testing-library/react';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import SettingsPage from '../Settings';
import useAuthStore from '../../store/auth';
import { USER_LEVELS } from '../../constants';
import userEvent from '@testing-library/user-event';
// Mock all dependencies
vi.mock('../../store/auth');
vi.mock('../../components/tables/UserAgentsTable', () => ({
default: ({ active }) => <div data-testid="user-agents-table">UserAgentsTable {active ? 'active' : 'inactive'}</div>,
}));
vi.mock('../../components/tables/StreamProfilesTable', () => ({
default: ({ active }) => <div data-testid="stream-profiles-table">StreamProfilesTable {active ? 'active' : 'inactive'}</div>,
}));
vi.mock('../../components/backups/BackupManager', () => ({
default: ({ active }) => <div data-testid="backup-manager">BackupManager {active ? 'active' : 'inactive'}</div>,
}));
vi.mock('../../components/forms/settings/UiSettingsForm', () => ({
default: ({ active }) => <div data-testid="ui-settings-form">UiSettingsForm {active ? 'active' : 'inactive'}</div>,
}));
vi.mock('../../components/forms/settings/NetworkAccessForm', () => ({
default: ({ active }) => <div data-testid="network-access-form">NetworkAccessForm {active ? 'active' : 'inactive'}</div>,
}));
vi.mock('../../components/forms/settings/ProxySettingsForm', () => ({
default: ({ active }) => <div data-testid="proxy-settings-form">ProxySettingsForm {active ? 'active' : 'inactive'}</div>,
}));
vi.mock('../../components/forms/settings/StreamSettingsForm', () => ({
default: ({ active }) => <div data-testid="stream-settings-form">StreamSettingsForm {active ? 'active' : 'inactive'}</div>,
}));
vi.mock('../../components/forms/settings/DvrSettingsForm', () => ({
default: ({ active }) => <div data-testid="dvr-settings-form">DvrSettingsForm {active ? 'active' : 'inactive'}</div>,
}));
vi.mock('../../components/forms/settings/SystemSettingsForm', () => ({
default: ({ active }) => <div data-testid="system-settings-form">SystemSettingsForm {active ? 'active' : 'inactive'}</div>,
}));
vi.mock('../../components/ErrorBoundary', () => ({
default: ({ children }) => <div data-testid="error-boundary">{children}</div>,
}));
vi.mock('@mantine/core', async () => {
const accordionComponent = ({ children, onChange, defaultValue }) => <div data-testid="accordion">{children}</div>;
accordionComponent.Item = ({ children, value }) => (
<div data-testid={`accordion-item-${value}`}>{children}</div>
);
accordionComponent.Control = ({ children }) => (
<button data-testid="accordion-control">{children}</button>
);
accordionComponent.Panel = ({ children }) => (
<div data-testid="accordion-panel">{children}</div>
);
return {
Accordion: accordionComponent,
AccordionItem: accordionComponent.Item,
AccordionControl: accordionComponent.Control,
AccordionPanel: accordionComponent.Panel,
Box: ({ children }) => <div>{children}</div>,
Center: ({ children }) => <div>{children}</div>,
Loader: () => <div data-testid="loader">Loading...</div>,
Text: ({ children }) => <span>{children}</span>,
};
});
describe('SettingsPage', () => {
beforeEach(() => {
vi.clearAllMocks();
});
describe('Rendering for Regular User', () => {
beforeEach(() => {
useAuthStore.mockReturnValue({
user_level: USER_LEVELS.USER,
username: 'testuser',
});
});
it('renders the settings page', () => {
render(<SettingsPage />);
expect(screen.getByTestId('accordion')).toBeInTheDocument();
});
it('renders UI Settings accordion item', () => {
render(<SettingsPage />);
expect(screen.getByTestId('accordion-item-ui-settings')).toBeInTheDocument();
expect(screen.getByText('UI Settings')).toBeInTheDocument();
});
it('opens UI Settings panel by default', () => {
render(<SettingsPage />);
expect(screen.getByTestId('ui-settings-form')).toBeInTheDocument();
});
it('does not render admin-only sections for regular users', () => {
render(<SettingsPage />);
expect(screen.queryByText('DVR')).not.toBeInTheDocument();
expect(screen.queryByText('Stream Settings')).not.toBeInTheDocument();
expect(screen.queryByText('System Settings')).not.toBeInTheDocument();
expect(screen.queryByText('User-Agents')).not.toBeInTheDocument();
expect(screen.queryByText('Stream Profiles')).not.toBeInTheDocument();
expect(screen.queryByText('Network Access')).not.toBeInTheDocument();
expect(screen.queryByText('Proxy Settings')).not.toBeInTheDocument();
expect(screen.queryByText('Backup & Restore')).not.toBeInTheDocument();
});
});
describe('Rendering for Admin User', () => {
beforeEach(() => {
useAuthStore.mockReturnValue({
user_level: USER_LEVELS.ADMIN,
username: 'admin',
});
});
it('renders all accordion items for admin', async () => {
render(<SettingsPage />);
expect(screen.getByText('UI Settings')).toBeInTheDocument();
await waitFor(() => {
expect(screen.getByText('DVR')).toBeInTheDocument();
expect(screen.getByText('Stream Settings')).toBeInTheDocument();
expect(screen.getByText('System Settings')).toBeInTheDocument();
expect(screen.getByText('User-Agents')).toBeInTheDocument();
expect(screen.getByText('Stream Profiles')).toBeInTheDocument();
expect(screen.getByText('Network Access')).toBeInTheDocument();
expect(screen.getByText('Proxy Settings')).toBeInTheDocument();
expect(screen.getByText('Backup & Restore')).toBeInTheDocument();
});
});
it('renders DVR settings accordion item', () => {
render(<SettingsPage />);
expect(screen.getByTestId('accordion-item-dvr-settings')).toBeInTheDocument();
});
it('renders Stream Settings accordion item', () => {
render(<SettingsPage />);
expect(screen.getByTestId('accordion-item-stream-settings')).toBeInTheDocument();
});
it('renders System Settings accordion item', () => {
render(<SettingsPage />);
expect(screen.getByTestId('accordion-item-system-settings')).toBeInTheDocument();
});
it('renders User-Agents accordion item', () => {
render(<SettingsPage />);
expect(screen.getByTestId('accordion-item-user-agents')).toBeInTheDocument();
});
it('renders Stream Profiles accordion item', () => {
render(<SettingsPage />);
expect(screen.getByTestId('accordion-item-stream-profiles')).toBeInTheDocument();
});
it('renders Network Access accordion item', () => {
render(<SettingsPage />);
expect(screen.getByTestId('accordion-item-network-access')).toBeInTheDocument();
});
it('renders Proxy Settings accordion item', () => {
render(<SettingsPage />);
expect(screen.getByTestId('accordion-item-proxy-settings')).toBeInTheDocument();
});
it('renders Backup & Restore accordion item', () => {
render(<SettingsPage />);
expect(screen.getByTestId('accordion-item-backups')).toBeInTheDocument();
});
});
describe('Accordion Interactions', () => {
beforeEach(() => {
useAuthStore.mockReturnValue({
user_level: USER_LEVELS.ADMIN,
username: 'admin',
});
});
it('opens DVR settings when clicked', async () => {
const user = userEvent.setup();
render(<SettingsPage />);
const dvrButton = screen.getByText('DVR');
await user.click(dvrButton);
await screen.findByTestId('dvr-settings-form');
});
});
});
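The Mantine mock above attaches `Item`, `Control`, and `Panel` as static members of a single function so that both `Accordion.Item` and the named `AccordionItem` export resolve to the same mock. A JSX-free sketch of that compound-component pattern (names are illustrative; plain objects stand in for React elements):

```javascript
// Sketch: mock a compound component (Root plus Root.Item/Control/Panel)
// as one function with static members, mirroring the Accordion mock above.
function createCompoundMock(testId) {
  const Root = ({ children }) => ({ testId, children });
  Root.Item = ({ children, value }) => ({
    testId: `${testId}-item-${value}`,
    children,
  });
  Root.Control = ({ children }) => ({ testId: `${testId}-control`, children });
  Root.Panel = ({ children }) => ({ testId: `${testId}-panel`, children });
  return Root;
}
```

Returning the same object for both the dotted and the flat export keeps tests that import either form in sync.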

View file

@@ -0,0 +1,494 @@
// src/pages/__tests__/Stats.test.jsx
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import {
render,
screen,
waitFor,
fireEvent,
act,
} from '@testing-library/react';
import StatsPage from '../Stats';
import useStreamProfilesStore from '../../store/streamProfiles';
import useLocalStorage from '../../hooks/useLocalStorage';
import useChannelsStore from '../../store/channels';
import useLogosStore from '../../store/logos';
import {
fetchActiveChannelStats,
getClientStats,
getCombinedConnections,
getStatsByChannelId,
getVODStats,
stopChannel,
stopClient,
stopVODClient,
} from '../../utils/pages/StatsUtils.js';
// Mock dependencies
vi.mock('../../store/channels');
vi.mock('../../store/logos');
vi.mock('../../store/streamProfiles');
vi.mock('../../hooks/useLocalStorage');
vi.mock('../../components/SystemEvents', () => ({
default: () => <div data-testid="system-events">SystemEvents</div>
}));
vi.mock('../../components/ErrorBoundary.jsx', () => ({
default: ({ children }) => <div data-testid="error-boundary">{children}</div>
}));
vi.mock('../../components/cards/VodConnectionCard.jsx', () => ({
default: ({ vodContent, stopVODClient }) => (
<div data-testid={`vod-connection-card-${vodContent.content_uuid}`}>
VODConnectionCard - {vodContent.content_uuid}
{vodContent.connections?.map((conn) => (
<button
key={conn.client_id}
data-testid={`stop-vod-client-${conn.client_id}`}
onClick={() => stopVODClient(conn.client_id)}
>
Stop VOD Client
</button>
))}
</div>
),
}));
vi.mock('../../components/cards/StreamConnectionCard.jsx', () => ({
default: ({ channel }) => (
<div data-testid={`stream-connection-card-${channel.uuid}`}>
StreamConnectionCard - {channel.uuid}
</div>
),
}));
// Mock Mantine components
vi.mock('@mantine/core', () => ({
Box: ({ children, ...props }) => <div {...props}>{children}</div>,
Button: ({ children, onClick, loading, ...props }) => (
<button onClick={onClick} disabled={loading} {...props}>
{children}
</button>
),
Group: ({ children }) => <div>{children}</div>,
LoadingOverlay: () => <div data-testid="loading-overlay">Loading...</div>,
Text: ({ children }) => <span>{children}</span>,
Title: ({ children }) => <h3>{children}</h3>,
NumberInput: ({ value, onChange, min, max, ...props }) => (
<input
data-testid="refresh-interval-input"
type="number"
value={value}
onChange={(e) => onChange(Number(e.target.value))}
min={min}
max={max}
{...props}
/>
),
}));
// Mock stats utils
vi.mock('../../utils/pages/StatsUtils', () => {
return {
fetchActiveChannelStats: vi.fn(),
getVODStats: vi.fn(),
getClientStats: vi.fn(),
getCombinedConnections: vi.fn(),
getStatsByChannelId: vi.fn(),
stopChannel: vi.fn(),
stopClient: vi.fn(),
stopVODClient: vi.fn(),
};
});
describe('StatsPage', () => {
const mockChannels = [
{ id: 1, uuid: 'channel-1', name: 'Channel 1' },
{ id: 2, uuid: 'channel-2', name: 'Channel 2' },
];
const mockChannelsByUUID = {
'channel-1': mockChannels[0],
'channel-2': mockChannels[1],
};
const mockStreamProfiles = [
{ id: 1, name: 'Profile 1' },
];
const mockLogos = {
'logo-1': 'logo-url-1',
};
const mockChannelStats = {
channels: [
{ channel_id: 1, uuid: 'channel-1', connections: 2 },
{ channel_id: 2, uuid: 'channel-2', connections: 1 },
],
};
const mockVODStats = {
vod_connections: [
{
content_uuid: 'vod-1',
connections: [
{ client_id: 'client-1', ip: '192.168.1.1' },
],
},
],
};
const mockProcessedChannelHistory = {
1: { id: 1, uuid: 'channel-1', connections: 2 },
2: { id: 2, uuid: 'channel-2', connections: 1 },
};
const mockClients = [
{ id: 'client-1', channel_id: 1 },
{ id: 'client-2', channel_id: 1 },
{ id: 'client-3', channel_id: 2 },
];
const mockCombinedConnections = [
{ id: 1, type: 'stream', data: { id: 1, uuid: 'channel-1' } },
{ id: 2, type: 'stream', data: { id: 2, uuid: 'channel-2' } },
{ id: 3, type: 'vod', data: { content_uuid: 'vod-1', connections: [{ client_id: 'client-1' }] } },
];
let mockSetChannelStats;
let mockSetRefreshInterval;
beforeEach(() => {
vi.clearAllMocks();
mockSetChannelStats = vi.fn();
mockSetRefreshInterval = vi.fn();
// Setup store mocks
useChannelsStore.mockImplementation((selector) => {
const state = {
channels: mockChannels,
channelsByUUID: mockChannelsByUUID,
stats: { channels: mockChannelStats.channels },
setChannelStats: mockSetChannelStats,
};
return selector ? selector(state) : state;
});
useStreamProfilesStore.mockImplementation((selector) => {
const state = {
profiles: mockStreamProfiles,
};
return selector ? selector(state) : state;
});
useLogosStore.mockImplementation((selector) => {
const state = {
logos: mockLogos,
};
return selector ? selector(state) : state;
});
useLocalStorage.mockReturnValue([5, mockSetRefreshInterval]);
// Setup API mocks
fetchActiveChannelStats.mockResolvedValue(mockChannelStats);
getVODStats.mockResolvedValue(mockVODStats);
getStatsByChannelId.mockReturnValue(mockProcessedChannelHistory);
getClientStats.mockReturnValue(mockClients);
getCombinedConnections.mockReturnValue(mockCombinedConnections);
stopVODClient.mockResolvedValue({});
delete window.location;
window.location = { pathname: '/stats' };
});
describe('Initial Rendering', () => {
it('renders the page title', async () => {
render(<StatsPage />);
await screen.findByText('Active Connections');
});
it('fetches initial stats on mount', async () => {
render(<StatsPage />);
await waitFor(() => {
expect(fetchActiveChannelStats).toHaveBeenCalledTimes(2);
expect(getVODStats).toHaveBeenCalledTimes(2);
});
});
it('displays connection counts', async () => {
render(<StatsPage />);
await waitFor(() => {
expect(screen.getByText(/2 streams/)).toBeInTheDocument();
expect(screen.getByText(/1 VOD connection/)).toBeInTheDocument();
});
});
it('renders SystemEvents component', async () => {
render(<StatsPage />);
await screen.findByTestId('system-events');
});
});
describe('Refresh Interval Controls', () => {
it('displays default refresh interval', async () => {
render(<StatsPage />);
await waitFor(() => {
const input = screen.getByTestId('refresh-interval-input');
expect(input).toHaveValue(5);
});
});
it('updates refresh interval when input changes', async () => {
render(<StatsPage />);
const input = screen.getByTestId('refresh-interval-input');
fireEvent.change(input, { target: { value: '10' } });
await waitFor(() => {
expect(mockSetRefreshInterval).toHaveBeenCalledWith(10);
});
});
it('displays polling active message when interval > 0', async () => {
render(<StatsPage />);
await waitFor(() => {
expect(screen.getByText(/Refreshing every 5s/)).toBeInTheDocument();
});
});
it('displays disabled message when interval is 0', async () => {
useLocalStorage.mockReturnValue([0, mockSetRefreshInterval]);
render(<StatsPage />);
await screen.findByText('Refreshing disabled');
});
});
describe('Auto-refresh Polling', () => {
it('sets up polling interval for stats', async () => {
vi.useFakeTimers();
render(<StatsPage />);
expect(fetchActiveChannelStats).toHaveBeenCalledTimes(2);
expect(getVODStats).toHaveBeenCalledTimes(2);
// Advance timers by 5 seconds
await act(async () => {
vi.advanceTimersByTime(5000);
});
expect(fetchActiveChannelStats).toHaveBeenCalledTimes(3);
expect(getVODStats).toHaveBeenCalledTimes(3);
vi.useRealTimers();
});
it('does not poll when interval is 0', async () => {
vi.useFakeTimers();
useLocalStorage.mockReturnValue([0, mockSetRefreshInterval]);
render(<StatsPage />);
expect(fetchActiveChannelStats).toHaveBeenCalledTimes(1);
await act(async () => {
vi.advanceTimersByTime(10000);
});
expect(fetchActiveChannelStats).toHaveBeenCalledTimes(1);
vi.useRealTimers();
});
it('clears interval on unmount', async () => {
vi.useFakeTimers();
const { unmount } = render(<StatsPage />);
expect(fetchActiveChannelStats).toHaveBeenCalledTimes(2);
unmount();
await act(async () => {
vi.advanceTimersByTime(5000);
});
// Should not fetch again after unmount
expect(fetchActiveChannelStats).toHaveBeenCalledTimes(2);
vi.useRealTimers();
});
});
describe('Manual Refresh', () => {
it('refreshes stats when Refresh Now button is clicked', async () => {
render(<StatsPage />);
expect(fetchActiveChannelStats).toHaveBeenCalledTimes(2);
const refreshButton = screen.getByText('Refresh Now');
fireEvent.click(refreshButton);
await waitFor(() => {
expect(fetchActiveChannelStats).toHaveBeenCalledTimes(3);
expect(getVODStats).toHaveBeenCalledTimes(3);
});
});
});
describe('Connection Display', () => {
it('renders stream connection cards', async () => {
render(<StatsPage />);
await waitFor(() => {
expect(screen.getByTestId('stream-connection-card-channel-1')).toBeInTheDocument();
expect(screen.getByTestId('stream-connection-card-channel-2')).toBeInTheDocument();
});
});
it('renders VOD connection cards', async () => {
render(<StatsPage />);
await waitFor(() => {
expect(screen.getByTestId('vod-connection-card-vod-1')).toBeInTheDocument();
});
});
it('displays empty state when no connections', async () => {
getCombinedConnections.mockReturnValue([]);
render(<StatsPage />);
await waitFor(() => {
expect(screen.getByText('No active connections')).toBeInTheDocument();
});
});
});
describe('VOD Client Management', () => {
it('stops VOD client when stop button is clicked', async () => {
render(<StatsPage />);
await waitFor(() => {
expect(screen.getByTestId('stop-vod-client-client-1')).toBeInTheDocument();
});
const stopButton = screen.getByTestId('stop-vod-client-client-1');
fireEvent.click(stopButton);
await waitFor(() => {
expect(stopVODClient).toHaveBeenCalledWith('client-1');
});
});
it('refreshes VOD stats after stopping client', async () => {
render(<StatsPage />);
await waitFor(() => {
expect(getVODStats).toHaveBeenCalledTimes(2);
});
const stopButton = await screen.findByTestId('stop-vod-client-client-1');
fireEvent.click(stopButton);
await waitFor(() => {
expect(getVODStats).toHaveBeenCalledTimes(3);
});
});
});
describe('Stats Processing', () => {
it('processes channel stats correctly', async () => {
render(<StatsPage />);
await waitFor(() => {
expect(getStatsByChannelId).toHaveBeenCalledWith(
mockChannelStats,
expect.any(Object),
mockChannelsByUUID,
mockChannels,
mockStreamProfiles
);
});
});
it('updates clients based on processed stats', async () => {
render(<StatsPage />);
await waitFor(() => {
expect(getClientStats).toHaveBeenCalledWith(mockProcessedChannelHistory);
});
});
});
describe('Error Handling', () => {
it('handles fetchActiveChannelStats error gracefully', async () => {
const consoleError = vi.spyOn(console, 'error').mockImplementation(() => {});
fetchActiveChannelStats.mockRejectedValue(new Error('API Error'));
render(<StatsPage />);
await waitFor(() => {
expect(consoleError).toHaveBeenCalledWith(
'Error fetching channel stats:',
expect.any(Error)
);
});
consoleError.mockRestore();
});
it('handles getVODStats error gracefully', async () => {
const consoleError = vi.spyOn(console, 'error').mockImplementation(() => {});
getVODStats.mockRejectedValue(new Error('VOD API Error'));
render(<StatsPage />);
await waitFor(() => {
expect(consoleError).toHaveBeenCalledWith(
'Error fetching VOD stats:',
expect.any(Error)
);
});
consoleError.mockRestore();
});
});
describe('Connection Count Display', () => {
it('displays singular form for 1 stream', async () => {
getCombinedConnections.mockReturnValue([
{ id: 1, type: 'stream', data: { id: 1, uuid: 'channel-1' } },
]);
getStatsByChannelId.mockReturnValue({ 1: { id: 1, uuid: 'channel-1' } });
render(<StatsPage />);
await waitFor(() => {
expect(screen.getByText(/1 stream/)).toBeInTheDocument();
});
});
it('displays plural form for multiple VOD connections', async () => {
const multiVODStats = {
vod_connections: [
{ content_uuid: 'vod-1', connections: [{ client_id: 'c1' }] },
{ content_uuid: 'vod-2', connections: [{ client_id: 'c2' }] },
],
};
getVODStats.mockResolvedValue(multiVODStats);
render(<StatsPage />);
await waitFor(() => {
expect(screen.getByText(/2 VOD connections/)).toBeInTheDocument();
});
});
});
});
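The `beforeEach` above mocks each Zustand store with `(selector) => selector ? selector(state) : state`, because components may call the hook bare (`useStore()`) or with a selector (`useStore((s) => s.field)`). A sketch of that implementation extracted as a helper (the name `selectorMock` is illustrative, not from the PR):

```javascript
// Sketch: selector-aware Zustand store mock, handling both call shapes
// a component might use (bare hook call, or hook call with a selector).
function selectorMock(state) {
  return (selector) => (selector ? selector(state) : state);
}

// Usage in beforeEach (hypothetical):
// useChannelsStore.mockImplementation(selectorMock({ channels: [] }));
```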

View file

@@ -0,0 +1,58 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen } from '@testing-library/react';
import UsersPage from '../Users';
import useAuthStore from '../../store/auth';
vi.mock('../../store/auth');
vi.mock('../../components/tables/UsersTable', () => ({
default: () => <div data-testid="users-table">UsersTable</div>
}));
vi.mock('@mantine/core', () => ({
Box: ({ children, ...props }) => <div {...props}>{children}</div>,
}));
describe('UsersPage', () => {
beforeEach(() => {
vi.clearAllMocks();
});
it('shows the error fallback when user is not authenticated', () => {
useAuthStore.mockReturnValue({ id: null });
render(<UsersPage />);
expect(screen.getByText('Something went wrong')).toBeInTheDocument();
expect(screen.queryByTestId('users-table')).not.toBeInTheDocument();
});
it('renders UsersTable when user is authenticated', () => {
useAuthStore.mockReturnValue({ id: 1, email: 'test@example.com' });
render(<UsersPage />);
expect(screen.getByTestId('users-table')).toBeInTheDocument();
});
it('handles user with id 0 as unauthenticated', () => {
useAuthStore.mockReturnValue({ id: 0 });
render(<UsersPage />);
// id: 0 is falsy, so the error fallback is shown instead of the table
expect(screen.getByText('Something went wrong')).toBeInTheDocument();
});
it('switches from unauthenticated to authenticated state', () => {
useAuthStore.mockReturnValue({ id: null });
const { unmount } = render(<UsersPage />);
expect(screen.getByText('Something went wrong')).toBeInTheDocument();
unmount();
useAuthStore.mockReturnValue({ id: 1 });
render(<UsersPage />);
expect(screen.getByTestId('users-table')).toBeInTheDocument();
});
});

View file

@@ -0,0 +1,468 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import VODsPage from '../VODs';
import useVODStore from '../../store/useVODStore';
import {
filterCategoriesToEnabled,
getCategoryOptions,
} from '../../utils/pages/VODsUtils.js';
vi.mock('../../store/useVODStore');
vi.mock('../../components/SeriesModal', () => ({
default: ({ opened, series, onClose }) =>
opened ? (
<div data-testid="series-modal">
<div data-testid="series-name">{series?.name}</div>
<button onClick={onClose}>Close</button>
</div>
) : null
}));
vi.mock('../../components/VODModal', () => ({
default: ({ opened, vod, onClose }) =>
opened ? (
<div data-testid="vod-modal">
<div data-testid="vod-name">{vod?.name}</div>
<button onClick={onClose}>Close</button>
</div>
) : null
}));
vi.mock('../../components/cards/VODCard', () => ({
default: ({ vod, onClick }) => (
<div data-testid="vod-card" onClick={() => onClick(vod)}>
<div>{vod.name}</div>
</div>
)
}));
vi.mock('../../components/cards/SeriesCard', () => ({
default: ({ series, onClick }) => (
<div data-testid="series-card" onClick={() => onClick(series)}>
<div>{series.name}</div>
</div>
)
}));
vi.mock('@mantine/core', () => {
const gridComponent = ({ children, ...props }) => <div {...props}>{children}</div>;
gridComponent.Col = ({ children, ...props }) => <div {...props}>{children}</div>;
return {
Box: ({ children, ...props }) => <div {...props}>{children}</div>,
Stack: ({ children, ...props }) => <div {...props}>{children}</div>,
Group: ({ children, ...props }) => <div {...props}>{children}</div>,
Flex: ({ children, ...props }) => <div {...props}>{children}</div>,
Title: ({ children, ...props }) => <h2 {...props}>{children}</h2>,
TextInput: ({ value, onChange, placeholder, icon }) => (
<div>
{icon}
<input
type="text"
value={value}
onChange={onChange}
placeholder={placeholder}
/>
</div>
),
Select: ({ value, onChange, data, label, placeholder }) => (
<div>
{label && <label>{label}</label>}
<select
value={value}
onChange={(e) => onChange?.(e.target.value)}
aria-label={placeholder || label}
>
{data?.map((option) => (
<option key={option.value} value={option.value}>
{option.label}
</option>
))}
</select>
</div>
),
SegmentedControl: ({ value, onChange, data }) => (
<div>
{data.map((item) => (
<button
key={item.value}
onClick={() => onChange(item.value)}
data-active={value === item.value}
>
{item.label}
</button>
))}
</div>
),
Pagination: ({ page, onChange, total }) => (
<div data-testid="pagination">
<button onClick={() => onChange(page - 1)} disabled={page === 1}>
Prev
</button>
<span>{page} of {total}</span>
<button onClick={() => onChange(page + 1)} disabled={page === total}>
Next
</button>
</div>
),
Grid: gridComponent,
GridCol: gridComponent.Col,
Loader: () => <div data-testid="loader">Loading...</div>,
LoadingOverlay: ({ visible }) =>
visible ? <div data-testid="loading-overlay">Loading...</div> : null,
};
});
vi.mock('../../utils/pages/VODsUtils.js', () => {
return {
filterCategoriesToEnabled: vi.fn(),
getCategoryOptions: vi.fn(),
};
});
describe('VODsPage', () => {
const mockFetchContent = vi.fn();
const mockFetchCategories = vi.fn();
const mockSetFilters = vi.fn();
const mockSetPage = vi.fn();
const mockSetPageSize = vi.fn();
const defaultStoreState = {
currentPageContent: [],
categories: {},
filters: { type: 'all', search: '', category: '' },
currentPage: 1,
totalCount: 0,
pageSize: 12,
setFilters: mockSetFilters,
setPage: mockSetPage,
setPageSize: mockSetPageSize,
fetchContent: mockFetchContent,
fetchCategories: mockFetchCategories,
};
beforeEach(() => {
vi.clearAllMocks();
mockFetchContent.mockResolvedValue();
mockFetchCategories.mockResolvedValue();
filterCategoriesToEnabled.mockReturnValue({});
getCategoryOptions.mockReturnValue([]);
useVODStore.mockImplementation((selector) => selector(defaultStoreState));
localStorage.clear();
});
it('renders the page title', async () => {
render(<VODsPage />);
await screen.findByText('Video on Demand');
});
it('fetches categories on mount', async () => {
render(<VODsPage />);
await waitFor(() => {
expect(mockFetchCategories).toHaveBeenCalledTimes(1);
});
});
it('fetches content on mount', async () => {
render(<VODsPage />);
await waitFor(() => {
expect(mockFetchContent).toHaveBeenCalledTimes(1);
});
});
it('displays loader during initial load', async () => {
render(<VODsPage />);
await screen.findByTestId('loader');
});
it('displays content after loading', async () => {
const stateWithContent = {
...defaultStoreState,
currentPageContent: [
{ id: 1, name: 'Movie 1', contentType: 'movie' },
{ id: 2, name: 'Series 1', contentType: 'series' },
],
};
useVODStore.mockImplementation((selector) => selector(stateWithContent));
render(<VODsPage />);
await waitFor(() => {
expect(screen.getByText('Movie 1')).toBeInTheDocument();
expect(screen.getByText('Series 1')).toBeInTheDocument();
});
});
it('renders VOD cards for movies', async () => {
const stateWithMovies = {
...defaultStoreState,
currentPageContent: [{ id: 1, name: 'Movie 1', contentType: 'movie' }],
};
useVODStore.mockImplementation((selector) => selector(stateWithMovies));
render(<VODsPage />);
await waitFor(() => {
expect(screen.getByTestId('vod-card')).toBeInTheDocument();
});
});
it('renders series cards for series', async () => {
const stateWithSeries = {
...defaultStoreState,
currentPageContent: [
{ id: 1, name: 'Series 1', contentType: 'series' },
],
};
useVODStore.mockImplementation((selector) => selector(stateWithSeries));
render(<VODsPage />);
await waitFor(() => {
expect(screen.getByTestId('series-card')).toBeInTheDocument();
});
});
it('opens VOD modal when VOD card is clicked', async () => {
const stateWithMovies = {
...defaultStoreState,
currentPageContent: [
{ id: 1, name: 'Test Movie', contentType: 'movie' },
],
};
useVODStore.mockImplementation((selector) => selector(stateWithMovies));
render(<VODsPage />);
fireEvent.click(await screen.findByTestId('vod-card'));
expect(screen.getByTestId('vod-modal')).toBeInTheDocument();
expect(screen.getByTestId('vod-name')).toHaveTextContent('Test Movie');
});
it('opens series modal when series card is clicked', async () => {
const stateWithSeries = {
...defaultStoreState,
currentPageContent: [
{ id: 1, name: 'Test Series', contentType: 'series' },
],
};
useVODStore.mockImplementation((selector) => selector(stateWithSeries));
render(<VODsPage />);
fireEvent.click(await screen.findByTestId('series-card'));
expect(screen.getByTestId('series-modal')).toBeInTheDocument();
expect(screen.getByTestId('series-name')).toHaveTextContent('Test Series');
});
it('closes VOD modal when close button is clicked', async () => {
const stateWithMovies = {
...defaultStoreState,
currentPageContent: [
{ id: 1, name: 'Test Movie', contentType: 'movie' },
],
};
useVODStore.mockImplementation((selector) => selector(stateWithMovies));
render(<VODsPage />);
fireEvent.click(await screen.findByTestId('vod-card'));
fireEvent.click(screen.getByText('Close'));
expect(screen.queryByTestId('vod-modal')).not.toBeInTheDocument();
});
it('closes series modal when close button is clicked', async () => {
const stateWithSeries = {
...defaultStoreState,
currentPageContent: [
{ id: 1, name: 'Test Series', contentType: 'series' },
],
};
useVODStore.mockImplementation((selector) => selector(stateWithSeries));
render(<VODsPage />);
fireEvent.click(await screen.findByTestId('series-card'));
fireEvent.click(screen.getByText('Close'));
expect(screen.queryByTestId('series-modal')).not.toBeInTheDocument();
});
it('updates filters when search input changes', async () => {
render(<VODsPage />);
const searchInput = screen.getByPlaceholderText('Search VODs...');
fireEvent.change(searchInput, { target: { value: 'test search' } });
await waitFor(() => {
expect(mockSetFilters).toHaveBeenCalledWith({ search: 'test search' });
});
});
it('updates filters and resets page when type changes', async () => {
render(<VODsPage />);
const moviesButton = screen.getByText('Movies');
fireEvent.click(moviesButton);
await waitFor(() => {
expect(mockSetFilters).toHaveBeenCalledWith({
type: 'movies',
category: '',
});
expect(mockSetPage).toHaveBeenCalledWith(1);
});
});
it('updates filters and resets page when category changes', async () => {
getCategoryOptions.mockReturnValue([
{ value: 'action', label: 'Action' },
]);
render(<VODsPage />);
const categorySelect = screen.getByLabelText('Category');
fireEvent.change(categorySelect, { target: { value: 'action' } });
await waitFor(() => {
expect(mockSetFilters).toHaveBeenCalledWith({ category: 'action' });
expect(mockSetPage).toHaveBeenCalledWith(1);
});
});
it('updates page size and saves to localStorage', async () => {
render(<VODsPage />);
const pageSizeSelect = screen.getByLabelText('Page Size');
fireEvent.change(pageSizeSelect, { target: { value: '24' } });
await waitFor(() => {
expect(mockSetPageSize).toHaveBeenCalledWith(24);
expect(localStorage.getItem('vodsPageSize')).toBe('24');
});
});
it('loads page size from localStorage on mount', async () => {
localStorage.setItem('vodsPageSize', '48');
render(<VODsPage />);
await waitFor(() => {
expect(mockSetPageSize).toHaveBeenCalledWith(48);
});
});
it('displays pagination when total pages > 1', async () => {
const stateWithPagination = {
...defaultStoreState,
currentPageContent: [{ id: 1, name: 'Movie 1', contentType: 'movie' }],
totalCount: 25,
pageSize: 12,
};
useVODStore.mockImplementation((selector) =>
selector(stateWithPagination)
);
render(<VODsPage />);
await waitFor(() => {
expect(screen.getByTestId('pagination')).toBeInTheDocument();
});
});
it('does not display pagination when total pages <= 1', async () => {
const stateNoPagination = {
...defaultStoreState,
currentPageContent: [{ id: 1, name: 'Movie 1', contentType: 'movie' }],
totalCount: 5,
pageSize: 12,
};
useVODStore.mockImplementation((selector) => selector(stateNoPagination));
render(<VODsPage />);
await waitFor(() => {
expect(screen.queryByTestId('pagination')).not.toBeInTheDocument();
});
});
it('changes page when pagination is clicked', async () => {
const stateWithPagination = {
...defaultStoreState,
currentPageContent: [{ id: 1, name: 'Movie 1', contentType: 'movie' }],
totalCount: 25,
pageSize: 12,
currentPage: 1,
};
useVODStore.mockImplementation((selector) =>
selector(stateWithPagination)
);
render(<VODsPage />);
await waitFor(() => {
fireEvent.click(screen.getByText('Next'));
});
expect(mockSetPage).toHaveBeenCalledWith(2);
});
it('refetches content when filters change', async () => {
const { rerender } = render(<VODsPage />);
const updatedState = {
...defaultStoreState,
filters: { type: 'movies', search: '', category: '' },
};
useVODStore.mockImplementation((selector) => selector(updatedState));
rerender(<VODsPage />);
await waitFor(() => {
expect(mockFetchContent).toHaveBeenCalledTimes(2);
});
});
it('refetches content when page changes', async () => {
const { rerender } = render(<VODsPage />);
const updatedState = {
...defaultStoreState,
currentPage: 2,
};
useVODStore.mockImplementation((selector) => selector(updatedState));
rerender(<VODsPage />);
await waitFor(() => {
expect(mockFetchContent).toHaveBeenCalledTimes(2);
});
});
it('refetches content when page size changes', async () => {
const { rerender } = render(<VODsPage />);
const updatedState = {
...defaultStoreState,
pageSize: 24,
};
useVODStore.mockImplementation((selector) => selector(updatedState));
rerender(<VODsPage />);
await waitFor(() => {
expect(mockFetchContent).toHaveBeenCalledTimes(2);
});
});
});

File diff suppressed because it is too large


@@ -402,6 +402,7 @@ const useChannelsStore = create((set, get) => ({
try {
set({
recordings: await api.getRecordings(),
isLoading: false,
});
} catch (error) {
console.error('Failed to fetch recordings:', error);


@@ -5,6 +5,7 @@ const useEPGsStore = create((set) => ({
epgs: {},
tvgs: [],
tvgsById: {},
tvgsLoaded: false,
isLoading: false,
error: null,
refreshProgress: {},
@@ -36,11 +37,16 @@
acc[tvg.id] = tvg;
return acc;
}, {}),
tvgsLoaded: true,
isLoading: false,
});
} catch (error) {
console.error('Failed to fetch tvgs:', error);
set({ error: 'Failed to load tvgs.', isLoading: false });
set({
error: 'Failed to load tvgs.',
tvgsLoaded: true,
isLoading: false,
});
}
},


@@ -0,0 +1,473 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { renderHook, act } from '@testing-library/react';
import dayjs from 'dayjs';
import utc from 'dayjs/plugin/utc';
import timezone from 'dayjs/plugin/timezone';
import * as dateTimeUtils from '../dateTimeUtils';
import useSettingsStore from '../../store/settings';
import useLocalStorage from '../../hooks/useLocalStorage';
dayjs.extend(utc);
dayjs.extend(timezone);
vi.mock('../../store/settings');
vi.mock('../../hooks/useLocalStorage');
describe('dateTimeUtils', () => {
beforeEach(() => {
vi.clearAllMocks();
});
describe('convertToMs', () => {
it('should convert date to milliseconds', () => {
const date = '2024-01-15T10:30:00Z';
const result = dateTimeUtils.convertToMs(date);
expect(result).toBe(dayjs(date).valueOf());
});
it('should handle Date objects', () => {
const date = new Date('2024-01-15T10:30:00Z');
const result = dateTimeUtils.convertToMs(date);
expect(result).toBe(dayjs(date).valueOf());
});
});
describe('convertToSec', () => {
it('should convert date to unix timestamp', () => {
const date = '2024-01-15T10:30:00Z';
const result = dateTimeUtils.convertToSec(date);
expect(result).toBe(dayjs(date).unix());
});
it('should handle Date objects', () => {
const date = new Date('2024-01-15T10:30:00Z');
const result = dateTimeUtils.convertToSec(date);
expect(result).toBe(dayjs(date).unix());
});
});
describe('initializeTime', () => {
it('should create dayjs object from date string', () => {
const date = '2024-01-15T10:30:00Z';
const result = dateTimeUtils.initializeTime(date);
expect(result.format()).toBe(dayjs(date).format());
});
it('should handle Date objects', () => {
const date = new Date('2024-01-15T10:30:00Z');
const result = dateTimeUtils.initializeTime(date);
expect(result.format()).toBe(dayjs(date).format());
});
});
describe('startOfDay', () => {
it('should return start of day', () => {
const date = '2024-01-15T10:30:00Z';
const result = dateTimeUtils.startOfDay(date);
expect(result.hour()).toBe(0);
expect(result.minute()).toBe(0);
expect(result.second()).toBe(0);
});
});
describe('isBefore', () => {
it('should return true when first date is before second', () => {
const date1 = '2024-01-15T10:00:00Z';
const date2 = '2024-01-15T11:00:00Z';
expect(dateTimeUtils.isBefore(date1, date2)).toBe(true);
});
it('should return false when first date is after second', () => {
const date1 = '2024-01-15T11:00:00Z';
const date2 = '2024-01-15T10:00:00Z';
expect(dateTimeUtils.isBefore(date1, date2)).toBe(false);
});
});
describe('isAfter', () => {
it('should return true when first date is after second', () => {
const date1 = '2024-01-15T11:00:00Z';
const date2 = '2024-01-15T10:00:00Z';
expect(dateTimeUtils.isAfter(date1, date2)).toBe(true);
});
it('should return false when first date is before second', () => {
const date1 = '2024-01-15T10:00:00Z';
const date2 = '2024-01-15T11:00:00Z';
expect(dateTimeUtils.isAfter(date1, date2)).toBe(false);
});
});
describe('isSame', () => {
it('should return true when dates are same day', () => {
const date1 = '2024-01-15T10:00:00Z';
const date2 = '2024-01-15T11:00:00Z';
expect(dateTimeUtils.isSame(date1, date2)).toBe(true);
});
it('should return false when dates are different days', () => {
const date1 = '2024-01-15T10:00:00Z';
const date2 = '2024-01-16T10:00:00Z';
expect(dateTimeUtils.isSame(date1, date2)).toBe(false);
});
it('should accept unit parameter', () => {
const date1 = '2024-01-15T10:00:00Z';
const date2 = '2024-01-15T10:30:00Z';
expect(dateTimeUtils.isSame(date1, date2, 'hour')).toBe(true);
expect(dateTimeUtils.isSame(date1, date2, 'minute')).toBe(false);
});
});
describe('add', () => {
it('should add time to date', () => {
const date = dayjs.utc('2024-01-15T10:00:00Z');
const result = dateTimeUtils.add(date, 1, 'hour');
expect(result.hour()).toBe(11);
});
it('should handle different units', () => {
const date = '2024-01-15T10:00:00Z';
const dayResult = dateTimeUtils.add(date, 1, 'day');
expect(dayResult.date()).toBe(16);
const monthResult = dateTimeUtils.add(date, 1, 'month');
expect(monthResult.month()).toBe(1);
});
});
describe('subtract', () => {
it('should subtract time from date', () => {
const date = dayjs.utc('2024-01-15T10:00:00Z');
const result = dateTimeUtils.subtract(date, 1, 'hour');
expect(result.hour()).toBe(9);
});
it('should handle different units', () => {
const date = '2024-01-15T10:00:00Z';
const dayResult = dateTimeUtils.subtract(date, 1, 'day');
expect(dayResult.date()).toBe(14);
});
});
describe('diff', () => {
it('should calculate difference in milliseconds by default', () => {
const date1 = '2024-01-15T11:00:00Z';
const date2 = '2024-01-15T10:00:00Z';
const result = dateTimeUtils.diff(date1, date2);
expect(result).toBe(3600000);
});
it('should calculate difference in specified unit', () => {
const date1 = '2024-01-15T11:00:00Z';
const date2 = '2024-01-15T10:00:00Z';
expect(dateTimeUtils.diff(date1, date2, 'hour')).toBe(1);
expect(dateTimeUtils.diff(date1, date2, 'minute')).toBe(60);
});
});
describe('format', () => {
it('should format date with given format string', () => {
const date = '2024-01-15T10:30:00Z';
const result = dateTimeUtils.format(date, 'YYYY-MM-DD');
expect(result).toMatch(/2024-01-15/);
});
it('should handle time formatting', () => {
const date = '2024-01-15T10:30:00Z';
const result = dateTimeUtils.format(date, 'HH:mm');
expect(result).toMatch(/\d{2}:\d{2}/);
});
});
describe('getNow', () => {
it('should return current time as dayjs object', () => {
const result = dateTimeUtils.getNow();
expect(result.isValid()).toBe(true);
});
});
describe('toFriendlyDuration', () => {
it('should convert duration to human readable format', () => {
const result = dateTimeUtils.toFriendlyDuration(60, 'minutes');
expect(result).toBe('an hour');
});
it('should handle different units', () => {
const result = dateTimeUtils.toFriendlyDuration(2, 'hours');
expect(result).toBe('2 hours');
});
});
describe('fromNow', () => {
it('should return relative time from now', () => {
const pastDate = dayjs().subtract(1, 'hour').toISOString();
const result = dateTimeUtils.fromNow(pastDate);
expect(result).toMatch(/ago/);
});
});
describe('getNowMs', () => {
it('should return current time in milliseconds', () => {
const result = dateTimeUtils.getNowMs();
expect(typeof result).toBe('number');
expect(result).toBeGreaterThan(0);
});
});
describe('roundToNearest', () => {
it('should round to nearest 15 minutes', () => {
const date = dayjs('2024-01-15T10:17:00Z');
const result = dateTimeUtils.roundToNearest(date, 15);
expect(result.minute()).toBe(15);
});
it('should round up when past halfway point', () => {
const date = dayjs('2024-01-15T10:23:00Z');
const result = dateTimeUtils.roundToNearest(date, 15);
expect(result.minute()).toBe(30);
});
it('should handle rounding to next hour', () => {
const date = dayjs.utc('2024-01-15T10:53:00Z');
const result = dateTimeUtils.roundToNearest(date, 15);
expect(result.hour()).toBe(11);
expect(result.minute()).toBe(0);
});
it('should handle different minute intervals', () => {
const date = dayjs('2024-01-15T10:20:00Z');
const result = dateTimeUtils.roundToNearest(date, 30);
expect(result.minute()).toBe(30);
});
});
describe('useUserTimeZone', () => {
it('should return time zone from local storage', () => {
useLocalStorage.mockReturnValue(['America/New_York', vi.fn()]);
useSettingsStore.mockReturnValue({});
const { result } = renderHook(() => dateTimeUtils.useUserTimeZone());
expect(result.current).toBe('America/New_York');
});
it('should update time zone from settings', () => {
const setTimeZone = vi.fn();
useLocalStorage.mockReturnValue(['America/New_York', setTimeZone]);
useSettingsStore.mockReturnValue({
'system_settings': { value: { time_zone: 'America/Los_Angeles' } }
});
renderHook(() => dateTimeUtils.useUserTimeZone());
expect(setTimeZone).toHaveBeenCalledWith('America/Los_Angeles');
});
});
describe('useTimeHelpers', () => {
beforeEach(() => {
useLocalStorage.mockReturnValue(['America/New_York', vi.fn()]);
useSettingsStore.mockReturnValue({});
});
it('should return time zone, toUserTime, and userNow', () => {
const { result } = renderHook(() => dateTimeUtils.useTimeHelpers());
expect(result.current).toHaveProperty('timeZone');
expect(result.current).toHaveProperty('toUserTime');
expect(result.current).toHaveProperty('userNow');
});
it('should convert value to user time zone', () => {
const { result } = renderHook(() => dateTimeUtils.useTimeHelpers());
const date = '2024-01-15T10:00:00Z';
const converted = result.current.toUserTime(date);
expect(converted.isValid()).toBe(true);
});
it('should return an invalid dayjs object for null value', () => {
const { result } = renderHook(() => dateTimeUtils.useTimeHelpers());
const converted = result.current.toUserTime(null);
expect(converted).toBeDefined();
expect(converted.isValid()).toBe(false);
});
it('should handle timezone conversion errors', () => {
const { result } = renderHook(() => dateTimeUtils.useTimeHelpers());
const date = '2024-01-15T10:00:00Z';
const converted = result.current.toUserTime(date);
expect(converted.isValid()).toBe(true);
});
it('should return current time in user timezone', () => {
const { result } = renderHook(() => dateTimeUtils.useTimeHelpers());
const now = result.current.userNow();
expect(now.isValid()).toBe(true);
});
});
describe('RECURRING_DAY_OPTIONS', () => {
it('should have 7 day options', () => {
expect(dateTimeUtils.RECURRING_DAY_OPTIONS).toHaveLength(7);
});
it('should start with Sunday', () => {
expect(dateTimeUtils.RECURRING_DAY_OPTIONS[0]).toEqual({ value: 6, label: 'Sun' });
});
it('should include all weekdays', () => {
const labels = dateTimeUtils.RECURRING_DAY_OPTIONS.map(opt => opt.label);
expect(labels).toEqual(['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']);
});
});
describe('useDateTimeFormat', () => {
it('should return 12h format and mdy date format by default', () => {
useLocalStorage.mockReturnValueOnce(['12h', vi.fn()]).mockReturnValueOnce(['mdy', vi.fn()]);
const { result } = renderHook(() => dateTimeUtils.useDateTimeFormat());
expect(result.current).toEqual(['h:mma', 'MMM D']);
});
it('should return 24h format when set', () => {
useLocalStorage.mockReturnValueOnce(['24h', vi.fn()]).mockReturnValueOnce(['mdy', vi.fn()]);
const { result } = renderHook(() => dateTimeUtils.useDateTimeFormat());
expect(result.current[0]).toBe('HH:mm');
});
it('should return dmy date format when set', () => {
useLocalStorage.mockReturnValueOnce(['12h', vi.fn()]).mockReturnValueOnce(['dmy', vi.fn()]);
const { result } = renderHook(() => dateTimeUtils.useDateTimeFormat());
expect(result.current[1]).toBe('D MMM');
});
});
describe('toTimeString', () => {
it('should return 00:00 for null value', () => {
expect(dateTimeUtils.toTimeString(null)).toBe('00:00');
});
it('should parse HH:mm format', () => {
expect(dateTimeUtils.toTimeString('14:30')).toBe('14:30');
});
it('should parse HH:mm:ss format', () => {
const result = dateTimeUtils.toTimeString('14:30:45');
expect(result).toMatch(/14:30/);
});
it('should return original string for unparseable format', () => {
expect(dateTimeUtils.toTimeString('2:30 PM')).toBe('2:30 PM');
});
it('should return original string for invalid format', () => {
expect(dateTimeUtils.toTimeString('invalid')).toBe('invalid');
});
it('should handle Date objects', () => {
const date = new Date('2024-01-15T14:30:00Z');
const result = dateTimeUtils.toTimeString(date);
expect(result).toMatch(/\d{2}:\d{2}/);
});
it('should return 00:00 for invalid Date', () => {
expect(dateTimeUtils.toTimeString(new Date('invalid'))).toBe('00:00');
});
});
describe('parseDate', () => {
it('should return null for null value', () => {
expect(dateTimeUtils.parseDate(null)).toBeNull();
});
it('should parse YYYY-MM-DD format', () => {
const result = dateTimeUtils.parseDate('2024-01-15');
expect(result).toBeInstanceOf(Date);
expect(result?.getFullYear()).toBe(2024);
});
it('should parse ISO 8601 format', () => {
const result = dateTimeUtils.parseDate('2024-01-15T10:30:00Z');
expect(result).toBeInstanceOf(Date);
});
it('should return null for invalid date', () => {
expect(dateTimeUtils.parseDate('invalid')).toBeNull();
});
});
describe('buildTimeZoneOptions', () => {
it('should return array of timezone options', () => {
const result = dateTimeUtils.buildTimeZoneOptions();
expect(Array.isArray(result)).toBe(true);
expect(result.length).toBeGreaterThan(0);
});
it('should format timezone with offset', () => {
const result = dateTimeUtils.buildTimeZoneOptions();
expect(result[0]).toHaveProperty('value');
expect(result[0]).toHaveProperty('label');
expect(result[0].label).toMatch(/UTC[+-]\d{2}:\d{2}/);
});
it('should sort by offset then name', () => {
const result = dateTimeUtils.buildTimeZoneOptions();
for (let i = 1; i < result.length; i++) {
expect(result[i].numericOffset).toBeGreaterThanOrEqual(result[i - 1].numericOffset);
}
});
it('should include DST information when applicable', () => {
const result = dateTimeUtils.buildTimeZoneOptions();
const dstZone = result.find(opt => opt.label.includes('DST range'));
expect(dstZone).toBeDefined();
});
it('should add preferred zone if not in list', () => {
const preferredZone = 'Custom/Zone';
const result = dateTimeUtils.buildTimeZoneOptions(preferredZone);
const found = result.find(opt => opt.value === preferredZone);
expect(found).toBeDefined();
});
it('should not duplicate existing zones', () => {
const result = dateTimeUtils.buildTimeZoneOptions('UTC');
const utcOptions = result.filter(opt => opt.value === 'UTC');
expect(utcOptions).toHaveLength(1);
});
});
describe('getDefaultTimeZone', () => {
it('should return system timezone', () => {
const result = dateTimeUtils.getDefaultTimeZone();
expect(typeof result).toBe('string');
expect(result.length).toBeGreaterThan(0);
});
it('should return UTC on error', () => {
const originalDateTimeFormat = Intl.DateTimeFormat;
Intl.DateTimeFormat = vi.fn(() => {
throw new Error('Test error');
});
const result = dateTimeUtils.getDefaultTimeZone();
expect(result).toBe('UTC');
Intl.DateTimeFormat = originalDateTimeFormat;
});
});
});


@@ -0,0 +1,144 @@
import { describe, it, expect } from 'vitest';
import * as networkUtils from '../networkUtils';
describe('networkUtils', () => {
describe('IPV4_CIDR_REGEX', () => {
it('should match valid IPv4 CIDR notation', () => {
expect(networkUtils.IPV4_CIDR_REGEX.test('192.168.1.0/24')).toBe(true);
expect(networkUtils.IPV4_CIDR_REGEX.test('10.0.0.0/8')).toBe(true);
expect(networkUtils.IPV4_CIDR_REGEX.test('172.16.0.0/12')).toBe(true);
expect(networkUtils.IPV4_CIDR_REGEX.test('0.0.0.0/0')).toBe(true);
expect(networkUtils.IPV4_CIDR_REGEX.test('255.255.255.255/32')).toBe(true);
});
it('should not match invalid IPv4 CIDR notation', () => {
expect(networkUtils.IPV4_CIDR_REGEX.test('192.168.1.0')).toBe(false);
expect(networkUtils.IPV4_CIDR_REGEX.test('192.168.1.0/33')).toBe(false);
expect(networkUtils.IPV4_CIDR_REGEX.test('256.168.1.0/24')).toBe(false);
expect(networkUtils.IPV4_CIDR_REGEX.test('192.168/24')).toBe(false);
expect(networkUtils.IPV4_CIDR_REGEX.test('invalid')).toBe(false);
});
it('should not match IPv6 addresses', () => {
expect(networkUtils.IPV4_CIDR_REGEX.test('2001:db8::/32')).toBe(false);
});
});
describe('IPV6_CIDR_REGEX', () => {
it('should match valid IPv6 CIDR notation', () => {
expect(networkUtils.IPV6_CIDR_REGEX.test('2001:db8::/32')).toBe(true);
expect(networkUtils.IPV6_CIDR_REGEX.test('fe80::/10')).toBe(true);
expect(networkUtils.IPV6_CIDR_REGEX.test('::/0')).toBe(true);
expect(networkUtils.IPV6_CIDR_REGEX.test('2001:0db8:85a3:0000:0000:8a2e:0370:7334/64')).toBe(true);
});
it('should match compressed IPv6 CIDR notation', () => {
expect(networkUtils.IPV6_CIDR_REGEX.test('2001:db8::1/128')).toBe(true);
expect(networkUtils.IPV6_CIDR_REGEX.test('::1/128')).toBe(true);
});
it('should match IPv6 with embedded IPv4', () => {
expect(networkUtils.IPV6_CIDR_REGEX.test('::ffff:192.168.1.1/96')).toBe(true);
});
it('should not match invalid IPv6 CIDR notation', () => {
expect(networkUtils.IPV6_CIDR_REGEX.test('2001:db8::')).toBe(false);
expect(networkUtils.IPV6_CIDR_REGEX.test('2001:db8::/129')).toBe(false);
expect(networkUtils.IPV6_CIDR_REGEX.test('invalid/64')).toBe(false);
});
it('should not match IPv4 addresses', () => {
expect(networkUtils.IPV6_CIDR_REGEX.test('192.168.1.0/24')).toBe(false);
});
});
describe('formatBytes', () => {
it('should return "0 Bytes" for zero bytes', () => {
expect(networkUtils.formatBytes(0)).toBe('0 Bytes');
});
it('should format bytes correctly', () => {
expect(networkUtils.formatBytes(100)).toBe('100.00 Bytes');
expect(networkUtils.formatBytes(500)).toBe('500.00 Bytes');
});
it('should format kilobytes correctly', () => {
expect(networkUtils.formatBytes(1024)).toBe('1.00 KB');
expect(networkUtils.formatBytes(2048)).toBe('2.00 KB');
expect(networkUtils.formatBytes(1536)).toBe('1.50 KB');
});
it('should format megabytes correctly', () => {
expect(networkUtils.formatBytes(1048576)).toBe('1.00 MB');
expect(networkUtils.formatBytes(2097152)).toBe('2.00 MB');
expect(networkUtils.formatBytes(5242880)).toBe('5.00 MB');
});
it('should format gigabytes correctly', () => {
expect(networkUtils.formatBytes(1073741824)).toBe('1.00 GB');
expect(networkUtils.formatBytes(2147483648)).toBe('2.00 GB');
});
it('should format terabytes correctly', () => {
expect(networkUtils.formatBytes(1099511627776)).toBe('1.00 TB');
});
it('should format large numbers', () => {
expect(networkUtils.formatBytes(1125899906842624)).toBe('1.00 PB');
});
it('should handle decimal values', () => {
const result = networkUtils.formatBytes(1536);
expect(result).toMatch(/1\.50 KB/);
});
it('should always show two decimal places', () => {
const result = networkUtils.formatBytes(1024);
expect(result).toBe('1.00 KB');
});
});
describe('formatSpeed', () => {
it('should return "0 Bytes" for zero speed', () => {
expect(networkUtils.formatSpeed(0)).toBe('0 Bytes');
});
it('should format bits per second correctly', () => {
expect(networkUtils.formatSpeed(100)).toBe('100.00 bps');
expect(networkUtils.formatSpeed(500)).toBe('500.00 bps');
});
it('should format kilobits per second correctly', () => {
expect(networkUtils.formatSpeed(1024)).toBe('1.00 Kbps');
expect(networkUtils.formatSpeed(2048)).toBe('2.00 Kbps');
expect(networkUtils.formatSpeed(1536)).toBe('1.50 Kbps');
});
it('should format megabits per second correctly', () => {
expect(networkUtils.formatSpeed(1048576)).toBe('1.00 Mbps');
expect(networkUtils.formatSpeed(2097152)).toBe('2.00 Mbps');
expect(networkUtils.formatSpeed(10485760)).toBe('10.00 Mbps');
});
it('should format gigabits per second correctly', () => {
expect(networkUtils.formatSpeed(1073741824)).toBe('1.00 Gbps');
expect(networkUtils.formatSpeed(2147483648)).toBe('2.00 Gbps');
});
it('should handle decimal values', () => {
const result = networkUtils.formatSpeed(1536);
expect(result).toMatch(/1\.50 Kbps/);
});
it('should always show two decimal places', () => {
const result = networkUtils.formatSpeed(1024);
expect(result).toBe('1.00 Kbps');
});
it('should use speed units not byte units', () => {
const result = networkUtils.formatSpeed(1024);
expect(result).not.toContain('KB');
expect(result).toContain('Kbps');
});
});
});


@@ -0,0 +1,145 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { notifications } from '@mantine/notifications';
import * as notificationUtils from '../notificationUtils';
vi.mock('@mantine/notifications', () => ({
notifications: {
show: vi.fn(),
update: vi.fn(),
},
}));
describe('notificationUtils', () => {
beforeEach(() => {
vi.clearAllMocks();
});
describe('showNotification', () => {
it('should call notifications.show with notification object', () => {
const notificationObject = {
title: 'Test Title',
message: 'Test message',
color: 'blue',
};
notificationUtils.showNotification(notificationObject);
expect(notifications.show).toHaveBeenCalledWith(notificationObject);
expect(notifications.show).toHaveBeenCalledTimes(1);
});
it('should return the result from notifications.show', () => {
const mockReturnValue = 'notification-id-123';
notifications.show.mockReturnValue(mockReturnValue);
const result = notificationUtils.showNotification({ message: 'test' });
expect(result).toBe(mockReturnValue);
});
it('should handle notification with all properties', () => {
const notificationObject = {
id: 'custom-id',
title: 'Success',
message: 'Operation completed',
color: 'green',
autoClose: 5000,
withCloseButton: true,
};
notificationUtils.showNotification(notificationObject);
expect(notifications.show).toHaveBeenCalledWith(notificationObject);
});
it('should handle minimal notification object', () => {
const notificationObject = {
message: 'Simple message',
};
notificationUtils.showNotification(notificationObject);
expect(notifications.show).toHaveBeenCalledWith(notificationObject);
});
});
describe('updateNotification', () => {
it('should call notifications.update with id and notification object', () => {
const notificationId = 'notification-123';
const notificationObject = {
title: 'Updated Title',
message: 'Updated message',
color: 'green',
};
notificationUtils.updateNotification(notificationId, notificationObject);
expect(notifications.update).toHaveBeenCalledWith(notificationId, notificationObject);
expect(notifications.update).toHaveBeenCalledTimes(1);
});
it('should return the result from notifications.update', () => {
const mockReturnValue = { success: true };
notifications.update.mockReturnValue(mockReturnValue);
const result = notificationUtils.updateNotification('id', { message: 'test' });
expect(result).toBe(mockReturnValue);
});
it('should handle loading to success transition', () => {
const notificationId = 'loading-notification';
const updateObject = {
title: 'Success',
message: 'Operation completed successfully',
color: 'green',
loading: false,
};
notificationUtils.updateNotification(notificationId, updateObject);
expect(notifications.update).toHaveBeenCalledWith(notificationId, updateObject);
});
it('should handle loading to error transition', () => {
const notificationId = 'loading-notification';
const updateObject = {
title: 'Error',
message: 'Operation failed',
color: 'red',
loading: false,
};
notificationUtils.updateNotification(notificationId, updateObject);
expect(notifications.update).toHaveBeenCalledWith(notificationId, updateObject);
});
it('should handle partial updates', () => {
const notificationId = 'notification-123';
const updateObject = {
color: 'yellow',
};
notificationUtils.updateNotification(notificationId, updateObject);
expect(notifications.update).toHaveBeenCalledWith(notificationId, updateObject);
});
it('should handle empty notification id', () => {
const notificationObject = { message: 'test' };
notificationUtils.updateNotification('', notificationObject);
expect(notifications.update).toHaveBeenCalledWith('', notificationObject);
});
it('should handle null notification id', () => {
const notificationObject = { message: 'test' };
notificationUtils.updateNotification(null, notificationObject);
expect(notifications.update).toHaveBeenCalledWith(null, notificationObject);
});
});
});


@@ -0,0 +1,131 @@
import API from '../../api.js';
import {
format,
getNow,
initializeTime,
subtract,
toFriendlyDuration,
} from '../dateTimeUtils.js';
// Get buffering_speed from proxy settings
export const getBufferingSpeedThreshold = (proxySetting) => {
try {
if (proxySetting?.value) {
return parseFloat(proxySetting.value.buffering_speed) || 1.0;
}
} catch (error) {
console.error('Error getting buffering speed:', error);
}
return 1.0; // Default fallback
};
export const getStartDate = (uptime) => {
// Get the current date and time
const currentDate = new Date();
// Calculate the start date by subtracting uptime (in seconds, converted to ms)
const startDate = new Date(currentDate.getTime() - uptime * 1000);
// Format the date as a string; note toLocaleString takes the locale first
// and the options object second (adjust the format as needed)
return startDate.toLocaleString(undefined, {
weekday: 'short', // optional, adds day of the week
year: 'numeric',
month: '2-digit',
day: '2-digit',
hour: '2-digit',
minute: '2-digit',
second: '2-digit',
hour12: true, // 12-hour format with AM/PM
});
};
export const getM3uAccountsMap = (m3uAccounts) => {
const map = {};
if (m3uAccounts && Array.isArray(m3uAccounts)) {
m3uAccounts.forEach((account) => {
if (account.id) {
map[account.id] = account.name;
}
});
}
return map;
};
export const getChannelStreams = async (channelId) => {
return await API.getChannelStreams(channelId);
};
export const getMatchingStreamByUrl = (streamData, channelUrl) => {
return streamData.find(
(stream) =>
channelUrl.includes(stream.url) || stream.url.includes(channelUrl)
);
};
export const getSelectedStream = (availableStreams, streamId) => {
return availableStreams.find((s) => s.id.toString() === streamId);
};
export const switchStream = (channel, streamId) => {
return API.switchStream(channel.channel_id, streamId);
};
export const connectedAccessor = (dateFormat) => {
return (row) => {
// Check for connected_since (which is seconds since connection)
if (row.connected_since) {
// Calculate the actual connection time by subtracting the seconds from current time
const connectedTime = subtract(getNow(), row.connected_since, 'second');
return format(connectedTime, `${dateFormat} HH:mm:ss`);
}
// Fallback to connected_at if it exists
if (row.connected_at) {
const connectedTime = initializeTime(row.connected_at * 1000);
return format(connectedTime, `${dateFormat} HH:mm:ss`);
}
return 'Unknown';
};
};
export const durationAccessor = () => {
return (row) => {
if (row.connected_since) {
return toFriendlyDuration(row.connected_since, 'seconds');
}
if (row.connection_duration) {
return toFriendlyDuration(row.connection_duration, 'seconds');
}
return '-';
};
};
export const getLogoUrl = (logoId, logos, previewedStream) => {
return (
(logoId && logos && logos[logoId] ? logos[logoId].cache_url : null) ||
previewedStream?.logo_url ||
null
);
};
export const getStreamsByIds = (streamId) => {
return API.getStreamsByIds([streamId]);
};
export const getStreamOptions = (availableStreams, m3uAccountsMap) => {
return availableStreams.map((stream) => {
// Get account name from our mapping if it exists
const accountName =
stream.m3u_account && m3uAccountsMap[stream.m3u_account]
? m3uAccountsMap[stream.m3u_account]
: stream.m3u_account
? `M3U #${stream.m3u_account}`
: 'Unknown M3U';
return {
value: stream.id.toString(),
label: `${stream.name || `Stream #${stream.id}`} [${accountName}]`,
};
});
};
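For reference, `getStreamOptions` produces Select-style `{ value, label }` options with the M3U account name resolved through the accounts map. A standalone run with sample data (the stream and account shapes here are illustrative, inferred from the code above):

```javascript
// Standalone copy of getStreamOptions from the diff above.
const getStreamOptions = (availableStreams, m3uAccountsMap) =>
  availableStreams.map((stream) => {
    // Resolve the account name: mapped name > numeric fallback > unknown
    const accountName =
      stream.m3u_account && m3uAccountsMap[stream.m3u_account]
        ? m3uAccountsMap[stream.m3u_account]
        : stream.m3u_account
          ? `M3U #${stream.m3u_account}`
          : 'Unknown M3U';
    return {
      value: stream.id.toString(),
      label: `${stream.name || `Stream #${stream.id}`} [${accountName}]`,
    };
  });

const options = getStreamOptions(
  [
    { id: 1, name: 'HBO HD', m3u_account: 7 },
    { id: 2, name: null, m3u_account: 9 },
    { id: 3, name: 'Local Feed', m3u_account: null },
  ],
  { 7: 'Provider A' }
);
console.log(options.map((o) => o.label));
// [ 'HBO HD [Provider A]', 'Stream #2 [M3U #9]', 'Local Feed [Unknown M3U]' ]
```

Each branch of the fallback chain is exercised: a mapped account, an unmapped account id, and a stream with no account at all.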


@@ -0,0 +1,13 @@
export const formatDuration = (seconds) => {
if (!seconds) return '';
const hours = Math.floor(seconds / 3600);
const mins = Math.floor((seconds % 3600) / 60);
const secs = seconds % 60;
return hours > 0 ? `${hours}h ${mins}m` : `${mins}m ${secs}s`;
};
export const getSeasonLabel = (vod) => {
return vod.season_number && vod.episode_number
? `S${vod.season_number.toString().padStart(2, '0')}E${vod.episode_number.toString().padStart(2, '0')}`
: '';
};
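As a quick sanity check on the two helpers above (re-declared standalone here, copied from the diff):

```javascript
// Standalone copies of the VOD helpers from the diff above.
const formatDuration = (seconds) => {
  if (!seconds) return '';
  const hours = Math.floor(seconds / 3600);
  const mins = Math.floor((seconds % 3600) / 60);
  const secs = seconds % 60;
  return hours > 0 ? `${hours}h ${mins}m` : `${mins}m ${secs}s`;
};

const getSeasonLabel = (vod) =>
  vod.season_number && vod.episode_number
    ? `S${vod.season_number.toString().padStart(2, '0')}E${vod.episode_number.toString().padStart(2, '0')}`
    : '';

console.log(formatDuration(3900)); // "1h 5m"
console.log(formatDuration(125)); // "2m 5s"
console.log(getSeasonLabel({ season_number: 1, episode_number: 5 })); // "S01E05"
console.log(getSeasonLabel({ season_number: 1 })); // "" (both numbers required)
```

Note that the hours branch intentionally drops seconds, so 3900s renders as "1h 5m" rather than "1h 5m 0s".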


@@ -0,0 +1,139 @@
import { format, getNowMs, toFriendlyDuration } from '../dateTimeUtils.js';
export const formatDuration = (seconds) => {
if (!seconds) return 'Unknown';
const hours = Math.floor(seconds / 3600);
const minutes = Math.floor((seconds % 3600) / 60);
return hours > 0 ? `${hours}h ${minutes}m` : `${minutes}m`;
};
// Format time for display (e.g., "1:23:45" or "23:45")
export const formatTime = (seconds) => {
if (!seconds) return '0:00'; // !seconds already covers 0
const hours = Math.floor(seconds / 3600);
const minutes = Math.floor((seconds % 3600) / 60);
const secs = seconds % 60;
if (hours > 0) {
return `${hours}:${minutes.toString().padStart(2, '0')}:${secs.toString().padStart(2, '0')}`;
} else {
return `${minutes}:${secs.toString().padStart(2, '0')}`;
}
};
export const getMovieDisplayTitle = (vodContent) => {
return vodContent.content_name;
};
export const getEpisodeDisplayTitle = (metadata) => {
const season = metadata.season_number
? `S${metadata.season_number.toString().padStart(2, '0')}`
: 'S??';
const episode = metadata.episode_number
? `E${metadata.episode_number.toString().padStart(2, '0')}`
: 'E??';
return `${metadata.series_name} - ${season}${episode}`;
};
export const getMovieSubtitle = (metadata) => {
const parts = [];
if (metadata.genre) parts.push(metadata.genre);
// We'll handle rating separately as a badge now
return parts;
};
export const getEpisodeSubtitle = (metadata) => {
return [metadata.episode_name || 'Episode'];
};
export const calculateProgress = (connection, duration_secs) => {
if (!connection || !duration_secs) {
return {
percentage: 0,
currentTime: 0,
totalTime: duration_secs || 0,
};
}
const totalSeconds = duration_secs;
let percentage = 0;
let currentTime = 0;
const now = getNowMs() / 1000; // Current time in seconds
// Priority 1: Use last_seek_percentage if available (most accurate from range requests)
if (
connection.last_seek_percentage &&
connection.last_seek_percentage > 0 &&
connection.last_seek_timestamp
) {
// Calculate the position at the time of seek
const seekPosition = Math.round(
(connection.last_seek_percentage / 100) * totalSeconds
);
// Add elapsed time since the seek
const elapsedSinceSeek = now - connection.last_seek_timestamp;
currentTime = seekPosition + Math.floor(elapsedSinceSeek);
// Don't exceed the total duration
currentTime = Math.min(currentTime, totalSeconds);
percentage = (currentTime / totalSeconds) * 100;
}
// Priority 2: Use position_seconds if available
else if (connection.position_seconds && connection.position_seconds > 0) {
currentTime = connection.position_seconds;
percentage = (currentTime / totalSeconds) * 100;
}
return {
percentage: Math.min(percentage, 100), // Cap at 100%
currentTime: Math.max(0, currentTime), // Don't go negative
totalTime: totalSeconds,
};
};
export const calculateConnectionDuration = (connection) => {
// If duration is provided by API, use it
if (connection.duration && connection.duration > 0) {
return toFriendlyDuration(connection.duration, 'seconds');
}
// Fallback: try to extract from client_id timestamp
if (connection.client_id && connection.client_id.startsWith('vod_')) {
try {
const parts = connection.client_id.split('_');
if (parts.length >= 2) {
const clientStartTime = parseInt(parts[1], 10) / 1000; // Convert ms to seconds
const currentTime = getNowMs() / 1000;
return toFriendlyDuration(currentTime - clientStartTime, 'seconds');
}
} catch {
// Ignore parsing errors
}
}
return 'Unknown duration';
};
export const calculateConnectionStartTime = (connection, dateFormat) => {
if (connection.connected_at) {
return format(connection.connected_at * 1000, `${dateFormat} HH:mm:ss`);
}
// Fallback: calculate from client_id timestamp
if (connection.client_id && connection.client_id.startsWith('vod_')) {
try {
const parts = connection.client_id.split('_');
if (parts.length >= 2) {
const clientStartTime = parseInt(parts[1], 10);
return format(clientStartTime, `${dateFormat} HH:mm:ss`);
}
} catch {
// Ignore parsing errors
}
}
return 'Unknown';
};
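The progress math above reduces to two fallbacks: a seek-adjusted position, else a reported `position_seconds`. A minimal standalone sketch (with `getNowMs` replaced by `Date.now`, since the original imports it from `dateTimeUtils`):

```javascript
// Standalone sketch of calculateProgress; getNowMs is stubbed with Date.now.
const getNowMs = () => Date.now();

function calculateProgress(connection, duration_secs) {
  if (!connection || !duration_secs) {
    return { percentage: 0, currentTime: 0, totalTime: duration_secs || 0 };
  }
  const now = getNowMs() / 1000;
  let currentTime = 0;
  let percentage = 0;
  if (connection.last_seek_percentage > 0 && connection.last_seek_timestamp) {
    // Position at seek time, plus wall-clock time elapsed since the seek.
    const seekPosition = Math.round(
      (connection.last_seek_percentage / 100) * duration_secs
    );
    const elapsedSinceSeek = now - connection.last_seek_timestamp;
    currentTime = Math.min(
      seekPosition + Math.floor(elapsedSinceSeek),
      duration_secs
    );
    percentage = (currentTime / duration_secs) * 100;
  } else if (connection.position_seconds > 0) {
    currentTime = connection.position_seconds;
    percentage = (currentTime / duration_secs) * 100;
  }
  return {
    percentage: Math.min(percentage, 100),
    currentTime: Math.max(0, currentTime),
    totalTime: duration_secs,
  };
}

// A client that reported position_seconds = 600 of a 1200s movie is at 50%.
console.log(calculateProgress({ position_seconds: 600 }, 1200));
// → { percentage: 50, currentTime: 600, totalTime: 1200 }
```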


@@ -0,0 +1,158 @@
import { describe, it, expect } from 'vitest';
import {
getConfirmationDetails,
} from '../PluginCardUtils';
describe('PluginCardUtils', () => {
describe('getConfirmationDetails', () => {
it('requires confirmation when action.confirm is true', () => {
const action = { label: 'Test Action', confirm: true };
const plugin = { name: 'Test Plugin' };
const result = getConfirmationDetails(action, plugin, {});
expect(result).toEqual({
requireConfirm: true,
confirmTitle: 'Run Test Action?',
confirmMessage: 'You\'re about to run "Test Action" from "Test Plugin".',
});
});
it('does not require confirmation when action.confirm is false', () => {
const action = { label: 'Test Action', confirm: false };
const plugin = { name: 'Test Plugin' };
const result = getConfirmationDetails(action, plugin, {});
expect(result.requireConfirm).toBe(false);
});
it('uses custom title and message from action.confirm object', () => {
const action = {
label: 'Test Action',
confirm: {
required: true,
title: 'Custom Title',
message: 'Custom message',
},
};
const plugin = { name: 'Test Plugin' };
const result = getConfirmationDetails(action, plugin, {});
expect(result).toEqual({
requireConfirm: true,
confirmTitle: 'Custom Title',
confirmMessage: 'Custom message',
});
});
it('requires confirmation when action.confirm.required is not explicitly false', () => {
const action = {
label: 'Test Action',
confirm: {
title: 'Custom Title',
},
};
const plugin = { name: 'Test Plugin' };
const result = getConfirmationDetails(action, plugin, {});
expect(result.requireConfirm).toBe(true);
});
it('does not require confirmation when action.confirm.required is false', () => {
const action = {
label: 'Test Action',
confirm: {
required: false,
title: 'Custom Title',
},
};
const plugin = { name: 'Test Plugin' };
const result = getConfirmationDetails(action, plugin, {});
expect(result.requireConfirm).toBe(false);
});
it('uses confirm field from plugin when action.confirm is undefined', () => {
const action = { label: 'Test Action' };
const plugin = {
name: 'Test Plugin',
fields: [{ id: 'confirm', default: true }],
};
const result = getConfirmationDetails(action, plugin, {});
expect(result.requireConfirm).toBe(true);
});
it('uses settings value over field default', () => {
const action = { label: 'Test Action' };
const plugin = {
name: 'Test Plugin',
fields: [{ id: 'confirm', default: false }],
};
const settings = { confirm: true };
const result = getConfirmationDetails(action, plugin, settings);
expect(result.requireConfirm).toBe(true);
});
it('uses field default when settings value is undefined', () => {
const action = { label: 'Test Action' };
const plugin = {
name: 'Test Plugin',
fields: [{ id: 'confirm', default: true }],
};
const settings = {};
const result = getConfirmationDetails(action, plugin, settings);
expect(result.requireConfirm).toBe(true);
});
it('does not require confirmation when no confirm configuration exists', () => {
const action = { label: 'Test Action' };
const plugin = { name: 'Test Plugin' };
const result = getConfirmationDetails(action, plugin, {});
expect(result.requireConfirm).toBe(false);
});
it('handles plugin without fields array', () => {
const action = { label: 'Test Action' };
const plugin = { name: 'Test Plugin' };
const result = getConfirmationDetails(action, plugin, {});
expect(result.requireConfirm).toBe(false);
});
it('handles null or undefined settings', () => {
const action = { label: 'Test Action' };
const plugin = {
name: 'Test Plugin',
fields: [{ id: 'confirm', default: true }],
};
const result = getConfirmationDetails(action, plugin, null);
expect(result.requireConfirm).toBe(true);
});
it('converts truthy confirm field values to boolean', () => {
const action = { label: 'Test Action' };
const plugin = {
name: 'Test Plugin',
fields: [{ id: 'confirm', default: 1 }],
};
const result = getConfirmationDetails(action, plugin, {});
expect(result.requireConfirm).toBe(true);
});
it('handles confirm field with null default', () => {
const action = { label: 'Test Action' };
const plugin = {
name: 'Test Plugin',
fields: [{ id: 'confirm', default: null }],
};
const result = getConfirmationDetails(action, plugin, {});
expect(result.requireConfirm).toBe(false);
});
});
});
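The suite above pins down the contract: `action.confirm` (boolean or object) wins, then a plugin-level `confirm` field overridable by settings, else no confirmation. One hypothetical implementation satisfying these tests (the real `PluginCardUtils` may differ) is:

```javascript
// Hypothetical sketch consistent with the test suite above; not the actual source.
function getConfirmationDetails(action, plugin, settings) {
  const defaultTitle = `Run ${action.label}?`;
  const defaultMessage = `You're about to run "${action.label}" from "${plugin.name}".`;
  const confirm = action.confirm;
  if (typeof confirm === 'object' && confirm !== null) {
    // Object form: required defaults to true unless explicitly false.
    return {
      requireConfirm: confirm.required !== false,
      confirmTitle: confirm.title || defaultTitle,
      confirmMessage: confirm.message || defaultMessage,
    };
  }
  if (confirm !== undefined) {
    // Boolean form.
    return {
      requireConfirm: Boolean(confirm),
      confirmTitle: defaultTitle,
      confirmMessage: defaultMessage,
    };
  }
  // Fall back to a plugin-level "confirm" field; settings override its default.
  const field = (plugin.fields || []).find((f) => f.id === 'confirm');
  if (field) {
    const value =
      settings && settings.confirm !== undefined ? settings.confirm : field.default;
    return {
      requireConfirm: Boolean(value),
      confirmTitle: defaultTitle,
      confirmMessage: defaultMessage,
    };
  }
  return { requireConfirm: false };
}
```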


@@ -0,0 +1,390 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import {
removeRecording,
getPosterUrl,
getShowVideoUrl,
runComSkip,
deleteRecordingById,
deleteSeriesAndRule,
getRecordingUrl,
getSeasonLabel,
getSeriesInfo,
} from '../RecordingCardUtils';
import API from '../../../api';
import useChannelsStore from '../../../store/channels';
vi.mock('../../../api');
vi.mock('../../../store/channels');
describe('RecordingCardUtils', () => {
beforeEach(() => {
vi.clearAllMocks();
});
describe('removeRecording', () => {
let mockRemoveRecording;
let mockFetchRecordings;
beforeEach(() => {
mockRemoveRecording = vi.fn();
mockFetchRecordings = vi.fn();
useChannelsStore.getState = vi.fn(() => ({
removeRecording: mockRemoveRecording,
fetchRecordings: mockFetchRecordings,
}));
});
it('optimistically removes recording from store', () => {
API.deleteRecording.mockResolvedValue();
removeRecording('recording-1');
expect(mockRemoveRecording).toHaveBeenCalledWith('recording-1');
});
it('calls API to delete recording', () => {
API.deleteRecording.mockResolvedValue();
removeRecording('recording-1');
expect(API.deleteRecording).toHaveBeenCalledWith('recording-1');
});
it('handles optimistic removal error', () => {
const consoleError = vi.spyOn(console, 'error').mockImplementation(() => {});
mockRemoveRecording.mockImplementation(() => {
throw new Error('Store error');
});
API.deleteRecording.mockResolvedValue();
removeRecording('recording-1');
expect(consoleError).toHaveBeenCalledWith(
'Failed to optimistically remove recording',
expect.any(Error)
);
consoleError.mockRestore();
});
it('refetches recordings when API delete fails', async () => {
API.deleteRecording.mockRejectedValue(new Error('Delete failed'));
removeRecording('recording-1');
await vi.waitFor(() => {
expect(mockFetchRecordings).toHaveBeenCalled();
});
});
it('handles fetch error after failed delete', async () => {
const consoleError = vi.spyOn(console, 'error').mockImplementation(() => {});
API.deleteRecording.mockRejectedValue(new Error('Delete failed'));
mockFetchRecordings.mockImplementation(() => {
throw new Error('Fetch error');
});
removeRecording('recording-1');
await vi.waitFor(() => {
expect(consoleError).toHaveBeenCalledWith(
'Failed to refresh recordings after delete',
expect.any(Error)
);
});
consoleError.mockRestore();
});
});
describe('getPosterUrl', () => {
afterEach(() => {
vi.unstubAllEnvs();
});
it('returns logo URL when posterLogoId is provided', () => {
vi.stubEnv('DEV', false);
const result = getPosterUrl('logo-123', {}, '');
expect(result).toBe('/api/channels/logos/logo-123/cache/');
});
it('returns custom poster_url when no posterLogoId', () => {
vi.stubEnv('DEV', false);
const customProps = { poster_url: '/custom/poster.jpg' };
const result = getPosterUrl(null, customProps, '');
expect(result).toBe('/custom/poster.jpg');
});
it('returns posterUrl when no posterLogoId or custom poster_url', () => {
vi.stubEnv('DEV', false);
const result = getPosterUrl(null, {}, '/fallback/poster.jpg');
expect(result).toBe('/fallback/poster.jpg');
});
it('returns default logo when no parameters provided', () => {
vi.stubEnv('DEV', false);
const result = getPosterUrl(null, {}, '');
expect(result).toBe('/logo.png');
});
it('prepends dev server URL in dev mode for relative paths', () => {
vi.stubEnv('DEV', true);
const result = getPosterUrl(null, {}, '/poster.jpg');
expect(result).toMatch(/^https?:\/\/.*:5656\/poster\.jpg$/);
});
it('does not prepend dev URL for absolute URLs', () => {
vi.stubEnv('DEV', true);
const result = getPosterUrl(null, {}, 'https://example.com/poster.jpg');
expect(result).toBe('https://example.com/poster.jpg');
});
});
describe('getShowVideoUrl', () => {
it('returns proxy URL for channel', () => {
const channel = { uuid: 'channel-123' };
const result = getShowVideoUrl(channel, 'production');
expect(result).toBe('/proxy/ts/stream/channel-123');
});
it('prepends dev server URL in dev mode', () => {
const channel = { uuid: 'channel-123' };
const result = getShowVideoUrl(channel, 'dev');
expect(result).toMatch(/^https?:\/\/.*:5656\/proxy\/ts\/stream\/channel-123$/);
});
});
describe('runComSkip', () => {
it('calls API runComskip with recording id', async () => {
API.runComskip.mockResolvedValue();
const recording = { id: 'recording-1' };
await runComSkip(recording);
expect(API.runComskip).toHaveBeenCalledWith('recording-1');
});
});
describe('deleteRecordingById', () => {
it('calls API deleteRecording with id', async () => {
API.deleteRecording.mockResolvedValue();
await deleteRecordingById('recording-1');
expect(API.deleteRecording).toHaveBeenCalledWith('recording-1');
});
});
describe('deleteSeriesAndRule', () => {
it('removes series recordings and deletes series rule', async () => {
API.bulkRemoveSeriesRecordings.mockResolvedValue();
API.deleteSeriesRule.mockResolvedValue();
const seriesInfo = { tvg_id: 'series-123', title: 'Test Series' };
await deleteSeriesAndRule(seriesInfo);
expect(API.bulkRemoveSeriesRecordings).toHaveBeenCalledWith({
tvg_id: 'series-123',
title: 'Test Series',
scope: 'title',
});
expect(API.deleteSeriesRule).toHaveBeenCalledWith('series-123');
});
it('does nothing when tvg_id is not provided', async () => {
const seriesInfo = { title: 'Test Series' };
await deleteSeriesAndRule(seriesInfo);
expect(API.bulkRemoveSeriesRecordings).not.toHaveBeenCalled();
expect(API.deleteSeriesRule).not.toHaveBeenCalled();
});
it('handles bulk remove error gracefully', async () => {
const consoleError = vi.spyOn(console, 'error').mockImplementation(() => {});
API.bulkRemoveSeriesRecordings.mockRejectedValue(new Error('Bulk remove failed'));
API.deleteSeriesRule.mockResolvedValue();
const seriesInfo = { tvg_id: 'series-123', title: 'Test Series' };
await deleteSeriesAndRule(seriesInfo);
expect(consoleError).toHaveBeenCalledWith(
'Failed to remove series recordings',
expect.any(Error)
);
expect(API.deleteSeriesRule).toHaveBeenCalled();
consoleError.mockRestore();
});
it('handles delete rule error gracefully', async () => {
const consoleError = vi.spyOn(console, 'error').mockImplementation(() => {});
API.bulkRemoveSeriesRecordings.mockResolvedValue();
API.deleteSeriesRule.mockRejectedValue(new Error('Delete rule failed'));
const seriesInfo = { tvg_id: 'series-123', title: 'Test Series' };
await deleteSeriesAndRule(seriesInfo);
expect(consoleError).toHaveBeenCalledWith(
'Failed to delete series rule',
expect.any(Error)
);
consoleError.mockRestore();
});
});
describe('getRecordingUrl', () => {
it('returns file_url when available', () => {
const customProps = { file_url: '/recordings/file.mp4' };
const result = getRecordingUrl(customProps, 'production');
expect(result).toBe('/recordings/file.mp4');
});
it('returns output_file_url when file_url is not available', () => {
const customProps = { output_file_url: '/output/file.mp4' };
const result = getRecordingUrl(customProps, 'production');
expect(result).toBe('/output/file.mp4');
});
it('prefers file_url over output_file_url', () => {
const customProps = {
file_url: '/recordings/file.mp4',
output_file_url: '/output/file.mp4',
};
const result = getRecordingUrl(customProps, 'production');
expect(result).toBe('/recordings/file.mp4');
});
it('prepends dev server URL in dev mode for relative paths', () => {
const customProps = { file_url: '/recordings/file.mp4' };
const result = getRecordingUrl(customProps, 'dev');
expect(result).toMatch(/^https?:\/\/.*:5656\/recordings\/file\.mp4$/);
});
it('does not prepend dev URL for absolute URLs', () => {
const customProps = { file_url: 'https://example.com/file.mp4' };
const result = getRecordingUrl(customProps, 'dev');
expect(result).toBe('https://example.com/file.mp4');
});
it('returns undefined when no file URL is available', () => {
const result = getRecordingUrl({}, 'production');
expect(result).toBeUndefined();
});
it('handles null customProps', () => {
const result = getRecordingUrl(null, 'production');
expect(result).toBeUndefined();
});
});
describe('getSeasonLabel', () => {
it('returns formatted season and episode label', () => {
const result = getSeasonLabel(1, 5, null);
expect(result).toBe('S01E05');
});
it('pads single digit season and episode numbers', () => {
const result = getSeasonLabel(2, 3, null);
expect(result).toBe('S02E03');
});
it('handles multi-digit season and episode numbers', () => {
const result = getSeasonLabel(12, 34, null);
expect(result).toBe('S12E34');
});
it('returns onscreen value when season or episode is missing', () => {
const result = getSeasonLabel(null, 5, 'Episode 5');
expect(result).toBe('Episode 5');
});
it('returns onscreen value when only episode is missing', () => {
const result = getSeasonLabel(1, null, 'Special');
expect(result).toBe('Special');
});
it('returns null when no season, episode, or onscreen provided', () => {
const result = getSeasonLabel(null, null, null);
expect(result).toBeNull();
});
it('returns formatted label even when onscreen is provided', () => {
const result = getSeasonLabel(1, 5, 'Episode 5');
expect(result).toBe('S01E05');
});
});
describe('getSeriesInfo', () => {
it('extracts tvg_id and title from program', () => {
const customProps = {
program: { tvg_id: 'series-123', title: 'Test Series' },
};
const result = getSeriesInfo(customProps);
expect(result).toEqual({
tvg_id: 'series-123',
title: 'Test Series',
});
});
it('handles missing program object', () => {
const customProps = {};
const result = getSeriesInfo(customProps);
expect(result).toEqual({
tvg_id: undefined,
title: undefined,
});
});
it('handles null customProps', () => {
const result = getSeriesInfo(null);
expect(result).toEqual({
tvg_id: undefined,
title: undefined,
});
});
it('handles undefined customProps', () => {
const result = getSeriesInfo(undefined);
expect(result).toEqual({
tvg_id: undefined,
title: undefined,
});
});
it('handles partial program data', () => {
const customProps = {
program: { tvg_id: 'series-123' },
};
const result = getSeriesInfo(customProps);
expect(result).toEqual({
tvg_id: 'series-123',
title: undefined,
});
});
});
});


@@ -0,0 +1,300 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import * as StreamConnectionCardUtils from '../StreamConnectionCardUtils';
import API from '../../../api.js';
import * as dateTimeUtils from '../../dateTimeUtils.js';
vi.mock('../../../api.js');
vi.mock('../../dateTimeUtils.js');
describe('StreamConnectionCardUtils', () => {
beforeEach(() => {
vi.clearAllMocks();
});
describe('getBufferingSpeedThreshold', () => {
it('should return parsed buffering_speed from proxy settings', () => {
const proxySetting = {
value: { buffering_speed: 2.5 }
};
expect(StreamConnectionCardUtils.getBufferingSpeedThreshold(proxySetting)).toBe(2.5);
});
it('should return 1.0 for invalid JSON', () => {
const proxySetting = { value: { buffering_speed: 'invalid' } };
const consoleSpy = vi.spyOn(console, 'error').mockImplementation(() => {});
expect(StreamConnectionCardUtils.getBufferingSpeedThreshold(proxySetting)).toBe(1.0);
consoleSpy.mockRestore();
});
it('should return 1.0 when buffering_speed is not a number', () => {
const proxySetting = {
value: JSON.stringify({ buffering_speed: 'not a number' })
};
expect(StreamConnectionCardUtils.getBufferingSpeedThreshold(proxySetting)).toBe(1.0);
});
it('should return 1.0 when proxySetting is null', () => {
expect(StreamConnectionCardUtils.getBufferingSpeedThreshold(null)).toBe(1.0);
});
it('should return 1.0 when value is missing', () => {
expect(StreamConnectionCardUtils.getBufferingSpeedThreshold({})).toBe(1.0);
});
});
describe('getStartDate', () => {
it('should calculate start date from uptime in seconds', () => {
const uptime = 3600; // 1 hour
const result = StreamConnectionCardUtils.getStartDate(uptime);
expect(typeof result).toBe('string');
expect(result.length).toBeGreaterThan(0);
});
it('should handle zero uptime', () => {
const result = StreamConnectionCardUtils.getStartDate(0);
expect(typeof result).toBe('string');
});
});
describe('getM3uAccountsMap', () => {
it('should create map from m3u accounts array', () => {
const m3uAccounts = [
{ id: 1, name: 'Account 1' },
{ id: 2, name: 'Account 2' }
];
const result = StreamConnectionCardUtils.getM3uAccountsMap(m3uAccounts);
expect(result).toEqual({ 1: 'Account 1', 2: 'Account 2' });
});
it('should handle accounts without id', () => {
const m3uAccounts = [
{ name: 'Account 1' },
{ id: 2, name: 'Account 2' }
];
const result = StreamConnectionCardUtils.getM3uAccountsMap(m3uAccounts);
expect(result).toEqual({ 2: 'Account 2' });
});
it('should return empty object for null input', () => {
expect(StreamConnectionCardUtils.getM3uAccountsMap(null)).toEqual({});
});
it('should return empty object for non-array input', () => {
expect(StreamConnectionCardUtils.getM3uAccountsMap({})).toEqual({});
});
});
describe('getChannelStreams', () => {
it('should call API.getChannelStreams with channelId', async () => {
const mockStreams = [{ id: 1, name: 'Stream 1' }];
API.getChannelStreams.mockResolvedValue(mockStreams);
const result = await StreamConnectionCardUtils.getChannelStreams(123);
expect(API.getChannelStreams).toHaveBeenCalledWith(123);
expect(result).toEqual(mockStreams);
});
});
describe('getMatchingStreamByUrl', () => {
it('should find stream when channelUrl includes stream url', () => {
const streamData = [
{ id: 1, url: 'http://example.com/stream1' },
{ id: 2, url: 'http://example.com/stream2' }
];
const result = StreamConnectionCardUtils.getMatchingStreamByUrl(
streamData,
'http://example.com/stream1/playlist.m3u8'
);
expect(result).toEqual(streamData[0]);
});
it('should find stream when stream url includes channelUrl', () => {
const streamData = [
{ id: 1, url: 'http://example.com/stream1/playlist.m3u8' }
];
const result = StreamConnectionCardUtils.getMatchingStreamByUrl(
streamData,
'http://example.com/stream1'
);
expect(result).toEqual(streamData[0]);
});
it('should return undefined when no match found', () => {
const streamData = [{ id: 1, url: 'http://example.com/stream1' }];
const result = StreamConnectionCardUtils.getMatchingStreamByUrl(
streamData,
'http://different.com/stream'
);
expect(result).toBeUndefined();
});
});
describe('getSelectedStream', () => {
it('should find stream by id as string', () => {
const streams = [
{ id: 1, name: 'Stream 1' },
{ id: 2, name: 'Stream 2' }
];
const result = StreamConnectionCardUtils.getSelectedStream(streams, '2');
expect(result).toEqual(streams[1]);
});
it('should return undefined when stream not found', () => {
const streams = [{ id: 1, name: 'Stream 1' }];
const result = StreamConnectionCardUtils.getSelectedStream(streams, '99');
expect(result).toBeUndefined();
});
});
describe('switchStream', () => {
it('should call API.switchStream with channel_id and streamId', () => {
const channel = { channel_id: 123 };
API.switchStream.mockResolvedValue({ success: true });
StreamConnectionCardUtils.switchStream(channel, 456);
expect(API.switchStream).toHaveBeenCalledWith(123, 456);
});
});
describe('connectedAccessor', () => {
it('should format connected_since correctly', () => {
const mockNow = new Date('2024-01-01T12:00:00');
const mockConnectedTime = new Date('2024-01-01T10:00:00');
dateTimeUtils.getNow.mockReturnValue(mockNow);
dateTimeUtils.subtract.mockReturnValue(mockConnectedTime);
dateTimeUtils.format.mockReturnValue('01/01/2024 10:00:00');
const accessor = StreamConnectionCardUtils.connectedAccessor('MM/DD/YYYY');
const result = accessor({ connected_since: 7200 });
expect(dateTimeUtils.subtract).toHaveBeenCalledWith(mockNow, 7200, 'second');
expect(dateTimeUtils.format).toHaveBeenCalledWith(mockConnectedTime, 'MM/DD/YYYY HH:mm:ss');
expect(result).toBe('01/01/2024 10:00:00');
});
it('should fallback to connected_at when connected_since is missing', () => {
const mockTime = new Date('2024-01-01T10:00:00');
dateTimeUtils.initializeTime.mockReturnValue(mockTime);
dateTimeUtils.format.mockReturnValue('01/01/2024 10:00:00');
const accessor = StreamConnectionCardUtils.connectedAccessor('MM/DD/YYYY');
const result = accessor({ connected_at: 1704103200 });
expect(dateTimeUtils.initializeTime).toHaveBeenCalledWith(1704103200000);
expect(result).toBe('01/01/2024 10:00:00');
});
it('should return Unknown when no time data available', () => {
const accessor = StreamConnectionCardUtils.connectedAccessor('MM/DD/YYYY');
const result = accessor({});
expect(result).toBe('Unknown');
});
});
describe('durationAccessor', () => {
it('should format connected_since duration', () => {
dateTimeUtils.toFriendlyDuration.mockReturnValue('2h 30m');
const accessor = StreamConnectionCardUtils.durationAccessor();
const result = accessor({ connected_since: 9000 });
expect(dateTimeUtils.toFriendlyDuration).toHaveBeenCalledWith(9000, 'seconds');
expect(result).toBe('2h 30m');
});
it('should fallback to connection_duration', () => {
dateTimeUtils.toFriendlyDuration.mockReturnValue('1h 15m');
const accessor = StreamConnectionCardUtils.durationAccessor();
const result = accessor({ connection_duration: 4500 });
expect(dateTimeUtils.toFriendlyDuration).toHaveBeenCalledWith(4500, 'seconds');
expect(result).toBe('1h 15m');
});
it('should return - when no duration data available', () => {
const accessor = StreamConnectionCardUtils.durationAccessor();
const result = accessor({});
expect(result).toBe('-');
});
});
describe('getLogoUrl', () => {
it('should return cache_url from logos map when logoId exists', () => {
const logos = {
'logo-123': { cache_url: '/api/logos/logo-123/cache/' }
};
const result = StreamConnectionCardUtils.getLogoUrl('logo-123', logos, null);
expect(result).toBe('/api/logos/logo-123/cache/');
});
it('should fallback to previewedStream logo_url when logoId not in map', () => {
const previewedStream = { logo_url: 'http://example.com/logo.png' };
const result = StreamConnectionCardUtils.getLogoUrl('logo-456', {}, previewedStream);
expect(result).toBe('http://example.com/logo.png');
});
it('should return null when no logo available', () => {
const result = StreamConnectionCardUtils.getLogoUrl(null, {}, null);
expect(result).toBeNull();
});
});
describe('getStreamsByIds', () => {
it('should call API.getStreamsByIds with array containing streamId', async () => {
const mockStreams = [{ id: 123, name: 'Stream' }];
API.getStreamsByIds.mockResolvedValue(mockStreams);
const result = await StreamConnectionCardUtils.getStreamsByIds(123);
expect(API.getStreamsByIds).toHaveBeenCalledWith([123]);
expect(result).toEqual(mockStreams);
});
});
describe('getStreamOptions', () => {
it('should format stream options with account names from map', () => {
const streams = [
{ id: 1, name: 'Stream 1', m3u_account: 100 },
{ id: 2, name: 'Stream 2', m3u_account: 200 }
];
const accountsMap = { 100: 'Premium Account', 200: 'Basic Account' };
const result = StreamConnectionCardUtils.getStreamOptions(streams, accountsMap);
expect(result).toEqual([
{ value: '1', label: 'Stream 1 [Premium Account]' },
{ value: '2', label: 'Stream 2 [Basic Account]' }
]);
});
it('should use default M3U label when account not in map', () => {
const streams = [{ id: 1, name: 'Stream 1', m3u_account: 999 }];
const result = StreamConnectionCardUtils.getStreamOptions(streams, {});
expect(result[0].label).toBe('Stream 1 [M3U #999]');
});
it('should handle streams without name', () => {
const streams = [{ id: 5, m3u_account: 100 }];
const accountsMap = { 100: 'Account' };
const result = StreamConnectionCardUtils.getStreamOptions(streams, accountsMap);
expect(result[0].label).toBe('Stream #5 [Account]');
});
it('should handle streams without m3u_account', () => {
const streams = [{ id: 1, name: 'Stream 1' }];
const result = StreamConnectionCardUtils.getStreamOptions(streams, {});
expect(result[0].label).toBe('Stream 1 [Unknown M3U]');
});
});
});

Some files were not shown because too many files have changed in this diff.