Compare commits

...

278 commits

Author SHA1 Message Date
SergeantPanda
8521df94ad
Merge pull request #868 from DawtCom:main
Move caching to client to remove burden on dispatch server
2026-01-18 17:26:49 -06:00
DawtCom
c970cfcf9a Move caching to client to remove burden on dispatch server 2026-01-18 00:49:17 -06:00
SergeantPanda
fe60c4f3bc Enhancement: Update frontend tests workflow to ensure proper triggering on push and pull request events only when frontend code changes.
2026-01-17 18:30:13 -06:00
SergeantPanda
7cf7aecdf2
Merge pull request #857 from Dispatcharr/dev
Dev
2026-01-15 09:05:06 -06:00
SergeantPanda
54644df9a3 Test Fix: Fixed SettingsUtils frontend tests for new grouped settings architecture. Updated test suite to properly verify grouped JSON settings (stream_settings, dvr_settings, etc.) instead of individual CharField settings, including tests for type conversions, array-to-CSV transformations, and special handling of proxy_settings and network_access. Frontend tests GitHub workflow now uses Node.js 24 (matching Dockerfile) and runs on both main and dev branch pushes and pull requests for comprehensive CI coverage.
2026-01-15 08:55:38 -06:00
SergeantPanda
38fa0fe99d Bug Fix: Fixed NumPy baseline detection in Docker entrypoint. Now calls numpy.show_config() directly with case-insensitive grep instead of incorrectly wrapping the output.
2026-01-14 17:10:06 -06:00
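The baseline check above amounts to reading numpy.show_config() output and matching it case-insensitively. A minimal Python sketch of that idea; the real check is a shell entrypoint using grep -i, and the "baseline" search term and function name here are illustrative:

```python
# Illustrative sketch only: mirrors the entrypoint's grep -i over
# numpy.show_config() output, in Python instead of shell.
import contextlib
import io
import re

import numpy


def show_config_mentions(term: str) -> bool:
    """Capture numpy.show_config() output and search it case-insensitively."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        numpy.show_config()  # prints SIMD/baseline build info to stdout
    return re.search(term, buf.getvalue(), re.IGNORECASE) is not None


if __name__ == "__main__":
    print("baseline info present:", show_config_mentions("baseline"))
```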
SergeantPanda
a772f5c353 changelog: Update missed close on 0.17.0 changelog.
2026-01-13 17:03:30 -06:00
GitHub Actions
da186bcb9d Release v0.17.0
2026-01-13 22:48:13 +00:00
SergeantPanda
75df00e329
Merge pull request #849 from Dispatcharr/dev
Version 0.17.0
2026-01-13 16:45:33 -06:00
SergeantPanda
d0ed682b3d Bug Fix: Fixed bulk channel profile membership update endpoint silently ignoring channels without existing membership records. The endpoint now creates missing memberships automatically (matching single-channel endpoint behavior), validates that all channel IDs exist before processing, and provides detailed response feedback including counts of updated vs. created memberships. Added comprehensive Swagger documentation with request/response schemas.
2026-01-13 15:43:44 -06:00
SergeantPanda
60955a39c7 changelog: Update changelog for settings refactor. 2026-01-13 14:35:01 -06:00
SergeantPanda
6c15ae940d changelog: Update changelog for PR 837 2026-01-13 14:30:25 -06:00
SergeantPanda
516d0e02aa
Merge pull request #837 from mdellavo/bulk-update-fix 2026-01-13 14:26:04 -06:00
Marc DellaVolpe
6607cef5d4 fix bulk update error on unmatched first entry, add tests to cover bulk update api 2026-01-13 15:05:40 -05:00
SergeantPanda
2f9b544519
Merge pull request #848 from Dispatcharr/settings-refactor
Refactor CoreSettings to use JSONField for value storage and update r…
2026-01-13 13:34:50 -06:00
SergeantPanda
36967c10ce Refactor CoreSettings to use JSONField for value storage and update related logic for proper type handling. Adjusted serializers and forms to accommodate new data structure, ensuring seamless integration across the application. 2026-01-13 12:18:34 -06:00
SergeantPanda
4bfdd15b37 Bug Fix: Fixed PostgreSQL backup restore not completely cleaning database before restoration. The restore process now drops and recreates the entire public schema before running pg_restore, ensuring a truly clean restore that removes all tables, functions, and other objects not present in the backup file. This prevents leftover database objects from persisting when restoring backups from older branches or versions. Added --no-owner flag to pg_restore to avoid role permission errors when the backup was created by a different PostgreSQL user.
2026-01-12 16:38:20 -06:00
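A hedged sketch of the restore flow described above: drop and recreate the public schema, then run pg_restore with --no-owner. Connection parameters and the backup path are placeholders; this is not the project's actual restore code.

```python
# Sketch under assumptions: connection details and paths are placeholders, and
# psql / pg_restore are assumed to be on PATH.
import subprocess

DB = {"host": "localhost", "port": "5432", "user": "dispatch", "name": "dispatcharr"}
BACKUP_FILE = "/data/backups/dispatcharr-backup.dump"  # placeholder path


def clean_restore():
    common = ["-h", DB["host"], "-p", DB["port"], "-U", DB["user"], "-d", DB["name"]]

    # 1. Drop everything in the public schema so leftover objects cannot survive.
    subprocess.run(
        ["psql", *common, "-c", "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"],
        check=True,
    )

    # 2. Restore without reassigning ownership, so a backup created by a
    #    different PostgreSQL user does not trigger permission errors.
    subprocess.run(["pg_restore", *common, "--no-owner", BACKUP_FILE], check=True)


if __name__ == "__main__":
    clean_restore()
```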
SergeantPanda
2a3d0db670 Enhancement: Loading feedback for all confirmation dialogs: Extended visual loading indicators across all confirmation dialogs throughout the application. Delete, cleanup, and bulk operation dialogs now show an animated dots loader and disabled state during async operations, providing consistent user feedback for backups (restore/delete), channels, EPGs, logos, VOD logos, M3U accounts, streams, users, groups, filters, profiles, batch operations, and network access changes.
2026-01-12 13:53:44 -06:00
SergeantPanda
43636a84d0 Enhancement: Added visual loading indicator to the backup restore confirmation dialog. When clicking "Restore", the button now displays an animated dots loader and becomes disabled, providing clear feedback that the restore operation is in progress. 2026-01-12 13:22:24 -06:00
SergeantPanda
6d5d16d667 Enhancement: Add check for existing NumPy baseline support before reinstalling legacy NumPy to avoid unnecessary installations. 2026-01-12 12:29:54 -06:00
SergeantPanda
f821dabe8e Enhancement: Users can now rename existing channel profiles and create duplicates with automatic channel membership cloning. Each profile action (edit, duplicate, delete) is available in the profile dropdown for quick access. 2026-01-12 11:29:33 -06:00
SergeantPanda
564dceb210 Bug fix: Fixed TV Guide loading overlay not disappearing after navigating from DVR page. The fetchRecordings() function in the channels store was setting isLoading: true on start but never resetting it to false on successful completion, causing the Guide page's loading overlay to remain visible indefinitely when accessed after the DVR page.
2026-01-11 20:51:26 -06:00
SergeantPanda
2e9280cf59
Merge pull request #834 from justinforlenza/stream-profile-argument-parse
Fix: Support quoted stream profile arguments
2026-01-11 20:27:15 -06:00
SergeantPanda
7594ba0a08 changelog: Update changelog for command line parsing PR. 2026-01-11 20:24:59 -06:00
Justin
e8d949db86 replaced standard str.split() with shlex.split() 2026-01-11 20:40:18 -05:00
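The shlex.split() swap above is what lets quoted stream profile arguments survive parsing. A standard-library comparison; the argument string is made up for illustration:

```python
import shlex

# A made-up profile parameter string containing a quoted value.
params = '-i {streamUrl} -user_agent "My User Agent/1.0" -f mpegts pipe:1'

print(params.split())        # naive split breaks the quoted user agent apart
print(shlex.split(params))   # shlex keeps "My User Agent/1.0" as one argument
```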
SergeantPanda
a9a433bc5b changelog: Update changelog for test PR submitted. 2026-01-11 19:24:18 -06:00
SergeantPanda
e72e0215cb
Merge pull request #841 from nick4810/tests/frontend-unit-tests
Tests/frontend unit tests
2026-01-11 19:20:37 -06:00
SergeantPanda
b8374fcc68 Refactor/Enhancement: Refactored channel numbering dialogs into a unified CreateChannelModal component that now includes channel profile selection alongside channel number assignment for both single and bulk channel creation. Users can choose to add channels to all profiles, no profiles, or specific profiles with mutual exclusivity between special options ("All Profiles", "None") and specific profile selections. Profile selection defaults to the current table filter for intuitive workflow. 2026-01-11 19:05:07 -06:00
SergeantPanda
6b873be3cf Bug Fix: Fixed bulk and manual channel creation not refreshing channel profile memberships in the UI for all connected clients. WebSocket channels_created event now calls fetchChannelProfiles() to ensure profile membership updates are reflected in real-time for all users without requiring a page refresh. 2026-01-11 17:51:00 -06:00
SergeantPanda
edfa497203 Enhancement: Channel Profile membership control for manual channel creation and bulk operations: Extended the existing channel_profile_ids parameter from POST /api/channels/from-stream/ to also support POST /api/channels/ (manual creation) and bulk creation tasks with the same flexible semantics:
- Omitted parameter (default): Channels are added to ALL profiles (preserves backward compatibility)
- Empty array `[]`: Channels are added to NO profiles
- Sentinel value `[0]`: Channels are added to ALL profiles (explicit)
- Specific IDs `[1, 2, ...]`: Channels are added only to the specified profiles
This allows API consumers to control profile membership across all channel creation methods without requiring all channels to be added to every profile by default.
2026-01-11 17:31:15 -06:00
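A small sketch of the parameter semantics listed in the commit above, written as a plain function; the helper name and return shape are illustrative, not the API's implementation:

```python
# Illustrative only: interprets channel_profile_ids as the commit describes
# (omitted -> all, [] -> none, [0] -> all explicitly, otherwise specific IDs).
def resolve_profile_ids(channel_profile_ids, all_profile_ids):
    if channel_profile_ids is None:   # parameter omitted: backward compatible
        return list(all_profile_ids)
    if channel_profile_ids == []:     # explicit empty list: no profiles
        return []
    if channel_profile_ids == [0]:    # sentinel: all profiles, explicitly
        return list(all_profile_ids)
    return [pid for pid in channel_profile_ids if pid in set(all_profile_ids)]


print(resolve_profile_ids(None, [1, 2, 3]))   # [1, 2, 3]
print(resolve_profile_ids([], [1, 2, 3]))     # []
print(resolve_profile_ids([0], [1, 2, 3]))    # [1, 2, 3]
print(resolve_profile_ids([2], [1, 2, 3]))    # [2]
```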
Nick Sandstrom
0242eb69ee Updated tests for mocked regex 2026-01-10 20:22:36 -08:00
Nick Sandstrom
93f74c9d91 Squashed commit of the following:
commit df18a89d0562edc8fd8fb5bc4cac702aefb5272c
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sat Jan 10 19:18:23 2026 -0800

    Updated tests

commit 90240344b89717fbad0e16fe209dbf00c567b1a8
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Jan 4 03:18:41 2026 -0800

    Updated tests

commit 525b7cb32bc8d235613706d6795795a0177ea24b
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Jan 4 03:18:31 2026 -0800

    Extracted component and util logic

commit e54ea2c3173c0ce3cfb0a2d70d76fdd0a66accc8
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 31 11:55:40 2025 -0800

    Updated tests

commit 5cbe164cb9818d8eab607af037da5faee2c1556f
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 31 11:55:14 2025 -0800

    Minor changes

    Exporting UiSettingsForm as default
    Reverted admin level type check

commit f9ab0d2a06091a2eed3ee6f34268c81bfd746f1e
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 30 23:31:29 2025 -0800

    Extracted component and util logic

commit a705a4db4a32d0851d087a984111837a0a83f722
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Dec 28 00:47:29 2025 -0800

    Updated tests

commit a72c6720a3980d0f279edf050b6b51eaae11cdbd
Merge: e8dcab6f 43525ca3
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Dec 28 00:04:24 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit e8dcab6f832570cb986f114cfa574db4994b3aab
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sat Dec 27 22:35:59 2025 -0800

    Updated tests

commit 0fd230503844fba0c418ab0a03c46dc878697a55
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sat Dec 27 22:35:53 2025 -0800

    Added plugins store

commit d987f2de72272f24e26b1ed5bc04bb5c83033868
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sat Dec 27 22:35:43 2025 -0800

    Extracted component and util logic

commit 5a3138370a468a99c9f1ed0a36709a173656d809
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 24 23:13:07 2025 -0800

    Lazy-loading button modals

commit ac6945b5b55e0e16d050d4412a20c82f19250c4b
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 24 22:41:51 2025 -0800

    Extracted notification util

commit befe159fc06b67ee415f7498b5400fee0dc82528
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 24 22:28:12 2025 -0800

    Extracted component and util logic

commit ec10a3a4200a0c94cae29691a9fe06e5c4317bb7
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 24 22:22:09 2025 -0800

    Updated tests

commit c1c7214c8589c0ce7645ea24418d9dd978ac8c1f
Merge: eba6dce7 9c9cbab9
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 23 12:41:25 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit eba6dce786495e352d4696030500db41d028036e
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Dec 21 10:12:19 2025 -0800

    Updated style props

commit 2024b0b267b849a5f100e5543b9188e8ad6dd3d9
Merge: b3700956 1029eb5b
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Sun Dec 21 09:27:21 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit b3700956a4c2f473f1e977826f9537d27ea018ae
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Thu Dec 18 07:45:36 2025 -0800

    Reverted Channels change

commit 137cbb02473b7f2f41488601e3b64e5ff45ac656
Merge: 644ed001 2a0df81c
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 17 13:36:05 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit 644ed00196c41eaa44df1b98236b7e5cc3124d82
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Wed Dec 17 13:29:13 2025 -0800

    Updated tests

commit c62d1bd0534aa19be99b8f87232ba872420111a0
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 14:12:31 2025 -0800

    Updated tests

commit 0cc0ee31d5ad84c59d8eba9fc4424f118f5e0ee2
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 13:44:55 2025 -0800

    Extracted component and util logic

commit 25d1b112af250b5ccebb1006511bff8e4387fc76
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 13:44:11 2025 -0800

    Added correct import for Text component

commit d8a04c6c09edf158220d3073939c9fb60069745c
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 13:43:55 2025 -0800

    Fixed component syntax

commit 59e35d3a4d0da8ed8476560cedacadf76162ea43
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 13:43:39 2025 -0800

    Fixed cache_url fallback

commit d2a170d2efd3d2b0e6078c9eebeb8dcea237be3b
Merge: b8f7e435 6c1b0f9a
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Tue Dec 16 12:00:45 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit b8f7e4358a23f2e3a902929b57ab7a7d115241c5
Merge: 5b12c68a d97f0c90
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Mon Dec 15 07:42:06 2025 -0800

    Merge branch 'enhancement/component-cleanup' into test/component-cleanup

commit 5b12c68ab8ce429adc8d1355632aa411007d365b
Merge: eff58126 c63cb75b
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Mon Dec 8 16:56:14 2025 -0800

    Merge branch 'enhancement/unit-tests' into stage

commit eff58126fb6aba4ebe9a0c67eee65773bffb8ae9
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Mon Dec 8 16:49:43 2025 -0800

    Update .gitignore

commit c63cb75b8cad204d48a392a28d8a5bdf8c270496
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Mon Dec 8 16:28:03 2025 -0800

    Added unit tests for pages

commit 75306a6181ddeb2eaeb306387ba2b44c7fcfd5e3
Author: Nick Sandstrom <32273437+nick4810@users.noreply.github.com>
Date:   Mon Dec 8 16:27:19 2025 -0800

    Added Actions workflow
2026-01-10 19:36:23 -08:00
Nick Sandstrom
e2e6f61dee Merge remote-tracking branch 'upstream/dev' into tests/frontend-unit-tests 2026-01-10 19:35:40 -08:00
SergeantPanda
719a975210 Enhancement: Visual stale indicators for streams and groups: Added is_stale field to Stream and both is_stale and last_seen fields to ChannelGroupM3UAccount models to track items in their retention grace period. Stale groups display with orange buttons and a warning tooltip, while stale streams show with a red background color matching the visual treatment of empty channels.
2026-01-09 14:57:07 -06:00
SergeantPanda
a84553d15c Enhancement: Stale status indicators for streams and groups: Added is_stale field to both Stream and ChannelGroupM3UAccount models to track items in their grace period (seen in previous refresh but not current). 2026-01-09 13:53:01 -06:00
SergeantPanda
cc9d38212e Enhancement: Groups now follow the same stale retention logic as streams, using the account's stale_stream_days setting. Groups that temporarily disappear from an M3U source are retained for the configured retention period instead of being immediately deleted, preserving user settings and preventing data loss when providers temporarily remove/re-add groups. (Closes #809) 2026-01-09 12:03:55 -06:00
SergeantPanda
caf56a59f3 Bug Fix: Fixed manual channel creation not adding channels to channel profiles. Manually created channels are now added to the selected profile if one is active, or to all profiles if "All" is selected, matching the behavior of channels created from streams.
2026-01-09 10:41:04 -06:00
SergeantPanda
ba5aa861e3 Bug Fix: Fixed Channel Profile filter incorrectly applying profile membership filtering even when "Show Disabled" was enabled, preventing all channels from being displayed. Profile filter now only applies when hiding disabled channels. (Fixes #825) 2026-01-09 10:26:09 -06:00
SergeantPanda
312fa11cfb More cleanup of base image.
2026-01-08 14:53:25 -06:00
SergeantPanda
ad334347a9 More cleanup of base image.
2026-01-08 14:52:58 -06:00
SergeantPanda
74a9d3d0cb
Merge pull request #823 from patchy8736/uwsgi-socket-timeout
Bug Fix: Add socket-timeout to uWSGI production config to prevent VOD stream timeouts/streams vanishing from stats page
2026-01-08 13:36:18 -06:00
SergeantPanda
fa6315de33 changelog: Fix VOD streams disappearing from stats page during playback by updating uWSGI config to prevent premature cleanup 2026-01-08 13:35:38 -06:00
SergeantPanda
d6c1a2369b Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/patchy8736/823 2026-01-08 13:27:42 -06:00
SergeantPanda
72d9125c36
Merge pull request #811 from nick4810/enhancement/component-cleanup
Extracted component and util logic
2026-01-08 13:07:41 -06:00
SergeantPanda
6e74c370cb changelog: Document refactor of Stats and VOD pages for improved readability and maintainability 2026-01-08 13:06:30 -06:00
SergeantPanda
10447f8c86 Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/nick4810/811 2026-01-08 11:50:39 -06:00
SergeantPanda
1a2d39de91 Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into dev 2026-01-08 10:28:15 -06:00
SergeantPanda
f389420251 Add optional legacy NumPy support for older CPUs in Docker configurations 2026-01-08 10:27:58 -06:00
SergeantPanda
3f6eff96fc changelog: Update changelog for USE_LEGACY_NUMPY support 2026-01-08 10:26:08 -06:00
SergeantPanda
02faa1a4a7
Merge pull request #827 from Dispatcharr/numpy-none-baseline
Enhance Docker setup for legacy NumPy support and streamline installa…
2026-01-08 10:04:58 -06:00
SergeantPanda
c5a3a2af81 Enhance Docker setup for legacy NumPy support and streamline installation process
2026-01-08 10:02:29 -06:00
SergeantPanda
01370e8892 Bug fix: Fixed duplicate key constraint violations by treating TMDB/IMDB ID values of 0 or '0' as invalid (some providers use this to indicate "no ID"), converting them to NULL to prevent multiple items from incorrectly sharing the same ID. (Fixes #813)
2026-01-07 16:38:09 -06:00
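A minimal sketch of the normalization described above; the function name is illustrative:

```python
# Treat provider IDs of 0 / "0" / "" as "no ID" so they are stored as NULL
# instead of many items colliding on a shared fake value.
def normalize_provider_id(value):
    if value in (None, 0, "0", ""):
        return None
    return str(value)


print(normalize_provider_id(0))            # None
print(normalize_provider_id("0"))          # None
print(normalize_provider_id("tt0111161"))  # "tt0111161"
```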
SergeantPanda
8cbb55c44b Bug Fix: Fixed Channels table EPG column showing "Not Assigned" on initial load for users with large EPG datasets. Added tvgsLoaded flag to EPG store to track when EPG data has finished loading, ensuring the table waits for EPG data before displaying. EPG cells now show animated skeleton placeholders while loading instead of incorrectly showing "Not Assigned". (Fixes #810) 2026-01-07 16:08:53 -06:00
patchy8736
0441dd7b7e Bug Fix: Add socket-timeout to uWSGI production config to prevent VOD stream timeouts during client buffering
The production uWSGI configuration (docker/uwsgi.ini) was missing the socket-timeout directive, causing it to default to 4 seconds. When clients (e.g., VLC) buffer VOD streams and temporarily stop reading from the HTTP socket, uWSGI's write operations timeout after 4 seconds, triggering premature stream cleanup and causing VOD streams to disappear from the stats page.

The fix adds socket-timeout = 600 to match the existing http-timeout = 600 value, giving uWSGI sufficient time to wait for clients to resume reading from buffered sockets. This prevents:
- uwsgi_response_write_body_do() TIMEOUT !!! errors in logs
- GeneratorExit exceptions and premature stream cleanup
- VOD streams vanishing from the stats page when clients buffer

The debug config already had socket-timeout = 3600, which is why the issue wasn't observed in debug mode. This fix aligns production behavior with the debug config while maintaining the production-appropriate 10-minute timeout duration.
2026-01-07 14:10:17 +01:00
SergeantPanda
30d093a2d3 Fixed bulk_create and bulk_update errors during VOD content refresh by pre-checking object existence with optimized bulk queries (3 queries total instead of N per batch) before creating new objects. This ensures all movie/series objects have primary keys before relation operations, preventing "prohibited to prevent data loss due to unsaved related object" errors. (Fixes #813)
2026-01-06 16:12:50 -06:00
SergeantPanda
518c93c398 Enhance Docker setup for legacy NumPy support and streamline installation process 2026-01-06 14:07:37 -06:00
SergeantPanda
cc09c89156
Merge pull request #812 from Dispatcharr/React-Hooke-Form
Refactor forms to use react-hook-form and Yup validation
2026-01-04 21:03:10 -06:00
Nick Sandstrom
21c0758cc9 Extracted component and util logic 2026-01-04 18:51:09 -08:00
SergeantPanda
f664910bf4 changelog: Add RHF and removeTrailingZeros changes. 2026-01-04 20:49:39 -06:00
SergeantPanda
bc19bf8629 Remove "removeTrailingZeros" prop from the Channel Edit Form 2026-01-04 20:45:52 -06:00
SergeantPanda
16bbc1d875 Refactor forms to use react-hook-form and Yup for validation
- Replaced Formik with react-hook-form in Logo, M3UGroupFilter, M3UProfile, Stream, StreamProfile, and UserAgent components.
- Integrated Yup for schema validation in all updated forms.
- Updated form submission logic to accommodate new form handling methods.
- Adjusted state management and error handling to align with react-hook-form's API.
- Ensured compatibility with existing functionality while improving code readability and maintainability.
2026-01-04 20:40:16 -06:00
SergeantPanda
9612a67412 Change: VOD upstream read timeout reduced from 30 seconds to 10 seconds to minimize lock hold time when clients disconnect during connection phase
2026-01-04 15:21:22 -06:00
SergeantPanda
4e65ffd113 Bug fix: Fixed VOD profile connection count not being decremented when stream connection fails (timeout, 404, etc.), preventing profiles from reaching capacity limits and rejecting valid stream requests 2026-01-04 15:00:08 -06:00
SergeantPanda
6031885537 Bug Fix: M3UMovieRelation.get_stream_url() and M3UEpisodeRelation.get_stream_url() to use XC client's _normalize_url() method instead of simple rstrip('/'). This properly handles malformed M3U account URLs (e.g., containing /player_api.php or query parameters) before constructing VOD stream endpoints, matching behavior of live channel URL building. (Closes #722) 2026-01-04 14:36:03 -06:00
SergeantPanda
8ae1a98a3b Bug Fix: Fixed onboarding message appearing in the Channels Table when filtered results are empty. The onboarding message now only displays when there are no channels created at all, not when channels exist but are filtered out by current filters. 2026-01-04 14:05:30 -06:00
SergeantPanda
48bdcfbd65 Bug fix: Release workflow Docker tagging: Fixed issue where latest and version tags (e.g., 0.16.0) were creating separate manifests instead of pointing to the same image digest, which caused old latest tags to become orphaned/untagged after new releases. Now creates a single multi-arch manifest with both tags, maintaining proper tag relationships and download statistics visibility on GitHub. 2026-01-04 12:05:01 -06:00
GitHub Actions
e151da27b9 Release v0.16.0
2026-01-04 01:15:46 +00:00
SergeantPanda
fdca1fd165
Merge pull request #803 from Dispatcharr/dev
Version 0.16.0
2026-01-03 19:15:10 -06:00
SergeantPanda
9cc90354ee changelog: Update changelog for region code addition.
2026-01-02 15:45:05 -06:00
SergeantPanda
62b6cfa2fb
Merge pull request #423 from bigpandaaaa/uk-region
Add 'UK' region
2026-01-02 15:39:27 -06:00
SergeantPanda
3f46f28a70 Bug Fix: The Auto Channel Sync "Force EPG Source" feature was not properly forcing a "No EPG" assignment. When selecting "Force EPG Source" > "No EPG (Disabled)", channels were still being auto-matched to EPG data instead of forcing dummy/no EPG. Now correctly sets the force_dummy_epg flag to prevent unwanted EPG assignment. (Fixes #788) 2026-01-02 15:22:25 -06:00
SergeantPanda
058de26bdf
Merge pull request #787 from nick4810/enhancement/component-cleanup
Enhancement/component cleanup
2026-01-02 13:56:08 -06:00
SergeantPanda
f51463162c
Merge pull request #794 from sethwv:dev
Fix root-owned __pycache__ by running Django commands as non-root user
2026-01-02 12:06:14 -06:00
SergeantPanda
0cb189acba changelog: Document Docker container file permissions update for Django management commands 2026-01-02 12:03:42 -06:00
SergeantPanda
3fe5ff9130 Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/sethwv/794 2026-01-02 11:54:14 -06:00
SergeantPanda
131ebf9f55 changelog: Updated changelog for new refactor. 2026-01-02 11:29:01 -06:00
SergeantPanda
2ed784e8c4 Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/nick4810/787 2026-01-02 11:27:22 -06:00
Nick Sandstrom
2e0aa90cd6 Merge remote-tracking branch 'upstream/dev' into enhancement/component-cleanup 2026-01-02 08:33:06 -08:00
SergeantPanda
a363d9f0e6
Merge pull request #796 from patchy8736:dev
Fix episode processing issues in VOD tasks (#770)
2026-01-02 10:13:48 -06:00
SergeantPanda
6a985d7a7d changelog: Update changelog for PR 2026-01-02 10:13:01 -06:00
SergeantPanda
1a67f3c8ec Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/patchy8736/796 2026-01-02 09:53:54 -06:00
SergeantPanda
6bd8a0c12d Enhance error logging for invalid season and episode numbers in batch_process_episodes 2026-01-02 09:53:45 -06:00
Nick Sandstrom
6678311fa7 Added loading overlay while programs are fetching 2026-01-02 02:03:50 -08:00
SergeantPanda
e8c9432f65 changelog: Update changelog for VOD category filtering.
2026-01-01 18:29:54 -06:00
SergeantPanda
33f988b2c6
Merge pull request #784 from Vitekant:fix/vod-category-pipe-parsing
Fix VOD category filtering for names containing pipe "|" characters
2026-01-01 18:28:07 -06:00
SergeantPanda
13e4b19960 changelog: Add change for settings/logo refactor. 2026-01-01 18:21:52 -06:00
SergeantPanda
042c34eecc
Merge pull request #795 from nick4810/enhancement/component-cleanup-logos-settings 2026-01-01 18:15:47 -06:00
SergeantPanda
ded785de54
Merge pull request #789 from nick4810/fix/standard-users-signal-logos-ready
2026-01-01 17:37:01 -06:00
patchy8736
c57f9fd7e7 Fix episode processing issues in VOD tasks
- Ensure season and episode numbers are properly converted to integers with error handling
- Remove zero-padding from debug log format for season/episode numbers
- Add validation to filter out relations with unsaved episodes that have no primary key
- Add proper logging for skipped relations when episode is not saved to database

These changes address potential crashes when API returns string values instead of integers
and prevent database errors when bulk creation operations fail silently due to conflicts.

Fixes issue #770
2026-01-01 15:57:27 +01:00
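A small sketch of the integer-coercion guard described in the commit above; the helper name is illustrative and the real task code is not shown:

```python
# Illustrative: coerce provider-supplied season/episode values to int and skip
# entries that cannot be parsed, mirroring the validation described above.
import logging

logger = logging.getLogger(__name__)


def parse_episode_numbers(raw_season, raw_episode):
    try:
        return int(raw_season), int(raw_episode)
    except (TypeError, ValueError):
        logger.warning("Skipping episode with invalid numbers: season=%r episode=%r",
                       raw_season, raw_episode)
        return None


print(parse_episode_numbers("3", "07"))   # (3, 7)
print(parse_episode_numbers("S3", None))  # None (logged and skipped)
```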
Nick Sandstrom
b4b0774189 Including notification util changes 2025-12-31 13:20:09 -08:00
Nick Sandstrom
7b1a85617f Minor changes
Exporting UiSettingsForm as default
Reverted admin level type check
2025-12-31 13:12:24 -08:00
Nick Sandstrom
a6361a07d2 Extracted component and util logic 2025-12-31 13:12:24 -08:00
sethwv-alt
b157159b87
Fix root-owned __pycache__ by running Django commands as non-root user 2025-12-31 12:16:19 -05:00
Nick Sandstrom
d9fc0e68d6 Signaling ready when no StreamTable rendered 2025-12-29 22:18:42 -08:00
Nick Sandstrom
43525ca32a Moved RecordingList outside of DVRPage
Helps to prevent renders
2025-12-27 23:49:06 -08:00
Nick Sandstrom
ffa1331c3b Updated to use util functions 2025-12-27 23:17:42 -08:00
Nick Sandstrom
26d9dbd246 Added plugins store 2025-12-27 22:45:48 -08:00
Nick Sandstrom
f97399de07 Extracted component and util logic 2025-12-27 22:45:48 -08:00
Nick Sandstrom
a5688605cd Lazy-loading button modals 2025-12-27 22:45:48 -08:00
Nick Sandstrom
ca96adf781 Extracted notification util 2025-12-27 22:45:48 -08:00
Nick Sandstrom
61247a452a Extracted component and util logic 2025-12-27 22:45:48 -08:00
Nick Sandstrom
fda188e738 Updated style props 2025-12-27 22:45:48 -08:00
SergeantPanda
57a6a842b2 Bug Fix/Enhancement:
- M3U and EPG URLs now correctly preserve non-standard HTTPS ports (e.g., `:8443`) when accessed behind reverse proxies that forward the port in headers — `get_host_and_port()` now properly checks `X-Forwarded-Port` header before falling back to other detection methods (Fixes #704)
- M3U stream URLs now use `build_absolute_uri_with_port()` for consistency with EPG and logo URLs, ensuring uniform port handling across all M3U file URLs
2025-12-27 09:57:36 -06:00
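A hedged sketch of the port-detection order the first bullet above describes, checking X-Forwarded-Port before falling back; the header names follow common reverse-proxy conventions and the function is illustrative, not Dispatcharr's get_host_and_port():

```python
# Illustrative only: prefer X-Forwarded-Port, then a port embedded in
# X-Forwarded-Host / Host, then the scheme default.
def resolve_forwarded_port(headers, scheme="https"):
    if headers.get("X-Forwarded-Port"):
        return int(headers["X-Forwarded-Port"])
    host = headers.get("X-Forwarded-Host") or headers.get("Host", "")
    if ":" in host:
        return int(host.rsplit(":", 1)[1])
    return 443 if scheme == "https" else 80


print(resolve_forwarded_port({"X-Forwarded-Port": "8443", "Host": "example.com"}))  # 8443
print(resolve_forwarded_port({"Host": "example.com"}))                              # 443
```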
SergeantPanda
f1c096bc94 Bug Fix: XtreamCodes M3U files now correctly set x-tvg-url and url-tvg headers to reference XC EPG URL (xmltv.php) instead of standard EPG endpoint when downloaded via XC API (Fixes #629) 2025-12-27 08:19:58 -06:00
Vitek
5a4be532fd Fix VOD category filtering for names containing pipe characters
Use rsplit('|', 1) instead of split('|', 1) to split from the right,
preserving any pipe characters in category names (e.g., "PL | BAJKI",
"EN | MOVIES"). This ensures the category_type is correctly extracted
as the last segment while keeping the full category name intact.

Fixes MovieFilter, SeriesFilter, and UnifiedContentViewSet category parsing.
2025-12-27 00:21:42 +01:00
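The difference between split('|', 1) and rsplit('|', 1) for these filter values, assuming the "name|type" layout the commit describes:

```python
# Category filter values combine the category name and type with a pipe; names
# themselves may contain pipes (layout assumed from the commit description).
value = "PL | BAJKI|movie"

print(value.split("|", 1))   # ['PL ', ' BAJKI|movie']  -> wrong name and type
print(value.rsplit("|", 1))  # ['PL | BAJKI', 'movie']  -> type is the last segment
```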
SergeantPanda
cc3ed80e1a changelog: Add thanks for errorboundary.
2025-12-26 16:16:25 -06:00
SergeantPanda
af88756197
Merge pull request #761 from nick4810/enhancement/component-cleanup
Enhancement/component cleanup
2025-12-26 16:08:49 -06:00
SergeantPanda
1b1f360705 Enhancement: Channel number inputs in stream-to-channel creation modals no longer have a maximum value restriction, allowing users to enter any valid channel number supported by the database 2025-12-26 15:55:25 -06:00
SergeantPanda
bc3ef1a3a9 Bug Fix: M3U and EPG manager page no longer crashes when a playlist references a deleted channel group (Fixes screen blank on navigation)
2025-12-26 14:58:02 -06:00
SergeantPanda
81af73a086 changelog: Add entry for stream validation continuing GET request on HEAD failure 2025-12-26 13:53:48 -06:00
SergeantPanda
0abacf1fef
Merge pull request #783 from kvnnap/dev 2025-12-26 13:51:03 -06:00
SergeantPanda
36a39cd4de Bug fix: XtreamCodes EPG limit parameter now properly converted to integer to prevent type errors when accessing EPG listings (Fixes #781) 2025-12-26 13:34:53 -06:00
SergeantPanda
46413b7e3a changelog: Update changelog for code refactoring and logo changes. 2025-12-26 12:44:26 -06:00
SergeantPanda
874e981449 Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/nick4810/761 2025-12-26 12:37:57 -06:00
SergeantPanda
f5c6d2b576 Enhancement: Implement event-driven logo loading orchestration on Channels page
Introduce gated logo loading system that ensures logos render after both
ChannelsTable and StreamsTable have completed their initial data fetch,
preventing visual race conditions and ensuring proper paint order.

Changes:
- Add `allowLogoRendering` flag to logos store to gate logo fetching
- Implement `onReady` callbacks in ChannelsTable and StreamsTable
- Add orchestration logic in Channels.jsx to coordinate table readiness
- Use double requestAnimationFrame to defer logo loading until after browser paint
- Remove background logo loading from App.jsx (now page-specific)
- Simplify fetchChannelAssignableLogos to reuse fetchAllLogos
- Remove logos dependency from ChannelsTable columns to prevent re-renders

This ensures visual loading order: Channels → EPG → Streams → Logos,
regardless of network speed or data size, without timer-based hacks.
2025-12-26 12:30:08 -06:00
Kevin Napoli
1ef5a9ca13
Fix: Continue with a GET request if the HEAD request fails because the peer closes the connection without returning a response 2025-12-26 15:27:51 +01:00
SergeantPanda
2d31eca93d changelog: Correct formatting for thanks of event viewer arrow direction fix entry
2025-12-24 16:15:23 -06:00
SergeantPanda
510c9fc617
Merge pull request #757 from sethwv/dev 2025-12-24 16:14:29 -06:00
SergeantPanda
8f63659ad7 changelog: Update changelog for VLC support 2025-12-24 16:11:52 -06:00
SergeantPanda
31b9868bfd Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/sethwv/757 2025-12-24 16:04:04 -06:00
SergeantPanda
da4597ac95 Merge branch 'main' of https://github.com/Dispatcharr/Dispatcharr into dev 2025-12-24 15:49:20 -06:00
SergeantPanda
523a127c81 Enhancement: Add VLC dependencies to DispatcharrBase image.
2025-12-24 15:48:56 -06:00
SergeantPanda
ec3093d9af Changelog: Update changelog to include client IP display in network access warning modal 2025-12-24 15:43:33 -06:00
SergeantPanda
5481b18d8a
Merge pull request #779 from damien-alt-sudo/feature/ui-network-access-clientip 2025-12-24 15:39:31 -06:00
Damien
bfca663870 Feature: Add client_ip response from settings check api to UI 2025-12-24 19:30:03 +00:00
Damien
11b3320277 Feature: Add client_ip response from settings check API 2025-12-24 19:27:38 +00:00
SergeantPanda
44a122924f advanced filtering for hiding disabled channels and viewing only empty channels
(cherry picked from commit ea38c0b4b8)
Closes #182
2025-12-23 17:37:38 -06:00
SergeantPanda
48ebaffadd Cleanup dockerfile a bit. 2025-12-23 17:04:09 -06:00
SergeantPanda
daa919c764 Refactor logging messages in StreamManager for clarity and consistency. Also removed redundant parsing. 2025-12-23 15:52:56 -06:00
SergeantPanda
8f811f2ed3 Correct profile name casing for FFmpeg, Streamlink, and VLC in fixtures.json 2025-12-23 15:17:50 -06:00
SergeantPanda
ff7298a93e Enhance StreamManager for efficient log parsing and update VLC stream profile naming 2025-12-23 15:07:25 -06:00
Nick Sandstrom
9c9cbab94c Reverted lazy load of StreamsTable 2025-12-23 12:27:29 -08:00
SergeantPanda
904500906c Bug Fix: Update stream validation to return original URL instead of redirected URL when using redirect profile. 2025-12-23 09:51:02 -06:00
SergeantPanda
106ea72c9d Changelog: Fix event viewer arrow direction for corrected UI behavior 2025-12-22 17:38:55 -06:00
drnikcuk
eea84cfd8b
Update Stats.jsx (#773)
* Update Stats.jsx

Adds fix for stats control arrows direction swap
2025-12-22 17:33:26 -06:00
GitHub Actions
c7590d204e Release v0.15.1 2025-12-22 22:58:41 +00:00
SergeantPanda
7a0af3445a
Merge pull request #774 from Dispatcharr/dev
Version 0.15.1
2025-12-22 16:55:59 -06:00
SergeantPanda
18645fc08f Bug Fix: Re-apply failed merge to fix clients that don't have IPv6 support. 2025-12-22 16:39:09 -06:00
Seth Van Niekerk
aa5db6c3f4
Squash: Log Parsing Refactor & Enhancing 2025-12-22 15:14:46 -05:00
Nick Sandstrom
1029eb5b5c Table length checking if data is already set 2025-12-19 19:19:04 -08:00
SergeantPanda
ee183a9f75 Bug Fix: XtreamCodes EPG has_archive field now returns integer 0 instead of string "0" for proper JSON type consistency 2025-12-19 18:39:43 -06:00
nick4810
63daa3ddf2
Merge branch 'dev' into enhancement/component-cleanup 2025-12-19 16:35:26 -08:00
Nick Sandstrom
4cd63bc898 Reverted LoadingOverlay 2025-12-19 16:33:21 -08:00
GitHub Actions
05b62c22ad Release v0.15.0 2025-12-20 00:08:41 +00:00
SergeantPanda
2c12e8b872
Merge pull request #767 from Dispatcharr/dev
Version 0.15.0
2025-12-19 17:55:40 -06:00
SergeantPanda
20182c7ebf
Merge branch 'main' into dev 2025-12-19 17:53:06 -06:00
SergeantPanda
f0a9a3fc15 Bug Fix: Docker init script now validates DISPATCHARR_PORT is an integer before using it, preventing sed errors when Kubernetes sets it to a service URL like tcp://10.98.37.10:80. Falls back to default port 9191 when invalid (Fixes #737) 2025-12-19 17:00:30 -06:00
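The actual guard lives in the Docker init shell script; a Python analogue of the same validation (fall back to 9191 when the value is not a plain integer, such as a Kubernetes-injected service URL):

```python
# Python analogue of the shell-side check described above; not the actual script.
import os


def resolve_port(default=9191):
    raw = os.environ.get("DISPATCHARR_PORT", "")
    return int(raw) if raw.isdigit() else default


os.environ["DISPATCHARR_PORT"] = "tcp://10.98.37.10:80"  # what Kubernetes may inject
print(resolve_port())  # 9191 (falls back to the default)

os.environ["DISPATCHARR_PORT"] = "9292"
print(resolve_port())  # 9292
```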
nick4810
097551ccf7
Merge branch 'dev' into enhancement/component-cleanup 2025-12-19 14:11:01 -08:00
Nick Sandstrom
22527b085d Checking if data has been fetched before displaying empty channels 2025-12-19 14:09:17 -08:00
SergeantPanda
944736612b Bug Fix: M3U profile form resets local state for search and replace patterns after saving, preventing validation errors when adding multiple profiles in a row 2025-12-19 15:49:18 -06:00
SergeantPanda
abc6ae94e5 Enhancement: Update SuperuserForm to include logo, version info, and improved layout 2025-12-19 10:44:39 -06:00
SergeantPanda
5371519d8a Enhancement: Update default backup settings to enable backups and set retention count to 3 2025-12-19 10:40:56 -06:00
SergeantPanda
b83f12809f Enhancement: Add HEADER_HEIGHT and ERROR_HEIGHT constants for improved layout calculations in FloatingVideo component 2025-12-18 17:18:44 -06:00
SergeantPanda
601f7d0297 changelog: Update changelog for DVR bug fix. 2025-12-18 16:57:43 -06:00
SergeantPanda
de31826137 refactor: externalize Redis and Celery configuration via environment variables
Replace hardcoded localhost:6379 values throughout codebase with environment-based configuration. Add REDIS_PORT support and allow REDIS_URL override for external Redis services. Configure Celery broker/result backend to use Redis settings with environment variable overrides.

Closes #762
2025-12-18 16:54:59 -06:00
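A hedged sketch of the environment-driven configuration described above, in Django-settings style; beyond REDIS_PORT and REDIS_URL, which the commit names, the variable names and defaults are assumptions:

```python
# Sketch of environment-based Redis/Celery settings; only REDIS_PORT and
# REDIS_URL are named in the commit, the rest are illustrative defaults.
import os

REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
REDIS_PORT = os.environ.get("REDIS_PORT", "6379")
REDIS_DB = os.environ.get("REDIS_DB", "0")

# Allow a full URL override for external/managed Redis services.
REDIS_URL = os.environ.get("REDIS_URL", f"redis://{REDIS_HOST}:{REDIS_PORT}/{REDIS_DB}")

# Celery broker/result backend reuse the Redis settings unless overridden.
CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", REDIS_URL)
CELERY_RESULT_BACKEND = os.environ.get("CELERY_RESULT_BACKEND", REDIS_URL)
```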
SergeantPanda
e78c18c473 Bug Fix: XC get_simple_data_table now returns the program's database id, with epg_id set to the EPG id from the matched EPG. 2025-12-18 16:11:26 -06:00
SergeantPanda
73956924f5 Enhancement: Stream group as available hash option: Users can now select 'Group' as a hash key option in Settings → Stream Settings → M3U Hash Key, allowing streams to be differentiated by their group membership in addition to name, URL, TVG-ID, and M3U ID 2025-12-18 15:26:08 -06:00
SergeantPanda
0a4d27c236 Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into dev 2025-12-18 14:47:04 -06:00
SergeantPanda
45ea63e9cf chore: update dependencies in package.json
- Bump eslint from ^9.21.0 to ^9.27.0
- Upgrade vite from ^6.2.0 to ^7.1.7
- Add overrides for js-yaml to ^4.1.1
2025-12-18 14:45:55 -06:00
Dispatcharr
1510197bf0 Floating Video
Added handles on the corners of FloatingVideo to resize
2025-12-18 14:19:51 -06:00
SergeantPanda
9623dff6b1 Enhancement: Updated dependencies: Django (5.2.4 → 5.2.9) includes CVE security patch, psycopg2-binary (2.9.10 → 2.9.11), celery (5.5.3 → 5.6.0), djangorestframework (3.16.0 → 3.16.1), requests (2.32.4 → 2.32.5), psutil (7.0.0 → 7.1.3), gevent (25.5.1 → 25.9.1), rapidfuzz (3.13.0 → 3.14.3), torch (2.7.1 → 2.9.1), sentence-transformers (5.1.0 → 5.2.0), lxml (6.0.0 → 6.0.2) (Closes #662) 2025-12-18 13:19:18 -06:00
SergeantPanda
3ddcadb50d changelog: Give acknowledgement and reference issue. 2025-12-18 11:07:13 -06:00
SergeantPanda
1e42aa1011
Merge pull request #763 from Dispatcharr/OCI-docker-labels
Enhancement: Refactor Docker workflows to use docker/metadata-action …
2025-12-18 11:02:50 -06:00
SergeantPanda
ee0502f559
Merge branch 'dev' into OCI-docker-labels 2025-12-18 11:01:55 -06:00
SergeantPanda
f43de44946 Enhancement: Refactor Docker workflows to use docker/metadata-action for cleaner OCI label management 2025-12-18 10:58:48 -06:00
Nick Sandstrom
2b1d5622a6 Setting User before fetch settings completes 2025-12-18 07:47:18 -08:00
Nick Sandstrom
bd148a7f14 Reverted Channels change for initial render 2025-12-18 07:46:21 -08:00
SergeantPanda
a76a81c7f4
Merge pull request #738 from jdblack/flexible_devbuild
Give arguments to docker/build-dev.sh
2025-12-18 08:57:45 -06:00
SergeantPanda
bd57ee3f3c
Merge branch 'dev' into flexible_devbuild 2025-12-18 08:56:58 -06:00
SergeantPanda
2558ea0b0b Enhancement: Add VOD client stop functionality to Stats page 2025-12-17 16:54:10 -06:00
Nick Sandstrom
2a0df81c59 Lazy loading components 2025-12-17 13:35:12 -08:00
Nick Sandstrom
1906c9955e Updated to default export 2025-12-17 13:34:53 -08:00
Nick Sandstrom
4c60ce0c28 Extracted Series and Movie components 2025-12-17 13:34:20 -08:00
Dispatcharr
865ba432d3 Updated url path
#697 Encode tvg_id for DVR series rule deletions
2025-12-16 23:06:49 -06:00
Dispatcharr
7ea843956b Updated FloatingVideo.jsx
Added resizing of the floating video
Fixed floating video dragging
2025-12-16 21:52:35 -06:00
SergeantPanda
98a016a418 Enhance series info retrieval to return unique episodes and improve relation handling for active M3U accounts 2025-12-16 15:54:33 -06:00
Nick Sandstrom
36ec2fb1b0 Extracted component and util logic 2025-12-16 13:46:24 -08:00
Nick Sandstrom
dd75b5b21a Added correct import for Text component 2025-12-16 13:46:24 -08:00
Nick Sandstrom
38033da90f Fixed component syntax 2025-12-16 13:46:24 -08:00
Nick Sandstrom
7c45542332 Fixed cache_url fallback 2025-12-16 13:46:24 -08:00
SergeantPanda
748d5dc72d Bug Fix: When multiple M3UEpisodeRelations existed for a requested episode, the XC API would fail. (Fixes #569) 2025-12-16 15:44:42 -06:00
SergeantPanda
48e7060cdb Bug Fix: VOD episode processing now correctly handles duplicate episodes from the same provider. (Fixes #556) 2025-12-16 15:24:16 -06:00
Nick Sandstrom
6c1b0f9a60 Extracted component and util logic 2025-12-16 11:55:22 -08:00
Nick Sandstrom
ffd8d9fe6b Using util for getPosterUrl 2025-12-16 11:53:45 -08:00
Nick Sandstrom
0ba22df233 Updated Component syntax 2025-12-16 11:53:26 -08:00
Seth Van Niekerk
bc72b2d4a3
Add VLC and streamlink codec parsing support 2025-12-15 20:09:54 -05:00
Seth Van Niekerk
88c10e85c3
Add VLC TS demux output detection for codec parsing 2025-12-15 20:09:54 -05:00
Seth Van Niekerk
1ad8d6cdfd
Add VLC profile to fixtures with correct parameter order 2025-12-15 20:09:54 -05:00
Seth Van Niekerk
ee7a39fe21
Add VLC stream profile migration with correct parameters 2025-12-15 20:09:54 -05:00
Seth Van Niekerk
3b7f6dadaa
Add VLC packages and environment variables to DispatcharrBase 2025-12-15 20:09:54 -05:00
SergeantPanda
41642cd479 Improve orphaned CrontabSchedule cleanup logic to avoid deleting in-use schedules 2025-12-15 16:54:12 -06:00
SergeantPanda
1b27472c81 changelog: Add automated configuration backup/restore system to changelog 2025-12-15 16:22:38 -06:00
SergeantPanda
a60fd530f3 Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into dev 2025-12-15 16:17:53 -06:00
SergeantPanda
4878e92f44
Merge pull request #488 from stlalpha/feature/automated-backups
Enhancement: Add automated configuration backups
2025-12-15 16:17:33 -06:00
Nick Sandstrom
3bf8ddf376 Removed unused imports 2025-12-15 09:19:54 -08:00
Nick Sandstrom
65dbc5498d Fixed handler arrow functions 2025-12-15 08:21:00 -08:00
Nick Sandstrom
85390a078c Removed unused imports 2025-12-15 07:48:24 -08:00
Jim McBride
bd6cf287dc
Clean up orphaned CrontabSchedule records
- Add _cleanup_orphaned_crontab() helper function
  - Delete old crontab when disabling backup schedule
  - Delete old crontab when schedule settings change
  - Prevents database bloat from unused CrontabSchedule records
2025-12-13 19:02:36 -06:00
Jim McBride
662c5ff89a Reorganize simple mode backup scheduler layout
- Row 1: Frequency, Day (if weekly), Hour, Minute, Period (if 12h)
- Row 2: Retention, Save button
- Use wrap=nowrap to keep time selectors on same row
2025-12-13 18:49:36 -06:00
Jim McBride
1dc7700a62
Add timezone support for backup scheduling
- Set CrontabSchedule timezone to system timezone for accurate scheduling
  - Replace time TextInput with hour/minute Select dropdowns for cleaner UX
  - Remove UTC/local time conversion logic (handled by Celery)
  - Add tests for timezone functionality in simple and advanced modes
2025-12-13 13:27:56 -06:00
Nick Sandstrom
d97f0c907f Updated DVR for extracted logic 2025-12-13 06:33:28 -08:00
Nick Sandstrom
ae60f81314 Extracted DVR utils 2025-12-13 06:32:16 -08:00
Nick Sandstrom
bfcc47c331 Extracted DVR components 2025-12-13 06:31:56 -08:00
SergeantPanda
679adb324c changelog: Update changelog to reference github issue. 2025-12-12 17:26:24 -06:00
SergeantPanda
58a6cdedf7 Bug Fix: Fix handling of None values in xc_get_epg output to prevent AttributeError when title and/or description are none. 2025-12-12 17:23:02 -06:00
SergeantPanda
dedd898a29 Changelog: Document removal of unreachable code path in m3u output 2025-12-12 16:44:09 -06:00
SergeantPanda
0b09cd18b9
Merge pull request #725 from DawtCom/main
Removing unreachable code
2025-12-12 16:19:52 -06:00
dekzter
3537c9ee09
Merge pull request #741 from Dispatcharr/hide-disabled-channels
Advanced Filtering
2025-12-12 08:42:34 -05:00
dekzter
97930c3de8
Merge pull request #740 from Dispatcharr/revert-736-hide-disabled-channels
Revert "Advanced Filtering"
2025-12-12 08:41:20 -05:00
dekzter
c51916b40c
Revert "Advanced Filtering" 2025-12-12 08:30:17 -05:00
dekzter
ed61ac656a
Merge pull request #736 from Dispatcharr/hide-disabled-channels
Advanced Filtering
2025-12-12 08:05:21 -05:00
James Blackwell
56cf37d637 Give arguments to docker/build-dev.sh
This change improves docker/build-dev.sh, providing a variety of
arguments to assist in building images

 -h for help

 -p push the build

 -r Specify a different registry, such as myname on dockerhub, or
  myregistry.local

 -a arch[,arch] cross build to one or more architectures; e.g. -a
   linux/arm64,linux/amd64
2025-12-12 12:59:03 +07:00
dekzter
ea38c0b4b8 advanced filtering for hiding disabled channels and viewing only empty channels 2025-12-11 11:54:41 -05:00
Nick Sandstrom
dd5ae8450d Updated pages to utilize error boundary 2025-12-10 22:31:04 -08:00
Nick Sandstrom
0070d9e500 Added ErrorBoundary component 2025-12-10 22:29:48 -08:00
Nick Sandstrom
aea888238a Removed unused pages 2025-12-10 21:38:46 -08:00
Nick Sandstrom
700d0d2383 Moved error logic to separate component 2025-12-10 20:38:28 -08:00
dekzter
0bfd06a5a3 Merge remote-tracking branch 'origin/dev' into hide-disabled-channels 2025-12-10 17:32:50 -05:00
Jim McBride
8388152d79
Use system timezone for backup filenames
Updated create_backup to use the system's configured timezone for backup
filenames instead of always using UTC. This makes filenames more intuitive
and matches users' local time expectations.

Changes:
- Import pytz and CoreSettings
- Get system timezone from CoreSettings.get_system_time_zone()
- Convert current UTC time to system timezone for filename timestamp
- Fallback to UTC if timezone conversion fails
- Internal metadata timestamps remain UTC for consistency

Example:
- System timezone: America/New_York (EST)
- Created at 3:00 PM EST
- Old filename: dispatcharr-backup-2025.12.09.20.00.00.zip (UTC time)
- New filename: dispatcharr-backup-2025.12.09.15.00.00.zip (local time)

This aligns with the timezone-aware scheduling already implemented.
2025-12-09 09:06:22 -06:00
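A rough sketch of the filename logic this commit describes, assuming pytz and a system timezone name such as the one CoreSettings.get_system_time_zone() is said to return; the real helper may differ.

```python
# Sketch of the backup filename timestamp logic described above (not the
# exact code from the PR). The timezone name is assumed to be an IANA zone
# such as "America/New_York".
from datetime import datetime, timezone

import pytz


def backup_filename(now_utc, system_tz_name):
    try:
        local_now = now_utc.astimezone(pytz.timezone(system_tz_name))
    except Exception:
        # Fall back to UTC if the configured timezone is missing or invalid.
        local_now = now_utc
    return f"dispatcharr-backup-{local_now.strftime('%Y.%m.%d.%H.%M.%S')}.zip"


# Example from the commit message: 2025-12-09 20:00 UTC in America/New_York
# -> dispatcharr-backup-2025.12.09.15.00.00.zip
print(backup_filename(datetime(2025, 12, 9, 20, 0, 0, tzinfo=timezone.utc),
                      "America/New_York"))
```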
Jim McBride
795934dafe
Give Created column flexible width to prevent wrapping
Changed Created column from fixed size (160px) to flexible minSize (180px)
with whiteSpace: nowrap to ensure date/time displays on a single line.

The column can now expand as needed while maintaining readability of
timestamps in various date/time format combinations.
2025-12-09 08:58:38 -06:00
Jim McBride
70e574e25a
Add tests for cron expression functionality
Added comprehensive test coverage for cron expression support:

New tests in BackupSchedulerTestCase:
- test_cron_expression_stores_value: Verify cron_expression persists correctly
- test_cron_expression_creates_correct_schedule: Verify CrontabSchedule creation from cron
- test_cron_expression_invalid_format: Verify validation rejects malformed expressions
- test_cron_expression_empty_uses_simple_mode: Verify fallback to frequency/time mode
- test_cron_expression_overrides_simple_settings: Verify cron takes precedence

Updated existing tests to include cron_expression field:
- test_get_schedule_settings_defaults: Now checks cron_expression default
- test_get_schedule_success: Added cron_expression to mock response
- test_update_schedule_success: Added cron_expression to mock response

All tests verify the new cron functionality works correctly alongside
existing simple scheduling mode.
2025-12-09 08:54:23 -06:00
Jim McBride
3c76c72479
Add Enabled/Disabled label to Advanced toggle
Added dynamic label to the Advanced (Cron Expression) switch to match
the Scheduled Backups toggle above it.

Now displays:
  Scheduled Backups          [Enabled]
  Advanced (Cron Expression) [Enabled]

Provides consistent UI pattern and clearer status indication.
2025-12-09 08:38:57 -06:00
Jim McBride
53159bd420
Improve Advanced toggle layout alignment
Changed Advanced (Cron Expression) from a labeled switch to a proper
Group with space-between layout matching the Scheduled Backups row above.

Now displays as:
  Scheduled Backups          [Enabled toggle]
  Advanced (Cron Expression) [Toggle]

This creates consistent visual alignment with both text labels on the left
and toggle switches on the right.
2025-12-09 08:35:55 -06:00
Jim McBride
901cc09e38
Align Advanced toggle below Scheduled Backups header
Moved the Advanced (Cron Expression) switch outside the scheduleLoading
conditional and wrapped it in its own Group for proper alignment.

Layout is now:
- Scheduled Backups header with Enabled/Disabled switch
- Advanced (Cron Expression) toggle (aligned left)
- Schedule configuration inputs (conditional based on mode)

This provides clearer visual hierarchy and better UX.
2025-12-09 08:32:49 -06:00
Jim McBride
d4fbc9dc61
Honor user date/time format preferences for backup timestamps
- Import dayjs for date formatting
- Read date-format setting from localStorage ('mdy' or 'dmy')
- Move formatDate function into component to access user preferences
- Format dates according to user's date and time format settings:
  - MDY: MM/DD/YYYY
  - DMY: DD/MM/YYYY
  - 12h: h:mm:ss A
  - 24h: HH:mm:ss

The Created column now respects the same date/time format preferences
used throughout the app (Guide, Stats, DVR, SystemEvents, etc).
2025-12-09 08:29:28 -06:00
Jim McBride
1a350e79e0
Fix cron validation to support */N step notation
Updated regex to properly support step notation with asterisk (e.g., */2, */5).

Now supports all common cron patterns:
- * (wildcard)
- */2 (every 2 units - step notation)
- 5 (specific value)
- 1-5 (range)
- 1-5/2 (step within range)
- 1,3,5 (list)
- 10-20/5 (step within range)

Changed regex from:
  /^(\*|(\d+(-\d+)?(,\d+(-\d+)?)*)(\/\d+)?)$/
To:
  /^(\*\/\d+|\*|\d+(-\d+)?(\/\d+)?(,\d+(-\d+)?(\/\d+)?)*)$/

The key change is adding \*\/\d+ as the first alternative to explicitly
match step notation like */2, */5, */10, etc.

Backend already supports this via Django Celery Beat's CrontabSchedule,
which accepts standard cron syntax including step notation.
2025-12-09 08:22:20 -06:00
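For reference, the new per-field pattern from this commit, translated into Python regex syntax (the actual validation lives in the frontend JavaScript):

```python
# The updated per-field cron pattern from the commit above, checked against
# the examples it lists. Only the last value should fail.
import re

CRON_FIELD = re.compile(r"^(\*/\d+|\*|\d+(-\d+)?(/\d+)?(,\d+(-\d+)?(/\d+)?)*)$")

for value in ["*", "*/2", "5", "1-5", "1-5/2", "1,3,5", "10-20/5", "bad*"]:
    print(value, bool(CRON_FIELD.match(value)))
# "*/2" now matches thanks to the leading */N alternative; "bad*" is rejected.
```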
Jim McBride
e71e6bc3d7
Fix backup timestamp display to use UTC timezone
The list_backups function was creating timezone-naive datetime objects,
which caused the frontend to incorrectly interpret timestamps.

Now uses datetime.UTC when creating timestamps from file modification time
(consistent with other usage in this file on lines 186, 216), so the ISO
string includes timezone info (+00:00). This allows the browser to properly
convert UTC timestamps to the user's local timezone for display.

Before: Backend sends "2025-12-09T14:12:44" (ambiguous timezone)
After: Backend sends "2025-12-09T14:12:44+00:00" (explicit UTC)

The frontend's toLocaleString() will now correctly convert to local time.
2025-12-09 08:16:04 -06:00
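A small illustration of the fix, assuming Python 3.11+ for datetime.UTC (older versions would use timezone.utc); the mtime value here is hypothetical:

```python
# Sketch of the fix described above: build the backup timestamp from a file's
# mtime as a timezone-aware UTC datetime so the ISO string carries "+00:00".
from datetime import datetime, UTC  # datetime.UTC requires Python 3.11+

mtime = 1765289564.0  # hypothetical st_mtime value (os.path.getmtime(backup_path))

naive = datetime.utcfromtimestamp(mtime)       # 2025-12-09T14:12:44 (no offset info)
aware = datetime.fromtimestamp(mtime, tz=UTC)  # 2025-12-09T14:12:44+00:00

print(naive.isoformat())
print(aware.isoformat())  # the browser can now convert this to local time reliably
```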
Jim McBride
c65df2de89
Add real-time validation for cron expressions
- Add validateCronExpression function with comprehensive validation:
  - Checks for exactly 5 parts (minute hour day month weekday)
  - Validates cron syntax (*, ranges, lists, steps)
  - Validates numeric ranges (minute 0-59, hour 0-23, etc.)
  - Returns detailed error messages for each validation failure

- Add cronError state to track validation errors
- Validate on input change with handleScheduleChange
- Display error message below input field
- Disable Save button when cron expression is invalid
- Auto-validate when switching to advanced mode
- Clear errors when switching back to simple mode

User gets immediate feedback on cron syntax errors before attempting to save.
2025-12-09 08:09:56 -06:00
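A rough Python equivalent of the validation behaviour described above; the real validateCronExpression is frontend JavaScript and its exact rules may differ:

```python
# Assumed equivalent of the real-time cron validation described above:
# exactly 5 fields, valid syntax per field, and numeric values within bounds.
import re

FIELD_RE = re.compile(r"^(\*/\d+|\*|\d+(-\d+)?(/\d+)?(,\d+(-\d+)?(/\d+)?)*)$")
BOUNDS = [("minute", 0, 59), ("hour", 0, 23),
          ("day", 1, 31), ("month", 1, 12), ("weekday", 0, 6)]


def validate_cron(expr):
    """Return an error message, or None if the expression looks valid."""
    parts = expr.split()
    if len(parts) != 5:
        return "Cron expression must have 5 parts: minute hour day month weekday"
    for value, (name, lo, hi) in zip(parts, BOUNDS):
        if not FIELD_RE.match(value):
            return f"Invalid {name} field: {value}"
        for num in map(int, re.findall(r"\d+", value)):
            if not lo <= num <= hi:
                return f"{name} value {num} out of range {lo}-{hi}"
    return None


print(validate_cron("0 3 * * *"))   # None (valid: daily at 03:00)
print(validate_cron("0 25 * * *"))  # "hour value 25 out of range 0-23"
```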
Jim McBride
5fbcaa91e0
Add custom cron expression support for backup scheduling
Frontend changes:
- Add advanced mode toggle switch for cron expressions
- Show cron expression input with helpful examples when enabled
- Display format hints: "minute hour day month weekday"
- Provide common examples (daily, weekly, every 6 hours, etc.)
- Conditionally render simple or advanced scheduling UI
- Support switching between simple and advanced modes

Backend changes:
- Add cron_expression to schedule settings (SETTING_KEYS, DEFAULTS)
- Update get_schedule_settings to include cron_expression
- Update update_schedule_settings to handle cron_expression
- Extend _sync_periodic_task to parse and use cron expressions
- Parse 5-part cron format: minute hour day_of_month month_of_year day_of_week
- Create CrontabSchedule from cron expression or simple frequency
- Add validation and error handling for invalid cron expressions

This addresses maintainer feedback for "custom scheduler (cron style) for more control".
Users can now schedule backups with full cron flexibility beyond daily/weekly.
2025-12-09 07:55:47 -06:00
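A minimal sketch of the backend side described here, turning a 5-part cron string into a django_celery_beat CrontabSchedule (with the system-timezone handling from the earlier scheduling commits); the PR's actual helper may differ:

```python
# Sketch only: parse "minute hour day_of_month month_of_year day_of_week"
# into a django_celery_beat CrontabSchedule, reusing an existing row if one
# already matches.
from django_celery_beat.models import CrontabSchedule


def crontab_from_expression(expr, tz="UTC"):
    parts = expr.split()
    if len(parts) != 5:
        raise ValueError(
            "Expected 'minute hour day_of_month month_of_year day_of_week'"
        )
    minute, hour, day_of_month, month_of_year, day_of_week = parts
    schedule, _created = CrontabSchedule.objects.get_or_create(
        minute=minute,
        hour=hour,
        day_of_month=day_of_month,
        month_of_year=month_of_year,
        day_of_week=day_of_week,
        timezone=tz,  # system timezone, per the scheduling commits above
    )
    return schedule
```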
Jim McBride
d718e5a142
Implement timezone-aware backup scheduling
- Add timezone conversion functions (utcToLocal, localToUtc)
- Use user's configured timezone from Settings (localStorage 'time-zone')
- Convert times to UTC when saving to backend
- Convert times from UTC to local when loading from backend
- Display timezone info showing user's timezone and scheduled time
- Helper text shows: "Timezone: America/New_York • Backup will run at 03:00"

This addresses maintainer feedback to handle timezone properly:
backend stores/schedules in UTC, frontend displays/edits in user's local time.
2025-12-09 07:52:53 -06:00
Jim McBride
806f78244d
Add proper ConfirmationDialog usage to BackupManager
- Import useWarningsStore from warnings store
- Add suppressWarning hook to component
- Add actionKey props to restore and delete confirmation dialogs
- Add onSuppressChange callback to enable "Don't ask again" functionality

This aligns BackupManager with the project's standard confirmation dialog pattern
used throughout the codebase (ChannelsTable, EPGsTable, etc).
2025-12-09 07:49:31 -06:00
DawtCom
e8fb01ebdd Removing unreachable code 2025-12-08 21:50:13 -06:00
SergeantPanda
514e7e06e4 Bug fix: EPG API now returns correct date/time format for start/end fields and proper string types for timestamps and channel_id 2025-12-08 20:50:50 -06:00
SergeantPanda
69f9ecd93c Bug Fix: Remove IPv6 binding from nginx config if IPv6 is not available. 2025-12-08 20:12:44 -06:00
GitHub Actions
4df4e5f963 Release v0.14.0 2025-12-09 00:01:50 +00:00
SergeantPanda
ecbef65891
Merge pull request #723 from Dispatcharr:dev
Version 0.14.0
2025-12-08 17:59:12 -06:00
SergeantPanda
98b29f97a1 changelog: Update verbiage 2025-12-08 17:49:40 -06:00
SergeantPanda
62f5c32609 Remove DJANGO_SECRET_KEY environment variable from uwsgi configuration files 2025-12-08 17:27:07 -06:00
dekzter
43b55e2d99 first run at hiding disabled channels in channel profiles 2025-12-08 08:38:39 -05:00
SergeantPanda
c03ddf60a0 Fixed verbiage for EPG parsing status. 2025-12-07 21:28:04 -06:00
SergeantPanda
ce70b04097 changelog: update changelog 2025-12-07 20:56:59 -06:00
SergeantPanda
e2736babaa Reset umask after creating secret file. 2025-12-07 20:04:58 -06:00
SergeantPanda
2155229d7f Fix uwsgi command path in entrypoint script 2025-12-07 19:40:32 -06:00
SergeantPanda
cf37c6fd98 changelog: Updated changelog for 0.13.1 2025-12-07 19:06:45 -06:00
SergeantPanda
3512c3a623 Add DJANGO_SECRET_KEY environment variable to uwsgi configuration files 2025-12-07 19:05:31 -06:00
dekzter
d0edc3fa07 remove permission lines to see if this resolves the missing Django secret key in the profile.d environment 2025-12-07 07:54:30 -05:00
dekzter
b18bc62983 merged in from main 2025-12-06 14:13:06 -05:00
GitHub Actions
a912055255 Release v0.13.1 2025-12-06 18:43:16 +00:00
dekzter
10f329d673 release notes for build 2025-12-06 13:42:48 -05:00
dekzter
f3a901cb3a Security Fix - generate JWT on application init 2025-12-06 13:40:10 -05:00
SergeantPanda
759569b871 Enhancement: Add a priority field to EPGSource and prefer higher-priority sources when matching channels. Also ignore EPG sources where is_active is false during matching, and update serializers/forms/frontend accordingly. (Closes #603, #672) 2025-12-05 09:54:11 -06:00
SergeantPanda
c1d960138e Fix: Bulk channel editor confirmation dialog now shows the correct stream profile that will be set. 2025-12-05 09:02:03 -06:00
SergeantPanda
0d177e44f8 changelog: Recategorize the updated changelog entry as a bug fix instead of a change. 2025-12-04 15:45:09 -06:00
SergeantPanda
3b34fb11ef Fix: Fixed bug where the Updated column wouldn't update in the EPG table without a web UI refresh. 2025-12-04 15:43:33 -06:00
SergeantPanda
6c8270d0e5 Enhancement: Add support for 'extracting' status and display additional progress information in EPGsTable 2025-12-04 15:28:21 -06:00
SergeantPanda
5693ee7f9e perf: optimize EPG program parsing and implement atomic database updates to reduce I/O overhead and prevent partial data visibility 2025-12-04 14:57:57 -06:00
SergeantPanda
256ac2f55a Enhancement: Clean up orphaned programs for unmapped EPG entries 2025-12-04 14:25:44 -06:00
SergeantPanda
2a8ba9125c perf: optimize EPG program parsing for multi-channel sources
Dramatically improve EPG refresh performance by parsing the XML file once
per source instead of once per channel. The new implementation:

- Pre-filters to only process EPG entries mapped to actual channels
- Parses the entire XML file in a single pass
- Uses O(1) set lookups to skip unmapped channel programmes
- Skips non-mapped channels entirely with minimal overhead

For EPG sources with many channels but few mapped (e.g., 10,000 channels
with 100 mapped to channels), this provides approximately:
- 99% reduction in file open operations
- 99% reduction in XML file scans
- Proportional reduction in CPU and I/O overhead

The parse_programs_for_tvg_id() function is retained for single-channel
use cases (e.g., when a new channel is mapped via signals).

Fixes inefficient repeated file parsing that was occurring with large
EPG sources.
2025-12-04 14:07:28 -06:00
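An illustrative sketch of the single-pass approach, using xml.etree's iterparse and a set of mapped tvg_ids for O(1) lookups; the real implementation almost certainly differs in batching, database writes, and parser choice:

```python
# Illustrative only: scan the XMLTV file once and keep only programmes whose
# channel id is in the pre-filtered set of mapped tvg_ids.
import xml.etree.ElementTree as ET


def parse_mapped_programmes(xmltv_path, mapped_tvg_ids):
    """Yield (tvg_id, start, stop, title) for mapped channels in one file scan."""
    for _event, elem in ET.iterparse(xmltv_path, events=("end",)):
        if elem.tag == "programme":
            tvg_id = elem.get("channel")
            if tvg_id in mapped_tvg_ids:  # O(1) set lookup skips unmapped channels
                title = elem.findtext("title", default="")
                yield tvg_id, elem.get("start"), elem.get("stop"), title
            elem.clear()  # free memory as we stream through the file
```

For a source with 10,000 channels and 100 mapped, this opens and scans the file once instead of once per channel, which is where the roughly 99% reduction in file opens and XML scans comes from.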
SergeantPanda
2de6ac5da1 changelog: Add sort buttons for 'Group' and 'M3U' columns in Streams table 2025-12-03 17:31:16 -06:00
SergeantPanda
6a96b6b485
Merge pull request #707 from bobey6/main
Enhancement: Add sort by 'Group' or 'M3U' buttons to Streams
2025-12-03 17:27:42 -06:00
SergeantPanda
5fce83fb51 style: Adjust table header and input components for consistent width 2025-12-03 17:13:50 -06:00
SergeantPanda
81b6570366 Fix name not sorting. 2025-12-03 17:03:58 -06:00
SergeantPanda
042612c677 Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/bobey6/707 2025-12-03 16:49:21 -06:00
Jim McBride
e64002dfc4
Refactor BackupManager to match app table conventions 2025-12-02 22:19:20 -06:00
Jim McBride
70cf8928c4
Use CustomTable component for backup list 2025-12-02 22:01:59 -06:00
Jim McBride
3f9fd424e2
Update backup feature based on PR feedback
- Simplify to database-only backups (remove data directory backup)
- Update UI to match app styling patterns:
  - Use ActionIcon with transparent variant for table actions
  - Match icon/color conventions (SquareMinus/red.9, RotateCcw/yellow.5, Download/blue.5)
  - Use standard button bar layout with Paper/Box/Flex
  - Green "Create Backup" button matching "Add" pattern
  - Remove Card wrapper, Alert, and Divider for cleaner layout
  - Update to Mantine v8 Table syntax
- Use standard ConfirmationDialog (remove unused color prop)
- Update tests to remove get_data_dirs references
2025-12-02 19:33:27 -06:00
SergeantPanda
f38fb36eba Skip builds during documentation updates. 2025-12-02 15:03:34 -06:00
SergeantPanda
5e1ae23c4e docs: Update CHANGELOG 2025-12-02 14:58:22 -06:00
SergeantPanda
53a50474ba
Merge pull request #701 from jordandalley/nginx-add-ipv6-bind 2025-12-02 14:49:49 -06:00
SergeantPanda
92ced69bfd
Merge pull request #698 from adrianmace/fix-ipv6-access-issues
fix: Allow all IPv6 CIDRs by default
2025-12-02 14:36:51 -06:00
SergeantPanda
f1320c9a5d Merge branch 'dev' of https://github.com/Dispatcharr/Dispatcharr into pr/stlalpha/488 2025-12-02 13:39:06 -06:00
root
cf08e54bd8 Fix sorting functionality for Group and M3U columns
- Add missing header properties to group and m3u columns
- Fix layout issues with sort buttons (proper flex layout, remove blocking onClick)
- Fix sorting state initialization (use boolean instead of empty string)
- Fix sorting comparison operators (use strict equality)
- Fix 3rd click behavior to return to default sort instead of clearing
- Map frontend column IDs to backend field names for proper API requests
2025-12-01 18:11:58 +00:00
GitHub Copilot
641dcfc21e Add sorting functionality to Group and M3U columns in Streams table
- Added m3u_account__name to backend ordering_fields in StreamViewSet
- Implemented field mapping in frontend to convert column IDs to backend field names
- Added sort buttons to both Group and M3U columns with proper icons
- Sort buttons show current sort state (ascending/descending/none)
- Maintains consistent UX with existing Name column sorting
2025-11-30 19:20:25 +00:00
3l3m3nt
43949c3ef4 Added IPv6 port bind to nginx.conf 2025-11-30 19:30:47 +13:00
Adrian Mace
6a9b5282cd
fix: allow all IPv6 CIDRs by default
This change ensures that by default, IPv6 clients can
connect to the service unless explicitly denied.

Fixes #593
2025-11-30 00:39:30 +11:00
Jim McBride
3fb18ecce8 Enhancement: Respect user's 12h/24h time format preference in backup scheduler
- Read time-format setting from UI Settings via useLocalStorage
- Show 12-hour time input with AM/PM selector when user prefers 12h
- Show 24-hour time input when user prefers 24h
- Backend always stores 24-hour format (no API changes)
2025-11-27 08:49:29 -06:00
Jim McBride
3eaa76174e Feature: Automated configuration backups with scheduling
- Create/Download/Upload/Restore database backups (PostgreSQL and SQLite)
- Configurable data directory backups (via settings.py)
- Scheduled backups (daily/weekly) via Celery Beat
- Retention policy (keep last N backups)
- Token-based auth for async task polling
- X-Accel-Redirect support for nginx file serving
- Comprehensive tests
2025-11-26 21:11:13 -06:00
BigPanda
0dbc5221b2 Add 'UK' region
I'm not sure if this was intentional, but the UK seems to be missing from
the region list.
2025-09-18 21:20:52 +01:00
204 changed files with 26963 additions and 8743 deletions

View file

@@ -31,3 +31,4 @@
LICENSE
README.md
data/
docker/data/

View file

@@ -101,6 +101,28 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Extract metadata for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: |
ghcr.io/${{ needs.prepare.outputs.repo_owner }}/${{ needs.prepare.outputs.repo_name }}
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${{ needs.prepare.outputs.repo_name }}
labels: |
org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}
org.opencontainers.image.description=Your ultimate IPTV & stream Management companion.
org.opencontainers.image.url=https://github.com/${{ github.repository }}
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.version=${{ needs.prepare.outputs.branch_tag }}-${{ needs.prepare.outputs.timestamp }}
org.opencontainers.image.created=${{ needs.prepare.outputs.timestamp }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.licenses=See repository
org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/
org.opencontainers.image.vendor=${{ needs.prepare.outputs.repo_owner }}
org.opencontainers.image.authors=${{ github.actor }}
maintainer=${{ github.actor }}
build_version=DispatcharrBase version: ${{ needs.prepare.outputs.branch_tag }}-${{ needs.prepare.outputs.timestamp }}
- name: Build and push Docker base image
uses: docker/build-push-action@v4
with:
@@ -113,6 +135,7 @@ jobs:
ghcr.io/${{ needs.prepare.outputs.repo_owner }}/${{ needs.prepare.outputs.repo_name }}:${{ needs.prepare.outputs.branch_tag }}-${{ needs.prepare.outputs.timestamp }}-${{ matrix.platform }}
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${{ needs.prepare.outputs.repo_name }}:${{ needs.prepare.outputs.branch_tag }}-${{ matrix.platform }}
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${{ needs.prepare.outputs.repo_name }}:${{ needs.prepare.outputs.branch_tag }}-${{ needs.prepare.outputs.timestamp }}-${{ matrix.platform }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
REPO_OWNER=${{ needs.prepare.outputs.repo_owner }}
REPO_NAME=${{ needs.prepare.outputs.repo_name }}
@@ -154,18 +177,74 @@ jobs:
# GitHub Container Registry manifests
# branch tag (e.g. base or base-dev)
docker buildx imagetools create --tag ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG} \
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${BRANCH_TAG}-${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=DispatcharrBase version: ${BRANCH_TAG}-${TIMESTAMP}" \
--tag ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG} \
ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG}-amd64 ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG}-arm64
# branch + timestamp tag
docker buildx imagetools create --tag ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG}-${TIMESTAMP} \
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${BRANCH_TAG}-${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=DispatcharrBase version: ${BRANCH_TAG}-${TIMESTAMP}" \
--tag ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG}-${TIMESTAMP} \
ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG}-${TIMESTAMP}-amd64 ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG}-${TIMESTAMP}-arm64
# Docker Hub manifests
# branch tag (e.g. base or base-dev)
docker buildx imagetools create --tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG} \
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${BRANCH_TAG}-${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=DispatcharrBase version: ${BRANCH_TAG}-${TIMESTAMP}" \
--tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG} \
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG}-amd64 docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG}-arm64
# branch + timestamp tag
docker buildx imagetools create --tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG}-${TIMESTAMP} \
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${BRANCH_TAG}-${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=DispatcharrBase version: ${BRANCH_TAG}-${TIMESTAMP}" \
--tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG}-${TIMESTAMP} \
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG}-${TIMESTAMP}-amd64 docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG}-${TIMESTAMP}-arm64

View file

@@ -3,6 +3,8 @@ name: CI Pipeline
on:
push:
branches: [dev]
paths-ignore:
- '**.md'
pull_request:
branches: [dev]
workflow_dispatch:
@@ -117,7 +119,27 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# use metadata from the prepare job
- name: Extract metadata for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: |
ghcr.io/${{ needs.prepare.outputs.repo_owner }}/${{ needs.prepare.outputs.repo_name }}
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${{ needs.prepare.outputs.repo_name }}
labels: |
org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}
org.opencontainers.image.description=Your ultimate IPTV & stream Management companion.
org.opencontainers.image.url=https://github.com/${{ github.repository }}
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.version=${{ needs.prepare.outputs.version }}-${{ needs.prepare.outputs.timestamp }}
org.opencontainers.image.created=${{ needs.prepare.outputs.timestamp }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.licenses=See repository
org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/
org.opencontainers.image.vendor=${{ needs.prepare.outputs.repo_owner }}
org.opencontainers.image.authors=${{ github.actor }}
maintainer=${{ github.actor }}
build_version=Dispatcharr version: ${{ needs.prepare.outputs.version }}-${{ needs.prepare.outputs.timestamp }}
- name: Build and push Docker image
uses: docker/build-push-action@v4
@@ -135,6 +157,7 @@ jobs:
ghcr.io/${{ needs.prepare.outputs.repo_owner }}/${{ needs.prepare.outputs.repo_name }}:${{ needs.prepare.outputs.version }}-${{ needs.prepare.outputs.timestamp }}-${{ matrix.platform }}
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${{ needs.prepare.outputs.repo_name }}:${{ needs.prepare.outputs.branch_tag }}-${{ matrix.platform }}
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${{ needs.prepare.outputs.repo_name }}:${{ needs.prepare.outputs.version }}-${{ needs.prepare.outputs.timestamp }}-${{ matrix.platform }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
REPO_OWNER=${{ needs.prepare.outputs.repo_owner }}
REPO_NAME=${{ needs.prepare.outputs.repo_name }}
@@ -179,16 +202,72 @@ jobs:
echo "Creating multi-arch manifest for ${OWNER}/${REPO}"
# branch tag (e.g. latest or dev)
docker buildx imagetools create --tag ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG} \
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${BRANCH_TAG}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=Dispatcharr version: ${VERSION}-${TIMESTAMP}" \
--tag ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG} \
ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG}-amd64 ghcr.io/${OWNER}/${REPO}:${BRANCH_TAG}-arm64
# version + timestamp tag
docker buildx imagetools create --tag ghcr.io/${OWNER}/${REPO}:${VERSION}-${TIMESTAMP} \
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${VERSION}-${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=Dispatcharr version: ${VERSION}-${TIMESTAMP}" \
--tag ghcr.io/${OWNER}/${REPO}:${VERSION}-${TIMESTAMP} \
ghcr.io/${OWNER}/${REPO}:${VERSION}-${TIMESTAMP}-amd64 ghcr.io/${OWNER}/${REPO}:${VERSION}-${TIMESTAMP}-arm64
# also create Docker Hub manifests using the same username
docker buildx imagetools create --tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG} \
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${BRANCH_TAG}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=Dispatcharr version: ${VERSION}-${TIMESTAMP}" \
--tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG} \
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG}-amd64 docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${BRANCH_TAG}-arm64
docker buildx imagetools create --tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION}-${TIMESTAMP} \
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${VERSION}-${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=Dispatcharr version: ${VERSION}-${TIMESTAMP}" \
--tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION}-${TIMESTAMP} \
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION}-${TIMESTAMP}-amd64 docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION}-${TIMESTAMP}-arm64

.github/workflows/frontend-tests.yml (new file, 41 lines)
View file

@@ -0,0 +1,41 @@
name: Frontend Tests
on:
push:
branches: [main, dev]
paths:
- 'frontend/**'
- '.github/workflows/frontend-tests.yml'
pull_request:
branches: [main, dev]
paths:
- 'frontend/**'
- '.github/workflows/frontend-tests.yml'
jobs:
test:
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./frontend
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '24'
cache: 'npm'
cache-dependency-path: './frontend/package-lock.json'
- name: Install dependencies
run: npm ci
# - name: Run linter
# run: npm run lint
- name: Run tests
run: npm test

View file

@@ -25,6 +25,7 @@ jobs:
new_version: ${{ steps.update_version.outputs.new_version }}
repo_owner: ${{ steps.meta.outputs.repo_owner }}
repo_name: ${{ steps.meta.outputs.repo_name }}
timestamp: ${{ steps.timestamp.outputs.timestamp }}
steps:
- uses: actions/checkout@v3
with:
@@ -56,6 +57,12 @@ jobs:
REPO_NAME=$(echo "${{ github.repository }}" | cut -d '/' -f 2 | tr '[:upper:]' '[:lower:]')
echo "repo_name=${REPO_NAME}" >> $GITHUB_OUTPUT
- name: Generate timestamp for build
id: timestamp
run: |
TIMESTAMP=$(date -u +'%Y%m%d%H%M%S')
echo "timestamp=${TIMESTAMP}" >> $GITHUB_OUTPUT
- name: Commit and Tag
run: |
git add version.py CHANGELOG.md
@@ -104,6 +111,28 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Extract metadata for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: |
ghcr.io/${{ needs.prepare.outputs.repo_owner }}/${{ needs.prepare.outputs.repo_name }}
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${{ needs.prepare.outputs.repo_name }}
labels: |
org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}
org.opencontainers.image.description=Your ultimate IPTV & stream Management companion.
org.opencontainers.image.url=https://github.com/${{ github.repository }}
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.version=${{ needs.prepare.outputs.new_version }}
org.opencontainers.image.created=${{ needs.prepare.outputs.timestamp }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.licenses=See repository
org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/
org.opencontainers.image.vendor=${{ needs.prepare.outputs.repo_owner }}
org.opencontainers.image.authors=${{ github.actor }}
maintainer=${{ github.actor }}
build_version=Dispatcharr version: ${{ needs.prepare.outputs.new_version }} Build date: ${{ needs.prepare.outputs.timestamp }}
- name: Build and push Docker image
uses: docker/build-push-action@v4
with:
@@ -115,6 +144,7 @@ jobs:
ghcr.io/${{ needs.prepare.outputs.repo_owner }}/${{ needs.prepare.outputs.repo_name }}:${{ needs.prepare.outputs.new_version }}-${{ matrix.platform }}
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${{ needs.prepare.outputs.repo_name }}:latest-${{ matrix.platform }}
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${{ needs.prepare.outputs.repo_name }}:${{ needs.prepare.outputs.new_version }}-${{ matrix.platform }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
REPO_OWNER=${{ needs.prepare.outputs.repo_owner }}
REPO_NAME=${{ needs.prepare.outputs.repo_name }}
@@ -149,25 +179,48 @@ jobs:
OWNER=${{ needs.prepare.outputs.repo_owner }}
REPO=${{ needs.prepare.outputs.repo_name }}
VERSION=${{ needs.prepare.outputs.new_version }}
TIMESTAMP=${{ needs.prepare.outputs.timestamp }}
echo "Creating multi-arch manifest for ${OWNER}/${REPO}"
# GitHub Container Registry manifests
# latest tag
docker buildx imagetools create --tag ghcr.io/${OWNER}/${REPO}:latest \
ghcr.io/${OWNER}/${REPO}:latest-amd64 ghcr.io/${OWNER}/${REPO}:latest-arm64
# version tag
docker buildx imagetools create --tag ghcr.io/${OWNER}/${REPO}:${VERSION} \
# Create one manifest with both latest and version tags
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${VERSION}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=Dispatcharr version: ${VERSION} Build date: ${TIMESTAMP}" \
--tag ghcr.io/${OWNER}/${REPO}:latest \
--tag ghcr.io/${OWNER}/${REPO}:${VERSION} \
ghcr.io/${OWNER}/${REPO}:${VERSION}-amd64 ghcr.io/${OWNER}/${REPO}:${VERSION}-arm64
# Docker Hub manifests
# latest tag
docker buildx imagetools create --tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:latest \
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:latest-amd64 docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:latest-arm64
# version tag
docker buildx imagetools create --tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION} \
# Create one manifest with both latest and version tags
docker buildx imagetools create \
--annotation "index:org.opencontainers.image.title=${{ needs.prepare.outputs.repo_name }}" \
--annotation "index:org.opencontainers.image.description=Your ultimate IPTV & stream Management companion." \
--annotation "index:org.opencontainers.image.url=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.source=https://github.com/${{ github.repository }}" \
--annotation "index:org.opencontainers.image.version=${VERSION}" \
--annotation "index:org.opencontainers.image.created=${TIMESTAMP}" \
--annotation "index:org.opencontainers.image.revision=${{ github.sha }}" \
--annotation "index:org.opencontainers.image.licenses=See repository" \
--annotation "index:org.opencontainers.image.documentation=https://dispatcharr.github.io/Dispatcharr-Docs/" \
--annotation "index:org.opencontainers.image.vendor=${OWNER}" \
--annotation "index:org.opencontainers.image.authors=${{ github.actor }}" \
--annotation "index:maintainer=${{ github.actor }}" \
--annotation "index:build_version=Dispatcharr version: ${VERSION} Build date: ${TIMESTAMP}" \
--tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:latest \
--tag docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION} \
docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION}-amd64 docker.io/${{ secrets.DOCKERHUB_ORGANIZATION }}/${REPO}:${VERSION}-arm64
create-release:

.gitignore (3 changed lines)
View file

@@ -18,4 +18,5 @@ dump.rdb
debugpy*
uwsgi.sock
package-lock.json
models
models
.idea

View file

@@ -7,6 +7,181 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Changed
- Frontend tests GitHub workflow now uses Node.js 24 (matching Dockerfile) and runs on both `main` and `dev` branch pushes and pull requests for comprehensive CI coverage.
### Fixed
- Fixed NumPy baseline detection in Docker entrypoint. Now calls `numpy.show_config()` directly with case-insensitive grep instead of incorrectly wrapping the output.
- Fixed SettingsUtils frontend tests for new grouped settings architecture. Updated test suite to properly verify grouped JSON settings (stream_settings, dvr_settings, etc.) instead of individual CharField settings, including tests for type conversions, array-to-CSV transformations, and special handling of proxy_settings and network_access.
## [0.17.0] - 2026-01-13
### Added
- Loading feedback for all confirmation dialogs: Extended visual loading indicators across all confirmation dialogs throughout the application. Delete, cleanup, and bulk operation dialogs now show an animated dots loader and disabled state during async operations, providing consistent user feedback for backups (restore/delete), channels, EPGs, logos, VOD logos, M3U accounts, streams, users, groups, filters, profiles, batch operations, and network access changes.
- Channel profile edit and duplicate functionality: Users can now rename existing channel profiles and create duplicates with automatic channel membership cloning. Each profile action (edit, duplicate, delete) is available in the profile dropdown for quick access.
- ProfileModal component extracted for improved code organization and maintainability of channel profile management operations.
- Frontend unit tests for pages and utilities: Added comprehensive unit test coverage for frontend components within pages/ and JS files within utils/, along with a GitHub Actions workflow (`frontend-tests.yml`) to automatically run tests on commits and pull requests - Thanks [@nick4810](https://github.com/nick4810)
- Channel Profile membership control for manual channel creation and bulk operations: Extended the existing `channel_profile_ids` parameter from `POST /api/channels/from-stream/` to also support `POST /api/channels/` (manual creation) and bulk creation tasks with the same flexible semantics:
- Omitted parameter (default): Channels are added to ALL profiles (preserves backward compatibility)
- Empty array `[]`: Channels are added to NO profiles
- Sentinel value `[0]`: Channels are added to ALL profiles (explicit)
- Specific IDs `[1, 2, ...]`: Channels are added only to the specified profiles
This allows API consumers to control profile membership across all channel creation methods without requiring all channels to be added to every profile by default.
- Channel profile selection in creation modal: Users can now choose which profiles to add channels to when creating channels from streams (both single and bulk operations). Options include adding to all profiles, no profiles, or specific profiles with mutual exclusivity between special options ("All Profiles", "None") and specific profile selections. Profile selection defaults to the current table filter for intuitive workflow.
- Group retention policy for M3U accounts: Groups now follow the same stale retention logic as streams, using the account's `stale_stream_days` setting. Groups that temporarily disappear from an M3U source are retained for the configured retention period instead of being immediately deleted, preserving user settings and preventing data loss when providers temporarily remove/re-add groups. (Closes #809)
- Visual stale indicators for streams and groups: Added `is_stale` field to Stream and both `is_stale` and `last_seen` fields to ChannelGroupM3UAccount models to track items in their retention grace period. Stale groups display with orange buttons and a warning tooltip, while stale streams show with a red background color matching the visual treatment of empty channels.
### Changed
- Settings architecture refactored to use grouped JSON storage: Migrated from individual CharField settings to grouped JSONField settings for improved performance, maintainability, and type safety. Settings are now organized into logical groups (stream_settings, dvr_settings, backup_settings, system_settings, proxy_settings, network_access) with automatic migration handling. Backend provides helper methods (`get_stream_settings()`, `get_default_user_agent_id()`, etc.) for easy access. Frontend simplified by removing complex key mapping logic and standardizing on underscore-based field names throughout.
- Docker setup enhanced for legacy CPU support: Added `USE_LEGACY_NUMPY` environment variable to enable custom-built NumPy with no CPU baseline, allowing Dispatcharr to run on older CPUs (circa 2009) that lack support for newer baseline CPU features. When set to `true`, the entrypoint script will install the legacy NumPy build instead of the standard distribution. (Fixes #805)
- VOD upstream read timeout reduced from 30 seconds to 10 seconds to minimize lock hold time when clients disconnect during connection phase
- Form management refactored across application: Migrated Channel, Stream, M3U Profile, Stream Profile, Logo, and User Agent forms from Formik to React Hook Form (RHF) with Yup validation for improved form handling, better validation feedback, and enhanced code maintainability
- Stats and VOD pages refactored for clearer separation of concerns: extracted Stream/VOD connection cards (StreamConnectionCard, VodConnectionCard, VODCard, SeriesCard), moved page logic into dedicated utils, and lazy-loaded heavy components with ErrorBoundary fallbacks to improve readability and maintainability - Thanks [@nick4810](https://github.com/nick4810)
- Channel creation modal refactored: Extracted and unified channel numbering dialogs from StreamsTable into a dedicated CreateChannelModal component that handles both single and bulk channel creation with cleaner, more maintainable implementation and integrated profile selection controls.
### Fixed
- Fixed bulk channel profile membership update endpoint silently ignoring channels without existing membership records. The endpoint now creates missing memberships automatically (matching single-channel endpoint behavior), validates that all channel IDs exist before processing, and provides detailed response feedback including counts of updated vs. created memberships. Added comprehensive Swagger documentation with request/response schemas.
- Fixed bulk channel edit endpoint crashing with `ValueError: Field names must be given to bulk_update()` when the first channel in the update list had no actual field changes. The endpoint now collects all unique field names from all channels being updated instead of only looking at the first channel, properly handling cases where different channels update different fields or when some channels have no changes - Thanks [@mdellavo](https://github.com/mdellavo) (Fixes #804)
- Fixed PostgreSQL backup restore not completely cleaning database before restoration. The restore process now drops and recreates the entire `public` schema before running `pg_restore`, ensuring a truly clean restore that removes all tables, functions, and other objects not present in the backup file. This prevents leftover database objects from persisting when restoring backups from older branches or versions. Added `--no-owner` flag to `pg_restore` to avoid role permission errors when the backup was created by a different PostgreSQL user.
- Fixed TV Guide loading overlay not disappearing after navigating from DVR page. The `fetchRecordings()` function in the channels store was setting `isLoading: true` on start but never resetting it to `false` on successful completion, causing the Guide page's loading overlay to remain visible indefinitely when accessed after the DVR page.
- Fixed stream profile parameters not properly handling quoted arguments. Switched from basic `.split()` to `shlex.split()` for parsing command-line parameters, allowing proper handling of multi-word arguments in quotes (e.g., OAuth tokens in HTTP headers like `"--twitch-api-header=Authorization=OAuth token123"`). This ensures external streaming tools like Streamlink and FFmpeg receive correctly formatted arguments when using stream profiles with complex parameters - Thanks [@justinforlenza](https://github.com/justinforlenza) (Fixes #833)
- Fixed bulk and manual channel creation not refreshing channel profile memberships in the UI for all connected clients. WebSocket `channels_created` event now calls `fetchChannelProfiles()` to ensure profile membership updates are reflected in real-time for all users without requiring a page refresh.
- Fixed Channel Profile filter incorrectly applying profile membership filtering even when "Show Disabled" was enabled, preventing all channels from being displayed. Profile filter now only applies when hiding disabled channels. (Fixes #825)
- Fixed manual channel creation not adding channels to channel profiles. Manually created channels are now added to the selected profile if one is active, or to all profiles if "All" is selected, matching the behavior of channels created from streams.
- Fixed VOD streams disappearing from stats page during playback by adding `socket-timeout = 600` to production uWSGI config. The missing directive caused uWSGI to use its default 4-second timeout, triggering premature cleanup when clients buffered content. Now matches the existing `http-timeout = 600` value and prevents timeout errors during normal client buffering - Thanks [@patchy8736](https://github.com/patchy8736)
- Fixed Channels table EPG column showing "Not Assigned" on initial load for users with large EPG datasets. Added `tvgsLoaded` flag to EPG store to track when EPG data has finished loading, ensuring the table waits for EPG data before displaying. EPG cells now show animated skeleton placeholders while loading instead of incorrectly showing "Not Assigned". (Fixes #810)
- Fixed VOD profile connection count not being decremented when stream connection fails (timeout, 404, etc.), preventing profiles from reaching capacity limits and rejecting valid stream requests
- Fixed React warning in Channel form by removing invalid `removeTrailingZeros` prop from NumberInput component
- Release workflow Docker tagging: Fixed issue where `latest` and version tags (e.g., `0.16.0`) were creating separate manifests instead of pointing to the same image digest, which caused old `latest` tags to become orphaned/untagged after new releases. Now creates a single multi-arch manifest with both tags, maintaining proper tag relationships and download statistics visibility on GitHub.
- Fixed onboarding message appearing in the Channels Table when filtered results are empty. The onboarding message now only displays when there are no channels created at all, not when channels exist but are filtered out by current filters.
- Fixed `M3UMovieRelation.get_stream_url()` and `M3UEpisodeRelation.get_stream_url()` to use XC client's `_normalize_url()` method instead of simple `rstrip('/')`. This properly handles malformed M3U account URLs (e.g., containing `/player_api.php` or query parameters) before constructing VOD stream endpoints, matching behavior of live channel URL building. (Closes #722)
- Fixed bulk_create and bulk_update errors during VOD content refresh by pre-checking object existence with optimized bulk queries (3 queries total instead of N per batch) before creating new objects. This ensures all movie/series objects have primary keys before relation operations, preventing "prohibited to prevent data loss due to unsaved related object" errors. Additionally fixed duplicate key constraint violations by treating TMDB/IMDB ID values of `0` or `'0'` as invalid (some providers use this to indicate "no ID"), converting them to NULL to prevent multiple items from incorrectly sharing the same ID. (Fixes #813)
## [0.16.0] - 2026-01-04
### Added
- Advanced filtering for Channels table: Filter menu now allows toggling disabled channels visibility (when a profile is selected) and filtering to show only empty channels without streams (Closes #182)
- Network Access warning modal now displays the client's IP address for better transparency when network restrictions are being enforced - Thanks [@damien-alt-sudo](https://github.com/damien-alt-sudo) (Closes #778)
- VLC streaming support - Thanks [@sethwv](https://github.com/sethwv)
- Added `cvlc` as an alternative streaming backend alongside FFmpeg and Streamlink
- Log parser refactoring: Introduced `LogParserFactory` and stream-specific parsers (`FFmpegLogParser`, `VLCLogParser`, `StreamlinkLogParser`) to enable codec and resolution detection from multiple streaming tools
- VLC log parsing for stream information: Detects video/audio codecs from TS demux output, supports both stream-copy and transcode modes with resolution/FPS extraction from transcode output
- Locked, read-only VLC stream profile configured for headless operation with intelligent audio/video codec detection
- VLC and required plugins installed in Docker environment with headless configuration
- ErrorBoundary component for handling frontend errors gracefully with generic error message - Thanks [@nick4810](https://github.com/nick4810)
### Changed
- Fixed event viewer arrow direction (previously inverted) — UI behavior corrected. - Thanks [@drnikcuk](https://github.com/drnikcuk) (Closes #772)
- Region code options now intentionally include both `GB` (ISO 3166-1 standard) and `UK` (commonly used by EPG/XMLTV providers) to accommodate real-world EPG data variations. Many providers use `UK` in channel identifiers (e.g., `BBCOne.uk`) despite `GB` being the official ISO country code. Users should select the region code that matches their specific EPG provider's convention for optimal region-based EPG matching bonuses - Thanks [@bigpandaaaa](https://github.com/bigpandaaaa)
- Channel number inputs in stream-to-channel creation modals no longer have a maximum value restriction, allowing users to enter any valid channel number supported by the database
- Stream log parsing refactored to use factory pattern: Simplified `ChannelService.parse_and_store_stream_info()` to route parsing through specialized log parsers instead of inline program-specific logic (~150 lines of code removed)
- Stream profile names in fixtures updated to use proper capitalization (ffmpeg → FFmpeg, streamlink → Streamlink)
- Frontend component refactoring for improved code organization and maintainability - Thanks [@nick4810](https://github.com/nick4810)
- Extracted large nested components into separate files (RecordingCard, RecordingDetailsModal, RecurringRuleModal, RecordingSynopsis, GuideRow, HourTimeline, PluginCard, ProgramRecordingModal, SeriesRecordingModal, Field)
- Moved business logic from components into dedicated utility files (dateTimeUtils, RecordingCardUtils, RecordingDetailsModalUtils, RecurringRuleModalUtils, DVRUtils, guideUtils, PluginsUtils, PluginCardUtils, notificationUtils)
- Lazy loaded heavy components (SuperuserForm, RecordingDetailsModal, ProgramRecordingModal, SeriesRecordingModal, PluginCard) with loading fallbacks
- Removed unused Dashboard and Home pages
- Guide page refactoring: Extracted GuideRow and HourTimeline components, moved grid calculations and utility functions to guideUtils.js, added loading states for initial data fetching, improved performance through better memoization
- Plugins page refactoring: Extracted PluginCard and Field components, added Zustand store for plugin state management, improved plugin action confirmation handling, better separation of concerns between UI and business logic
- Logo loading optimization: Logos now load only after both Channels and Streams tables complete loading to prevent blocking initial page render, with rendering gated by table readiness to ensure data loads before visual elements
- M3U stream URLs now use `build_absolute_uri_with_port()` for consistency with EPG and logo URLs, ensuring uniform port handling across all M3U file URLs
- Settings and Logos page refactoring for improved readability and separation of concerns - Thanks [@nick4810](https://github.com/nick4810)
  - Extracted individual settings forms (DVR, Network Access, Proxy, Stream, System, UI) into separate components with dedicated utility files
  - Moved larger nested components into their own files
  - Moved business logic into corresponding utils/ files
  - Extracted larger in-line component logic into its own function
  - Each panel in Settings now uses its own form state with the parent component handling active state management
### Fixed
- Auto Channel Sync Force EPG Source feature not properly forcing "No EPG" assignment - When selecting "Force EPG Source" > "No EPG (Disabled)", channels were still being auto-matched to EPG data instead of forcing dummy/no EPG. Now correctly sets `force_dummy_epg` flag to prevent unwanted EPG assignment. (Fixes #788)
- VOD episode processing now properly handles season and episode numbers from APIs that return string values instead of integers, with comprehensive error logging to track data quality issues - Thanks [@patchy8736](https://github.com/patchy8736) (Fixes #770)
- VOD episode-to-stream relations are now validated to ensure episodes have been saved to the database before creating relations, preventing integrity errors when bulk_create operations encounter conflicts - Thanks [@patchy8736](https://github.com/patchy8736)
- VOD category filtering now correctly handles category names containing pipe "|" characters (e.g., "PL | BAJKI", "EN | MOVIES") by using `rsplit()` to split from the right instead of the left, ensuring the category type is correctly extracted as the last segment (see the example after this list) - Thanks [@Vitekant](https://github.com/Vitekant)
- M3U and EPG URLs now correctly preserve non-standard HTTPS ports (e.g., `:8443`) when accessed behind reverse proxies that forward the port in headers: `get_host_and_port()` now checks the `X-Forwarded-Port` header before falling back to other detection methods (see the sketch after this list) (Fixes #704)
- M3U and EPG manager page no longer crashes when a playlist references a deleted channel group (Fixes screen blank on navigation)
- Stream validation now returns original URL instead of redirected URL to prevent issues with temporary redirect URLs that expire before clients can connect
- XtreamCodes EPG limit parameter now properly converted to integer to prevent type errors when accessing EPG listings (Fixes #781)
- Docker container file permissions: Django management commands (`migrate`, `collectstatic`) now run as the non-root user to prevent root-owned `__pycache__` and static files from causing permission issues - Thanks [@sethwv](https://github.com/sethwv)
- Stream validation now continues with GET request if HEAD request fails due to connection issues - Thanks [@kvnnap](https://github.com/kvnnap) (Fixes #782)
- XtreamCodes M3U files now correctly set `x-tvg-url` and `url-tvg` headers to reference XC EPG URL (`xmltv.php`) instead of standard EPG endpoint when downloaded via XC API (Fixes #629)
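For the pipe-character fix above: splitting from the right is what keeps the category type intact when the display name itself contains `|`. The value format below is a hypothetical illustration, not the exact string Dispatcharr parses:

```python
raw = "PL | BAJKI|series"  # hypothetical "<display name>|<category type>" value

# Splitting from the left breaks as soon as the name contains "|":
print(raw.split("|", 1))   # ['PL ', ' BAJKI|series']  -> wrong "type"

# Splitting from the right keeps the type as the last segment:
name, category_type = raw.rsplit("|", 1)
print(name.strip(), "/", category_type.strip())  # PL | BAJKI / series
```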
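And a rough sketch of the port-detection order described in the reverse-proxy fix; it assumes a Django-style request object and is not the exact `get_host_and_port()` implementation:

```python
def get_host_and_port(request):
    """Prefer the port forwarded by the reverse proxy so ':8443'-style ports survive."""
    host = request.get_host().split(":")[0]

    forwarded_port = request.META.get("HTTP_X_FORWARDED_PORT")
    if forwarded_port:
        return host, forwarded_port

    # Fall back to whatever port the application itself saw
    return host, request.get_port()
```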
## [0.15.1] - 2025-12-22
### Fixed
- XtreamCodes EPG `has_archive` field now returns integer `0` instead of string `"0"` for proper JSON type consistency
- nginx now gracefully handles hosts without IPv6 support by automatically disabling IPv6 binding at startup (Fixes #744)
## [0.15.0] - 2025-12-20
### Added
- VOD client stop button in Stats page: Users can now disconnect individual VOD clients from the Stats view, similar to the existing channel client disconnect functionality.
- Automated configuration backup/restore system with scheduled backups, retention policies, and async task processing - Thanks [@stlalpha](https://github.com/stlalpha) (Closes #153)
- Stream group as available hash option: Users can now select 'Group' as a hash key option in Settings → Stream Settings → M3U Hash Key, allowing streams to be differentiated by their group membership in addition to name, URL, TVG-ID, and M3U ID
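A hedged sketch of what group-aware hashing looks like. The real `Stream.generate_hash_key()` (see the model diff later in this comparison) takes the same kind of parts dictionary; the MD5-over-JSON step here is an assumption for illustration:

```python
import hashlib
import json


def generate_hash_key(name, url, tvg_id, m3u_id=None, group=None,
                      keys=("name", "url", "tvg_id")):
    """Hash only the parts selected under Settings -> Stream Settings -> M3U Hash Key."""
    stream_parts = {"name": name, "url": url, "tvg_id": tvg_id,
                    "m3u_id": m3u_id, "group": group}
    hash_parts = {key: stream_parts[key] for key in keys if key in stream_parts}
    return hashlib.md5(json.dumps(hash_parts, sort_keys=True).encode()).hexdigest()


# The same stream in two different groups now hashes differently when "group" is selected:
a = generate_hash_key("News HD", "http://example/1.ts", "news.uk", group="UK", keys=("name", "group"))
b = generate_hash_key("News HD", "http://example/1.ts", "news.uk", group="US", keys=("name", "group"))
assert a != b
```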
### Changed
- Initial super user creation page now matches the login page design with logo, welcome message, divider, and version display for a more consistent and polished first-time setup experience
- Removed unreachable code path in m3u output - Thanks [@DawtCom](https://github.com/DawtCom)
- GitHub Actions workflows now use `docker/metadata-action` for cleaner and more maintainable OCI-compliant image label generation across all build pipelines (ci.yml, base-image.yml, release.yml). Labels are applied to both platform-specific images and multi-arch manifests with proper annotation formatting. - Thanks [@mrdynamo](https://github.com/mrdynamo) (Closes #724)
- Updated docker/dev-build.sh to support private registries, multiple architectures, and pushing. You can now run commands like `dev-build.sh -p -r my.private.registry -a linux/arm64,linux/amd64` - Thanks [@jdblack](https://github.com/jdblack)
- Updated dependencies: Django (5.2.4 → 5.2.9) includes CVE security patch, psycopg2-binary (2.9.10 → 2.9.11), celery (5.5.3 → 5.6.0), djangorestframework (3.16.0 → 3.16.1), requests (2.32.4 → 2.32.5), psutil (7.0.0 → 7.1.3), gevent (25.5.1 → 25.9.1), rapidfuzz (3.13.0 → 3.14.3), torch (2.7.1 → 2.9.1), sentence-transformers (5.1.0 → 5.2.0), lxml (6.0.0 → 6.0.2) (Closes #662)
- Frontend dependencies updated: Vite (6.2.0 → 7.1.7), ESLint (9.21.0 → 9.27.0), and related packages; added npm `overrides` to enforce js-yaml@^4.1.1 for transitive security fix. All 6 reported vulnerabilities resolved with `npm audit fix`.
- Floating video player now supports resizing via drag handles, with minimum size enforcement and viewport/page boundary constraints to keep it visible.
- Redis connection settings now fully configurable via environment variables (`REDIS_HOST`, `REDIS_PORT`, `REDIS_DB`, `REDIS_URL`), replacing hardcoded `localhost:6379` values throughout the codebase. This enables use of external Redis services in production deployments. (Closes #762)
- Celery broker and result backend URLs now respect `REDIS_HOST`/`REDIS_PORT`/`REDIS_DB` settings as defaults, with `CELERY_BROKER_URL` and `CELERY_RESULT_BACKEND` environment variables available for override.
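A settings fragment in the spirit of these changes; the defaults shown are illustrative and the real settings module may wire things differently:

```python
import os

REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
REDIS_PORT = os.environ.get("REDIS_PORT", "6379")
REDIS_DB = os.environ.get("REDIS_DB", "0")

# An explicit REDIS_URL wins over the individual host/port/db pieces
REDIS_URL = os.environ.get("REDIS_URL", f"redis://{REDIS_HOST}:{REDIS_PORT}/{REDIS_DB}")

# Celery defaults to the same Redis instance unless explicitly overridden
CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", REDIS_URL)
CELERY_RESULT_BACKEND = os.environ.get("CELERY_RESULT_BACKEND", REDIS_URL)
```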
### Fixed
- Docker init script now validates DISPATCHARR_PORT is an integer before using it, preventing sed errors when Kubernetes sets it to a service URL like `tcp://10.98.37.10:80`. Falls back to default port 9191 when invalid (Fixes #737)
- M3U Profile form now properly resets local state for search and replace patterns after saving, preventing validation errors when adding multiple profiles in a row
- DVR series rule deletion now properly handles TVG IDs that contain slashes by encoding them in the URL path (Fixes #697)
- VOD episode processing now correctly handles duplicate episodes (same episode in multiple languages/qualities) by reusing Episode records across multiple M3UEpisodeRelation entries instead of attempting to create duplicates (Fixes #556)
- XtreamCodes series streaming endpoint now correctly handles episodes with multiple streams (different languages/qualities) by selecting the best available stream based on account priority (Fixes #569)
- XtreamCodes series info API now returns unique episodes instead of duplicate entries when multiple streams exist for the same episode (different languages/qualities)
- nginx now gracefully handles hosts without IPv6 support by automatically disabling IPv6 binding at startup (Fixes #744)
- XtreamCodes EPG API now returns correct date/time format for start/end fields and proper string types for timestamps and channel_id
- XtreamCodes EPG API now handles None values for title and description fields to prevent AttributeError
- XtreamCodes EPG `id` field now provides unique identifiers per program listing instead of always returning "0" for better client EPG handling
- XtreamCodes EPG `epg_id` field now correctly returns the EPGData record ID (representing the EPG source/channel mapping) instead of a dummy value
## [0.14.0] - 2025-12-09
### Added
- Sort buttons for 'Group' and 'M3U' columns in Streams table for improved stream organization and filtering - Thanks [@bobey6](https://github.com/bobey6)
- EPG source priority field for controlling which EPG source is preferred when multiple sources have matching entries for a channel (higher numbers = higher priority) (Closes #603)
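A small sketch of how the priority field can be used when several active sources match the same channel; the field names here are illustrative, not the actual model fields:

```python
def pick_epg_match(matches):
    """Prefer active sources; among those, the highest priority number wins."""
    active = [m for m in matches if m["source_active"]]
    if not active:
        return None
    return max(active, key=lambda m: m["source_priority"])


matches = [
    {"source": "Provider A", "source_active": True, "source_priority": 1},
    {"source": "Provider B", "source_active": True, "source_priority": 5},
]
print(pick_epg_match(matches)["source"])  # Provider B
```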
### Changed
- EPG program parsing optimized for sources with many channels but only a fraction mapped. Now parses the XML file once per source instead of once per channel, dramatically reducing I/O and CPU overhead. For sources with 10,000 channels and 100 mapped, this results in ~99x fewer file opens and ~100x fewer full file scans. Orphaned programs for unmapped channels are also cleaned up during refresh to prevent database bloat. Database updates are now atomic to prevent clients from seeing empty/partial EPG data during refresh. (See the sketch after this list.)
- EPG table now displays detailed status messages including refresh progress, success messages, and last message for idle sources (matching M3U table behavior) (Closes #214)
- IPv6 access now allowed by default with all IPv6 CIDRs accepted - Thanks [@adrianmace](https://github.com/adrianmace)
- nginx.conf updated to bind to both IPv4 and IPv6 ports - Thanks [@jordandalley](https://github.com/jordandalley)
- EPG matching now respects source priority and only uses active (enabled) EPG sources (Closes #672)
- EPG form API Key field now only visible when Schedules Direct source type is selected
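The parse-once-per-source optimization referenced above boils down to streaming through the XMLTV file a single time and keeping only programmes for mapped channels. A rough `iterparse` sketch, not the actual Dispatcharr code:

```python
import xml.etree.ElementTree as ET


def parse_programmes_once(xmltv_path, mapped_channel_ids):
    """Single pass over the XMLTV file; programmes for unmapped channels are skipped."""
    programmes = {cid: [] for cid in mapped_channel_ids}
    for _, elem in ET.iterparse(xmltv_path, events=("end",)):
        if elem.tag == "programme":
            channel_id = elem.get("channel")
            if channel_id in programmes:
                programmes[channel_id].append({
                    "start": elem.get("start"),
                    "stop": elem.get("stop"),
                    "title": (elem.findtext("title") or "").strip(),
                })
            elem.clear()  # keep memory flat for very large guide files
    return programmes
```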
### Fixed
- EPG table "Updated" column now updates in real-time via WebSocket using the actual backend timestamp instead of requiring a page refresh
- Bulk channel editor confirmation dialog now displays the correct stream profile name that will be applied to the selected channels.
- Fixed "uWSGI not found" errors and 502 Bad Gateway responses on first startup
## [0.13.1] - 2025-12-06
### Fixed
- JWT token is now generated so that it is unique for each deployment
## [0.13.0] - 2025-12-02
### Added


@ -27,6 +27,7 @@ urlpatterns = [
path('core/', include(('core.api_urls', 'core'), namespace='core')),
path('plugins/', include(('apps.plugins.api_urls', 'plugins'), namespace='plugins')),
path('vod/', include(('apps.vod.api_urls', 'vod'), namespace='vod')),
path('backups/', include(('apps.backups.api_urls', 'backups'), namespace='backups')),
# path('output/', include(('apps.output.api_urls', 'output'), namespace='output')),
#path('player/', include(('apps.player.api_urls', 'player'), namespace='player')),
#path('settings/', include(('apps.settings.api_urls', 'settings'), namespace='settings')),

apps/backups/__init__.py (Normal file, 0 lines)

apps/backups/api_urls.py (Normal file, 18 lines)

@ -0,0 +1,18 @@
from django.urls import path

from . import api_views

app_name = "backups"

urlpatterns = [
    path("", api_views.list_backups, name="backup-list"),
    path("create/", api_views.create_backup, name="backup-create"),
    path("upload/", api_views.upload_backup, name="backup-upload"),
    path("schedule/", api_views.get_schedule, name="backup-schedule-get"),
    path("schedule/update/", api_views.update_schedule, name="backup-schedule-update"),
    path("status/<str:task_id>/", api_views.backup_status, name="backup-status"),
    path("<str:filename>/download-token/", api_views.get_download_token, name="backup-download-token"),
    path("<str:filename>/download/", api_views.download_backup, name="backup-download"),
    path("<str:filename>/delete/", api_views.delete_backup, name="backup-delete"),
    path("<str:filename>/restore/", api_views.restore_backup, name="backup-restore"),
]

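Editor's note: given the routes above and the task-token scheme in `api_views.py` below, a client-side backup flow could look roughly like this. The base URL and auth header are assumptions, not part of the pull request:

```python
import time

import requests

BASE = "http://dispatcharr.local:9191/api/backups"  # assumed prefix
HEADERS = {"Authorization": "Bearer <admin-jwt>"}   # assumed auth scheme

# Start an async backup; the response includes a task_token for polling
task = requests.post(f"{BASE}/create/", headers=HEADERS).json()

# Poll status using the task token (no authenticated session required)
while True:
    state = requests.get(
        f"{BASE}/status/{task['task_id']}/",
        params={"token": task["task_token"]},
    ).json()
    if state["state"] in ("completed", "failed"):
        break
    time.sleep(2)

print(state)
```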
apps/backups/api_views.py (Normal file, 364 lines)

@ -0,0 +1,364 @@
import hashlib
import hmac
import logging
import os
from pathlib import Path
from celery.result import AsyncResult
from django.conf import settings
from django.http import HttpResponse, StreamingHttpResponse, Http404
from rest_framework import status
from rest_framework.decorators import api_view, permission_classes, parser_classes
from rest_framework.permissions import IsAdminUser, AllowAny
from rest_framework.parsers import MultiPartParser, FormParser
from rest_framework.response import Response
from . import services
from .tasks import create_backup_task, restore_backup_task
from .scheduler import get_schedule_settings, update_schedule_settings
logger = logging.getLogger(__name__)
def _generate_task_token(task_id: str) -> str:
"""Generate a signed token for task status access without auth."""
secret = settings.SECRET_KEY.encode()
return hmac.new(secret, task_id.encode(), hashlib.sha256).hexdigest()[:32]
def _verify_task_token(task_id: str, token: str) -> bool:
"""Verify a task token is valid."""
expected = _generate_task_token(task_id)
return hmac.compare_digest(expected, token)
@api_view(["GET"])
@permission_classes([IsAdminUser])
def list_backups(request):
"""List all available backup files."""
try:
backups = services.list_backups()
return Response(backups, status=status.HTTP_200_OK)
except Exception as e:
return Response(
{"detail": f"Failed to list backups: {str(e)}"},
status=status.HTTP_500_INTERNAL_SERVER_ERROR,
)
@api_view(["POST"])
@permission_classes([IsAdminUser])
def create_backup(request):
"""Create a new backup (async via Celery)."""
try:
task = create_backup_task.delay()
return Response(
{
"detail": "Backup started",
"task_id": task.id,
"task_token": _generate_task_token(task.id),
},
status=status.HTTP_202_ACCEPTED,
)
except Exception as e:
return Response(
{"detail": f"Failed to start backup: {str(e)}"},
status=status.HTTP_500_INTERNAL_SERVER_ERROR,
)
@api_view(["GET"])
@permission_classes([AllowAny])
def backup_status(request, task_id):
"""Check the status of a backup/restore task.
Requires either:
- Valid admin authentication, OR
- Valid task_token query parameter
"""
# Check for token-based auth (for restore when session is invalidated)
token = request.query_params.get("token")
if token:
if not _verify_task_token(task_id, token):
return Response(
{"detail": "Invalid task token"},
status=status.HTTP_403_FORBIDDEN,
)
else:
# Fall back to admin auth check
if not request.user.is_authenticated or not request.user.is_staff:
return Response(
{"detail": "Authentication required"},
status=status.HTTP_401_UNAUTHORIZED,
)
try:
result = AsyncResult(task_id)
if result.ready():
task_result = result.get()
if task_result.get("status") == "completed":
return Response({
"state": "completed",
"result": task_result,
})
else:
return Response({
"state": "failed",
"error": task_result.get("error", "Unknown error"),
})
elif result.failed():
return Response({
"state": "failed",
"error": str(result.result),
})
else:
return Response({
"state": result.state.lower(),
})
except Exception as e:
return Response(
{"detail": f"Failed to get task status: {str(e)}"},
status=status.HTTP_500_INTERNAL_SERVER_ERROR,
)
@api_view(["GET"])
@permission_classes([IsAdminUser])
def get_download_token(request, filename):
"""Get a signed token for downloading a backup file."""
try:
# Security: prevent path traversal
if ".." in filename or "/" in filename or "\\" in filename:
raise Http404("Invalid filename")
backup_dir = services.get_backup_dir()
backup_file = backup_dir / filename
if not backup_file.exists():
raise Http404("Backup file not found")
token = _generate_task_token(filename)
return Response({"token": token})
except Http404:
raise
except Exception as e:
return Response(
{"detail": f"Failed to generate token: {str(e)}"},
status=status.HTTP_500_INTERNAL_SERVER_ERROR,
)
@api_view(["GET"])
@permission_classes([AllowAny])
def download_backup(request, filename):
"""Download a backup file.
Requires either:
- Valid admin authentication, OR
- Valid download_token query parameter
"""
# Check for token-based auth (avoids CORS preflight issues)
token = request.query_params.get("token")
if token:
if not _verify_task_token(filename, token):
return Response(
{"detail": "Invalid download token"},
status=status.HTTP_403_FORBIDDEN,
)
else:
# Fall back to admin auth check
if not request.user.is_authenticated or not request.user.is_staff:
return Response(
{"detail": "Authentication required"},
status=status.HTTP_401_UNAUTHORIZED,
)
try:
# Security: prevent path traversal by checking for suspicious characters
if ".." in filename or "/" in filename or "\\" in filename:
raise Http404("Invalid filename")
backup_dir = services.get_backup_dir()
backup_file = (backup_dir / filename).resolve()
# Security: ensure the resolved path is still within backup_dir
if not str(backup_file).startswith(str(backup_dir.resolve())):
raise Http404("Invalid filename")
if not backup_file.exists() or not backup_file.is_file():
raise Http404("Backup file not found")
file_size = backup_file.stat().st_size
# Use X-Accel-Redirect for nginx (AIO container) - nginx serves file directly
# Fall back to streaming for non-nginx deployments
use_nginx_accel = os.environ.get("USE_NGINX_ACCEL", "").lower() == "true"
logger.info(f"[DOWNLOAD] File: {filename}, Size: {file_size}, USE_NGINX_ACCEL: {use_nginx_accel}")
if use_nginx_accel:
# X-Accel-Redirect: Django returns immediately, nginx serves file
logger.info(f"[DOWNLOAD] Using X-Accel-Redirect: /protected-backups/{filename}")
response = HttpResponse()
response["X-Accel-Redirect"] = f"/protected-backups/{filename}"
response["Content-Type"] = "application/zip"
response["Content-Length"] = file_size
response["Content-Disposition"] = f'attachment; filename="{filename}"'
return response
else:
# Streaming fallback for non-nginx deployments
logger.info(f"[DOWNLOAD] Using streaming fallback (no nginx)")
def file_iterator(file_path, chunk_size=2 * 1024 * 1024):
with open(file_path, "rb") as f:
while chunk := f.read(chunk_size):
yield chunk
response = StreamingHttpResponse(
file_iterator(backup_file),
content_type="application/zip",
)
response["Content-Length"] = file_size
response["Content-Disposition"] = f'attachment; filename="{filename}"'
return response
except Http404:
raise
except Exception as e:
return Response(
{"detail": f"Download failed: {str(e)}"},
status=status.HTTP_500_INTERNAL_SERVER_ERROR,
)
@api_view(["DELETE"])
@permission_classes([IsAdminUser])
def delete_backup(request, filename):
"""Delete a backup file."""
try:
# Security: prevent path traversal
if ".." in filename or "/" in filename or "\\" in filename:
raise Http404("Invalid filename")
services.delete_backup(filename)
return Response(
{"detail": "Backup deleted successfully"},
status=status.HTTP_204_NO_CONTENT,
)
except FileNotFoundError:
raise Http404("Backup file not found")
except Exception as e:
return Response(
{"detail": f"Delete failed: {str(e)}"},
status=status.HTTP_500_INTERNAL_SERVER_ERROR,
)
@api_view(["POST"])
@permission_classes([IsAdminUser])
@parser_classes([MultiPartParser, FormParser])
def upload_backup(request):
"""Upload a backup file for restoration."""
uploaded = request.FILES.get("file")
if not uploaded:
return Response(
{"detail": "No file uploaded"},
status=status.HTTP_400_BAD_REQUEST,
)
try:
backup_dir = services.get_backup_dir()
filename = uploaded.name or "uploaded-backup.zip"
# Ensure unique filename
backup_file = backup_dir / filename
counter = 1
while backup_file.exists():
name_parts = filename.rsplit(".", 1)
if len(name_parts) == 2:
backup_file = backup_dir / f"{name_parts[0]}-{counter}.{name_parts[1]}"
else:
backup_file = backup_dir / f"{filename}-{counter}"
counter += 1
# Save uploaded file
with backup_file.open("wb") as f:
for chunk in uploaded.chunks():
f.write(chunk)
return Response(
{
"detail": "Backup uploaded successfully",
"filename": backup_file.name,
},
status=status.HTTP_201_CREATED,
)
except Exception as e:
return Response(
{"detail": f"Upload failed: {str(e)}"},
status=status.HTTP_500_INTERNAL_SERVER_ERROR,
)
@api_view(["POST"])
@permission_classes([IsAdminUser])
def restore_backup(request, filename):
"""Restore from a backup file (async via Celery). WARNING: This will flush the database!"""
try:
# Security: prevent path traversal
if ".." in filename or "/" in filename or "\\" in filename:
raise Http404("Invalid filename")
backup_dir = services.get_backup_dir()
backup_file = backup_dir / filename
if not backup_file.exists():
raise Http404("Backup file not found")
task = restore_backup_task.delay(filename)
return Response(
{
"detail": "Restore started",
"task_id": task.id,
"task_token": _generate_task_token(task.id),
},
status=status.HTTP_202_ACCEPTED,
)
except Http404:
raise
except Exception as e:
return Response(
{"detail": f"Failed to start restore: {str(e)}"},
status=status.HTTP_500_INTERNAL_SERVER_ERROR,
)
@api_view(["GET"])
@permission_classes([IsAdminUser])
def get_schedule(request):
"""Get backup schedule settings."""
try:
settings = get_schedule_settings()
return Response(settings)
except Exception as e:
return Response(
{"detail": f"Failed to get schedule: {str(e)}"},
status=status.HTTP_500_INTERNAL_SERVER_ERROR,
)
@api_view(["PUT"])
@permission_classes([IsAdminUser])
def update_schedule(request):
"""Update backup schedule settings."""
try:
settings = update_schedule_settings(request.data)
return Response(settings)
except ValueError as e:
return Response(
{"detail": str(e)},
status=status.HTTP_400_BAD_REQUEST,
)
except Exception as e:
return Response(
{"detail": f"Failed to update schedule: {str(e)}"},
status=status.HTTP_500_INTERNAL_SERVER_ERROR,
)

apps/backups/apps.py (Normal file, 7 lines)

@ -0,0 +1,7 @@
from django.apps import AppConfig


class BackupsConfig(AppConfig):
    default_auto_field = "django.db.models.BigAutoField"
    name = "apps.backups"
    verbose_name = "Backups"


apps/backups/models.py (Normal file, 0 lines)

apps/backups/scheduler.py (Normal file, 202 lines)

@ -0,0 +1,202 @@
import json
import logging
from django_celery_beat.models import PeriodicTask, CrontabSchedule
from core.models import CoreSettings
logger = logging.getLogger(__name__)
BACKUP_SCHEDULE_TASK_NAME = "backup-scheduled-task"
DEFAULTS = {
"schedule_enabled": True,
"schedule_frequency": "daily",
"schedule_time": "03:00",
"schedule_day_of_week": 0, # Sunday
"retention_count": 3,
"schedule_cron_expression": "",
}
def _get_backup_settings():
"""Get all backup settings from CoreSettings grouped JSON."""
try:
settings_obj = CoreSettings.objects.get(key="backup_settings")
return settings_obj.value if isinstance(settings_obj.value, dict) else DEFAULTS.copy()
except CoreSettings.DoesNotExist:
return DEFAULTS.copy()
def _update_backup_settings(updates: dict) -> None:
"""Update backup settings in the grouped JSON."""
obj, created = CoreSettings.objects.get_or_create(
key="backup_settings",
defaults={"name": "Backup Settings", "value": DEFAULTS.copy()}
)
current = obj.value if isinstance(obj.value, dict) else {}
current.update(updates)
obj.value = current
obj.save()
def get_schedule_settings() -> dict:
"""Get all backup schedule settings."""
settings = _get_backup_settings()
return {
"enabled": bool(settings.get("schedule_enabled", DEFAULTS["schedule_enabled"])),
"frequency": str(settings.get("schedule_frequency", DEFAULTS["schedule_frequency"])),
"time": str(settings.get("schedule_time", DEFAULTS["schedule_time"])),
"day_of_week": int(settings.get("schedule_day_of_week", DEFAULTS["schedule_day_of_week"])),
"retention_count": int(settings.get("retention_count", DEFAULTS["retention_count"])),
"cron_expression": str(settings.get("schedule_cron_expression", DEFAULTS["schedule_cron_expression"])),
}
def update_schedule_settings(data: dict) -> dict:
"""Update backup schedule settings and sync the PeriodicTask."""
# Validate
if "frequency" in data and data["frequency"] not in ("daily", "weekly"):
raise ValueError("frequency must be 'daily' or 'weekly'")
if "time" in data:
try:
hour, minute = data["time"].split(":")
int(hour)
int(minute)
except (ValueError, AttributeError):
raise ValueError("time must be in HH:MM format")
if "day_of_week" in data:
day = int(data["day_of_week"])
if day < 0 or day > 6:
raise ValueError("day_of_week must be 0-6 (Sunday-Saturday)")
if "retention_count" in data:
count = int(data["retention_count"])
if count < 0:
raise ValueError("retention_count must be >= 0")
# Update settings with proper key names
updates = {}
if "enabled" in data:
updates["schedule_enabled"] = bool(data["enabled"])
if "frequency" in data:
updates["schedule_frequency"] = str(data["frequency"])
if "time" in data:
updates["schedule_time"] = str(data["time"])
if "day_of_week" in data:
updates["schedule_day_of_week"] = int(data["day_of_week"])
if "retention_count" in data:
updates["retention_count"] = int(data["retention_count"])
if "cron_expression" in data:
updates["schedule_cron_expression"] = str(data["cron_expression"])
_update_backup_settings(updates)
# Sync the periodic task
_sync_periodic_task()
return get_schedule_settings()
def _sync_periodic_task() -> None:
"""Create, update, or delete the scheduled backup task based on settings."""
settings = get_schedule_settings()
if not settings["enabled"]:
# Delete the task if it exists
task = PeriodicTask.objects.filter(name=BACKUP_SCHEDULE_TASK_NAME).first()
if task:
old_crontab = task.crontab
task.delete()
_cleanup_orphaned_crontab(old_crontab)
logger.info("Backup schedule disabled, removed periodic task")
return
# Get old crontab before creating new one
old_crontab = None
try:
old_task = PeriodicTask.objects.get(name=BACKUP_SCHEDULE_TASK_NAME)
old_crontab = old_task.crontab
except PeriodicTask.DoesNotExist:
pass
# Check if using cron expression (advanced mode)
if settings["cron_expression"]:
# Parse cron expression: "minute hour day month weekday"
try:
parts = settings["cron_expression"].split()
if len(parts) != 5:
raise ValueError("Cron expression must have 5 parts: minute hour day month weekday")
minute, hour, day_of_month, month_of_year, day_of_week = parts
crontab, _ = CrontabSchedule.objects.get_or_create(
minute=minute,
hour=hour,
day_of_week=day_of_week,
day_of_month=day_of_month,
month_of_year=month_of_year,
timezone=CoreSettings.get_system_time_zone(),
)
except Exception as e:
logger.error(f"Invalid cron expression '{settings['cron_expression']}': {e}")
raise ValueError(f"Invalid cron expression: {e}")
else:
# Use simple frequency-based scheduling
# Parse time
hour, minute = settings["time"].split(":")
# Build crontab based on frequency
system_tz = CoreSettings.get_system_time_zone()
if settings["frequency"] == "daily":
crontab, _ = CrontabSchedule.objects.get_or_create(
minute=minute,
hour=hour,
day_of_week="*",
day_of_month="*",
month_of_year="*",
timezone=system_tz,
)
else: # weekly
crontab, _ = CrontabSchedule.objects.get_or_create(
minute=minute,
hour=hour,
day_of_week=str(settings["day_of_week"]),
day_of_month="*",
month_of_year="*",
timezone=system_tz,
)
# Create or update the periodic task
task, created = PeriodicTask.objects.update_or_create(
name=BACKUP_SCHEDULE_TASK_NAME,
defaults={
"task": "apps.backups.tasks.scheduled_backup_task",
"crontab": crontab,
"enabled": True,
"kwargs": json.dumps({"retention_count": settings["retention_count"]}),
},
)
# Clean up old crontab if it changed and is orphaned
if old_crontab and old_crontab.id != crontab.id:
_cleanup_orphaned_crontab(old_crontab)
action = "Created" if created else "Updated"
logger.info(f"{action} backup schedule: {settings['frequency']} at {settings['time']}")
def _cleanup_orphaned_crontab(crontab_schedule):
"""Delete old CrontabSchedule if no other tasks are using it."""
if crontab_schedule is None:
return
# Check if any other tasks are using this crontab
if PeriodicTask.objects.filter(crontab=crontab_schedule).exists():
logger.debug(f"CrontabSchedule {crontab_schedule.id} still in use, not deleting")
return
logger.debug(f"Cleaning up orphaned CrontabSchedule: {crontab_schedule.id}")
crontab_schedule.delete()

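Editor's note: a quick usage sketch for the scheduler module above (e.g., from a Django shell); the values follow the validation rules enforced by `update_schedule_settings()`:

```python
from apps.backups.scheduler import get_schedule_settings, update_schedule_settings

# Weekly backup on Sundays at 03:30, keeping the five most recent archives
update_schedule_settings({
    "enabled": True,
    "frequency": "weekly",
    "time": "03:30",
    "day_of_week": 0,       # 0-6, Sunday-Saturday
    "retention_count": 5,
    "cron_expression": "",  # empty string keeps the simple frequency mode
})

print(get_schedule_settings())
```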
apps/backups/services.py (Normal file, 350 lines)

@ -0,0 +1,350 @@
import datetime
import json
import os
import shutil
import subprocess
import tempfile
from pathlib import Path
from zipfile import ZipFile, ZIP_DEFLATED
import logging
import pytz
from django.conf import settings
from core.models import CoreSettings
logger = logging.getLogger(__name__)
def get_backup_dir() -> Path:
"""Get the backup directory, creating it if necessary."""
backup_dir = Path(settings.BACKUP_ROOT)
backup_dir.mkdir(parents=True, exist_ok=True)
return backup_dir
def _is_postgresql() -> bool:
"""Check if we're using PostgreSQL."""
return settings.DATABASES["default"]["ENGINE"] == "django.db.backends.postgresql"
def _get_pg_env() -> dict:
"""Get environment variables for PostgreSQL commands."""
db_config = settings.DATABASES["default"]
env = os.environ.copy()
env["PGPASSWORD"] = db_config.get("PASSWORD", "")
return env
def _get_pg_args() -> list[str]:
"""Get common PostgreSQL command arguments."""
db_config = settings.DATABASES["default"]
return [
"-h", db_config.get("HOST", "localhost"),
"-p", str(db_config.get("PORT", 5432)),
"-U", db_config.get("USER", "postgres"),
"-d", db_config.get("NAME", "dispatcharr"),
]
def _dump_postgresql(output_file: Path) -> None:
"""Dump PostgreSQL database using pg_dump."""
logger.info("Dumping PostgreSQL database with pg_dump...")
cmd = [
"pg_dump",
*_get_pg_args(),
"-Fc", # Custom format for pg_restore
"-v", # Verbose
"-f", str(output_file),
]
result = subprocess.run(
cmd,
env=_get_pg_env(),
capture_output=True,
text=True,
)
if result.returncode != 0:
logger.error(f"pg_dump failed: {result.stderr}")
raise RuntimeError(f"pg_dump failed: {result.stderr}")
logger.debug(f"pg_dump output: {result.stderr}")
def _clean_postgresql_schema() -> None:
"""Drop and recreate the public schema to ensure a completely clean restore."""
logger.info("[PG_CLEAN] Dropping and recreating public schema...")
# Commands to drop and recreate schema
sql_commands = "DROP SCHEMA IF EXISTS public CASCADE; CREATE SCHEMA public; GRANT ALL ON SCHEMA public TO public;"
cmd = [
"psql",
*_get_pg_args(),
"-c", sql_commands,
]
result = subprocess.run(
cmd,
env=_get_pg_env(),
capture_output=True,
text=True,
)
if result.returncode != 0:
logger.error(f"[PG_CLEAN] Failed to clean schema: {result.stderr}")
raise RuntimeError(f"Failed to clean PostgreSQL schema: {result.stderr}")
logger.info("[PG_CLEAN] Schema cleaned successfully")
def _restore_postgresql(dump_file: Path) -> None:
"""Restore PostgreSQL database using pg_restore."""
logger.info("[PG_RESTORE] Starting pg_restore...")
logger.info(f"[PG_RESTORE] Dump file: {dump_file}")
# Drop and recreate schema to ensure a completely clean restore
_clean_postgresql_schema()
pg_args = _get_pg_args()
logger.info(f"[PG_RESTORE] Connection args: {pg_args}")
cmd = [
"pg_restore",
"--no-owner", # Skip ownership commands (we already created schema)
*pg_args,
"-v", # Verbose
str(dump_file),
]
logger.info(f"[PG_RESTORE] Running command: {' '.join(cmd)}")
result = subprocess.run(
cmd,
env=_get_pg_env(),
capture_output=True,
text=True,
)
logger.info(f"[PG_RESTORE] Return code: {result.returncode}")
# pg_restore may return non-zero even on partial success
# Check for actual errors vs warnings
if result.returncode != 0:
# Some errors during restore are expected (e.g., "does not exist" when cleaning)
# Only fail on critical errors
stderr = result.stderr.lower()
if "fatal" in stderr or "could not connect" in stderr:
logger.error(f"[PG_RESTORE] Failed critically: {result.stderr}")
raise RuntimeError(f"pg_restore failed: {result.stderr}")
else:
logger.warning(f"[PG_RESTORE] Completed with warnings: {result.stderr[:500]}...")
logger.info("[PG_RESTORE] Completed successfully")
def _dump_sqlite(output_file: Path) -> None:
"""Dump SQLite database using sqlite3 .backup command."""
logger.info("Dumping SQLite database with sqlite3 .backup...")
db_path = Path(settings.DATABASES["default"]["NAME"])
if not db_path.exists():
raise FileNotFoundError(f"SQLite database not found: {db_path}")
# Use sqlite3 .backup command via stdin for reliable execution
result = subprocess.run(
["sqlite3", str(db_path)],
input=f".backup '{output_file}'\n",
capture_output=True,
text=True,
)
if result.returncode != 0:
logger.error(f"sqlite3 backup failed: {result.stderr}")
raise RuntimeError(f"sqlite3 backup failed: {result.stderr}")
# Verify the backup file was created
if not output_file.exists():
raise RuntimeError("sqlite3 backup failed: output file not created")
logger.info(f"sqlite3 backup completed successfully: {output_file}")
def _restore_sqlite(dump_file: Path) -> None:
"""Restore SQLite database by replacing the database file."""
logger.info("Restoring SQLite database...")
db_path = Path(settings.DATABASES["default"]["NAME"])
backup_current = None
# Backup current database before overwriting
if db_path.exists():
backup_current = db_path.with_suffix(".db.bak")
shutil.copy2(db_path, backup_current)
logger.info(f"Backed up current database to {backup_current}")
# Ensure parent directory exists
db_path.parent.mkdir(parents=True, exist_ok=True)
# The backup file from _dump_sqlite is a complete SQLite database file
# We can simply copy it over the existing database
shutil.copy2(dump_file, db_path)
# Verify the restore worked by checking if sqlite3 can read it
result = subprocess.run(
["sqlite3", str(db_path)],
input=".tables\n",
capture_output=True,
text=True,
)
if result.returncode != 0:
logger.error(f"sqlite3 verification failed: {result.stderr}")
# Try to restore from backup
if backup_current and backup_current.exists():
shutil.copy2(backup_current, db_path)
logger.info("Restored original database from backup")
raise RuntimeError(f"sqlite3 restore verification failed: {result.stderr}")
logger.info("sqlite3 restore completed successfully")
def create_backup() -> Path:
"""
Create a backup archive containing database dump and data directories.
Returns the path to the created backup file.
"""
backup_dir = get_backup_dir()
# Use system timezone for filename (user-friendly), but keep internal timestamps as UTC
system_tz_name = CoreSettings.get_system_time_zone()
try:
system_tz = pytz.timezone(system_tz_name)
now_local = datetime.datetime.now(datetime.UTC).astimezone(system_tz)
timestamp = now_local.strftime("%Y.%m.%d.%H.%M.%S")
except Exception as e:
logger.warning(f"Failed to use system timezone {system_tz_name}: {e}, falling back to UTC")
timestamp = datetime.datetime.now(datetime.UTC).strftime("%Y.%m.%d.%H.%M.%S")
backup_name = f"dispatcharr-backup-{timestamp}.zip"
backup_file = backup_dir / backup_name
logger.info(f"Creating backup: {backup_name}")
with tempfile.TemporaryDirectory(prefix="dispatcharr-backup-") as temp_dir:
temp_path = Path(temp_dir)
# Determine database type and dump accordingly
if _is_postgresql():
db_dump_file = temp_path / "database.dump"
_dump_postgresql(db_dump_file)
db_type = "postgresql"
else:
db_dump_file = temp_path / "database.sqlite3"
_dump_sqlite(db_dump_file)
db_type = "sqlite"
# Create ZIP archive with compression and ZIP64 support for large files
with ZipFile(backup_file, "w", compression=ZIP_DEFLATED, allowZip64=True) as zip_file:
# Add database dump
zip_file.write(db_dump_file, db_dump_file.name)
# Add metadata
metadata = {
"format": "dispatcharr-backup",
"version": 2,
"database_type": db_type,
"database_file": db_dump_file.name,
"created_at": datetime.datetime.now(datetime.UTC).isoformat(),
}
zip_file.writestr("metadata.json", json.dumps(metadata, indent=2))
logger.info(f"Backup created successfully: {backup_file}")
return backup_file
def restore_backup(backup_file: Path) -> None:
"""
Restore from a backup archive.
WARNING: This will overwrite the database!
"""
if not backup_file.exists():
raise FileNotFoundError(f"Backup file not found: {backup_file}")
logger.info(f"Restoring from backup: {backup_file}")
with tempfile.TemporaryDirectory(prefix="dispatcharr-restore-") as temp_dir:
temp_path = Path(temp_dir)
# Extract backup
logger.debug("Extracting backup archive...")
with ZipFile(backup_file, "r") as zip_file:
zip_file.extractall(temp_path)
# Read metadata
metadata_file = temp_path / "metadata.json"
if not metadata_file.exists():
raise ValueError("Invalid backup: missing metadata.json")
with open(metadata_file) as f:
metadata = json.load(f)
# Restore database
_restore_database(temp_path, metadata)
logger.info("Restore completed successfully")
def _restore_database(temp_path: Path, metadata: dict) -> None:
"""Restore database from backup."""
db_type = metadata.get("database_type", "postgresql")
db_file = metadata.get("database_file", "database.dump")
dump_file = temp_path / db_file
if not dump_file.exists():
raise ValueError(f"Invalid backup: missing {db_file}")
current_db_type = "postgresql" if _is_postgresql() else "sqlite"
if db_type != current_db_type:
raise ValueError(
f"Database type mismatch: backup is {db_type}, "
f"but current database is {current_db_type}"
)
if db_type == "postgresql":
_restore_postgresql(dump_file)
else:
_restore_sqlite(dump_file)
def list_backups() -> list[dict]:
"""List all available backup files with metadata."""
backup_dir = get_backup_dir()
backups = []
for backup_file in sorted(backup_dir.glob("dispatcharr-backup-*.zip"), reverse=True):
# Use UTC timezone so frontend can convert to user's local time
created_time = datetime.datetime.fromtimestamp(backup_file.stat().st_mtime, datetime.UTC)
backups.append({
"name": backup_file.name,
"size": backup_file.stat().st_size,
"created": created_time.isoformat(),
})
return backups
def delete_backup(filename: str) -> None:
"""Delete a backup file."""
backup_dir = get_backup_dir()
backup_file = backup_dir / filename
if not backup_file.exists():
raise FileNotFoundError(f"Backup file not found: {filename}")
if not backup_file.is_file():
raise ValueError(f"Invalid backup file: {filename}")
backup_file.unlink()
logger.info(f"Deleted backup: {filename}")

apps/backups/tasks.py (Normal file, 106 lines)

@ -0,0 +1,106 @@
import logging
import traceback
from celery import shared_task
from . import services
logger = logging.getLogger(__name__)
def _cleanup_old_backups(retention_count: int) -> int:
"""Delete old backups, keeping only the most recent N. Returns count deleted."""
if retention_count <= 0:
return 0
backups = services.list_backups()
if len(backups) <= retention_count:
return 0
# Backups are sorted newest first, so delete from the end
to_delete = backups[retention_count:]
deleted = 0
for backup in to_delete:
try:
services.delete_backup(backup["name"])
deleted += 1
logger.info(f"[CLEANUP] Deleted old backup: {backup['name']}")
except Exception as e:
logger.error(f"[CLEANUP] Failed to delete {backup['name']}: {e}")
return deleted
@shared_task(bind=True)
def create_backup_task(self):
"""Celery task to create a backup asynchronously."""
try:
logger.info(f"[BACKUP] Starting backup task {self.request.id}")
backup_file = services.create_backup()
logger.info(f"[BACKUP] Task {self.request.id} completed: {backup_file.name}")
return {
"status": "completed",
"filename": backup_file.name,
"size": backup_file.stat().st_size,
}
except Exception as e:
logger.error(f"[BACKUP] Task {self.request.id} failed: {str(e)}")
logger.error(f"[BACKUP] Traceback: {traceback.format_exc()}")
return {
"status": "failed",
"error": str(e),
}
@shared_task(bind=True)
def restore_backup_task(self, filename: str):
"""Celery task to restore a backup asynchronously."""
try:
logger.info(f"[RESTORE] Starting restore task {self.request.id} for {filename}")
backup_dir = services.get_backup_dir()
backup_file = backup_dir / filename
logger.info(f"[RESTORE] Backup file path: {backup_file}")
services.restore_backup(backup_file)
logger.info(f"[RESTORE] Task {self.request.id} completed successfully")
return {
"status": "completed",
"filename": filename,
}
except Exception as e:
logger.error(f"[RESTORE] Task {self.request.id} failed: {str(e)}")
logger.error(f"[RESTORE] Traceback: {traceback.format_exc()}")
return {
"status": "failed",
"error": str(e),
}
@shared_task(bind=True)
def scheduled_backup_task(self, retention_count: int = 0):
"""Celery task for scheduled backups with optional retention cleanup."""
try:
logger.info(f"[SCHEDULED] Starting scheduled backup task {self.request.id}")
# Create backup
backup_file = services.create_backup()
logger.info(f"[SCHEDULED] Backup created: {backup_file.name}")
# Cleanup old backups if retention is set
deleted = 0
if retention_count > 0:
deleted = _cleanup_old_backups(retention_count)
logger.info(f"[SCHEDULED] Cleanup complete, deleted {deleted} old backup(s)")
return {
"status": "completed",
"filename": backup_file.name,
"size": backup_file.stat().st_size,
"deleted_count": deleted,
}
except Exception as e:
logger.error(f"[SCHEDULED] Task {self.request.id} failed: {str(e)}")
logger.error(f"[SCHEDULED] Traceback: {traceback.format_exc()}")
return {
"status": "failed",
"error": str(e),
}

apps/backups/tests.py (Normal file, 1163 lines)
File diff suppressed because it is too large


@ -47,7 +47,7 @@ urlpatterns = [
path('series-rules/', SeriesRulesAPIView.as_view(), name='series_rules'),
path('series-rules/evaluate/', EvaluateSeriesRulesAPIView.as_view(), name='evaluate_series_rules'),
path('series-rules/bulk-remove/', BulkRemoveSeriesRecordingsAPIView.as_view(), name='bulk_remove_series_recordings'),
path('series-rules/<str:tvg_id>/', DeleteSeriesRuleAPIView.as_view(), name='delete_series_rule'),
path('series-rules/<path:tvg_id>/', DeleteSeriesRuleAPIView.as_view(), name='delete_series_rule'),
path('recordings/bulk-delete-upcoming/', BulkDeleteUpcomingRecordingsAPIView.as_view(), name='bulk_delete_upcoming_recordings'),
path('dvr/comskip-config/', ComskipConfigAPIView.as_view(), name='comskip_config'),
]


@ -8,7 +8,10 @@ from drf_yasg.utils import swagger_auto_schema
from drf_yasg import openapi
from django.shortcuts import get_object_or_404, get_list_or_404
from django.db import transaction
import os, json, requests, logging
from django.db.models import Q
import os, json, requests, logging, mimetypes
from django.utils.http import http_date
from urllib.parse import unquote
from apps.accounts.permissions import (
Authenticated,
IsAdmin,
@ -124,10 +127,12 @@ class StreamViewSet(viewsets.ModelViewSet):
filter_backends = [DjangoFilterBackend, SearchFilter, OrderingFilter]
filterset_class = StreamFilter
search_fields = ["name", "channel_group__name"]
ordering_fields = ["name", "channel_group__name"]
ordering_fields = ["name", "channel_group__name", "m3u_account__name"]
ordering = ["-name"]
def get_permissions(self):
if self.action == "duplicate":
return [IsAdmin()]
try:
return [perm() for perm in permission_classes_by_action[self.action]]
except KeyError:
@ -234,12 +239,8 @@ class ChannelGroupViewSet(viewsets.ModelViewSet):
return [Authenticated()]
def get_queryset(self):
"""Add annotation for association counts"""
from django.db.models import Count
return ChannelGroup.objects.annotate(
channel_count=Count('channels', distinct=True),
m3u_account_count=Count('m3u_accounts', distinct=True)
)
"""Return channel groups with prefetched relations for efficient counting"""
return ChannelGroup.objects.prefetch_related('channels', 'm3u_accounts').all()
def update(self, request, *args, **kwargs):
"""Override update to check M3U associations"""
@ -275,15 +276,20 @@ class ChannelGroupViewSet(viewsets.ModelViewSet):
@action(detail=False, methods=["post"], url_path="cleanup")
def cleanup_unused_groups(self, request):
"""Delete all channel groups with no channels or M3U account associations"""
from django.db.models import Count
from django.db.models import Q, Exists, OuterRef
# Find groups with no channels and no M3U account associations using Exists subqueries
from .models import Channel, ChannelGroupM3UAccount
has_channels = Channel.objects.filter(channel_group_id=OuterRef('pk'))
has_accounts = ChannelGroupM3UAccount.objects.filter(channel_group_id=OuterRef('pk'))
# Find groups with no channels and no M3U account associations
unused_groups = ChannelGroup.objects.annotate(
channel_count=Count('channels', distinct=True),
m3u_account_count=Count('m3u_accounts', distinct=True)
has_channels=Exists(has_channels),
has_accounts=Exists(has_accounts)
).filter(
channel_count=0,
m3u_account_count=0
has_channels=False,
has_accounts=False
)
deleted_count = unused_groups.count()
@ -384,6 +390,72 @@ class ChannelViewSet(viewsets.ModelViewSet):
ordering_fields = ["channel_number", "name", "channel_group__name"]
ordering = ["-channel_number"]
def create(self, request, *args, **kwargs):
"""Override create to handle channel profile membership"""
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
with transaction.atomic():
channel = serializer.save()
# Handle channel profile membership
# Semantics:
# - Omitted (None): add to ALL profiles (backward compatible default)
# - Empty array []: add to NO profiles
# - Sentinel [0] or 0: add to ALL profiles (explicit)
# - [1,2,...]: add to specified profile IDs only
channel_profile_ids = request.data.get("channel_profile_ids")
if channel_profile_ids is not None:
# Normalize single ID to array
if not isinstance(channel_profile_ids, list):
channel_profile_ids = [channel_profile_ids]
# Determine action based on semantics
if channel_profile_ids is None:
# Omitted -> add to all profiles (backward compatible)
profiles = ChannelProfile.objects.all()
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(channel_profile=profile, channel=channel, enabled=True)
for profile in profiles
])
elif isinstance(channel_profile_ids, list) and len(channel_profile_ids) == 0:
# Empty array -> add to no profiles
pass
elif isinstance(channel_profile_ids, list) and 0 in channel_profile_ids:
# Sentinel 0 -> add to all profiles (explicit)
profiles = ChannelProfile.objects.all()
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(channel_profile=profile, channel=channel, enabled=True)
for profile in profiles
])
else:
# Specific profile IDs
try:
channel_profiles = ChannelProfile.objects.filter(id__in=channel_profile_ids)
if len(channel_profiles) != len(channel_profile_ids):
missing_ids = set(channel_profile_ids) - set(channel_profiles.values_list('id', flat=True))
return Response(
{"error": f"Channel profiles with IDs {list(missing_ids)} not found"},
status=status.HTTP_400_BAD_REQUEST,
)
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(
channel_profile=profile,
channel=channel,
enabled=True
)
for profile in channel_profiles
])
except Exception as e:
return Response(
{"error": f"Error creating profile memberships: {str(e)}"},
status=status.HTTP_400_BAD_REQUEST,
)
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
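# --- Editor's note (illustrative, not part of the diff): example request payloads for the
# --- channel_profile_ids semantics documented in the comments above (endpoint path omitted).
#   {"name": "News HD"}                                -> added to ALL profiles (field omitted)
#   {"name": "News HD", "channel_profile_ids": []}     -> added to NO profiles
#   {"name": "News HD", "channel_profile_ids": [0]}    -> added to ALL profiles (explicit sentinel)
#   {"name": "News HD", "channel_profile_ids": [3, 7]} -> added only to profiles 3 and 7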
def get_permissions(self):
if self.action in [
"edit_bulk",
@ -419,10 +491,41 @@ class ChannelViewSet(viewsets.ModelViewSet):
group_names = channel_group.split(",")
qs = qs.filter(channel_group__name__in=group_names)
if self.request.user.user_level < 10:
qs = qs.filter(user_level__lte=self.request.user.user_level)
filters = {}
q_filters = Q()
return qs
channel_profile_id = self.request.query_params.get("channel_profile_id")
show_disabled_param = self.request.query_params.get("show_disabled", None)
only_streamless = self.request.query_params.get("only_streamless", None)
if channel_profile_id:
try:
profile_id_int = int(channel_profile_id)
if show_disabled_param is None:
# Show only enabled channels: channels that have a membership
# record for this profile with enabled=True
# Default is DISABLED (channels without membership are hidden)
filters["channelprofilemembership__channel_profile_id"] = profile_id_int
filters["channelprofilemembership__enabled"] = True
# If show_disabled is True, show all channels (no filtering needed)
except (ValueError, TypeError):
# Ignore invalid profile id values
pass
if only_streamless:
q_filters &= Q(streams__isnull=True)
if self.request.user.user_level < 10:
filters["user_level__lte"] = self.request.user.user_level
if filters:
qs = qs.filter(**filters)
if q_filters:
qs = qs.filter(q_filters)
return qs.distinct()
def get_serializer_context(self):
context = super().get_serializer_context()
@ -518,11 +621,18 @@ class ChannelViewSet(viewsets.ModelViewSet):
# Single bulk_update query instead of individual saves
channels_to_update = [channel for channel, _ in validated_updates]
if channels_to_update:
Channel.objects.bulk_update(
channels_to_update,
fields=list(validated_updates[0][1].keys()),
batch_size=100
)
# Collect all unique field names from all updates
all_fields = set()
for _, validated_data in validated_updates:
all_fields.update(validated_data.keys())
# Only call bulk_update if there are fields to update
if all_fields:
Channel.objects.bulk_update(
channels_to_update,
fields=list(all_fields),
batch_size=100
)
# Return the updated objects (already in memory)
serialized_channels = ChannelSerializer(
@ -707,7 +817,7 @@ class ChannelViewSet(viewsets.ModelViewSet):
"channel_profile_ids": openapi.Schema(
type=openapi.TYPE_ARRAY,
items=openapi.Items(type=openapi.TYPE_INTEGER),
description="(Optional) Channel profile ID(s) to add the channel to. Can be a single ID or array of IDs. If not provided, channel is added to all profiles."
description="(Optional) Channel profile ID(s). Behavior: omitted = add to ALL profiles (default); empty array [] = add to NO profiles; [0] = add to ALL profiles (explicit); [1,2,...] = add only to specified profiles."
),
},
),
@ -800,14 +910,37 @@ class ChannelViewSet(viewsets.ModelViewSet):
channel.streams.add(stream)
# Handle channel profile membership
# Semantics:
# - Omitted (None): add to ALL profiles (backward compatible default)
# - Empty array []: add to NO profiles
# - Sentinel [0] or 0: add to ALL profiles (explicit)
# - [1,2,...]: add to specified profile IDs only
channel_profile_ids = request.data.get("channel_profile_ids")
if channel_profile_ids is not None:
# Normalize single ID to array
if not isinstance(channel_profile_ids, list):
channel_profile_ids = [channel_profile_ids]
if channel_profile_ids:
# Add channel only to the specified profiles
# Determine action based on semantics
if channel_profile_ids is None:
# Omitted -> add to all profiles (backward compatible)
profiles = ChannelProfile.objects.all()
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(channel_profile=profile, channel=channel, enabled=True)
for profile in profiles
])
elif isinstance(channel_profile_ids, list) and len(channel_profile_ids) == 0:
# Empty array -> add to no profiles
pass
elif isinstance(channel_profile_ids, list) and 0 in channel_profile_ids:
# Sentinel 0 -> add to all profiles (explicit)
profiles = ChannelProfile.objects.all()
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(channel_profile=profile, channel=channel, enabled=True)
for profile in profiles
])
else:
# Specific profile IDs
try:
channel_profiles = ChannelProfile.objects.filter(id__in=channel_profile_ids)
if len(channel_profiles) != len(channel_profile_ids):
@ -830,13 +963,6 @@ class ChannelViewSet(viewsets.ModelViewSet):
{"error": f"Error creating profile memberships: {str(e)}"},
status=status.HTTP_400_BAD_REQUEST,
)
else:
# Default behavior: add to all profiles
profiles = ChannelProfile.objects.all()
ChannelProfileMembership.objects.bulk_create([
ChannelProfileMembership(channel_profile=profile, channel=channel, enabled=True)
for profile in profiles
])
# Send WebSocket notification for single channel creation
from core.utils import send_websocket_update
@ -869,7 +995,7 @@ class ChannelViewSet(viewsets.ModelViewSet):
"channel_profile_ids": openapi.Schema(
type=openapi.TYPE_ARRAY,
items=openapi.Items(type=openapi.TYPE_INTEGER),
description="(Optional) Channel profile ID(s) to add the channels to. If not provided, channels are added to all profiles."
description="(Optional) Channel profile ID(s). Behavior: omitted = add to ALL profiles (default); empty array [] = add to NO profiles; [0] = add to ALL profiles (explicit); [1,2,...] = add only to specified profiles."
),
"starting_channel_number": openapi.Schema(
type=openapi.TYPE_INTEGER,
@ -1528,11 +1654,10 @@ class LogoViewSet(viewsets.ModelViewSet):
"""Streams the logo file, whether it's local or remote."""
logo = self.get_object()
logo_url = logo.url
if logo_url.startswith("/data"): # Local file
if not os.path.exists(logo_url):
raise Http404("Image not found")
stat = os.stat(logo_url)
# Get proper mime type (first item of the tuple)
content_type, _ = mimetypes.guess_type(logo_url)
if not content_type:
@ -1542,6 +1667,8 @@ class LogoViewSet(viewsets.ModelViewSet):
response = StreamingHttpResponse(
open(logo_url, "rb"), content_type=content_type
)
response["Cache-Control"] = "public, max-age=14400" # Cache in browser for 4 hours
response["Last-Modified"] = http_date(stat.st_mtime)
response["Content-Disposition"] = 'inline; filename="{}"'.format(
os.path.basename(logo_url)
)
@ -1581,6 +1708,10 @@ class LogoViewSet(viewsets.ModelViewSet):
remote_response.iter_content(chunk_size=8192),
content_type=content_type,
)
if(remote_response.headers.get("Cache-Control")):
response["Cache-Control"] = remote_response.headers.get("Cache-Control")
if(remote_response.headers.get("Last-Modified")):
response["Last-Modified"] = remote_response.headers.get("Last-Modified")
response["Content-Disposition"] = 'inline; filename="{}"'.format(
os.path.basename(logo_url)
)
@ -1612,11 +1743,58 @@ class ChannelProfileViewSet(viewsets.ModelViewSet):
return self.request.user.channel_profiles.all()
def get_permissions(self):
if self.action == "duplicate":
return [IsAdmin()]
try:
return [perm() for perm in permission_classes_by_action[self.action]]
except KeyError:
return [Authenticated()]
@action(detail=True, methods=["post"], url_path="duplicate", permission_classes=[IsAdmin])
def duplicate(self, request, pk=None):
requested_name = str(request.data.get("name", "")).strip()
if not requested_name:
return Response(
{"detail": "Name is required to duplicate a profile."},
status=status.HTTP_400_BAD_REQUEST,
)
if ChannelProfile.objects.filter(name=requested_name).exists():
return Response(
{"detail": "A channel profile with this name already exists."},
status=status.HTTP_400_BAD_REQUEST,
)
source_profile = self.get_object()
with transaction.atomic():
new_profile = ChannelProfile.objects.create(name=requested_name)
source_memberships = ChannelProfileMembership.objects.filter(
channel_profile=source_profile
)
source_enabled_map = {
membership.channel_id: membership.enabled
for membership in source_memberships
}
new_memberships = list(
ChannelProfileMembership.objects.filter(channel_profile=new_profile)
)
for membership in new_memberships:
membership.enabled = source_enabled_map.get(
membership.channel_id, False
)
if new_memberships:
ChannelProfileMembership.objects.bulk_update(
new_memberships, ["enabled"]
)
serializer = self.get_serializer(new_profile)
return Response(serializer.data, status=status.HTTP_201_CREATED)
class GetChannelStreamsAPIView(APIView):
def get_permissions(self):
@ -1673,6 +1851,30 @@ class BulkUpdateChannelMembershipAPIView(APIView):
except KeyError:
return [Authenticated()]
@swagger_auto_schema(
operation_description="Bulk enable or disable channels for a specific profile. Creates membership records if they don't exist.",
request_body=BulkChannelProfileMembershipSerializer,
responses={
200: openapi.Response(
description="Channels updated successfully",
schema=openapi.Schema(
type=openapi.TYPE_OBJECT,
properties={
"status": openapi.Schema(type=openapi.TYPE_STRING, example="success"),
"updated": openapi.Schema(type=openapi.TYPE_INTEGER, description="Number of channels updated"),
"created": openapi.Schema(type=openapi.TYPE_INTEGER, description="Number of new memberships created"),
"invalid_channels": openapi.Schema(
type=openapi.TYPE_ARRAY,
items=openapi.Schema(type=openapi.TYPE_INTEGER),
description="List of channel IDs that don't exist"
),
},
),
),
400: "Invalid request data",
404: "Profile not found",
},
)
def patch(self, request, profile_id):
"""Bulk enable or disable channels for a specific profile"""
# Get the channel profile
@ -1685,21 +1887,67 @@ class BulkUpdateChannelMembershipAPIView(APIView):
updates = serializer.validated_data["channels"]
channel_ids = [entry["channel_id"] for entry in updates]
memberships = ChannelProfileMembership.objects.filter(
# Validate that all channels exist
existing_channels = set(
Channel.objects.filter(id__in=channel_ids).values_list("id", flat=True)
)
invalid_channels = [cid for cid in channel_ids if cid not in existing_channels]
if invalid_channels:
return Response(
{
"error": "Some channels do not exist",
"invalid_channels": invalid_channels,
},
status=status.HTTP_400_BAD_REQUEST,
)
# Get existing memberships
existing_memberships = ChannelProfileMembership.objects.filter(
channel_profile=channel_profile, channel_id__in=channel_ids
)
membership_dict = {m.channel_id: m for m in existing_memberships}
membership_dict = {m.channel.id: m for m in memberships}
# Prepare lists for bulk operations
memberships_to_update = []
memberships_to_create = []
for entry in updates:
channel_id = entry["channel_id"]
enabled_status = entry["enabled"]
if channel_id in membership_dict:
# Update existing membership
membership_dict[channel_id].enabled = enabled_status
memberships_to_update.append(membership_dict[channel_id])
else:
# Create new membership
memberships_to_create.append(
ChannelProfileMembership(
channel_profile=channel_profile,
channel_id=channel_id,
enabled=enabled_status,
)
)
ChannelProfileMembership.objects.bulk_update(memberships, ["enabled"])
# Perform bulk operations
with transaction.atomic():
if memberships_to_update:
ChannelProfileMembership.objects.bulk_update(
memberships_to_update, ["enabled"]
)
if memberships_to_create:
ChannelProfileMembership.objects.bulk_create(memberships_to_create)
return Response({"status": "success"}, status=status.HTTP_200_OK)
return Response(
{
"status": "success",
"updated": len(memberships_to_update),
"created": len(memberships_to_create),
"invalid_channels": [],
},
status=status.HTTP_200_OK,
)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
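For reference, the request and response shapes this endpoint now documents, sketched as a client call; the route shown is an assumption, while the field names follow the serializer and Swagger schema above.
# Illustrative payloads for the bulk membership PATCH (URL is an assumption).
import requests

payload = {
    "channels": [
        {"channel_id": 12, "enabled": True},
        {"channel_id": 34, "enabled": False},
    ]
}

resp = requests.patch(
    "http://localhost:9191/api/channels/profiles/5/channels/bulk/",  # assumed route
    json=payload,
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)

# On success: {"status": "success", "updated": 1, "created": 1, "invalid_channels": []}
# On unknown IDs: 400 with {"error": "Some channels do not exist", "invalid_channels": [...]}
print(resp.json())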
@ -1745,7 +1993,7 @@ class RecordingViewSet(viewsets.ModelViewSet):
def get_permissions(self):
# Allow unauthenticated playback of recording files (like other streaming endpoints)
if getattr(self, 'action', None) == 'file':
if self.action == 'file':
return [AllowAny()]
try:
return [perm() for perm in permission_classes_by_action[self.action]]
@ -2026,7 +2274,7 @@ class DeleteSeriesRuleAPIView(APIView):
return [Authenticated()]
def delete(self, request, tvg_id):
tvg_id = str(tvg_id)
tvg_id = unquote(str(tvg_id or ""))
rules = [r for r in CoreSettings.get_dvr_series_rules() if str(r.get("tvg_id")) != tvg_id]
CoreSettings.set_dvr_series_rules(rules)
return Response({"success": True, "rules": rules})


@ -0,0 +1,29 @@
# Generated by Django 5.2.9 on 2026-01-09 18:19
import datetime
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('dispatcharr_channels', '0030_alter_stream_url'),
]
operations = [
migrations.AddField(
model_name='channelgroupm3uaccount',
name='is_stale',
field=models.BooleanField(db_index=True, default=False, help_text='Whether this group relationship is stale (not seen in recent refresh, pending deletion)'),
),
migrations.AddField(
model_name='channelgroupm3uaccount',
name='last_seen',
field=models.DateTimeField(db_index=True, default=datetime.datetime.now, help_text='Last time this group was seen in the M3U source during a refresh'),
),
migrations.AddField(
model_name='stream',
name='is_stale',
field=models.BooleanField(db_index=True, default=False, help_text='Whether this stream is stale (not seen in recent refresh, pending deletion)'),
),
]


@ -94,6 +94,11 @@ class Stream(models.Model):
db_index=True,
)
last_seen = models.DateTimeField(db_index=True, default=datetime.now)
is_stale = models.BooleanField(
default=False,
db_index=True,
help_text="Whether this stream is stale (not seen in recent refresh, pending deletion)"
)
custom_properties = models.JSONField(default=dict, blank=True, null=True)
# Stream statistics fields
@ -119,11 +124,11 @@ class Stream(models.Model):
return self.name or self.url or f"Stream ID {self.id}"
@classmethod
def generate_hash_key(cls, name, url, tvg_id, keys=None, m3u_id=None):
def generate_hash_key(cls, name, url, tvg_id, keys=None, m3u_id=None, group=None):
if keys is None:
keys = CoreSettings.get_m3u_hash_key().split(",")
stream_parts = {"name": name, "url": url, "tvg_id": tvg_id, "m3u_id": m3u_id}
stream_parts = {"name": name, "url": url, "tvg_id": tvg_id, "m3u_id": m3u_id, "group": group}
hash_parts = {key: stream_parts[key] for key in keys if key in stream_parts}
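The digest step itself is outside this hunk; a minimal stand-alone sketch of the idea, assuming an MD5 over the selected parts (the real implementation may hash differently):
# Sketch: only the configured hash keys feed the digest, so adding "group" to the
# configured keys means a group change produces a different stream hash.
import hashlib

def sketch_hash_key(name, url, tvg_id, keys, m3u_id=None, group=None):
    stream_parts = {"name": name, "url": url, "tvg_id": tvg_id,
                    "m3u_id": m3u_id, "group": group}
    hash_parts = {key: stream_parts[key] for key in keys if key in stream_parts}
    joined = "|".join(str(hash_parts[k]) for k in sorted(hash_parts))  # join order is an assumption
    return hashlib.md5(joined.encode()).hexdigest()

print(sketch_hash_key("CNN", "http://host/1.ts", "cnn.us",
                      ["name", "url", "group"], group="News"))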
@ -589,6 +594,16 @@ class ChannelGroupM3UAccount(models.Model):
blank=True,
help_text='Starting channel number for auto-created channels in this group'
)
last_seen = models.DateTimeField(
default=datetime.now,
db_index=True,
help_text='Last time this group was seen in the M3U source during a refresh'
)
is_stale = models.BooleanField(
default=False,
db_index=True,
help_text='Whether this group relationship is stale (not seen in recent refresh, pending deletion)'
)
class Meta:
unique_together = ("channel_group", "m3u_account")


@ -119,6 +119,7 @@ class StreamSerializer(serializers.ModelSerializer):
"current_viewers",
"updated_at",
"last_seen",
"is_stale",
"stream_profile_id",
"is_custom",
"channel_group",
@ -155,7 +156,7 @@ class ChannelGroupM3UAccountSerializer(serializers.ModelSerializer):
class Meta:
model = ChannelGroupM3UAccount
fields = ["m3u_accounts", "channel_group", "enabled", "auto_channel_sync", "auto_sync_channel_start", "custom_properties"]
fields = ["m3u_accounts", "channel_group", "enabled", "auto_channel_sync", "auto_sync_channel_start", "custom_properties", "is_stale", "last_seen"]
def to_representation(self, instance):
data = super().to_representation(instance)
@ -179,8 +180,8 @@ class ChannelGroupM3UAccountSerializer(serializers.ModelSerializer):
# Channel Group
#
class ChannelGroupSerializer(serializers.ModelSerializer):
channel_count = serializers.IntegerField(read_only=True)
m3u_account_count = serializers.IntegerField(read_only=True)
channel_count = serializers.SerializerMethodField()
m3u_account_count = serializers.SerializerMethodField()
m3u_accounts = ChannelGroupM3UAccountSerializer(
many=True,
read_only=True
@ -190,6 +191,14 @@ class ChannelGroupSerializer(serializers.ModelSerializer):
model = ChannelGroup
fields = ["id", "name", "channel_count", "m3u_account_count", "m3u_accounts"]
def get_channel_count(self, obj):
"""Get count of channels in this group"""
return obj.channels.count()
def get_m3u_account_count(self, obj):
"""Get count of M3U accounts associated with this group"""
return obj.m3u_accounts.count()
class ChannelProfileSerializer(serializers.ModelSerializer):
channels = serializers.SerializerMethodField()


@ -295,7 +295,11 @@ def match_channels_to_epg(channels_data, epg_data, region_code=None, use_ml=True
if score > 50: # Only show decent matches
logger.debug(f" EPG '{row['name']}' (norm: '{row['norm_name']}') => score: {score} (base: {base_score}, bonus: {bonus})")
if score > best_score:
# When scores are equal, prefer higher priority EPG source
row_priority = row.get('epg_source_priority', 0)
best_priority = best_epg.get('epg_source_priority', 0) if best_epg else -1
if score > best_score or (score == best_score and row_priority > best_priority):
best_score = score
best_epg = row
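Taken on its own, the tie-break is: keep the candidate with the higher score, and on equal scores prefer the higher-priority EPG source. A small self-contained illustration:
# Tie-break illustration: equal fuzzy scores fall back to EPG source priority.
candidates = [
    {"name": "CNN HD",   "score": 92, "epg_source_priority": 0},
    {"name": "CNN",      "score": 92, "epg_source_priority": 5},
    {"name": "CNN Intl", "score": 80, "epg_source_priority": 9},
]

best = None
for row in candidates:
    row_priority = row.get("epg_source_priority", 0)
    best_priority = best.get("epg_source_priority", 0) if best else -1
    if best is None or row["score"] > best["score"] or (
        row["score"] == best["score"] and row_priority > best_priority
    ):
        best = row

print(best["name"])  # "CNN": same score as "CNN HD", but from a higher-priority source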
@ -471,9 +475,9 @@ def match_epg_channels():
"norm_chan": normalize_name(channel.name) # Always use channel name for fuzzy matching!
})
# Get all EPG data
# Get all EPG data from active sources, ordered by source priority (highest first) so we prefer higher priority matches
epg_data = []
for epg in EPGData.objects.all():
for epg in EPGData.objects.select_related('epg_source').filter(epg_source__is_active=True):
normalized_tvg_id = epg.tvg_id.strip().lower() if epg.tvg_id else ""
epg_data.append({
'id': epg.id,
@ -482,9 +486,13 @@ def match_epg_channels():
'name': epg.name,
'norm_name': normalize_name(epg.name),
'epg_source_id': epg.epg_source.id if epg.epg_source else None,
'epg_source_priority': epg.epg_source.priority if epg.epg_source else 0,
})
logger.info(f"Processing {len(channels_data)} channels against {len(epg_data)} EPG entries")
# Sort EPG data by source priority (highest first) so we prefer higher priority matches
epg_data.sort(key=lambda x: x['epg_source_priority'], reverse=True)
logger.info(f"Processing {len(channels_data)} channels against {len(epg_data)} EPG entries (from active sources only)")
# Run EPG matching with progress updates - automatically uses conservative thresholds for bulk operations
result = match_channels_to_epg(channels_data, epg_data, region_code, use_ml=True, send_progress=True)
@ -618,9 +626,9 @@ def match_selected_channels_epg(channel_ids):
"norm_chan": normalize_name(channel.name)
})
# Get all EPG data
# Get all EPG data from active sources, ordered by source priority (highest first) so we prefer higher priority matches
epg_data = []
for epg in EPGData.objects.all():
for epg in EPGData.objects.select_related('epg_source').filter(epg_source__is_active=True):
normalized_tvg_id = epg.tvg_id.strip().lower() if epg.tvg_id else ""
epg_data.append({
'id': epg.id,
@ -629,9 +637,13 @@ def match_selected_channels_epg(channel_ids):
'name': epg.name,
'norm_name': normalize_name(epg.name),
'epg_source_id': epg.epg_source.id if epg.epg_source else None,
'epg_source_priority': epg.epg_source.priority if epg.epg_source else 0,
})
logger.info(f"Processing {len(channels_data)} selected channels against {len(epg_data)} EPG entries")
# Sort EPG data by source priority (highest first) so we prefer higher priority matches
epg_data.sort(key=lambda x: x['epg_source_priority'], reverse=True)
logger.info(f"Processing {len(channels_data)} selected channels against {len(epg_data)} EPG entries (from active sources only)")
# Run EPG matching with progress updates - automatically uses appropriate thresholds
result = match_channels_to_epg(channels_data, epg_data, region_code, use_ml=True, send_progress=True)
@ -749,9 +761,10 @@ def match_single_channel_epg(channel_id):
test_normalized = normalize_name(test_name)
logger.debug(f"DEBUG normalization example: '{test_name}' -> '{test_normalized}' (call sign preserved)")
# Get all EPG data for matching - must include norm_name field
# Get all EPG data for matching from active sources - must include norm_name field
# Ordered by source priority (highest first) so we prefer higher priority matches
epg_data_list = []
for epg in EPGData.objects.filter(name__isnull=False).exclude(name=''):
for epg in EPGData.objects.select_related('epg_source').filter(epg_source__is_active=True, name__isnull=False).exclude(name=''):
normalized_epg_tvg_id = epg.tvg_id.strip().lower() if epg.tvg_id else ""
epg_data_list.append({
'id': epg.id,
@ -760,10 +773,14 @@ def match_single_channel_epg(channel_id):
'name': epg.name,
'norm_name': normalize_name(epg.name),
'epg_source_id': epg.epg_source.id if epg.epg_source else None,
'epg_source_priority': epg.epg_source.priority if epg.epg_source else 0,
})
# Sort EPG data by source priority (highest first) so we prefer higher priority matches
epg_data_list.sort(key=lambda x: x['epg_source_priority'], reverse=True)
if not epg_data_list:
return {"matched": False, "message": "No EPG data available for matching"}
return {"matched": False, "message": "No EPG data available for matching (from active sources)"}
logger.info(f"Matching single channel '{channel.name}' against {len(epg_data_list)} EPG entries")
@ -2662,7 +2679,38 @@ def bulk_create_channels_from_streams(self, stream_ids, channel_profile_ids=None
)
# Handle channel profile membership
if profile_ids:
# Semantics:
# - None: add to ALL profiles (backward compatible default)
# - Empty array []: add to NO profiles
# - Sentinel [0] or 0 in array: add to ALL profiles (explicit)
# - [1,2,...]: add to specified profile IDs only
if profile_ids is None:
# Omitted -> add to all profiles (backward compatible)
all_profiles = ChannelProfile.objects.all()
channel_profile_memberships.extend([
ChannelProfileMembership(
channel_profile=profile,
channel=channel,
enabled=True
)
for profile in all_profiles
])
elif isinstance(profile_ids, list) and len(profile_ids) == 0:
# Empty array -> add to no profiles
pass
elif isinstance(profile_ids, list) and 0 in profile_ids:
# Sentinel 0 -> add to all profiles (explicit)
all_profiles = ChannelProfile.objects.all()
channel_profile_memberships.extend([
ChannelProfileMembership(
channel_profile=profile,
channel=channel,
enabled=True
)
for profile in all_profiles
])
else:
# Specific profile IDs
try:
specific_profiles = ChannelProfile.objects.filter(id__in=profile_ids)
channel_profile_memberships.extend([
@ -2678,17 +2726,6 @@ def bulk_create_channels_from_streams(self, stream_ids, channel_profile_ids=None
'channel_id': channel.id,
'error': f'Failed to add to profiles: {str(e)}'
})
else:
# Add to all profiles by default
all_profiles = ChannelProfile.objects.all()
channel_profile_memberships.extend([
ChannelProfileMembership(
channel_profile=profile,
channel=channel,
enabled=True
)
for profile in all_profiles
])
# Bulk update channels with logos
if update:

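The four channel_profile_ids cases described above can be exercised with a small stand-alone resolver; this is a sketch of the dispatch rules only, not the Celery task itself:
# Stand-alone sketch of the profile_ids semantics used by bulk_create_channels_from_streams.
def resolve_profile_ids(profile_ids, all_profile_ids):
    """Return the profile IDs a newly created channel should be enabled in."""
    if profile_ids is None:                                  # omitted -> all profiles
        return list(all_profile_ids)
    if isinstance(profile_ids, list) and len(profile_ids) == 0:
        return []                                            # [] -> no profiles
    if isinstance(profile_ids, list) and 0 in profile_ids:
        return list(all_profile_ids)                         # sentinel 0 -> all profiles
    return [pid for pid in profile_ids if pid in set(all_profile_ids)]  # specific IDs only

assert resolve_profile_ids(None, [1, 2, 3]) == [1, 2, 3]
assert resolve_profile_ids([],   [1, 2, 3]) == []
assert resolve_profile_ids([0],  [1, 2, 3]) == [1, 2, 3]
assert resolve_profile_ids([2, 9], [1, 2, 3]) == [2]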

@ -0,0 +1,211 @@
from django.test import TestCase
from django.contrib.auth import get_user_model
from rest_framework.test import APIClient
from rest_framework import status
from apps.channels.models import Channel, ChannelGroup
User = get_user_model()
class ChannelBulkEditAPITests(TestCase):
def setUp(self):
# Create a test admin user (user_level >= 10) and authenticate
self.user = User.objects.create_user(username="testuser", password="testpass123")
self.user.user_level = 10 # Set admin level
self.user.save()
self.client = APIClient()
self.client.force_authenticate(user=self.user)
self.bulk_edit_url = "/api/channels/channels/edit/bulk/"
# Create test channel group
self.group1 = ChannelGroup.objects.create(name="Test Group 1")
self.group2 = ChannelGroup.objects.create(name="Test Group 2")
# Create test channels
self.channel1 = Channel.objects.create(
channel_number=1.0,
name="Channel 1",
tvg_id="channel1",
channel_group=self.group1
)
self.channel2 = Channel.objects.create(
channel_number=2.0,
name="Channel 2",
tvg_id="channel2",
channel_group=self.group1
)
self.channel3 = Channel.objects.create(
channel_number=3.0,
name="Channel 3",
tvg_id="channel3"
)
def test_bulk_edit_success(self):
"""Test successful bulk update of multiple channels"""
data = [
{"id": self.channel1.id, "name": "Updated Channel 1"},
{"id": self.channel2.id, "name": "Updated Channel 2", "channel_number": 22.0},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["message"], "Successfully updated 2 channels")
self.assertEqual(len(response.data["channels"]), 2)
# Verify database changes
self.channel1.refresh_from_db()
self.channel2.refresh_from_db()
self.assertEqual(self.channel1.name, "Updated Channel 1")
self.assertEqual(self.channel2.name, "Updated Channel 2")
self.assertEqual(self.channel2.channel_number, 22.0)
def test_bulk_edit_with_empty_validated_data_first(self):
"""
Test the bug fix: when first channel has empty validated_data.
This was causing: ValueError: Field names must be given to bulk_update()
"""
# Create a channel with data that will be "unchanged" (empty validated_data)
# We'll send the same data it already has
data = [
# First channel: no actual changes (this would create empty validated_data)
{"id": self.channel1.id},
# Second channel: has changes
{"id": self.channel2.id, "name": "Updated Channel 2"},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
# Should not crash with ValueError
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["message"], "Successfully updated 2 channels")
# Verify the channel with changes was updated
self.channel2.refresh_from_db()
self.assertEqual(self.channel2.name, "Updated Channel 2")
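The view-side fix these tests target is not shown in this excerpt; a plausible guard, sketched under the assumption that the view accumulates the union of changed fields before the bulk write:
# Hypothetical guard (not the actual view code): Django raises
# "Field names must be given to bulk_update()" when the fields list is empty,
# so the bulk write must be skipped when no serializer reported any changes.
def apply_bulk_channel_updates(channels_to_update, fields_to_update):
    from apps.channels.models import Channel
    if channels_to_update and fields_to_update:
        Channel.objects.bulk_update(channels_to_update, sorted(fields_to_update))
    # Otherwise nothing actually changed; returning without a DB write is the fix under test.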
def test_bulk_edit_all_empty_updates(self):
"""Test when all channels have empty updates (no actual changes)"""
data = [
{"id": self.channel1.id},
{"id": self.channel2.id},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
# Should succeed without calling bulk_update
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["message"], "Successfully updated 2 channels")
def test_bulk_edit_mixed_fields(self):
"""Test bulk update where different channels update different fields"""
data = [
{"id": self.channel1.id, "name": "New Name 1"},
{"id": self.channel2.id, "channel_number": 99.0},
{"id": self.channel3.id, "tvg_id": "new_tvg_id", "name": "New Name 3"},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["message"], "Successfully updated 3 channels")
# Verify all updates
self.channel1.refresh_from_db()
self.channel2.refresh_from_db()
self.channel3.refresh_from_db()
self.assertEqual(self.channel1.name, "New Name 1")
self.assertEqual(self.channel2.channel_number, 99.0)
self.assertEqual(self.channel3.tvg_id, "new_tvg_id")
self.assertEqual(self.channel3.name, "New Name 3")
def test_bulk_edit_with_channel_group(self):
"""Test bulk update with channel_group_id changes"""
data = [
{"id": self.channel1.id, "channel_group_id": self.group2.id},
{"id": self.channel3.id, "channel_group_id": self.group1.id},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
# Verify group changes
self.channel1.refresh_from_db()
self.channel3.refresh_from_db()
self.assertEqual(self.channel1.channel_group, self.group2)
self.assertEqual(self.channel3.channel_group, self.group1)
def test_bulk_edit_nonexistent_channel(self):
"""Test bulk update with a channel that doesn't exist"""
nonexistent_id = 99999
data = [
{"id": nonexistent_id, "name": "Should Fail"},
{"id": self.channel1.id, "name": "Should Still Update"},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
# Should return 400 with errors
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn("errors", response.data)
self.assertEqual(len(response.data["errors"]), 1)
self.assertEqual(response.data["errors"][0]["channel_id"], nonexistent_id)
self.assertEqual(response.data["errors"][0]["error"], "Channel not found")
# The valid channel should still be updated
self.assertEqual(response.data["updated_count"], 1)
def test_bulk_edit_validation_error(self):
"""Test bulk update with invalid data (validation error)"""
data = [
{"id": self.channel1.id, "channel_number": "invalid_number"},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
# Should return 400 with validation errors
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn("errors", response.data)
self.assertEqual(len(response.data["errors"]), 1)
self.assertIn("channel_number", response.data["errors"][0]["errors"])
def test_bulk_edit_empty_channel_updates(self):
"""Test bulk update with empty list"""
data = []
response = self.client.patch(self.bulk_edit_url, data, format="json")
# Empty list is accepted and returns success with 0 updates
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["message"], "Successfully updated 0 channels")
def test_bulk_edit_missing_channel_updates(self):
"""Test bulk update without proper format (dict instead of list)"""
data = {"channel_updates": {}}
response = self.client.patch(self.bulk_edit_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(response.data["error"], "Expected a list of channel updates")
def test_bulk_edit_preserves_other_fields(self):
"""Test that bulk update only changes specified fields"""
original_channel_number = self.channel1.channel_number
original_tvg_id = self.channel1.tvg_id
data = [
{"id": self.channel1.id, "name": "Only Name Changed"},
]
response = self.client.patch(self.bulk_edit_url, data, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
# Verify only name changed, other fields preserved
self.channel1.refresh_from_db()
self.assertEqual(self.channel1.name, "Only Name Changed")
self.assertEqual(self.channel1.channel_number, original_channel_number)
self.assertEqual(self.channel1.tvg_id, original_tvg_id)


@ -0,0 +1,18 @@
# Generated by Django 5.2.4 on 2025-12-05 15:24
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('epg', '0020_migrate_time_to_starttime_placeholders'),
]
operations = [
migrations.AddField(
model_name='epgsource',
name='priority',
field=models.PositiveIntegerField(default=0, help_text='Priority for EPG matching (higher numbers = higher priority). Used when multiple EPG sources have matching entries for a channel.'),
),
]


@ -45,6 +45,10 @@ class EPGSource(models.Model):
null=True,
help_text="Custom properties for dummy EPG configuration (regex patterns, timezone, duration, etc.)"
)
priority = models.PositiveIntegerField(
default=0,
help_text="Priority for EPG matching (higher numbers = higher priority). Used when multiple EPG sources have matching entries for a channel."
)
status = models.CharField(
max_length=20,
choices=STATUS_CHOICES,


@ -24,6 +24,7 @@ class EPGSourceSerializer(serializers.ModelSerializer):
'is_active',
'file_path',
'refresh_interval',
'priority',
'status',
'last_message',
'created_at',


@ -286,11 +286,12 @@ def fetch_xmltv(source):
logger.info(f"Fetching XMLTV data from source: {source.name}")
try:
# Get default user agent from settings
default_user_agent_setting = CoreSettings.objects.filter(key='default-user-agent').first()
stream_settings = CoreSettings.get_stream_settings()
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0" # Fallback default
if default_user_agent_setting and default_user_agent_setting.value:
default_user_agent_id = stream_settings.get('default_user_agent')
if default_user_agent_id:
try:
user_agent_obj = UserAgent.objects.filter(id=int(default_user_agent_setting.value)).first()
user_agent_obj = UserAgent.objects.filter(id=int(default_user_agent_id)).first()
if user_agent_obj and user_agent_obj.user_agent:
user_agent = user_agent_obj.user_agent
logger.debug(f"Using default user agent: {user_agent}")
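The same lookup is repeated in fetch_schedules_direct further down; a small helper capturing the shared pattern (a refactoring sketch, with the UserAgent import path assumed rather than taken from this diff):
# Sketch: grouped stream settings now store the default UserAgent by ID.
def resolve_default_user_agent(fallback="Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0"):
    from core.models import CoreSettings   # CoreSettings path appears elsewhere in this diff
    from core.models import UserAgent      # UserAgent's import path is an assumption
    user_agent = fallback
    default_user_agent_id = CoreSettings.get_stream_settings().get('default_user_agent')
    if default_user_agent_id:
        try:
            user_agent_obj = UserAgent.objects.filter(id=int(default_user_agent_id)).first()
            if user_agent_obj and user_agent_obj.user_agent:
                user_agent = user_agent_obj.user_agent
        except (TypeError, ValueError):
            pass  # malformed setting value: keep the fallback
    return user_agent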
@ -1393,11 +1394,23 @@ def parse_programs_for_tvg_id(epg_id):
def parse_programs_for_source(epg_source, tvg_id=None):
"""
Parse programs for all MAPPED channels from an EPG source in a single pass.
This is an optimized version that:
1. Only processes EPG entries that are actually mapped to channels
2. Parses the XML file ONCE instead of once per channel
3. Skips programmes for unmapped channels entirely during parsing
This dramatically improves performance when an EPG source has many channels
but only a fraction are mapped.
"""
# Send initial programs parsing notification
send_epg_update(epg_source.id, "parsing_programs", 0)
should_log_memory = False
process = None
initial_memory = 0
source_file = None
# Add memory tracking only in trace mode or higher
try:
@ -1417,82 +1430,229 @@ def parse_programs_for_source(epg_source, tvg_id=None):
should_log_memory = False
try:
# Process EPG entries in batches rather than all at once
batch_size = 20 # Process fewer channels at once to reduce memory usage
epg_count = EPGData.objects.filter(epg_source=epg_source).count()
# Only get EPG entries that are actually mapped to channels
mapped_epg_ids = set(
Channel.objects.filter(
epg_data__epg_source=epg_source,
epg_data__isnull=False
).values_list('epg_data_id', flat=True)
)
if epg_count == 0:
logger.info(f"No EPG entries found for source: {epg_source.name}")
# Update status - this is not an error, just no entries
if not mapped_epg_ids:
total_epg_count = EPGData.objects.filter(epg_source=epg_source).count()
logger.info(f"No channels mapped to any EPG entries from source: {epg_source.name} "
f"(source has {total_epg_count} EPG entries, 0 mapped)")
# Update status - this is not an error, just no mapped entries
epg_source.status = 'success'
epg_source.save(update_fields=['status'])
epg_source.last_message = f"No channels mapped to this EPG source ({total_epg_count} entries available)"
epg_source.save(update_fields=['status', 'last_message'])
send_epg_update(epg_source.id, "parsing_programs", 100, status="success")
return True
logger.info(f"Parsing programs for {epg_count} EPG entries from source: {epg_source.name}")
# Get the mapped EPG entries with their tvg_ids
mapped_epgs = EPGData.objects.filter(id__in=mapped_epg_ids).values('id', 'tvg_id')
tvg_id_to_epg_id = {epg['tvg_id']: epg['id'] for epg in mapped_epgs if epg['tvg_id']}
mapped_tvg_ids = set(tvg_id_to_epg_id.keys())
failed_entries = []
program_count = 0
channel_count = 0
updated_count = 0
processed = 0
# Process in batches using cursor-based approach to limit memory usage
last_id = 0
while True:
# Get a batch of EPG entries
batch_entries = list(EPGData.objects.filter(
epg_source=epg_source,
id__gt=last_id
).order_by('id')[:batch_size])
total_epg_count = EPGData.objects.filter(epg_source=epg_source).count()
mapped_count = len(mapped_tvg_ids)
if not batch_entries:
break # No more entries to process
logger.info(f"Parsing programs for {mapped_count} MAPPED channels from source: {epg_source.name} "
f"(skipping {total_epg_count - mapped_count} unmapped EPG entries)")
# Update last_id for next iteration
last_id = batch_entries[-1].id
# Get the file path
file_path = epg_source.extracted_file_path if epg_source.extracted_file_path else epg_source.file_path
if not file_path:
file_path = epg_source.get_cache_file()
# Process this batch
for epg in batch_entries:
if epg.tvg_id:
try:
result = parse_programs_for_tvg_id(epg.id)
if result == "Task already running":
logger.info(f"Program parse for {epg.id} already in progress, skipping")
# Check if the file exists
if not os.path.exists(file_path):
logger.error(f"EPG file not found at: {file_path}")
processed += 1
progress = min(95, int((processed / epg_count) * 100)) if epg_count > 0 else 50
send_epg_update(epg_source.id, "parsing_programs", progress)
except Exception as e:
logger.error(f"Error parsing programs for tvg_id={epg.tvg_id}: {e}", exc_info=True)
failed_entries.append(f"{epg.tvg_id}: {str(e)}")
if epg_source.url:
# Update the file path in the database
new_path = epg_source.get_cache_file()
logger.info(f"Updating file_path from '{file_path}' to '{new_path}'")
epg_source.file_path = new_path
epg_source.save(update_fields=['file_path'])
logger.info(f"Fetching new EPG data from URL: {epg_source.url}")
# Force garbage collection after each batch
batch_entries = None # Remove reference to help garbage collection
# Fetch new data before continuing
fetch_success = fetch_xmltv(epg_source)
if not fetch_success:
logger.error(f"Failed to fetch EPG data for source: {epg_source.name}")
epg_source.status = 'error'
epg_source.last_message = f"Failed to download EPG data"
epg_source.save(update_fields=['status', 'last_message'])
send_epg_update(epg_source.id, "parsing_programs", 100, status="error", error="Failed to download EPG file")
return False
# Update file_path with the new location
file_path = epg_source.extracted_file_path if epg_source.extracted_file_path else epg_source.file_path
else:
logger.error(f"No URL provided for EPG source {epg_source.name}, cannot fetch new data")
epg_source.status = 'error'
epg_source.last_message = f"No URL provided, cannot fetch EPG data"
epg_source.save(update_fields=['status', 'last_message'])
send_epg_update(epg_source.id, "parsing_programs", 100, status="error", error="No URL provided")
return False
# SINGLE PASS PARSING: Parse the XML file once and collect all programs in memory
# We parse FIRST, then do an atomic delete+insert to avoid race conditions
# where clients might see empty/partial EPG data during the transition
all_programs_to_create = []
programs_by_channel = {tvg_id: 0 for tvg_id in mapped_tvg_ids} # Track count per channel
total_programs = 0
skipped_programs = 0
last_progress_update = 0
try:
logger.debug(f"Opening file for single-pass parsing: {file_path}")
source_file = open(file_path, 'rb')
# Stream parse the file using lxml's iterparse
program_parser = etree.iterparse(source_file, events=('end',), tag='programme', remove_blank_text=True, recover=True)
for _, elem in program_parser:
channel_id = elem.get('channel')
# Skip programmes for unmapped channels immediately
if channel_id not in mapped_tvg_ids:
skipped_programs += 1
# Clear element to free memory
clear_element(elem)
continue
# This programme is for a mapped channel - process it
try:
start_time = parse_xmltv_time(elem.get('start'))
end_time = parse_xmltv_time(elem.get('stop'))
title = None
desc = None
sub_title = None
# Efficiently process child elements
for child in elem:
if child.tag == 'title':
title = child.text or 'No Title'
elif child.tag == 'desc':
desc = child.text or ''
elif child.tag == 'sub-title':
sub_title = child.text or ''
if not title:
title = 'No Title'
# Extract custom properties
custom_props = extract_custom_properties(elem)
custom_properties_json = custom_props if custom_props else None
epg_id = tvg_id_to_epg_id[channel_id]
all_programs_to_create.append(ProgramData(
epg_id=epg_id,
start_time=start_time,
end_time=end_time,
title=title,
description=desc,
sub_title=sub_title,
tvg_id=channel_id,
custom_properties=custom_properties_json
))
total_programs += 1
programs_by_channel[channel_id] += 1
# Clear the element to free memory
clear_element(elem)
# Send progress update (estimate based on programs processed)
if total_programs - last_progress_update >= 5000:
last_progress_update = total_programs
# Cap at 70% during parsing phase (save 30% for DB operations)
progress = min(70, 10 + int((total_programs / max(total_programs + 10000, 1)) * 60))
send_epg_update(epg_source.id, "parsing_programs", progress,
processed=total_programs, channels=mapped_count)
# Periodic garbage collection during parsing
if total_programs % 5000 == 0:
gc.collect()
except Exception as e:
logger.error(f"Error processing program for {channel_id}: {e}", exc_info=True)
clear_element(elem)
continue
except etree.XMLSyntaxError as xml_error:
logger.error(f"XML syntax error parsing program data: {xml_error}")
epg_source.status = EPGSource.STATUS_ERROR
epg_source.last_message = f"XML parsing error: {str(xml_error)}"
epg_source.save(update_fields=['status', 'last_message'])
send_epg_update(epg_source.id, "parsing_programs", 100, status="error", message=str(xml_error))
return False
except Exception as e:
logger.error(f"Error parsing XML for programs: {e}", exc_info=True)
raise
finally:
if source_file:
source_file.close()
source_file = None
# Now perform atomic delete + bulk insert
# This ensures clients never see empty/partial EPG data
logger.info(f"Parsed {total_programs} programs, performing atomic database update...")
send_epg_update(epg_source.id, "parsing_programs", 75, message="Updating database...")
batch_size = 1000
try:
with transaction.atomic():
# Delete existing programs for mapped EPGs
deleted_count = ProgramData.objects.filter(epg_id__in=mapped_epg_ids).delete()[0]
logger.debug(f"Deleted {deleted_count} existing programs")
# Clean up orphaned programs for unmapped EPG entries
unmapped_epg_ids = list(EPGData.objects.filter(
epg_source=epg_source
).exclude(id__in=mapped_epg_ids).values_list('id', flat=True))
if unmapped_epg_ids:
orphaned_count = ProgramData.objects.filter(epg_id__in=unmapped_epg_ids).delete()[0]
if orphaned_count > 0:
logger.info(f"Cleaned up {orphaned_count} orphaned programs for {len(unmapped_epg_ids)} unmapped EPG entries")
# Bulk insert all new programs in batches within the same transaction
for i in range(0, len(all_programs_to_create), batch_size):
batch = all_programs_to_create[i:i + batch_size]
ProgramData.objects.bulk_create(batch)
# Update progress during insertion
progress = 75 + int((i / len(all_programs_to_create)) * 20) if all_programs_to_create else 95
if i % (batch_size * 5) == 0:
send_epg_update(epg_source.id, "parsing_programs", min(95, progress),
message=f"Inserting programs... {i}/{len(all_programs_to_create)}")
logger.info(f"Atomic update complete: deleted {deleted_count}, inserted {total_programs} programs")
except Exception as db_error:
logger.error(f"Database error during atomic update: {db_error}", exc_info=True)
epg_source.status = EPGSource.STATUS_ERROR
epg_source.last_message = f"Database error: {str(db_error)}"
epg_source.save(update_fields=['status', 'last_message'])
send_epg_update(epg_source.id, "parsing_programs", 100, status="error", message=str(db_error))
return False
finally:
# Clear the large list to free memory
all_programs_to_create = None
gc.collect()
# If there were failures, include them in the message but continue
if failed_entries:
epg_source.status = EPGSource.STATUS_SUCCESS # Still mark as success if some processed
error_summary = f"Failed to parse {len(failed_entries)} of {epg_count} entries"
stats_summary = f"Processed {program_count} programs across {channel_count} channels. Updated: {updated_count}."
epg_source.last_message = f"{stats_summary} Warning: {error_summary}"
epg_source.updated_at = timezone.now()
epg_source.save(update_fields=['status', 'last_message', 'updated_at'])
# Count channels that actually got programs
channels_with_programs = sum(1 for count in programs_by_channel.values() if count > 0)
# Send completion notification with mixed status
send_epg_update(epg_source.id, "parsing_programs", 100,
status="success",
message=epg_source.last_message)
# Explicitly release memory of large lists before returning
del failed_entries
gc.collect()
return True
# If all successful, set a comprehensive success message
# Success message
epg_source.status = EPGSource.STATUS_SUCCESS
epg_source.last_message = f"Successfully processed {program_count} programs across {channel_count} channels. Updated: {updated_count}."
epg_source.last_message = (
f"Parsed {total_programs:,} programs for {channels_with_programs} channels "
f"(skipped {skipped_programs:,} programs for {total_epg_count - mapped_count} unmapped channels)"
)
epg_source.updated_at = timezone.now()
epg_source.save(update_fields=['status', 'last_message', 'updated_at'])
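Stripped of progress reporting and the Django models, the single-pass pattern described in the docstring reduces to the sketch below (lxml iterparse; element clearing and field extraction are simplified):
# Sketch: one streaming pass over the XMLTV file, keeping only programmes whose
# channel is mapped; the real task then swaps the rows inside transaction.atomic().
from lxml import etree

def collect_mapped_programmes(xmltv_path, mapped_tvg_ids):
    programmes = []
    with open(xmltv_path, 'rb') as fh:
        for _, elem in etree.iterparse(fh, events=('end',), tag='programme', recover=True):
            channel_id = elem.get('channel')
            if channel_id in mapped_tvg_ids:
                title = next((c.text for c in elem if c.tag == 'title'), None) or 'No Title'
                programmes.append((channel_id, elem.get('start'), elem.get('stop'), title))
            elem.clear()                              # release parsed element memory
            while elem.getprevious() is not None:     # drop already-processed siblings
                del elem.getparent()[0]
    return programmes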
@ -1500,17 +1660,21 @@ def parse_programs_for_source(epg_source, tvg_id=None):
log_system_event(
event_type='epg_refresh',
source_name=epg_source.name,
programs=program_count,
channels=channel_count,
updated=updated_count,
programs=total_programs,
channels=channels_with_programs,
skipped_programs=skipped_programs,
unmapped_channels=total_epg_count - mapped_count,
)
# Send completion notification with status
send_epg_update(epg_source.id, "parsing_programs", 100,
status="success",
message=epg_source.last_message)
message=epg_source.last_message,
updated_at=epg_source.updated_at.isoformat())
logger.info(f"Completed parsing all programs for source: {epg_source.name}")
logger.info(f"Completed parsing programs for source: {epg_source.name} - "
f"{total_programs:,} programs for {channels_with_programs} channels, "
f"skipped {skipped_programs:,} programs for unmapped channels")
return True
except Exception as e:
@ -1525,14 +1689,19 @@ def parse_programs_for_source(epg_source, tvg_id=None):
return False
finally:
# Final memory cleanup and tracking
if source_file:
try:
source_file.close()
except:
pass
source_file = None
# Explicitly release any remaining large data structures
failed_entries = None
program_count = None
channel_count = None
updated_count = None
processed = None
programs_to_create = None
programs_by_channel = None
mapped_epg_ids = None
mapped_tvg_ids = None
tvg_id_to_epg_id = None
gc.collect()
# Add comprehensive memory cleanup at the end
@ -1546,12 +1715,13 @@ def fetch_schedules_direct(source):
logger.info(f"Fetching Schedules Direct data from source: {source.name}")
try:
# Get default user agent from settings
default_user_agent_setting = CoreSettings.objects.filter(key='default-user-agent').first()
stream_settings = CoreSettings.get_stream_settings()
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0" # Fallback default
default_user_agent_id = stream_settings.get('default_user_agent')
if default_user_agent_setting and default_user_agent_setting.value:
if default_user_agent_id:
try:
user_agent_obj = UserAgent.objects.filter(id=int(default_user_agent_setting.value)).first()
user_agent_obj = UserAgent.objects.filter(id=int(default_user_agent_id)).first()
if user_agent_obj and user_agent_obj.user_agent:
user_agent = user_agent_obj.user_agent
logger.debug(f"Using default user agent: {user_agent}")


@ -513,7 +513,19 @@ def check_field_lengths(streams_to_create):
@shared_task
def process_groups(account, groups):
def process_groups(account, groups, scan_start_time=None):
"""Process groups and update their relationships with the M3U account.
Args:
account: M3UAccount instance
groups: Dict of group names to custom properties
scan_start_time: Timestamp when the scan started (for consistent last_seen marking)
"""
# Use scan_start_time if provided, otherwise current time
# This ensures consistency with stream processing and cleanup logic
if scan_start_time is None:
scan_start_time = timezone.now()
existing_groups = {
group.name: group
for group in ChannelGroup.objects.filter(name__in=groups.keys())
@ -553,24 +565,8 @@ def process_groups(account, groups):
).select_related('channel_group')
}
# Get ALL existing relationships for this account to identify orphaned ones
all_existing_relationships = {
rel.channel_group.name: rel
for rel in ChannelGroupM3UAccount.objects.filter(
m3u_account=account
).select_related('channel_group')
}
relations_to_create = []
relations_to_update = []
relations_to_delete = []
# Find orphaned relationships (groups that no longer exist in the source)
current_group_names = set(groups.keys())
for group_name, rel in all_existing_relationships.items():
if group_name not in current_group_names:
relations_to_delete.append(rel)
logger.debug(f"Marking relationship for deletion: group '{group_name}' no longer exists in source for account {account.id}")
for group in all_group_objs:
custom_props = groups.get(group.name, {})
@ -597,9 +593,15 @@ def process_groups(account, groups):
del updated_custom_props["xc_id"]
existing_rel.custom_properties = updated_custom_props
existing_rel.last_seen = scan_start_time
existing_rel.is_stale = False
relations_to_update.append(existing_rel)
logger.debug(f"Updated xc_id for group '{group.name}' from '{existing_xc_id}' to '{new_xc_id}' - account {account.id}")
else:
# Update last_seen even if xc_id hasn't changed
existing_rel.last_seen = scan_start_time
existing_rel.is_stale = False
relations_to_update.append(existing_rel)
logger.debug(f"xc_id unchanged for group '{group.name}' - account {account.id}")
else:
# Create new relationship - this group is new to this M3U account
@ -613,6 +615,8 @@ def process_groups(account, groups):
m3u_account=account,
custom_properties=custom_props,
enabled=auto_enable_new_groups_live,
last_seen=scan_start_time,
is_stale=False,
)
)
@ -623,15 +627,38 @@ def process_groups(account, groups):
# Bulk update existing relationships
if relations_to_update:
ChannelGroupM3UAccount.objects.bulk_update(relations_to_update, ['custom_properties'])
logger.info(f"Updated {len(relations_to_update)} existing group relationships with new xc_id values for account {account.id}")
ChannelGroupM3UAccount.objects.bulk_update(relations_to_update, ['custom_properties', 'last_seen', 'is_stale'])
logger.info(f"Updated {len(relations_to_update)} existing group relationships for account {account.id}")
# Delete orphaned relationships
if relations_to_delete:
ChannelGroupM3UAccount.objects.filter(
id__in=[rel.id for rel in relations_to_delete]
).delete()
logger.info(f"Deleted {len(relations_to_delete)} orphaned group relationships for account {account.id}: {[rel.channel_group.name for rel in relations_to_delete]}")
def cleanup_stale_group_relationships(account, scan_start_time):
"""
Remove group relationships that have not been seen within the stale retention window (account.stale_stream_days).
This follows the same logic as stream cleanup for consistency.
"""
# Calculate cutoff date for stale group relationships
stale_cutoff = scan_start_time - timezone.timedelta(days=account.stale_stream_days)
logger.info(
f"Removing group relationships not seen since {stale_cutoff} for M3U account {account.id}"
)
# Find stale relationships
stale_relationships = ChannelGroupM3UAccount.objects.filter(
m3u_account=account,
last_seen__lt=stale_cutoff
).select_related('channel_group')
relations_to_delete = list(stale_relationships)
deleted_count = len(relations_to_delete)
if deleted_count > 0:
logger.info(
f"Found {deleted_count} stale group relationships for account {account.id}: "
f"{[rel.channel_group.name for rel in relations_to_delete]}"
)
# Delete the stale relationships
stale_relationships.delete()
# Check if any of the deleted relationships left groups with no remaining associations
orphaned_group_ids = []
@ -656,6 +683,10 @@ def process_groups(account, groups):
deleted_groups = list(ChannelGroup.objects.filter(id__in=orphaned_group_ids).values_list('name', flat=True))
ChannelGroup.objects.filter(id__in=orphaned_group_ids).delete()
logger.info(f"Deleted {len(orphaned_group_ids)} orphaned groups that had no remaining associations: {deleted_groups}")
else:
logger.debug(f"No stale group relationships found for account {account.id}")
return deleted_count
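As a concrete example of the retention window (values illustrative): with stale_stream_days = 3 and a scan starting 2026-01-14 02:00 UTC, the cutoff is 2026-01-11 02:00 UTC, so a relationship last seen on 2026-01-10 is deleted here, while one last seen on 2026-01-12 was only marked stale by the refresh task.
# Same arithmetic as above, spelled out (example values only):
from datetime import datetime, timedelta, timezone

scan_start_time = datetime(2026, 1, 14, 2, 0, tzinfo=timezone.utc)
stale_cutoff = scan_start_time - timedelta(days=3)   # account.stale_stream_days = 3
print(stale_cutoff)  # 2026-01-11 02:00:00+00:00 -> anything with last_seen before this is removed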
def collect_xc_streams(account_id, enabled_groups):
@ -792,7 +823,7 @@ def process_xc_category_direct(account_id, batch, groups, hash_keys):
group_title = group_name
stream_hash = Stream.generate_hash_key(
name, url, tvg_id, hash_keys, m3u_id=account_id
name, url, tvg_id, hash_keys, m3u_id=account_id, group=group_title
)
stream_props = {
"name": name,
@ -803,6 +834,7 @@ def process_xc_category_direct(account_id, batch, groups, hash_keys):
"channel_group_id": int(group_id),
"stream_hash": stream_hash,
"custom_properties": stream,
"is_stale": False,
}
if stream_hash not in stream_hashes:
@ -838,10 +870,12 @@ def process_xc_category_direct(account_id, batch, groups, hash_keys):
setattr(obj, key, value)
obj.last_seen = timezone.now()
obj.updated_at = timezone.now() # Update timestamp only for changed streams
obj.is_stale = False
streams_to_update.append(obj)
else:
# Always update last_seen, even if nothing else changed
obj.last_seen = timezone.now()
obj.is_stale = False
# Don't update updated_at for unchanged streams
streams_to_update.append(obj)
@ -852,6 +886,7 @@ def process_xc_category_direct(account_id, batch, groups, hash_keys):
stream_props["updated_at"] = (
timezone.now()
) # Set initial updated_at for new streams
stream_props["is_stale"] = False
streams_to_create.append(Stream(**stream_props))
try:
@ -863,7 +898,7 @@ def process_xc_category_direct(account_id, batch, groups, hash_keys):
# Simplified bulk update for better performance
Stream.objects.bulk_update(
streams_to_update,
['name', 'url', 'logo_url', 'tvg_id', 'custom_properties', 'last_seen', 'updated_at'],
['name', 'url', 'logo_url', 'tvg_id', 'custom_properties', 'last_seen', 'updated_at', 'is_stale'],
batch_size=150 # Smaller batch size for XC processing
)
@ -966,7 +1001,7 @@ def process_m3u_batch_direct(account_id, batch, groups, hash_keys):
)
continue
stream_hash = Stream.generate_hash_key(name, url, tvg_id, hash_keys, m3u_id=account_id)
stream_hash = Stream.generate_hash_key(name, url, tvg_id, hash_keys, m3u_id=account_id, group=group_title)
stream_props = {
"name": name,
"url": url,
@ -976,6 +1011,7 @@ def process_m3u_batch_direct(account_id, batch, groups, hash_keys):
"channel_group_id": int(groups.get(group_title)),
"stream_hash": stream_hash,
"custom_properties": stream_info["attributes"],
"is_stale": False,
}
if stream_hash not in stream_hashes:
@ -1015,11 +1051,15 @@ def process_m3u_batch_direct(account_id, batch, groups, hash_keys):
obj.custom_properties = stream_props["custom_properties"]
obj.updated_at = timezone.now()
# Always mark as not stale since we saw it in this refresh
obj.is_stale = False
streams_to_update.append(obj)
else:
# New stream
stream_props["last_seen"] = timezone.now()
stream_props["updated_at"] = timezone.now()
stream_props["is_stale"] = False
streams_to_create.append(Stream(**stream_props))
try:
@ -1031,7 +1071,7 @@ def process_m3u_batch_direct(account_id, batch, groups, hash_keys):
# Update all streams in a single bulk operation
Stream.objects.bulk_update(
streams_to_update,
['name', 'url', 'logo_url', 'tvg_id', 'custom_properties', 'last_seen', 'updated_at'],
['name', 'url', 'logo_url', 'tvg_id', 'custom_properties', 'last_seen', 'updated_at', 'is_stale'],
batch_size=200
)
except Exception as e:
@ -1092,7 +1132,15 @@ def cleanup_streams(account_id, scan_start_time=timezone.now):
@shared_task
def refresh_m3u_groups(account_id, use_cache=False, full_refresh=False):
def refresh_m3u_groups(account_id, use_cache=False, full_refresh=False, scan_start_time=None):
"""Refresh M3U groups for an account.
Args:
account_id: ID of the M3U account
use_cache: Whether to use cached M3U file
full_refresh: Whether this is part of a full refresh
scan_start_time: Timestamp when the scan started (for consistent last_seen marking)
"""
if not acquire_task_lock("refresh_m3u_account_groups", account_id):
return f"Task already running for account_id={account_id}.", None
@ -1419,7 +1467,7 @@ def refresh_m3u_groups(account_id, use_cache=False, full_refresh=False):
send_m3u_update(account_id, "processing_groups", 0)
process_groups(account, groups)
process_groups(account, groups, scan_start_time)
release_task_lock("refresh_m3u_account_groups", account_id)
@ -2526,7 +2574,7 @@ def refresh_single_m3u_account(account_id):
if not extinf_data:
try:
logger.info(f"Calling refresh_m3u_groups for account {account_id}")
result = refresh_m3u_groups(account_id, full_refresh=True)
result = refresh_m3u_groups(account_id, full_refresh=True, scan_start_time=refresh_start_timestamp)
logger.trace(f"refresh_m3u_groups result: {result}")
# Check for completely empty result or missing groups
@ -2806,9 +2854,26 @@ def refresh_single_m3u_account(account_id):
id=-1
).exists() # This will never find anything but ensures DB sync
# Mark streams that weren't seen in this refresh as stale (pending deletion)
stale_stream_count = Stream.objects.filter(
m3u_account=account,
last_seen__lt=refresh_start_timestamp
).update(is_stale=True)
logger.info(f"Marked {stale_stream_count} streams as stale for account {account_id}")
# Mark group relationships that weren't seen in this refresh as stale (pending deletion)
stale_group_count = ChannelGroupM3UAccount.objects.filter(
m3u_account=account,
last_seen__lt=refresh_start_timestamp
).update(is_stale=True)
logger.info(f"Marked {stale_group_count} group relationships as stale for account {account_id}")
# Now run cleanup
streams_deleted = cleanup_streams(account_id, refresh_start_timestamp)
# Cleanup stale group relationships (follows same retention policy as streams)
cleanup_stale_group_relationships(account, refresh_start_timestamp)
# Run auto channel sync after successful refresh
auto_sync_message = ""
try:


@ -7,7 +7,6 @@ from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_http_methods
from apps.epg.models import ProgramData
from apps.accounts.models import User
from core.models import CoreSettings, NETWORK_ACCESS
from dispatcharr.utils import network_access_allowed
from django.utils import timezone as django_timezone
from django.shortcuts import get_object_or_404
@ -161,18 +160,7 @@ def generate_m3u(request, profile_name=None, user=None):
channelprofilemembership__enabled=True
).order_by('channel_number')
else:
if profile_name is not None:
try:
channel_profile = ChannelProfile.objects.get(name=profile_name)
except ChannelProfile.DoesNotExist:
logger.warning("Requested channel profile (%s) during m3u generation does not exist", profile_name)
raise Http404(f"Channel profile '{profile_name}' not found")
channels = Channel.objects.filter(
channelprofilemembership__channel_profile=channel_profile,
channelprofilemembership__enabled=True,
).order_by("channel_number")
else:
channels = Channel.objects.order_by("channel_number")
channels = Channel.objects.order_by("channel_number")
# Check if the request wants to use direct logo URLs instead of cache
use_cached_logos = request.GET.get('cachedlogos', 'true').lower() != 'false'
@ -185,16 +173,26 @@ def generate_m3u(request, profile_name=None, user=None):
tvg_id_source = request.GET.get('tvg_id_source', 'channel_number').lower()
# Build EPG URL with query parameters if needed
epg_base_url = build_absolute_uri_with_port(request, reverse('output:epg_endpoint', args=[profile_name]) if profile_name else reverse('output:epg_endpoint'))
# Check if this is an XC API request (has username/password in GET params and user is authenticated)
xc_username = request.GET.get('username')
xc_password = request.GET.get('password')
# Optionally preserve certain query parameters
preserved_params = ['tvg_id_source', 'cachedlogos', 'days']
query_params = {k: v for k, v in request.GET.items() if k in preserved_params}
if query_params:
from urllib.parse import urlencode
epg_url = f"{epg_base_url}?{urlencode(query_params)}"
if user is not None and xc_username and xc_password:
# This is an XC API request - use XC-style EPG URL
base_url = build_absolute_uri_with_port(request, '')
epg_url = f"{base_url}/xmltv.php?username={xc_username}&password={xc_password}"
else:
epg_url = epg_base_url
# Regular request - use standard EPG endpoint
epg_base_url = build_absolute_uri_with_port(request, reverse('output:epg_endpoint', args=[profile_name]) if profile_name else reverse('output:epg_endpoint'))
# Optionally preserve certain query parameters
preserved_params = ['tvg_id_source', 'cachedlogos', 'days']
query_params = {k: v for k, v in request.GET.items() if k in preserved_params}
if query_params:
from urllib.parse import urlencode
epg_url = f"{epg_base_url}?{urlencode(query_params)}"
else:
epg_url = epg_base_url
# Add x-tvg-url and url-tvg attribute for EPG URL
m3u_content = f'#EXTM3U x-tvg-url="{epg_url}" url-tvg="{epg_url}"\n'
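For an XC-style request, the resulting playlist header would look roughly like this (host, port, and credentials are placeholders; the non-XC branch keeps the URL produced by reverse('output:epg_endpoint')):
# Illustrative header only; the actual host and credentials come from the request.
xc_header = (
    '#EXTM3U x-tvg-url="http://dispatcharr.example:9191/xmltv.php?username=alice&password=secret" '
    'url-tvg="http://dispatcharr.example:9191/xmltv.php?username=alice&password=secret"'
)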
@ -258,12 +256,10 @@ def generate_m3u(request, profile_name=None, user=None):
stream_url = first_stream.url
else:
# Fall back to proxy URL if no direct URL available
base_url = request.build_absolute_uri('/')[:-1]
stream_url = f"{base_url}/proxy/ts/stream/{channel.uuid}"
stream_url = build_absolute_uri_with_port(request, f"/proxy/ts/stream/{channel.uuid}")
else:
# Standard behavior - use proxy URL
base_url = request.build_absolute_uri('/')[:-1]
stream_url = f"{base_url}/proxy/ts/stream/{channel.uuid}"
stream_url = build_absolute_uri_with_port(request, f"/proxy/ts/stream/{channel.uuid}")
m3u_content += extinf_line + stream_url + "\n"
@ -2269,7 +2265,7 @@ def xc_get_epg(request, user, short=False):
# Get the mapped integer for this specific channel
channel_num_int = channel_num_map.get(channel.id, int(channel.channel_number))
limit = request.GET.get('limit', 4)
limit = int(request.GET.get('limit', 4))
if channel.epg_data:
# Check if this is a dummy EPG that generates on-demand
if channel.epg_data.epg_source and channel.epg_data.epg_source.source_type == 'dummy':
@ -2303,31 +2299,41 @@ def xc_get_epg(request, user, short=False):
output = {"epg_listings": []}
for program in programs:
id = "0"
epg_id = "0"
title = program['title'] if isinstance(program, dict) else program.title
description = program['description'] if isinstance(program, dict) else program.description
start = program["start_time"] if isinstance(program, dict) else program.start_time
end = program["end_time"] if isinstance(program, dict) else program.end_time
# For database programs, use actual ID; for generated dummy programs, create synthetic ID
if isinstance(program, dict):
# Generated dummy program - create unique ID from channel + timestamp
program_id = str(abs(hash(f"{channel_id}_{int(start.timestamp())}")))
else:
# Database program - use actual ID
program_id = str(program.id)
# epg_id refers to the EPG source/channel mapping in XC panels
# Use the actual EPGData ID when available, otherwise fall back to 0
epg_id = str(channel.epg_data.id) if channel.epg_data else "0"
program_output = {
"id": f"{id}",
"epg_id": f"{epg_id}",
"title": base64.b64encode(title.encode()).decode(),
"id": program_id,
"epg_id": epg_id,
"title": base64.b64encode((title or "").encode()).decode(),
"lang": "",
"start": start.strftime("%Y%m%d%H%M%S"),
"end": end.strftime("%Y%m%d%H%M%S"),
"description": base64.b64encode(description.encode()).decode(),
"channel_id": channel_num_int,
"start_timestamp": int(start.timestamp()),
"stop_timestamp": int(end.timestamp()),
"start": start.strftime("%Y-%m-%d %H:%M:%S"),
"end": end.strftime("%Y-%m-%d %H:%M:%S"),
"description": base64.b64encode((description or "").encode()).decode(),
"channel_id": str(channel_num_int),
"start_timestamp": str(int(start.timestamp())),
"stop_timestamp": str(int(end.timestamp())),
"stream_id": f"{channel_id}",
}
if short == False:
program_output["now_playing"] = 1 if start <= django_timezone.now() <= end else 0
program_output["has_archive"] = "0"
program_output["has_archive"] = 0
output['epg_listings'].append(program_output)
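Putting the field changes together, one listing now looks roughly like the example below (values illustrative; title and description are base64-encoded, timestamps and channel_id are strings, has_archive is an integer):
# Illustrative epg_listings entry after these changes (example values only).
example_listing = {
    "id": "48213",                 # ProgramData primary key, or a synthetic hash for dummy EPG
    "epg_id": "17",                # EPGData id mapped to the channel; "0" if unmapped
    "title": "TmV3cw==",           # base64("News")
    "lang": "",
    "start": "2026-01-14 20:00:00",
    "end": "2026-01-14 21:00:00",
    "description": "",
    "channel_id": "101",
    "start_timestamp": "1768420800",
    "stop_timestamp": "1768424400",
    "stream_id": "42",
    "now_playing": 0,              # now_playing/has_archive are omitted when short=True
    "has_archive": 0,
}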
@ -2532,34 +2538,45 @@ def xc_get_series_info(request, user, series_id):
except Exception as e:
logger.error(f"Error refreshing series data for relation {series_relation.id}: {str(e)}")
# Get episodes for this series from the same M3U account
episode_relations = M3UEpisodeRelation.objects.filter(
episode__series=series,
m3u_account=series_relation.m3u_account
).select_related('episode').order_by('episode__season_number', 'episode__episode_number')
# Get unique episodes for this series that have relations from any active M3U account
# We query episodes directly to avoid duplicates when multiple relations exist
# (e.g., same episode in different languages/qualities)
from apps.vod.models import Episode
episodes = Episode.objects.filter(
series=series,
m3u_relations__m3u_account__is_active=True
).distinct().order_by('season_number', 'episode_number')
# Group episodes by season
seasons = {}
for relation in episode_relations:
episode = relation.episode
for episode in episodes:
season_num = episode.season_number or 1
if season_num not in seasons:
seasons[season_num] = []
# Try to get the highest priority related M3UEpisodeRelation for this episode (for video/audio/bitrate)
# Get the highest priority relation for this episode (for container_extension, video/audio/bitrate)
from apps.vod.models import M3UEpisodeRelation
first_relation = M3UEpisodeRelation.objects.filter(
episode=episode
best_relation = M3UEpisodeRelation.objects.filter(
episode=episode,
m3u_account__is_active=True
).select_related('m3u_account').order_by('-m3u_account__priority', 'id').first()
video = audio = bitrate = None
if first_relation and first_relation.custom_properties:
info = first_relation.custom_properties.get('info')
if info and isinstance(info, dict):
info_info = info.get('info')
if info_info and isinstance(info_info, dict):
video = info_info.get('video', {})
audio = info_info.get('audio', {})
bitrate = info_info.get('bitrate', 0)
container_extension = "mp4"
added_timestamp = str(int(episode.created_at.timestamp()))
if best_relation:
container_extension = best_relation.container_extension or "mp4"
added_timestamp = str(int(best_relation.created_at.timestamp()))
if best_relation.custom_properties:
info = best_relation.custom_properties.get('info')
if info and isinstance(info, dict):
info_info = info.get('info')
if info_info and isinstance(info_info, dict):
video = info_info.get('video', {})
audio = info_info.get('audio', {})
bitrate = info_info.get('bitrate', 0)
if video is None:
video = episode.custom_properties.get('video', {}) if episode.custom_properties else {}
if audio is None:
@ -2572,8 +2589,8 @@ def xc_get_series_info(request, user, series_id):
"season": season_num,
"episode_num": episode.episode_number or 0,
"title": episode.name,
"container_extension": relation.container_extension or "mp4",
"added": str(int(relation.created_at.timestamp())),
"container_extension": container_extension,
"added": added_timestamp,
"custom_sid": None,
"direct_source": "",
"info": {
@ -2889,7 +2906,7 @@ def xc_series_stream(request, username, password, stream_id, extension):
filters = {"episode_id": stream_id, "m3u_account__is_active": True}
try:
episode_relation = M3UEpisodeRelation.objects.select_related('episode').get(**filters)
episode_relation = M3UEpisodeRelation.objects.select_related('episode').filter(**filters).order_by('-m3u_account__priority', 'id').first()
except M3UEpisodeRelation.DoesNotExist:
return JsonResponse({"error": "Episode not found"}, status=404)
@ -2922,19 +2939,16 @@ def get_host_and_port(request):
if xfh:
if ":" in xfh:
host, port = xfh.split(":", 1)
# Omit standard ports from URLs, or omit if port doesn't match standard for scheme
# (e.g., HTTPS but port is 9191 = behind external reverse proxy)
# Omit standard ports from URLs
if port == standard_port:
return host, None
# If port doesn't match standard and X-Forwarded-Proto is set, likely behind external RP
if request.META.get("HTTP_X_FORWARDED_PROTO"):
host = xfh.split(":")[0] # Strip port, will check for proper port below
else:
return host, port
# Non-standard port in X-Forwarded-Host - return it
# This handles reverse proxies on non-standard ports (e.g., https://example.com:8443)
return host, port
else:
host = xfh
# Check for X-Forwarded-Port header (if we didn't already find a valid port)
# Check for X-Forwarded-Port header (if we didn't find a port in X-Forwarded-Host)
port = request.META.get("HTTP_X_FORWARDED_PORT")
if port:
# Omit standard ports from URLs
@ -2952,22 +2966,28 @@ def get_host_and_port(request):
else:
host = raw_host
# 3. Check if we're behind a reverse proxy (X-Forwarded-Proto or X-Forwarded-For present)
# 3. Check for X-Forwarded-Port (when Host header has no port but we're behind a reverse proxy)
port = request.META.get("HTTP_X_FORWARDED_PORT")
if port:
# Omit standard ports from URLs
return host, None if port == standard_port else port
# 4. Check if we're behind a reverse proxy (X-Forwarded-Proto or X-Forwarded-For present)
# If so, assume standard port for the scheme (don't trust SERVER_PORT in this case)
if request.META.get("HTTP_X_FORWARDED_PROTO") or request.META.get("HTTP_X_FORWARDED_FOR"):
return host, None
# 4. Try SERVER_PORT from META (only if NOT behind reverse proxy)
# 5. Try SERVER_PORT from META (only if NOT behind reverse proxy)
port = request.META.get("SERVER_PORT")
if port:
# Omit standard ports from URLs
return host, None if port == standard_port else port
# 5. Dev fallback: guess port 5656
# 6. Dev fallback: guess port 5656
if os.environ.get("DISPATCHARR_ENV") == "dev" or host in ("localhost", "127.0.0.1"):
return host, "5656"
# 6. Final fallback: assume standard port for scheme (omit from URL)
# 7. Final fallback: assume standard port for scheme (omit from URL)
return host, None
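A minimal illustration of the resolution order spelled out in the comments above (header names as Django exposes them in request.META; hosts and ports are made up):
# X-Forwarded-Host "example.com:8443" (non-standard port)        -> ("example.com", "8443")
# X-Forwarded-Host "example.com:443" with the standard HTTPS port -> ("example.com", None)
# No port in Host, X-Forwarded-Port "8443"                        -> ("example.com", "8443")
# Only X-Forwarded-Proto / X-Forwarded-For present                -> ("example.com", None)
# DISPATCHARR_ENV=dev or host is localhost/127.0.0.1              -> ("localhost", "5656")
# Otherwise: assume the scheme's standard port                    -> (host, None)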
def build_absolute_uri_with_port(request, path):

View file

@ -48,9 +48,11 @@ class ClientManager:
# Import here to avoid potential import issues
from apps.proxy.ts_proxy.channel_status import ChannelStatus
import redis
from django.conf import settings
# Get all channels from Redis
redis_client = redis.Redis.from_url('redis://localhost:6379', decode_responses=True)
# Get all channels from Redis using settings
redis_url = getattr(settings, 'REDIS_URL', 'redis://localhost:6379/0')
redis_client = redis.Redis.from_url(redis_url, decode_responses=True)
all_channels = []
cursor = 0
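The cursor initialised above feeds a standard Redis SCAN loop; a minimal sketch of what typically follows (the key pattern is illustrative, not taken from this diff):
cursor = 0
all_channels = []
while True:
    cursor, keys = redis_client.scan(cursor, match="ts_proxy:channel:*", count=100)
    all_channels.extend(keys)
    if cursor == 0:
        break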

View file

@ -15,6 +15,7 @@ from ..redis_keys import RedisKeys
from ..constants import EventType, ChannelState, ChannelMetadataField
from ..url_utils import get_stream_info_for_switch
from core.utils import log_system_event
from .log_parsers import LogParserFactory
logger = logging.getLogger("ts_proxy")
@ -419,124 +420,51 @@ class ChannelService:
@staticmethod
def parse_and_store_stream_info(channel_id, stream_info_line, stream_type="video", stream_id=None):
"""Parse FFmpeg stream info line and store in Redis metadata and database"""
"""
Parse stream info from FFmpeg/VLC/Streamlink logs and store in Redis/DB.
Uses specialized parsers for each streaming tool.
"""
try:
if stream_type == "input":
# Example lines:
# Input #0, mpegts, from 'http://example.com/stream.ts':
# Input #0, hls, from 'http://example.com/stream.m3u8':
# Use factory to parse the line based on stream type
parsed_data = LogParserFactory.parse(stream_type, stream_info_line)
if not parsed_data:
return
# Extract input format (e.g., "mpegts", "hls", "flv", etc.)
input_match = re.search(r'Input #\d+,\s*([^,]+)', stream_info_line)
input_format = input_match.group(1).strip() if input_match else None
# Update Redis and database with parsed data
ChannelService._update_stream_info_in_redis(
channel_id,
parsed_data.get('video_codec'),
parsed_data.get('resolution'),
parsed_data.get('width'),
parsed_data.get('height'),
parsed_data.get('source_fps'),
parsed_data.get('pixel_format'),
parsed_data.get('video_bitrate'),
parsed_data.get('audio_codec'),
parsed_data.get('sample_rate'),
parsed_data.get('audio_channels'),
parsed_data.get('audio_bitrate'),
parsed_data.get('stream_type')
)
# Store in Redis if we have valid data
if input_format:
ChannelService._update_stream_info_in_redis(channel_id, None, None, None, None, None, None, None, None, None, None, None, input_format)
# Save to database if stream_id is provided
if stream_id:
ChannelService._update_stream_stats_in_db(stream_id, stream_type=input_format)
logger.debug(f"Input format info - Format: {input_format} for channel {channel_id}")
elif stream_type == "video":
# Example line:
# Stream #0:0: Video: h264 (Main), yuv420p(tv, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 2000 kb/s, 29.97 fps, 90k tbn
# Extract video codec (e.g., "h264", "mpeg2video", etc.)
codec_match = re.search(r'Video:\s*([a-zA-Z0-9_]+)', stream_info_line)
video_codec = codec_match.group(1) if codec_match else None
# Extract resolution (e.g., "1280x720") - be more specific to avoid hex values
# Look for resolution patterns that are realistic video dimensions
resolution_match = re.search(r'\b(\d{3,5})x(\d{3,5})\b', stream_info_line)
if resolution_match:
width = int(resolution_match.group(1))
height = int(resolution_match.group(2))
# Validate that these look like reasonable video dimensions
if 100 <= width <= 10000 and 100 <= height <= 10000:
resolution = f"{width}x{height}"
else:
width = height = resolution = None
else:
width = height = resolution = None
# Extract source FPS (e.g., "29.97 fps")
fps_match = re.search(r'(\d+(?:\.\d+)?)\s*fps', stream_info_line)
source_fps = float(fps_match.group(1)) if fps_match else None
# Extract pixel format (e.g., "yuv420p")
pixel_format_match = re.search(r'Video:\s*[^,]+,\s*([^,(]+)', stream_info_line)
pixel_format = None
if pixel_format_match:
pf = pixel_format_match.group(1).strip()
# Clean up pixel format (remove extra info in parentheses)
if '(' in pf:
pf = pf.split('(')[0].strip()
pixel_format = pf
# Extract bitrate if present (e.g., "2000 kb/s")
video_bitrate = None
bitrate_match = re.search(r'(\d+(?:\.\d+)?)\s*kb/s', stream_info_line)
if bitrate_match:
video_bitrate = float(bitrate_match.group(1))
# Store in Redis if we have valid data
if any(x is not None for x in [video_codec, resolution, source_fps, pixel_format, video_bitrate]):
ChannelService._update_stream_info_in_redis(channel_id, video_codec, resolution, width, height, source_fps, pixel_format, video_bitrate, None, None, None, None, None)
# Save to database if stream_id is provided
if stream_id:
ChannelService._update_stream_stats_in_db(
stream_id,
video_codec=video_codec,
resolution=resolution,
source_fps=source_fps,
pixel_format=pixel_format,
video_bitrate=video_bitrate
)
logger.info(f"Video stream info - Codec: {video_codec}, Resolution: {resolution}, "
f"Source FPS: {source_fps}, Pixel Format: {pixel_format}, "
f"Video Bitrate: {video_bitrate} kb/s")
elif stream_type == "audio":
# Example line:
# Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 64 kb/s
# Extract audio codec (e.g., "aac", "mp3", etc.)
codec_match = re.search(r'Audio:\s*([a-zA-Z0-9_]+)', stream_info_line)
audio_codec = codec_match.group(1) if codec_match else None
# Extract sample rate (e.g., "48000 Hz")
sample_rate_match = re.search(r'(\d+)\s*Hz', stream_info_line)
sample_rate = int(sample_rate_match.group(1)) if sample_rate_match else None
# Extract channel layout (e.g., "stereo", "5.1", "mono")
# Look for common channel layouts
channel_match = re.search(r'\b(mono|stereo|5\.1|7\.1|quad|2\.1)\b', stream_info_line, re.IGNORECASE)
channels = channel_match.group(1) if channel_match else None
# Extract audio bitrate if present (e.g., "64 kb/s")
audio_bitrate = None
bitrate_match = re.search(r'(\d+(?:\.\d+)?)\s*kb/s', stream_info_line)
if bitrate_match:
audio_bitrate = float(bitrate_match.group(1))
# Store in Redis if we have valid data
if any(x is not None for x in [audio_codec, sample_rate, channels, audio_bitrate]):
ChannelService._update_stream_info_in_redis(channel_id, None, None, None, None, None, None, None, audio_codec, sample_rate, channels, audio_bitrate, None)
# Save to database if stream_id is provided
if stream_id:
ChannelService._update_stream_stats_in_db(
stream_id,
audio_codec=audio_codec,
sample_rate=sample_rate,
audio_channels=channels,
audio_bitrate=audio_bitrate
)
if stream_id:
ChannelService._update_stream_stats_in_db(
stream_id,
video_codec=parsed_data.get('video_codec'),
resolution=parsed_data.get('resolution'),
source_fps=parsed_data.get('source_fps'),
pixel_format=parsed_data.get('pixel_format'),
video_bitrate=parsed_data.get('video_bitrate'),
audio_codec=parsed_data.get('audio_codec'),
sample_rate=parsed_data.get('sample_rate'),
audio_channels=parsed_data.get('audio_channels'),
audio_bitrate=parsed_data.get('audio_bitrate'),
stream_type=parsed_data.get('stream_type')
)
except Exception as e:
logger.debug(f"Error parsing FFmpeg {stream_type} stream info: {e}")
logger.debug(f"Error parsing {stream_type} stream info: {e}")
@staticmethod
def _update_stream_info_in_redis(channel_id, codec, resolution, width, height, fps, pixel_format, video_bitrate, audio_codec=None, sample_rate=None, channels=None, audio_bitrate=None, input_format=None):

View file

@ -0,0 +1,410 @@
"""Log parsers for FFmpeg, Streamlink, and VLC output."""
import re
import logging
from abc import ABC, abstractmethod
from typing import Optional, Dict, Any
logger = logging.getLogger(__name__)
class BaseLogParser(ABC):
"""Base class for log parsers"""
# Map of stream_type -> method_name that this parser handles
STREAM_TYPE_METHODS: Dict[str, str] = {}
@abstractmethod
def can_parse(self, line: str) -> Optional[str]:
"""
Check if this parser can handle the line.
Returns the stream_type if it can parse, None otherwise.
e.g., 'video', 'audio', 'vlc_video', 'vlc_audio', 'streamlink'
"""
pass
@abstractmethod
def parse_input_format(self, line: str) -> Optional[Dict[str, Any]]:
pass
@abstractmethod
def parse_video_stream(self, line: str) -> Optional[Dict[str, Any]]:
pass
@abstractmethod
def parse_audio_stream(self, line: str) -> Optional[Dict[str, Any]]:
pass
class FFmpegLogParser(BaseLogParser):
"""Parser for FFmpeg log output"""
STREAM_TYPE_METHODS = {
'input': 'parse_input_format',
'video': 'parse_video_stream',
'audio': 'parse_audio_stream'
}
def can_parse(self, line: str) -> Optional[str]:
"""Check if this is an FFmpeg line we can parse"""
lower = line.lower()
# Input format detection
if lower.startswith('input #'):
return 'input'
# Stream info (only during input phase, but we'll let stream_manager handle phase tracking)
if 'stream #' in lower:
if 'video:' in lower:
return 'video'
elif 'audio:' in lower:
return 'audio'
return None
def parse_input_format(self, line: str) -> Optional[Dict[str, Any]]:
"""Parse FFmpeg input format (e.g., mpegts, hls)"""
try:
input_match = re.search(r'Input #\d+,\s*([^,]+)', line)
input_format = input_match.group(1).strip() if input_match else None
if input_format:
logger.debug(f"Input format info - Format: {input_format}")
return {'stream_type': input_format}
except Exception as e:
logger.debug(f"Error parsing FFmpeg input format: {e}")
return None
def parse_video_stream(self, line: str) -> Optional[Dict[str, Any]]:
"""Parse FFmpeg video stream info"""
try:
result = {}
# Extract codec, resolution, fps, pixel format, bitrate
codec_match = re.search(r'Video:\s*([a-zA-Z0-9_]+)', line)
if codec_match:
result['video_codec'] = codec_match.group(1)
resolution_match = re.search(r'\b(\d{3,5})x(\d{3,5})\b', line)
if resolution_match:
width = int(resolution_match.group(1))
height = int(resolution_match.group(2))
if 100 <= width <= 10000 and 100 <= height <= 10000:
result['resolution'] = f"{width}x{height}"
result['width'] = width
result['height'] = height
fps_match = re.search(r'(\d+(?:\.\d+)?)\s*fps', line)
if fps_match:
result['source_fps'] = float(fps_match.group(1))
pixel_format_match = re.search(r'Video:\s*[^,]+,\s*([^,(]+)', line)
if pixel_format_match:
pf = pixel_format_match.group(1).strip()
if '(' in pf:
pf = pf.split('(')[0].strip()
result['pixel_format'] = pf
bitrate_match = re.search(r'(\d+(?:\.\d+)?)\s*kb/s', line)
if bitrate_match:
result['video_bitrate'] = float(bitrate_match.group(1))
if result:
logger.info(f"Video stream info - Codec: {result.get('video_codec')}, "
f"Resolution: {result.get('resolution')}, "
f"Source FPS: {result.get('source_fps')}, "
f"Pixel Format: {result.get('pixel_format')}, "
f"Video Bitrate: {result.get('video_bitrate')} kb/s")
return result
except Exception as e:
logger.debug(f"Error parsing FFmpeg video stream info: {e}")
return None
def parse_audio_stream(self, line: str) -> Optional[Dict[str, Any]]:
"""Parse FFmpeg audio stream info"""
try:
result = {}
codec_match = re.search(r'Audio:\s*([a-zA-Z0-9_]+)', line)
if codec_match:
result['audio_codec'] = codec_match.group(1)
sample_rate_match = re.search(r'(\d+)\s*Hz', line)
if sample_rate_match:
result['sample_rate'] = int(sample_rate_match.group(1))
channel_match = re.search(r'\b(mono|stereo|5\.1|7\.1|quad|2\.1)\b', line, re.IGNORECASE)
if channel_match:
result['audio_channels'] = channel_match.group(1)
bitrate_match = re.search(r'(\d+(?:\.\d+)?)\s*kb/s', line)
if bitrate_match:
result['audio_bitrate'] = float(bitrate_match.group(1))
if result:
return result
except Exception as e:
logger.debug(f"Error parsing FFmpeg audio stream info: {e}")
return None
class VLCLogParser(BaseLogParser):
"""Parser for VLC log output"""
STREAM_TYPE_METHODS = {
'vlc_video': 'parse_video_stream',
'vlc_audio': 'parse_audio_stream'
}
def can_parse(self, line: str) -> Optional[str]:
"""Check if this is a VLC line we can parse"""
lower = line.lower()
# VLC TS demux codec detection
if 'ts demux debug' in lower and 'type=' in lower:
if 'video' in lower:
return 'vlc_video'
elif 'audio' in lower:
return 'vlc_audio'
# VLC decoder output
if 'decoder' in lower and ('channels:' in lower or 'samplerate:' in lower or 'x' in line or 'fps' in lower):
if 'audio' in lower or 'channels:' in lower or 'samplerate:' in lower:
return 'vlc_audio'
else:
return 'vlc_video'
# VLC transcode output for resolution/FPS
if 'stream_out_transcode' in lower and ('source fps' in lower or ('source ' in lower and 'x' in line)):
return 'vlc_video'
return None
def parse_input_format(self, line: str) -> Optional[Dict[str, Any]]:
return None
def parse_video_stream(self, line: str) -> Optional[Dict[str, Any]]:
"""Parse VLC TS demux output and decoder info for video"""
try:
lower = line.lower()
result = {}
# Codec detection from TS demux
video_codec_map = {
('avc', 'h.264', 'type=0x1b'): "h264",
('hevc', 'h.265', 'type=0x24'): "hevc",
('mpeg-2', 'type=0x02'): "mpeg2video",
('mpeg-4', 'type=0x10'): "mpeg4"
}
for patterns, codec in video_codec_map.items():
if any(p in lower for p in patterns):
result['video_codec'] = codec
break
# Extract FPS from transcode output: "source fps 30/1"
fps_fraction_match = re.search(r'source fps\s+(\d+)/(\d+)', lower)
if fps_fraction_match:
numerator = int(fps_fraction_match.group(1))
denominator = int(fps_fraction_match.group(2))
if denominator > 0:
result['source_fps'] = numerator / denominator
# Extract resolution from transcode output: "source 1280x720"
source_res_match = re.search(r'source\s+(\d{3,4})x(\d{3,4})', lower)
if source_res_match:
width = int(source_res_match.group(1))
height = int(source_res_match.group(2))
if 100 <= width <= 10000 and 100 <= height <= 10000:
result['resolution'] = f"{width}x{height}"
result['width'] = width
result['height'] = height
else:
# Fallback: generic resolution pattern
resolution_match = re.search(r'(\d{3,4})x(\d{3,4})', line)
if resolution_match:
width = int(resolution_match.group(1))
height = int(resolution_match.group(2))
if 100 <= width <= 10000 and 100 <= height <= 10000:
result['resolution'] = f"{width}x{height}"
result['width'] = width
result['height'] = height
# Fallback: try to extract FPS from generic format
if 'source_fps' not in result:
fps_match = re.search(r'(\d+\.?\d*)\s*fps', lower)
if fps_match:
result['source_fps'] = float(fps_match.group(1))
return result if result else None
except Exception as e:
logger.debug(f"Error parsing VLC video stream info: {e}")
return None
def parse_audio_stream(self, line: str) -> Optional[Dict[str, Any]]:
"""Parse VLC TS demux output and decoder info for audio"""
try:
lower = line.lower()
result = {}
# Codec detection from TS demux
audio_codec_map = {
('type=0xf', 'adts'): "aac",
('type=0x03', 'type=0x04'): "mp3",
('type=0x06', 'type=0x81'): "ac3",
('type=0x0b', 'lpcm'): "pcm"
}
for patterns, codec in audio_codec_map.items():
if any(p in lower for p in patterns):
result['audio_codec'] = codec
break
# VLC decoder format: "AAC channels: 2 samplerate: 48000"
if 'channels:' in lower:
channels_match = re.search(r'channels:\s*(\d+)', lower)
if channels_match:
num_channels = int(channels_match.group(1))
# Convert number to name
channel_names = {1: 'mono', 2: 'stereo', 6: '5.1', 8: '7.1'}
result['audio_channels'] = channel_names.get(num_channels, str(num_channels))
if 'samplerate:' in lower:
samplerate_match = re.search(r'samplerate:\s*(\d+)', lower)
if samplerate_match:
result['sample_rate'] = int(samplerate_match.group(1))
# Try to extract sample rate (Hz format)
sample_rate_match = re.search(r'(\d+)\s*hz', lower)
if sample_rate_match and 'sample_rate' not in result:
result['sample_rate'] = int(sample_rate_match.group(1))
# Try to extract channels (word format)
if 'audio_channels' not in result:
channel_match = re.search(r'\b(mono|stereo|5\.1|7\.1|quad|2\.1)\b', lower)
if channel_match:
result['audio_channels'] = channel_match.group(1)
return result if result else None
except Exception as e:
logger.error(f"[VLC AUDIO PARSER] Error parsing VLC audio stream info: {e}")
return None
class StreamlinkLogParser(BaseLogParser):
"""Parser for Streamlink log output"""
STREAM_TYPE_METHODS = {
'streamlink': 'parse_video_stream'
}
def can_parse(self, line: str) -> Optional[str]:
"""Check if this is a Streamlink line we can parse"""
lower = line.lower()
if 'opening stream:' in lower or 'available streams:' in lower:
return 'streamlink'
return None
def parse_input_format(self, line: str) -> Optional[Dict[str, Any]]:
return None
def parse_video_stream(self, line: str) -> Optional[Dict[str, Any]]:
"""Parse Streamlink quality/resolution"""
try:
quality_match = re.search(r'(\d+p|\d+x\d+)', line)
if quality_match:
quality = quality_match.group(1)
if 'x' in quality:
resolution = quality
width, height = map(int, quality.split('x'))
else:
resolutions = {
'2160p': ('3840x2160', 3840, 2160),
'1080p': ('1920x1080', 1920, 1080),
'720p': ('1280x720', 1280, 720),
'480p': ('854x480', 854, 480),
'360p': ('640x360', 640, 360)
}
resolution, width, height = resolutions.get(quality, ('1920x1080', 1920, 1080))
return {
'video_codec': 'h264',
'resolution': resolution,
'width': width,
'height': height,
'pixel_format': 'yuv420p'
}
except Exception as e:
logger.debug(f"Error parsing Streamlink video info: {e}")
return None
def parse_audio_stream(self, line: str) -> Optional[Dict[str, Any]]:
return None
class LogParserFactory:
"""Factory to get the appropriate log parser"""
_parsers = {
'ffmpeg': FFmpegLogParser(),
'vlc': VLCLogParser(),
'streamlink': StreamlinkLogParser()
}
@classmethod
def _get_parser_and_method(cls, stream_type: str) -> Optional[tuple[BaseLogParser, str]]:
"""Determine parser and method from stream_type"""
# Check each parser to see if it handles this stream_type
for parser in cls._parsers.values():
method_name = parser.STREAM_TYPE_METHODS.get(stream_type)
if method_name:
return (parser, method_name)
return None
@classmethod
def parse(cls, stream_type: str, line: str) -> Optional[Dict[str, Any]]:
"""
Parse a log line based on stream type.
Returns parsed data or None if parsing fails.
"""
result = cls._get_parser_and_method(stream_type)
if not result:
return None
parser, method_name = result
method = getattr(parser, method_name, None)
if method:
return method(line)
return None
@classmethod
def auto_parse(cls, line: str) -> Optional[tuple[str, Dict[str, Any]]]:
"""
Automatically detect which parser can handle this line and parse it.
Returns (stream_type, parsed_data) or None if no parser can handle it.
"""
# Try each parser to see if it can handle this line
for parser in cls._parsers.values():
stream_type = parser.can_parse(line)
if stream_type:
# Parser can handle this line, now parse it
parsed_data = cls.parse(stream_type, line)
if parsed_data:
return (stream_type, parsed_data)
return None
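A quick usage sketch for the factory above, fed an FFmpeg-style stderr line (the line itself is illustrative):
line = "Stream #0:0: Video: h264 (Main), yuv420p(tv, progressive), 1280x720, 2000 kb/s, 29.97 fps"
result = LogParserFactory.auto_parse(line)
if result:
    stream_type, data = result
    # stream_type == 'video'; data contains video_codec='h264', resolution='1280x720',
    # width=1280, height=720, source_fps=29.97, pixel_format='yuv420p', video_bitrate=2000.0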

View file

@ -107,6 +107,10 @@ class StreamManager:
# Add this flag for tracking transcoding process status
self.transcode_process_active = False
# Track stream command for efficient log parser routing
self.stream_command = None
self.parser_type = None # Will be set when transcode process starts
# Add tracking for data throughput
self.bytes_processed = 0
self.last_bytes_update = time.time()
@ -476,6 +480,21 @@ class StreamManager:
# Build and start transcode command
self.transcode_cmd = stream_profile.build_command(self.url, self.user_agent)
# Store stream command for efficient log parser routing
self.stream_command = stream_profile.command
# Map actual commands to parser types for direct routing
command_to_parser = {
'ffmpeg': 'ffmpeg',
'cvlc': 'vlc',
'vlc': 'vlc',
'streamlink': 'streamlink'
}
self.parser_type = command_to_parser.get(self.stream_command.lower())
if self.parser_type:
logger.debug(f"Using {self.parser_type} parser for log parsing (command: {self.stream_command})")
else:
logger.debug(f"Unknown stream command '{self.stream_command}', will use auto-detection for log parsing")
# For UDP streams, remove any user_agent parameters from the command
if hasattr(self, 'stream_type') and self.stream_type == StreamType.UDP:
# Filter out any arguments that contain the user_agent value or related headers
@ -645,35 +664,51 @@ class StreamManager:
if content_lower.startswith('output #') or 'encoder' in content_lower:
self.ffmpeg_input_phase = False
# Only parse stream info if we're still in the input phase
if ("stream #" in content_lower and
("video:" in content_lower or "audio:" in content_lower) and
self.ffmpeg_input_phase):
# Route to appropriate parser based on known command type
from .services.log_parsers import LogParserFactory
from .services.channel_service import ChannelService
if "video:" in content_lower:
ChannelService.parse_and_store_stream_info(self.channel_id, content, "video", self.current_stream_id)
elif "audio:" in content_lower:
ChannelService.parse_and_store_stream_info(self.channel_id, content, "audio", self.current_stream_id)
parse_result = None
# If we know the parser type, use direct routing for efficiency
if self.parser_type:
# Get the appropriate parser and check what it can parse
parser = LogParserFactory._parsers.get(self.parser_type)
if parser:
stream_type = parser.can_parse(content)
if stream_type:
# Parser can handle this line, parse it directly
parsed_data = LogParserFactory.parse(stream_type, content)
if parsed_data:
parse_result = (stream_type, parsed_data)
else:
# Unknown command type - use auto-detection as fallback
parse_result = LogParserFactory.auto_parse(content)
if parse_result:
stream_type, parsed_data = parse_result
# For FFmpeg, only parse during input phase
if stream_type in ['video', 'audio', 'input']:
if self.ffmpeg_input_phase:
ChannelService.parse_and_store_stream_info(self.channel_id, content, stream_type, self.current_stream_id)
else:
# VLC and Streamlink can be parsed anytime
ChannelService.parse_and_store_stream_info(self.channel_id, content, stream_type, self.current_stream_id)
# Determine log level based on content
if any(keyword in content_lower for keyword in ['error', 'failed', 'cannot', 'invalid', 'corrupt']):
logger.error(f"FFmpeg stderr for channel {self.channel_id}: {content}")
logger.error(f"Stream process error for channel {self.channel_id}: {content}")
elif any(keyword in content_lower for keyword in ['warning', 'deprecated', 'ignoring']):
logger.warning(f"FFmpeg stderr for channel {self.channel_id}: {content}")
logger.warning(f"Stream process warning for channel {self.channel_id}: {content}")
elif content.startswith('frame=') or 'fps=' in content or 'speed=' in content:
# Stats lines - log at trace level to avoid spam
logger.trace(f"FFmpeg stats for channel {self.channel_id}: {content}")
logger.trace(f"Stream stats for channel {self.channel_id}: {content}")
elif any(keyword in content_lower for keyword in ['input', 'output', 'stream', 'video', 'audio']):
# Stream info - log at info level
logger.info(f"FFmpeg info for channel {self.channel_id}: {content}")
if content.startswith('Input #0'):
# If it's input 0, parse stream info
from .services.channel_service import ChannelService
ChannelService.parse_and_store_stream_info(self.channel_id, content, "input", self.current_stream_id)
logger.info(f"Stream info for channel {self.channel_id}: {content}")
else:
# Everything else at debug level
logger.debug(f"FFmpeg stderr for channel {self.channel_id}: {content}")
logger.debug(f"Stream process output for channel {self.channel_id}: {content}")
except Exception as e:
logger.error(f"Error logging stderr content for channel {self.channel_id}: {e}")

View file

@ -462,16 +462,21 @@ def validate_stream_url(url, user_agent=None, timeout=(5, 5)):
session.headers.update(headers)
# Make HEAD request first as it's faster and doesn't download content
head_response = session.head(
url,
timeout=timeout,
allow_redirects=True
)
head_request_success = True
try:
head_response = session.head(
url,
timeout=timeout,
allow_redirects=True
)
except requests.exceptions.RequestException as e:
head_request_success = False
logger.warning(f"Request error (HEAD), assuming HEAD not supported: {str(e)}")
# If HEAD not supported, server will return 405 or other error
if 200 <= head_response.status_code < 300:
if head_request_success and (200 <= head_response.status_code < 300):
# HEAD request successful
return True, head_response.url, head_response.status_code, "Valid (HEAD request)"
return True, url, head_response.status_code, "Valid (HEAD request)"
# Try a GET request with stream=True to avoid downloading all content
get_response = session.get(
@ -484,7 +489,7 @@ def validate_stream_url(url, user_agent=None, timeout=(5, 5)):
# IMPORTANT: Check status code first before checking content
if not (200 <= get_response.status_code < 300):
logger.warning(f"Stream validation failed with HTTP status {get_response.status_code}")
return False, get_response.url, get_response.status_code, f"Invalid HTTP status: {get_response.status_code}"
return False, url, get_response.status_code, f"Invalid HTTP status: {get_response.status_code}"
# Only check content if status code is valid
try:
@ -538,7 +543,7 @@ def validate_stream_url(url, user_agent=None, timeout=(5, 5)):
get_response.close()
# If we have content, consider it valid even with unrecognized content type
return is_valid, get_response.url, get_response.status_code, message
return is_valid, url, get_response.status_code, message
except requests.exceptions.Timeout:
return False, url, 0, "Timeout connecting to stream"
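For context, a hypothetical call showing the return shape used throughout this function (after this change the original url, not the redirected response URL, is returned):
is_valid, returned_url, status_code, message = validate_stream_url(
    "http://example.com/stream.ts",
    user_agent="Mozilla/5.0",
    timeout=(5, 5),
)
# e.g. (True, "http://example.com/stream.ts", 200, "Valid (HEAD request)")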

View file

@ -97,7 +97,11 @@ class PersistentVODConnection:
# First check if we have a pre-stored content length from HEAD request
try:
import redis
r = redis.StrictRedis(host='localhost', port=6379, db=0, decode_responses=True)
from django.conf import settings
redis_host = getattr(settings, 'REDIS_HOST', 'localhost')
redis_port = int(getattr(settings, 'REDIS_PORT', 6379))
redis_db = int(getattr(settings, 'REDIS_DB', 0))
r = redis.StrictRedis(host=redis_host, port=redis_port, db=redis_db, decode_responses=True)
content_length_key = f"vod_content_length:{self.session_id}"
stored_length = r.get(content_length_key)
if stored_length:

View file

@ -24,6 +24,11 @@ from apps.m3u.models import M3UAccountProfile
logger = logging.getLogger("vod_proxy")
def get_vod_client_stop_key(client_id):
"""Get the Redis key for signaling a VOD client to stop"""
return f"vod_proxy:client:{client_id}:stop"
def infer_content_type_from_url(url: str) -> Optional[str]:
"""
Infer MIME type from file extension in URL
@ -352,12 +357,12 @@ class RedisBackedVODConnection:
logger.info(f"[{self.session_id}] Making request #{state.request_count} to {'final' if state.final_url else 'original'} URL")
# Make request
# Make request (10s connect, 10s read timeout - keeps lock time reasonable if client disconnects)
response = self.local_session.get(
target_url,
headers=headers,
stream=True,
timeout=(10, 30),
timeout=(10, 10),
allow_redirects=allow_redirects
)
response.raise_for_status()
@ -707,6 +712,10 @@ class MultiWorkerVODConnectionManager:
content_name = content_obj.name if hasattr(content_obj, 'name') else str(content_obj)
client_id = session_id
# Track whether we incremented profile connections (for cleanup on error)
profile_connections_incremented = False
redis_connection = None
logger.info(f"[{client_id}] Worker {self.worker_id} - Redis-backed streaming request for {content_type} {content_name}")
try:
@ -797,6 +806,7 @@ class MultiWorkerVODConnectionManager:
# Increment profile connections after successful connection creation
self._increment_profile_connections(m3u_profile)
profile_connections_incremented = True
logger.info(f"[{client_id}] Worker {self.worker_id} - Created consolidated connection with session metadata")
else:
@ -832,6 +842,7 @@ class MultiWorkerVODConnectionManager:
# Create streaming generator
def stream_generator():
decremented = False
stop_signal_detected = False
try:
logger.info(f"[{client_id}] Worker {self.worker_id} - Starting Redis-backed stream")
@ -846,14 +857,25 @@ class MultiWorkerVODConnectionManager:
bytes_sent = 0
chunk_count = 0
# Get the stop signal key for this client
stop_key = get_vod_client_stop_key(client_id)
for chunk in upstream_response.iter_content(chunk_size=8192):
if chunk:
yield chunk
bytes_sent += len(chunk)
chunk_count += 1
# Update activity every 100 chunks in consolidated connection state
# Check for stop signal every 100 chunks
if chunk_count % 100 == 0:
# Check if stop signal has been set
if self.redis_client and self.redis_client.exists(stop_key):
logger.info(f"[{client_id}] Worker {self.worker_id} - Stop signal detected, terminating stream")
# Delete the stop key
self.redis_client.delete(stop_key)
stop_signal_detected = True
break
# Update the connection state
logger.debug(f"Client: [{client_id}] Worker: {self.worker_id} sent {chunk_count} chunks for VOD: {content_name}")
if redis_connection._acquire_lock():
@ -867,7 +889,10 @@ class MultiWorkerVODConnectionManager:
finally:
redis_connection._release_lock()
logger.info(f"[{client_id}] Worker {self.worker_id} - Redis-backed stream completed: {bytes_sent} bytes sent")
if stop_signal_detected:
logger.info(f"[{client_id}] Worker {self.worker_id} - Stream stopped by signal: {bytes_sent} bytes sent")
else:
logger.info(f"[{client_id}] Worker {self.worker_id} - Redis-backed stream completed: {bytes_sent} bytes sent")
redis_connection.decrement_active_streams()
decremented = True
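The stop handshake added here, reduced to its two sides (the key name comes from get_vod_client_stop_key above; the TTL and check cadence match the diff):
# Control side (see the stop_vod_client view later in this compare):
redis_client.setex(get_vod_client_stop_key(client_id), 60, "true")
# Streaming side, inside stream_generator, every 100th chunk:
if chunk_count % 100 == 0 and redis_client.exists(stop_key):
    redis_client.delete(stop_key)
    stop_signal_detected = True  # then break out of the iter_content() loop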
@ -1004,6 +1029,19 @@ class MultiWorkerVODConnectionManager:
except Exception as e:
logger.error(f"[{client_id}] Worker {self.worker_id} - Error in Redis-backed stream_content_with_session: {e}", exc_info=True)
# Decrement profile connections if we incremented them but failed before streaming started
if profile_connections_incremented:
logger.info(f"[{client_id}] Connection error occurred after profile increment - decrementing profile connections")
self._decrement_profile_connections(m3u_profile.id)
# Also clean up the Redis connection state since we won't be using it
if redis_connection:
try:
redis_connection.cleanup(connection_manager=self, current_worker_id=self.worker_id)
except Exception as cleanup_error:
logger.error(f"[{client_id}] Error during cleanup after connection failure: {cleanup_error}")
return HttpResponse(f"Streaming error: {str(e)}", status=500)
def _apply_timeshift_parameters(self, original_url, utc_start=None, utc_end=None, offset=None):

View file

@ -21,4 +21,7 @@ urlpatterns = [
# VOD Stats
path('stats/', views.VODStatsView.as_view(), name='vod_stats'),
# Stop VOD client connection
path('stop_client/', views.stop_vod_client, name='stop_vod_client'),
]

View file

@ -15,7 +15,7 @@ from django.views import View
from apps.vod.models import Movie, Series, Episode
from apps.m3u.models import M3UAccount, M3UAccountProfile
from apps.proxy.vod_proxy.connection_manager import VODConnectionManager
from apps.proxy.vod_proxy.multi_worker_connection_manager import MultiWorkerVODConnectionManager, infer_content_type_from_url
from apps.proxy.vod_proxy.multi_worker_connection_manager import MultiWorkerVODConnectionManager, infer_content_type_from_url, get_vod_client_stop_key
from .utils import get_client_info, create_vod_response
logger = logging.getLogger(__name__)
@ -329,7 +329,11 @@ class VODStreamView(View):
# Store the total content length in Redis for the persistent connection to use
try:
import redis
r = redis.StrictRedis(host='localhost', port=6379, db=0, decode_responses=True)
from django.conf import settings
redis_host = getattr(settings, 'REDIS_HOST', 'localhost')
redis_port = int(getattr(settings, 'REDIS_PORT', 6379))
redis_db = int(getattr(settings, 'REDIS_DB', 0))
r = redis.StrictRedis(host=redis_host, port=redis_port, db=redis_db, decode_responses=True)
content_length_key = f"vod_content_length:{session_id}"
r.set(content_length_key, total_size, ex=1800) # Store for 30 minutes
logger.info(f"[VOD-HEAD] Stored total content length {total_size} for session {session_id}")
@ -1011,3 +1015,59 @@ class VODStatsView(View):
except Exception as e:
logger.error(f"Error getting VOD stats: {e}")
return JsonResponse({'error': str(e)}, status=500)
from rest_framework.decorators import api_view, permission_classes
from apps.accounts.permissions import IsAdmin
@csrf_exempt
@api_view(["POST"])
@permission_classes([IsAdmin])
def stop_vod_client(request):
"""Stop a specific VOD client connection using stop signal mechanism"""
try:
# Parse request body
import json
try:
data = json.loads(request.body)
except json.JSONDecodeError:
return JsonResponse({'error': 'Invalid JSON'}, status=400)
client_id = data.get('client_id')
if not client_id:
return JsonResponse({'error': 'No client_id provided'}, status=400)
logger.info(f"Request to stop VOD client: {client_id}")
# Get Redis client
connection_manager = MultiWorkerVODConnectionManager.get_instance()
redis_client = connection_manager.redis_client
if not redis_client:
return JsonResponse({'error': 'Redis not available'}, status=500)
# Check if connection exists
connection_key = f"vod_persistent_connection:{client_id}"
connection_data = redis_client.hgetall(connection_key)
if not connection_data:
logger.warning(f"VOD connection not found: {client_id}")
return JsonResponse({'error': 'Connection not found'}, status=404)
# Set a stop signal key that the worker will check
stop_key = get_vod_client_stop_key(client_id)
redis_client.setex(stop_key, 60, "true") # 60 second TTL
logger.info(f"Set stop signal for VOD client: {client_id}")
return JsonResponse({
'message': 'VOD client stop signal sent',
'client_id': client_id,
'stop_key': stop_key
})
except Exception as e:
logger.error(f"Error stopping VOD client: {e}", exc_info=True)
return JsonResponse({'error': str(e)}, status=500)
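A hedged example of driving the new endpoint; the URL prefix in front of stop_client/ depends on where this urls.py is mounted and is assumed here, as is the auth header format for the IsAdmin-protected view:
import requests

resp = requests.post(
    "http://dispatcharr.local/proxy/vod/stop_client/",  # prefix assumed
    json={"client_id": "abc123"},
    headers={"Authorization": "Bearer <admin access token>"},  # auth scheme assumed
)
print(resp.json())  # {'message': 'VOD client stop signal sent', 'client_id': 'abc123', 'stop_key': '...'}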

View file

@ -62,7 +62,7 @@ class MovieFilter(django_filters.FilterSet):
# Handle the format 'category_name|category_type'
if '|' in value:
category_name, category_type = value.split('|', 1)
category_name, category_type = value.rsplit('|', 1)
return queryset.filter(
m3u_relations__category__name=category_name,
m3u_relations__category__category_type=category_type
@ -219,7 +219,7 @@ class SeriesFilter(django_filters.FilterSet):
# Handle the format 'category_name|category_type'
if '|' in value:
category_name, category_type = value.split('|', 1)
category_name, category_type = value.rsplit('|', 1)
return queryset.filter(
m3u_relations__category__name=category_name,
m3u_relations__category__category_type=category_type
@ -588,7 +588,7 @@ class UnifiedContentViewSet(viewsets.ReadOnlyModelViewSet):
if category:
if '|' in category:
cat_name, cat_type = category.split('|', 1)
cat_name, cat_type = category.rsplit('|', 1)
if cat_type == 'movie':
where_conditions[0] += " AND movies.id IN (SELECT movie_id FROM vod_m3umovierelation mmr JOIN vod_vodcategory c ON mmr.category_id = c.id WHERE c.name = %s)"
where_conditions[1] = "1=0" # Exclude series

View file

@ -245,10 +245,13 @@ class M3UMovieRelation(models.Model):
"""Get the full stream URL for this movie from this provider"""
# Build URL dynamically for XtreamCodes accounts
if self.m3u_account.account_type == 'XC':
server_url = self.m3u_account.server_url.rstrip('/')
from core.xtream_codes import Client as XCClient
# Use XC client's URL normalization to handle malformed URLs
# (e.g., URLs with /player_api.php or query parameters)
normalized_url = XCClient(self.m3u_account.server_url, '', '')._normalize_url(self.m3u_account.server_url)
username = self.m3u_account.username
password = self.m3u_account.password
return f"{server_url}/movie/{username}/{password}/{self.stream_id}.{self.container_extension or 'mp4'}"
return f"{normalized_url}/movie/{username}/{password}/{self.stream_id}.{self.container_extension or 'mp4'}"
else:
# For other account types, we would need another way to build URLs
return None
@ -285,10 +288,12 @@ class M3UEpisodeRelation(models.Model):
if self.m3u_account.account_type == 'XC':
# For XtreamCodes accounts, build the URL dynamically
server_url = self.m3u_account.server_url.rstrip('/')
# Use XC client's URL normalization to handle malformed URLs
# (e.g., URLs with /player_api.php or query parameters)
normalized_url = XtreamCodesClient(self.m3u_account.server_url, '', '')._normalize_url(self.m3u_account.server_url)
username = self.m3u_account.username
password = self.m3u_account.password
return f"{server_url}/series/{username}/{password}/{self.stream_id}.{self.container_extension or 'mp4'}"
return f"{normalized_url}/series/{username}/{password}/{self.stream_id}.{self.container_extension or 'mp4'}"
else:
# We might support non XC accounts in the future
# For now, return None
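Assuming _normalize_url does what the comments describe (dropping an accidental /player_api.php path and any query string), the effect is roughly:
# "http://host:8080/player_api.php?username=u&password=p"  -> "http://host:8080"
# final URL: f"{normalized_url}/series/{username}/{password}/{stream_id}.{container_extension or 'mp4'}"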

View file

@ -410,10 +410,10 @@ def process_movie_batch(account, batch, categories, relations, scan_start_time=N
tmdb_id = movie_data.get('tmdb_id') or movie_data.get('tmdb')
imdb_id = movie_data.get('imdb_id') or movie_data.get('imdb')
# Clean empty string IDs
if tmdb_id == '':
# Clean empty string IDs and zero values (some providers use 0 to indicate no ID)
if tmdb_id == '' or tmdb_id == 0 or tmdb_id == '0':
tmdb_id = None
if imdb_id == '':
if imdb_id == '' or imdb_id == 0 or imdb_id == '0':
imdb_id = None
# Create a unique key for this movie (priority: TMDB > IMDB > name+year)
@ -614,26 +614,41 @@ def process_movie_batch(account, batch, categories, relations, scan_start_time=N
# First, create new movies and get their IDs
created_movies = {}
if movies_to_create:
Movie.objects.bulk_create(movies_to_create, ignore_conflicts=True)
# Bulk query to check which movies already exist
tmdb_ids = [m.tmdb_id for m in movies_to_create if m.tmdb_id]
imdb_ids = [m.imdb_id for m in movies_to_create if m.imdb_id]
name_year_pairs = [(m.name, m.year) for m in movies_to_create if not m.tmdb_id and not m.imdb_id]
# Get the newly created movies with their IDs
# We need to re-fetch them to get the primary keys
existing_by_tmdb = {m.tmdb_id: m for m in Movie.objects.filter(tmdb_id__in=tmdb_ids)} if tmdb_ids else {}
existing_by_imdb = {m.imdb_id: m for m in Movie.objects.filter(imdb_id__in=imdb_ids)} if imdb_ids else {}
existing_by_name_year = {}
if name_year_pairs:
for movie in Movie.objects.filter(tmdb_id__isnull=True, imdb_id__isnull=True):
key = (movie.name, movie.year)
if key in name_year_pairs:
existing_by_name_year[key] = movie
# Check each movie against the bulk query results
movies_actually_created = []
for movie in movies_to_create:
# Find the movie by its unique identifiers
if movie.tmdb_id:
db_movie = Movie.objects.filter(tmdb_id=movie.tmdb_id).first()
elif movie.imdb_id:
db_movie = Movie.objects.filter(imdb_id=movie.imdb_id).first()
else:
db_movie = Movie.objects.filter(
name=movie.name,
year=movie.year,
tmdb_id__isnull=True,
imdb_id__isnull=True
).first()
existing = None
if movie.tmdb_id and movie.tmdb_id in existing_by_tmdb:
existing = existing_by_tmdb[movie.tmdb_id]
elif movie.imdb_id and movie.imdb_id in existing_by_imdb:
existing = existing_by_imdb[movie.imdb_id]
elif not movie.tmdb_id and not movie.imdb_id:
existing = existing_by_name_year.get((movie.name, movie.year))
if db_movie:
created_movies[id(movie)] = db_movie
if existing:
created_movies[id(movie)] = existing
else:
movies_actually_created.append(movie)
created_movies[id(movie)] = movie
# Bulk create only movies that don't exist
if movies_actually_created:
Movie.objects.bulk_create(movies_actually_created)
# Update existing movies
if movies_to_update:
@ -649,12 +664,16 @@ def process_movie_batch(account, batch, categories, relations, scan_start_time=N
movie.logo = movie._logo_to_update
movie.save(update_fields=['logo'])
# Update relations to reference the correct movie objects
# Update relations to reference the correct movie objects (with PKs)
for relation in relations_to_create:
if id(relation.movie) in created_movies:
relation.movie = created_movies[id(relation.movie)]
# Handle relations
for relation in relations_to_update:
if id(relation.movie) in created_movies:
relation.movie = created_movies[id(relation.movie)]
# All movies now have PKs, safe to bulk create/update relations
if relations_to_create:
M3UMovieRelation.objects.bulk_create(relations_to_create, ignore_conflicts=True)
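The existence checks above follow a single lookup priority; a compact sketch of that rule (mirrors the branches, names as in the diff):
def find_existing(movie, by_tmdb, by_imdb, by_name_year):
    # Priority: TMDB id, then IMDB id, then (name, year) for rows that have neither id.
    if movie.tmdb_id and movie.tmdb_id in by_tmdb:
        return by_tmdb[movie.tmdb_id]
    if movie.imdb_id and movie.imdb_id in by_imdb:
        return by_imdb[movie.imdb_id]
    if not movie.tmdb_id and not movie.imdb_id:
        return by_name_year.get((movie.name, movie.year))
    return None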
@ -724,10 +743,10 @@ def process_series_batch(account, batch, categories, relations, scan_start_time=
tmdb_id = series_data.get('tmdb') or series_data.get('tmdb_id')
imdb_id = series_data.get('imdb') or series_data.get('imdb_id')
# Clean empty string IDs
if tmdb_id == '':
# Clean empty string IDs and zero values (some providers use 0 to indicate no ID)
if tmdb_id == '' or tmdb_id == 0 or tmdb_id == '0':
tmdb_id = None
if imdb_id == '':
if imdb_id == '' or imdb_id == 0 or imdb_id == '0':
imdb_id = None
# Create a unique key for this series (priority: TMDB > IMDB > name+year)
@ -945,26 +964,41 @@ def process_series_batch(account, batch, categories, relations, scan_start_time=
# First, create new series and get their IDs
created_series = {}
if series_to_create:
Series.objects.bulk_create(series_to_create, ignore_conflicts=True)
# Bulk query to check which series already exist
tmdb_ids = [s.tmdb_id for s in series_to_create if s.tmdb_id]
imdb_ids = [s.imdb_id for s in series_to_create if s.imdb_id]
name_year_pairs = [(s.name, s.year) for s in series_to_create if not s.tmdb_id and not s.imdb_id]
# Get the newly created series with their IDs
# We need to re-fetch them to get the primary keys
existing_by_tmdb = {s.tmdb_id: s for s in Series.objects.filter(tmdb_id__in=tmdb_ids)} if tmdb_ids else {}
existing_by_imdb = {s.imdb_id: s for s in Series.objects.filter(imdb_id__in=imdb_ids)} if imdb_ids else {}
existing_by_name_year = {}
if name_year_pairs:
for series in Series.objects.filter(tmdb_id__isnull=True, imdb_id__isnull=True):
key = (series.name, series.year)
if key in name_year_pairs:
existing_by_name_year[key] = series
# Check each series against the bulk query results
series_actually_created = []
for series in series_to_create:
# Find the series by its unique identifiers
if series.tmdb_id:
db_series = Series.objects.filter(tmdb_id=series.tmdb_id).first()
elif series.imdb_id:
db_series = Series.objects.filter(imdb_id=series.imdb_id).first()
else:
db_series = Series.objects.filter(
name=series.name,
year=series.year,
tmdb_id__isnull=True,
imdb_id__isnull=True
).first()
existing = None
if series.tmdb_id and series.tmdb_id in existing_by_tmdb:
existing = existing_by_tmdb[series.tmdb_id]
elif series.imdb_id and series.imdb_id in existing_by_imdb:
existing = existing_by_imdb[series.imdb_id]
elif not series.tmdb_id and not series.imdb_id:
existing = existing_by_name_year.get((series.name, series.year))
if db_series:
created_series[id(series)] = db_series
if existing:
created_series[id(series)] = existing
else:
series_actually_created.append(series)
created_series[id(series)] = series
# Bulk create only series that don't exist
if series_actually_created:
Series.objects.bulk_create(series_actually_created)
# Update existing series
if series_to_update:
@ -980,12 +1014,16 @@ def process_series_batch(account, batch, categories, relations, scan_start_time=
series.logo = series._logo_to_update
series.save(update_fields=['logo'])
# Update relations to reference the correct series objects
# Update relations to reference the correct series objects (with PKs)
for relation in relations_to_create:
if id(relation.series) in created_series:
relation.series = created_series[id(relation.series)]
# Handle relations
for relation in relations_to_update:
if id(relation.series) in created_series:
relation.series = created_series[id(relation.series)]
# All series now have PKs, safe to bulk create/update relations
if relations_to_create:
M3USeriesRelation.objects.bulk_create(relations_to_create, ignore_conflicts=True)
@ -1232,7 +1270,13 @@ def refresh_series_episodes(account, series, external_series_id, episodes_data=N
def batch_process_episodes(account, series, episodes_data, scan_start_time=None):
"""Process episodes in batches for better performance"""
"""Process episodes in batches for better performance.
Note: Multiple streams can represent the same episode (e.g., different languages
or qualities). Each stream has a unique stream_id, but they share the same
season/episode number. We create one Episode record per (series, season, episode)
and multiple M3UEpisodeRelation records pointing to it.
"""
if not episodes_data:
return
@ -1249,12 +1293,13 @@ def batch_process_episodes(account, series, episodes_data, scan_start_time=None)
logger.info(f"Batch processing {len(all_episodes_data)} episodes for series {series.name}")
# Extract episode identifiers
episode_keys = []
# Note: episode_keys may have duplicates when multiple streams represent same episode
episode_keys = set() # Use set to track unique episode keys
episode_ids = []
for episode_data in all_episodes_data:
season_num = episode_data['_season_number']
episode_num = episode_data.get('episode_num', 0)
episode_keys.append((series.id, season_num, episode_num))
episode_keys.add((series.id, season_num, episode_num))
episode_ids.append(str(episode_data.get('id')))
# Pre-fetch existing episodes
@ -1277,12 +1322,25 @@ def batch_process_episodes(account, series, episodes_data, scan_start_time=None)
relations_to_create = []
relations_to_update = []
# Track episodes we're creating in this batch to avoid duplicates
# Key: (series_id, season_number, episode_number) -> Episode object
episodes_pending_creation = {}
for episode_data in all_episodes_data:
try:
episode_id = str(episode_data.get('id'))
episode_name = episode_data.get('title', 'Unknown Episode')
season_number = episode_data['_season_number']
episode_number = episode_data.get('episode_num', 0)
# Ensure season and episode numbers are integers (API may return strings)
try:
season_number = int(episode_data['_season_number'])
except (ValueError, TypeError) as e:
logger.warning(f"Invalid season_number '{episode_data.get('_season_number')}' for episode '{episode_name}': {e}")
season_number = 0
try:
episode_number = int(episode_data.get('episode_num', 0))
except (ValueError, TypeError) as e:
logger.warning(f"Invalid episode_num '{episode_data.get('episode_num')}' for episode '{episode_name}': {e}")
episode_number = 0
info = episode_data.get('info', {})
# Extract episode metadata
@ -1306,10 +1364,15 @@ def batch_process_episodes(account, series, episodes_data, scan_start_time=None)
if backdrop:
custom_props['backdrop_path'] = [backdrop]
# Find existing episode
# Find existing episode - check DB first, then pending creations
episode_key = (series.id, season_number, episode_number)
episode = existing_episodes.get(episode_key)
# Check if we already have this episode pending creation (multiple streams for same episode)
if not episode and episode_key in episodes_pending_creation:
episode = episodes_pending_creation[episode_key]
logger.debug(f"Reusing pending episode for S{season_number}E{episode_number} (stream_id: {episode_id})")
if episode:
# Update existing episode
updated = False
@ -1338,7 +1401,9 @@ def batch_process_episodes(account, series, episodes_data, scan_start_time=None)
episode.custom_properties = custom_props if custom_props else None
updated = True
if updated:
# Only add to update list if episode has a PK (exists in DB) and isn't already in list
# Episodes pending creation don't have PKs yet and will be created via bulk_create
if updated and episode.pk and episode not in episodes_to_update:
episodes_to_update.append(episode)
else:
# Create new episode
@ -1356,6 +1421,8 @@ def batch_process_episodes(account, series, episodes_data, scan_start_time=None)
custom_properties=custom_props if custom_props else None
)
episodes_to_create.append(episode)
# Track this episode so subsequent streams with same season/episode can reuse it
episodes_pending_creation[episode_key] = episode
# Handle episode relation
if episode_id in existing_relations:
@ -1389,9 +1456,43 @@ def batch_process_episodes(account, series, episodes_data, scan_start_time=None)
# Execute batch operations
with transaction.atomic():
# Create new episodes
# Create new episodes - use ignore_conflicts in case of race conditions
if episodes_to_create:
Episode.objects.bulk_create(episodes_to_create)
Episode.objects.bulk_create(episodes_to_create, ignore_conflicts=True)
# Re-fetch the created episodes to get their PKs
# We need to do this because bulk_create with ignore_conflicts doesn't set PKs
created_episode_keys = [
(ep.series_id, ep.season_number, ep.episode_number)
for ep in episodes_to_create
]
db_episodes = Episode.objects.filter(series=series)
episode_pk_map = {
(ep.series_id, ep.season_number, ep.episode_number): ep
for ep in db_episodes
}
# Update relations to point to the actual DB episodes with PKs
for relation in relations_to_create:
ep = relation.episode
key = (ep.series_id, ep.season_number, ep.episode_number)
if key in episode_pk_map:
relation.episode = episode_pk_map[key]
# Filter out relations with unsaved episodes (no PK)
# This can happen if bulk_create had a conflict and ignore_conflicts=True didn't save the episode
valid_relations_to_create = []
for relation in relations_to_create:
if relation.episode.pk is not None:
valid_relations_to_create.append(relation)
else:
season_num = relation.episode.season_number
episode_num = relation.episode.episode_number
logger.warning(
f"Skipping relation for episode S{season_num}E{episode_num} "
f"- episode not saved to database"
)
relations_to_create = valid_relations_to_create
# Update existing episodes
if episodes_to_update:
@ -1400,9 +1501,9 @@ def batch_process_episodes(account, series, episodes_data, scan_start_time=None)
'tmdb_id', 'imdb_id', 'custom_properties'
])
# Create new episode relations
# Create new episode relations - use ignore_conflicts for stream_id duplicates
if relations_to_create:
M3UEpisodeRelation.objects.bulk_create(relations_to_create)
M3UEpisodeRelation.objects.bulk_create(relations_to_create, ignore_conflicts=True)
# Update existing episode relations
if relations_to_update:

View file

@ -15,8 +15,9 @@ from .models import (
UserAgent,
StreamProfile,
CoreSettings,
STREAM_HASH_KEY,
NETWORK_ACCESS,
STREAM_SETTINGS_KEY,
DVR_SETTINGS_KEY,
NETWORK_ACCESS_KEY,
PROXY_SETTINGS_KEY,
)
from .serializers import (
@ -68,16 +69,28 @@ class CoreSettingsViewSet(viewsets.ModelViewSet):
def update(self, request, *args, **kwargs):
instance = self.get_object()
old_value = instance.value
response = super().update(request, *args, **kwargs)
if instance.key == STREAM_HASH_KEY:
if instance.value != request.data["value"]:
rehash_streams.delay(request.data["value"].split(","))
# If DVR pre/post offsets changed, reschedule upcoming recordings
try:
from core.models import DVR_PRE_OFFSET_MINUTES_KEY, DVR_POST_OFFSET_MINUTES_KEY
if instance.key in (DVR_PRE_OFFSET_MINUTES_KEY, DVR_POST_OFFSET_MINUTES_KEY):
if instance.value != request.data.get("value"):
# If stream settings changed and m3u_hash_key is different, rehash streams
if instance.key == STREAM_SETTINGS_KEY:
new_value = request.data.get("value", {})
if isinstance(new_value, dict) and isinstance(old_value, dict):
old_hash = old_value.get("m3u_hash_key", "")
new_hash = new_value.get("m3u_hash_key", "")
if old_hash != new_hash:
hash_keys = new_hash.split(",") if isinstance(new_hash, str) else new_hash
rehash_streams.delay(hash_keys)
# If DVR settings changed and pre/post offsets are different, reschedule upcoming recordings
if instance.key == DVR_SETTINGS_KEY:
new_value = request.data.get("value", {})
if isinstance(new_value, dict) and isinstance(old_value, dict):
old_pre = old_value.get("pre_offset_minutes")
new_pre = new_value.get("pre_offset_minutes")
old_post = old_value.get("post_offset_minutes")
new_post = new_value.get("post_offset_minutes")
if old_pre != new_pre or old_post != new_post:
try:
# Prefer async task if Celery is available
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change
@ -86,24 +99,23 @@ class CoreSettingsViewSet(viewsets.ModelViewSet):
# Fallback to synchronous implementation
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change_impl
reschedule_upcoming_recordings_for_offset_change_impl()
except Exception:
pass
return response
def create(self, request, *args, **kwargs):
response = super().create(request, *args, **kwargs)
# If creating DVR pre/post offset settings, also reschedule upcoming recordings
# If creating DVR settings with offset values, reschedule upcoming recordings
try:
key = request.data.get("key")
from core.models import DVR_PRE_OFFSET_MINUTES_KEY, DVR_POST_OFFSET_MINUTES_KEY
if key in (DVR_PRE_OFFSET_MINUTES_KEY, DVR_POST_OFFSET_MINUTES_KEY):
try:
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change
reschedule_upcoming_recordings_for_offset_change.delay()
except Exception:
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change_impl
reschedule_upcoming_recordings_for_offset_change_impl()
if key == DVR_SETTINGS_KEY:
value = request.data.get("value", {})
if isinstance(value, dict) and ("pre_offset_minutes" in value or "post_offset_minutes" in value):
try:
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change
reschedule_upcoming_recordings_for_offset_change.delay()
except Exception:
from apps.channels.tasks import reschedule_upcoming_recordings_for_offset_change_impl
reschedule_upcoming_recordings_for_offset_change_impl()
except Exception:
pass
return response
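For reference, a hedged illustration of the grouped JSON values these handlers now inspect (the field names appear in the code above; the concrete values are made up):
stream_settings_value = {
    "m3u_hash_key": "name,url,tvg_id",  # comma-separated hash keys; a change triggers rehash_streams
}
dvr_settings_value = {
    "pre_offset_minutes": 2,   # changing either offset reschedules upcoming recordings
    "post_offset_minutes": 5,
}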
@ -111,13 +123,13 @@ class CoreSettingsViewSet(viewsets.ModelViewSet):
def check(self, request, *args, **kwargs):
data = request.data
if data.get("key") == NETWORK_ACCESS:
if data.get("key") == NETWORK_ACCESS_KEY:
client_ip = ipaddress.ip_address(get_client_ip(request))
in_network = {}
invalid = []
value = json.loads(data.get("value", "{}"))
value = data.get("value", {})
for key, val in value.items():
in_network[key] = []
cidrs = val.split(",")
@ -143,7 +155,11 @@ class CoreSettingsViewSet(viewsets.ModelViewSet):
status=status.HTTP_200_OK,
)
return Response(in_network, status=status.HTTP_200_OK)
response_data = {
**in_network,
"client_ip": str(client_ip)
}
return Response(response_data, status=status.HTTP_200_OK)
return Response({}, status=status.HTTP_200_OK)
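A hedged sketch of the check payload implied above (the literal string behind NETWORK_ACCESS_KEY is not shown in this diff; the value maps an area name to comma-separated CIDRs):
request_body = {
    "key": "<NETWORK_ACCESS_KEY>",                 # placeholder for the real settings key
    "value": {"ui": "192.168.0.0/16,10.0.0.0/8"},
}
# Response: per-area membership results plus "client_ip", e.g. {"ui": [...], "client_ip": "192.168.1.10"}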
@ -157,8 +173,8 @@ class ProxySettingsViewSet(viewsets.ViewSet):
"""Get or create the proxy settings CoreSettings entry"""
try:
settings_obj = CoreSettings.objects.get(key=PROXY_SETTINGS_KEY)
settings_data = json.loads(settings_obj.value)
except (CoreSettings.DoesNotExist, json.JSONDecodeError):
settings_data = settings_obj.value
except CoreSettings.DoesNotExist:
# Create default settings
settings_data = {
"buffering_timeout": 15,
@ -171,7 +187,7 @@ class ProxySettingsViewSet(viewsets.ViewSet):
key=PROXY_SETTINGS_KEY,
defaults={
"name": "Proxy Settings",
"value": json.dumps(settings_data)
"value": settings_data
}
)
return settings_obj, settings_data
@ -193,8 +209,8 @@ class ProxySettingsViewSet(viewsets.ViewSet):
serializer = ProxySettingsSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
# Update the JSON data
settings_obj.value = json.dumps(serializer.validated_data)
# Update the JSON data - store as dict directly
settings_obj.value = serializer.validated_data
settings_obj.save()
return Response(serializer.validated_data)
@ -209,8 +225,8 @@ class ProxySettingsViewSet(viewsets.ViewSet):
serializer = ProxySettingsSerializer(data=updated_data)
serializer.is_valid(raise_exception=True)
# Update the JSON data
settings_obj.value = json.dumps(serializer.validated_data)
# Update the JSON data - store as dict directly
settings_obj.value = serializer.validated_data
settings_obj.save()
return Response(serializer.validated_data)
@ -328,8 +344,8 @@ def rehash_streams_endpoint(request):
"""Trigger the rehash streams task"""
try:
# Get the current hash keys from settings
hash_key_setting = CoreSettings.objects.get(key=STREAM_HASH_KEY)
hash_keys = hash_key_setting.value.split(",")
hash_key = CoreSettings.get_m3u_hash_key()
hash_keys = hash_key.split(",") if isinstance(hash_key, str) else hash_key
# Queue the rehash task
task = rehash_streams.delay(hash_keys)
@ -340,10 +356,10 @@ def rehash_streams_endpoint(request):
"task_id": task.id
}, status=status.HTTP_200_OK)
except CoreSettings.DoesNotExist:
except Exception as e:
return Response({
"success": False,
"message": "Hash key settings not found"
"message": f"Error triggering rehash: {str(e)}"
}, status=status.HTTP_400_BAD_REQUEST)
except Exception as e:

View file

@ -23,7 +23,7 @@
"model": "core.streamprofile",
"pk": 1,
"fields": {
"name": "ffmpeg",
"name": "FFmpeg",
"command": "ffmpeg",
"parameters": "-i {streamUrl} -c:v copy -c:a copy -f mpegts pipe:1",
"is_active": true,
@ -34,11 +34,22 @@
"model": "core.streamprofile",
"pk": 2,
"fields": {
"name": "streamlink",
"name": "Streamlink",
"command": "streamlink",
"parameters": "{streamUrl} best --stdout",
"is_active": true,
"user_agent": "1"
}
},
{
"model": "core.streamprofile",
"pk": 3,
"fields": {
"name": "VLC",
"command": "cvlc",
"parameters": "-vv -I dummy --no-video-title-show --http-user-agent {userAgent} {streamUrl} --sout #standard{access=file,mux=ts,dst=-}",
"is_active": true,
"user_agent": "1"
}
}
]

View file

@ -1,13 +1,13 @@
# your_app/management/commands/update_column.py
from django.core.management.base import BaseCommand
from core.models import CoreSettings, NETWORK_ACCESS
from core.models import CoreSettings, NETWORK_ACCESS_KEY
class Command(BaseCommand):
help = "Reset network access settings"
def handle(self, *args, **options):
setting = CoreSettings.objects.get(key=NETWORK_ACCESS)
setting.value = "{}"
setting = CoreSettings.objects.get(key=NETWORK_ACCESS_KEY)
setting.value = {}
setting.save()

View file

@ -0,0 +1,42 @@
# Generated migration to add VLC stream profile
from django.db import migrations
def add_vlc_profile(apps, schema_editor):
StreamProfile = apps.get_model("core", "StreamProfile")
UserAgent = apps.get_model("core", "UserAgent")
# Check if VLC profile already exists
if not StreamProfile.objects.filter(name="VLC").exists():
# Get the TiviMate user agent (should be pk=1)
try:
tivimate_ua = UserAgent.objects.get(pk=1)
except UserAgent.DoesNotExist:
# Fallback: get first available user agent
tivimate_ua = UserAgent.objects.first()
if not tivimate_ua:
# No user agents exist, skip creating profile
return
StreamProfile.objects.create(
name="VLC",
command="cvlc",
parameters="-vv -I dummy --no-video-title-show --http-user-agent {userAgent} {streamUrl} --sout #standard{access=file,mux=ts,dst=-}",
is_active=True,
user_agent=tivimate_ua,
locked=True, # Make it read-only like ffmpeg/streamlink
)
def remove_vlc_profile(apps, schema_editor):
StreamProfile = apps.get_model("core", "StreamProfile")
StreamProfile.objects.filter(name="VLC").delete()
class Migration(migrations.Migration):
dependencies = [
('core', '0018_alter_systemevent_event_type'),
]
operations = [
migrations.RunPython(add_vlc_profile, remove_vlc_profile),
]

View file

@ -0,0 +1,267 @@
# Generated migration to change CoreSettings value field to JSONField and consolidate settings
import json
from django.db import migrations, models
def convert_string_to_json(apps, schema_editor):
"""Convert existing string values to appropriate JSON types before changing column type"""
CoreSettings = apps.get_model("core", "CoreSettings")
for setting in CoreSettings.objects.all():
value = setting.value
if not value:
# Empty strings stay empty strings in JSON
setting.value = json.dumps("")
setting.save(update_fields=['value'])
continue
# Try to parse as JSON if it looks like JSON (objects/arrays)
if value.startswith('{') or value.startswith('['):
try:
parsed = json.loads(value)
# Store as JSON string temporarily (column is still CharField)
setting.value = json.dumps(parsed)
setting.save(update_fields=['value'])
continue
except (json.JSONDecodeError, ValueError):
pass
# Try to parse as number
try:
# Check if it's an integer
if '.' not in value and value.lstrip('-').isdigit():
setting.value = json.dumps(int(value))
setting.save(update_fields=['value'])
continue
# Check if it's a float
float_val = float(value)
setting.value = json.dumps(float_val)
setting.save(update_fields=['value'])
continue
except (ValueError, AttributeError):
pass
# Check for booleans
if value.lower() in ('true', 'false', '1', '0', 'yes', 'no', 'on', 'off'):
bool_val = value.lower() in ('true', '1', 'yes', 'on')
setting.value = json.dumps(bool_val)
setting.save(update_fields=['value'])
continue
# Default: store as JSON string
setting.value = json.dumps(value)
setting.save(update_fields=['value'])
def consolidate_settings(apps, schema_editor):
"""Consolidate individual setting rows into grouped JSON objects."""
CoreSettings = apps.get_model("core", "CoreSettings")
# Helper to get setting value
def get_value(key, default=None):
try:
obj = CoreSettings.objects.get(key=key)
return obj.value if obj.value is not None else default
except CoreSettings.DoesNotExist:
return default
# STREAM SETTINGS
stream_settings = {
"default_user_agent": get_value("default-user-agent"),
"default_stream_profile": get_value("default-stream-profile"),
"m3u_hash_key": get_value("m3u-hash-key", ""),
"preferred_region": get_value("preferred-region"),
"auto_import_mapped_files": get_value("auto-import-mapped-files"),
}
CoreSettings.objects.update_or_create(
key="stream_settings",
defaults={"name": "Stream Settings", "value": stream_settings}
)
# DVR SETTINGS
dvr_settings = {
"tv_template": get_value("dvr-tv-template", "TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv"),
"movie_template": get_value("dvr-movie-template", "Movies/{title} ({year}).mkv"),
"tv_fallback_dir": get_value("dvr-tv-fallback-dir", "TV_Shows"),
"tv_fallback_template": get_value("dvr-tv-fallback-template", "TV_Shows/{show}/{start}.mkv"),
"movie_fallback_template": get_value("dvr-movie-fallback-template", "Movies/{start}.mkv"),
"comskip_enabled": bool(get_value("dvr-comskip-enabled", False)),
"comskip_custom_path": get_value("dvr-comskip-custom-path", ""),
"pre_offset_minutes": int(get_value("dvr-pre-offset-minutes", 0) or 0),
"post_offset_minutes": int(get_value("dvr-post-offset-minutes", 0) or 0),
"series_rules": get_value("dvr-series-rules", []),
}
CoreSettings.objects.update_or_create(
key="dvr_settings",
defaults={"name": "DVR Settings", "value": dvr_settings}
)
# BACKUP SETTINGS - using underscore keys (not dashes)
backup_settings = {
"schedule_enabled": get_value("backup_schedule_enabled") if get_value("backup_schedule_enabled") is not None else True,
"schedule_frequency": get_value("backup_schedule_frequency") or "daily",
"schedule_time": get_value("backup_schedule_time") or "03:00",
"schedule_day_of_week": get_value("backup_schedule_day_of_week") if get_value("backup_schedule_day_of_week") is not None else 0,
"retention_count": get_value("backup_retention_count") if get_value("backup_retention_count") is not None else 3,
"schedule_cron_expression": get_value("backup_schedule_cron_expression") or "",
}
CoreSettings.objects.update_or_create(
key="backup_settings",
defaults={"name": "Backup Settings", "value": backup_settings}
)
# SYSTEM SETTINGS
system_settings = {
"time_zone": get_value("system-time-zone", "UTC"),
"max_system_events": int(get_value("max-system-events", 100) or 100),
}
CoreSettings.objects.update_or_create(
key="system_settings",
defaults={"name": "System Settings", "value": system_settings}
)
# Rename proxy-settings to proxy_settings (if it exists with old name)
try:
old_proxy = CoreSettings.objects.get(key="proxy-settings")
old_proxy.key = "proxy_settings"
old_proxy.save()
except CoreSettings.DoesNotExist:
pass
# Ensure proxy_settings exists with defaults if not present
proxy_obj, proxy_created = CoreSettings.objects.get_or_create(
key="proxy_settings",
defaults={
"name": "Proxy Settings",
"value": {
"buffering_timeout": 15,
"buffering_speed": 1.0,
"redis_chunk_ttl": 60,
"channel_shutdown_delay": 0,
"channel_init_grace_period": 5,
}
}
)
# Rename network-access to network_access (if it exists with old name)
try:
old_network = CoreSettings.objects.get(key="network-access")
old_network.key = "network_access"
old_network.save()
except CoreSettings.DoesNotExist:
pass
# Ensure network_access exists with defaults if not present
network_obj, network_created = CoreSettings.objects.get_or_create(
key="network_access",
defaults={
"name": "Network Access",
"value": {}
}
)
# Delete old individual setting rows (keep only the new grouped settings)
grouped_keys = ["stream_settings", "dvr_settings", "backup_settings", "system_settings", "proxy_settings", "network_access"]
CoreSettings.objects.exclude(key__in=grouped_keys).delete()
def reverse_migration(apps, schema_editor):
"""Reverse migration: split grouped settings and convert JSON back to strings"""
CoreSettings = apps.get_model("core", "CoreSettings")
# Helper to create individual setting
def create_setting(key, name, value):
# Convert value back to string representation for CharField
if isinstance(value, str):
str_value = value
elif isinstance(value, bool):
str_value = "true" if value else "false"
elif isinstance(value, (int, float)):
str_value = str(value)
elif isinstance(value, (dict, list)):
str_value = json.dumps(value)
elif value is None:
str_value = ""
else:
str_value = str(value)
CoreSettings.objects.update_or_create(
key=key,
defaults={"name": name, "value": str_value}
)
# Split stream_settings
try:
stream = CoreSettings.objects.get(key="stream_settings")
if isinstance(stream.value, dict):
create_setting("default_user_agent", "Default User Agent", stream.value.get("default_user_agent"))
create_setting("default_stream_profile", "Default Stream Profile", stream.value.get("default_stream_profile"))
create_setting("stream_hash_key", "Stream Hash Key", stream.value.get("m3u_hash_key", ""))
create_setting("preferred_region", "Preferred Region", stream.value.get("preferred_region"))
create_setting("auto_import_mapped_files", "Auto Import Mapped Files", stream.value.get("auto_import_mapped_files"))
stream.delete()
except CoreSettings.DoesNotExist:
pass
# Split dvr_settings
try:
dvr = CoreSettings.objects.get(key="dvr_settings")
if isinstance(dvr.value, dict):
create_setting("dvr_tv_template", "DVR TV Template", dvr.value.get("tv_template", "TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv"))
create_setting("dvr_movie_template", "DVR Movie Template", dvr.value.get("movie_template", "Movies/{title} ({year}).mkv"))
create_setting("dvr_tv_fallback_dir", "DVR TV Fallback Dir", dvr.value.get("tv_fallback_dir", "TV_Shows"))
create_setting("dvr_tv_fallback_template", "DVR TV Fallback Template", dvr.value.get("tv_fallback_template", "TV_Shows/{show}/{start}.mkv"))
create_setting("dvr_movie_fallback_template", "DVR Movie Fallback Template", dvr.value.get("movie_fallback_template", "Movies/{start}.mkv"))
create_setting("dvr_comskip_enabled", "DVR Comskip Enabled", dvr.value.get("comskip_enabled", False))
create_setting("dvr_comskip_custom_path", "DVR Comskip Custom Path", dvr.value.get("comskip_custom_path", ""))
create_setting("dvr_pre_offset_minutes", "DVR Pre Offset Minutes", dvr.value.get("pre_offset_minutes", 0))
create_setting("dvr_post_offset_minutes", "DVR Post Offset Minutes", dvr.value.get("post_offset_minutes", 0))
create_setting("dvr_series_rules", "DVR Series Rules", dvr.value.get("series_rules", []))
dvr.delete()
except CoreSettings.DoesNotExist:
pass
# Split backup_settings
try:
backup = CoreSettings.objects.get(key="backup_settings")
if isinstance(backup.value, dict):
create_setting("backup_schedule_enabled", "Backup Schedule Enabled", backup.value.get("schedule_enabled", False))
create_setting("backup_schedule_frequency", "Backup Schedule Frequency", backup.value.get("schedule_frequency", "weekly"))
create_setting("backup_schedule_time", "Backup Schedule Time", backup.value.get("schedule_time", "02:00"))
create_setting("backup_schedule_day_of_week", "Backup Schedule Day of Week", backup.value.get("schedule_day_of_week", 0))
create_setting("backup_retention_count", "Backup Retention Count", backup.value.get("retention_count", 7))
create_setting("backup_schedule_cron_expression", "Backup Schedule Cron Expression", backup.value.get("schedule_cron_expression", ""))
backup.delete()
except CoreSettings.DoesNotExist:
pass
# Split system_settings
try:
system = CoreSettings.objects.get(key="system_settings")
if isinstance(system.value, dict):
create_setting("system_time_zone", "System Time Zone", system.value.get("time_zone", "UTC"))
create_setting("max_system_events", "Max System Events", system.value.get("max_system_events", 100))
system.delete()
except CoreSettings.DoesNotExist:
pass
class Migration(migrations.Migration):
dependencies = [
('core', '0019_add_vlc_stream_profile'),
]
operations = [
# First, convert all data to valid JSON strings while column is still CharField
migrations.RunPython(convert_string_to_json, migrations.RunPython.noop),
# Then change the field type to JSONField
migrations.AlterField(
model_name='coresettings',
name='value',
field=models.JSONField(blank=True, default=dict),
),
# Finally, consolidate individual settings into grouped JSON objects
migrations.RunPython(consolidate_settings, reverse_migration),
]
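
For reference, the coercion rules applied by convert_string_to_json above can be restated as a small standalone sketch. The function name and sample values here are illustrative only; the actual migration json.dumps()-es each result because the column is still a CharField at that point in the operation order.

import json

def coerce_legacy_value(value: str):
    """Illustrative restatement of the migration's string-to-JSON coercion rules."""
    if not value:
        return ""                                   # empty values stay empty strings
    if value.startswith('{') or value.startswith('['):
        try:
            return json.loads(value)                # embedded JSON objects/arrays are parsed
        except (json.JSONDecodeError, ValueError):
            pass
    if '.' not in value and value.lstrip('-').isdigit():
        return int(value)                           # integer-looking strings become ints
    try:
        return float(value)                         # then floats
    except ValueError:
        pass
    if value.lower() in ('true', 'false', '1', '0', 'yes', 'no', 'on', 'off'):
        return value.lower() in ('true', '1', 'yes', 'on')
    return value                                    # anything else stays a plain string

assert coerce_legacy_value("15") == 15
assert coerce_legacy_value("1.5") == 1.5
assert coerce_legacy_value("true") is True
assert coerce_legacy_value('{"a": 1}') == {"a": 1}
assert coerce_legacy_value("03:00") == "03:00"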

View file

@ -1,4 +1,7 @@
# core/models.py
from shlex import split as shlex_split
from django.conf import settings
from django.db import models
from django.utils.text import slugify
@ -133,7 +136,7 @@ class StreamProfile(models.Model):
# Split the command and iterate through each part to apply replacements
cmd = [self.command] + [
self._replace_in_part(part, replacements)
for part in self.parameters.split()
for part in shlex_split(self.parameters) # use shlex to handle quoted strings
]
return cmd
@ -145,24 +148,13 @@ class StreamProfile(models.Model):
return part
DEFAULT_USER_AGENT_KEY = slugify("Default User-Agent")
DEFAULT_STREAM_PROFILE_KEY = slugify("Default Stream Profile")
STREAM_HASH_KEY = slugify("M3U Hash Key")
PREFERRED_REGION_KEY = slugify("Preferred Region")
AUTO_IMPORT_MAPPED_FILES = slugify("Auto-Import Mapped Files")
NETWORK_ACCESS = slugify("Network Access")
PROXY_SETTINGS_KEY = slugify("Proxy Settings")
DVR_TV_TEMPLATE_KEY = slugify("DVR TV Template")
DVR_MOVIE_TEMPLATE_KEY = slugify("DVR Movie Template")
DVR_SERIES_RULES_KEY = slugify("DVR Series Rules")
DVR_TV_FALLBACK_DIR_KEY = slugify("DVR TV Fallback Dir")
DVR_TV_FALLBACK_TEMPLATE_KEY = slugify("DVR TV Fallback Template")
DVR_MOVIE_FALLBACK_TEMPLATE_KEY = slugify("DVR Movie Fallback Template")
DVR_COMSKIP_ENABLED_KEY = slugify("DVR Comskip Enabled")
DVR_COMSKIP_CUSTOM_PATH_KEY = slugify("DVR Comskip Custom Path")
DVR_PRE_OFFSET_MINUTES_KEY = slugify("DVR Pre-Offset Minutes")
DVR_POST_OFFSET_MINUTES_KEY = slugify("DVR Post-Offset Minutes")
SYSTEM_TIME_ZONE_KEY = slugify("System Time Zone")
# Setting group keys
STREAM_SETTINGS_KEY = "stream_settings"
DVR_SETTINGS_KEY = "dvr_settings"
BACKUP_SETTINGS_KEY = "backup_settings"
PROXY_SETTINGS_KEY = "proxy_settings"
NETWORK_ACCESS_KEY = "network_access"
SYSTEM_SETTINGS_KEY = "system_settings"
class CoreSettings(models.Model):
@ -173,208 +165,166 @@ class CoreSettings(models.Model):
name = models.CharField(
max_length=255,
)
value = models.CharField(
max_length=255,
value = models.JSONField(
default=dict,
blank=True,
)
def __str__(self):
return "Core Settings"
# Helper methods to get/set grouped settings
@classmethod
def _get_group(cls, key, defaults=None):
"""Get a settings group, returning defaults if not found."""
try:
return cls.objects.get(key=key).value or (defaults or {})
except cls.DoesNotExist:
return defaults or {}
@classmethod
def _update_group(cls, key, name, updates):
"""Update specific fields in a settings group."""
obj, created = cls.objects.get_or_create(
key=key,
defaults={"name": name, "value": {}}
)
current = obj.value if isinstance(obj.value, dict) else {}
current.update(updates)
obj.value = current
obj.save()
return current
# Stream Settings
@classmethod
def get_stream_settings(cls):
"""Get all stream-related settings."""
return cls._get_group(STREAM_SETTINGS_KEY, {
"default_user_agent": None,
"default_stream_profile": None,
"m3u_hash_key": "",
"preferred_region": None,
"auto_import_mapped_files": None,
})
@classmethod
def get_default_user_agent_id(cls):
"""Retrieve a system profile by name (or return None if not found)."""
return cls.objects.get(key=DEFAULT_USER_AGENT_KEY).value
return cls.get_stream_settings().get("default_user_agent")
@classmethod
def get_default_stream_profile_id(cls):
return cls.objects.get(key=DEFAULT_STREAM_PROFILE_KEY).value
return cls.get_stream_settings().get("default_stream_profile")
@classmethod
def get_m3u_hash_key(cls):
return cls.objects.get(key=STREAM_HASH_KEY).value
return cls.get_stream_settings().get("m3u_hash_key", "")
@classmethod
def get_preferred_region(cls):
"""Retrieve the preferred region setting (or return None if not found)."""
try:
return cls.objects.get(key=PREFERRED_REGION_KEY).value
except cls.DoesNotExist:
return None
return cls.get_stream_settings().get("preferred_region")
@classmethod
def get_auto_import_mapped_files(cls):
"""Retrieve the preferred region setting (or return None if not found)."""
try:
return cls.objects.get(key=AUTO_IMPORT_MAPPED_FILES).value
except cls.DoesNotExist:
return None
return cls.get_stream_settings().get("auto_import_mapped_files")
# DVR Settings
@classmethod
def get_proxy_settings(cls):
"""Retrieve proxy settings as dict (or return defaults if not found)."""
try:
import json
settings_json = cls.objects.get(key=PROXY_SETTINGS_KEY).value
return json.loads(settings_json)
except (cls.DoesNotExist, json.JSONDecodeError):
# Return defaults if not found or invalid JSON
return {
"buffering_timeout": 15,
"buffering_speed": 1.0,
"redis_chunk_ttl": 60,
"channel_shutdown_delay": 0,
"channel_init_grace_period": 5,
}
def get_dvr_settings(cls):
"""Get all DVR-related settings."""
return cls._get_group(DVR_SETTINGS_KEY, {
"tv_template": "TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv",
"movie_template": "Movies/{title} ({year}).mkv",
"tv_fallback_dir": "TV_Shows",
"tv_fallback_template": "TV_Shows/{show}/{start}.mkv",
"movie_fallback_template": "Movies/{start}.mkv",
"comskip_enabled": False,
"comskip_custom_path": "",
"pre_offset_minutes": 0,
"post_offset_minutes": 0,
"series_rules": [],
})
@classmethod
def get_dvr_tv_template(cls):
try:
return cls.objects.get(key=DVR_TV_TEMPLATE_KEY).value
except cls.DoesNotExist:
# Default: relative to recordings root (/data/recordings)
return "TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv"
return cls.get_dvr_settings().get("tv_template", "TV_Shows/{show}/S{season:02d}E{episode:02d}.mkv")
@classmethod
def get_dvr_movie_template(cls):
try:
return cls.objects.get(key=DVR_MOVIE_TEMPLATE_KEY).value
except cls.DoesNotExist:
return "Movies/{title} ({year}).mkv"
return cls.get_dvr_settings().get("movie_template", "Movies/{title} ({year}).mkv")
@classmethod
def get_dvr_tv_fallback_dir(cls):
"""Folder name to use when a TV episode has no season/episode information.
Defaults to 'TV_Shows' to match existing behavior but can be overridden in settings.
"""
try:
return cls.objects.get(key=DVR_TV_FALLBACK_DIR_KEY).value or "TV_Shows"
except cls.DoesNotExist:
return "TV_Shows"
return cls.get_dvr_settings().get("tv_fallback_dir", "TV_Shows")
@classmethod
def get_dvr_tv_fallback_template(cls):
"""Full path template used when season/episode are missing for a TV airing."""
try:
return cls.objects.get(key=DVR_TV_FALLBACK_TEMPLATE_KEY).value
except cls.DoesNotExist:
# default requested by user
return "TV_Shows/{show}/{start}.mkv"
return cls.get_dvr_settings().get("tv_fallback_template", "TV_Shows/{show}/{start}.mkv")
@classmethod
def get_dvr_movie_fallback_template(cls):
"""Full path template used when movie metadata is incomplete."""
try:
return cls.objects.get(key=DVR_MOVIE_FALLBACK_TEMPLATE_KEY).value
except cls.DoesNotExist:
return "Movies/{start}.mkv"
return cls.get_dvr_settings().get("movie_fallback_template", "Movies/{start}.mkv")
@classmethod
def get_dvr_comskip_enabled(cls):
"""Return boolean-like string value ('true'/'false') for comskip enablement."""
try:
val = cls.objects.get(key=DVR_COMSKIP_ENABLED_KEY).value
return str(val).lower() in ("1", "true", "yes", "on")
except cls.DoesNotExist:
return False
return bool(cls.get_dvr_settings().get("comskip_enabled", False))
@classmethod
def get_dvr_comskip_custom_path(cls):
"""Return configured comskip.ini path or empty string if unset."""
try:
return cls.objects.get(key=DVR_COMSKIP_CUSTOM_PATH_KEY).value
except cls.DoesNotExist:
return ""
return cls.get_dvr_settings().get("comskip_custom_path", "")
@classmethod
def set_dvr_comskip_custom_path(cls, path: str | None):
"""Persist the comskip.ini path setting, normalizing nulls to empty string."""
value = (path or "").strip()
obj, _ = cls.objects.get_or_create(
key=DVR_COMSKIP_CUSTOM_PATH_KEY,
defaults={"name": "DVR Comskip Custom Path", "value": value},
)
if obj.value != value:
obj.value = value
obj.save(update_fields=["value"])
cls._update_group(DVR_SETTINGS_KEY, "DVR Settings", {"comskip_custom_path": value})
return value
@classmethod
def get_dvr_pre_offset_minutes(cls):
"""Minutes to start recording before scheduled start (default 0)."""
try:
val = cls.objects.get(key=DVR_PRE_OFFSET_MINUTES_KEY).value
return int(val)
except cls.DoesNotExist:
return 0
except Exception:
try:
return int(float(val))
except Exception:
return 0
return int(cls.get_dvr_settings().get("pre_offset_minutes", 0) or 0)
@classmethod
def get_dvr_post_offset_minutes(cls):
"""Minutes to stop recording after scheduled end (default 0)."""
try:
val = cls.objects.get(key=DVR_POST_OFFSET_MINUTES_KEY).value
return int(val)
except cls.DoesNotExist:
return 0
except Exception:
try:
return int(float(val))
except Exception:
return 0
@classmethod
def get_system_time_zone(cls):
"""Return configured system time zone or fall back to Django settings."""
try:
value = cls.objects.get(key=SYSTEM_TIME_ZONE_KEY).value
if value:
return value
except cls.DoesNotExist:
pass
return getattr(settings, "TIME_ZONE", "UTC") or "UTC"
@classmethod
def set_system_time_zone(cls, tz_name: str | None):
"""Persist the desired system time zone identifier."""
value = (tz_name or "").strip() or getattr(settings, "TIME_ZONE", "UTC") or "UTC"
obj, _ = cls.objects.get_or_create(
key=SYSTEM_TIME_ZONE_KEY,
defaults={"name": "System Time Zone", "value": value},
)
if obj.value != value:
obj.value = value
obj.save(update_fields=["value"])
return value
return int(cls.get_dvr_settings().get("post_offset_minutes", 0) or 0)
@classmethod
def get_dvr_series_rules(cls):
"""Return list of series recording rules. Each: {tvg_id, title, mode: 'all'|'new'}"""
import json
try:
raw = cls.objects.get(key=DVR_SERIES_RULES_KEY).value
rules = json.loads(raw) if raw else []
if isinstance(rules, list):
return rules
return []
except cls.DoesNotExist:
# Initialize empty if missing
cls.objects.create(key=DVR_SERIES_RULES_KEY, name="DVR Series Rules", value="[]")
return []
return cls.get_dvr_settings().get("series_rules", [])
@classmethod
def set_dvr_series_rules(cls, rules):
import json
try:
obj, _ = cls.objects.get_or_create(key=DVR_SERIES_RULES_KEY, defaults={"name": "DVR Series Rules", "value": "[]"})
obj.value = json.dumps(rules)
obj.save(update_fields=["value"])
return rules
except Exception:
return rules
cls._update_group(DVR_SETTINGS_KEY, "DVR Settings", {"series_rules": rules})
return rules
# Proxy Settings
@classmethod
def get_proxy_settings(cls):
"""Get proxy settings."""
return cls._get_group(PROXY_SETTINGS_KEY, {
"buffering_timeout": 15,
"buffering_speed": 1.0,
"redis_chunk_ttl": 60,
"channel_shutdown_delay": 0,
"channel_init_grace_period": 5,
})
# System Settings
@classmethod
def get_system_settings(cls):
"""Get all system-related settings."""
return cls._get_group(SYSTEM_SETTINGS_KEY, {
"time_zone": getattr(settings, "TIME_ZONE", "UTC") or "UTC",
"max_system_events": 100,
})
@classmethod
def get_system_time_zone(cls):
return cls.get_system_settings().get("time_zone") or getattr(settings, "TIME_ZONE", "UTC") or "UTC"
@classmethod
def set_system_time_zone(cls, tz_name: str | None):
value = (tz_name or "").strip() or getattr(settings, "TIME_ZONE", "UTC") or "UTC"
cls._update_group(SYSTEM_SETTINGS_KEY, "System Settings", {"time_zone": value})
return value
class SystemEvent(models.Model):
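
With the grouped model above, callers read and write whole settings groups instead of individual rows. A minimal usage sketch, e.g. from a Django shell (the specific values are illustrative):

from core.models import CoreSettings, DVR_SETTINGS_KEY

# Read a whole group; a missing row falls back to the defaults shown above.
dvr = CoreSettings.get_dvr_settings()
print(dvr.get("tv_template"), dvr.get("pre_offset_minutes", 0))

# The per-field accessors wrap the same group lookup.
print(CoreSettings.get_dvr_pre_offset_minutes())

# Update only selected fields; the rest of the group is preserved.
CoreSettings._update_group(
    DVR_SETTINGS_KEY,
    "DVR Settings",
    {"pre_offset_minutes": 2, "post_offset_minutes": 5},
)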

View file

@ -3,7 +3,7 @@ import json
import ipaddress
from rest_framework import serializers
from .models import CoreSettings, UserAgent, StreamProfile, NETWORK_ACCESS
from .models import CoreSettings, UserAgent, StreamProfile, NETWORK_ACCESS_KEY
class UserAgentSerializer(serializers.ModelSerializer):
@ -40,10 +40,10 @@ class CoreSettingsSerializer(serializers.ModelSerializer):
fields = "__all__"
def update(self, instance, validated_data):
if instance.key == NETWORK_ACCESS:
if instance.key == NETWORK_ACCESS_KEY:
errors = False
invalid = {}
value = json.loads(validated_data.get("value"))
value = validated_data.get("value")
for key, val in value.items():
cidrs = val.split(",")
for cidr in cidrs:

View file

@ -513,7 +513,8 @@ def rehash_streams(keys):
for obj in batch:
# Generate new hash
new_hash = Stream.generate_hash_key(obj.name, obj.url, obj.tvg_id, keys, m3u_id=obj.m3u_account_id)
group_name = obj.channel_group.name if obj.channel_group else None
new_hash = Stream.generate_hash_key(obj.name, obj.url, obj.tvg_id, keys, m3u_id=obj.m3u_account_id, group=group_name)
# Check if this hash already exists in our tracking dict or in database
if new_hash in hash_keys:

View file

@ -417,8 +417,12 @@ def log_system_event(event_type, channel_id=None, channel_name=None, **details):
# Get max events from settings (default 100)
try:
max_events_setting = CoreSettings.objects.filter(key='max-system-events').first()
max_events = int(max_events_setting.value) if max_events_setting else 100
from .models import CoreSettings
system_settings = CoreSettings.objects.filter(key='system_settings').first()
if system_settings and isinstance(system_settings.value, dict):
max_events = int(system_settings.value.get('max_system_events', 100))
else:
max_events = 100
except Exception:
max_events = 100

View file

@ -1,5 +1,6 @@
# core/views.py
import os
from shlex import split as shlex_split
import sys
import subprocess
import logging
@ -37,7 +38,9 @@ def stream_view(request, channel_uuid):
"""
try:
redis_host = getattr(settings, "REDIS_HOST", "localhost")
redis_client = redis.Redis(host=settings.REDIS_HOST, port=6379, db=int(getattr(settings, "REDIS_DB", "0")))
redis_port = int(getattr(settings, "REDIS_PORT", 6379))
redis_db = int(getattr(settings, "REDIS_DB", "0"))
redis_client = redis.Redis(host=redis_host, port=redis_port, db=redis_db)
# Retrieve the channel by the provided stream_id.
channel = Channel.objects.get(uuid=channel_uuid)
@ -129,7 +132,7 @@ def stream_view(request, channel_uuid):
stream_profile = channel.stream_profile
if not stream_profile:
logger.error("No stream profile set for channel ID=%s, using default", channel.id)
stream_profile = StreamProfile.objects.get(id=CoreSettings.objects.get(key="default-stream-profile").value)
stream_profile = StreamProfile.objects.get(id=CoreSettings.get_default_stream_profile_id())
logger.debug("Stream profile used: %s", stream_profile.name)
@ -142,7 +145,7 @@ def stream_view(request, channel_uuid):
logger.debug("Formatted parameters: %s", parameters)
# Build the final command.
cmd = [stream_profile.command] + parameters.split()
cmd = [stream_profile.command] + shlex_split(parameters)
logger.debug("Executing command: %s", cmd)
try:
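
The switch from str.split() to shlex_split() matters when a profile's parameters contain quoted values, such as a user agent with spaces. A quick standard-library illustration (the parameter string is made up):

from shlex import split as shlex_split

parameters = '-vv -I dummy --http-user-agent "Mozilla/5.0 (X11; Linux)" {streamUrl}'

print(parameters.split())
# ['-vv', '-I', 'dummy', '--http-user-agent', '"Mozilla/5.0', '(X11;', 'Linux)"', '{streamUrl}']

print(shlex_split(parameters))
# ['-vv', '-I', 'dummy', '--http-user-agent', 'Mozilla/5.0 (X11; Linux)', '{streamUrl}']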

View file

@ -73,8 +73,12 @@ class PersistentLock:
# Example usage (for testing purposes only):
if __name__ == "__main__":
# Connect to Redis on localhost; adjust connection parameters as needed.
client = redis.Redis(host="localhost", port=6379, db=0)
import os
# Connect to Redis using environment variables; adjust connection parameters as needed.
redis_host = os.environ.get("REDIS_HOST", "localhost")
redis_port = int(os.environ.get("REDIS_PORT", 6379))
redis_db = int(os.environ.get("REDIS_DB", 0))
client = redis.Redis(host=redis_host, port=redis_port, db=redis_db)
lock = PersistentLock(client, "lock:example_account", lock_timeout=120)
if lock.acquire():

View file

@ -4,8 +4,9 @@ from datetime import timedelta
BASE_DIR = Path(__file__).resolve().parent.parent
SECRET_KEY = "REPLACE_ME_WITH_A_REAL_SECRET"
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY")
REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
REDIS_PORT = int(os.environ.get("REDIS_PORT", 6379))
REDIS_DB = os.environ.get("REDIS_DB", "0")
# Set DEBUG to True for development, False for production
@ -118,7 +119,7 @@ CHANNEL_LAYERS = {
"default": {
"BACKEND": "channels_redis.core.RedisChannelLayer",
"CONFIG": {
"hosts": [(REDIS_HOST, 6379, REDIS_DB)], # Ensure Redis is running
"hosts": [(REDIS_HOST, REDIS_PORT, REDIS_DB)], # Ensure Redis is running
},
},
}
@ -184,8 +185,10 @@ STATICFILES_DIRS = [
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
AUTH_USER_MODEL = "accounts.User"
CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379/0")
CELERY_RESULT_BACKEND = CELERY_BROKER_URL
# Build default Redis URL from components for Celery
_default_redis_url = f"redis://{REDIS_HOST}:{REDIS_PORT}/{REDIS_DB}"
CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", _default_redis_url)
CELERY_RESULT_BACKEND = os.environ.get("CELERY_RESULT_BACKEND", CELERY_BROKER_URL)
# Configure Redis key prefix
CELERY_RESULT_BACKEND_TRANSPORT_OPTIONS = {
@ -226,6 +229,13 @@ CELERY_BEAT_SCHEDULE = {
MEDIA_ROOT = BASE_DIR / "media"
MEDIA_URL = "/media/"
# Backup settings
BACKUP_ROOT = os.environ.get("BACKUP_ROOT", "/data/backups")
BACKUP_DATA_DIRS = [
os.environ.get("LOGOS_DIR", "/data/logos"),
os.environ.get("UPLOADS_DIR", "/data/uploads"),
os.environ.get("PLUGINS_DIR", "/data/plugins"),
]
SERVER_IP = "127.0.0.1"
@ -242,7 +252,7 @@ SIMPLE_JWT = {
}
# Redis connection settings
REDIS_URL = "redis://localhost:6379/0"
REDIS_URL = os.environ.get("REDIS_URL", f"redis://{REDIS_HOST}:{REDIS_PORT}/{REDIS_DB}")
REDIS_SOCKET_TIMEOUT = 60 # Socket timeout in seconds
REDIS_SOCKET_CONNECT_TIMEOUT = 5 # Connection timeout in seconds
REDIS_HEALTH_CHECK_INTERVAL = 15 # Health check every 15 seconds

View file

@ -3,7 +3,7 @@ import json
import ipaddress
from django.http import JsonResponse
from django.core.exceptions import ValidationError
from core.models import CoreSettings, NETWORK_ACCESS
from core.models import CoreSettings, NETWORK_ACCESS_KEY
def json_error_response(message, status=400):
@ -39,12 +39,15 @@ def get_client_ip(request):
def network_access_allowed(request, settings_key):
network_access = json.loads(CoreSettings.objects.get(key=NETWORK_ACCESS).value)
try:
network_access = CoreSettings.objects.get(key=NETWORK_ACCESS_KEY).value
except CoreSettings.DoesNotExist:
network_access = {}
cidrs = (
network_access[settings_key].split(",")
if settings_key in network_access
else ["0.0.0.0/0"]
else ["0.0.0.0/0", "::/0"]
)
network_allowed = False
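
The lookup above now falls back to allowing both IPv4 and IPv6 clients when no CIDRs are configured for a key. A standard-library sketch of the membership test it performs (sample address and CIDR list are illustrative):

import ipaddress

# Default when a key has no configured CIDRs: allow all IPv4 and IPv6 clients.
cidrs = ["0.0.0.0/0", "::/0"]
client_ip = ipaddress.ip_address("192.168.1.25")   # sample client address

# A v4 address is never "in" a v6 network (and vice versa); the check just
# evaluates to False, so mixing address families in the list is harmless.
network_allowed = any(client_ip in ipaddress.ip_network(cidr) for cidr in cidrs)
print(network_allowed)  # True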

View file

@ -4,26 +4,44 @@ ENV DEBIAN_FRONTEND=noninteractive
ENV VIRTUAL_ENV=/dispatcharrpy
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# --- Install Python 3.13 and system dependencies ---
# --- Install Python 3.13 and build dependencies ---
# Note: Hardware acceleration (VA-API, VDPAU, NVENC) already included in base ffmpeg image
RUN apt-get update && apt-get install --no-install-recommends -y \
ca-certificates software-properties-common gnupg2 curl wget \
&& add-apt-repository ppa:deadsnakes/ppa \
&& apt-get update \
&& apt-get install --no-install-recommends -y \
python3.13 python3.13-dev python3.13-venv \
python3.13 python3.13-dev python3.13-venv libpython3.13 \
python-is-python3 python3-pip \
libpcre3 libpcre3-dev libpq-dev procps \
build-essential gcc pciutils \
nginx streamlink comskip\
&& apt-get clean && rm -rf /var/lib/apt/lists/*
libpcre3 libpcre3-dev libpq-dev procps pciutils \
nginx streamlink comskip \
vlc-bin vlc-plugin-base \
build-essential gcc g++ gfortran libopenblas-dev libopenblas0 ninja-build
# --- Create Python virtual environment ---
RUN python3.13 -m venv $VIRTUAL_ENV && $VIRTUAL_ENV/bin/pip install --upgrade pip
# --- Install Python dependencies ---
COPY requirements.txt /tmp/requirements.txt
RUN $VIRTUAL_ENV/bin/pip install --no-cache-dir -r /tmp/requirements.txt && rm /tmp/requirements.txt
RUN $VIRTUAL_ENV/bin/pip install --no-cache-dir -r /tmp/requirements.txt && \
rm /tmp/requirements.txt
# --- Build legacy NumPy wheel for old hardware (store for runtime switching) ---
RUN $VIRTUAL_ENV/bin/pip install --no-cache-dir build && \
cd /tmp && \
$VIRTUAL_ENV/bin/pip download --no-binary numpy --no-deps numpy && \
tar -xzf numpy-*.tar.gz && \
cd numpy-*/ && \
$VIRTUAL_ENV/bin/python -m build --wheel -Csetup-args=-Dcpu-baseline="none" -Csetup-args=-Dcpu-dispatch="none" && \
mv dist/*.whl /opt/ && \
cd / && rm -rf /tmp/numpy-* /tmp/*.tar.gz && \
$VIRTUAL_ENV/bin/pip uninstall -y build
# --- Clean up build dependencies to reduce image size ---
RUN apt-get remove -y build-essential gcc g++ gfortran libopenblas-dev libpcre3-dev python3.13-dev ninja-build && \
apt-get autoremove -y --purge && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /root/.cache /tmp/*
# --- Set up Redis 7.x ---
RUN curl -fsSL https://packages.redis.io/gpg | gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg && \

View file

@ -35,9 +35,6 @@ RUN rm -rf /app/frontend
# Copy built frontend assets
COPY --from=frontend-builder /app/frontend/dist /app/frontend/dist
# Run Django collectstatic
RUN python manage.py collectstatic --noinput
# Add timestamp argument
ARG TIMESTAMP

View file

@ -1,11 +1,65 @@
#!/bin/bash
docker build --build-arg BRANCH=dev -t dispatcharr/dispatcharr:dev -f Dockerfile ..
#!/bin/bash
set -e
# Default values
VERSION=$(python3 -c "import sys; sys.path.append('..'); import version; print(version.__version__)")
REGISTRY="dispatcharr" # Registry or private repo to push to
IMAGE="dispatcharr" # Image that we're building
BRANCH="dev"
ARCH="" # Architectures to build for, e.g. linux/amd64,linux/arm64
PUSH=false
usage() {
cat <<- EOF
To test locally:
./build-dev.sh
To build and push to registry:
./build-dev.sh -p
To build and push to a private registry:
./build-dev.sh -p -r myregistry:5000
To build for both x86_64 and arm64:
./build-dev.sh -p -a linux/amd64,linux/arm64
Do it all:
./build-dev.sh -p -r myregistry:5000 -a linux/amd64,linux/arm64
EOF
exit 0
}
# Parse options
while getopts "pr:a:b:i:h" opt; do
case $opt in
r) REGISTRY="$OPTARG" ;;
a) ARCH="--platform $OPTARG" ;;
b) BRANCH="$OPTARG" ;;
i) IMAGE="$OPTARG" ;;
p) PUSH=true ;;
h) usage ;;
\?) echo "Invalid option: -$OPTARG" >&2; exit 1 ;;
esac
done
BUILD_ARGS="BRANCH=$BRANCH"
echo docker build --build-arg $BUILD_ARGS $ARCH -t $IMAGE
docker build -f Dockerfile --build-arg $BUILD_ARGS $ARCH -t $IMAGE ..
docker tag $IMAGE $IMAGE:$BRANCH
docker tag $IMAGE $IMAGE:$VERSION
if [ -z "$PUSH" ]; then
echo "Please run 'docker push -t $IMAGE:dev -t $IMAGE:${VERSION}' when ready"
else
for TAG in latest "$VERSION" "$BRANCH"; do
docker tag "$IMAGE" "$REGISTRY/$IMAGE:$TAG"
docker push -q "$REGISTRY/$IMAGE:$TAG"
done
echo "Images pushed successfully."
fi
# Get version information
VERSION=$(python -c "import sys; sys.path.append('..'); import version; print(version.__version__)")
# Build with version tag
docker build --build-arg BRANCH=dev \
-t dispatcharr/dispatcharr:dev \
-t dispatcharr/dispatcharr:${VERSION} \
-f Dockerfile ..

View file

@ -14,6 +14,10 @@ services:
- REDIS_HOST=localhost
- CELERY_BROKER_URL=redis://localhost:6379/0
- DISPATCHARR_LOG_LEVEL=info
# Legacy CPU Support (Optional)
# Uncomment to enable legacy NumPy build for older CPUs (circa 2009)
# that lack support for newer baseline CPU features
#- USE_LEGACY_NUMPY=true
# Process Priority Configuration (Optional)
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
# Negative values require cap_add: SYS_NICE (uncomment below)

View file

@ -18,6 +18,10 @@ services:
- REDIS_HOST=localhost
- CELERY_BROKER_URL=redis://localhost:6379/0
- DISPATCHARR_LOG_LEVEL=trace
# Legacy CPU Support (Optional)
# Uncomment to enable legacy NumPy build for older CPUs (circa 2009)
# that lack support for newer baseline CPU features
#- USE_LEGACY_NUMPY=true
# Process Priority Configuration (Optional)
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
# Negative values require cap_add: SYS_NICE (uncomment below)

View file

@ -17,6 +17,10 @@ services:
- REDIS_HOST=localhost
- CELERY_BROKER_URL=redis://localhost:6379/0
- DISPATCHARR_LOG_LEVEL=debug
# Legacy CPU Support (Optional)
# Uncomment to enable legacy NumPy build for older CPUs (circa 2009)
# that lack support for newer baseline CPU features
#- USE_LEGACY_NUMPY=true
# Process Priority Configuration (Optional)
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
# Negative values require cap_add: SYS_NICE (uncomment below)

View file

@ -17,6 +17,10 @@ services:
- REDIS_HOST=redis
- CELERY_BROKER_URL=redis://redis:6379/0
- DISPATCHARR_LOG_LEVEL=info
# Legacy CPU Support (Optional)
# Uncomment to enable legacy NumPy build for older CPUs (circa 2009)
# that lack support for newer baseline CPU features
#- USE_LEGACY_NUMPY=true
# Process Priority Configuration (Optional)
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
# Negative values require cap_add: SYS_NICE (uncomment below)

View file

@ -27,6 +27,18 @@ echo_with_timestamp() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1"
}
# --- NumPy version switching for legacy hardware ---
if [ "$USE_LEGACY_NUMPY" = "true" ]; then
# Check if NumPy was compiled with baseline support
if /dispatcharrpy/bin/python -c "import numpy; numpy.show_config()" 2>&1 | grep -qi "baseline"; then
echo_with_timestamp "🔧 Switching to legacy NumPy (no CPU baseline)..."
/dispatcharrpy/bin/pip install --no-cache-dir --force-reinstall --no-deps /opt/numpy-*.whl
echo_with_timestamp "✅ Legacy NumPy installed"
else
echo_with_timestamp "✅ Legacy NumPy (no baseline) already installed, skipping reinstallation"
fi
fi
# Set PostgreSQL environment variables
export POSTGRES_DB=${POSTGRES_DB:-dispatcharr}
export POSTGRES_USER=${POSTGRES_USER:-dispatch}
@ -40,6 +52,21 @@ export REDIS_DB=${REDIS_DB:-0}
export DISPATCHARR_PORT=${DISPATCHARR_PORT:-9191}
export LIBVA_DRIVERS_PATH='/usr/local/lib/x86_64-linux-gnu/dri'
export LD_LIBRARY_PATH='/usr/local/lib'
export SECRET_FILE="/data/jwt"
# Ensure Django secret key exists or generate a new one
if [ ! -f "$SECRET_FILE" ]; then
echo "Generating new Django secret key..."
old_umask=$(umask)
umask 077
tmpfile="$(mktemp "${SECRET_FILE}.XXXXXX")" || { echo "mktemp failed"; exit 1; }
python3 - <<'PY' >"$tmpfile" || { echo "secret generation failed"; rm -f "$tmpfile"; exit 1; }
import secrets
print(secrets.token_urlsafe(64))
PY
mv -f "$tmpfile" "$SECRET_FILE" || { echo "move failed"; rm -f "$tmpfile"; exit 1; }
umask $old_umask
fi
export DJANGO_SECRET_KEY="$(cat "$SECRET_FILE")"
# Process priority configuration
# UWSGI_NICE_LEVEL: Absolute nice value for uWSGI/streaming (default: 0 = normal priority)
@ -85,12 +112,12 @@ export POSTGRES_DIR=/data/db
if [[ ! -f /etc/profile.d/dispatcharr.sh ]]; then
# Define all variables to process
variables=(
PATH VIRTUAL_ENV DJANGO_SETTINGS_MODULE PYTHONUNBUFFERED
PATH VIRTUAL_ENV DJANGO_SETTINGS_MODULE PYTHONUNBUFFERED PYTHONDONTWRITEBYTECODE
POSTGRES_DB POSTGRES_USER POSTGRES_PASSWORD POSTGRES_HOST POSTGRES_PORT
DISPATCHARR_ENV DISPATCHARR_DEBUG DISPATCHARR_LOG_LEVEL
REDIS_HOST REDIS_DB POSTGRES_DIR DISPATCHARR_PORT
DISPATCHARR_VERSION DISPATCHARR_TIMESTAMP LIBVA_DRIVERS_PATH LIBVA_DRIVER_NAME LD_LIBRARY_PATH
CELERY_NICE_LEVEL UWSGI_NICE_LEVEL
CELERY_NICE_LEVEL UWSGI_NICE_LEVEL DJANGO_SECRET_KEY
)
# Process each variable for both profile.d and environment
@ -159,9 +186,9 @@ else
pids+=("$nginx_pid")
fi
cd /app
python manage.py migrate --noinput
python manage.py collectstatic --noinput
# Run Django commands as non-root user to prevent permission issues
su - $POSTGRES_USER -c "cd /app && python manage.py migrate --noinput"
su - $POSTGRES_USER -c "cd /app && python manage.py collectstatic --noinput"
# Select proper uwsgi config based on environment
if [ "$DISPATCHARR_ENV" = "dev" ] && [ "$DISPATCHARR_DEBUG" != "true" ]; then
@ -187,7 +214,7 @@ fi
# Users can override via UWSGI_NICE_LEVEL environment variable in docker-compose
# Start with nice as root, then use setpriv to drop privileges to dispatch user
# This preserves both the nice value and environment variables
nice -n $UWSGI_NICE_LEVEL su -p - "$POSTGRES_USER" -c "cd /app && exec uwsgi $uwsgi_args" & uwsgi_pid=$!
nice -n $UWSGI_NICE_LEVEL su - "$POSTGRES_USER" -c "cd /app && exec /dispatcharrpy/bin/uwsgi $uwsgi_args" & uwsgi_pid=$!
echo "✅ uwsgi started with PID $uwsgi_pid (nice $UWSGI_NICE_LEVEL)"
pids+=("$uwsgi_pid")

View file

@ -15,6 +15,7 @@ DATA_DIRS=(
APP_DIRS=(
"/app/logo_cache"
"/app/media"
"/app/static"
)
# Create all directories
@ -29,9 +30,21 @@ if [ "$(id -u)" = "0" ] && [ -d "/app" ]; then
chown $PUID:$PGID /app
fi
fi
# Configure nginx port
if ! [[ "$DISPATCHARR_PORT" =~ ^[0-9]+$ ]]; then
echo "⚠️ Warning: DISPATCHARR_PORT is not a valid integer, using default port 9191"
DISPATCHARR_PORT=9191
fi
sed -i "s/NGINX_PORT/${DISPATCHARR_PORT}/g" /etc/nginx/sites-enabled/default
# Configure nginx based on IPv6 availability
if ip -6 addr show | grep -q "inet6"; then
echo "✅ IPv6 is available, enabling IPv6 in nginx"
else
echo "⚠️ IPv6 not available, disabling IPv6 in nginx"
sed -i '/listen \[::\]:/d' /etc/nginx/sites-enabled/default
fi
# NOTE: mac doesn't run as root, so only manage permissions
# if this script is running as root
if [ "$(id -u)" = "0" ]; then

View file

@ -3,6 +3,7 @@ proxy_cache_path /app/logo_cache levels=1:2 keys_zone=logo_cache:10m
server {
listen NGINX_PORT;
listen [::]:NGINX_PORT;
proxy_connect_timeout 75;
proxy_send_timeout 300;
@ -34,6 +35,13 @@ server {
root /data;
}
# Internal location for X-Accel-Redirect backup downloads
# Django handles auth, nginx serves the file directly
location /protected-backups/ {
internal;
alias /data/backups/;
}
location /api/logos/(?<logo_id>\d+)/cache/ {
proxy_pass http://127.0.0.1:5656;
proxy_cache logo_cache;
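
The internal location above is the nginx half of an X-Accel-Redirect handoff: Django authenticates the request, then tells nginx which file under /data/backups/ to stream. A minimal sketch of the Django side (the view name and auth check are illustrative, not the project's actual backup view):

import os
from django.http import HttpResponse, HttpResponseForbidden

def download_backup(request, filename):
    # Django performs the auth/token check...
    if not request.user.is_authenticated:
        return HttpResponseForbidden()

    # ...then hands the file transfer off to nginx via the internal location.
    safe_name = os.path.basename(filename)
    response = HttpResponse(content_type="application/octet-stream")
    response["Content-Disposition"] = f'attachment; filename="{safe_name}"'
    response["X-Accel-Redirect"] = f"/protected-backups/{safe_name}"
    return response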

View file

@ -20,7 +20,6 @@ module = scripts.debug_wrapper:application
virtualenv = /dispatcharrpy
master = true
env = DJANGO_SETTINGS_MODULE=dispatcharr.settings
socket = /app/uwsgi.sock
chmod-socket = 777
vacuum = true

View file

@ -21,6 +21,7 @@ module = dispatcharr.wsgi:application
virtualenv = /dispatcharrpy
master = true
env = DJANGO_SETTINGS_MODULE=dispatcharr.settings
env = USE_NGINX_ACCEL=true
socket = /app/uwsgi.sock
chmod-socket = 777
vacuum = true
@ -36,6 +37,7 @@ http-keepalive = 1
buffer-size = 65536 # Increase buffer for large payloads
post-buffering = 4096 # Reduce buffering for real-time streaming
http-timeout = 600 # Prevent disconnects from long streams
socket-timeout = 600 # Prevent write timeouts when client buffers
lazy-apps = true # Improve memory efficiency
# Async mode (use gevent for high concurrency)
@ -57,4 +59,4 @@ logformat-strftime = true
log-date = %%Y-%%m-%%d %%H:%%M:%%S,000
# Use formatted time with environment variable for log level
log-format = %(ftime) $(DISPATCHARR_LOG_LEVEL) uwsgi.requests Worker ID: %(wid) %(method) %(status) %(uri) %(msecs)ms
log-buffering = 1024 # Add buffer size limit for logging
log-buffering = 1024 # Add buffer size limit for logging

View file

@ -36,7 +36,7 @@
"model": "core.streamprofile",
"pk": 1,
"fields": {
"profile_name": "ffmpeg",
"profile_name": "FFmpeg",
"command": "ffmpeg",
"parameters": "-i {streamUrl} -c:a copy -c:v copy -f mpegts pipe:1",
"is_active": true,
@ -46,13 +46,23 @@
{
"model": "core.streamprofile",
"fields": {
"profile_name": "streamlink",
"profile_name": "Streamlink",
"command": "streamlink",
"parameters": "{streamUrl} best --stdout",
"is_active": true,
"user_agent": "1"
}
},
{
"model": "core.streamprofile",
"fields": {
"profile_name": "VLC",
"command": "cvlc",
"parameters": "-vv -I dummy --no-video-title-show --http-user-agent {userAgent} {streamUrl} --sout #standard{access=file,mux=ts,dst=-}",
"is_active": true,
"user_agent": "1"
}
},
{
"model": "core.coresettings",
"fields": {

File diff suppressed because it is too large

View file

@ -23,11 +23,12 @@
"@mantine/form": "~8.0.1",
"@mantine/hooks": "~8.0.1",
"@mantine/notifications": "~8.0.1",
"@hookform/resolvers": "^5.2.2",
"@tanstack/react-table": "^8.21.2",
"allotment": "^1.20.4",
"dayjs": "^1.11.13",
"formik": "^2.4.6",
"hls.js": "^1.5.20",
"react-hook-form": "^7.70.0",
"lucide-react": "^0.511.0",
"mpegts.js": "^1.8.0",
"react": "^19.1.0",
@ -54,18 +55,21 @@
"@types/react": "^19.1.0",
"@types/react-dom": "^19.1.0",
"@vitejs/plugin-react-swc": "^4.1.0",
"eslint": "^9.21.0",
"eslint": "^9.27.0",
"eslint-plugin-react-hooks": "^5.1.0",
"eslint-plugin-react-refresh": "^0.4.19",
"globals": "^15.15.0",
"jsdom": "^27.0.0",
"prettier": "^3.5.3",
"vite": "^6.2.0",
"vite": "^7.1.7",
"vitest": "^3.2.4"
},
"resolutions": {
"vite": "7.1.7",
"react": "19.1.0",
"react-dom": "19.1.0"
},
"overrides": {
"js-yaml": "^4.1.1"
}
}

View file

@ -19,7 +19,6 @@ import Users from './pages/Users';
import LogosPage from './pages/Logos';
import VODsPage from './pages/VODs';
import useAuthStore from './store/auth';
import useLogosStore from './store/logos';
import FloatingVideo from './components/FloatingVideo';
import { WebsocketProvider } from './WebSocket';
import { Box, AppShell, MantineProvider } from '@mantine/core';
@ -40,8 +39,6 @@ const defaultRoute = '/channels';
const App = () => {
const [open, setOpen] = useState(true);
const [backgroundLoadingStarted, setBackgroundLoadingStarted] =
useState(false);
const isAuthenticated = useAuthStore((s) => s.isAuthenticated);
const setIsAuthenticated = useAuthStore((s) => s.setIsAuthenticated);
const logout = useAuthStore((s) => s.logout);
@ -81,11 +78,7 @@ const App = () => {
const loggedIn = await initializeAuth();
if (loggedIn) {
await initData();
// Start background logo loading after app is fully initialized (only once)
if (!backgroundLoadingStarted) {
setBackgroundLoadingStarted(true);
useLogosStore.getState().startBackgroundLoading();
}
// Logos are now loaded at the end of initData, no need for background loading
} else {
await logout();
}
@ -96,7 +89,7 @@ const App = () => {
};
checkAuth();
}, [initializeAuth, initData, logout, backgroundLoadingStarted]);
}, [initializeAuth, initData, logout]);
return (
<MantineProvider

View file

@ -574,7 +574,7 @@ export const WebsocketProvider = ({ children }) => {
const sourceId =
parsedEvent.data.source || parsedEvent.data.account;
const epg = epgs[sourceId];
// Only update progress if the EPG still exists in the store
// This prevents crashes when receiving updates for deleted EPGs
if (epg) {
@ -582,7 +582,9 @@ export const WebsocketProvider = ({ children }) => {
updateEPGProgress(parsedEvent.data);
} else {
// EPG was deleted, ignore this update
console.debug(`Ignoring EPG refresh update for deleted EPG ${sourceId}`);
console.debug(
`Ignoring EPG refresh update for deleted EPG ${sourceId}`
);
break;
}
@ -621,6 +623,10 @@ export const WebsocketProvider = ({ children }) => {
status: parsedEvent.data.status || 'success',
last_message:
parsedEvent.data.message || epg.last_message,
// Use the timestamp from the backend if provided
...(parsedEvent.data.updated_at && {
updated_at: parsedEvent.data.updated_at,
}),
});
// Only show success notification if we've finished parsing programs and had no errors
@ -750,6 +756,7 @@ export const WebsocketProvider = ({ children }) => {
try {
await API.requeryChannels();
await useChannelsStore.getState().fetchChannels();
await fetchChannelProfiles();
console.log('Channels refreshed after bulk creation');
} catch (error) {
console.error(

View file

@ -336,6 +336,15 @@ export default class API {
delete channelData.channel_number;
}
// Add channel profile IDs based on current selection
const selectedProfileId = useChannelsStore.getState().selectedProfileId;
if (selectedProfileId && selectedProfileId !== '0') {
// Specific profile selected - add only to that profile
channelData.channel_profile_ids = [parseInt(selectedProfileId)];
}
// If selectedProfileId is '0' or not set, don't include channel_profile_ids
// which will trigger the backend's default behavior of adding to all profiles
if (channel.logo_file) {
// Must send FormData for file upload
body = new FormData();
@ -1349,6 +1358,183 @@ export default class API {
}
}
// Backup API (async with Celery task polling)
static async listBackups() {
try {
const response = await request(`${host}/api/backups/`);
return response || [];
} catch (e) {
errorNotification('Failed to load backups', e);
throw e;
}
}
static async getBackupStatus(taskId, token = null) {
try {
let url = `${host}/api/backups/status/${taskId}/`;
if (token) {
url += `?token=${encodeURIComponent(token)}`;
}
const response = await request(url, { auth: !token });
return response;
} catch (e) {
throw e;
}
}
static async waitForBackupTask(taskId, onProgress, token = null) {
const pollInterval = 2000; // Poll every 2 seconds
const maxAttempts = 300; // Max 10 minutes (300 * 2s)
for (let attempt = 0; attempt < maxAttempts; attempt++) {
try {
const status = await API.getBackupStatus(taskId, token);
if (onProgress) {
onProgress(status);
}
if (status.state === 'completed') {
return status.result;
} else if (status.state === 'failed') {
throw new Error(status.error || 'Task failed');
}
} catch (e) {
throw e;
}
// Wait before next poll
await new Promise((resolve) => setTimeout(resolve, pollInterval));
}
throw new Error('Task timed out');
}
static async createBackup(onProgress) {
try {
// Start the backup task
const response = await request(`${host}/api/backups/create/`, {
method: 'POST',
});
// Wait for the task to complete using token for auth
const result = await API.waitForBackupTask(response.task_id, onProgress, response.task_token);
return result;
} catch (e) {
errorNotification('Failed to create backup', e);
throw e;
}
}
static async uploadBackup(file) {
try {
const formData = new FormData();
formData.append('file', file);
const response = await request(
`${host}/api/backups/upload/`,
{
method: 'POST',
body: formData,
}
);
return response;
} catch (e) {
errorNotification('Failed to upload backup', e);
throw e;
}
}
static async deleteBackup(filename) {
try {
const encodedFilename = encodeURIComponent(filename);
await request(`${host}/api/backups/${encodedFilename}/delete/`, {
method: 'DELETE',
});
} catch (e) {
errorNotification('Failed to delete backup', e);
throw e;
}
}
static async getDownloadToken(filename) {
// Get a download token from the server
try {
const response = await request(`${host}/api/backups/${encodeURIComponent(filename)}/download-token/`);
return response.token;
} catch (e) {
throw e;
}
}
static async downloadBackup(filename) {
try {
// Get a download token first (requires auth)
const token = await API.getDownloadToken(filename);
const encodedFilename = encodeURIComponent(filename);
// Build the download URL with token
const downloadUrl = `${host}/api/backups/${encodedFilename}/download/?token=${encodeURIComponent(token)}`;
// Use direct browser navigation instead of fetch to avoid CORS issues
const link = document.createElement('a');
link.href = downloadUrl;
link.download = filename;
document.body.appendChild(link);
link.click();
document.body.removeChild(link);
return { filename };
} catch (e) {
errorNotification('Failed to download backup', e);
throw e;
}
}
static async restoreBackup(filename, onProgress) {
try {
// Start the restore task
const encodedFilename = encodeURIComponent(filename);
const response = await request(
`${host}/api/backups/${encodedFilename}/restore/`,
{
method: 'POST',
}
);
// Wait for the task to complete using token for auth
// Token-based auth allows status polling even after DB restore invalidates user sessions
const result = await API.waitForBackupTask(response.task_id, onProgress, response.task_token);
return result;
} catch (e) {
errorNotification('Failed to restore backup', e);
throw e;
}
}
static async getBackupSchedule() {
try {
const response = await request(`${host}/api/backups/schedule/`);
return response;
} catch (e) {
errorNotification('Failed to get backup schedule', e);
throw e;
}
}
static async updateBackupSchedule(settings) {
try {
const response = await request(`${host}/api/backups/schedule/update/`, {
method: 'PUT',
body: settings,
});
return response;
} catch (e) {
errorNotification('Failed to update backup schedule', e);
throw e;
}
}
static async getVersion() {
try {
const response = await request(`${host}/api/core/version/`, {
@ -1514,6 +1700,19 @@ export default class API {
}
}
static async stopVODClient(clientId) {
try {
const response = await request(`${host}/proxy/vod/stop_client/`, {
method: 'POST',
body: { client_id: clientId },
});
return response;
} catch (e) {
errorNotification('Failed to stop VOD client', e);
}
}
static async stopChannel(id) {
try {
const response = await request(`${host}/proxy/ts/stop/${id}`, {
@ -1922,6 +2121,24 @@ export default class API {
}
}
static async duplicateChannelProfile(id, name) {
try {
const response = await request(
`${host}/api/channels/profiles/${id}/duplicate/`,
{
method: 'POST',
body: { name },
}
);
useChannelsStore.getState().addProfile(response);
return response;
} catch (e) {
errorNotification(`Failed to duplicate channel profile ${id}`, e);
}
}
static async deleteChannelProfile(id) {
try {
await request(`${host}/api/channels/profiles/${id}/`, {
@ -2131,7 +2348,8 @@ export default class API {
static async deleteSeriesRule(tvgId) {
try {
await request(`${host}/api/channels/series-rules/${tvgId}/`, { method: 'DELETE' });
const encodedTvgId = encodeURIComponent(tvgId);
await request(`${host}/api/channels/series-rules/${encodedTvgId}/`, { method: 'DELETE' });
notifications.show({ title: 'Series rule removed' });
} catch (e) {
errorNotification('Failed to remove series rule', e);

View file

@ -16,6 +16,7 @@ import useWarningsStore from '../store/warnings';
* @param {string} props.actionKey - Unique key for this type of action (used for suppression)
* @param {Function} props.onSuppressChange - Called when "don't show again" option changes
* @param {string} [props.size='md'] - Size of the modal
* @param {boolean} [props.loading=false] - Whether the confirm button should show loading state
*/
const ConfirmationDialog = ({
opened,
@ -31,6 +32,7 @@ const ConfirmationDialog = ({
zIndex = 1000,
showDeleteFileOption = false,
deleteFileLabel = 'Also delete files from disk',
loading = false,
}) => {
const suppressWarning = useWarningsStore((s) => s.suppressWarning);
const isWarningSuppressed = useWarningsStore((s) => s.isWarningSuppressed);
@ -93,10 +95,16 @@ const ConfirmationDialog = ({
)}
<Group justify="flex-end">
<Button variant="outline" onClick={handleClose}>
<Button variant="outline" onClick={handleClose} disabled={loading}>
{cancelLabel}
</Button>
<Button color="red" onClick={handleConfirm}>
<Button
color="red"
onClick={handleConfirm}
loading={loading}
disabled={loading}
loaderProps={{ type: 'dots' }}
>
{confirmLabel}
</Button>
</Group>

View file

@ -0,0 +1,18 @@
import React from 'react';
class ErrorBoundary extends React.Component {
state = { hasError: false };
static getDerivedStateFromError(error) {
return { hasError: true };
}
render() {
if (this.state.hasError) {
return <div>Something went wrong</div>;
}
return this.props.children;
}
}
export default ErrorBoundary;

View file

@ -0,0 +1,47 @@
import { NumberInput, Select, Switch, TextInput } from '@mantine/core';
import React from 'react';
export const Field = ({ field, value, onChange }) => {
const common = { label: field.label, description: field.help_text };
const effective = value ?? field.default;
switch (field.type) {
case 'boolean':
return (
<Switch
checked={!!effective}
onChange={(e) => onChange(field.id, e.currentTarget.checked)}
label={field.label}
description={field.help_text}
/>
);
case 'number':
return (
<NumberInput
value={value ?? field.default ?? 0}
onChange={(v) => onChange(field.id, v)}
{...common}
/>
);
case 'select':
return (
<Select
value={(value ?? field.default ?? '') + ''}
data={(field.options || []).map((o) => ({
value: o.value + '',
label: o.label,
}))}
onChange={(v) => onChange(field.id, v)}
{...common}
/>
);
case 'string':
default:
return (
<TextInput
value={value ?? field.default ?? ''}
onChange={(e) => onChange(field.id, e.currentTarget.value)}
{...common}
/>
);
}
};

View file

@ -1,5 +1,5 @@
// frontend/src/components/FloatingVideo.js
import React, { useEffect, useRef, useState } from 'react';
import React, { useCallback, useEffect, useRef, useState } from 'react';
import Draggable from 'react-draggable';
import useVideoStore from '../store/useVideoStore';
import mpegts from 'mpegts.js';
@ -17,7 +17,94 @@ export default function FloatingVideo() {
const [isLoading, setIsLoading] = useState(false);
const [loadError, setLoadError] = useState(null);
const [showOverlay, setShowOverlay] = useState(true);
const [videoSize, setVideoSize] = useState({ width: 320, height: 180 });
const [isResizing, setIsResizing] = useState(false);
const resizeStateRef = useRef(null);
const overlayTimeoutRef = useRef(null);
const aspectRatioRef = useRef(320 / 180);
const [dragPosition, setDragPosition] = useState(null);
const dragPositionRef = useRef(null);
const dragOffsetRef = useRef({ x: 0, y: 0 });
const initialPositionRef = useRef(null);
const MIN_WIDTH = 220;
const MIN_HEIGHT = 124;
const VISIBLE_MARGIN = 48; // keep part of the window visible when dragging
const HEADER_HEIGHT = 38; // height of the close button header area
const ERROR_HEIGHT = 45; // approximate height of error message area when displayed
const HANDLE_SIZE = 18;
const HANDLE_OFFSET = 0;
const resizeHandleBaseStyle = {
position: 'absolute',
width: HANDLE_SIZE,
height: HANDLE_SIZE,
backgroundColor: 'transparent',
borderRadius: 6,
zIndex: 8,
touchAction: 'none',
};
const resizeHandles = [
{
id: 'bottom-right',
cursor: 'nwse-resize',
xDir: 1,
yDir: 1,
isLeft: false,
isTop: false,
style: {
bottom: HANDLE_OFFSET,
right: HANDLE_OFFSET,
borderBottom: '2px solid rgba(255, 255, 255, 0.9)',
borderRight: '2px solid rgba(255, 255, 255, 0.9)',
borderRadius: '0 0 6px 0',
},
},
{
id: 'bottom-left',
cursor: 'nesw-resize',
xDir: -1,
yDir: 1,
isLeft: true,
isTop: false,
style: {
bottom: HANDLE_OFFSET,
left: HANDLE_OFFSET,
borderBottom: '2px solid rgba(255, 255, 255, 0.9)',
borderLeft: '2px solid rgba(255, 255, 255, 0.9)',
borderRadius: '0 0 0 6px',
},
},
{
id: 'top-right',
cursor: 'nesw-resize',
xDir: 1,
yDir: -1,
isLeft: false,
isTop: true,
style: {
top: HANDLE_OFFSET,
right: HANDLE_OFFSET,
borderTop: '2px solid rgba(255, 255, 255, 0.9)',
borderRight: '2px solid rgba(255, 255, 255, 0.9)',
borderRadius: '0 6px 0 0',
},
},
{
id: 'top-left',
cursor: 'nwse-resize',
xDir: -1,
yDir: -1,
isLeft: true,
isTop: true,
style: {
top: HANDLE_OFFSET,
left: HANDLE_OFFSET,
borderTop: '2px solid rgba(255, 255, 255, 0.9)',
borderLeft: '2px solid rgba(255, 255, 255, 0.9)',
borderRadius: '6px 0 0 0',
},
},
];
// Safely destroy the mpegts player to prevent errors
const safeDestroyPlayer = () => {
@ -315,24 +402,319 @@ export default function FloatingVideo() {
}, 50);
};
const clampToVisible = useCallback(
(x, y) => {
if (typeof window === 'undefined') return { x, y };
const totalHeight = videoSize.height + HEADER_HEIGHT + ERROR_HEIGHT;
const minX = -(videoSize.width - VISIBLE_MARGIN);
const minY = -(totalHeight - VISIBLE_MARGIN);
const maxX = window.innerWidth - videoSize.width;
const maxY = window.innerHeight - totalHeight;
return {
x: Math.min(Math.max(x, minX), maxX),
y: Math.min(Math.max(y, minY), maxY),
};
},
[
VISIBLE_MARGIN,
HEADER_HEIGHT,
ERROR_HEIGHT,
videoSize.height,
videoSize.width,
]
);
const clampToVisibleWithSize = useCallback(
(x, y, width, height) => {
if (typeof window === 'undefined') return { x, y };
const totalHeight = height + HEADER_HEIGHT + ERROR_HEIGHT;
const minX = -(width - VISIBLE_MARGIN);
const minY = -(totalHeight - VISIBLE_MARGIN);
const maxX = window.innerWidth - width;
const maxY = window.innerHeight - totalHeight;
return {
x: Math.min(Math.max(x, minX), maxX),
y: Math.min(Math.max(y, minY), maxY),
};
},
[VISIBLE_MARGIN, HEADER_HEIGHT, ERROR_HEIGHT]
);
const handleResizeMove = useCallback(
(event) => {
if (!resizeStateRef.current) return;
const clientX =
event.touches && event.touches.length
? event.touches[0].clientX
: event.clientX;
const clientY =
event.touches && event.touches.length
? event.touches[0].clientY
: event.clientY;
const {
startX,
startY,
startWidth,
startHeight,
startPos,
handle,
aspectRatio,
} = resizeStateRef.current;
const deltaX = clientX - startX;
const deltaY = clientY - startY;
const widthDelta = deltaX * handle.xDir;
const heightDelta = deltaY * handle.yDir;
const ratio = aspectRatio || aspectRatioRef.current;
// Derive width/height while keeping the original aspect ratio
let nextWidth = startWidth + widthDelta;
let nextHeight = nextWidth / ratio;
// Allow vertical-driven resize if the user drags mostly vertically
if (Math.abs(deltaY) > Math.abs(deltaX)) {
nextHeight = startHeight + heightDelta;
nextWidth = nextHeight * ratio;
}
// Respect minimums while keeping the ratio
if (nextWidth < MIN_WIDTH) {
nextWidth = MIN_WIDTH;
nextHeight = nextWidth / ratio;
}
if (nextHeight < MIN_HEIGHT) {
nextHeight = MIN_HEIGHT;
nextWidth = nextHeight * ratio;
}
// Keep within viewport with a margin based on current position
const posX = startPos?.x ?? 0;
const posY = startPos?.y ?? 0;
const margin = VISIBLE_MARGIN;
let maxWidth = null;
let maxHeight = null;
if (!handle.isLeft) {
maxWidth = Math.max(MIN_WIDTH, window.innerWidth - posX - margin);
}
if (!handle.isTop) {
maxHeight = Math.max(MIN_HEIGHT, window.innerHeight - posY - margin);
}
if (maxWidth != null && nextWidth > maxWidth) {
nextWidth = maxWidth;
nextHeight = nextWidth / ratio;
}
if (maxHeight != null && nextHeight > maxHeight) {
nextHeight = maxHeight;
nextWidth = nextHeight * ratio;
}
// Final pass to honor both bounds while keeping the ratio
if (maxWidth != null && nextWidth > maxWidth) {
nextWidth = maxWidth;
nextHeight = nextWidth / ratio;
}
setVideoSize({
width: Math.round(nextWidth),
height: Math.round(nextHeight),
});
if (handle.isLeft || handle.isTop) {
let nextX = posX;
let nextY = posY;
if (handle.isLeft) {
nextX = posX + (startWidth - nextWidth);
}
if (handle.isTop) {
nextY = posY + (startHeight - nextHeight);
}
const clamped = clampToVisibleWithSize(
nextX,
nextY,
nextWidth,
nextHeight
);
if (handle.isLeft) {
nextX = clamped.x;
}
if (handle.isTop) {
nextY = clamped.y;
}
const nextPos = { x: nextX, y: nextY };
setDragPosition(nextPos);
dragPositionRef.current = nextPos;
}
},
[MIN_HEIGHT, MIN_WIDTH, VISIBLE_MARGIN, clampToVisibleWithSize]
);
const endResize = useCallback(() => {
setIsResizing(false);
resizeStateRef.current = null;
window.removeEventListener('mousemove', handleResizeMove);
window.removeEventListener('mouseup', endResize);
window.removeEventListener('touchmove', handleResizeMove);
window.removeEventListener('touchend', endResize);
}, [handleResizeMove]);
const startResize = (event, handle) => {
event.stopPropagation();
event.preventDefault();
const clientX =
event.touches && event.touches.length
? event.touches[0].clientX
: event.clientX;
const clientY =
event.touches && event.touches.length
? event.touches[0].clientY
: event.clientY;
const aspectRatio =
videoSize.height > 0
? videoSize.width / videoSize.height
: aspectRatioRef.current;
aspectRatioRef.current = aspectRatio;
const startPos = dragPositionRef.current ||
initialPositionRef.current || { x: 0, y: 0 };
resizeStateRef.current = {
startX: clientX,
startY: clientY,
startWidth: videoSize.width,
startHeight: videoSize.height,
aspectRatio,
startPos,
handle,
};
setIsResizing(true);
window.addEventListener('mousemove', handleResizeMove);
window.addEventListener('mouseup', endResize);
window.addEventListener('touchmove', handleResizeMove);
window.addEventListener('touchend', endResize);
};
useEffect(() => {
return () => {
endResize();
};
}, [endResize]);
useEffect(() => {
dragPositionRef.current = dragPosition;
}, [dragPosition]);
// Initialize the floating window near bottom-right once
useEffect(() => {
if (initialPositionRef.current || typeof window === 'undefined') return;
const totalHeight = videoSize.height + HEADER_HEIGHT + ERROR_HEIGHT;
const initialX = Math.max(10, window.innerWidth - videoSize.width - 20);
const initialY = Math.max(10, window.innerHeight - totalHeight - 20);
const pos = clampToVisible(initialX, initialY);
initialPositionRef.current = pos;
setDragPosition(pos);
dragPositionRef.current = pos;
}, [
clampToVisible,
videoSize.height,
videoSize.width,
HEADER_HEIGHT,
ERROR_HEIGHT,
]);
const handleDragStart = useCallback(
(event, data) => {
const clientX = event.touches?.[0]?.clientX ?? event.clientX;
const clientY = event.touches?.[0]?.clientY ?? event.clientY;
const rect = videoContainerRef.current?.getBoundingClientRect();
if (clientX != null && clientY != null && rect) {
dragOffsetRef.current = {
x: clientX - rect.left,
y: clientY - rect.top,
};
} else {
dragOffsetRef.current = { x: 0, y: 0 };
}
const clamped = clampToVisible(data?.x ?? 0, data?.y ?? 0);
setDragPosition(clamped);
dragPositionRef.current = clamped;
},
[clampToVisible]
);
const handleDrag = useCallback(
(event) => {
const clientX = event.touches?.[0]?.clientX ?? event.clientX;
const clientY = event.touches?.[0]?.clientY ?? event.clientY;
if (clientX == null || clientY == null) return;
const nextX = clientX - (dragOffsetRef.current?.x ?? 0);
const nextY = clientY - (dragOffsetRef.current?.y ?? 0);
const clamped = clampToVisible(nextX, nextY);
setDragPosition(clamped);
dragPositionRef.current = clamped;
},
[clampToVisible]
);
const handleDragStop = useCallback(
(_, data) => {
const clamped = clampToVisible(data?.x ?? 0, data?.y ?? 0);
setDragPosition(clamped);
dragPositionRef.current = clamped;
},
[clampToVisible]
);
// If the floating video is hidden or no URL is selected, do not render
if (!isVisible || !streamUrl) {
return null;
}
return (
<Draggable nodeRef={videoContainerRef}>
<Draggable
nodeRef={videoContainerRef}
cancel=".floating-video-no-drag"
disabled={isResizing}
position={dragPosition || undefined}
defaultPosition={initialPositionRef.current || { x: 0, y: 0 }}
onStart={handleDragStart}
onDrag={handleDrag}
onStop={handleDragStop}
>
<div
ref={videoContainerRef}
style={{
position: 'fixed',
bottom: '20px',
right: '20px',
width: '320px',
top: 0,
left: 0,
width: `${videoSize.width}px`,
zIndex: 9999,
backgroundColor: '#333',
borderRadius: '8px',
overflow: 'hidden',
overflow: 'visible',
boxShadow: '0 2px 10px rgba(0,0,0,0.7)',
}}
>
@ -378,10 +760,12 @@ export default function FloatingVideo() {
<video
ref={videoRef}
controls
className="floating-video-no-drag"
style={{
width: '100%',
height: '180px',
height: `${videoSize.height}px`,
backgroundColor: '#000',
borderRadius: '0 0 8px 8px',
// Better controls styling for VOD
...(contentType === 'vod' && {
controlsList: 'nodownload',
@ -468,6 +852,21 @@ export default function FloatingVideo() {
</Text>
</Box>
)}
{/* Resize handles */}
{resizeHandles.map((handle) => (
<Box
key={handle.id}
className="floating-video-no-drag"
onMouseDown={(event) => startResize(event, handle)}
onTouchStart={(event) => startResize(event, handle)}
style={{
...resizeHandleBaseStyle,
...handle.style,
cursor: handle.cursor,
}}
/>
))}
</div>
</Draggable>
);
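The drag and resize handlers above funnel through the same clamping rule: keep at least VISIBLE_MARGIN pixels of the window on screen, counting the header and error areas. A standalone sketch of that rule with assumed viewport numbers:

// Mirrors clampToVisible; constants copied from the component above.
const clamp = (x, y, width, height, viewportW = 1280, viewportH = 720) => {
  const totalHeight = height + 38 /* HEADER_HEIGHT */ + 45 /* ERROR_HEIGHT */;
  const minX = -(width - 48 /* VISIBLE_MARGIN */);
  const minY = -(totalHeight - 48);
  return {
    x: Math.min(Math.max(x, minX), viewportW - width),
    y: Math.min(Math.max(y, minY), viewportH - totalHeight),
  };
};

// clamp(2000, -500, 320, 180) -> { x: 960, y: -215 } on a 1280x720 viewport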

View file

@ -0,0 +1,206 @@
import React from "react";
import {
CHANNEL_WIDTH,
EXPANDED_PROGRAM_HEIGHT,
HOUR_WIDTH,
PROGRAM_HEIGHT,
} from '../pages/guideUtils.js';
import {Box, Flex, Text} from "@mantine/core";
import {Play} from "lucide-react";
import logo from "../images/logo.png";
const GuideRow = React.memo(({ index, style, data }) => {
const {
filteredChannels,
programsByChannelId,
expandedProgramId,
rowHeights,
logos,
hoveredChannelId,
setHoveredChannelId,
renderProgram,
handleLogoClick,
contentWidth,
} = data;
const channel = filteredChannels[index];
if (!channel) {
return null;
}
const channelPrograms = programsByChannelId.get(channel.id) || [];
const rowHeight =
rowHeights[index] ??
(channelPrograms.some((program) => program.id === expandedProgramId)
? EXPANDED_PROGRAM_HEIGHT
: PROGRAM_HEIGHT);
const PlaceholderProgram = () => {
return <>
{Array.from({length: Math.ceil(24 / 2)}).map(
(_, placeholderIndex) => (
<Box
key={`placeholder-${channel.id}-${placeholderIndex}`}
style={{
alignItems: 'center',
justifyContent: 'center',
}}
pos='absolute'
left={placeholderIndex * (HOUR_WIDTH * 2)}
top={0}
w={HOUR_WIDTH * 2}
h={rowHeight - 4}
bd={'1px dashed #2D3748'}
bdrs={4}
display={'flex'}
c='#4A5568'
>
<Text size="sm">No program data</Text>
</Box>
)
)}
</>;
}
return (
<div
data-testid="guide-row"
style={{ ...style, width: contentWidth, height: rowHeight }}
>
<Box
style={{
borderBottom: '0px solid #27272A',
transition: 'height 0.2s ease',
overflow: 'visible',
}}
display={'flex'}
h={'100%'}
pos='relative'
>
<Box
className="channel-logo"
style={{
flexShrink: 0,
alignItems: 'center',
justifyContent: 'center',
backgroundColor: '#18181B',
borderRight: '1px solid #27272A',
borderBottom: '1px solid #27272A',
boxShadow: '2px 0 5px rgba(0,0,0,0.2)',
zIndex: 30,
transition: 'height 0.2s ease',
cursor: 'pointer',
}}
w={CHANNEL_WIDTH}
miw={CHANNEL_WIDTH}
display={'flex'}
left={0}
h={'100%'}
pos='relative'
onClick={(event) => handleLogoClick(channel, event)}
onMouseEnter={() => setHoveredChannelId(channel.id)}
onMouseLeave={() => setHoveredChannelId(null)}
>
{hoveredChannelId === channel.id && (
<Flex
align="center"
justify="center"
style={{
backgroundColor: 'rgba(0, 0, 0, 0.7)',
zIndex: 10,
animation: 'fadeIn 0.2s',
}}
pos='absolute'
top={0}
left={0}
right={0}
bottom={0}
w={'100%'}
h={'100%'}
>
<Play size={32} color="#fff" fill="#fff" />
</Flex>
)}
<Flex
direction="column"
align="center"
justify="space-between"
style={{
boxSizing: 'border-box',
zIndex: 5,
}}
w={'100%'}
h={'100%'}
p={'4px'}
pos='relative'
>
<Box
style={{
alignItems: 'center',
justifyContent: 'center',
overflow: 'hidden',
}}
w={'100%'}
h={`${rowHeight - 32}px`}
display={'flex'}
p={'4px'}
mb={'4px'}
>
<img
src={logos[channel.logo_id]?.cache_url || logo}
alt={channel.name}
style={{
maxWidth: '100%',
maxHeight: '100%',
objectFit: 'contain',
}}
/>
</Box>
<Text
size="sm"
weight={600}
style={{
transform: 'translateX(-50%)',
backgroundColor: '#18181B',
alignItems: 'center',
justifyContent: 'center',
}}
pos='absolute'
bottom={4}
left={'50%'}
p={'2px 8px'}
bdrs={4}
fz={'0.85em'}
bd={'1px solid #27272A'}
h={'24px'}
display={'flex'}
miw={'36px'}
>
{channel.channel_number || '-'}
</Text>
</Flex>
</Box>
<Box
style={{
transition: 'height 0.2s ease',
}}
flex={1}
pos='relative'
h={'100%'}
pl={0}
>
{channelPrograms.length > 0 ? (
channelPrograms.map((program) =>
renderProgram(program, undefined, channel)
)
) : <PlaceholderProgram />}
</Box>
</Box>
</div>
);
});
export default GuideRow;
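GuideRow's ({ index, style, data }) signature is the row-renderer contract of a virtualized list. A hedged sketch assuming react-window's VariableSizeList; the actual list wrapper, import paths, and variable names are not part of this diff:

import { VariableSizeList as List } from 'react-window';
import { CHANNEL_WIDTH, PROGRAM_HEIGHT } from '../pages/guideUtils.js';
import GuideRow from './GuideRow';

<List
  height={600}
  width={CHANNEL_WIDTH + contentWidth}
  itemCount={filteredChannels.length}
  itemSize={(index) => rowHeights[index] ?? PROGRAM_HEIGHT}
  itemData={{
    filteredChannels,
    programsByChannelId,
    expandedProgramId,
    rowHeights,
    logos,
    hoveredChannelId,
    setHoveredChannelId,
    renderProgram,
    handleLogoClick,
    contentWidth,
  }}
>
  {GuideRow}
</List>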

View file

@ -0,0 +1,105 @@
import React from 'react';
import { Box, Text } from '@mantine/core';
import { format } from '../utils/dateTimeUtils.js';
import { HOUR_WIDTH } from '../pages/guideUtils.js';
const HourBlock = React.memo(({ hourData, timeFormat, formatDayLabel, handleTimeClick }) => {
const { time, isNewDay } = hourData;
return (
<Box
key={format(time)}
style={{
borderRight: '1px solid #8DAFAA',
cursor: 'pointer',
borderLeft: isNewDay ? '2px solid #3BA882' : 'none',
backgroundColor: isNewDay ? '#1E2A27' : '#1B2421',
}}
w={HOUR_WIDTH}
h={'40px'}
pos='relative'
c='#a0aec0'
onClick={(e) => handleTimeClick(time, e)}
>
<Text
size="sm"
style={{ transform: 'none' }}
pos='absolute'
top={8}
left={4}
bdrs={2}
lh={1.2}
ta='left'
>
<Text
span
size="xs"
display={'block'}
opacity={0.7}
fw={isNewDay ? 600 : 400}
c={isNewDay ? '#3BA882' : undefined}
>
{formatDayLabel(time)}
</Text>
{format(time, timeFormat)}
<Text span size="xs" ml={1} opacity={0.7} />
</Text>
<Box
style={{
backgroundColor: '#27272A',
zIndex: 10,
}}
pos='absolute'
left={0}
top={0}
bottom={0}
w={'1px'}
/>
<Box
style={{ justifyContent: 'space-between' }}
pos='absolute'
bottom={0}
w={'100%'}
display={'flex'}
p={'0 1px'}
>
{[15, 30, 45].map((minute) => (
<Box
key={minute}
style={{ backgroundColor: '#718096' }}
w={'1px'}
h={'8px'}
pos='absolute'
bottom={0}
left={`${(minute / 60) * 100}%`}
/>
))}
</Box>
</Box>
);
});
const HourTimeline = React.memo(({
hourTimeline,
timeFormat,
formatDayLabel,
handleTimeClick
}) => {
return (
<>
{hourTimeline.map((hourData) => (
<HourBlock
key={format(hourData.time)}
hourData={hourData}
timeFormat={timeFormat}
formatDayLabel={formatDayLabel}
handleTimeClick={handleTimeClick}
/>
))}
</>
);
});
export default HourTimeline;

View file

@ -1,4 +1,4 @@
import React, { useState, useEffect, useRef } from 'react';
import React, { useState, useEffect, useRef } from 'react';
import { Skeleton } from '@mantine/core';
import useLogosStore from '../store/logos';
import logo from '../images/logo.png'; // Default logo
@ -16,15 +16,16 @@ const LazyLogo = ({
}) => {
const [isLoading, setIsLoading] = useState(false);
const [hasError, setHasError] = useState(false);
const fetchAttempted = useRef(new Set()); // Track which IDs we've already tried to fetch
const fetchAttempted = useRef(new Set());
const isMountedRef = useRef(true);
const logos = useLogosStore((s) => s.logos);
const fetchLogosByIds = useLogosStore((s) => s.fetchLogosByIds);
const allowLogoRendering = useLogosStore((s) => s.allowLogoRendering);
// Determine the logo source
const logoData = logoId && logos[logoId];
const logoSrc = logoData?.cache_url || fallbackSrc; // Only use cache URL if we have logo data
const logoSrc = logoData?.cache_url || fallbackSrc;
// Cleanup on unmount
useEffect(() => {
@ -34,6 +35,9 @@ const LazyLogo = ({
}, []);
useEffect(() => {
// Don't start fetching until logo rendering is allowed
if (!allowLogoRendering) return;
// If we have a logoId but no logo data, add it to the batch request queue
if (
logoId &&
@ -44,7 +48,7 @@ const LazyLogo = ({
isMountedRef.current
) {
setIsLoading(true);
fetchAttempted.current.add(logoId); // Mark this ID as attempted
fetchAttempted.current.add(logoId);
logoRequestQueue.add(logoId);
// Clear existing timer and set new one to batch requests
@ -82,7 +86,7 @@ const LazyLogo = ({
setIsLoading(false);
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [logoId, fetchLogosByIds, logoData]); // Include logoData to detect when it becomes available
}, [logoId, fetchLogosByIds, logoData, allowLogoRendering]);
// Reset error state when logoId changes
useEffect(() => {
@ -91,8 +95,10 @@ const LazyLogo = ({
}
}, [logoId]);
// Show skeleton while loading
if (isLoading && !logoData) {
// Show skeleton if:
// 1. Logo rendering is not allowed yet, OR
// 2. We don't have logo data yet (regardless of loading state)
if (logoId && (!allowLogoRendering || !logoData)) {
return (
<Skeleton
height={style.maxHeight || 18}

View file

@ -0,0 +1,26 @@
import { Text } from '@mantine/core';
// Short preview that triggers the details modal when clicked
const RecordingSynopsis = ({ description, onOpen }) => {
const truncated = description?.length > 140;
const preview = truncated
? `${description.slice(0, 140).trim()}...`
: description;
if (!description) return null;
return (
<Text
size="xs"
c="dimmed"
lineClamp={2}
title={description}
onClick={() => onOpen?.()}
style={{ cursor: 'pointer' }}
>
{preview}
</Text>
);
};
export default RecordingSynopsis;

View file

@ -0,0 +1,978 @@
import { useEffect, useMemo, useState } from 'react';
import {
ActionIcon,
Box,
Button,
FileInput,
Flex,
Group,
Loader,
Modal,
NumberInput,
Paper,
Select,
Stack,
Switch,
Text,
TextInput,
Tooltip,
} from '@mantine/core';
import {
Download,
RefreshCcw,
RotateCcw,
SquareMinus,
SquarePlus,
UploadCloud,
} from 'lucide-react';
import { notifications } from '@mantine/notifications';
import dayjs from 'dayjs';
import API from '../../api';
import ConfirmationDialog from '../ConfirmationDialog';
import useLocalStorage from '../../hooks/useLocalStorage';
import useWarningsStore from '../../store/warnings';
import { CustomTable, useTable } from '../tables/CustomTable';
const RowActions = ({
row,
handleDownload,
handleRestoreClick,
handleDeleteClick,
downloading,
}) => {
return (
<Flex gap={4} wrap="nowrap">
<Tooltip label="Download">
<ActionIcon
variant="transparent"
size="sm"
color="blue.5"
onClick={() => handleDownload(row.original.name)}
loading={downloading === row.original.name}
disabled={downloading !== null}
>
<Download size={18} />
</ActionIcon>
</Tooltip>
<Tooltip label="Restore">
<ActionIcon
variant="transparent"
size="sm"
color="yellow.5"
onClick={() => handleRestoreClick(row.original)}
>
<RotateCcw size={18} />
</ActionIcon>
</Tooltip>
<Tooltip label="Delete">
<ActionIcon
variant="transparent"
size="sm"
color="red.9"
onClick={() => handleDeleteClick(row.original)}
>
<SquareMinus size={18} />
</ActionIcon>
</Tooltip>
</Flex>
);
};
// Convert 24h time string to 12h format with period
function to12Hour(time24) {
if (!time24) return { time: '12:00', period: 'AM' };
const [hours, minutes] = time24.split(':').map(Number);
const period = hours >= 12 ? 'PM' : 'AM';
const hours12 = hours % 12 || 12;
return {
time: `${hours12}:${String(minutes).padStart(2, '0')}`,
period,
};
}
// Convert 12h time + period to 24h format
function to24Hour(time12, period) {
if (!time12) return '00:00';
const [hours, minutes] = time12.split(':').map(Number);
let hours24 = hours;
if (period === 'PM' && hours !== 12) {
hours24 = hours + 12;
} else if (period === 'AM' && hours === 12) {
hours24 = 0;
}
return `${String(hours24).padStart(2, '0')}:${String(minutes).padStart(2, '0')}`;
}
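// Round-trip examples (illustrative values, not from this diff):
//   to12Hour('15:30')        -> { time: '3:30', period: 'PM' }
//   to24Hour('3:30', 'PM')   -> '15:30'
//   to12Hour('00:05')        -> { time: '12:05', period: 'AM' }
//   to24Hour('12:05', 'AM')  -> '00:05'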
// Get default timezone (same as Settings page)
function getDefaultTimeZone() {
try {
return Intl.DateTimeFormat().resolvedOptions().timeZone || 'UTC';
} catch {
return 'UTC';
}
}
// Validate cron expression
function validateCronExpression(expression) {
if (!expression || expression.trim() === '') {
return { valid: false, error: 'Cron expression is required' };
}
const parts = expression.trim().split(/\s+/);
if (parts.length !== 5) {
return {
valid: false,
error:
'Cron expression must have exactly 5 parts: minute hour day month weekday',
};
}
const [minute, hour, dayOfMonth, month, dayOfWeek] = parts;
// Validate each part (allowing *, */N steps, ranges, lists, steps)
// Supports: *, */2, 5, 1-5, 1-5/2, 1,3,5, etc.
const cronPartRegex =
/^(\*\/\d+|\*|\d+(-\d+)?(\/\d+)?(,\d+(-\d+)?(\/\d+)?)*)$/;
if (!cronPartRegex.test(minute)) {
return {
valid: false,
error: 'Invalid minute field (0-59, *, or cron syntax)',
};
}
if (!cronPartRegex.test(hour)) {
return {
valid: false,
error: 'Invalid hour field (0-23, *, or cron syntax)',
};
}
if (!cronPartRegex.test(dayOfMonth)) {
return {
valid: false,
error: 'Invalid day field (1-31, *, or cron syntax)',
};
}
if (!cronPartRegex.test(month)) {
return {
valid: false,
error: 'Invalid month field (1-12, *, or cron syntax)',
};
}
if (!cronPartRegex.test(dayOfWeek)) {
return {
valid: false,
error: 'Invalid weekday field (0-6, *, or cron syntax)',
};
}
// Additional range validation for numeric values
const validateRange = (value, min, max, name) => {
// Skip if it's * or contains special characters
if (
value === '*' ||
value.includes('/') ||
value.includes('-') ||
value.includes(',')
) {
return null;
}
const num = parseInt(value, 10);
if (isNaN(num) || num < min || num > max) {
return `${name} must be between ${min} and ${max}`;
}
return null;
};
const minuteError = validateRange(minute, 0, 59, 'Minute');
if (minuteError) return { valid: false, error: minuteError };
const hourError = validateRange(hour, 0, 23, 'Hour');
if (hourError) return { valid: false, error: hourError };
const dayError = validateRange(dayOfMonth, 1, 31, 'Day');
if (dayError) return { valid: false, error: dayError };
const monthError = validateRange(month, 1, 12, 'Month');
if (monthError) return { valid: false, error: monthError };
const weekdayError = validateRange(dayOfWeek, 0, 6, 'Weekday');
if (weekdayError) return { valid: false, error: weekdayError };
return { valid: true, error: null };
}
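// Expected results for a few inputs, derived from the checks above (illustrative only):
//   validateCronExpression('0 3 * * *')    -> { valid: true,  error: null }
//   validateCronExpression('*/15 * * * *') -> { valid: true,  error: null }
//   validateCronExpression('0 25 * * *')   -> { valid: false, error: 'Hour must be between 0 and 23' }
//   validateCronExpression('0 3 * *')      -> { valid: false } (needs exactly 5 fields)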
const DAYS_OF_WEEK = [
{ value: '0', label: 'Sunday' },
{ value: '1', label: 'Monday' },
{ value: '2', label: 'Tuesday' },
{ value: '3', label: 'Wednesday' },
{ value: '4', label: 'Thursday' },
{ value: '5', label: 'Friday' },
{ value: '6', label: 'Saturday' },
];
function formatBytes(bytes) {
if (bytes === 0) return '0 B';
const k = 1024;
const sizes = ['B', 'KB', 'MB', 'GB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return `${(bytes / Math.pow(k, i)).toFixed(2)} ${sizes[i]}`;
}
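// e.g. formatBytes(0) -> '0 B', formatBytes(1536) -> '1.50 KB', formatBytes(5242880) -> '5.00 MB'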
export default function BackupManager() {
const [backups, setBackups] = useState([]);
const [loading, setLoading] = useState(false);
const [creating, setCreating] = useState(false);
const [downloading, setDownloading] = useState(null);
const [uploadFile, setUploadFile] = useState(null);
const [uploadModalOpen, setUploadModalOpen] = useState(false);
const [restoreConfirmOpen, setRestoreConfirmOpen] = useState(false);
const [deleteConfirmOpen, setDeleteConfirmOpen] = useState(false);
const [selectedBackup, setSelectedBackup] = useState(null);
const [restoring, setRestoring] = useState(false);
const [deleting, setDeleting] = useState(false);
// Read user's preferences from settings
const [timeFormat] = useLocalStorage('time-format', '12h');
const [dateFormatSetting] = useLocalStorage('date-format', 'mdy');
const [tableSize] = useLocalStorage('table-size', 'default');
const [userTimezone] = useLocalStorage('time-zone', getDefaultTimeZone());
const is12Hour = timeFormat === '12h';
// Format date according to user preferences
const formatDate = (dateString) => {
const date = dayjs(dateString);
const datePart = dateFormatSetting === 'mdy' ? 'MM/DD/YYYY' : 'DD/MM/YYYY';
const timePart = is12Hour ? 'h:mm:ss A' : 'HH:mm:ss';
return date.format(`${datePart}, ${timePart}`);
};
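// e.g. with dateFormatSetting 'mdy' + 12h: '01/15/2026, 8:55:38 AM';
// with any other setting + 24h: '15/01/2026, 08:55:38' (illustrative timestamps)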
// Warning suppression for confirmation dialogs
const suppressWarning = useWarningsStore((s) => s.suppressWarning);
// Schedule state
const [schedule, setSchedule] = useState({
enabled: false,
frequency: 'daily',
time: '03:00',
day_of_week: 0,
retention_count: 0,
cron_expression: '',
});
const [scheduleLoading, setScheduleLoading] = useState(false);
const [scheduleSaving, setScheduleSaving] = useState(false);
const [scheduleChanged, setScheduleChanged] = useState(false);
const [advancedMode, setAdvancedMode] = useState(false);
const [cronError, setCronError] = useState(null);
// For 12-hour display mode
const [displayTime, setDisplayTime] = useState('3:00');
const [timePeriod, setTimePeriod] = useState('AM');
const columns = useMemo(
() => [
{
header: 'Filename',
accessorKey: 'name',
grow: true,
cell: ({ cell }) => (
<div
style={{
whiteSpace: 'nowrap',
overflow: 'hidden',
textOverflow: 'ellipsis',
}}
>
{cell.getValue()}
</div>
),
},
{
header: 'Size',
accessorKey: 'size',
size: 80,
cell: ({ cell }) => (
<Text size="sm">{formatBytes(cell.getValue())}</Text>
),
},
{
header: 'Created',
accessorKey: 'created',
minSize: 180,
cell: ({ cell }) => (
<Text size="sm" style={{ whiteSpace: 'nowrap' }}>
{formatDate(cell.getValue())}
</Text>
),
},
{
id: 'actions',
header: 'Actions',
size: tableSize === 'compact' ? 75 : 100,
},
],
[tableSize]
);
const renderHeaderCell = (header) => {
return (
<Text size="sm" name={header.id}>
{header.column.columnDef.header}
</Text>
);
};
const renderBodyCell = ({ cell, row }) => {
switch (cell.column.id) {
case 'actions':
return (
<RowActions
row={row}
handleDownload={handleDownload}
handleRestoreClick={handleRestoreClick}
handleDeleteClick={handleDeleteClick}
downloading={downloading}
/>
);
}
};
const table = useTable({
columns,
data: backups,
allRowIds: backups.map((b) => b.name),
bodyCellRenderFns: {
actions: renderBodyCell,
},
headerCellRenderFns: {
name: renderHeaderCell,
size: renderHeaderCell,
created: renderHeaderCell,
actions: renderHeaderCell,
},
});
const loadBackups = async () => {
setLoading(true);
try {
const backupList = await API.listBackups();
setBackups(backupList);
} catch (error) {
notifications.show({
title: 'Error',
message: error?.message || 'Failed to load backups',
color: 'red',
});
} finally {
setLoading(false);
}
};
const loadSchedule = async () => {
setScheduleLoading(true);
try {
const settings = await API.getBackupSchedule();
// Check if using cron expression (advanced mode)
if (settings.cron_expression) {
setAdvancedMode(true);
}
setSchedule(settings);
// Initialize 12-hour display values
const { time, period } = to12Hour(settings.time);
setDisplayTime(time);
setTimePeriod(period);
setScheduleChanged(false);
} catch (error) {
// Ignore errors on initial load - settings may not exist yet
} finally {
setScheduleLoading(false);
}
};
useEffect(() => {
loadBackups();
loadSchedule();
}, []);
// Validate cron expression when switching to advanced mode
useEffect(() => {
if (advancedMode && schedule.cron_expression) {
const validation = validateCronExpression(schedule.cron_expression);
setCronError(validation.valid ? null : validation.error);
} else {
setCronError(null);
}
}, [advancedMode, schedule.cron_expression]);
const handleScheduleChange = (field, value) => {
setSchedule((prev) => ({ ...prev, [field]: value }));
setScheduleChanged(true);
// Validate cron expression if in advanced mode
if (field === 'cron_expression' && advancedMode) {
const validation = validateCronExpression(value);
setCronError(validation.valid ? null : validation.error);
}
};
// Handle time changes in 12-hour mode
const handleTimeChange12h = (newTime, newPeriod) => {
const time = newTime ?? displayTime;
const period = newPeriod ?? timePeriod;
setDisplayTime(time);
setTimePeriod(period);
// Convert to 24h and update schedule
const time24 = to24Hour(time, period);
handleScheduleChange('time', time24);
};
// Handle time changes in 24-hour mode
const handleTimeChange24h = (value) => {
handleScheduleChange('time', value);
// Also update 12h display state in case user switches formats
const { time, period } = to12Hour(value);
setDisplayTime(time);
setTimePeriod(period);
};
const handleSaveSchedule = async () => {
setScheduleSaving(true);
try {
const scheduleToSave = advancedMode
? schedule
: { ...schedule, cron_expression: '' };
const updated = await API.updateBackupSchedule(scheduleToSave);
setSchedule(updated);
setScheduleChanged(false);
notifications.show({
title: 'Success',
message: 'Backup schedule saved',
color: 'green',
});
} catch (error) {
notifications.show({
title: 'Error',
message: error?.message || 'Failed to save schedule',
color: 'red',
});
} finally {
setScheduleSaving(false);
}
};
const handleCreateBackup = async () => {
setCreating(true);
try {
await API.createBackup();
notifications.show({
title: 'Success',
message: 'Backup created successfully',
color: 'green',
});
await loadBackups();
} catch (error) {
notifications.show({
title: 'Error',
message: error?.message || 'Failed to create backup',
color: 'red',
});
} finally {
setCreating(false);
}
};
const handleDownload = async (filename) => {
setDownloading(filename);
try {
await API.downloadBackup(filename);
notifications.show({
title: 'Download Started',
message: `Downloading ${filename}...`,
color: 'blue',
});
} catch (error) {
notifications.show({
title: 'Error',
message: error?.message || 'Failed to download backup',
color: 'red',
});
} finally {
setDownloading(null);
}
};
const handleDeleteClick = (backup) => {
setSelectedBackup(backup);
setDeleteConfirmOpen(true);
};
const handleDeleteConfirm = async () => {
setDeleting(true);
try {
await API.deleteBackup(selectedBackup.name);
notifications.show({
title: 'Success',
message: 'Backup deleted successfully',
color: 'green',
});
await loadBackups();
} catch (error) {
notifications.show({
title: 'Error',
message: error?.message || 'Failed to delete backup',
color: 'red',
});
} finally {
setDeleting(false);
setDeleteConfirmOpen(false);
setSelectedBackup(null);
}
};
const handleRestoreClick = (backup) => {
setSelectedBackup(backup);
setRestoreConfirmOpen(true);
};
const handleRestoreConfirm = async () => {
setRestoring(true);
try {
await API.restoreBackup(selectedBackup.name);
notifications.show({
title: 'Success',
message:
'Backup restored successfully. You may need to refresh the page.',
color: 'green',
});
setTimeout(() => window.location.reload(), 2000);
} catch (error) {
notifications.show({
title: 'Error',
message: error?.message || 'Failed to restore backup',
color: 'red',
});
} finally {
setRestoring(false);
setRestoreConfirmOpen(false);
setSelectedBackup(null);
}
};
const handleUploadSubmit = async () => {
if (!uploadFile) return;
try {
await API.uploadBackup(uploadFile);
notifications.show({
title: 'Success',
message: 'Backup uploaded successfully',
color: 'green',
});
setUploadModalOpen(false);
setUploadFile(null);
await loadBackups();
} catch (error) {
notifications.show({
title: 'Error',
message: error?.message || 'Failed to upload backup',
color: 'red',
});
}
};
return (
<Stack gap="md">
{/* Schedule Settings */}
<Stack gap="sm">
<Group justify="space-between">
<Text size="sm" fw={500}>
Scheduled Backups
</Text>
<Switch
checked={schedule.enabled}
onChange={(e) =>
handleScheduleChange('enabled', e.currentTarget.checked)
}
label={schedule.enabled ? 'Enabled' : 'Disabled'}
/>
</Group>
<Group justify="space-between">
<Text size="sm" fw={500}>
Advanced (Cron Expression)
</Text>
<Switch
checked={advancedMode}
onChange={(e) => setAdvancedMode(e.currentTarget.checked)}
label={advancedMode ? 'Enabled' : 'Disabled'}
disabled={!schedule.enabled}
size="sm"
/>
</Group>
{scheduleLoading ? (
<Loader size="sm" />
) : (
<>
{advancedMode ? (
<>
<Stack gap="sm">
<TextInput
label="Cron Expression"
value={schedule.cron_expression}
onChange={(e) =>
handleScheduleChange(
'cron_expression',
e.currentTarget.value
)
}
placeholder="0 3 * * *"
description="Format: minute hour day month weekday (e.g., '0 3 * * *' = 3:00 AM daily)"
disabled={!schedule.enabled}
error={cronError}
/>
<Text size="xs" c="dimmed">
Examples: <br /> <code>0 3 * * *</code> - Every day at 3:00
AM
<br /> <code>0 2 * * 0</code> - Every Sunday at 2:00 AM
<br /> <code>0 */6 * * *</code> - Every 6 hours
<br /> <code>30 14 1 * *</code> - 1st of every month at
2:30 PM
</Text>
</Stack>
<Group grow align="flex-end">
<NumberInput
label="Retention"
description="0 = keep all"
value={schedule.retention_count}
onChange={(value) =>
handleScheduleChange('retention_count', value || 0)
}
min={0}
disabled={!schedule.enabled}
/>
<Button
onClick={handleSaveSchedule}
loading={scheduleSaving}
disabled={!scheduleChanged || (advancedMode && cronError)}
variant="default"
>
Save
</Button>
</Group>
</>
) : (
<Stack gap="sm">
<Group align="flex-end" gap="xs" wrap="nowrap">
<Select
label="Frequency"
value={schedule.frequency}
onChange={(value) =>
handleScheduleChange('frequency', value)
}
data={[
{ value: 'daily', label: 'Daily' },
{ value: 'weekly', label: 'Weekly' },
]}
disabled={!schedule.enabled}
/>
{schedule.frequency === 'weekly' && (
<Select
label="Day"
value={String(schedule.day_of_week)}
onChange={(value) =>
handleScheduleChange('day_of_week', parseInt(value, 10))
}
data={DAYS_OF_WEEK}
disabled={!schedule.enabled}
/>
)}
{is12Hour ? (
<>
<Select
label="Hour"
value={displayTime ? displayTime.split(':')[0] : '12'}
onChange={(value) => {
const minute = displayTime
? displayTime.split(':')[1]
: '00';
handleTimeChange12h(`${value}:${minute}`, null);
}}
data={Array.from({ length: 12 }, (_, i) => ({
value: String(i + 1),
label: String(i + 1),
}))}
disabled={!schedule.enabled}
searchable
/>
<Select
label="Minute"
value={displayTime ? displayTime.split(':')[1] : '00'}
onChange={(value) => {
const hour = displayTime
? displayTime.split(':')[0]
: '12';
handleTimeChange12h(`${hour}:${value}`, null);
}}
data={Array.from({ length: 60 }, (_, i) => ({
value: String(i).padStart(2, '0'),
label: String(i).padStart(2, '0'),
}))}
disabled={!schedule.enabled}
searchable
/>
<Select
label="Period"
value={timePeriod}
onChange={(value) => handleTimeChange12h(null, value)}
data={[
{ value: 'AM', label: 'AM' },
{ value: 'PM', label: 'PM' },
]}
disabled={!schedule.enabled}
/>
</>
) : (
<>
<Select
label="Hour"
value={
schedule.time ? schedule.time.split(':')[0] : '00'
}
onChange={(value) => {
const minute = schedule.time
? schedule.time.split(':')[1]
: '00';
handleTimeChange24h(`${value}:${minute}`);
}}
data={Array.from({ length: 24 }, (_, i) => ({
value: String(i).padStart(2, '0'),
label: String(i).padStart(2, '0'),
}))}
disabled={!schedule.enabled}
searchable
/>
<Select
label="Minute"
value={
schedule.time ? schedule.time.split(':')[1] : '00'
}
onChange={(value) => {
const hour = schedule.time
? schedule.time.split(':')[0]
: '00';
handleTimeChange24h(`${hour}:${value}`);
}}
data={Array.from({ length: 60 }, (_, i) => ({
value: String(i).padStart(2, '0'),
label: String(i).padStart(2, '0'),
}))}
disabled={!schedule.enabled}
searchable
/>
</>
)}
</Group>
<Group grow align="flex-end" gap="xs">
<NumberInput
label="Retention"
description="0 = keep all"
value={schedule.retention_count}
onChange={(value) =>
handleScheduleChange('retention_count', value || 0)
}
min={0}
disabled={!schedule.enabled}
/>
<Button
onClick={handleSaveSchedule}
loading={scheduleSaving}
disabled={!scheduleChanged}
variant="default"
>
Save
</Button>
</Group>
</Stack>
)}
{/* Timezone info - only show in simple mode */}
{!advancedMode && schedule.enabled && schedule.time && (
<Text size="xs" c="dimmed" mt="xs">
System Timezone: {userTimezone}. Backup will run at{' '}
{schedule.time} {userTimezone}
</Text>
)}
</>
)}
</Stack>
{/* Backups List */}
<Stack gap={0}>
<Paper>
<Box
style={{
display: 'flex',
justifyContent: 'flex-end',
padding: 10,
}}
>
<Flex gap={6}>
<Tooltip label="Upload existing backup">
<Button
leftSection={<UploadCloud size={18} />}
variant="light"
size="xs"
onClick={() => setUploadModalOpen(true)}
p={5}
>
Upload
</Button>
</Tooltip>
<Tooltip label="Refresh list">
<Button
leftSection={<RefreshCcw size={18} />}
variant="light"
size="xs"
onClick={loadBackups}
loading={loading}
p={5}
>
Refresh
</Button>
</Tooltip>
<Tooltip label="Create new backup">
<Button
leftSection={<SquarePlus size={18} />}
variant="light"
size="xs"
onClick={handleCreateBackup}
loading={creating}
p={5}
color="green"
style={{
borderWidth: '1px',
borderColor: 'green',
color: 'white',
}}
>
Create Backup
</Button>
</Tooltip>
</Flex>
</Box>
</Paper>
<Box
style={{
display: 'flex',
flexDirection: 'column',
maxHeight: 300,
width: '100%',
overflow: 'hidden',
}}
>
<Box
style={{
flex: 1,
overflowY: 'auto',
overflowX: 'auto',
border: 'solid 1px rgb(68,68,68)',
borderRadius: 'var(--mantine-radius-default)',
}}
>
{loading ? (
<Box p="xl" style={{ display: 'flex', justifyContent: 'center' }}>
<Loader />
</Box>
) : backups.length === 0 ? (
<Text size="sm" c="dimmed" p="md" ta="center">
No backups found. Create one to get started.
</Text>
) : (
<div style={{ minWidth: 500 }}>
<CustomTable table={table} />
</div>
)}
</Box>
</Box>
</Stack>
<Modal
opened={uploadModalOpen}
onClose={() => {
setUploadModalOpen(false);
setUploadFile(null);
}}
title="Upload Backup"
>
<Stack>
<FileInput
label="Select backup file"
placeholder="Choose a .zip file"
accept=".zip,application/zip,application/x-zip-compressed"
value={uploadFile}
onChange={setUploadFile}
/>
<Group justify="flex-end">
<Button
variant="outline"
onClick={() => {
setUploadModalOpen(false);
setUploadFile(null);
}}
>
Cancel
</Button>
<Button
onClick={handleUploadSubmit}
disabled={!uploadFile}
variant="default"
>
Upload
</Button>
</Group>
</Stack>
</Modal>
<ConfirmationDialog
opened={restoreConfirmOpen}
onClose={() => {
setRestoreConfirmOpen(false);
setSelectedBackup(null);
}}
onConfirm={handleRestoreConfirm}
title="Restore Backup"
message={`Are you sure you want to restore from "${selectedBackup?.name}"? This will replace all current data with the backup data. This action cannot be undone.`}
confirmLabel="Restore"
cancelLabel="Cancel"
actionKey="restore-backup"
onSuppressChange={suppressWarning}
loading={restoring}
/>
<ConfirmationDialog
opened={deleteConfirmOpen}
onClose={() => {
setDeleteConfirmOpen(false);
setSelectedBackup(null);
}}
onConfirm={handleDeleteConfirm}
title="Delete Backup"
message={`Are you sure you want to delete "${selectedBackup?.name}"? This action cannot be undone.`}
confirmLabel="Delete"
cancelLabel="Cancel"
actionKey="delete-backup"
onSuppressChange={suppressWarning}
loading={deleting}
/>
</Stack>
);
}

View file

@ -0,0 +1,258 @@
import React, { useState } from 'react';
import { showNotification } from '../../utils/notificationUtils.js';
import { Field } from '../Field.jsx';
import {
ActionIcon,
Button,
Card,
Divider,
Group,
Stack,
Switch,
Text,
} from '@mantine/core';
import { Trash2 } from 'lucide-react';
import { getConfirmationDetails } from '../../utils/cards/PluginCardUtils.js';
const PluginFieldList = ({ plugin, settings, updateField }) => {
return plugin.fields.map((f) => (
<Field
key={f.id}
field={f}
value={settings?.[f.id]}
onChange={updateField}
/>
));
};
const PluginActionList = ({ plugin, enabled, running, handlePluginRun }) => {
return plugin.actions.map((action) => (
<Group key={action.id} justify="space-between">
<div>
<Text>{action.label}</Text>
{action.description && (
<Text size="sm" c="dimmed">
{action.description}
</Text>
)}
</div>
<Button
loading={running}
disabled={!enabled}
onClick={() => handlePluginRun(action)}
size="xs"
>
{running ? 'Running…' : 'Run'}
</Button>
</Group>
));
};
const PluginActionStatus = ({ running, lastResult }) => {
return (
<>
{running && (
<Text size="sm" c="dimmed">
Running action, please wait
</Text>
)}
{!running && lastResult?.file && (
<Text size="sm" c="dimmed">
Output: {lastResult.file}
</Text>
)}
{!running && lastResult?.error && (
<Text size="sm" c="red">
Error: {String(lastResult.error)}
</Text>
)}
</>
);
};
const PluginCard = ({
plugin,
onSaveSettings,
onRunAction,
onToggleEnabled,
onRequireTrust,
onRequestDelete,
onRequestConfirm,
}) => {
const [settings, setSettings] = useState(plugin.settings || {});
const [saving, setSaving] = useState(false);
const [running, setRunning] = useState(false);
const [enabled, setEnabled] = useState(!!plugin.enabled);
const [lastResult, setLastResult] = useState(null);
// Keep local enabled state in sync with props (e.g., after import + enable)
React.useEffect(() => {
setEnabled(!!plugin.enabled);
}, [plugin.enabled]);
// Sync settings if plugin changes identity
React.useEffect(() => {
setSettings(plugin.settings || {});
}, [plugin.key]);
const updateField = (id, val) => {
setSettings((prev) => ({ ...prev, [id]: val }));
};
const save = async () => {
setSaving(true);
try {
await onSaveSettings(plugin.key, settings);
showNotification({
title: 'Saved',
message: `${plugin.name} settings updated`,
color: 'green',
});
} finally {
setSaving(false);
}
};
const missing = plugin.missing;
const handleEnableChange = () => {
return async (e) => {
const next = e.currentTarget.checked;
if (next && !plugin.ever_enabled && onRequireTrust) {
const ok = await onRequireTrust(plugin);
if (!ok) {
// Revert
setEnabled(false);
return;
}
}
setEnabled(next);
const resp = await onToggleEnabled(plugin.key, next);
if (next && resp?.ever_enabled) {
plugin.ever_enabled = true;
}
};
};
const handlePluginRun = async (a) => {
setRunning(true);
setLastResult(null);
try {
// Determine if confirmation is required from action metadata or fallback field
const { requireConfirm, confirmTitle, confirmMessage } =
getConfirmationDetails(a, plugin, settings);
if (requireConfirm) {
const confirmed = await onRequestConfirm(confirmTitle, confirmMessage);
if (!confirmed) {
// User canceled, abort the action
return;
}
}
// Save settings before running to ensure backend uses latest values
try {
await onSaveSettings(plugin.key, settings);
} catch (e) {
/* ignore, run anyway */
}
const resp = await onRunAction(plugin.key, a.id);
if (resp?.success) {
setLastResult(resp.result || {});
const msg = resp.result?.message || 'Plugin action completed';
showNotification({
title: plugin.name,
message: msg,
color: 'green',
});
} else {
const err = resp?.error || 'Unknown error';
setLastResult({ error: err });
showNotification({
title: `${plugin.name} error`,
message: String(err),
color: 'red',
});
}
} finally {
setRunning(false);
}
};
return (
<Card
shadow="sm"
radius="md"
withBorder
opacity={!missing && enabled ? 1 : 0.6}
>
<Group justify="space-between" mb="xs" align="center">
<div>
<Text fw={600}>{plugin.name}</Text>
<Text size="sm" c="dimmed">
{plugin.description}
</Text>
</div>
<Group gap="xs" align="center">
<ActionIcon
variant="subtle"
color="red"
title="Delete plugin"
onClick={() => onRequestDelete && onRequestDelete(plugin)}
>
<Trash2 size={16} />
</ActionIcon>
<Text size="xs" c="dimmed">
v{plugin.version || '1.0.0'}
</Text>
<Switch
checked={!missing && enabled}
onChange={handleEnableChange()}
size="xs"
onLabel="On"
offLabel="Off"
disabled={missing}
/>
</Group>
</Group>
{missing && (
<Text size="sm" c="red">
Missing plugin files. Re-import or delete this entry.
</Text>
)}
{!missing && plugin.fields && plugin.fields.length > 0 && (
<Stack gap="xs" mt="sm">
<PluginFieldList
plugin={plugin}
settings={settings}
updateField={updateField}
/>
<Group>
<Button loading={saving} onClick={save} variant="default" size="xs">
Save Settings
</Button>
</Group>
</Stack>
)}
{!missing && plugin.actions && plugin.actions.length > 0 && (
<>
<Divider my="sm" />
<Stack gap="xs">
<PluginActionList
plugin={plugin}
enabled={enabled}
running={running}
handlePluginRun={handlePluginRun}
/>
<PluginActionStatus running={running} lastResult={lastResult} />
</Stack>
</>
)}
</Card>
);
};
export default PluginCard;
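A hedged sketch of the plugin object PluginCard reads, inferred only from the props accessed above (key, name, description, version, enabled, ever_enabled, missing, settings, fields, actions); the concrete values and handler names are invented:

const examplePlugin = {
  key: 'example-exporter',
  name: 'Example Exporter',
  description: 'Writes the channel list to a file',
  version: '1.0.0',
  enabled: false,
  ever_enabled: false,
  missing: false,
  settings: { output_path: '/data/export.m3u' },
  fields: [
    { id: 'output_path', type: 'string', label: 'Output path', default: '' },
  ],
  actions: [
    { id: 'run_export', label: 'Export now', description: 'Generate the file' },
  ],
};

<PluginCard
  plugin={examplePlugin}
  onSaveSettings={savePluginSettings}
  onRunAction={runPluginAction}
  onToggleEnabled={togglePlugin}
  onRequireTrust={confirmTrust}
  onRequestDelete={openDeleteDialog}
  onRequestConfirm={openConfirmDialog}
/>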

View file

@ -0,0 +1,422 @@
import useChannelsStore from '../../store/channels.jsx';
import useSettingsStore from '../../store/settings.jsx';
import useVideoStore from '../../store/useVideoStore.jsx';
import { useDateTimeFormat, useTimeHelpers } from '../../utils/dateTimeUtils.js';
import { notifications } from '@mantine/notifications';
import React from 'react';
import {
ActionIcon,
Badge,
Box,
Button,
Card,
Center,
Flex,
Group,
Image,
Modal,
Stack,
Text,
Tooltip,
} from '@mantine/core';
import { AlertTriangle, SquareX } from 'lucide-react';
import RecordingSynopsis from '../RecordingSynopsis';
import {
deleteRecordingById,
deleteSeriesAndRule,
getPosterUrl,
getRecordingUrl,
getSeasonLabel,
getSeriesInfo,
getShowVideoUrl,
removeRecording,
runComSkip,
} from './../../utils/cards/RecordingCardUtils.js';
const RecordingCard = ({ recording, onOpenDetails, onOpenRecurring }) => {
const channels = useChannelsStore((s) => s.channels);
const env_mode = useSettingsStore((s) => s.environment.env_mode);
const showVideo = useVideoStore((s) => s.showVideo);
const fetchRecordings = useChannelsStore((s) => s.fetchRecordings);
const { toUserTime, userNow } = useTimeHelpers();
const [timeformat, dateformat] = useDateTimeFormat();
const channel = channels?.[recording.channel];
const customProps = recording.custom_properties || {};
const program = customProps.program || {};
const recordingName = program.title || 'Custom Recording';
const subTitle = program.sub_title || '';
const description = program.description || customProps.description || '';
const isRecurringRule = customProps?.rule?.type === 'recurring';
// Poster or channel logo
const posterUrl = getPosterUrl(
customProps.poster_logo_id, customProps, channel?.logo?.cache_url, env_mode);
const start = toUserTime(recording.start_time);
const end = toUserTime(recording.end_time);
const now = userNow();
const status = customProps.status;
const isTimeActive = now.isAfter(start) && now.isBefore(end);
const isInterrupted = status === 'interrupted';
const isInProgress = isTimeActive; // Show as recording by time, regardless of status glitches
const isUpcoming = now.isBefore(start);
const isSeriesGroup = Boolean(
recording._group_count && recording._group_count > 1
);
// Season/Episode display if present
const season = customProps.season ?? program?.custom_properties?.season;
const episode = customProps.episode ?? program?.custom_properties?.episode;
const onscreen =
customProps.onscreen_episode ??
program?.custom_properties?.onscreen_episode;
const seLabel = getSeasonLabel(season, episode, onscreen);
const handleWatchLive = () => {
if (!channel) return;
showVideo(getShowVideoUrl(channel, env_mode), 'live');
};
const handleWatchRecording = () => {
// Only enable if backend provides a playable file URL in custom properties
const fileUrl = getRecordingUrl(customProps, env_mode);
if (!fileUrl) return;
showVideo(fileUrl, 'vod', {
name: recordingName,
logo: { url: posterUrl },
});
};
const handleRunComskip = async (e) => {
e?.stopPropagation?.();
try {
await runComSkip(recording);
notifications.show({
title: 'Removing commercials',
message: 'Queued comskip for this recording',
color: 'blue.5',
autoClose: 2000,
});
} catch (error) {
console.error('Failed to queue comskip for recording', error);
}
};
// Cancel handling for series groups
const [cancelOpen, setCancelOpen] = React.useState(false);
const [busy, setBusy] = React.useState(false);
const handleCancelClick = (e) => {
e.stopPropagation();
if (isRecurringRule) {
onOpenRecurring?.(recording, true);
return;
}
if (isSeriesGroup) {
setCancelOpen(true);
} else {
removeRecording(recording.id);
}
};
const seriesInfo = getSeriesInfo(customProps);
const removeUpcomingOnly = async () => {
try {
setBusy(true);
await deleteRecordingById(recording.id);
} finally {
setBusy(false);
setCancelOpen(false);
try {
await fetchRecordings();
} catch (error) {
console.error('Failed to refresh recordings', error);
}
}
};
const removeSeriesAndRule = async () => {
try {
setBusy(true);
await deleteSeriesAndRule(seriesInfo);
} finally {
setBusy(false);
setCancelOpen(false);
try {
await fetchRecordings();
} catch (error) {
console.error(
'Failed to refresh recordings after series removal',
error
);
}
}
};
const handleOnMainCardClick = () => {
if (isRecurringRule) {
onOpenRecurring?.(recording, false);
} else {
onOpenDetails?.(recording);
}
}
const WatchLive = () => {
return <Button
size="xs"
variant="light"
onClick={(e) => {
e.stopPropagation();
handleWatchLive();
}}
>
Watch Live
</Button>;
}
const WatchRecording = () => {
return <Tooltip
label={
customProps.file_url || customProps.output_file_url
? 'Watch recording'
: 'Recording playback not available yet'
}
>
<Button
size="xs"
variant="default"
onClick={(e) => {
e.stopPropagation();
handleWatchRecording();
}}
disabled={
customProps.status === 'recording' || !(customProps.file_url || customProps.output_file_url)
}
>
Watch
</Button>
</Tooltip>;
}
const MainCard = (
<Card
shadow="sm"
padding="md"
radius="md"
withBorder
style={{
color: '#fff',
backgroundColor: isInterrupted ? '#2b1f20' : '#27272A',
borderColor: isInterrupted ? '#a33' : undefined,
height: '100%',
cursor: 'pointer',
}}
onClick={handleOnMainCardClick}
>
<Flex justify="space-between" align="center" pb={8}>
<Group gap={8} flex={1} miw={0}>
<Badge
color={
isInterrupted
? 'red.7'
: isInProgress
? 'red.6'
: isUpcoming
? 'yellow.6'
: 'gray.6'
}
>
{isInterrupted
? 'Interrupted'
: isInProgress
? 'Recording'
: isUpcoming
? 'Scheduled'
: 'Completed'}
</Badge>
{isInterrupted && <AlertTriangle size={16} color="#ffa94d" />}
<Stack gap={2} flex={1} miw={0}>
<Group gap={8} wrap="nowrap">
<Text fw={600} lineClamp={1} title={recordingName}>
{recordingName}
</Text>
{isSeriesGroup && (
<Badge color="teal" variant="filled">
Series
</Badge>
)}
{isRecurringRule && (
<Badge color="blue" variant="light">
Recurring
</Badge>
)}
{seLabel && !isSeriesGroup && (
<Badge color="gray" variant="light">
{seLabel}
</Badge>
)}
</Group>
</Stack>
</Group>
<Center>
<Tooltip label={isUpcoming || isInProgress ? 'Cancel' : 'Delete'}>
<ActionIcon
variant="transparent"
color="red.9"
onMouseDown={(e) => e.stopPropagation()}
onClick={handleCancelClick}
>
<SquareX size="20" />
</ActionIcon>
</Tooltip>
</Center>
</Flex>
<Flex gap="sm" align="center">
<Image
src={posterUrl}
w={64}
h={64}
fit="contain"
radius="sm"
alt={recordingName}
fallbackSrc="/logo.png"
/>
<Stack gap={6} flex={1}>
{!isSeriesGroup && subTitle && (
<Group justify="space-between">
<Text size="sm" c="dimmed">
Episode
</Text>
<Text size="sm" fw={700} title={subTitle}>
{subTitle}
</Text>
</Group>
)}
<Group justify="space-between">
<Text size="sm" c="dimmed">
Channel
</Text>
<Text size="sm">
{channel ? `${channel.channel_number} • ${channel.name}` : '—'}
</Text>
</Group>
<Group justify="space-between">
<Text size="sm" c="dimmed">
{isSeriesGroup ? 'Next recording' : 'Time'}
</Text>
<Text size="sm">
{start.format(`${dateformat}, YYYY ${timeformat}`)} - {end.format(timeformat)}
</Text>
</Group>
{!isSeriesGroup && description && (
<RecordingSynopsis
description={description}
onOpen={() => onOpenDetails?.(recording)}
/>
)}
{isInterrupted && customProps.interrupted_reason && (
<Text size="xs" c="red.4">
{customProps.interrupted_reason}
</Text>
)}
<Group justify="flex-end" gap="xs" pt={4}>
{isInProgress && <WatchLive />}
{!isUpcoming && <WatchRecording />}
{!isUpcoming &&
customProps?.status === 'completed' &&
(!customProps?.comskip ||
customProps?.comskip?.status !== 'completed') && (
<Button
size="xs"
variant="light"
color="teal"
onClick={handleRunComskip}
>
Remove commercials
</Button>
)}
</Group>
</Stack>
</Flex>
{/* If this card is a grouped upcoming series, show count */}
{recording._group_count > 1 && (
<Text
size="xs"
c="dimmed"
style={{ position: 'absolute', bottom: 6, right: 12 }}
>
Next of {recording._group_count}
</Text>
)}
</Card>
);
if (!isSeriesGroup) return MainCard;
// Stacked look for series groups: render two shadow layers behind the main card
return (
<Box style={{ position: 'relative' }}>
<Modal
opened={cancelOpen}
onClose={() => setCancelOpen(false)}
title="Cancel Series"
centered
size="md"
zIndex={9999}
>
<Stack gap="sm">
<Text>This is a series rule. What would you like to cancel?</Text>
<Group justify="flex-end">
<Button
variant="default"
loading={busy}
onClick={removeUpcomingOnly}
>
Only this upcoming
</Button>
<Button color="red" loading={busy} onClick={removeSeriesAndRule}>
Entire series + rule
</Button>
</Group>
</Stack>
</Modal>
<Box
style={{
position: 'absolute',
inset: 0,
transform: 'translate(10px, 10px) rotate(-1deg)',
borderRadius: 12,
backgroundColor: '#1f1f23',
border: '1px solid #2f2f34',
boxShadow: '0 6px 18px rgba(0,0,0,0.35)',
pointerEvents: 'none',
zIndex: 0,
}}
/>
<Box
style={{
position: 'absolute',
inset: 0,
transform: 'translate(5px, 5px) rotate(1deg)',
borderRadius: 12,
backgroundColor: '#232327',
border: '1px solid #333',
boxShadow: '0 4px 12px rgba(0,0,0,0.30)',
pointerEvents: 'none',
zIndex: 1,
}}
/>
<Box style={{ position: 'relative', zIndex: 2 }}>{MainCard}</Box>
</Box>
);
};
export default RecordingCard;

View file

@ -0,0 +1,85 @@
import {
Badge,
Box,
Card,
CardSection,
Group,
Image,
Stack,
Text,
} from '@mantine/core';
import {Calendar, Play, Star} from "lucide-react";
import React from "react";
const SeriesCard = ({ series, onClick }) => {
return (
<Card
shadow="sm"
padding="md"
radius="md"
withBorder
style={{ cursor: 'pointer', backgroundColor: '#27272A' }}
onClick={() => onClick(series)}
>
<CardSection>
<Box pos="relative" h={300}>
{series.logo?.url ? (
<Image
src={series.logo.url}
height={300}
alt={series.name}
fit="contain"
/>
) : (
<Box
style={{
backgroundColor: '#404040',
alignItems: 'center',
justifyContent: 'center',
}}
h={300}
display="flex"
>
<Play size={48} color="#666" />
</Box>
)}
{/* Add Series badge in the same position as Movie badge */}
<Badge pos="absolute" bottom={8} left={8} color="purple">
Series
</Badge>
</Box>
</CardSection>
<Stack spacing={8} mt="md">
<Text weight={500}>{series.name}</Text>
<Group spacing={16}>
{series.year && (
<Group spacing={4}>
<Calendar size={14} color="#666" />
<Text size="xs" c="dimmed">
{series.year}
</Text>
</Group>
)}
{series.rating && (
<Group spacing={4}>
<Star size={14} color="#666" />
<Text size="xs" c="dimmed">
{series.rating}
</Text>
</Group>
)}
</Group>
{series.genre && (
<Text size="xs" c="dimmed" lineClamp={1}>
{series.genre}
</Text>
)}
</Stack>
</Card>
);
};
export default SeriesCard;

View file

@ -0,0 +1,613 @@
import { useLocation } from 'react-router-dom';
import React, { useEffect, useMemo, useState } from 'react';
import useLocalStorage from '../../hooks/useLocalStorage.jsx';
import usePlaylistsStore from '../../store/playlists.jsx';
import useSettingsStore from '../../store/settings.jsx';
import {
ActionIcon,
Badge,
Box,
Card,
Center,
Flex,
Group,
Select,
Stack,
Text,
Tooltip,
} from '@mantine/core';
import {
Gauge,
HardDriveDownload,
HardDriveUpload,
SquareX,
Timer,
Users,
Video,
} from 'lucide-react';
import { toFriendlyDuration } from '../../utils/dateTimeUtils.js';
import { CustomTable, useTable } from '../tables/CustomTable/index.jsx';
import { TableHelper } from '../../helpers/index.jsx';
import logo from '../../images/logo.png';
import { formatBytes, formatSpeed } from '../../utils/networkUtils.js';
import { showNotification } from '../../utils/notificationUtils.js';
import {
connectedAccessor,
durationAccessor,
getBufferingSpeedThreshold,
getChannelStreams,
getLogoUrl,
getM3uAccountsMap,
getMatchingStreamByUrl,
getSelectedStream,
getStartDate,
getStreamOptions,
getStreamsByIds,
switchStream,
} from '../../utils/cards/StreamConnectionCardUtils.js';
// Create a separate component for each channel card to properly handle the hook
const StreamConnectionCard = ({
channel,
clients,
stopClient,
stopChannel,
logos,
channelsByUUID,
}) => {
const location = useLocation();
const [availableStreams, setAvailableStreams] = useState([]);
const [isLoadingStreams, setIsLoadingStreams] = useState(false);
const [activeStreamId, setActiveStreamId] = useState(null);
const [currentM3UProfile, setCurrentM3UProfile] = useState(null); // Add state for current M3U profile
const [data, setData] = useState([]);
const [previewedStream, setPreviewedStream] = useState(null);
// Get M3U account data from the playlists store
const m3uAccounts = usePlaylistsStore((s) => s.playlists);
// Get settings for speed threshold
const settings = useSettingsStore((s) => s.settings);
// Get Date-format from localStorage
const [dateFormatSetting] = useLocalStorage('date-format', 'mdy');
const dateFormat = dateFormatSetting === 'mdy' ? 'MM/DD' : 'DD/MM';
const [tableSize] = useLocalStorage('table-size', 'default');
// Create a map of M3U account IDs to names for quick lookup
const m3uAccountsMap = useMemo(() => {
return getM3uAccountsMap(m3uAccounts);
}, [m3uAccounts]);
// Update M3U profile information when channel data changes
useEffect(() => {
// If the channel data includes M3U profile information, update our state
if (channel.m3u_profile || channel.m3u_profile_name) {
setCurrentM3UProfile({
name:
channel.m3u_profile?.name ||
channel.m3u_profile_name ||
'Default M3U',
});
}
}, [channel.m3u_profile, channel.m3u_profile_name, channel.stream_id]);
// Fetch available streams for this channel
useEffect(() => {
const fetchStreams = async () => {
setIsLoadingStreams(true);
try {
// Get channel ID from UUID
const channelId = channelsByUUID[channel.channel_id];
if (channelId) {
const streamData = await getChannelStreams(channelId);
// Use streams in the order returned by the API without sorting
setAvailableStreams(streamData);
// If we have a channel URL, try to find the matching stream
if (channel.url && streamData.length > 0) {
// Try to find matching stream based on URL
const matchingStream = getMatchingStreamByUrl(
streamData,
channel.url
);
if (matchingStream) {
setActiveStreamId(matchingStream.id.toString());
// If the stream has M3U profile info, save it
if (matchingStream.m3u_profile) {
setCurrentM3UProfile(matchingStream.m3u_profile);
}
}
}
}
} catch (error) {
console.error('Error fetching streams:', error);
} finally {
setIsLoadingStreams(false);
}
};
fetchStreams();
}, [channel.channel_id, channel.url, channelsByUUID]);
useEffect(() => {
setData(
clients
.filter((client) => client.channel.channel_id === channel.channel_id)
.map((client) => ({
id: client.client_id,
...client,
}))
);
}, [clients, channel.channel_id]);
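// Default header renderer: plain text label for every column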
const renderHeaderCell = (header) => {
switch (header.id) {
default:
return (
<Group>
<Text size="sm" name={header.id}>
{header.column.columnDef.header}
</Text>
</Group>
);
}
};
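// Body cell renderer: only the actions column needs custom content (disconnect button)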
const renderBodyCell = ({ cell, row }) => {
switch (cell.column.id) {
case 'actions':
return (
<Box sx={{ justifyContent: 'right' }}>
<Center>
<Tooltip label="Disconnect client">
<ActionIcon
size="sm"
variant="transparent"
color="red.9"
onClick={() =>
stopClient(
row.original.channel.uuid,
row.original.client_id
)
}
>
<SquareX size="18" />
</ActionIcon>
</Tooltip>
</Center>
</Box>
);
}
};
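// Returns a callback (used with setTimeout) that re-fetches the channel's streams to confirm a switch and refresh the M3U profile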
const checkStreamsAfterChange = (streamId) => {
return async () => {
try {
const channelId = channelsByUUID[channel.channel_id];
if (channelId) {
const updatedStreamData = await getChannelStreams(channelId);
console.log('Channel streams after switch:', updatedStreamData);
// Update current stream information with fresh data
const updatedStream = getSelectedStream(updatedStreamData, streamId);
if (updatedStream?.m3u_profile) {
setCurrentM3UProfile(updatedStream.m3u_profile);
}
}
} catch (error) {
console.error('Error checking streams after switch:', error);
}
};
};
// Handle stream switching
const handleStreamChange = async (streamId) => {
try {
console.log('Switching to stream ID:', streamId);
// Find the selected stream in availableStreams for debugging
const selectedStream = getSelectedStream(availableStreams, streamId);
console.log('Selected stream details:', selectedStream);
// Make sure we're passing the correct ID to the API
const response = await switchStream(channel, streamId);
console.log('Stream switch API response:', response);
// Update the local active stream ID immediately
setActiveStreamId(streamId);
// Update M3U profile information if available in the response
if (response?.m3u_profile) {
setCurrentM3UProfile(response.m3u_profile);
} else if (selectedStream && selectedStream.m3u_profile) {
// Fallback to the profile from the selected stream
setCurrentM3UProfile(selectedStream.m3u_profile);
}
// Show detailed notification with stream name
showNotification({
title: 'Stream switching',
message: `Switching to "${selectedStream?.name}" for ${channel.name}`,
color: 'blue.5',
});
// After a short delay, fetch streams again to confirm the switch
setTimeout(checkStreamsAfterChange(streamId), 2000);
} catch (error) {
console.error('Stream switch error:', error);
showNotification({
title: 'Error switching stream',
message: error.toString(),
color: 'red.5',
});
}
};
const clientsColumns = useMemo(
() => [
{
id: 'expand',
size: 20,
},
{
header: 'IP Address',
accessorKey: 'ip_address',
},
// Updated Connected column with tooltip
{
id: 'connected',
header: 'Connected',
accessorFn: connectedAccessor(dateFormat),
cell: ({ cell }) => (
<Tooltip
label={
cell.getValue() !== 'Unknown'
? `Connected at ${cell.getValue()}`
: 'Unknown connection time'
}
>
<Text size="xs">{cell.getValue()}</Text>
</Tooltip>
),
},
// Update Duration column with tooltip showing exact seconds
{
id: 'duration',
header: 'Duration',
accessorFn: durationAccessor(),
cell: ({ cell, row }) => {
const exactDuration =
row.original.connected_since || row.original.connection_duration;
return (
<Tooltip
label={
exactDuration
? `${exactDuration.toFixed(1)} seconds`
: 'Unknown duration'
}
>
<Text size="xs">{cell.getValue()}</Text>
</Tooltip>
);
},
},
{
id: 'actions',
header: 'Actions',
size: tableSize === 'compact' ? 75 : 100,
},
],
[dateFormat, tableSize]
);
const channelClientsTable = useTable({
...TableHelper.defaultProperties,
columns: clientsColumns,
data,
allRowIds: data.map((client) => client.id),
tableCellProps: () => ({
padding: 4,
borderColor: '#444',
color: '#E0E0E0',
fontSize: '0.85rem',
}),
headerCellRenderFns: {
ip_address: renderHeaderCell,
connected: renderHeaderCell,
duration: renderHeaderCell,
actions: renderHeaderCell,
},
bodyCellRenderFns: {
actions: renderBodyCell,
},
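// Expanded row height scales with the number of streams on the row (20px base + 28px each)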
getExpandedRowHeight: (row) => {
return 20 + 28 * row.original.streams.length;
},
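// Expanded rows display the connecting client's user agent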
expandedRowRenderer: ({ row }) => {
return (
<Box p="xs">
<Group spacing="xs" align="flex-start">
<Text size="xs" fw={500} color="dimmed">
User Agent:
</Text>
<Text size="xs">{row.original.user_agent || 'Unknown'}</Text>
</Group>
</Box>
);
},
mantineExpandButtonProps: ({ row, table }) => ({
size: 'xs',
style: {
transform: row.getIsExpanded() ? 'rotate(180deg)' : 'rotate(-90deg)',
transition: 'transform 0.2s',
},
}),
displayColumnDefOptions: {
'mrt-row-expand': {
size: 15,
header: '',
},
'mrt-row-actions': {
size: 74,
},
},
});
// Get logo URL from the logos object if available
const logoUrl = getLogoUrl(channel.logo_id, logos, previewedStream);
useEffect(() => {
let isMounted = true;
// Only fetch if we have a stream_id and NO channel.name
if (!channel.name && channel.stream_id) {
getStreamsByIds(channel.stream_id).then((streams) => {
if (isMounted && streams && streams.length > 0) {
setPreviewedStream(streams[0]);
}
});
}
return () => {
isMounted = false;
};
}, [channel.name, channel.stream_id]);
const channelName =
channel.name || previewedStream?.name || 'Unnamed Channel';
const uptime = channel.uptime || 0;
const bitrates = channel.bitrates || [];
const totalBytes = channel.total_bytes || 0;
const clientCount = channel.client_count || 0;
const avgBitrate = channel.avg_bitrate || '0 Kbps';
const streamProfileName = channel.stream_profile?.name || 'Unknown Profile';
// Use currentM3UProfile if available, otherwise fall back to channel data
const m3uProfileName =
currentM3UProfile?.name ||
channel.m3u_profile?.name ||
channel.m3u_profile_name ||
'Unknown M3U Profile';
// Create select options for available streams
const streamOptions = getStreamOptions(availableStreams, m3uAccountsMap);
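// Only render connection cards on the stats page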
if (location.pathname !== '/stats') {
return <></>;
}
// Safety check - if channel doesn't have required data, don't render
if (!channel || !channel.channel_id) {
return null;
}
return (
<Card
key={channel.channel_id}
shadow="sm"
padding="md"
radius="md"
withBorder
style={{
backgroundColor: '#27272A',
}}
color="#fff"
maw={700}
w={'100%'}
>
<Stack pos="relative">
<Group justify="space-between">
<Box
style={{
alignItems: 'center',
justifyContent: 'center',
}}
w={100}
h={50}
display="flex"
>
<img
src={logoUrl || logo}
style={{
maxWidth: '100%',
maxHeight: '100%',
objectFit: 'contain',
}}
alt="channel logo"
/>
</Box>
<Group>
<Box>
<Tooltip label={getStartDate(uptime)}>
<Center>
<Timer pr={5} />
{toFriendlyDuration(uptime, 'seconds')}
</Center>
</Tooltip>
</Box>
<Center>
<Tooltip label="Stop Channel">
<ActionIcon
variant="transparent"
color="red.9"
onClick={() => stopChannel(channel.channel_id)}
>
<SquareX size="24" />
</ActionIcon>
</Tooltip>
</Center>
</Group>
</Group>
<Flex justify="space-between" align="center">
<Group>
<Text fw={500}>{channelName}</Text>
</Group>
<Tooltip label="Active Stream Profile">
<Group gap={5}>
<Video size="18" />
{streamProfileName}
</Group>
</Tooltip>
</Flex>
{/* Display M3U profile information */}
<Flex justify="flex-end" align="center" mt={-8}>
<Group gap={5}>
<HardDriveUpload size="18" />
<Tooltip label="Current M3U Profile">
<Text size="xs">{m3uProfileName}</Text>
</Tooltip>
</Group>
</Flex>
{/* Add stream selection dropdown */}
{availableStreams.length > 0 && (
<Tooltip label="Switch to another stream source">
<Select
size="xs"
label="Active Stream"
placeholder={
isLoadingStreams ? 'Loading streams...' : 'Select stream'
}
data={streamOptions}
value={activeStreamId || channel.stream_id?.toString() || null}
onChange={handleStreamChange}
disabled={isLoadingStreams}
mt={8}
/>
</Tooltip>
)}
{/* Add stream information badges */}
<Group gap="xs" mt="xs">
{channel.resolution && (
<Tooltip label="Video resolution">
<Badge size="sm" variant="light" color="red">
{channel.resolution}
</Badge>
</Tooltip>
)}
{channel.source_fps && (
<Tooltip label="Source frames per second">
<Badge size="sm" variant="light" color="orange">
{channel.source_fps} FPS
</Badge>
</Tooltip>
)}
{channel.video_codec && (
<Tooltip label="Video codec">
<Badge size="sm" variant="light" color="blue">
{channel.video_codec.toUpperCase()}
</Badge>
</Tooltip>
)}
{channel.audio_codec && (
<Tooltip label="Audio codec">
<Badge size="sm" variant="light" color="pink">
{channel.audio_codec.toUpperCase()}
</Badge>
</Tooltip>
)}
{channel.audio_channels && (
<Tooltip label="Audio channel configuration">
<Badge size="sm" variant="light" color="pink">
{channel.audio_channels}
</Badge>
</Tooltip>
)}
{channel.stream_type && (
<Tooltip label="Stream type">
<Badge size="sm" variant="light" color="cyan">
{channel.stream_type.toUpperCase()}
</Badge>
</Tooltip>
)}
{channel.ffmpeg_speed && (
<Tooltip
label={`Current Speed: ${parseFloat(channel.ffmpeg_speed).toFixed(2)}x`}
>
<Badge
size="sm"
variant="light"
color={
parseFloat(channel.ffmpeg_speed) >=
getBufferingSpeedThreshold(settings['proxy_settings'])
? 'green'
: 'red'
}
>
{parseFloat(channel.ffmpeg_speed).toFixed(2)}x
</Badge>
</Tooltip>
)}
</Group>
<Group justify="space-between">
<Group gap={4}>
<Tooltip
label={`Current bitrate: ${formatSpeed(bitrates.at(-1) || 0)}`}
>
<Group gap={4} style={{ cursor: 'help' }}>
<Gauge pr={5} size="22" />
<Text size="sm">{formatSpeed(bitrates.at(-1) || 0)}</Text>
</Group>
</Tooltip>
</Group>
<Tooltip label={`Average bitrate: ${avgBitrate}`}>
<Text size="sm" style={{ cursor: 'help' }}>
Avg: {avgBitrate}
</Text>
</Tooltip>
<Group gap={4}>
<Tooltip label={`Total transferred: ${formatBytes(totalBytes)}`}>
<Group gap={4} style={{ cursor: 'help' }}>
<HardDriveDownload size="18" />
<Text size="sm">{formatBytes(totalBytes)}</Text>
</Group>
</Tooltip>
</Group>
<Group gap={5}>
<Tooltip
label={`${clientCount} active client${clientCount !== 1 ? 's' : ''}`}
>
<Group gap={4} style={{ cursor: 'help' }}>
<Users size="18" />
<Text size="sm">{clientCount}</Text>
</Group>
</Tooltip>
</Group>
</Group>
<CustomTable table={channelClientsTable} />
</Stack>
</Card>
);
};
export default StreamConnectionCard;


@ -0,0 +1,143 @@
import {
ActionIcon,
Badge,
Box,
Card,
CardSection,
Group,
Image,
Stack,
Text,
} from '@mantine/core';
import { Calendar, Clock, Play, Star } from 'lucide-react';
import React from 'react';
import {
formatDuration,
getSeasonLabel,
} from '../../utils/cards/VODCardUtils.js';
const VODCard = ({ vod, onClick }) => {
const isEpisode = vod.type === 'episode';
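// Episodes show the series name above the season/episode label; movies just show their own name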
const getDisplayTitle = () => {
if (isEpisode && vod.series) {
return (
<Stack spacing={4}>
<Text size="sm" c="dimmed">
{vod.series.name}
</Text>
<Text weight={500}>
{getSeasonLabel(vod)} - {vod.name}
</Text>
</Stack>
);
}
return <Text weight={500}>{vod.name}</Text>;
};
const handleCardClick = async () => {
// Just pass the basic vod info to the parent handler
onClick(vod);
};
return (
<Card
shadow="sm"
padding="md"
radius="md"
withBorder
style={{ cursor: 'pointer', backgroundColor: '#27272A' }}
onClick={handleCardClick}
>
<CardSection>
<Box pos="relative" h={300}>
{vod.logo?.url ? (
<Image
src={vod.logo.url}
height={300}
alt={vod.name}
fit="contain"
/>
) : (
<Box
style={{
backgroundColor: '#404040',
alignItems: 'center',
justifyContent: 'center',
}}
h={300}
display="flex"
>
<Play size={48} color="#666" />
</Box>
)}
<ActionIcon
style={{
backgroundColor: 'rgba(0,0,0,0.7)',
}}
pos="absolute"
top={8}
right={8}
onClick={(e) => {
e.stopPropagation();
onClick(vod);
}}
>
<Play size={16} color="white" />
</ActionIcon>
<Badge
pos="absolute"
bottom={8}
left={8}
color={isEpisode ? 'blue' : 'green'}
>
{isEpisode ? 'Episode' : 'Movie'}
</Badge>
</Box>
</CardSection>
<Stack spacing={8} mt="md">
{getDisplayTitle()}
<Group spacing={16}>
{vod.year && (
<Group spacing={4}>
<Calendar size={14} color="#666" />
<Text size="xs" c="dimmed">
{vod.year}
</Text>
</Group>
)}
{vod.duration_secs && (
<Group spacing={4}>
<Clock size={14} color="#666" />
<Text size="xs" c="dimmed">
{formatDuration(vod.duration_secs)}
</Text>
</Group>
)}
{vod.rating && (
<Group spacing={4}>
<Star size={14} color="#666" />
<Text size="xs" c="dimmed">
{vod.rating}
</Text>
</Group>
)}
</Group>
{vod.genre && (
<Text size="xs" c="dimmed" lineClamp={1}>
{vod.genre}
</Text>
)}
</Stack>
</Card>
);
};
export default VODCard;


@ -0,0 +1,422 @@
// Format duration for content length
import useLocalStorage from '../../hooks/useLocalStorage.jsx';
import React, { useCallback, useEffect, useState } from 'react';
import logo from '../../images/logo.png';
import { ActionIcon, Badge, Box, Card, Center, Flex, Group, Progress, Stack, Text, Tooltip } from '@mantine/core';
import { convertToSec, fromNow, toFriendlyDuration } from '../../utils/dateTimeUtils.js';
import { ChevronDown, HardDriveUpload, SquareX, Timer, Video } from 'lucide-react';
import {
calculateConnectionDuration,
calculateConnectionStartTime,
calculateProgress,
formatDuration,
formatTime,
getEpisodeDisplayTitle,
getEpisodeSubtitle,
getMovieDisplayTitle,
getMovieSubtitle,
} from '../../utils/cards/VodConnectionCardUtils.js';
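// Expanded client details: user agent, client ID, connection time, watch duration, seek position and bytes sent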
const ClientDetails = ({ connection, connectionStartTime }) => {
return (
<Stack
gap="xs"
style={{
backgroundColor: 'rgba(255, 255, 255, 0.02)',
}}
p={12}
bdrs={6}
bd={'1px solid rgba(255, 255, 255, 0.08)'}
>
{connection.user_agent &&
connection.user_agent !== 'Unknown' && (
<Group gap={8} align="flex-start">
<Text size="xs" fw={500} c="dimmed" miw={80}>
User Agent:
</Text>
<Text size="xs" ff={'monospace'} flex={1}>
{connection.user_agent.length > 100
? `${connection.user_agent.substring(0, 100)}...`
: connection.user_agent}
</Text>
</Group>
)}
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Client ID:
</Text>
<Text size="xs" ff={'monospace'}>
{connection.client_id || 'Unknown'}
</Text>
</Group>
{connection.connected_at && (
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Connected:
</Text>
<Text size="xs">{connectionStartTime}</Text>
</Group>
)}
{connection.duration && connection.duration > 0 && (
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Watch Duration:
</Text>
<Text size="xs">
{toFriendlyDuration(connection.duration, 'seconds')}
</Text>
</Group>
)}
{/* Seek/Position Information */}
{(connection.last_seek_percentage > 0 ||
connection.last_seek_byte > 0) && (
<>
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Last Seek:
</Text>
<Text size="xs">
{connection.last_seek_percentage?.toFixed(1)}%
{connection.total_content_size > 0 && (
<span style={{ color: 'var(--mantine-color-dimmed)' }}>
{' '}
({Math.round(connection.last_seek_byte / (1024 * 1024))}
MB /{' '}
{Math.round(
connection.total_content_size / (1024 * 1024)
)}
MB)
</span>
)}
</Text>
</Group>
{Number(connection.last_seek_timestamp) > 0 && (
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Seek Time:
</Text>
<Text size="xs">
{fromNow(convertToSec(Number(connection.last_seek_timestamp)))}
</Text>
</Group>
)}
</>
)}
{connection.bytes_sent > 0 && (
<Group gap={8}>
<Text size="xs" fw={500} c="dimmed" miw={80}>
Data Sent:
</Text>
<Text size="xs">
{(connection.bytes_sent / (1024 * 1024)).toFixed(1)} MB
</Text>
</Group>
)}
</Stack>
);
};
// Create a VOD Card component similar to ChannelCard
const VodConnectionCard = ({ vodContent, stopVODClient }) => {
const [dateFormatSetting] = useLocalStorage('date-format', 'mdy');
const dateFormat = dateFormatSetting === 'mdy' ? 'MM/DD' : 'DD/MM';
const [isClientExpanded, setIsClientExpanded] = useState(false);
const [, setUpdateTrigger] = useState(0); // Force re-renders for progress updates
// Get metadata from the VOD content
const metadata = vodContent.content_metadata || {};
const contentType = vodContent.content_type;
const isMovie = contentType === 'movie';
const isEpisode = contentType === 'episode';
// Set up timer to update progress every second
useEffect(() => {
const interval = setInterval(() => {
setUpdateTrigger((prev) => prev + 1);
}, 1000);
return () => clearInterval(interval);
}, []);
// Get the individual connection (since we now separate cards per connection)
const connection =
vodContent.individual_connection ||
(vodContent.connections && vodContent.connections[0]);
// Get poster/logo URL
const posterUrl = metadata.logo_url || logo;
// Get display title
const getDisplayTitle = () => {
if (isMovie) {
return getMovieDisplayTitle(vodContent);
} else if (isEpisode) {
return getEpisodeDisplayTitle(metadata);
}
return vodContent.content_name;
};
// Get subtitle info
const getSubtitle = () => {
if (isMovie) {
return getMovieSubtitle(metadata);
} else if (isEpisode) {
return getEpisodeSubtitle(metadata);
}
return [];
};
// Render subtitle
const renderSubtitle = () => {
const subtitleParts = getSubtitle();
if (subtitleParts.length === 0) return null;
return (
<Text size="sm" c="dimmed">
{subtitleParts.join(' • ')}
</Text>
);
};
// Calculate progress percentage and time
const getProgressInfo = useCallback(() => {
return calculateProgress(connection, metadata.duration_secs);
}, [connection, metadata.duration_secs]);
// Calculate duration for connection
const getConnectionDuration = useCallback((connection) => {
return calculateConnectionDuration(connection);
}, []);
// Get connection start time for tooltip
const getConnectionStartTime = useCallback(
(connection) => {
return calculateConnectionStartTime(connection, dateFormat);
},
[dateFormat]
);
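// Card layout: poster and timer header, title, M3U profile info, metadata badges, progress bar, then collapsible client details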
return (
<Card
shadow="sm"
padding="md"
radius="md"
withBorder
style={{
backgroundColor: '#27272A',
}}
color='#FFF'
maw={700}
w={'100%'}
>
<Stack pos='relative' >
{/* Header with poster and basic info */}
<Group justify="space-between">
<Box h={100} display='flex'
style={{
alignItems: 'center',
justifyContent: 'center',
}}
>
<img
src={posterUrl}
style={{
maxWidth: '100%',
maxHeight: '100%',
objectFit: 'contain',
}}
alt="content poster"
/>
</Box>
<Group>
{connection && (
<Tooltip
label={`Connected at ${getConnectionStartTime(connection)}`}
>
<Center>
<Timer pr={5} />
{getConnectionDuration(connection)}
</Center>
</Tooltip>
)}
{connection && stopVODClient && (
<Center>
<Tooltip label="Stop VOD Connection">
<ActionIcon
variant="transparent"
color="red.9"
onClick={() => stopVODClient(connection.client_id)}
>
<SquareX size="24" />
</ActionIcon>
</Tooltip>
</Center>
)}
</Group>
</Group>
{/* Title and type */}
<Flex justify="space-between" align="center">
<Group>
<Text fw={500}>{getDisplayTitle()}</Text>
</Group>
<Tooltip label="Content Type">
<Group gap={5}>
<Video size="18" />
{isMovie ? 'Movie' : 'TV Episode'}
</Group>
</Tooltip>
</Flex>
{/* Display M3U profile information - matching channel card style */}
{connection &&
connection.m3u_profile &&
(connection.m3u_profile.profile_name ||
connection.m3u_profile.account_name) && (
<Flex justify="flex-end" align="flex-start" mt={-8}>
<Group gap={5} align="flex-start">
<HardDriveUpload size="18" mt={2} />
<Stack gap={0}>
<Tooltip label="M3U Account">
<Text size="xs" fw={500}>
{connection.m3u_profile.account_name || 'Unknown Account'}
</Text>
</Tooltip>
<Tooltip label="M3U Profile">
<Text size="xs" c="dimmed">
{connection.m3u_profile.profile_name || 'Default Profile'}
</Text>
</Tooltip>
</Stack>
</Group>
</Flex>
)}
{/* Subtitle/episode info */}
{getSubtitle().length > 0 && (
<Flex justify="flex-start" align="center" mt={-12}>
{renderSubtitle()}
</Flex>
)}
{/* Content information badges - streamlined to avoid duplication */}
<Group gap="xs" mt={-4}>
{metadata.year && (
<Tooltip label="Release Year">
<Badge size="sm" variant="light" color="orange">
{metadata.year}
</Badge>
</Tooltip>
)}
{metadata.duration_secs && (
<Tooltip label="Content Duration">
<Badge size="sm" variant="light" color="blue">
{formatDuration(metadata.duration_secs)}
</Badge>
</Tooltip>
)}
{metadata.rating && (
<Tooltip label="Critic Rating (out of 10)">
<Badge size="sm" variant="light" color="yellow">
{parseFloat(metadata.rating).toFixed(1)}/10
</Badge>
</Tooltip>
)}
</Group>
{/* Progress bar - show current position in content */}
{connection &&
metadata.duration_secs &&
(() => {
const { totalTime, currentTime, percentage } = getProgressInfo();
return totalTime > 0 ? (
<Stack gap="xs" mt="sm">
<Group justify="space-between" align="center">
<Text size="xs" fw={500} c="dimmed">
Progress
</Text>
<Text size="xs" c="dimmed">
{formatTime(currentTime)} /{' '}
{formatTime(totalTime)}
</Text>
</Group>
<Progress
value={percentage}
size="sm"
color="blue"
style={{
backgroundColor: 'rgba(255, 255, 255, 0.1)',
}}
/>
<Text size="xs" c="dimmed" ta="center">
{percentage.toFixed(1)}% watched
</Text>
</Stack>
) : null;
})()}
{/* Client information section - collapsible like channel cards */}
{connection && (
<Stack gap="xs" mt="xs">
{/* Client summary header - always visible */}
<Group
justify="space-between"
align="center"
style={{
cursor: 'pointer',
backgroundColor: 'rgba(255, 255, 255, 0.05)',
}}
p={'8px 12px'}
bdrs={6}
bd={'1px solid rgba(255, 255, 255, 0.1)'}
onClick={() => setIsClientExpanded(!isClientExpanded)}
>
<Group gap={8}>
<Text size="sm" fw={500} color="dimmed">
Client:
</Text>
<Text size="sm" ff={'monospace'}>
{connection.client_ip || 'Unknown IP'}
</Text>
</Group>
<Group gap={8}>
<Text size="xs" color="dimmed">
{isClientExpanded ? 'Hide Details' : 'Show Details'}
</Text>
<ChevronDown
size={16}
style={{
transform: isClientExpanded
? 'rotate(0deg)'
: 'rotate(180deg)',
transition: 'transform 0.2s',
}}
/>
</Group>
</Group>
{/* Expanded client details */}
{isClientExpanded && (
<ClientDetails
connection={connection}
connectionStartTime={getConnectionStartTime(connection)} />
)}
</Stack>
)}
</Stack>
</Card>
);
};
export default VodConnectionCard;

View file

@ -1,5 +1,6 @@
import React, { useState, useEffect, useRef, useMemo } from 'react';
import { useFormik } from 'formik';
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';
import useChannelsStore from '../../store/channels';
import API from '../../api';
@ -42,6 +43,11 @@ import useEPGsStore from '../../store/epgs';
import { FixedSizeList as List } from 'react-window';
import { USER_LEVELS, USER_LEVEL_LABELS } from '../../constants';
const validationSchema = Yup.object({
name: Yup.string().required('Name is required'),
channel_group_id: Yup.string().required('Channel group is required'),
});
const ChannelForm = ({ channel = null, isOpen, onClose }) => {
const theme = useMantineTheme();
@ -100,7 +106,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
const handleLogoSuccess = ({ logo }) => {
if (logo && logo.id) {
formik.setFieldValue('logo_id', logo.id);
setValue('logo_id', logo.id);
ensureLogosLoaded(); // Refresh logos
}
setLogoModalOpen(false);
@ -124,7 +130,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
if (response.matched) {
// Update the form with the new EPG data
if (response.channel && response.channel.epg_data_id) {
formik.setFieldValue('epg_data_id', response.channel.epg_data_id);
setValue('epg_data_id', response.channel.epg_data_id);
}
notifications.show({
@ -152,7 +158,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
};
const handleSetNameFromEpg = () => {
const epgDataId = formik.values.epg_data_id;
const epgDataId = watch('epg_data_id');
if (!epgDataId) {
notifications.show({
title: 'No EPG Selected',
@ -164,7 +170,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
const tvg = tvgsById[epgDataId];
if (tvg && tvg.name) {
formik.setFieldValue('name', tvg.name);
setValue('name', tvg.name);
notifications.show({
title: 'Success',
message: `Channel name set to "${tvg.name}"`,
@ -180,7 +186,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
};
const handleSetLogoFromEpg = async () => {
const epgDataId = formik.values.epg_data_id;
const epgDataId = watch('epg_data_id');
if (!epgDataId) {
notifications.show({
title: 'No EPG Selected',
@ -207,7 +213,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
);
if (matchingLogo) {
formik.setFieldValue('logo_id', matchingLogo.id);
setValue('logo_id', matchingLogo.id);
notifications.show({
title: 'Success',
message: `Logo set to "${matchingLogo.name}"`,
@ -231,7 +237,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
// Create logo by calling the Logo API directly
const newLogo = await API.createLogo(newLogoData);
formik.setFieldValue('logo_id', newLogo.id);
setValue('logo_id', newLogo.id);
notifications.update({
id: 'creating-logo',
@ -264,7 +270,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
};
const handleSetTvgIdFromEpg = () => {
const epgDataId = formik.values.epg_data_id;
const epgDataId = watch('epg_data_id');
if (!epgDataId) {
notifications.show({
title: 'No EPG Selected',
@ -276,7 +282,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
const tvg = tvgsById[epgDataId];
if (tvg && tvg.tvg_id) {
formik.setFieldValue('tvg_id', tvg.tvg_id);
setValue('tvg_id', tvg.tvg_id);
notifications.show({
title: 'Success',
message: `TVG-ID set to "${tvg.tvg_id}"`,
@ -291,130 +297,130 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
}
};
const formik = useFormik({
initialValues: {
name: '',
channel_number: '', // Change from 0 to empty string for consistency
channel_group_id:
Object.keys(channelGroups).length > 0
const defaultValues = useMemo(
() => ({
name: channel?.name || '',
channel_number:
channel?.channel_number !== null &&
channel?.channel_number !== undefined
? channel.channel_number
: '',
channel_group_id: channel?.channel_group_id
? `${channel.channel_group_id}`
: Object.keys(channelGroups).length > 0
? Object.keys(channelGroups)[0]
: '',
stream_profile_id: '0',
tvg_id: '',
tvc_guide_stationid: '',
epg_data_id: '',
logo_id: '',
user_level: '0',
},
validationSchema: Yup.object({
name: Yup.string().required('Name is required'),
channel_group_id: Yup.string().required('Channel group is required'),
stream_profile_id: channel?.stream_profile_id
? `${channel.stream_profile_id}`
: '0',
tvg_id: channel?.tvg_id || '',
tvc_guide_stationid: channel?.tvc_guide_stationid || '',
epg_data_id: channel?.epg_data_id ?? '',
logo_id: channel?.logo_id ? `${channel.logo_id}` : '',
user_level: `${channel?.user_level ?? '0'}`,
}),
onSubmit: async (values, { setSubmitting }) => {
let response;
[channel, channelGroups]
);
try {
const formattedValues = { ...values };
const {
register,
handleSubmit,
setValue,
watch,
reset,
formState: { errors, isSubmitting },
} = useForm({
defaultValues,
resolver: yupResolver(validationSchema),
});
// Convert empty or "0" stream_profile_id to null for the API
if (
!formattedValues.stream_profile_id ||
formattedValues.stream_profile_id === '0'
) {
formattedValues.stream_profile_id = null;
}
const onSubmit = async (values) => {
let response;
// Ensure tvg_id is properly included (no empty strings)
formattedValues.tvg_id = formattedValues.tvg_id || null;
try {
const formattedValues = { ...values };
// Ensure tvc_guide_stationid is properly included (no empty strings)
formattedValues.tvc_guide_stationid =
formattedValues.tvc_guide_stationid || null;
// Convert empty or "0" stream_profile_id to null for the API
if (
!formattedValues.stream_profile_id ||
formattedValues.stream_profile_id === '0'
) {
formattedValues.stream_profile_id = null;
}
if (channel) {
// If there's an EPG to set, use our enhanced endpoint
if (values.epg_data_id !== (channel.epg_data_id ?? '')) {
// Use the special endpoint to set EPG and trigger refresh
const epgResponse = await API.setChannelEPG(
channel.id,
values.epg_data_id
);
// Ensure tvg_id is properly included (no empty strings)
formattedValues.tvg_id = formattedValues.tvg_id || null;
// Remove epg_data_id from values since we've handled it separately
const { epg_data_id, ...otherValues } = formattedValues;
// Ensure tvc_guide_stationid is properly included (no empty strings)
formattedValues.tvc_guide_stationid =
formattedValues.tvc_guide_stationid || null;
// Update other channel fields if needed
if (Object.keys(otherValues).length > 0) {
response = await API.updateChannel({
id: channel.id,
...otherValues,
streams: channelStreams.map((stream) => stream.id),
});
}
} else {
// No EPG change, regular update
if (channel) {
// If there's an EPG to set, use our enhanced endpoint
if (values.epg_data_id !== (channel.epg_data_id ?? '')) {
// Use the special endpoint to set EPG and trigger refresh
const epgResponse = await API.setChannelEPG(
channel.id,
values.epg_data_id
);
// Remove epg_data_id from values since we've handled it separately
const { epg_data_id, ...otherValues } = formattedValues;
// Update other channel fields if needed
if (Object.keys(otherValues).length > 0) {
response = await API.updateChannel({
id: channel.id,
...formattedValues,
...otherValues,
streams: channelStreams.map((stream) => stream.id),
});
}
} else {
// New channel creation - use the standard method
response = await API.addChannel({
// No EPG change, regular update
response = await API.updateChannel({
id: channel.id,
...formattedValues,
streams: channelStreams.map((stream) => stream.id),
});
}
} catch (error) {
console.error('Error saving channel:', error);
} else {
// New channel creation - use the standard method
response = await API.addChannel({
...formattedValues,
streams: channelStreams.map((stream) => stream.id),
});
}
} catch (error) {
console.error('Error saving channel:', error);
}
formik.resetForm();
API.requeryChannels();
reset();
API.requeryChannels();
// Refresh channel profiles to update the membership information
useChannelsStore.getState().fetchChannelProfiles();
// Refresh channel profiles to update the membership information
useChannelsStore.getState().fetchChannelProfiles();
setSubmitting(false);
setTvgFilter('');
setLogoFilter('');
onClose();
},
});
setTvgFilter('');
setLogoFilter('');
onClose();
};
useEffect(() => {
if (channel) {
if (channel.epg_data_id) {
const epgSource = epgs[tvgsById[channel.epg_data_id]?.epg_source];
setSelectedEPG(epgSource ? `${epgSource.id}` : '');
}
reset(defaultValues);
setChannelStreams(channel?.streams || []);
formik.setValues({
name: channel.name || '',
channel_number:
channel.channel_number !== null ? channel.channel_number : '',
channel_group_id: channel.channel_group_id
? `${channel.channel_group_id}`
: '',
stream_profile_id: channel.stream_profile_id
? `${channel.stream_profile_id}`
: '0',
tvg_id: channel.tvg_id || '',
tvc_guide_stationid: channel.tvc_guide_stationid || '',
epg_data_id: channel.epg_data_id ?? '',
logo_id: channel.logo_id ? `${channel.logo_id}` : '',
user_level: `${channel.user_level}`,
});
setChannelStreams(channel.streams || []);
if (channel?.epg_data_id) {
const epgSource = epgs[tvgsById[channel.epg_data_id]?.epg_source];
setSelectedEPG(epgSource ? `${epgSource.id}` : '');
} else {
formik.resetForm();
setSelectedEPG('');
}
if (!channel) {
setTvgFilter('');
setLogoFilter('');
setChannelStreams([]); // Ensure streams are cleared when adding a new channel
}
}, [channel, tvgsById, channelGroups]);
}, [defaultValues, channel, reset, epgs, tvgsById]);
// Memoize logo options to prevent infinite re-renders during background loading
const logoOptions = useMemo(() => {
@ -431,10 +437,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
// If a new group was created and returned, update the form with it
if (newGroup && newGroup.id) {
// Preserve all current form values while updating just the channel_group_id
formik.setValues({
...formik.values,
channel_group_id: `${newGroup.id}`,
});
setValue('channel_group_id', `${newGroup.id}`);
}
};
@ -472,7 +475,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
}
styles={{ content: { '--mantine-color-body': '#27272A' } }}
>
<form onSubmit={formik.handleSubmit}>
<form onSubmit={handleSubmit(onSubmit)}>
<Group justify="space-between" align="top">
<Stack gap="5" style={{ flex: 1 }}>
<TextInput
@ -481,7 +484,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
label={
<Group gap="xs">
<span>Channel Name</span>
{formik.values.epg_data_id && (
{watch('epg_data_id') && (
<Button
size="xs"
variant="transparent"
@ -495,9 +498,8 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
)}
</Group>
}
value={formik.values.name}
onChange={formik.handleChange}
error={formik.errors.name ? formik.touched.name : ''}
{...register('name')}
error={errors.name?.message}
size="xs"
style={{ flex: 1 }}
/>
@ -516,8 +518,8 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
label="Channel Group"
readOnly
value={
channelGroups[formik.values.channel_group_id]
? channelGroups[formik.values.channel_group_id].name
channelGroups[watch('channel_group_id')]
? channelGroups[watch('channel_group_id')].name
: ''
}
onClick={() => setGroupPopoverOpened(true)}
@ -557,7 +559,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
>
<UnstyledButton
onClick={() => {
formik.setFieldValue(
setValue(
'channel_group_id',
filteredGroups[index].id
);
@ -587,16 +589,12 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
id="channel_group_id"
name="channel_group_id"
label="Channel Group"
value={formik.values.channel_group_id}
value={watch('channel_group_id')}
searchable
onChange={(value) => {
formik.setFieldValue('channel_group_id', value); // Update Formik's state with the new value
setValue('channel_group_id', value);
}}
error={
formik.errors.channel_group_id
? formik.touched.channel_group_id
: ''
}
error={errors.channel_group_id?.message}
data={Object.values(channelGroups).map((option, index) => ({
value: `${option.id}`,
label: option.name,
@ -622,15 +620,11 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
id="stream_profile_id"
label="Stream Profile"
name="stream_profile_id"
value={formik.values.stream_profile_id}
value={watch('stream_profile_id')}
onChange={(value) => {
formik.setFieldValue('stream_profile_id', value); // Update Formik's state with the new value
setValue('stream_profile_id', value);
}}
error={
formik.errors.stream_profile_id
? formik.touched.stream_profile_id
: ''
}
error={errors.stream_profile_id?.message}
data={[{ value: '0', label: '(use default)' }].concat(
streamProfiles.map((option) => ({
value: `${option.id}`,
@ -648,13 +642,11 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
value: `${value}`,
};
})}
value={formik.values.user_level}
value={watch('user_level')}
onChange={(value) => {
formik.setFieldValue('user_level', value);
setValue('user_level', value);
}}
error={
formik.errors.user_level ? formik.touched.user_level : ''
}
error={errors.user_level?.message}
/>
</Stack>
@ -684,7 +676,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
label={
<Group gap="xs">
<span>Logo</span>
{formik.values.epg_data_id && (
{watch('epg_data_id') && (
<Button
size="xs"
variant="transparent"
@ -699,9 +691,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
</Group>
}
readOnly
value={
channelLogos[formik.values.logo_id]?.name || 'Default'
}
value={channelLogos[watch('logo_id')]?.name || 'Default'}
onClick={() => {
console.log(
'Logo input clicked, setting popover opened to true'
@ -756,10 +746,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
borderRadius: '4px',
}}
onClick={() => {
formik.setFieldValue(
'logo_id',
filteredLogos[index].id
);
setValue('logo_id', filteredLogos[index].id);
setLogoPopoverOpened(false);
}}
onMouseEnter={(e) => {
@ -810,7 +797,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
<Stack gap="xs" align="center">
<LazyLogo
logoId={formik.values.logo_id}
logoId={watch('logo_id')}
alt="channel logo"
style={{ height: 40 }}
/>
@ -833,19 +820,12 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
id="channel_number"
name="channel_number"
label="Channel # (blank to auto-assign)"
value={formik.values.channel_number}
onChange={(value) =>
formik.setFieldValue('channel_number', value)
}
error={
formik.errors.channel_number
? formik.touched.channel_number
: ''
}
value={watch('channel_number')}
onChange={(value) => setValue('channel_number', value)}
error={errors.channel_number?.message}
size="xs"
step={0.1} // Add step prop to allow decimal inputs
precision={1} // Specify decimal precision
removeTrailingZeros // Optional: remove trailing zeros for cleaner display
/>
<TextInput
@ -854,7 +834,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
label={
<Group gap="xs">
<span>TVG-ID</span>
{formik.values.epg_data_id && (
{watch('epg_data_id') && (
<Button
size="xs"
variant="transparent"
@ -868,9 +848,8 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
)}
</Group>
}
value={formik.values.tvg_id}
onChange={formik.handleChange}
error={formik.errors.tvg_id ? formik.touched.tvg_id : ''}
{...register('tvg_id')}
error={errors.tvg_id?.message}
size="xs"
/>
@ -878,13 +857,8 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
id="tvc_guide_stationid"
name="tvc_guide_stationid"
label="Gracenote StationId"
value={formik.values.tvc_guide_stationid}
onChange={formik.handleChange}
error={
formik.errors.tvc_guide_stationid
? formik.touched.tvc_guide_stationid
: ''
}
{...register('tvc_guide_stationid')}
error={errors.tvc_guide_stationid?.message}
size="xs"
/>
@ -904,9 +878,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
<Button
size="xs"
variant="transparent"
onClick={() =>
formik.setFieldValue('epg_data_id', null)
}
onClick={() => setValue('epg_data_id', null)}
>
Use Dummy
</Button>
@ -933,7 +905,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
}
readOnly
value={(() => {
const tvg = tvgsById[formik.values.epg_data_id];
const tvg = tvgsById[watch('epg_data_id')];
const epgSource = tvg && epgs[tvg.epg_source];
const tvgLabel = tvg ? tvg.name || tvg.id : '';
if (epgSource && tvgLabel) {
@ -953,7 +925,7 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
color="white"
onClick={(e) => {
e.stopPropagation();
formik.setFieldValue('epg_data_id', null);
setValue('epg_data_id', null);
}}
title="Create new group"
size="small"
@ -1012,12 +984,9 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
size="xs"
onClick={() => {
if (filteredTvgs[index].id == '0') {
formik.setFieldValue('epg_data_id', null);
setValue('epg_data_id', null);
} else {
formik.setFieldValue(
'epg_data_id',
filteredTvgs[index].id
);
setValue('epg_data_id', filteredTvgs[index].id);
// Also update selectedEPG to match the EPG source of the selected tvg
if (filteredTvgs[index].epg_source) {
setSelectedEPG(
@ -1047,11 +1016,11 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
<Button
type="submit"
variant="default"
disabled={formik.isSubmitting}
loading={formik.isSubmitting}
disabled={isSubmitting}
loading={isSubmitting}
loaderProps={{ type: 'dots' }}
>
{formik.isSubmitting ? 'Saving...' : 'Submit'}
{isSubmitting ? 'Saving...' : 'Submit'}
</Button>
</Flex>
</form>


@ -77,6 +77,9 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
const [confirmSetLogosOpen, setConfirmSetLogosOpen] = useState(false);
const [confirmSetTvgIdsOpen, setConfirmSetTvgIdsOpen] = useState(false);
const [confirmBatchUpdateOpen, setConfirmBatchUpdateOpen] = useState(false);
const [settingNames, setSettingNames] = useState(false);
const [settingLogos, setSettingLogos] = useState(false);
const [settingTvgIds, setSettingTvgIds] = useState(false);
const isWarningSuppressed = useWarningsStore((s) => s.isWarningSuppressed);
const suppressWarning = useWarningsStore((s) => s.suppressWarning);
@ -135,8 +138,10 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
if (values.stream_profile_id === '0') {
changes.push(`• Stream Profile: Use Default`);
} else {
const profileName =
streamProfiles[values.stream_profile_id]?.name || 'Selected Profile';
const profile = streamProfiles.find(
(p) => `${p.id}` === `${values.stream_profile_id}`
);
const profileName = profile?.name || 'Selected Profile';
changes.push(`• Stream Profile: ${profileName}`);
}
}
@ -326,6 +331,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
};
const executeSetNamesFromEpg = async () => {
setSettingNames(true);
try {
// Start the backend task
await API.setChannelNamesFromEpg(channelIds);
@ -339,7 +345,6 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
});
// Close the modal since the task is now running in background
setConfirmSetNamesOpen(false);
onClose();
} catch (error) {
console.error('Failed to start EPG name setting task:', error);
@ -348,6 +353,8 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
message: 'Failed to start EPG name setting task.',
color: 'red',
});
} finally {
setSettingNames(false);
setConfirmSetNamesOpen(false);
}
};
@ -371,6 +378,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
};
const executeSetLogosFromEpg = async () => {
setSettingLogos(true);
try {
// Start the backend task
await API.setChannelLogosFromEpg(channelIds);
@ -384,7 +392,6 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
});
// Close the modal since the task is now running in background
setConfirmSetLogosOpen(false);
onClose();
} catch (error) {
console.error('Failed to start EPG logo setting task:', error);
@ -393,6 +400,8 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
message: 'Failed to start EPG logo setting task.',
color: 'red',
});
} finally {
setSettingLogos(false);
setConfirmSetLogosOpen(false);
}
};
@ -416,6 +425,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
};
const executeSetTvgIdsFromEpg = async () => {
setSettingTvgIds(true);
try {
// Start the backend task
await API.setChannelTvgIdsFromEpg(channelIds);
@ -429,7 +439,6 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
});
// Close the modal since the task is now running in background
setConfirmSetTvgIdsOpen(false);
onClose();
} catch (error) {
console.error('Failed to start EPG TVG-ID setting task:', error);
@ -438,6 +447,8 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
message: 'Failed to start EPG TVG-ID setting task.',
color: 'red',
});
} finally {
setSettingTvgIds(false);
setConfirmSetTvgIdsOpen(false);
}
};
@ -945,6 +956,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
opened={confirmSetNamesOpen}
onClose={() => setConfirmSetNamesOpen(false)}
onConfirm={executeSetNamesFromEpg}
loading={settingNames}
title="Confirm Set Names from EPG"
message={
<div style={{ whiteSpace: 'pre-line' }}>
@ -966,6 +978,7 @@ This action cannot be undone.`}
opened={confirmSetLogosOpen}
onClose={() => setConfirmSetLogosOpen(false)}
onConfirm={executeSetLogosFromEpg}
loading={settingLogos}
title="Confirm Set Logos from EPG"
message={
<div style={{ whiteSpace: 'pre-line' }}>
@ -987,6 +1000,7 @@ This action cannot be undone.`}
opened={confirmSetTvgIdsOpen}
onClose={() => setConfirmSetTvgIdsOpen(false)}
onConfirm={executeSetTvgIdsFromEpg}
loading={settingTvgIds}
title="Confirm Set TVG-IDs from EPG"
message={
<div style={{ whiteSpace: 'pre-line' }}>
@ -1008,6 +1022,7 @@ This action cannot be undone.`}
opened={confirmBatchUpdateOpen}
onClose={() => setConfirmBatchUpdateOpen(false)}
onConfirm={onSubmit}
loading={isSubmitting}
title="Confirm Batch Update"
message={
<div>


@ -29,6 +29,7 @@ const EPG = ({ epg = null, isOpen, onClose }) => {
api_key: '',
is_active: true,
refresh_interval: 24,
priority: 0,
},
validate: {
@ -69,6 +70,7 @@ const EPG = ({ epg = null, isOpen, onClose }) => {
api_key: epg.api_key,
is_active: epg.is_active,
refresh_interval: epg.refresh_interval,
priority: epg.priority ?? 0,
};
form.setValues(values);
setSourceType(epg.source_type);
@ -148,14 +150,24 @@ const EPG = ({ epg = null, isOpen, onClose }) => {
key={form.key('url')}
/>
<TextInput
id="api_key"
name="api_key"
label="API Key"
description="API key for services that require authentication"
{...form.getInputProps('api_key')}
key={form.key('api_key')}
disabled={sourceType !== 'schedules_direct'}
{sourceType === 'schedules_direct' && (
<TextInput
id="api_key"
name="api_key"
label="API Key"
description="API key for services that require authentication"
{...form.getInputProps('api_key')}
key={form.key('api_key')}
/>
)}
<NumberInput
min={0}
max={999}
label="Priority"
description="Priority for EPG matching (higher numbers = higher priority). Used when multiple EPG sources have matching entries for a channel."
{...form.getInputProps('priority')}
key={form.key('priority')}
/>
{/* Put checkbox at the same level as Refresh Interval */}


@ -183,6 +183,7 @@ const GroupManager = React.memo(({ isOpen, onClose }) => {
const [confirmDeleteOpen, setConfirmDeleteOpen] = useState(false);
const [groupToDelete, setGroupToDelete] = useState(null);
const [confirmCleanupOpen, setConfirmCleanupOpen] = useState(false);
const [deletingGroup, setDeletingGroup] = useState(false);
// Memoize the channel groups array to prevent unnecessary re-renders
const channelGroupsArray = useMemo(
@ -382,6 +383,7 @@ const GroupManager = React.memo(({ isOpen, onClose }) => {
const executeDeleteGroup = useCallback(
async (group) => {
setDeletingGroup(true);
try {
await API.deleteChannelGroup(group.id);
@ -392,13 +394,14 @@ const GroupManager = React.memo(({ isOpen, onClose }) => {
});
await fetchGroupUsage(); // Refresh usage data
setConfirmDeleteOpen(false);
} catch (error) {
notifications.show({
title: 'Error',
message: 'Failed to delete group',
color: 'red',
});
} finally {
setDeletingGroup(false);
setConfirmDeleteOpen(false);
}
},
@ -680,6 +683,7 @@ const GroupManager = React.memo(({ isOpen, onClose }) => {
opened={confirmDeleteOpen}
onClose={() => setConfirmDeleteOpen(false)}
onConfirm={() => groupToDelete && executeDeleteGroup(groupToDelete)}
loading={deletingGroup}
title="Confirm Group Deletion"
message={
groupToDelete ? (
@ -706,6 +710,7 @@ This action cannot be undone.`}
opened={confirmCleanupOpen}
onClose={() => setConfirmCleanupOpen(false)}
onConfirm={executeCleanup}
loading={isCleaningUp}
title="Confirm Group Cleanup"
message={
<div style={{ whiteSpace: 'pre-line' }}>


@ -96,28 +96,30 @@ const LiveGroupFilter = ({
}
setGroupStates(
playlist.channel_groups.map((group) => {
// Parse custom_properties if present
let customProps = {};
if (group.custom_properties) {
try {
customProps =
typeof group.custom_properties === 'string'
? JSON.parse(group.custom_properties)
: group.custom_properties;
} catch {
customProps = {};
playlist.channel_groups
.filter((group) => channelGroups[group.channel_group]) // Filter out groups that don't exist
.map((group) => {
// Parse custom_properties if present
let customProps = {};
if (group.custom_properties) {
try {
customProps =
typeof group.custom_properties === 'string'
? JSON.parse(group.custom_properties)
: group.custom_properties;
} catch {
customProps = {};
}
}
}
return {
...group,
name: channelGroups[group.channel_group].name,
auto_channel_sync: group.auto_channel_sync || false,
auto_sync_channel_start: group.auto_sync_channel_start || 1.0,
custom_properties: customProps,
original_enabled: group.enabled,
};
})
return {
...group,
name: channelGroups[group.channel_group].name,
auto_channel_sync: group.auto_channel_sync || false,
auto_sync_channel_start: group.auto_sync_channel_start || 1.0,
custom_properties: customProps,
original_enabled: group.enabled,
};
})
);
}, [playlist, channelGroups]);
@ -261,25 +263,42 @@ const LiveGroupFilter = ({
}}
>
{/* Group Enable/Disable Button */}
<Button
color={group.enabled ? 'green' : 'gray'}
variant="filled"
onClick={() => toggleGroupEnabled(group.channel_group)}
radius="md"
size="xs"
leftSection={
group.enabled ? (
<CircleCheck size={14} />
) : (
<CircleX size={14} />
)
<Tooltip
label={
group.enabled && group.is_stale
? 'This group was not seen in the last M3U refresh and will be deleted after the retention period expires'
: ''
}
fullWidth
disabled={!group.enabled || !group.is_stale}
multiline
w={220}
>
<Text size="xs" truncate>
{group.name}
</Text>
</Button>
<Button
color={
group.enabled
? group.is_stale
? 'orange'
: 'green'
: 'gray'
}
variant="filled"
onClick={() => toggleGroupEnabled(group.channel_group)}
radius="md"
size="xs"
leftSection={
group.enabled ? (
<CircleCheck size={14} />
) : (
<CircleX size={14} />
)
}
fullWidth
>
<Text size="xs" truncate>
{group.name}
</Text>
</Button>
</Tooltip>
{/* Auto Sync Controls */}
<Stack spacing="xs" style={{ '--stack-gap': '4px' }}>
@ -367,7 +386,8 @@ const LiveGroupFilter = ({
if (
group.custom_properties?.custom_epg_id !==
undefined ||
group.custom_properties?.force_dummy_epg
group.custom_properties?.force_dummy_epg ||
group.custom_properties?.force_epg_selected
) {
selectedValues.push('force_epg');
}
@ -430,23 +450,20 @@ const LiveGroupFilter = ({
// Handle force_epg
if (selectedOptions.includes('force_epg')) {
// Migrate from old force_dummy_epg if present
// Set default to force_dummy_epg if no EPG settings exist yet
if (
newCustomProps.force_dummy_epg &&
newCustomProps.custom_epg_id === undefined
newCustomProps.custom_epg_id ===
undefined &&
!newCustomProps.force_dummy_epg
) {
// Migrate: force_dummy_epg=true becomes custom_epg_id=null
newCustomProps.custom_epg_id = null;
delete newCustomProps.force_dummy_epg;
} else if (
newCustomProps.custom_epg_id === undefined
) {
// New configuration: initialize with null (no EPG/default dummy)
newCustomProps.custom_epg_id = null;
// Default to "No EPG (Disabled)"
newCustomProps.force_dummy_epg = true;
}
} else {
// Only remove custom_epg_id when deselected
// Remove all EPG settings when deselected
delete newCustomProps.custom_epg_id;
delete newCustomProps.force_dummy_epg;
delete newCustomProps.force_epg_selected;
}
// Handle group_override
@ -1122,7 +1139,8 @@ const LiveGroupFilter = ({
{/* Show EPG selector when force_epg is selected */}
{(group.custom_properties?.custom_epg_id !== undefined ||
group.custom_properties?.force_dummy_epg) && (
group.custom_properties?.force_dummy_epg ||
group.custom_properties?.force_epg_selected) && (
<Tooltip
label="Force a specific EPG source for all auto-synced channels in this group. For dummy EPGs, all channels will share the same EPG data. For regular EPG sources (XMLTV, Schedules Direct), channels will be matched by their tvg_id within that source. Select 'No EPG' to disable EPG assignment."
withArrow
@ -1131,44 +1149,90 @@ const LiveGroupFilter = ({
label="EPG Source"
placeholder="No EPG (Disabled)"
value={(() => {
// Handle migration from force_dummy_epg
// Show custom EPG if set
if (
group.custom_properties?.custom_epg_id !==
undefined
undefined &&
group.custom_properties?.custom_epg_id !== null
) {
// Convert to string, use '0' for null/no EPG
return group.custom_properties.custom_epg_id ===
null
? '0'
: group.custom_properties.custom_epg_id.toString();
} else if (
group.custom_properties?.force_dummy_epg
) {
// Show "No EPG" for old force_dummy_epg configs
return group.custom_properties.custom_epg_id.toString();
}
// Show "No EPG" if force_dummy_epg is set
if (group.custom_properties?.force_dummy_epg) {
return '0';
}
return '0';
// Otherwise show empty/placeholder
return null;
})()}
onChange={(value) => {
// Convert back: '0' means no EPG (null)
const newValue =
value === '0' ? null : parseInt(value);
setGroupStates(
groupStates.map((state) => {
if (
state.channel_group === group.channel_group
) {
return {
...state,
custom_properties: {
if (value === '0') {
// "No EPG (Disabled)" selected - use force_dummy_epg
setGroupStates(
groupStates.map((state) => {
if (
state.channel_group ===
group.channel_group
) {
const newProps = {
...state.custom_properties,
custom_epg_id: newValue,
},
};
}
return state;
})
);
};
delete newProps.custom_epg_id;
delete newProps.force_epg_selected;
newProps.force_dummy_epg = true;
return {
...state,
custom_properties: newProps,
};
}
return state;
})
);
} else if (value) {
// Specific EPG source selected
const epgId = parseInt(value);
setGroupStates(
groupStates.map((state) => {
if (
state.channel_group ===
group.channel_group
) {
const newProps = {
...state.custom_properties,
};
newProps.custom_epg_id = epgId;
delete newProps.force_dummy_epg;
delete newProps.force_epg_selected;
return {
...state,
custom_properties: newProps,
};
}
return state;
})
);
} else {
// Cleared - remove all EPG settings
setGroupStates(
groupStates.map((state) => {
if (
state.channel_group ===
group.channel_group
) {
const newProps = {
...state.custom_properties,
};
delete newProps.custom_epg_id;
delete newProps.force_dummy_epg;
delete newProps.force_epg_selected;
return {
...state,
custom_properties: newProps,
};
}
return state;
})
);
}
}}
data={[
{ value: '0', label: 'No EPG (Disabled)' },


@ -1,5 +1,6 @@
import React, { useState, useEffect } from 'react';
import { useFormik } from 'formik';
import React, { useState, useEffect, useMemo } from 'react';
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';
import {
Modal,
@ -18,143 +19,148 @@ import { Upload, FileImage, X } from 'lucide-react';
import { notifications } from '@mantine/notifications';
import API from '../../api';
const schema = Yup.object({
name: Yup.string().required('Name is required'),
url: Yup.string()
.required('URL is required')
.test(
'valid-url-or-path',
'Must be a valid URL or local file path',
(value) => {
if (!value) return false;
// Allow local file paths starting with /data/logos/
if (value.startsWith('/data/logos/')) return true;
// Allow valid URLs
try {
new URL(value);
return true;
} catch {
return false;
}
}
),
});
const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
const [logoPreview, setLogoPreview] = useState(null);
const [uploading, setUploading] = useState(false);
const [selectedFile, setSelectedFile] = useState(null); // Store selected file
const formik = useFormik({
initialValues: {
name: '',
url: '',
},
validationSchema: Yup.object({
name: Yup.string().required('Name is required'),
url: Yup.string()
.required('URL is required')
.test(
'valid-url-or-path',
'Must be a valid URL or local file path',
(value) => {
if (!value) return false;
// Allow local file paths starting with /data/logos/
if (value.startsWith('/data/logos/')) return true;
// Allow valid URLs
try {
new URL(value);
return true;
} catch {
return false;
}
}
),
const defaultValues = useMemo(
() => ({
name: logo?.name || '',
url: logo?.url || '',
}),
onSubmit: async (values, { setSubmitting }) => {
try {
setUploading(true);
let uploadResponse = null; // Store upload response for later use
[logo]
);
// If we have a selected file, upload it first
if (selectedFile) {
try {
uploadResponse = await API.uploadLogo(selectedFile, values.name);
// Use the uploaded file data instead of form values
values.name = uploadResponse.name;
values.url = uploadResponse.url;
} catch (uploadError) {
let errorMessage = 'Failed to upload logo file';
if (
uploadError.code === 'NETWORK_ERROR' ||
uploadError.message?.includes('timeout')
) {
errorMessage = 'Upload timed out. Please try again.';
} else if (uploadError.status === 413) {
errorMessage = 'File too large. Please choose a smaller file.';
} else if (uploadError.body?.error) {
errorMessage = uploadError.body.error;
}
notifications.show({
title: 'Upload Error',
message: errorMessage,
color: 'red',
});
return; // Don't proceed with creation if upload fails
}
}
// Now create or update the logo with the final values
// Only proceed if we don't already have a logo from file upload
if (logo) {
const updatedLogo = await API.updateLogo(logo.id, values);
notifications.show({
title: 'Success',
message: 'Logo updated successfully',
color: 'green',
});
onSuccess?.({ type: 'update', logo: updatedLogo }); // Call onSuccess for updates
} else if (!selectedFile) {
// Only create a new logo entry if we're not uploading a file
// (file upload already created the logo entry)
const newLogo = await API.createLogo(values);
notifications.show({
title: 'Success',
message: 'Logo created successfully',
color: 'green',
});
onSuccess?.({ type: 'create', logo: newLogo }); // Call onSuccess for creates
} else {
// File was uploaded and logo was already created
notifications.show({
title: 'Success',
message: 'Logo uploaded successfully',
color: 'green',
});
onSuccess?.({ type: 'create', logo: uploadResponse });
}
onClose();
} catch (error) {
let errorMessage = logo
? 'Failed to update logo'
: 'Failed to create logo';
// Handle specific timeout errors
if (
error.code === 'NETWORK_ERROR' ||
error.message?.includes('timeout')
) {
errorMessage = 'Request timed out. Please try again.';
} else if (error.response?.data?.error) {
errorMessage = error.response.data.error;
}
notifications.show({
title: 'Error',
message: errorMessage,
color: 'red',
});
} finally {
setSubmitting(false);
setUploading(false);
}
},
const {
register,
handleSubmit,
formState: { errors, isSubmitting },
reset,
setValue,
watch,
} = useForm({
defaultValues,
resolver: yupResolver(schema),
});
useEffect(() => {
if (logo) {
formik.setValues({
name: logo.name || '',
url: logo.url || '',
const onSubmit = async (values) => {
try {
setUploading(true);
let uploadResponse = null; // Store upload response for later use
// If we have a selected file, upload it first
if (selectedFile) {
try {
uploadResponse = await API.uploadLogo(selectedFile, values.name);
// Use the uploaded file data instead of form values
values.name = uploadResponse.name;
values.url = uploadResponse.url;
} catch (uploadError) {
let errorMessage = 'Failed to upload logo file';
if (
uploadError.code === 'NETWORK_ERROR' ||
uploadError.message?.includes('timeout')
) {
errorMessage = 'Upload timed out. Please try again.';
} else if (uploadError.status === 413) {
errorMessage = 'File too large. Please choose a smaller file.';
} else if (uploadError.body?.error) {
errorMessage = uploadError.body.error;
}
notifications.show({
title: 'Upload Error',
message: errorMessage,
color: 'red',
});
return; // Don't proceed with creation if upload fails
}
}
// Now create or update the logo with the final values
// Only proceed if we don't already have a logo from file upload
if (logo) {
const updatedLogo = await API.updateLogo(logo.id, values);
notifications.show({
title: 'Success',
message: 'Logo updated successfully',
color: 'green',
});
onSuccess?.({ type: 'update', logo: updatedLogo }); // Call onSuccess for updates
} else if (!selectedFile) {
// Only create a new logo entry if we're not uploading a file
// (file upload already created the logo entry)
const newLogo = await API.createLogo(values);
notifications.show({
title: 'Success',
message: 'Logo created successfully',
color: 'green',
});
onSuccess?.({ type: 'create', logo: newLogo }); // Call onSuccess for creates
} else {
// File was uploaded and logo was already created
notifications.show({
title: 'Success',
message: 'Logo uploaded successfully',
color: 'green',
});
onSuccess?.({ type: 'create', logo: uploadResponse });
}
onClose();
} catch (error) {
let errorMessage = logo
? 'Failed to update logo'
: 'Failed to create logo';
// Handle specific timeout errors
if (
error.code === 'NETWORK_ERROR' ||
error.message?.includes('timeout')
) {
errorMessage = 'Request timed out. Please try again.';
} else if (error.response?.data?.error) {
errorMessage = error.response.data.error;
}
notifications.show({
title: 'Error',
message: errorMessage,
color: 'red',
});
setLogoPreview(logo.cache_url);
} else {
formik.resetForm();
setLogoPreview(null);
} finally {
setUploading(false);
}
// Clear any selected file when logo changes
};
useEffect(() => {
reset(defaultValues);
setLogoPreview(logo?.cache_url || null);
setSelectedFile(null);
}, [logo, isOpen]);
}, [defaultValues, logo, reset]);
const handleFileSelect = (files) => {
if (files.length === 0) return;
@ -180,18 +186,19 @@ const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
setLogoPreview(previewUrl);
// Auto-fill the name field if empty
if (!formik.values.name) {
const currentName = watch('name');
if (!currentName) {
const nameWithoutExtension = file.name.replace(/\.[^/.]+$/, '');
formik.setFieldValue('name', nameWithoutExtension);
setValue('name', nameWithoutExtension);
}
// Set a placeholder URL (will be replaced after upload)
formik.setFieldValue('url', 'file://pending-upload');
setValue('url', 'file://pending-upload');
};
const handleUrlChange = (event) => {
const url = event.target.value;
formik.setFieldValue('url', url);
setValue('url', url);
// Clear any selected file when manually entering URL
if (selectedFile) {
@ -219,7 +226,7 @@ const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
const filename = pathname.substring(pathname.lastIndexOf('/') + 1);
const nameWithoutExtension = filename.replace(/\.[^/.]+$/, '');
if (nameWithoutExtension) {
formik.setFieldValue('name', nameWithoutExtension);
setValue('name', nameWithoutExtension);
}
} catch (error) {
// If the URL is invalid, do nothing.
@ -244,7 +251,7 @@ const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
title={logo ? 'Edit Logo' : 'Add Logo'}
size="md"
>
<form onSubmit={formik.handleSubmit}>
<form onSubmit={handleSubmit(onSubmit)}>
<Stack spacing="md">
{/* Logo Preview */}
{logoPreview && (
@ -338,18 +345,18 @@ const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
<TextInput
label="Logo URL"
placeholder="https://example.com/logo.png"
{...formik.getFieldProps('url')}
{...register('url')}
onChange={handleUrlChange}
onBlur={handleUrlBlur}
error={formik.touched.url && formik.errors.url}
error={errors.url?.message}
disabled={!!selectedFile} // Disable when file is selected
/>
<TextInput
label="Name"
placeholder="Enter logo name"
{...formik.getFieldProps('name')}
error={formik.touched.name && formik.errors.name}
{...register('name')}
error={errors.name?.message}
/>
{selectedFile && (
@ -363,7 +370,7 @@ const LogoForm = ({ logo = null, isOpen, onClose, onSuccess }) => {
<Button variant="light" onClick={onClose}>
Cancel
</Button>
<Button type="submit" loading={formik.isSubmitting || uploading}>
<Button type="submit" loading={isSubmitting || uploading}>
{logo ? 'Update' : 'Create'}
</Button>
</Group>
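
The LogoForm hunks above replace Formik with react-hook-form plus a Yup resolver. A minimal sketch of the wiring the new code relies on, kept to the name and url fields from the diff (the stripped-down component around them is illustrative, not the project's actual form):

import React from 'react';
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';

// Schema lives at module scope, as in the diff.
const schema = Yup.object({
  name: Yup.string().required('Name is required'),
  url: Yup.string().required('URL is required'),
});

function MinimalLogoForm({ onSave }) {
  const {
    register,                          // wires inputs into form state
    handleSubmit,                      // validates via the resolver, then calls onSubmit
    formState: { errors, isSubmitting },
  } = useForm({
    defaultValues: { name: '', url: '' },
    resolver: yupResolver(schema),
  });

  const onSubmit = async (values) => {
    await onSave(values);              // values are already validated here
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <input placeholder="Name" {...register('name')} />
      {errors.name && <span>{errors.name.message}</span>}
      <input placeholder="Logo URL" {...register('url')} />
      {errors.url && <span>{errors.url.message}</span>}
      <button type="submit" disabled={isSubmitting}>Save</button>
    </form>
  );
}

When an existing logo is edited, the diff re-seeds this state with reset(defaultValues) inside a useEffect instead of setting each field by hand.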

View file

@ -151,6 +151,7 @@ const M3UFilters = ({ playlist, isOpen, onClose }) => {
const [deleteTarget, setDeleteTarget] = useState(null);
const [filterToDelete, setFilterToDelete] = useState(null);
const [filters, setFilters] = useState([]);
const [deleting, setDeleting] = useState(false);
const isWarningSuppressed = useWarningsStore((s) => s.isWarningSuppressed);
const suppressWarning = useWarningsStore((s) => s.suppressWarning);
@ -192,16 +193,17 @@ const M3UFilters = ({ playlist, isOpen, onClose }) => {
const deleteFilter = async (id) => {
if (!playlist || !playlist.id) return;
setDeleting(true);
try {
await API.deleteM3UFilter(playlist.id, id);
setConfirmDeleteOpen(false);
fetchPlaylist(playlist.id);
setFilters(filters.filter((f) => f.id !== id));
} catch (error) {
console.error('Error deleting profile:', error);
} finally {
setDeleting(false);
setConfirmDeleteOpen(false);
}
fetchPlaylist(playlist.id);
setFilters(filters.filter((f) => f.id !== id));
};
const closeEditor = (updatedPlaylist = null) => {
@ -321,6 +323,7 @@ const M3UFilters = ({ playlist, isOpen, onClose }) => {
opened={confirmDeleteOpen}
onClose={() => setConfirmDeleteOpen(false)}
onConfirm={() => deleteFilter(deleteTarget)}
loading={deleting}
title="Confirm Filter Deletion"
message={
filterToDelete ? (

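The M3UFilters hunk above gives the delete flow a deleting flag for the confirm dialog and moves the cleanup into a finally block so the dialog always closes. The same shape pulled out as a sketch (the hook name, import path, and functional setters are illustrative; API.deleteM3UFilter comes from the diff):

import { useState } from 'react';
import API from '../../api';

// Illustrative hook: run the delete call while exposing a loading flag,
// and always clear the spinner and close the dialog afterwards.
function useDeleteFilter(playlist, setFilters, setConfirmOpen) {
  const [deleting, setDeleting] = useState(false);

  const deleteFilter = async (id) => {
    if (!playlist?.id) return;
    setDeleting(true);
    try {
      await API.deleteM3UFilter(playlist.id, id);
      setFilters((prev) => prev.filter((f) => f.id !== id));
    } catch (error) {
      console.error('Error deleting filter:', error);
    } finally {
      setDeleting(false);    // stops the dialog's loading spinner
      setConfirmOpen(false); // closes the dialog on success and on failure
    }
  };

  return { deleteFilter, deleting };
}
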
View file

@ -1,6 +1,5 @@
// Modal.js
import React, { useState, useEffect, forwardRef } from 'react';
import { useFormik } from 'formik';
import * as Yup from 'yup';
import API from '../../api';
import M3UProfiles from './M3UProfiles';

View file

@ -1,5 +1,6 @@
import React, { useState, useEffect } from 'react';
import { useFormik } from 'formik';
import React, { useState, useEffect, useMemo } from 'react';
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';
import API from '../../api';
import {
@ -31,6 +32,89 @@ const RegexFormAndView = ({ profile = null, m3u, isOpen, onClose }) => {
const [sampleInput, setSampleInput] = useState('');
const isDefaultProfile = profile?.is_default;
const defaultValues = useMemo(
() => ({
name: profile?.name || '',
max_streams: profile?.max_streams || 0,
search_pattern: profile?.search_pattern || '',
replace_pattern: profile?.replace_pattern || '',
notes: profile?.custom_properties?.notes || '',
}),
[profile]
);
const schema = Yup.object({
name: Yup.string().required('Name is required'),
search_pattern: Yup.string().when([], {
is: () => !isDefaultProfile,
then: (schema) => schema.required('Search pattern is required'),
otherwise: (schema) => schema.notRequired(),
}),
replace_pattern: Yup.string().when([], {
is: () => !isDefaultProfile,
then: (schema) => schema.required('Replace pattern is required'),
otherwise: (schema) => schema.notRequired(),
}),
notes: Yup.string(), // Optional field
});
const {
register,
handleSubmit,
formState: { errors, isSubmitting },
reset,
setValue,
watch,
} = useForm({
defaultValues,
resolver: yupResolver(schema),
});
const onSubmit = async (values) => {
console.log('submiting');
// For default profiles, only send name and custom_properties (notes)
let submitValues;
if (isDefaultProfile) {
submitValues = {
name: values.name,
custom_properties: {
// Preserve existing custom_properties and add/update notes
...(profile?.custom_properties || {}),
notes: values.notes || '',
},
};
} else {
// For regular profiles, send all fields
submitValues = {
name: values.name,
max_streams: values.max_streams,
search_pattern: values.search_pattern,
replace_pattern: values.replace_pattern,
custom_properties: {
// Preserve existing custom_properties and add/update notes
...(profile?.custom_properties || {}),
notes: values.notes || '',
},
};
}
if (profile?.id) {
await API.updateM3UProfile(m3u.id, {
id: profile.id,
...submitValues,
});
} else {
await API.addM3UProfile(m3u.id, submitValues);
}
reset();
// Reset local state to sync with form reset
setSearchPattern('');
setReplacePattern('');
onClose();
};
useEffect(() => {
async function fetchStreamUrl() {
try {
@ -79,96 +163,22 @@ const RegexFormAndView = ({ profile = null, m3u, isOpen, onClose }) => {
}, [searchPattern, replacePattern]);
const onSearchPatternUpdate = (e) => {
formik.handleChange(e);
setSearchPattern(e.target.value);
const value = e.target.value;
setSearchPattern(value);
setValue('search_pattern', value);
};
const onReplacePatternUpdate = (e) => {
formik.handleChange(e);
setReplacePattern(e.target.value);
const value = e.target.value;
setReplacePattern(value);
setValue('replace_pattern', value);
};
const formik = useFormik({
initialValues: {
name: '',
max_streams: 0,
search_pattern: '',
replace_pattern: '',
notes: '',
},
validationSchema: Yup.object({
name: Yup.string().required('Name is required'),
search_pattern: Yup.string().when([], {
is: () => !isDefaultProfile,
then: (schema) => schema.required('Search pattern is required'),
otherwise: (schema) => schema.notRequired(),
}),
replace_pattern: Yup.string().when([], {
is: () => !isDefaultProfile,
then: (schema) => schema.required('Replace pattern is required'),
otherwise: (schema) => schema.notRequired(),
}),
notes: Yup.string(), // Optional field
}),
onSubmit: async (values, { setSubmitting, resetForm }) => {
console.log('submiting');
// For default profiles, only send name and custom_properties (notes)
let submitValues;
if (isDefaultProfile) {
submitValues = {
name: values.name,
custom_properties: {
// Preserve existing custom_properties and add/update notes
...(profile?.custom_properties || {}),
notes: values.notes || '',
},
};
} else {
// For regular profiles, send all fields
submitValues = {
name: values.name,
max_streams: values.max_streams,
search_pattern: values.search_pattern,
replace_pattern: values.replace_pattern,
custom_properties: {
// Preserve existing custom_properties and add/update notes
...(profile?.custom_properties || {}),
notes: values.notes || '',
},
};
}
if (profile?.id) {
await API.updateM3UProfile(m3u.id, {
id: profile.id,
...submitValues,
});
} else {
await API.addM3UProfile(m3u.id, submitValues);
}
resetForm();
setSubmitting(false);
onClose();
},
});
useEffect(() => {
if (profile) {
setSearchPattern(profile.search_pattern);
setReplacePattern(profile.replace_pattern);
formik.setValues({
name: profile.name,
max_streams: profile.max_streams,
search_pattern: profile.search_pattern,
replace_pattern: profile.replace_pattern,
notes: profile.custom_properties?.notes || '',
});
} else {
formik.resetForm();
}
}, [profile]); // eslint-disable-line react-hooks/exhaustive-deps
reset(defaultValues);
setSearchPattern(profile?.search_pattern || '');
setReplacePattern(profile?.replace_pattern || '');
}, [defaultValues, profile, reset]);
const handleSampleInputChange = (e) => {
setSampleInput(e.target.value);
@ -209,27 +219,21 @@ const RegexFormAndView = ({ profile = null, m3u, isOpen, onClose }) => {
}
size="lg"
>
<form onSubmit={formik.handleSubmit}>
<form onSubmit={handleSubmit(onSubmit)}>
<TextInput
id="name"
name="name"
label="Name"
value={formik.values.name}
onChange={formik.handleChange}
error={formik.errors.name ? formik.touched.name : ''}
{...register('name')}
error={errors.name?.message}
/>
{/* Only show max streams field for non-default profiles */}
{!isDefaultProfile && (
<NumberInput
id="max_streams"
name="max_streams"
label="Max Streams"
value={formik.values.max_streams}
onChange={(value) =>
formik.setFieldValue('max_streams', value || 0)
}
error={formik.errors.max_streams ? formik.touched.max_streams : ''}
{...register('max_streams')}
value={watch('max_streams')}
onChange={(value) => setValue('max_streams', value || 0)}
error={errors.max_streams?.message}
min={0}
placeholder="0 = unlimited"
/>
@ -239,40 +243,25 @@ const RegexFormAndView = ({ profile = null, m3u, isOpen, onClose }) => {
{!isDefaultProfile && (
<>
<TextInput
id="search_pattern"
name="search_pattern"
label="Search Pattern (Regex)"
value={searchPattern}
onChange={onSearchPatternUpdate}
error={
formik.errors.search_pattern
? formik.touched.search_pattern
: ''
}
error={errors.search_pattern?.message}
/>
<TextInput
id="replace_pattern"
name="replace_pattern"
label="Replace Pattern"
value={replacePattern}
onChange={onReplacePatternUpdate}
error={
formik.errors.replace_pattern
? formik.touched.replace_pattern
: ''
}
error={errors.replace_pattern?.message}
/>
</>
)}
<Textarea
id="notes"
name="notes"
label="Notes"
placeholder="Add any notes or comments about this profile..."
value={formik.values.notes}
onChange={formik.handleChange}
error={formik.errors.notes ? formik.touched.notes : ''}
{...register('notes')}
error={errors.notes?.message}
minRows={2}
maxRows={4}
autosize
@ -287,9 +276,9 @@ const RegexFormAndView = ({ profile = null, m3u, isOpen, onClose }) => {
>
<Button
type="submit"
disabled={formik.isSubmitting}
disabled={isSubmitting}
size="xs"
style={{ width: formik.isSubmitting ? 'auto' : 'auto' }}
style={{ width: isSubmitting ? 'auto' : 'auto' }}
>
Submit
</Button>
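
The profile form above requires the regex fields only for non-default profiles by building the Yup schema with .when and a computed condition. In isolation, the conditional schema behaves like this (a sketch; the isDefaultProfile argument stands in for profile?.is_default from the diff):

import * as Yup from 'yup';

const buildSchema = (isDefaultProfile) =>
  Yup.object({
    name: Yup.string().required('Name is required'),
    search_pattern: Yup.string().when([], {
      is: () => !isDefaultProfile,                             // evaluated at validation time
      then: (schema) => schema.required('Search pattern is required'),
      otherwise: (schema) => schema.notRequired(),
    }),
  });

// Default profile: an empty search_pattern passes validation.
buildSchema(true).isValid({ name: 'Default' }).then(console.log);   // true
// Regular profile: the same input now fails.
buildSchema(false).isValid({ name: 'HD only' }).then(console.log);  // false

For the live regex preview, the diff keeps the pattern inputs controlled by local state and mirrors each keystroke into the form with setValue, so validation and the preview stay in sync.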

View file

@ -38,6 +38,7 @@ const M3UProfiles = ({ playlist = null, isOpen, onClose }) => {
const [confirmDeleteOpen, setConfirmDeleteOpen] = useState(false);
const [deleteTarget, setDeleteTarget] = useState(null);
const [profileToDelete, setProfileToDelete] = useState(null);
const [deletingProfile, setDeletingProfile] = useState(false);
const [accountInfoOpen, setAccountInfoOpen] = useState(false);
const [selectedProfileForInfo, setSelectedProfileForInfo] = useState(null);
@ -88,11 +89,13 @@ const M3UProfiles = ({ playlist = null, isOpen, onClose }) => {
const executeDeleteProfile = async (id) => {
if (!playlist || !playlist.id) return;
setDeletingProfile(true);
try {
await API.deleteM3UProfile(playlist.id, id);
setConfirmDeleteOpen(false);
} catch (error) {
console.error('Error deleting profile:', error);
} finally {
setDeletingProfile(false);
setConfirmDeleteOpen(false);
}
};
@ -359,6 +362,7 @@ const M3UProfiles = ({ playlist = null, isOpen, onClose }) => {
opened={confirmDeleteOpen}
onClose={() => setConfirmDeleteOpen(false)}
onConfirm={() => executeDeleteProfile(deleteTarget)}
loading={deletingProfile}
title="Confirm Profile Deletion"
message={
profileToDelete ? (

View file

@ -0,0 +1,110 @@
import React from 'react';
import { Modal, Flex, Button } from '@mantine/core';
import useChannelsStore from '../../store/channels.jsx';
import { deleteRecordingById } from '../../utils/cards/RecordingCardUtils.js';
import { deleteSeriesAndRule } from '../../utils/cards/RecordingCardUtils.js';
import { deleteSeriesRuleByTvgId } from '../../pages/guideUtils.js';
export default function ProgramRecordingModal({
opened,
onClose,
program,
recording,
existingRuleMode,
onRecordOne,
onRecordSeriesAll,
onRecordSeriesNew,
onExistingRuleModeChange,
}) {
const handleRemoveRecording = async () => {
try {
await deleteRecordingById(recording.id);
} catch (error) {
console.warn('Failed to delete recording', error);
}
try {
await useChannelsStore.getState().fetchRecordings();
} catch (error) {
console.warn('Failed to refresh recordings after delete', error);
}
onClose();
};
const handleRemoveSeries = async () => {
await deleteSeriesAndRule({
tvg_id: program.tvg_id,
title: program.title,
});
try {
await useChannelsStore.getState().fetchRecordings();
} catch (error) {
console.warn('Failed to refresh recordings after series delete', error);
}
onClose();
};
const handleRemoveSeriesRule = async () => {
await deleteSeriesRuleByTvgId(program.tvg_id);
onExistingRuleModeChange(null);
onClose();
};
return (
<Modal
opened={opened}
onClose={onClose}
title={`Record: ${program?.title}`}
centered
radius="md"
zIndex={9999}
overlayProps={{ color: '#000', backgroundOpacity: 0.55, blur: 0 }}
styles={{
content: { backgroundColor: '#18181B', color: 'white' },
header: { backgroundColor: '#18181B', color: 'white' },
title: { color: 'white' },
}}
>
<Flex direction="column" gap="sm">
<Button
onClick={() => {
onRecordOne();
onClose();
}}
>
Just this one
</Button>
<Button variant="light" onClick={() => {
onRecordSeriesAll();
onClose();
}}>
Every episode
</Button>
<Button variant="light" onClick={() => {
onRecordSeriesNew();
onClose();
}}>
New episodes only
</Button>
{recording && (
<>
<Button color="orange" variant="light" onClick={handleRemoveRecording}>
Remove this recording
</Button>
<Button color="red" variant="light" onClick={handleRemoveSeries}>
Remove this series (scheduled)
</Button>
</>
)}
{existingRuleMode && (
<Button color="red" variant="subtle" onClick={handleRemoveSeriesRule}>
Remove series rule ({existingRuleMode})
</Button>
)}
</Flex>
</Modal>
);
}
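
ProgramRecordingModal above handles the remove actions itself and delegates the record actions to callbacks. A hypothetical parent wiring, showing the props it expects (the GuideCell component and every handler body are placeholders, not from the diff):

import React, { useState } from 'react';
import ProgramRecordingModal from './ProgramRecordingModal';

function GuideCell({ program, recording, existingRule }) {
  const [opened, setOpened] = useState(false);
  const [ruleMode, setRuleMode] = useState(existingRule?.mode ?? null);

  return (
    <>
      <button onClick={() => setOpened(true)}>Record</button>
      <ProgramRecordingModal
        opened={opened}
        onClose={() => setOpened(false)}
        program={program}                       // needs tvg_id and title for the series actions
        recording={recording}                   // optional: enables the remove buttons
        existingRuleMode={ruleMode}             // e.g. a label for an existing series rule
        onRecordOne={() => {/* schedule a single recording */}}
        onRecordSeriesAll={() => {/* schedule every episode */}}
        onRecordSeriesNew={() => {/* schedule new episodes only */}}
        onExistingRuleModeChange={setRuleMode}
      />
    </>
  );
}
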

Some files were not shown because too many files have changed in this diff Show more