Mirror of https://github.com/Dispatcharr/Dispatcharr.git (synced 2026-01-23 02:35:14 +00:00)

Merge remote-tracking branch 'origin/dev' into Media-Server

Commit 5cdbb2661d: 104 changed files with 10890 additions and 2964 deletions

.github/workflows/ci.yml (vendored): 2 changes
@@ -3,6 +3,8 @@ name: CI Pipeline
on:
  push:
    branches: [dev]
    paths-ignore:
      - '**.md'
  pull_request:
    branches: [dev]
  workflow_dispatch:
.github/workflows/release.yml (vendored): 6 changes

@@ -43,6 +43,10 @@ jobs:
          NEW_VERSION=$(python -c "import version; print(f'{version.__version__}')")
          echo "new_version=${NEW_VERSION}" >> $GITHUB_OUTPUT

      - name: Update Changelog
        run: |
          python scripts/update_changelog.py ${{ steps.update_version.outputs.new_version }}

      - name: Set repository metadata
        id: meta
        run: |

@@ -54,7 +58,7 @@ jobs:
      - name: Commit and Tag
        run: |
-          git add version.py
+          git add version.py CHANGELOG.md
          git commit -m "Release v${{ steps.update_version.outputs.new_version }}"
          git tag -a "v${{ steps.update_version.outputs.new_version }}" -m "Release v${{ steps.update_version.outputs.new_version }}"
          git push origin main --tags
CHANGELOG.md (new file): 849 additions

@@ -0,0 +1,849 @@
# Changelog
|
||||
|
||||
All notable changes to this project will be documented in this file.
|
||||
|
||||
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
|
||||
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
|
||||
|
||||
## [Unreleased]
|
||||
|
||||
### Added
|
||||
|
||||
- Sort buttons for 'Group' and 'M3U' columns in Streams table for improved stream organization and filtering - Thanks [@bobey6](https://github.com/bobey6)
|
||||
|
||||
### Changed
|
||||
|
||||
- **Performance**: EPG program parsing optimized for sources with many channels but only a fraction mapped. The refresh now parses the XML file once per source instead of once per channel, dramatically reducing I/O and CPU overhead; for a source with 10,000 channels and 100 mapped, that is roughly 99x fewer file opens and 100x fewer full file scans (a sketch of the single-pass approach follows this list). Orphaned programs for unmapped channels are also cleaned up during refresh to prevent database bloat, and database updates are now atomic so clients never see empty or partial EPG data during a refresh.
|
||||
- IPv6 access now allowed by default with all IPv6 CIDRs accepted - Thanks [@adrianmace](https://github.com/adrianmace)
|
||||
- nginx.conf updated to bind to both IPv4 and IPv6 ports - Thanks [@jordandalley](https://github.com/jordandalley)
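
A minimal sketch of the single-pass approach mentioned above; the model and helper names (`ProgramData`, `build_program`, `cached_file_path`, `epg_source`) are illustrative assumptions, not the actual Dispatcharr code:

```python
from django.db import transaction
from lxml import etree

def refresh_programs_for_source(source, mapped_channels):
    """Parse the source XML once and keep programs only for mapped channels."""
    # Look up only the tvg-ids that are actually mapped (e.g. 100 of 10,000).
    wanted = {c.tvg_id: c for c in mapped_channels}
    new_programs = []

    # One pass over the file instead of one pass per mapped channel.
    for _, elem in etree.iterparse(source.cached_file_path, tag="programme"):
        channel = wanted.get(elem.get("channel"))
        if channel is not None:
            new_programs.append(build_program(channel, elem))  # hypothetical helper
        elem.clear()  # release parsed elements to keep memory flat

    # Atomic swap: delete old rows (including orphans for unmapped channels)
    # and insert the new ones, so clients never see a half-refreshed guide.
    with transaction.atomic():
        ProgramData.objects.filter(epg_source=source).delete()
        ProgramData.objects.bulk_create(new_programs, batch_size=1000)
```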
|
||||
|
||||
## [0.13.0] - 2025-12-02
|
||||
|
||||
### Added
|
||||
|
||||
- `CHANGELOG.md` file following Keep a Changelog format to document all notable changes and project history
|
||||
- System event logging and viewer: Comprehensive logging system that tracks internal application events (M3U refreshes, EPG updates, stream switches, errors) with a dedicated UI viewer for filtering and reviewing historical events. Improves monitoring, troubleshooting, and understanding system behavior
|
||||
- M3U/EPG endpoint caching: Implements intelligent caching for frequently requested M3U playlists and EPG data to reduce database load and improve response times for clients.
|
||||
- Search icon to name headers for the channels and streams tables (#686)
|
||||
- Comprehensive logging for user authentication events and network access restrictions
|
||||
- Validation for EPG objects and payloads in updateEPG functions to prevent errors from invalid data
|
||||
- `referrerpolicy` attribute added to YouTube iframes in series and VOD modals for better compatibility
|
||||
|
||||
### Changed
|
||||
|
||||
- XC player API now returns server_info for unknown actions to align with provider behavior
|
||||
- XC player API refactored to streamline action handling and ensure consistent responses
|
||||
- Date parsing logic in generate_custom_dummy_programs improved to handle empty or invalid inputs
|
||||
- DVR cards now reflect date and time formats chosen by user - Thanks [@Biologisten](https://github.com/Biologisten)
|
||||
- "Uncategorized" categories and relations now automatically created for VOD accounts to improve content management (#627)
|
||||
- Improved minimum horizontal size in the stats page for better usability on smaller displays
|
||||
- M3U and EPG generation now handles missing channel profiles with appropriate error logging
|
||||
|
||||
### Fixed
|
||||
|
||||
- Episode URLs in series modal now use UUID instead of ID, fixing broken links (#684, #694)
|
||||
- Stream preview now respects selected M3U profile instead of always using default profile (#690)
|
||||
- Channel groups filter in M3UGroupFilter component now filters out non-existent groups (prevents blank webui when editing M3U after a group was removed)
|
||||
- Stream order now preserved in PATCH/PUT responses from ChannelSerializer, ensuring consistent ordering across all API operations - Thanks [@FiveBoroughs](https://github.com/FiveBoroughs) (#643)
|
||||
- XC client compatibility: float channel numbers now converted to integers
|
||||
- M3U account and profile modals now scrollable on mobile devices for improved usability
|
||||
|
||||
## [0.12.0] - 2025-11-19
|
||||
|
||||
### Added
|
||||
|
||||
- RTSP stream support with automatic protocol detection when a proxy profile requires it. The proxy now forces FFmpeg for RTSP sources and properly handles RTSP URLs - Thanks [@ragchuck](https://github.com/ragchuck) (#184)
|
||||
- UDP stream support, including correct handling when a proxy profile specifies a UDP source. The proxy now skips HTTP-specific headers (like `user_agent`) for non-HTTP protocols and performs manual redirect handling to improve reliability (#617); a protocol-detection sketch follows this list
|
||||
- Separate VOD logos system with a new `VODLogo` model, database migration, dedicated API/viewset, and server-paginated UI. This separates movie/series logos from channel logos, making cleanup safer and enabling independent bulk operations
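
A rough sketch of scheme-based protocol handling in the spirit of the RTSP/UDP changes above; `build_stream_command` is a hypothetical helper and the FFmpeg flags shown are just one plausible configuration, not the project's exact command line:

```python
from urllib.parse import urlparse

def build_stream_command(url, user_agent=None):
    """Pick an FFmpeg invocation based on the URL scheme (illustrative only)."""
    scheme = urlparse(url).scheme.lower()
    out = ["-c", "copy", "-f", "mpegts", "pipe:1"]

    if scheme == "rtsp":
        # RTSP sources are always handed to FFmpeg, regardless of profile.
        return ["ffmpeg", "-rtsp_transport", "tcp", "-i", url] + out

    if scheme in ("udp", "rtp"):
        # Non-HTTP protocols: HTTP-only options such as a User-Agent do not apply.
        return ["ffmpeg", "-i", url] + out

    # Plain HTTP(S): HTTP headers are kept; redirects can be resolved manually
    # beforehand if the provider misbehaves with automatic redirect handling.
    cmd = ["ffmpeg"]
    if user_agent:
        cmd += ["-user_agent", user_agent]  # input option, so it precedes -i
    return cmd + ["-i", url] + out
```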
|
||||
|
||||
### Changed
|
||||
|
||||
- Background profile refresh now uses a rate-limiting/backoff strategy to avoid provider bans
|
||||
- Bulk channel editing now validates all requested changes up front and applies updates in a single database transaction
|
||||
- ProxyServer shutdown & ghost-client handling improved to avoid initializing channels for transient clients and prevent duplicate reinitialization during rapid reconnects
|
||||
- URL / Stream validation expanded to support credentials on non-FQDN hosts, skips HTTP-only checks for RTSP/RTP/UDP streams, and improved host/port normalization
|
||||
- TV guide scrolling & timeline synchronization improved with mouse-wheel scrolling, synchronized timeline position with guide navigation, and improved mobile momentum scrolling (#252)
|
||||
- EPG Source dropdown now sorts alphabetically - Thanks [@0x53c65c0a8bd30fff](https://github.com/0x53c65c0a8bd30fff)
|
||||
- M3U POST handling restored and improved for clients (e.g., Smarters) that request playlists using HTTP POST - Thanks [@maluueu](https://github.com/maluueu)
|
||||
- Login form revamped with branding, cleaner layout, loading state, "Remember Me" option, and focused sign-in flow
|
||||
- Series & VOD now have copy-link buttons in modals for easier URL sharing
|
||||
- `get_host_and_port` now prioritizes verified port sources and handles reverse-proxy edge cases more accurately (#618)
|
||||
|
||||
### Fixed
|
||||
|
||||
- EXTINF parsing overhauled to correctly extract attributes such as `tvg-id`, `tvg-name`, and `group-title`, even when values include quotes or commas (#637); a simplified parsing sketch follows this list
|
||||
- Websocket payload size reduced during EPG processing to avoid UI freezes, blank screens, or memory spikes in the browser (#327)
|
||||
- Logo management UI fixes including confirmation dialogs, header checkbox reset, delete button reliability, and full client refetch after cleanup
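
For illustration, a simplified quote-aware EXTINF parser in the spirit of the fix above; it assumes attributes are written as `key="value"` pairs and is not the project's actual parser:

```python
import re

# Match key="value" pairs; values may contain commas or escaped quotes.
ATTR_RE = re.compile(r'([A-Za-z0-9\-]+)="((?:[^"\\]|\\.)*)"')

def parse_extinf(line):
    """Split an #EXTINF line into (duration, attributes, display name)."""
    # The display name starts after the first comma that is *outside* quotes.
    in_quotes = False
    for i, ch in enumerate(line):
        if ch == '"':
            in_quotes = not in_quotes
        elif ch == "," and not in_quotes:
            header, display_name = line[:i], line[i + 1:]
            break
    else:
        header, display_name = line, ""
    duration = header[len("#EXTINF:"):].split(" ", 1)[0]
    attrs = {k.lower(): v for k, v in ATTR_RE.findall(header)}
    return duration, attrs, display_name.strip()

line = '#EXTINF:-1 tvg-id="news.1" tvg-name="News, 24/7" group-title="News", News 24/7'
print(parse_extinf(line))
# ('-1', {'tvg-id': 'news.1', 'tvg-name': 'News, 24/7', 'group-title': 'News'}, 'News 24/7')
```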
|
||||
|
||||
## [0.11.2] - 2025-11-04
|
||||
|
||||
### Added
|
||||
|
||||
- Custom Dummy EPG improvements:
|
||||
- Support for using an existing Custom Dummy EPG as a template for creating new EPGs
|
||||
- Custom fallback templates for unmatched patterns
|
||||
- `{endtime}` as an available output placeholder and renamed `{time}` → `{starttime}` (#590)
|
||||
- Support for date placeholders that respect both source and output timezones (#597)
|
||||
- Ability to bulk assign Custom Dummy EPGs to multiple channels
|
||||
- "Include New Tag" option to mark programs as new in Dummy EPG output
|
||||
- Support for month strings in date parsing
|
||||
- Ability to set custom posters and channel logos via regex patterns for Custom Dummy EPGs
|
||||
- Improved DST handling by calculating offsets based on the actual program date, not today's date (see the sketch after this list)
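
A small illustration of why the program date matters for DST, using Python's `zoneinfo`; this is a generic example rather than the project's own implementation:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def utc_offset_for(tz_name, when):
    """Return the UTC offset that applies on a specific program date."""
    return datetime(when.year, when.month, when.day, 12,
                    tzinfo=ZoneInfo(tz_name)).utcoffset()

# New York is UTC-4 in July but UTC-5 in January, so applying "today's"
# offset to a program months away would shift its start time by an hour.
print(utc_offset_for("America/New_York", datetime(2025, 7, 1)))   # -4 hours
print(utc_offset_for("America/New_York", datetime(2025, 1, 15)))  # -5 hours
```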
|
||||
|
||||
### Changed
|
||||
|
||||
- Stream model maximum URL length increased from 2000 to 4096 characters (#585)
|
||||
- Groups now sorted during `xc_get_live_categories` based on the order they first appear (by lowest channel number)
|
||||
- Client TTL settings updated and periodic refresh implemented during active streaming to maintain accurate connection tracking
|
||||
- `ProgramData.sub_title` field changed from `CharField` to `TextField` to allow subtitles longer than 255 characters (#579)
|
||||
- Startup improved by verifying `/data` directory ownership and automatically fixing permissions if needed. Pre-creates `/data/models` during initialization (#614)
|
||||
- Port detection enhanced to check `request.META.get("SERVER_PORT")` before falling back to defaults, ensuring correct port when generating M3U, EPG, and logo URLs - Thanks [@lasharor](https://github.com/lasharor)
|
||||
|
||||
### Fixed
|
||||
|
||||
- Custom Dummy EPG frontend DST calculation now uses program date instead of current date
|
||||
- Channel titles no longer truncated early after an apostrophe - Thanks [@0x53c65c0a8bd30fff](https://github.com/0x53c65c0a8bd30fff)
|
||||
|
||||
## [0.11.1] - 2025-10-22
|
||||
|
||||
### Fixed
|
||||
|
||||
- uWSGI not receiving environmental variables
|
||||
- LXC unable to access daemons launched by uWSGI ([#575](https://github.com/Dispatcharr/Dispatcharr/issues/575), [#576](https://github.com/Dispatcharr/Dispatcharr/issues/576), [#577](https://github.com/Dispatcharr/Dispatcharr/issues/577))
|
||||
|
||||
## [0.11.0] - 2025-10-22
|
||||
|
||||
### Added
|
||||
|
||||
- Custom Dummy EPG system (#293):
|
||||
- Regex pattern matching and name source selection
|
||||
- Support for custom upcoming and ended programs
|
||||
- Timezone-aware with source and local timezone selection
|
||||
- Option to include categories and date/live tags in Dummy EPG output
|
||||
|
||||
- Auto-Enable & Category Improvements:
|
||||
- Auto-enable settings for new groups and categories in M3U and VOD components (#208)
|
||||
- IPv6 CIDR validation in Settings - Thanks [@jordandalley](https://github.com/jordandalley) (#236)
|
||||
- Custom logo support for channel groups in Auto Sync Channels (#555)
|
||||
- Tooltips added to the Stream Table
|
||||
|
||||
### Changed
|
||||
|
||||
- Celery and uWSGI now have configurable `nice` levels (defaults: `uWSGI=0`, `Celery=5`) to prioritize streaming when needed. (#571)
|
||||
- Directory creation and ownership management refactored in init scripts to avoid unnecessary recursive `chown` operations and improve boot speed
|
||||
- HTTP streamer switched to threaded model with piped output for improved robustness
|
||||
- Chunk timeout configuration improved and StreamManager timeout handling enhanced
|
||||
- Proxy timeout values reduced to avoid unnecessary waiting
|
||||
- Resource cleanup improved to prevent "Too many open files" errors
|
||||
- Proxy settings caching implemented and database connections properly closed after use
|
||||
- EPG program fetching optimized with chunked retrieval and explicit ordering to reduce memory usage during output
|
||||
- EPG output now sorted by channel number for consistent presentation
|
||||
- Stream Table buttons reordered for better usability
|
||||
- Database connection handling improved throughout the codebase to reduce overall connection count
|
||||
|
||||
### Fixed
|
||||
|
||||
- Crash when resizing columns in the Channel Table (#516)
|
||||
- Errors when saving stream settings (#535)
|
||||
- Preview and edit bugs for custom streams where profile and group selections did not display correctly
|
||||
- `channel_id` and `channel.uuid` now converted to strings before processing to fix manual switching when the uWSGI worker was not the stream owner (#269)
|
||||
- Stream locking and connection search issues when switching channels; increased search timeout to reduce premature failures (#503)
|
||||
- Stream Table buttons no longer shift into multiple rows when selecting many streams
|
||||
- Custom stream previews
|
||||
- Custom Stream settings not loading properly (#186)
|
||||
- Orphaned categories now automatically removed for VOD and Series during M3U refresh (#540)
|
||||
|
||||
## [0.10.4] - 2025-10-08
|
||||
|
||||
### Added
|
||||
|
||||
- "Assign TVG-ID from EPG" functionality with frontend actions for single-channel and batch operations
|
||||
- Confirmation dialogs in `ChannelBatchForm` for setting names, logos, TVG-IDs, and clearing EPG assignments
|
||||
- "Clear EPG" button to `ChannelBatchForm` for easy reset of assignments
|
||||
- Batch editing of channel logos - Thanks [@EmeraldPi](https://github.com/EmeraldPi)
|
||||
- Ability to set logo name from URL - Thanks [@EmeraldPi](https://github.com/EmeraldPi)
|
||||
- Proper timestamp tracking for channel creation and updates; `XC Get Live Streams` now uses this information
|
||||
- Time Zone Settings added to the application ([#482](https://github.com/Dispatcharr/Dispatcharr/issues/482), [#347](https://github.com/Dispatcharr/Dispatcharr/issues/347))
|
||||
- Comskip settings support including comskip.ini upload and custom directory selection (#418)
|
||||
- Manual recording scheduling for channels without EPG data (#162)
|
||||
|
||||
### Changed
|
||||
|
||||
- Default M3U account type is now set to XC for new accounts
|
||||
- Performance optimization: Only fetch playlists and channel profiles after a successful M3U refresh (rather than every status update)
|
||||
- Playlist retrieval now includes current connection counts and improved session handling during VOD session start
|
||||
- Improved stream selection logic when all profiles have reached max connections (retries faster)
|
||||
|
||||
### Fixed
|
||||
|
||||
- Large EPGs now fully parse all channels
|
||||
- Duplicate channel outputs for streamer profiles set to "All"
|
||||
- Streamer profiles with "All" assigned now receive all eligible channels
|
||||
- PostgreSQL btree index errors from logo URL validation during channel creation (#519)
|
||||
- M3U processing lock not releasing when no streams found during XC refresh, which also skipped VOD scanning (#449)
|
||||
- Float conversion errors by normalizing decimal format during VOD scanning (#526)
|
||||
- Direct URL ordering in M3U output to use correct stream sequence (#528)
|
||||
- Adding multiple M3U accounts without refreshing modified only the first entry (#397)
|
||||
- UI state bug where the frontend was not notified when a new playlist was created, leaving the "Fetching Groups" status stuck
|
||||
- Minor FFmpeg task and stream termination bugs in DVR module
|
||||
- Input escaping issue where single quotes were interpreted as code delimiters (#406)
|
||||
|
||||
## [0.10.3] - 2025-10-04
|
||||
|
||||
### Added
|
||||
|
||||
- Logo management UI improvements where Channel editor now uses the Logo Manager modal, allowing users to add logos by URL directly from the edit form - Thanks [@EmeraldPi](https://github.com/EmeraldPi)
|
||||
|
||||
### Changed
|
||||
|
||||
- FFmpeg base container rebuilt with improved native build support - Thanks [@EmeraldPi](https://github.com/EmeraldPi)
|
||||
- GitHub Actions workflow updated to use native runners instead of QEMU emulation for more reliable multi-architecture builds
|
||||
|
||||
### Fixed
|
||||
|
||||
- EPG parsing stability when large EPG files would not fully parse all channels. The parser now uses `iterparse` with `recover=True` for both channel and program-level parsing, ensuring complete and resilient XML processing even when Cloudflare injects additional root elements (a minimal `iterparse` sketch follows)
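
A minimal sketch of recovery-tolerant parsing with `lxml`; the tag handling is simplified and the file path is a placeholder:

```python
from lxml import etree

def iter_epg_elements(path):
    """Yield <channel> and <programme> elements, tolerating malformed XML."""
    # recover=True lets lxml parse past junk such as extra root elements
    # injected by CDN error pages instead of aborting the whole file.
    for _, elem in etree.iterparse(
        path, events=("end",), tag=("channel", "programme"), recover=True
    ):
        yield elem.tag, dict(elem.attrib)
        elem.clear()  # keep memory flat on very large guides

# Usage (path is a placeholder):
for tag, attrs in iter_epg_elements("/data/epg/source.xml"):
    pass  # map channels and store programmes here
```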
|
||||
|
||||
## [0.10.2] - 2025-10-03
|
||||
|
||||
### Added
|
||||
|
||||
- `m3u_id` parameter to `generate_hash_key` and updated related calls
|
||||
- Support for `x-tvg-url` and `url-tvg` generation with preserved query parameters (#345)
|
||||
- Exact Gracenote ID matching for EPG channel mapping (#291)
|
||||
- Recovery handling for XMLTV parser errors
|
||||
- `nice -n 5` added to Celery commands for better process priority management
|
||||
|
||||
### Changed
|
||||
|
||||
- Default M3U hash key changed to URL only for new installs
|
||||
- M3U profile retrieval now includes current connection counts and improved session handling during VOD session start
|
||||
- Improved stream selection logic when all profiles have reached max connections (retries faster)
|
||||
- XMLTV parsing refactored to use `iterparse` for `<tv>` element
|
||||
- Release workflow refactored to run on native architecture
|
||||
- Docker build system improvements:
|
||||
- Split install/build steps
|
||||
- Switch from Yarn → NPM
|
||||
- Updated to Node.js 24 (frontend build)
|
||||
- Improved ARM build reliability
|
||||
- Pushes to DockerHub with combined manifest
|
||||
- Removed redundant tags and improved build organization
|
||||
|
||||
### Fixed
|
||||
|
||||
- Cloudflare-hosted EPG feeds breaking parsing (#497)
|
||||
- Bulk channel creation now preserves the order channels were selected in (no longer reversed)
|
||||
- M3U hash settings not saving properly
|
||||
- VOD selecting the wrong M3U profile at session start (#461)
|
||||
- Redundant `h` removed from 12-hour time format in settings page
|
||||
|
||||
## [0.10.1] - 2025-09-24
|
||||
|
||||
### Added
|
||||
|
||||
- Virtualized rendering for TV Guide for smoother performance when displaying large guides - Thanks [@stlalpha](https://github.com/stlalpha) (#438)
|
||||
- Enhanced channel/program mapping to reuse EPG data across multiple channels that share the same TVG-ID
|
||||
|
||||
### Changed
|
||||
|
||||
- `URL` field length in EPGSource model increased from 200 → 1000 characters to support long URLs with tokens
|
||||
- Improved URL transformation logic with more advanced regex during profile refreshes
|
||||
- During EPG scanning, the first display name for a channel is now used instead of the last
|
||||
- `whiteSpace` style changed from `nowrap` → `pre` in StreamsTable for better text formatting
|
||||
|
||||
### Fixed
|
||||
|
||||
- EPG channel parsing failure when channel `URL` exceeded 500 characters by adding validation during scanning (#452)
|
||||
- Frontend incorrectly saving case-sensitive setting as a JSON string for stream filters
|
||||
|
||||
## [0.10.0] - 2025-09-18
|
||||
|
||||
### Added
|
||||
|
||||
- Channel Creation Improvements:
|
||||
- Ability to specify channel number during channel creation ([#377](https://github.com/Dispatcharr/Dispatcharr/issues/377), [#169](https://github.com/Dispatcharr/Dispatcharr/issues/169))
|
||||
- Asynchronous bulk channel creation from stream IDs with WebSocket progress updates
|
||||
- WebSocket notifications when channels are created
|
||||
- EPG Auto-Matching (Rewritten & Enhanced):
|
||||
- Completely refactored for improved accuracy and efficiency
|
||||
- Can now be applied to selected channels or triggered directly from the channel edit form
|
||||
- Uses stricter matching logic with support from sentence transformers
|
||||
- Added progress notifications during the matching process
|
||||
- Implemented memory cleanup for ML models after matching operations
|
||||
- Removed deprecated matching scripts
|
||||
- Logo & EPG Management:
|
||||
- Ability in channel edit form and bulk channel editor to set logos and names from assigned EPG (#157)
|
||||
- Improved logo update flow: frontend refreshes on changes, store updates after bulk changes, progress shown via notifications
|
||||
- Table Enhancements:
|
||||
- All tables now support adjustable column resizing (#295)
|
||||
- Channels and Streams tables persist column widths and center divider position to local storage
|
||||
- Improved sizing and layout for user-agents, stream profiles, logos, M3U, and EPG tables
|
||||
|
||||
### Changed
|
||||
|
||||
- Simplified VOD and series access: removed user-level restrictions on M3U accounts
|
||||
- Skip disabled M3U accounts when choosing streams during playback (#402)
|
||||
- Enhanced `UserViewSet` queryset to prefetch related channel profiles for better performance
|
||||
- Auto-focus added to EPG filter input
|
||||
- Category API retrieval now sorts by name
|
||||
- Increased default column size for EPG fields and removed max size on group/EPG columns
|
||||
- Standardized EPG column header to display `(EPG ID - TVG-ID)`
|
||||
|
||||
### Fixed
|
||||
|
||||
- Bug during VOD cleanup where all VODs not from the current M3U scan could be deleted
|
||||
- Logos not being set correctly in some cases
|
||||
- Bug where not setting a channel number caused an error when creating a channel (#422)
|
||||
- Bug where clicking "Add Channel" with a channel selected opened the edit form instead
|
||||
- Bug where a newly created channel could reuse streams from another channel due to form not clearing properly
|
||||
- VOD page not displaying correct order while changing pages
|
||||
- `ReferenceError: setIsInitialized is not defined` when logging into web UI
|
||||
- `cannot access local variable 'total_chunks' where it is not associated with a value` during VOD refresh
|
||||
|
||||
## [0.9.1] - 2025-09-13
|
||||
|
||||
### Fixed
|
||||
|
||||
- Broken migrations affecting the plugins system
|
||||
- DVR and plugin paths to ensure proper functionality (#381)
|
||||
|
||||
## [0.9.0] - 2025-09-12
|
||||
|
||||
### Added
|
||||
|
||||
- **Video on Demand (VOD) System:**
|
||||
- Complete VOD infrastructure with support for movies and TV series
|
||||
- Advanced VOD metadata including IMDB/TMDB integration, trailers, cast information
|
||||
- Smart VOD categorization with filtering by type (movies vs series)
|
||||
- Multi-provider VOD support with priority-based selection
|
||||
- VOD streaming proxy with connection tracking and statistics
|
||||
- Season/episode organization for TV series with expandable episode details
|
||||
- VOD statistics and monitoring integrated with existing stats dashboard
|
||||
- Optimized VOD parsing and category filtering
|
||||
- Dedicated VOD page with movies and series tabs
|
||||
- Rich VOD modals with backdrop images, trailers, and metadata
|
||||
- Episode management with season-based organization
|
||||
- Play button integration with external player support
|
||||
- VOD statistics cards similar to channel cards
|
||||
- **Plugin System:**
|
||||
- Extensible Plugin Framework - Developers can build custom functionality without modifying Dispatcharr core
|
||||
- Plugin Discovery & Management - Automatic detection of installed plugins, with enable/disable controls in the UI
|
||||
- Backend API Support - New APIs for listing, loading, and managing plugins programmatically
|
||||
- Plugin Registry - Structured models for plugin metadata (name, version, author, description)
|
||||
- UI Enhancements - Dedicated Plugins page in the admin panel for centralized plugin management
|
||||
- Documentation & Scaffolding - Initial documentation and scaffolding to accelerate plugin development
|
||||
- **DVR System:**
|
||||
- Refreshed DVR page for managing scheduled and completed recordings
|
||||
- Global pre/post padding controls surfaced in Settings
|
||||
- Playback support for completed recordings directly in the UI
|
||||
- DVR table view includes title, channel, time, and padding adjustments for clear scheduling
|
||||
- Improved population of DVR listings, fixing intermittent blank screen issues
|
||||
- Comskip integration for automated commercial detection and skipping in recordings
|
||||
- User-configurable comskip toggle in Settings
|
||||
- **Enhanced Channel Management:**
|
||||
- EPG column added to channels table for better organization
|
||||
- EPG filtering by channel assignment and source name
|
||||
- Channel batch renaming for efficient bulk channel name updates
|
||||
- Auto channel sync improvements with custom stream profile override
|
||||
- Channel logo management overhaul with background loading
|
||||
- Date and time format customization in settings - Thanks [@Biologisten](https://github.com/Biologisten)
|
||||
- Auto-refresh intervals for statistics with better UI controls
|
||||
- M3U profile notes field for better organization
|
||||
- XC account information retrieval and display with account refresh functionality and notifications
|
||||
|
||||
### Changed
|
||||
|
||||
- JSONB field conversion for custom properties (replacing text fields) for better performance
|
||||
- Database encoding converted from ASCII to UTF8 for better character support
|
||||
- Batch processing for M3U updates and channel operations
|
||||
- Query optimization with `prefetch_related` to eliminate N+1 queries (see the sketch after this list)
|
||||
- Reduced API calls by fetching all data at once instead of per-category
|
||||
- Buffering speed setting now affects UI indicators
|
||||
- Swagger endpoint accessible with or without trailing slash
|
||||
- EPG source names displayed before channel names in edit forms
|
||||
- Logo loading improvements with background processing
|
||||
- Channel card enhancements with better status indicators
|
||||
- Group column width optimization
|
||||
- Better content-type detection for streams
|
||||
- Improved headers with content-range and total length
|
||||
- Enhanced user-agent handling for M3U accounts
|
||||
- HEAD request support with connection keep-alive
|
||||
- Progress tracking improvements for clients with new sessions
|
||||
- Server URL length increased to 1000 characters for token support
|
||||
- Prettier formatting applied to all frontend code
|
||||
- String quote standardization and code formatting improvements
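
For context, the generic N+1 pattern that `prefetch_related` removes; the `streams` related name is an assumption for illustration:

```python
# Without prefetching: 1 query for channels + 1 query per channel for its streams.
for channel in Channel.objects.all():
    names = [s.name for s in channel.streams.all()]   # hits the DB every iteration

# With prefetching: 2 queries total, regardless of how many channels exist.
for channel in Channel.objects.prefetch_related("streams"):
    names = [s.name for s in channel.streams.all()]   # served from the prefetch cache
```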
|
||||
|
||||
### Fixed
|
||||
|
||||
- Logo loading issues in channel edit forms resolved
|
||||
- M3U download error handling and user feedback improved
|
||||
- Unique constraint violations fixed during stream rehashing
|
||||
- Channel stats fetching moved from Celery beat task to configurable API calls
|
||||
- Speed badge colors now use configurable buffering speed setting
|
||||
- Channel cards properly close when streams stop
|
||||
- "Active Channels" label updated to "Active Streams"
|
||||
- WebSocket updates for client connect/disconnect events
|
||||
- Null value handling before database saves
|
||||
- Empty string scrubbing for cleaner data
|
||||
- Group relationship cleanup for removed M3U groups
|
||||
- Logo cleanup for unused files with proper batch processing
|
||||
- Recordings starting 5 minutes after the show begins (#102)
|
||||
|
||||
### Closed
|
||||
|
||||
- [#350](https://github.com/Dispatcharr/Dispatcharr/issues/350): Allow DVR recordings to be played via the UI
|
||||
- [#349](https://github.com/Dispatcharr/Dispatcharr/issues/349): DVR screen doesn't populate consistently
|
||||
- [#340](https://github.com/Dispatcharr/Dispatcharr/issues/340): Global find and replace
|
||||
- [#311](https://github.com/Dispatcharr/Dispatcharr/issues/311): Stat's "Current Speed" does not reflect "Buffering Speed" setting
|
||||
- [#304](https://github.com/Dispatcharr/Dispatcharr/issues/304): Name ignored when uploading logo
|
||||
- [#300](https://github.com/Dispatcharr/Dispatcharr/issues/300): Updating Logo throws error
|
||||
- [#286](https://github.com/Dispatcharr/Dispatcharr/issues/286): 2 Value/Column EPG in Channel Edit
|
||||
- [#280](https://github.com/Dispatcharr/Dispatcharr/issues/280): Add general text field in M3U/XS profiles
|
||||
- [#190](https://github.com/Dispatcharr/Dispatcharr/issues/190): Show which stream is being used and allow it to be altered in channel properties
|
||||
- [#155](https://github.com/Dispatcharr/Dispatcharr/issues/155): Additional column with EPG assignment information / Allow filtering by EPG assignment
|
||||
- [#138](https://github.com/Dispatcharr/Dispatcharr/issues/138): Bulk Channel Edit Functions
|
||||
|
||||
## [0.8.0] - 2025-08-19
|
||||
|
||||
### Added
|
||||
|
||||
- Channel & Stream Enhancements:
|
||||
- Preview streams under a channel, with stream logo and name displayed in the channel card
|
||||
- Advanced stats for channel streams
|
||||
- Stream qualities displayed in the channel table
|
||||
- Stream stats now saved to the database
|
||||
- URL badges can now be clicked to copy stream links to the clipboard
|
||||
- M3U Filtering for Streams:
|
||||
- Streams for an M3U account can now be filtered using flexible parameters
|
||||
- Apply filters based on stream name, group title, or stream URL (via regex)
|
||||
- Filters support both inclusion and exclusion logic for precise control
|
||||
- Multiple filters can be layered with a priority order for complex rules (a filtering sketch follows this list)
|
||||
- Ability to reverse the sort order for auto channel sync
|
||||
- Custom validator for URL fields now allows non-FQDN hostnames (#63)
|
||||
- Membership creation added in `UpdateChannelMembershipAPIView` if not found (#275)
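
A condensed sketch of layered include/exclude regex filtering as described above; the filter attributes (`field`, `pattern`, `exclude`, `priority`) are assumed names, not the actual model:

```python
import re

def stream_passes_filters(stream, filters):
    """Apply ordered include/exclude regex filters to one parsed stream."""
    allowed = True
    for f in sorted(filters, key=lambda f: f.priority):
        value = {
            "name": stream["name"],
            "group": stream["group_title"],
            "url": stream["url"],
        }[f.field]
        if re.search(f.pattern, value, re.IGNORECASE):
            # A matching filter sets the decision; later filters can override it.
            allowed = not f.exclude
    return allowed
```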
|
||||
|
||||
### Changed
|
||||
|
||||
- Bumped Postgres to version 17
|
||||
- Updated dependencies in `requirements.txt` for compatibility and improvements
|
||||
- Improved chunked extraction to prevent memory issues - Thanks [@pantherale0](https://github.com/pantherale0)
|
||||
|
||||
### Fixed
|
||||
|
||||
- XML escaping for channel ID in `generate_dummy_epg` function
|
||||
- Bug where creating a channel from a stream not displayed in the table used an invalid stream name
|
||||
- Debian install script - Thanks [@deku-m](https://github.com/deku-m)
|
||||
|
||||
## [0.7.1] - 2025-07-29
|
||||
|
||||
### Added
|
||||
|
||||
- Natural sorting for channel names during auto channel sync (see the sketch after this list)
|
||||
- Ability to sort auto sync order by provider order (default), channel name, TVG ID, or last updated time
|
||||
- Auto-created channels can now be assigned to specific channel profiles (#255)
|
||||
- Channel profiles are now fetched automatically after a successful M3U refresh
|
||||
- Uses only whole numbers when assigning the next available channel number
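
"Natural" sorting means numeric runs compare as numbers, so "Channel 10" sorts after "Channel 2". A generic key function, not necessarily the project's implementation:

```python
import re

def natural_key(name):
    """Split a name into text and number chunks for human-friendly ordering."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

channels = ["Channel 10", "Channel 2", "channel 1"]
print(sorted(channels, key=natural_key))
# ['channel 1', 'Channel 2', 'Channel 10']
```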
|
||||
|
||||
### Changed
|
||||
|
||||
- Logo upload behavior changed to wait for the Create button before saving
|
||||
- Uses the channel name as the display name in EPG output for improved readability
|
||||
- Ensures channels are only added to a selected profile if one is explicitly chosen
|
||||
|
||||
### Fixed
|
||||
|
||||
- Logo Manager prevents redundant messages from the file scanner by properly tracking uploaded logos in Redis
|
||||
- Fixed an issue preventing logo uploads via URL
|
||||
- Adds internal support for assigning multiple profiles via API
|
||||
|
||||
## [0.7.0] - 2025-07-19
|
||||
|
||||
### Added
|
||||
|
||||
- **Logo Manager:**
|
||||
- Complete logo management system with filtering, search, and usage tracking
|
||||
- Upload logos directly through the UI
|
||||
- Automatically scan `/data/logos` for existing files (#69)
|
||||
- View which channels use each logo
|
||||
- Bulk delete unused logos with cleanup
|
||||
- Enhanced display with hover effects and improved sizing
|
||||
- Improved logo fetching with timeouts and user-agent headers to prevent hanging
|
||||
- **Group Manager:**
|
||||
- Comprehensive group management interface (#128)
|
||||
- Search and filter groups with ease
|
||||
- Bulk operations for cleanup
|
||||
- Filter channels by group membership
|
||||
- Automatically clean up unused groups
|
||||
- **Auto Channel Sync:**
|
||||
- Automatic channel synchronization from M3U sources (#147)
|
||||
- Configure auto-sync settings per M3U account group
|
||||
- Set starting channel numbers by group
|
||||
- Override group names during sync
|
||||
- Apply regex match and replace for channel names
|
||||
- Filter channels by regex match on stream name
|
||||
- Track auto-created vs manually added channels
|
||||
- Smart updates preserve UUIDs and existing links
|
||||
- Stream rehashing with WebSocket notifications
|
||||
- Better error handling for blocked rehash attempts
|
||||
- Lock acquisition to prevent conflicts
|
||||
- Real-time progress tracking
|
||||
|
||||
### Changed
|
||||
|
||||
- Persist table page sizes in local storage (streams & channels)
|
||||
- Smoother pagination and improved UX
|
||||
- Fixed z-index issues during table refreshes
|
||||
- Improved XC client with connection pooling
|
||||
- Better error handling for API and JSON decode failures
|
||||
- Smarter handling of empty content and blocking responses
|
||||
- Improved EPG XML generation with richer metadata
|
||||
- Better support for keywords, languages, ratings, and credits
|
||||
- Better form layouts and responsive buttons
|
||||
- Enhanced confirmation dialogs and feedback
|
||||
|
||||
### Fixed
|
||||
|
||||
- Channel table now correctly restores page size from local storage
|
||||
- Resolved WebSocket message formatting issues
|
||||
- Fixed logo uploads and edits
|
||||
- Corrected ESLint issues across the codebase
|
||||
- Fixed HTML validation errors in menus
|
||||
- Optimized logo fetching with proper timeouts and headers ([#101](https://github.com/Dispatcharr/Dispatcharr/issues/101), [#217](https://github.com/Dispatcharr/Dispatcharr/issues/217))
|
||||
|
||||
## [0.6.2] - 2025-07-10
|
||||
|
||||
### Fixed
|
||||
|
||||
- **Streaming & Connection Stability:**
|
||||
- Provider timeout issues - Slow but responsive providers no longer cause channel lockups
|
||||
- Added chunk and process timeouts - Prevents hanging during stream processing and transcoding
|
||||
- Improved connection handling - Enhanced process management and socket closure detection for safer streaming
|
||||
- Enhanced health monitoring - Health monitor now properly notifies main thread without attempting reconnections
|
||||
- **User Interface & Experience:**
|
||||
- Touch screen compatibility - Web player can now be properly closed on touch devices
|
||||
- Improved user management - Added support for first/last names, login tracking, and standardized table formatting
|
||||
- Improved logging - Enhanced log messages with channel IDs for better debugging
|
||||
- Code cleanup - Removed unused imports, variables, and dead links
|
||||
|
||||
## [0.6.1] - 2025-06-27
|
||||
|
||||
### Added
|
||||
|
||||
- Dynamic parameter options for M3U and EPG URLs (#207)
|
||||
- Support for 'num' property in channel number extraction (fixes channel creation from XC streams not having channel numbers)
|
||||
|
||||
### Changed
|
||||
|
||||
- EPG generation now uses streaming responses to prevent client timeouts during large EPG file generation (#179)
|
||||
- Improved reliability when downloading EPG data from external sources
|
||||
- Better program positioning - Programs that start before the current view now have proper text positioning (#223)
|
||||
- Better mobile support - Improved sizing and layout for mobile devices across multiple tables
|
||||
- Responsive stats cards - Better calculation for card layout and improved filling on different screen sizes (#218)
|
||||
- Enhanced table rendering - M3U and EPG tables now render better on small screens
|
||||
- Optimized spacing - Removed unnecessary padding and blank space throughout the interface
|
||||
- Better settings layout - Improved minimum widths and mobile support for settings pages
|
||||
- Always show 2 decimal places for FFmpeg speed values
|
||||
|
||||
### Fixed
|
||||
|
||||
- TV Guide now properly filters channels based on selected channel group
|
||||
- Resolved loading issues - Fixed channels and groups not loading correctly in the TV Guide
|
||||
- Stream profile fixes - Resolved issue with setting stream profile to 'use default'
|
||||
- Single channel editing - When only one channel is selected, the correct channel editor now opens
|
||||
- Bulk edit improvements - Added "no change" options for bulk editing operations
|
||||
- Bulk channel editor now properly saves changes (#222)
|
||||
- Link form improvements - Better sizing and rendering of link forms with proper layering
|
||||
- Confirmation dialogs added with warning suppression for user deletion, channel profile deletion, and M3U profile deletion
|
||||
|
||||
## [0.6.0] - 2025-06-19
|
||||
|
||||
### Added
|
||||
|
||||
- **User Management & Access Control:**
|
||||
- Complete user management system with user levels and channel access controls
|
||||
- Network access control with CIDR validation and IP-based restrictions
|
||||
- Logout functionality and improved loading states for authenticated users
|
||||
- **Xtream Codes Output:**
|
||||
- Xtream Codes support enables easy output to IPTV clients (#195)
|
||||
- **Stream Management & Monitoring:**
|
||||
- FFmpeg statistics integration - Real-time display of video/audio codec info, resolution, speed, and stream type
|
||||
- Automatic stream switching when buffering is detected
|
||||
- Enhanced stream profile management with better connection tracking
|
||||
- Improved stream state detection, including buffering as an active state
|
||||
- **Channel Management:**
|
||||
- Bulk channel editing for channel group, stream profile, and user access level
|
||||
- **Enhanced M3U & EPG Features:**
|
||||
- Dynamic `tvg-id` source selection for M3U and EPG (`tvg_id`, `gracenote`, or `channel_number`)
|
||||
- Direct URL support in M3U output via `direct=true` parameter
|
||||
- Flexible EPG output with a configurable day limit via `days=#` parameter
|
||||
- Support for LIVE tags and `dd_progrid` numbering in EPG processing
|
||||
- Proxy settings configuration with UI integration and improved validation
|
||||
- Stream retention controls - Set stale stream days to `0` to disable retention completely (#123)
|
||||
- Tuner flexibility - Minimum of 1 tuner now allowed for HDHomeRun output
|
||||
- Fallback IP geolocation provider (#127) - Thanks [@maluueu](https://github.com/maluueu)
|
||||
- POST method now allowed for M3U output, enabling support for Smarters IPTV - Thanks [@maluueu](https://github.com/maluueu)
|
||||
|
||||
### Changed
|
||||
|
||||
- Improved channel cards with better status indicators and tooltips
|
||||
- Clearer error messaging for unsupported codecs in the web player
|
||||
- Network access warnings to prevent accidental lockouts
|
||||
- Case-insensitive M3U parsing for improved compatibility
|
||||
- Better EPG processing with improved channel matching
|
||||
- Replaced Mantine React Table with custom implementations
|
||||
- Improved tooltips and parameter wrapping for cleaner interfaces
|
||||
- Better badge colors and status indicators
|
||||
- Stronger form validation and user feedback
|
||||
- Streamlined settings management using JSON configs
|
||||
- Default value population for clean installs
|
||||
- Environment-specific configuration support for multiple deployment scenarios
|
||||
|
||||
### Fixed
|
||||
|
||||
- FFmpeg process cleanup - Ensures FFmpeg fully exits before marking connection closed
|
||||
- Resolved stream profile update issues in statistics display
|
||||
- Fixed M3U profile ID behavior when switching streams
|
||||
- Corrected stream switching logic - Redis is only updated on successful switches
|
||||
- Fixed connection counting - Excludes the current profile from available connection counts
|
||||
- Fixed custom stream channel creation when no group is assigned (#122)
|
||||
- Resolved EPG auto-matching deadlock when many channels match simultaneously - Thanks [@xham3](https://github.com/xham3)
|
||||
|
||||
## [0.5.2] - 2025-06-03
|
||||
|
||||
### Added
|
||||
|
||||
- Direct Logo Support: Added ability to bypass logo caching by adding `?cachedlogos=false` to the end of M3U and EPG URLs (#109)
|
||||
|
||||
### Changed
|
||||
|
||||
- Dynamic Resource Management: Auto-scales Celery workers based on demand, reducing overall memory and CPU usage while still allowing high-demand tasks to complete quickly (#111)
|
||||
- Enhanced Logging:
|
||||
- Improved logging for M3U processing
|
||||
- Better error output from XML parser for easier troubleshooting
|
||||
|
||||
### Fixed
|
||||
|
||||
- XMLTV Parsing: Added `remove_blank_text=True` to lxml parser to prevent crashes with poorly formatted XMLTV files (#115)
|
||||
- Stats Display: Refactored channel info retrieval for safer decoding and improved error logging, fixing intermittent issues with statistics not displaying properly
|
||||
|
||||
## [0.5.1] - 2025-05-28
|
||||
|
||||
### Added
|
||||
|
||||
- Support for ZIP-compressed EPG files
|
||||
- Automatic extraction of compressed files after downloading
|
||||
- Intelligent file type detection for EPG sources:
|
||||
- Reads the first bits of files to determine file type
|
||||
- If a compressed file is detected, it peeks inside to find XML files (a detection sketch follows this list)
|
||||
- Random descriptions for dummy channels in the TV guide
|
||||
- Support for decimal channel numbers (converted from integer to float) - Thanks [@MooseyOnTheLoosey](https://github.com/MooseyOnTheLoosey)
|
||||
- Show channels without EPG data in TV Guide
|
||||
- Profile name added to HDHR-friendly name and device ID (allows adding multiple HDHR profiles to Plex)
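
An illustrative version of the magic-byte sniffing described above, covering the common gzip/ZIP/plain-XML cases; the exact detection code may differ:

```python
def detect_epg_file_type(path):
    """Guess the payload type from the first bytes of the downloaded file."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head[:2] == b"\x1f\x8b":
        return "gzip"            # gzip-compressed XMLTV
    if head[:4] == b"PK\x03\x04":
        return "zip"             # ZIP archive: peek inside for an .xml member
    if head.lstrip()[:1] == b"<":
        return "xml"             # plain XMLTV
    return "unknown"
```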
|
||||
|
||||
### Changed
|
||||
|
||||
- About 30% faster EPG processing
|
||||
- Significantly improved memory usage for large EPG files
|
||||
- Improved timezone handling
|
||||
- Cleaned up cached files when deleting EPG sources
|
||||
- Performance improvements when processing extremely large M3U files
|
||||
- Improved batch processing with better cleanup
|
||||
- Enhanced WebSocket update handling for large operations
|
||||
- Redis configured for better performance (no longer saves to disk)
|
||||
- Improved memory management for Celery tasks
|
||||
- Separated beat schedules with a file scanning interval set to 20 seconds
|
||||
- Improved authentication error handling with user redirection to the login page
|
||||
- Improved channel card formatting for different screen resolutions (can now actually read the channel stats card on mobile)
|
||||
- Decreased line height for status messages in the EPG and M3U tables for better appearance on smaller screens
|
||||
- Updated the EPG form to match the M3U form for consistency
|
||||
|
||||
### Fixed
|
||||
|
||||
- Profile selection issues that previously caused WebUI crashes
|
||||
- Issue with `tvc-guide-id` (Gracenote ID) in bulk channel creation
|
||||
- Bug when uploading an M3U with the default user-agent set
|
||||
- Bug where multiple channel initializations could occur, causing zombie streams and performance issues (choppy streams)
|
||||
- Better error handling for buffer overflow issues
|
||||
- Fixed various memory leaks
|
||||
- Bug in the TV Guide that would crash the web UI when selecting a profile to filter by
|
||||
- Multiple minor bug fixes and code cleanup
|
||||
|
||||
## [0.5.0] - 2025-05-15
|
||||
|
||||
### Added
|
||||
|
||||
- **XtreamCodes Support:**
|
||||
- Initial XtreamCodes client support
|
||||
- Option to add EPG source with XC account
|
||||
- Improved XC login and authentication
|
||||
- Improved error handling for XC connections
|
||||
- **Hardware Acceleration:**
|
||||
- Detection of hardware acceleration capabilities with recommendations (available in logs after startup)
|
||||
- Improved support for NVIDIA, Intel (QSV), and VAAPI acceleration methods
|
||||
- Added necessary drivers and libraries for hardware acceleration
|
||||
- Automatically assigns required permissions for hardware acceleration
|
||||
- Thanks to [@BXWeb](https://github.com/BXWeb), @chris.r3x, [@rykr](https://github.com/rykr), @j3111, [@jesmannstl](https://github.com/jesmannstl), @jimmycarbone, [@gordlaben](https://github.com/gordlaben), [@roofussummers](https://github.com/roofussummers), [@slamanna212](https://github.com/slamanna212)
|
||||
- **M3U and EPG Management:**
|
||||
- Enhanced M3U profile creation with live regex results
|
||||
- Added stale stream detection with configurable thresholds
|
||||
- Improved status messaging for M3U and EPG operations:
|
||||
- Shows download speed with estimated time remaining
|
||||
- Shows parsing time remaining
|
||||
- Added "Pending Setup" status for M3U's requiring group selection
|
||||
- Improved handling of M3U group filtering
|
||||
- **UI Improvements:**
|
||||
- Added configurable table sizes
|
||||
- Enhanced video player with loading and error states
|
||||
- Improved WebSocket connection handling with authentication
|
||||
- Added confirmation dialogs for critical operations
|
||||
- Auto-assign numbers now configurable by selection
|
||||
- Added bulk editing of channel profile membership (select multiple channels, then click the profile toggle on any selected channel to apply the change to all)
|
||||
- **Infrastructure & Performance:**
|
||||
- Standardized and improved the logging system
|
||||
- New environment variable to set logging level: `DISPATCHARR_LOG_LEVEL` (default: `INFO`, available: `TRACE`, `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`)
|
||||
- Introduced a new base image build process: updates are now significantly smaller (typically under 15MB unless the base image changes)
|
||||
- Improved environment variable handling in container
|
||||
- Support for Gracenote ID (`tvc-guide-stationid`) - Thanks [@rykr](https://github.com/rykr)
|
||||
- Improved file upload handling with size limits removed
|
||||
|
||||
### Fixed
|
||||
|
||||
- Issues with profiles not loading correctly
|
||||
- Problems with stream previews in tables
|
||||
- Channel creation and editing workflows
|
||||
- Logo display issues
|
||||
- WebSocket connection problems
|
||||
- Multiple React-related errors and warnings
|
||||
- Pagination and filtering issues in tables
|
||||
|
||||
## [0.4.1] - 2025-05-01
|
||||
|
||||
### Changed
|
||||
|
||||
- Optimized uWSGI configuration settings for better server performance
|
||||
- Improved asynchronous processing by converting additional timers to gevent
|
||||
- Enhanced EPG (Electronic Program Guide) downloading with proper user agent headers
|
||||
|
||||
### Fixed
|
||||
|
||||
- Issue with "add streams to channel" functionality to correctly follow disabled state logic
|
||||
|
||||
## [0.4.0] - 2025-05-01
|
||||
|
||||
### Added
|
||||
|
||||
- URL copy buttons for stream and channel URLs
|
||||
- Manual stream switching ability
|
||||
- EPG auto-match notifications - Users now receive feedback about how many matches were found
|
||||
- Informative tooltips throughout the interface, including stream profiles and user-agent details
|
||||
- Display of connected time for each client
|
||||
- Current M3U profile information to stats
|
||||
- Better logging for which channel clients are getting chunks from
|
||||
|
||||
### Changed
|
||||
|
||||
- Table System Rewrite: Completely refactored channel and stream tables for dramatically improved performance with large datasets
|
||||
- Improved Concurrency: Replaced time.sleep with gevent.sleep for better performance when handling multiple streams
|
||||
- Improved table interactions:
|
||||
- Restored alternating row colors and hover effects
|
||||
- Added shift-click support for multiple row selection
|
||||
- Preserved drag-and-drop functionality
|
||||
- Adjusted logo display to prevent layout shifts with different sized logos
|
||||
- Improved sticky headers in tables
|
||||
- Fixed spacing and padding in EPG and M3U tables for better readability on smaller displays
|
||||
- Stream URL handling improved for search/replace patterns
|
||||
- Enhanced stream lock management for better reliability
|
||||
- Added stream name to channel status for better visibility
|
||||
- Properly track current stream ID during stream switches
|
||||
- Improved EPG cache handling and cleanup of old cache files
|
||||
- Corrected content type for M3U file (using m3u instead of m3u8)
|
||||
- Fixed logo URL handling in M3U generation
|
||||
- Enhanced tuner count calculation to include only active M3U accounts
|
||||
- Increased thread stack size in uwsgi configuration
|
||||
- Changed proxy to use uwsgi socket
|
||||
- Added build timestamp to version information
|
||||
- Reduced excessive logging during M3U/EPG file importing
|
||||
- Improved store variable handling to increase application efficiency
|
||||
- Frontend now being built by Yarn instead of NPM
|
||||
|
||||
### Fixed
|
||||
|
||||
- Issues with channel statistics randomly not working
|
||||
- Stream ordering in channel selection
|
||||
- M3U profile name added to stream names for better identification
|
||||
- Channel form not updating some properties after saving
|
||||
- Issue with setting logos to default
|
||||
- Channel creation from streams
|
||||
- Channel group saving
|
||||
- Improved error handling throughout the application
|
||||
- Bugs in deleting stream profiles
|
||||
- Resolved mimetype detection issues
|
||||
- Fixed form display issues
|
||||
- Added proper requerying after form submissions and item deletions
|
||||
- Bug overwriting tvg-id when loading TV Guide
|
||||
- Bug that prevented large M3U and EPG files from uploading
|
||||
- Typo in Stream Profile header column for Description - Thanks [@LoudSoftware](https://github.com/LoudSoftware)
|
||||
- Typo in m3u input processing (tv-chno instead of tvg-chno) - Thanks @www2a
|
||||
|
||||
## [0.3.3] - 2025-04-18
|
||||
|
||||
### Fixed
|
||||
|
||||
- Issue with dummy EPG calculating hours above 24, ensuring time values remain within valid 24-hour format
|
||||
- Auto import functionality to properly process old files that hadn't been imported yet, rather than ignoring them
|
||||
|
||||
## [0.3.2] - 2025-04-16
|
||||
|
||||
### Fixed
|
||||
|
||||
- Issue with stream ordering for channels - resolved problem where stream objects were incorrectly processed when assigning order in channel configurations
|
||||
|
||||
## [0.3.1] - 2025-04-16
|
||||
|
||||
### Added
|
||||
|
||||
- Key to navigation links in sidebar to resolve DOM errors when loading web UI
|
||||
- Channels that are set to 'dummy' epg to the TV Guide
|
||||
|
||||
### Fixed
|
||||
|
||||
- Issue preventing dummy EPG from being set
|
||||
- Channel numbers not saving properly
|
||||
- EPGs not refreshing when linking EPG to channel
|
||||
- Improved error messages in notifications
|
||||
|
||||
## [0.3.0] - 2025-04-15
|
||||
|
||||
### Added
|
||||
|
||||
- URL validation for redirect profile:
|
||||
- Validates stream URLs before redirecting clients
|
||||
- Prevents clients from being redirected to unavailable streams
|
||||
- Now tries alternate streams when primary stream validation fails
|
||||
- Dynamic tuner configuration for HDHomeRun devices:
|
||||
- TunerCount is now dynamically created based on profile max connections
|
||||
- Sets a minimum of 2 tuners, up to 10 for unlimited profiles (see the sketch after this list)
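
The described behaviour reduces to a small clamp; this is a hedged reconstruction, and treating 0 as "unlimited" is an assumption rather than confirmed behaviour:

```python
def hdhr_tuner_count(max_connections):
    """Derive the advertised HDHomeRun tuner count from a profile's limit."""
    if not max_connections:            # assumption: 0/None means "unlimited"
        return 10                      # unlimited profiles advertise 10 tuners
    return max(2, max_connections)     # never fewer than 2 tuners
```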
|
||||
|
||||
### Changed
|
||||
|
||||
- More robust stream switching:
|
||||
- Clients now wait properly if a stream is in the switching state
|
||||
- Improved reliability during stream transitions
|
||||
- Performance enhancements:
|
||||
- Increased workers and threads for uwsgi for better concurrency
|
||||
|
||||
### Fixed
|
||||
|
||||
- Issue with multiple dead streams in a row - System now properly handles cases where several sequential streams are unavailable
|
||||
- Broken links to compose files in documentation
|
||||
|
||||
## [0.2.1] - 2025-04-13
|
||||
|
||||
### Fixed
|
||||
|
||||
- Stream preview (not channel)
|
||||
- Streaming wouldn't work when using default user-agent for an M3U
|
||||
- WebSockets and M3U profile form issues
|
||||
|
||||
## [0.2.0] - 2025-04-12
|
||||
|
||||
Initial beta public release.
|
||||
|
|
@@ -20,30 +20,88 @@ class TokenObtainPairView(TokenObtainPairView):
     def post(self, request, *args, **kwargs):
         # Custom logic here
         if not network_access_allowed(request, "UI"):
+            # Log blocked login attempt due to network restrictions
+            from core.utils import log_system_event
+            username = request.data.get("username", 'unknown')
+            client_ip = request.META.get('REMOTE_ADDR', 'unknown')
+            user_agent = request.META.get('HTTP_USER_AGENT', 'unknown')
+            log_system_event(
+                event_type='login_failed',
+                user=username,
+                client_ip=client_ip,
+                user_agent=user_agent,
+                reason='Network access denied',
+            )
             return Response({"error": "Forbidden"}, status=status.HTTP_403_FORBIDDEN)

-        # Get the response from the parent class first
-        response = super().post(request, *args, **kwargs)
+        username = request.data.get("username")

-        # If login was successful, update last_login
-        if response.status_code == 200:
-            username = request.data.get("username")
-            if username:
-                from django.utils import timezone
-                try:
-                    user = User.objects.get(username=username)
-                    user.last_login = timezone.now()
-                    user.save(update_fields=['last_login'])
-                except User.DoesNotExist:
-                    pass  # User doesn't exist, but login somehow succeeded
+        # Log login attempt
+        from core.utils import log_system_event
+        client_ip = request.META.get('REMOTE_ADDR', 'unknown')
+        user_agent = request.META.get('HTTP_USER_AGENT', 'unknown')

-        return response
+        try:
+            response = super().post(request, *args, **kwargs)
+
+            # If login was successful, update last_login and log success
+            if response.status_code == 200:
+                if username:
+                    from django.utils import timezone
+                    try:
+                        user = User.objects.get(username=username)
+                        user.last_login = timezone.now()
+                        user.save(update_fields=['last_login'])
+
+                        # Log successful login
+                        log_system_event(
+                            event_type='login_success',
+                            user=username,
+                            client_ip=client_ip,
+                            user_agent=user_agent,
+                        )
+                    except User.DoesNotExist:
+                        pass  # User doesn't exist, but login somehow succeeded
+            else:
+                # Log failed login attempt
+                log_system_event(
+                    event_type='login_failed',
+                    user=username or 'unknown',
+                    client_ip=client_ip,
+                    user_agent=user_agent,
+                    reason='Invalid credentials',
+                )
+
+            return response
+
+        except Exception as e:
+            # If parent class raises an exception (e.g., validation error), log failed attempt
+            log_system_event(
+                event_type='login_failed',
+                user=username or 'unknown',
+                client_ip=client_ip,
+                user_agent=user_agent,
+                reason=f'Authentication error: {str(e)[:100]}',
+            )
+            raise  # Re-raise the exception to maintain normal error flow
 class TokenRefreshView(TokenRefreshView):
     def post(self, request, *args, **kwargs):
         # Custom logic here
         if not network_access_allowed(request, "UI"):
+            # Log blocked token refresh attempt due to network restrictions
+            from core.utils import log_system_event
+            client_ip = request.META.get('REMOTE_ADDR', 'unknown')
+            user_agent = request.META.get('HTTP_USER_AGENT', 'unknown')
+            log_system_event(
+                event_type='login_failed',
+                user='token_refresh',
+                client_ip=client_ip,
+                user_agent=user_agent,
+                reason='Network access denied (token refresh)',
+            )
             return Response({"error": "Unauthorized"}, status=status.HTTP_403_FORBIDDEN)

         return super().post(request, *args, **kwargs)
@@ -80,6 +138,15 @@ def initialize_superuser(request):
 class AuthViewSet(viewsets.ViewSet):
     """Handles user login and logout"""

+    def get_permissions(self):
+        """
+        Login doesn't require auth, but logout does
+        """
+        if self.action == 'logout':
+            from rest_framework.permissions import IsAuthenticated
+            return [IsAuthenticated()]
+        return []
+
     @swagger_auto_schema(
         operation_description="Authenticate and log in a user",
         request_body=openapi.Schema(
@@ -100,6 +167,11 @@ class AuthViewSet(viewsets.ViewSet):
         password = request.data.get("password")
         user = authenticate(request, username=username, password=password)

+        # Get client info for logging
+        from core.utils import log_system_event
+        client_ip = request.META.get('REMOTE_ADDR', 'unknown')
+        user_agent = request.META.get('HTTP_USER_AGENT', 'unknown')
+
         if user:
             login(request, user)
             # Update last_login timestamp
@ -107,6 +179,14 @@ class AuthViewSet(viewsets.ViewSet):
|
|||
user.last_login = timezone.now()
|
||||
user.save(update_fields=['last_login'])
|
||||
|
||||
# Log successful login
|
||||
log_system_event(
|
||||
event_type='login_success',
|
||||
user=username,
|
||||
client_ip=client_ip,
|
||||
user_agent=user_agent,
|
||||
)
|
||||
|
||||
return Response(
|
||||
{
|
||||
"message": "Login successful",
|
||||
|
|
@ -118,6 +198,15 @@ class AuthViewSet(viewsets.ViewSet):
|
|||
},
|
||||
}
|
||||
)
|
||||
|
||||
# Log failed login attempt
|
||||
log_system_event(
|
||||
event_type='login_failed',
|
||||
user=username or 'unknown',
|
||||
client_ip=client_ip,
|
||||
user_agent=user_agent,
|
||||
reason='Invalid credentials',
|
||||
)
|
||||
return Response({"error": "Invalid credentials"}, status=400)
|
||||
|
||||
@swagger_auto_schema(
|
||||
|
|
@ -126,6 +215,19 @@ class AuthViewSet(viewsets.ViewSet):
|
|||
)
|
||||
def logout(self, request):
|
||||
"""Logs out the authenticated user"""
|
||||
# Log logout event before actually logging out
|
||||
from core.utils import log_system_event
|
||||
username = request.user.username if request.user and request.user.is_authenticated else 'unknown'
|
||||
client_ip = request.META.get('REMOTE_ADDR', 'unknown')
|
||||
user_agent = request.META.get('HTTP_USER_AGENT', 'unknown')
|
||||
|
||||
log_system_event(
|
||||
event_type='logout',
|
||||
user=username,
|
||||
client_ip=client_ip,
|
||||
user_agent=user_agent,
|
||||
)
|
||||
|
||||
logout(request)
|
||||
return Response({"message": "Logout successful"})
|
||||
|
||||
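The login, logout, and token-refresh handlers above all report through core.utils.log_system_event. That helper's implementation is not included in this changeset; as a rough, hypothetical sketch only (storage details assumed), it matches the call pattern used here: one positional event type plus arbitrary keyword details.

# Hypothetical sketch of core.utils.log_system_event -- the real implementation
# is not shown in this diff. The signature mirrors the calls above.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def log_system_event(event_type, **details):
    """Record one system event; details are free-form keyword arguments."""
    record = {
        "event_type": event_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "details": details,
    }
    # The real helper presumably persists a row consumed by the system event
    # viewer; this sketch only emits structured JSON to the log.
    logger.info("system_event %s", json.dumps(record, default=str))

# Example mirroring the failed-login path above:
log_system_event(
    "login_failed",
    user="unknown",
    client_ip="203.0.113.7",
    user_agent="curl/8.5",
    reason="Invalid credentials",
)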
@ -125,7 +125,7 @@ class StreamViewSet(viewsets.ModelViewSet):
|
|||
filter_backends = [DjangoFilterBackend, SearchFilter, OrderingFilter]
|
||||
filterset_class = StreamFilter
|
||||
search_fields = ["name", "channel_group__name"]
|
||||
ordering_fields = ["name", "channel_group__name"]
|
||||
ordering_fields = ["name", "channel_group__name", "m3u_account__name"]
|
||||
ordering = ["-name"]
|
||||
|
||||
def get_permissions(self):
|
||||
|
|
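With m3u_account__name added to ordering_fields, the stream list can be sorted by owning M3U account through DRF's standard ordering query parameter. A minimal illustration; the route prefix below is an assumption, only the field name comes from the code above.

# Illustrative URLs only: the exact route prefix for StreamViewSet is assumed.
# "ordering" and the "-" reverse prefix are standard DRF OrderingFilter syntax.
base = "http://dispatcharr.local/api/channels/streams/"
print(f"{base}?ordering=m3u_account__name")    # ascending by account name
print(f"{base}?ordering=-m3u_account__name")   # descending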
@ -436,8 +436,8 @@ class ChannelViewSet(viewsets.ModelViewSet):
|
|||
@action(detail=False, methods=["patch"], url_path="edit/bulk")
|
||||
def edit_bulk(self, request):
|
||||
"""
|
||||
Bulk edit channels.
|
||||
Expects a list of channels with their updates.
|
||||
Bulk edit channels efficiently.
|
||||
Validates all updates first, then applies in a single transaction.
|
||||
"""
|
||||
data = request.data
|
||||
if not isinstance(data, list):
|
||||
|
|
@ -446,63 +446,94 @@ class ChannelViewSet(viewsets.ModelViewSet):
|
|||
status=status.HTTP_400_BAD_REQUEST,
|
||||
)
|
||||
|
||||
updated_channels = []
|
||||
errors = []
|
||||
# Extract IDs and validate presence
|
||||
channel_updates = {}
|
||||
missing_ids = []
|
||||
|
||||
for channel_data in data:
|
||||
for i, channel_data in enumerate(data):
|
||||
channel_id = channel_data.get("id")
|
||||
if not channel_id:
|
||||
errors.append({"error": "Channel ID is required"})
|
||||
continue
|
||||
missing_ids.append(f"Item {i}: Channel ID is required")
|
||||
else:
|
||||
channel_updates[channel_id] = channel_data
|
||||
|
||||
try:
|
||||
channel = Channel.objects.get(id=channel_id)
|
||||
if missing_ids:
|
||||
return Response(
|
||||
{"errors": missing_ids},
|
||||
status=status.HTTP_400_BAD_REQUEST,
|
||||
)
|
||||
|
||||
# Handle channel_group_id properly - convert string to integer if needed
|
||||
if 'channel_group_id' in channel_data:
|
||||
group_id = channel_data['channel_group_id']
|
||||
if group_id is not None:
|
||||
try:
|
||||
channel_data['channel_group_id'] = int(group_id)
|
||||
except (ValueError, TypeError):
|
||||
channel_data['channel_group_id'] = None
|
||||
# Fetch all channels at once (one query)
|
||||
channels_dict = {
|
||||
c.id: c for c in Channel.objects.filter(id__in=channel_updates.keys())
|
||||
}
|
||||
|
||||
# Use the serializer to validate and update
|
||||
serializer = ChannelSerializer(
|
||||
channel, data=channel_data, partial=True
|
||||
)
|
||||
# Validate and prepare updates
|
||||
validated_updates = []
|
||||
errors = []
|
||||
|
||||
if serializer.is_valid():
|
||||
updated_channel = serializer.save()
|
||||
updated_channels.append(updated_channel)
|
||||
else:
|
||||
errors.append({
|
||||
"channel_id": channel_id,
|
||||
"errors": serializer.errors
|
||||
})
|
||||
for channel_id, channel_data in channel_updates.items():
|
||||
channel = channels_dict.get(channel_id)
|
||||
|
||||
except Channel.DoesNotExist:
|
||||
if not channel:
|
||||
errors.append({
|
||||
"channel_id": channel_id,
|
||||
"error": "Channel not found"
|
||||
})
|
||||
except Exception as e:
|
||||
continue
|
||||
|
||||
# Handle channel_group_id conversion
|
||||
if 'channel_group_id' in channel_data:
|
||||
group_id = channel_data['channel_group_id']
|
||||
if group_id is not None:
|
||||
try:
|
||||
channel_data['channel_group_id'] = int(group_id)
|
||||
except (ValueError, TypeError):
|
||||
channel_data['channel_group_id'] = None
|
||||
|
||||
# Validate with serializer
|
||||
serializer = ChannelSerializer(
|
||||
channel, data=channel_data, partial=True
|
||||
)
|
||||
|
||||
if serializer.is_valid():
|
||||
validated_updates.append((channel, serializer.validated_data))
|
||||
else:
|
||||
errors.append({
|
||||
"channel_id": channel_id,
|
||||
"error": str(e)
|
||||
"errors": serializer.errors
|
||||
})
|
||||
|
||||
if errors:
|
||||
return Response(
|
||||
{"errors": errors, "updated_count": len(updated_channels)},
|
||||
{"errors": errors, "updated_count": len(validated_updates)},
|
||||
status=status.HTTP_400_BAD_REQUEST,
|
||||
)
|
||||
|
||||
# Serialize the updated channels for response
|
||||
serialized_channels = ChannelSerializer(updated_channels, many=True).data
|
||||
# Apply all updates in a transaction
|
||||
with transaction.atomic():
|
||||
for channel, validated_data in validated_updates:
|
||||
for key, value in validated_data.items():
|
||||
setattr(channel, key, value)
|
||||
|
||||
# Single bulk_update query instead of individual saves
|
||||
channels_to_update = [channel for channel, _ in validated_updates]
|
||||
if channels_to_update:
|
||||
Channel.objects.bulk_update(
|
||||
channels_to_update,
|
||||
fields=list(validated_updates[0][1].keys()),
|
||||
batch_size=100
|
||||
)
|
||||
|
||||
# Return the updated objects (already in memory)
|
||||
serialized_channels = ChannelSerializer(
|
||||
[channel for channel, _ in validated_updates],
|
||||
many=True,
|
||||
context=self.get_serializer_context()
|
||||
).data
|
||||
|
||||
return Response({
|
||||
"message": f"Successfully updated {len(updated_channels)} channels",
|
||||
"message": f"Successfully updated {len(validated_updates)} channels",
|
||||
"channels": serialized_channels
|
||||
})
|
||||
|
||||
|
|
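For reference, the reworked bulk editor expects a list in which every item carries an "id" plus the fields to change; all items are validated first, and nothing is written if any of them fail. Because the final bulk_update derives its field list from the first validated item, sending the same set of fields on every item is the safe pattern. An illustrative payload (IDs, values, and the route prefix are made up; the shape follows the code above):

# Illustrative body for PATCH .../channels/edit/bulk (route prefix assumed).
# channel_group_id may arrive as a string; the view coerces it to int before
# serializer validation, and an explicit null is passed through unchanged.
bulk_updates = [
    {"id": 101, "name": "News HD", "channel_group_id": "4"},
    {"id": 102, "name": "Sports 2", "channel_group_id": 7},
    {"id": 103, "name": "Documentaries", "channel_group_id": None},
]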
@ -988,19 +1019,27 @@ class ChannelViewSet(viewsets.ModelViewSet):
|
|||
channel.epg_data = epg_data
|
||||
channel.save(update_fields=["epg_data"])
|
||||
|
||||
# Explicitly trigger program refresh for this EPG
|
||||
from apps.epg.tasks import parse_programs_for_tvg_id
|
||||
# Only trigger program refresh for non-dummy EPG sources
|
||||
status_message = None
|
||||
if epg_data.epg_source.source_type != 'dummy':
|
||||
# Explicitly trigger program refresh for this EPG
|
||||
from apps.epg.tasks import parse_programs_for_tvg_id
|
||||
|
||||
task_result = parse_programs_for_tvg_id.delay(epg_data.id)
|
||||
task_result = parse_programs_for_tvg_id.delay(epg_data.id)
|
||||
|
||||
# Prepare response with task status info
|
||||
status_message = "EPG refresh queued"
|
||||
if task_result.result == "Task already running":
|
||||
status_message = "EPG refresh already in progress"
|
||||
# Prepare response with task status info
|
||||
status_message = "EPG refresh queued"
|
||||
if task_result.result == "Task already running":
|
||||
status_message = "EPG refresh already in progress"
|
||||
|
||||
# Build response message
|
||||
message = f"EPG data set to {epg_data.tvg_id} for channel {channel.name}"
|
||||
if status_message:
|
||||
message += f". {status_message}"
|
||||
|
||||
return Response(
|
||||
{
|
||||
"message": f"EPG data set to {epg_data.tvg_id} for channel {channel.name}. {status_message}.",
|
||||
"message": message,
|
||||
"channel": self.get_serializer(channel).data,
|
||||
"task_status": status_message,
|
||||
}
|
||||
|
|
@ -1032,8 +1071,15 @@ class ChannelViewSet(viewsets.ModelViewSet):
|
|||
def batch_set_epg(self, request):
|
||||
"""Efficiently associate multiple channels with EPG data at once."""
|
||||
associations = request.data.get("associations", [])
|
||||
channels_updated = 0
|
||||
programs_refreshed = 0
|
||||
|
||||
if not associations:
|
||||
return Response(
|
||||
{"error": "associations list is required"},
|
||||
status=status.HTTP_400_BAD_REQUEST,
|
||||
)
|
||||
|
||||
# Extract channel IDs upfront
|
||||
channel_updates = {}
|
||||
unique_epg_ids = set()
|
||||
|
||||
for assoc in associations:
|
||||
|
|
@ -1043,32 +1089,58 @@ class ChannelViewSet(viewsets.ModelViewSet):
|
|||
if not channel_id:
|
||||
continue
|
||||
|
||||
try:
|
||||
# Get the channel
|
||||
channel = Channel.objects.get(id=channel_id)
|
||||
channel_updates[channel_id] = epg_data_id
|
||||
if epg_data_id:
|
||||
unique_epg_ids.add(epg_data_id)
|
||||
|
||||
# Set the EPG data
|
||||
channel.epg_data_id = epg_data_id
|
||||
channel.save(update_fields=["epg_data"])
|
||||
channels_updated += 1
|
||||
# Batch fetch all channels (single query)
|
||||
channels_dict = {
|
||||
c.id: c for c in Channel.objects.filter(id__in=channel_updates.keys())
|
||||
}
|
||||
|
||||
# Track unique EPG data IDs
|
||||
if epg_data_id:
|
||||
unique_epg_ids.add(epg_data_id)
|
||||
|
||||
except Channel.DoesNotExist:
|
||||
# Collect channels to update
|
||||
channels_to_update = []
|
||||
for channel_id, epg_data_id in channel_updates.items():
|
||||
if channel_id not in channels_dict:
|
||||
logger.error(f"Channel with ID {channel_id} not found")
|
||||
except Exception as e:
|
||||
logger.error(
|
||||
f"Error setting EPG data for channel {channel_id}: {str(e)}"
|
||||
continue
|
||||
|
||||
channel = channels_dict[channel_id]
|
||||
channel.epg_data_id = epg_data_id
|
||||
channels_to_update.append(channel)
|
||||
|
||||
# Bulk update all channels (single query)
|
||||
if channels_to_update:
|
||||
with transaction.atomic():
|
||||
Channel.objects.bulk_update(
|
||||
channels_to_update,
|
||||
fields=["epg_data_id"],
|
||||
batch_size=100
|
||||
)
|
||||
|
||||
# Trigger program refresh for unique EPG data IDs
|
||||
from apps.epg.tasks import parse_programs_for_tvg_id
|
||||
channels_updated = len(channels_to_update)
|
||||
|
||||
# Trigger program refresh for unique EPG data IDs (skip dummy EPGs)
|
||||
from apps.epg.tasks import parse_programs_for_tvg_id
|
||||
from apps.epg.models import EPGData
|
||||
|
||||
# Batch fetch EPG data (single query)
|
||||
epg_data_dict = {
|
||||
epg.id: epg
|
||||
for epg in EPGData.objects.filter(id__in=unique_epg_ids).select_related('epg_source')
|
||||
}
|
||||
|
||||
programs_refreshed = 0
|
||||
for epg_id in unique_epg_ids:
|
||||
parse_programs_for_tvg_id.delay(epg_id)
|
||||
programs_refreshed += 1
|
||||
epg_data = epg_data_dict.get(epg_id)
|
||||
if not epg_data:
|
||||
logger.error(f"EPGData with ID {epg_id} not found")
|
||||
continue
|
||||
|
||||
# Only refresh non-dummy EPG sources
|
||||
if epg_data.epg_source.source_type != 'dummy':
|
||||
parse_programs_for_tvg_id.delay(epg_id)
|
||||
programs_refreshed += 1
|
||||
|
||||
return Response(
|
||||
{
|
||||
|
|
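The batched EPG association endpoint follows the same pattern: one query to fetch the channels, one bulk_update, and program refreshes queued only for non-dummy sources, each unique EPG id at most once. An illustrative payload; the channel_id/epg_data_id key names are inferred from the handler's variables, and the IDs are made up.

# Illustrative body for the batch_set_epg action.
payload = {
    "associations": [
        {"channel_id": 12, "epg_data_id": 345},
        {"channel_id": 13, "epg_data_id": 345},   # same EPG data, refreshed once
        {"channel_id": 14, "epg_data_id": None},  # detaches EPG data from the channel
    ]
}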
@ -1233,7 +1305,7 @@ class CleanupUnusedLogosAPIView(APIView):
|
|||
return [Authenticated()]
|
||||
|
||||
@swagger_auto_schema(
|
||||
operation_description="Delete all logos that are not used by any channels, movies, or series",
|
||||
operation_description="Delete all channel logos that are not used by any channels",
|
||||
request_body=openapi.Schema(
|
||||
type=openapi.TYPE_OBJECT,
|
||||
properties={
|
||||
|
|
@ -1247,24 +1319,11 @@ class CleanupUnusedLogosAPIView(APIView):
|
|||
responses={200: "Cleanup completed"},
|
||||
)
|
||||
def post(self, request):
|
||||
"""Delete all logos with no channel, movie, or series associations"""
|
||||
"""Delete all channel logos with no channel associations"""
|
||||
delete_files = request.data.get("delete_files", False)
|
||||
|
||||
# Find logos that are not used by channels, movies, or series
|
||||
filter_conditions = Q(channels__isnull=True)
|
||||
|
||||
# Add VOD conditions if models are available
|
||||
try:
|
||||
filter_conditions &= Q(movie__isnull=True)
|
||||
except:
|
||||
pass
|
||||
|
||||
try:
|
||||
filter_conditions &= Q(series__isnull=True)
|
||||
except:
|
||||
pass
|
||||
|
||||
unused_logos = Logo.objects.filter(filter_conditions)
|
||||
# Find logos that are not used by any channels
|
||||
unused_logos = Logo.objects.filter(channels__isnull=True)
|
||||
deleted_count = unused_logos.count()
|
||||
logo_names = list(unused_logos.values_list('name', flat=True))
|
||||
local_files_deleted = 0
|
||||
|
|
@ -1336,13 +1395,6 @@ class LogoViewSet(viewsets.ModelViewSet):
|
|||
# Start with basic prefetch for channels
|
||||
queryset = Logo.objects.prefetch_related('channels').order_by('name')
|
||||
|
||||
# Try to prefetch VOD relations if available
|
||||
try:
|
||||
queryset = queryset.prefetch_related('movie', 'series')
|
||||
except:
|
||||
# VOD app might not be available, continue without VOD prefetch
|
||||
pass
|
||||
|
||||
# Filter by specific IDs
|
||||
ids = self.request.query_params.getlist('ids')
|
||||
if ids:
|
||||
|
|
@ -1355,62 +1407,14 @@ class LogoViewSet(viewsets.ModelViewSet):
|
|||
pass # Invalid IDs, return empty queryset
|
||||
queryset = Logo.objects.none()
|
||||
|
||||
# Filter by usage - now includes VOD content
|
||||
# Filter by usage
|
||||
used_filter = self.request.query_params.get('used', None)
|
||||
if used_filter == 'true':
|
||||
# Logo is used if it has any channels, movies, or series
|
||||
filter_conditions = Q(channels__isnull=False)
|
||||
|
||||
# Add VOD conditions if models are available
|
||||
try:
|
||||
filter_conditions |= Q(movie__isnull=False)
|
||||
except:
|
||||
pass
|
||||
|
||||
try:
|
||||
filter_conditions |= Q(series__isnull=False)
|
||||
except:
|
||||
pass
|
||||
|
||||
queryset = queryset.filter(filter_conditions).distinct()
|
||||
|
||||
# Logo is used if it has any channels
|
||||
queryset = queryset.filter(channels__isnull=False).distinct()
|
||||
elif used_filter == 'false':
|
||||
# Logo is unused if it has no channels, movies, or series
|
||||
filter_conditions = Q(channels__isnull=True)
|
||||
|
||||
# Add VOD conditions if models are available
|
||||
try:
|
||||
filter_conditions &= Q(movie__isnull=True)
|
||||
except:
|
||||
pass
|
||||
|
||||
try:
|
||||
filter_conditions &= Q(series__isnull=True)
|
||||
except:
|
||||
pass
|
||||
|
||||
queryset = queryset.filter(filter_conditions)
|
||||
|
||||
# Filter for channel assignment (unused + channel-used, exclude VOD-only)
|
||||
channel_assignable = self.request.query_params.get('channel_assignable', None)
|
||||
if channel_assignable == 'true':
|
||||
# Include logos that are either:
|
||||
# 1. Completely unused, OR
|
||||
# 2. Used by channels (but may also be used by VOD)
|
||||
# Exclude logos that are ONLY used by VOD content
|
||||
|
||||
unused_condition = Q(channels__isnull=True)
|
||||
channel_used_condition = Q(channels__isnull=False)
|
||||
|
||||
# Add VOD conditions if models are available
|
||||
try:
|
||||
unused_condition &= Q(movie__isnull=True) & Q(series__isnull=True)
|
||||
except:
|
||||
pass
|
||||
|
||||
# Combine: unused OR used by channels
|
||||
filter_conditions = unused_condition | channel_used_condition
|
||||
queryset = queryset.filter(filter_conditions).distinct()
|
||||
# Logo is unused if it has no channels
|
||||
queryset = queryset.filter(channels__isnull=True)
|
||||
|
||||
# Filter by name
|
||||
name_filter = self.request.query_params.get('name', None)
|
||||
@ -0,0 +1,54 @@
|
|||
# Generated migration to backfill stream_hash for existing custom streams
|
||||
|
||||
from django.db import migrations
|
||||
import hashlib
|
||||
|
||||
|
||||
def backfill_custom_stream_hashes(apps, schema_editor):
|
||||
"""
|
||||
Generate stream_hash for all custom streams that don't have one.
|
||||
Uses stream ID to create a stable hash that won't change when name/url is edited.
|
||||
"""
|
||||
Stream = apps.get_model('dispatcharr_channels', 'Stream')
|
||||
|
||||
custom_streams_without_hash = Stream.objects.filter(
|
||||
is_custom=True,
|
||||
stream_hash__isnull=True
|
||||
)
|
||||
|
||||
updated_count = 0
|
||||
for stream in custom_streams_without_hash:
|
||||
# Generate a stable hash using the stream's ID
|
||||
# This ensures the hash never changes even if name/url is edited
|
||||
unique_string = f"custom_stream_{stream.id}"
|
||||
stream.stream_hash = hashlib.sha256(unique_string.encode()).hexdigest()
|
||||
stream.save(update_fields=['stream_hash'])
|
||||
updated_count += 1
|
||||
|
||||
if updated_count > 0:
|
||||
print(f"Backfilled stream_hash for {updated_count} custom streams")
|
||||
else:
|
||||
print("No custom streams needed stream_hash backfill")
|
||||
|
||||
|
||||
def reverse_backfill(apps, schema_editor):
|
||||
"""
|
||||
Reverse migration - clear stream_hash for custom streams.
|
||||
Note: This will break preview functionality for custom streams.
|
||||
"""
|
||||
Stream = apps.get_model('dispatcharr_channels', 'Stream')
|
||||
|
||||
custom_streams = Stream.objects.filter(is_custom=True)
|
||||
count = custom_streams.update(stream_hash=None)
|
||||
print(f"Cleared stream_hash for {count} custom streams")
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('dispatcharr_channels', '0028_channel_created_at_channel_updated_at'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.RunPython(backfill_custom_stream_hashes, reverse_backfill),
|
||||
]
|
||||
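The hash scheme in this backfill matches the post_save signal added elsewhere in this changeset, so custom streams created before and after the migration carry identical, ID-derived hashes. A quick sketch of the scheme:

# The stable hash depends only on the stream's primary key, so renaming the
# stream or editing its URL never invalidates it.
import hashlib

def custom_stream_hash(stream_id: int) -> str:
    return hashlib.sha256(f"custom_stream_{stream_id}".encode()).hexdigest()

print(custom_stream_hash(42))        # same 64-character hex digest every run
print(len(custom_stream_hash(42)))   # 64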
18
apps/channels/migrations/0030_alter_stream_url.py
Normal file
@ -0,0 +1,18 @@
|
|||
# Generated by Django 5.2.4 on 2025-10-28 20:00
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('dispatcharr_channels', '0029_backfill_custom_stream_hashes'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AlterField(
|
||||
model_name='stream',
|
||||
name='url',
|
||||
field=models.URLField(blank=True, max_length=4096, null=True),
|
||||
),
|
||||
]
|
||||
|
|
@ -55,7 +55,7 @@ class Stream(models.Model):
|
|||
"""
|
||||
|
||||
name = models.CharField(max_length=255, default="Default Stream")
|
||||
url = models.URLField(max_length=2000, blank=True, null=True)
|
||||
url = models.URLField(max_length=4096, blank=True, null=True)
|
||||
m3u_account = models.ForeignKey(
|
||||
M3UAccount,
|
||||
on_delete=models.CASCADE,
|
||||
|
|
@ -152,8 +152,14 @@ class Stream(models.Model):
|
|||
stream = cls.objects.create(**fields_to_update)
|
||||
return stream, True # True means it was created
|
||||
|
||||
# @TODO: honor stream's stream profile
|
||||
def get_stream_profile(self):
|
||||
"""
|
||||
Get the stream profile for this stream.
|
||||
Uses the stream's own profile if set, otherwise returns the default.
|
||||
"""
|
||||
if self.stream_profile:
|
||||
return self.stream_profile
|
||||
|
||||
stream_profile = StreamProfile.objects.get(
|
||||
id=CoreSettings.get_default_stream_profile_id()
|
||||
)
|
||||
@ -64,47 +64,15 @@ class LogoSerializer(serializers.ModelSerializer):
|
|||
return reverse("api:channels:logo-cache", args=[obj.id])
|
||||
|
||||
def get_channel_count(self, obj):
|
||||
"""Get the number of channels, movies, and series using this logo"""
|
||||
channel_count = obj.channels.count()
|
||||
|
||||
# Safely get movie count
|
||||
try:
|
||||
movie_count = obj.movie.count() if hasattr(obj, 'movie') else 0
|
||||
except AttributeError:
|
||||
movie_count = 0
|
||||
|
||||
# Safely get series count
|
||||
try:
|
||||
series_count = obj.series.count() if hasattr(obj, 'series') else 0
|
||||
except AttributeError:
|
||||
series_count = 0
|
||||
|
||||
return channel_count + movie_count + series_count
|
||||
"""Get the number of channels using this logo"""
|
||||
return obj.channels.count()
|
||||
|
||||
def get_is_used(self, obj):
|
||||
"""Check if this logo is used by any channels, movies, or series"""
|
||||
# Check if used by channels
|
||||
if obj.channels.exists():
|
||||
return True
|
||||
|
||||
# Check if used by movies (handle case where VOD app might not be available)
|
||||
try:
|
||||
if hasattr(obj, 'movie') and obj.movie.exists():
|
||||
return True
|
||||
except AttributeError:
|
||||
pass
|
||||
|
||||
# Check if used by series (handle case where VOD app might not be available)
|
||||
try:
|
||||
if hasattr(obj, 'series') and obj.series.exists():
|
||||
return True
|
||||
except AttributeError:
|
||||
pass
|
||||
|
||||
return False
|
||||
"""Check if this logo is used by any channels"""
|
||||
return obj.channels.exists()
|
||||
|
||||
def get_channel_names(self, obj):
|
||||
"""Get the names of channels, movies, and series using this logo (limited to first 5)"""
|
||||
"""Get the names of channels using this logo (limited to first 5)"""
|
||||
names = []
|
||||
|
||||
# Get channel names
|
||||
|
|
@ -112,28 +80,6 @@ class LogoSerializer(serializers.ModelSerializer):
|
|||
for channel in channels:
|
||||
names.append(f"Channel: {channel.name}")
|
||||
|
||||
# Get movie names (only if we haven't reached limit)
|
||||
if len(names) < 5:
|
||||
try:
|
||||
if hasattr(obj, 'movie'):
|
||||
remaining_slots = 5 - len(names)
|
||||
movies = obj.movie.all()[:remaining_slots]
|
||||
for movie in movies:
|
||||
names.append(f"Movie: {movie.name}")
|
||||
except AttributeError:
|
||||
pass
|
||||
|
||||
# Get series names (only if we haven't reached limit)
|
||||
if len(names) < 5:
|
||||
try:
|
||||
if hasattr(obj, 'series'):
|
||||
remaining_slots = 5 - len(names)
|
||||
series = obj.series.all()[:remaining_slots]
|
||||
for series_item in series:
|
||||
names.append(f"Series: {series_item.name}")
|
||||
except AttributeError:
|
||||
pass
|
||||
|
||||
# Calculate total count for "more" message
|
||||
total_count = self.get_channel_count(obj)
|
||||
if total_count > 5:
|
||||
|
|
@ -348,8 +294,17 @@ class ChannelSerializer(serializers.ModelSerializer):
|
|||
|
||||
if include_streams:
|
||||
self.fields["streams"] = serializers.SerializerMethodField()
|
||||
|
||||
return super().to_representation(instance)
|
||||
return super().to_representation(instance)
|
||||
else:
|
||||
# Fix: For PATCH/PUT responses, ensure streams are ordered
|
||||
representation = super().to_representation(instance)
|
||||
if "streams" in representation:
|
||||
representation["streams"] = list(
|
||||
instance.streams.all()
|
||||
.order_by("channelstream__order")
|
||||
.values_list("id", flat=True)
|
||||
)
|
||||
return representation
|
||||
|
||||
def get_logo(self, obj):
|
||||
return LogoSerializer(obj.logo).data
|
||||
@ -45,6 +45,20 @@ def set_default_m3u_account(sender, instance, **kwargs):
|
|||
else:
|
||||
raise ValueError("No default M3UAccount found.")
|
||||
|
||||
@receiver(post_save, sender=Stream)
|
||||
def generate_custom_stream_hash(sender, instance, created, **kwargs):
|
||||
"""
|
||||
Generate a stable stream_hash for custom streams after creation.
|
||||
Uses the stream's ID to ensure the hash never changes even if name/url is edited.
|
||||
"""
|
||||
if instance.is_custom and not instance.stream_hash and created:
|
||||
import hashlib
|
||||
# Use stream ID for a stable, unique hash that never changes
|
||||
unique_string = f"custom_stream_{instance.id}"
|
||||
instance.stream_hash = hashlib.sha256(unique_string.encode()).hexdigest()
|
||||
# Use update to avoid triggering signals again
|
||||
Stream.objects.filter(id=instance.id).update(stream_hash=instance.stream_hash)
|
||||
|
||||
@receiver(post_save, sender=Channel)
|
||||
def refresh_epg_programs(sender, instance, created, **kwargs):
|
||||
"""
|
||||
@ -1434,6 +1434,18 @@ def run_recording(recording_id, channel_id, start_time_str, end_time_str):
|
|||
|
||||
logger.info(f"Starting recording for channel {channel.name}")
|
||||
|
||||
# Log system event for recording start
|
||||
try:
|
||||
from core.utils import log_system_event
|
||||
log_system_event(
|
||||
'recording_start',
|
||||
channel_id=channel.uuid,
|
||||
channel_name=channel.name,
|
||||
recording_id=recording_id
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log recording start event: {e}")
|
||||
|
||||
# Try to resolve the Recording row up front
|
||||
recording_obj = None
|
||||
try:
|
||||
|
|
@ -1827,6 +1839,20 @@ def run_recording(recording_id, channel_id, start_time_str, end_time_str):
|
|||
# After the loop, the file and response are closed automatically.
|
||||
logger.info(f"Finished recording for channel {channel.name}")
|
||||
|
||||
# Log system event for recording end
|
||||
try:
|
||||
from core.utils import log_system_event
|
||||
log_system_event(
|
||||
'recording_end',
|
||||
channel_id=channel.uuid,
|
||||
channel_name=channel.name,
|
||||
recording_id=recording_id,
|
||||
interrupted=interrupted,
|
||||
bytes_written=bytes_written
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log recording end event: {e}")
|
||||
|
||||
# Remux TS to MKV container
|
||||
remux_success = False
|
||||
try:
|
||||
@ -147,23 +147,37 @@ class EPGGridAPIView(APIView):
|
|||
f"EPGGridAPIView: Found {count} program(s), including recently ended, currently running, and upcoming shows."
|
||||
)
|
||||
|
||||
# Generate dummy programs for channels that have no EPG data
|
||||
# Generate dummy programs for channels that have no EPG data OR dummy EPG sources
|
||||
from apps.channels.models import Channel
|
||||
from apps.epg.models import EPGSource
|
||||
from django.db.models import Q
|
||||
|
||||
# Get channels with no EPG data
|
||||
# Get channels with no EPG data at all (standard dummy)
|
||||
channels_without_epg = Channel.objects.filter(Q(epg_data__isnull=True))
|
||||
channels_count = channels_without_epg.count()
|
||||
|
||||
# Log more detailed information about channels missing EPG data
|
||||
if channels_count > 0:
|
||||
# Get channels with custom dummy EPG sources (generate on-demand with patterns)
|
||||
channels_with_custom_dummy = Channel.objects.filter(
|
||||
epg_data__epg_source__source_type='dummy'
|
||||
).distinct()
|
||||
|
||||
# Log what we found
|
||||
without_count = channels_without_epg.count()
|
||||
custom_count = channels_with_custom_dummy.count()
|
||||
|
||||
if without_count > 0:
|
||||
channel_names = [f"{ch.name} (ID: {ch.id})" for ch in channels_without_epg]
|
||||
logger.warning(
|
||||
f"EPGGridAPIView: Missing EPG data for these channels: {', '.join(channel_names)}"
|
||||
logger.debug(
|
||||
f"EPGGridAPIView: Channels needing standard dummy EPG: {', '.join(channel_names)}"
|
||||
)
|
||||
|
||||
if custom_count > 0:
|
||||
channel_names = [f"{ch.name} (ID: {ch.id})" for ch in channels_with_custom_dummy]
|
||||
logger.debug(
|
||||
f"EPGGridAPIView: Channels needing custom dummy EPG: {', '.join(channel_names)}"
|
||||
)
|
||||
|
||||
logger.debug(
|
||||
f"EPGGridAPIView: Found {channels_count} channels with no EPG data."
|
||||
f"EPGGridAPIView: Found {without_count} channels needing standard dummy, {custom_count} needing custom dummy EPG."
|
||||
)
|
||||
|
||||
# Serialize the regular programs
|
||||
|
|
@ -205,12 +219,91 @@ class EPGGridAPIView(APIView):
|
|||
|
||||
# Generate and append dummy programs
|
||||
dummy_programs = []
|
||||
for channel in channels_without_epg:
|
||||
# Use the channel UUID as tvg_id for dummy programs to match in the guide
|
||||
|
||||
# Import the function from output.views
|
||||
from apps.output.views import generate_dummy_programs as gen_dummy_progs
|
||||
|
||||
# Handle channels with CUSTOM dummy EPG sources (with patterns)
|
||||
for channel in channels_with_custom_dummy:
|
||||
# For dummy EPGs, ALWAYS use channel UUID to ensure unique programs per channel
|
||||
# This prevents multiple channels assigned to the same dummy EPG from showing identical data
|
||||
# Each channel gets its own unique program data even if they share the same EPG source
|
||||
dummy_tvg_id = str(channel.uuid)
|
||||
|
||||
try:
|
||||
# Create programs every 4 hours for the next 24 hours
|
||||
# Get the custom dummy EPG source
|
||||
epg_source = channel.epg_data.epg_source if channel.epg_data else None
|
||||
|
||||
logger.debug(f"Generating custom dummy programs for channel: {channel.name} (ID: {channel.id})")
|
||||
|
||||
# Determine which name to parse based on custom properties
|
||||
name_to_parse = channel.name
|
||||
if epg_source and epg_source.custom_properties:
|
||||
custom_props = epg_source.custom_properties
|
||||
name_source = custom_props.get('name_source')
|
||||
|
||||
if name_source == 'stream':
|
||||
# Get the stream index (1-based from user, convert to 0-based)
|
||||
stream_index = custom_props.get('stream_index', 1) - 1
|
||||
|
||||
# Get streams ordered by channelstream order
|
||||
channel_streams = channel.streams.all().order_by('channelstream__order')
|
||||
|
||||
if channel_streams.exists() and 0 <= stream_index < channel_streams.count():
|
||||
stream = list(channel_streams)[stream_index]
|
||||
name_to_parse = stream.name
|
||||
logger.debug(f"Using stream name for parsing: {name_to_parse} (stream index: {stream_index})")
|
||||
else:
|
||||
logger.warning(f"Stream index {stream_index} not found for channel {channel.name}, falling back to channel name")
|
||||
elif name_source == 'channel':
|
||||
logger.debug(f"Using channel name for parsing: {name_to_parse}")
|
||||
|
||||
# Generate programs using custom patterns from the dummy EPG source
|
||||
# Use the same tvg_id that will be set in the program data
|
||||
generated = gen_dummy_progs(
|
||||
channel_id=dummy_tvg_id,
|
||||
channel_name=name_to_parse,
|
||||
num_days=1,
|
||||
program_length_hours=4,
|
||||
epg_source=epg_source
|
||||
)
|
||||
|
||||
# Custom dummy should always return data (either from patterns or fallback)
|
||||
if generated:
|
||||
logger.debug(f"Generated {len(generated)} custom dummy programs for {channel.name}")
|
||||
# Convert generated programs to API format
|
||||
for program in generated:
|
||||
dummy_program = {
|
||||
"id": f"dummy-custom-{channel.id}-{program['start_time'].hour}",
|
||||
"epg": {"tvg_id": dummy_tvg_id, "name": channel.name},
|
||||
"start_time": program['start_time'].isoformat(),
|
||||
"end_time": program['end_time'].isoformat(),
|
||||
"title": program['title'],
|
||||
"description": program['description'],
|
||||
"tvg_id": dummy_tvg_id,
|
||||
"sub_title": None,
|
||||
"custom_properties": None,
|
||||
}
|
||||
dummy_programs.append(dummy_program)
|
||||
else:
|
||||
logger.warning(f"No programs generated for custom dummy EPG channel: {channel.name}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(
|
||||
f"Error creating custom dummy programs for channel {channel.name} (ID: {channel.id}): {str(e)}"
|
||||
)
|
||||
|
||||
# Handle channels with NO EPG data (standard dummy with humorous descriptions)
|
||||
for channel in channels_without_epg:
|
||||
# For channels with no EPG, use UUID to ensure uniqueness (matches frontend logic)
|
||||
# The frontend uses: tvgRecord?.tvg_id ?? channel.uuid
|
||||
# Since there's no EPG data, it will fall back to UUID
|
||||
dummy_tvg_id = str(channel.uuid)
|
||||
|
||||
try:
|
||||
logger.debug(f"Generating standard dummy programs for channel: {channel.name} (ID: {channel.id})")
|
||||
|
||||
# Create programs every 4 hours for the next 24 hours with humorous descriptions
|
||||
for hour_offset in range(0, 24, 4):
|
||||
# Use timedelta for time arithmetic instead of replace() to avoid hour overflow
|
||||
start_time = now + timedelta(hours=hour_offset)
|
||||
|
|
@ -238,7 +331,7 @@ class EPGGridAPIView(APIView):
|
|||
|
||||
# Create a dummy program in the same format as regular programs
|
||||
dummy_program = {
|
||||
"id": f"dummy-{channel.id}-{hour_offset}", # Create a unique ID
|
||||
"id": f"dummy-standard-{channel.id}-{hour_offset}",
|
||||
"epg": {"tvg_id": dummy_tvg_id, "name": channel.name},
|
||||
"start_time": start_time.isoformat(),
|
||||
"end_time": end_time.isoformat(),
|
||||
|
|
@ -252,7 +345,7 @@ class EPGGridAPIView(APIView):
|
|||
|
||||
except Exception as e:
|
||||
logger.error(
|
||||
f"Error creating dummy programs for channel {channel.name} (ID: {channel.id}): {str(e)}"
|
||||
f"Error creating standard dummy programs for channel {channel.name} (ID: {channel.id}): {str(e)}"
|
||||
)
|
||||
|
||||
# Combine regular and dummy programs
|
||||
|
|
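Putting the pieces together, a custom dummy EPG source drives this on-demand generation entirely from its custom_properties. An illustrative configuration, using only keys referenced in this changeset (name_source and stream_index from the view above, template fields and the {starttime}/{starttime24} placeholders from the placeholder migration); the full template semantics live in generate_dummy_programs, which is not shown here.

# Illustrative custom_properties for a dummy EPG source. stream_index is
# 1-based in the stored config; the view converts it to a 0-based index.
dummy_epg_custom_properties = {
    "name_source": "stream",        # parse a stream name instead of the channel name
    "stream_index": 1,              # first stream attached to the channel
    "title_template": "{starttime} - Live Programming",
    "description_template": "Programming block starting at {starttime24}.",
}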
@ -284,7 +377,22 @@ class EPGImportAPIView(APIView):
|
|||
)
|
||||
def post(self, request, format=None):
|
||||
logger.info("EPGImportAPIView: Received request to import EPG data.")
|
||||
refresh_epg_data.delay(request.data.get("id", None)) # Trigger Celery task
|
||||
epg_id = request.data.get("id", None)
|
||||
|
||||
# Check if this is a dummy EPG source
|
||||
try:
|
||||
from .models import EPGSource
|
||||
epg_source = EPGSource.objects.get(id=epg_id)
|
||||
if epg_source.source_type == 'dummy':
|
||||
logger.info(f"EPGImportAPIView: Skipping refresh for dummy EPG source {epg_id}")
|
||||
return Response(
|
||||
{"success": False, "message": "Dummy EPG sources do not require refreshing."},
|
||||
status=status.HTTP_400_BAD_REQUEST,
|
||||
)
|
||||
except EPGSource.DoesNotExist:
|
||||
pass # Let the task handle the missing source
|
||||
|
||||
refresh_epg_data.delay(epg_id) # Trigger Celery task
|
||||
logger.info("EPGImportAPIView: Task dispatched to refresh EPG data.")
|
||||
return Response(
|
||||
{"success": True, "message": "EPG data import initiated."},
|
||||
|
|
@ -308,3 +416,4 @@ class EPGDataViewSet(viewsets.ReadOnlyModelViewSet):
|
|||
return [perm() for perm in permission_classes_by_action[self.action]]
|
||||
except KeyError:
|
||||
return [Authenticated()]
|
||||
|
||||
@ -0,0 +1,23 @@
|
|||
# Generated by Django 5.2.4 on 2025-10-17 17:02
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('epg', '0017_alter_epgsource_url'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AddField(
|
||||
model_name='epgsource',
|
||||
name='custom_properties',
|
||||
field=models.JSONField(blank=True, default=dict, help_text='Custom properties for dummy EPG configuration (regex patterns, timezone, duration, etc.)', null=True),
|
||||
),
|
||||
migrations.AlterField(
|
||||
model_name='epgsource',
|
||||
name='source_type',
|
||||
field=models.CharField(choices=[('xmltv', 'XMLTV URL'), ('schedules_direct', 'Schedules Direct API'), ('dummy', 'Custom Dummy EPG')], max_length=20),
|
||||
),
|
||||
]
|
||||
18
apps/epg/migrations/0019_alter_programdata_sub_title.py
Normal file
@ -0,0 +1,18 @@
|
|||
# Generated by Django 5.2.4 on 2025-10-22 21:59
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('epg', '0018_epgsource_custom_properties_and_more'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AlterField(
|
||||
model_name='programdata',
|
||||
name='sub_title',
|
||||
field=models.TextField(blank=True, null=True),
|
||||
),
|
||||
]
|
||||
|
|
@ -0,0 +1,119 @@
|
|||
# Generated migration to replace {time} placeholders with {starttime}
|
||||
|
||||
import re
|
||||
from django.db import migrations
|
||||
|
||||
|
||||
def migrate_time_placeholders(apps, schema_editor):
|
||||
"""
|
||||
Replace {time} with {starttime} and {time24} with {starttime24}
|
||||
in all dummy EPG source custom_properties templates.
|
||||
"""
|
||||
EPGSource = apps.get_model('epg', 'EPGSource')
|
||||
|
||||
# Fields that contain templates with placeholders
|
||||
template_fields = [
|
||||
'title_template',
|
||||
'description_template',
|
||||
'upcoming_title_template',
|
||||
'upcoming_description_template',
|
||||
'ended_title_template',
|
||||
'ended_description_template',
|
||||
'channel_logo_url',
|
||||
'program_poster_url',
|
||||
]
|
||||
|
||||
# Get all dummy EPG sources
|
||||
dummy_sources = EPGSource.objects.filter(source_type='dummy')
|
||||
|
||||
updated_count = 0
|
||||
for source in dummy_sources:
|
||||
if not source.custom_properties:
|
||||
continue
|
||||
|
||||
modified = False
|
||||
custom_props = source.custom_properties.copy()
|
||||
|
||||
for field in template_fields:
|
||||
if field in custom_props and custom_props[field]:
|
||||
original_value = custom_props[field]
|
||||
|
||||
# Replace {time24} first (before {time}) to avoid double replacement
|
||||
# e.g., {time24} shouldn't become {starttime24} via {time} -> {starttime}
|
||||
new_value = original_value
|
||||
new_value = re.sub(r'\{time24\}', '{starttime24}', new_value)
|
||||
new_value = re.sub(r'\{time\}', '{starttime}', new_value)
|
||||
|
||||
if new_value != original_value:
|
||||
custom_props[field] = new_value
|
||||
modified = True
|
||||
|
||||
if modified:
|
||||
source.custom_properties = custom_props
|
||||
source.save(update_fields=['custom_properties'])
|
||||
updated_count += 1
|
||||
|
||||
if updated_count > 0:
|
||||
print(f"Migration complete: Updated {updated_count} dummy EPG source(s) with new placeholder names.")
|
||||
else:
|
||||
print("No dummy EPG sources needed placeholder updates.")
|
||||
|
||||
|
||||
def reverse_migration(apps, schema_editor):
|
||||
"""
|
||||
Reverse the migration by replacing {starttime} back to {time}.
|
||||
"""
|
||||
EPGSource = apps.get_model('epg', 'EPGSource')
|
||||
|
||||
template_fields = [
|
||||
'title_template',
|
||||
'description_template',
|
||||
'upcoming_title_template',
|
||||
'upcoming_description_template',
|
||||
'ended_title_template',
|
||||
'ended_description_template',
|
||||
'channel_logo_url',
|
||||
'program_poster_url',
|
||||
]
|
||||
|
||||
dummy_sources = EPGSource.objects.filter(source_type='dummy')
|
||||
|
||||
updated_count = 0
|
||||
for source in dummy_sources:
|
||||
if not source.custom_properties:
|
||||
continue
|
||||
|
||||
modified = False
|
||||
custom_props = source.custom_properties.copy()
|
||||
|
||||
for field in template_fields:
|
||||
if field in custom_props and custom_props[field]:
|
||||
original_value = custom_props[field]
|
||||
|
||||
# Reverse the replacements
|
||||
new_value = original_value
|
||||
new_value = re.sub(r'\{starttime24\}', '{time24}', new_value)
|
||||
new_value = re.sub(r'\{starttime\}', '{time}', new_value)
|
||||
|
||||
if new_value != original_value:
|
||||
custom_props[field] = new_value
|
||||
modified = True
|
||||
|
||||
if modified:
|
||||
source.custom_properties = custom_props
|
||||
source.save(update_fields=['custom_properties'])
|
||||
updated_count += 1
|
||||
|
||||
if updated_count > 0:
|
||||
print(f"Reverse migration complete: Reverted {updated_count} dummy EPG source(s) to old placeholder names.")
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('epg', '0019_alter_programdata_sub_title'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.RunPython(migrate_time_placeholders, reverse_migration),
|
||||
]
|
||||
|
|
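As a concrete example of what this data migration does to a stored template (the sample string is made up; the substitutions are the same two applied above, in the same order):

import re

template = "Starts at {time} ({time24})"

migrated = re.sub(r"\{time24\}", "{starttime24}", template)
migrated = re.sub(r"\{time\}", "{starttime}", migrated)

print(migrated)  # Starts at {starttime} ({starttime24})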
@ -8,6 +8,7 @@ class EPGSource(models.Model):
|
|||
SOURCE_TYPE_CHOICES = [
|
||||
('xmltv', 'XMLTV URL'),
|
||||
('schedules_direct', 'Schedules Direct API'),
|
||||
('dummy', 'Custom Dummy EPG'),
|
||||
]
|
||||
|
||||
STATUS_IDLE = 'idle'
|
||||
|
|
@ -38,6 +39,12 @@ class EPGSource(models.Model):
|
|||
refresh_task = models.ForeignKey(
|
||||
PeriodicTask, on_delete=models.SET_NULL, null=True, blank=True
|
||||
)
|
||||
custom_properties = models.JSONField(
|
||||
default=dict,
|
||||
blank=True,
|
||||
null=True,
|
||||
help_text="Custom properties for dummy EPG configuration (regex patterns, timezone, duration, etc.)"
|
||||
)
|
||||
status = models.CharField(
|
||||
max_length=20,
|
||||
choices=STATUS_CHOICES,
|
||||
|
|
@ -148,7 +155,7 @@ class ProgramData(models.Model):
|
|||
start_time = models.DateTimeField()
|
||||
end_time = models.DateTimeField()
|
||||
title = models.CharField(max_length=255)
|
||||
sub_title = models.CharField(max_length=255, blank=True, null=True)
|
||||
sub_title = models.TextField(blank=True, null=True)
|
||||
description = models.TextField(blank=True, null=True)
|
||||
tvg_id = models.CharField(max_length=255, null=True, blank=True)
|
||||
custom_properties = models.JSONField(default=dict, blank=True, null=True)
|
||||
@ -4,7 +4,7 @@ from .models import EPGSource, EPGData, ProgramData
|
|||
from apps.channels.models import Channel
|
||||
|
||||
class EPGSourceSerializer(serializers.ModelSerializer):
|
||||
epg_data_ids = serializers.SerializerMethodField()
|
||||
epg_data_count = serializers.SerializerMethodField()
|
||||
read_only_fields = ['created_at', 'updated_at']
|
||||
url = serializers.CharField(
|
||||
required=False,
|
||||
|
|
@ -28,11 +28,13 @@ class EPGSourceSerializer(serializers.ModelSerializer):
|
|||
'last_message',
|
||||
'created_at',
|
||||
'updated_at',
|
||||
'epg_data_ids'
|
||||
'custom_properties',
|
||||
'epg_data_count'
|
||||
]
|
||||
|
||||
def get_epg_data_ids(self, obj):
|
||||
return list(obj.epgs.values_list('id', flat=True))
|
||||
def get_epg_data_count(self, obj):
|
||||
"""Return the count of EPG data entries instead of all IDs to prevent large payloads"""
|
||||
return obj.epgs.count()
|
||||
|
||||
class ProgramDataSerializer(serializers.ModelSerializer):
|
||||
class Meta:
|
||||
|
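The serializer change above swaps the potentially huge epg_data_ids list for a single count, so a source with tens of thousands of EPG entries no longer inflates every API response. Illustrative before/after shapes (values made up):

# Before: every EPGData id for the source was serialized into the payload.
old_shape = {"id": 3, "name": "Provider XMLTV", "epg_data_ids": [101, 102, 103]}

# After: only the count is returned; details are fetched separately when needed.
new_shape = {"id": 3, "name": "Provider XMLTV", "epg_data_count": 38214}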
@ -1,9 +1,9 @@
|
|||
from django.db.models.signals import post_save, post_delete, pre_save
|
||||
from django.dispatch import receiver
|
||||
from .models import EPGSource
|
||||
from .models import EPGSource, EPGData
|
||||
from .tasks import refresh_epg_data, delete_epg_refresh_task_by_id
|
||||
from django_celery_beat.models import PeriodicTask, IntervalSchedule
|
||||
from core.utils import is_protected_path
|
||||
from core.utils import is_protected_path, send_websocket_update
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
|
|
@ -12,15 +12,77 @@ logger = logging.getLogger(__name__)
|
|||
|
||||
@receiver(post_save, sender=EPGSource)
|
||||
def trigger_refresh_on_new_epg_source(sender, instance, created, **kwargs):
|
||||
# Trigger refresh only if the source is newly created and active
|
||||
if created and instance.is_active:
|
||||
# Trigger refresh only if the source is newly created, active, and not a dummy EPG
|
||||
if created and instance.is_active and instance.source_type != 'dummy':
|
||||
refresh_epg_data.delay(instance.id)
|
||||
|
||||
@receiver(post_save, sender=EPGSource)
|
||||
def create_dummy_epg_data(sender, instance, created, **kwargs):
|
||||
"""
|
||||
Automatically create EPGData for dummy EPG sources when they are created.
|
||||
This allows channels to be assigned to dummy EPGs immediately without
|
||||
requiring a refresh first.
|
||||
"""
|
||||
if instance.source_type == 'dummy':
|
||||
# Ensure dummy EPGs always have idle status and no status message
|
||||
if instance.status != EPGSource.STATUS_IDLE or instance.last_message:
|
||||
instance.status = EPGSource.STATUS_IDLE
|
||||
instance.last_message = None
|
||||
instance.save(update_fields=['status', 'last_message'])
|
||||
|
||||
# Create a URL-friendly tvg_id from the dummy EPG name
|
||||
# Replace spaces and special characters with underscores
|
||||
friendly_tvg_id = instance.name.replace(' ', '_').replace('-', '_')
|
||||
# Remove any characters that aren't alphanumeric or underscores
|
||||
friendly_tvg_id = ''.join(c for c in friendly_tvg_id if c.isalnum() or c == '_')
|
||||
# Convert to lowercase for consistency
|
||||
friendly_tvg_id = friendly_tvg_id.lower()
|
||||
# Prefix with 'dummy_' to make it clear this is a dummy EPG
|
||||
friendly_tvg_id = f"dummy_{friendly_tvg_id}"
|
||||
|
||||
# Create or update the EPGData record
|
||||
epg_data, data_created = EPGData.objects.get_or_create(
|
||||
tvg_id=friendly_tvg_id,
|
||||
epg_source=instance,
|
||||
defaults={
|
||||
'name': instance.name,
|
||||
'icon_url': None
|
||||
}
|
||||
)
|
||||
|
||||
# Update name if it changed and record already existed
|
||||
if not data_created and epg_data.name != instance.name:
|
||||
epg_data.name = instance.name
|
||||
epg_data.save(update_fields=['name'])
|
||||
|
||||
if data_created:
|
||||
logger.info(f"Auto-created EPGData for dummy EPG source: {instance.name} (ID: {instance.id})")
|
||||
|
||||
# Send websocket update to notify frontend that EPG data has been created
|
||||
# This allows the channel form to immediately show the new dummy EPG without refreshing
|
||||
send_websocket_update('updates', 'update', {
|
||||
'type': 'epg_data_created',
|
||||
'source_id': instance.id,
|
||||
'source_name': instance.name,
|
||||
'epg_data_id': epg_data.id
|
||||
})
|
||||
else:
|
||||
logger.debug(f"EPGData already exists for dummy EPG source: {instance.name} (ID: {instance.id})")
|
||||
|
||||
@receiver(post_save, sender=EPGSource)
|
||||
def create_or_update_refresh_task(sender, instance, **kwargs):
|
||||
"""
|
||||
Create or update a Celery Beat periodic task when an EPGSource is created/updated.
|
||||
Skip creating tasks for dummy EPG sources as they don't need refreshing.
|
||||
"""
|
||||
# Skip task creation for dummy EPGs
|
||||
if instance.source_type == 'dummy':
|
||||
# If there's an existing task, disable it
|
||||
if instance.refresh_task:
|
||||
instance.refresh_task.enabled = False
|
||||
instance.refresh_task.save(update_fields=['enabled'])
|
||||
return
|
||||
|
||||
task_name = f"epg_source-refresh-{instance.id}"
|
||||
interval, _ = IntervalSchedule.objects.get_or_create(
|
||||
every=int(instance.refresh_interval),
|
||||
|
|
@ -80,7 +142,14 @@ def delete_refresh_task(sender, instance, **kwargs):
|
|||
def update_status_on_active_change(sender, instance, **kwargs):
|
||||
"""
|
||||
When an EPGSource's is_active field changes, update the status accordingly.
|
||||
For dummy EPGs, always ensure status is idle and no status message.
|
||||
"""
|
||||
# Dummy EPGs should always be idle with no status message
|
||||
if instance.source_type == 'dummy':
|
||||
instance.status = EPGSource.STATUS_IDLE
|
||||
instance.last_message = None
|
||||
return
|
||||
|
||||
if instance.pk: # Only for existing records, not new ones
|
||||
try:
|
||||
# Get the current record from the database
|
||||
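For reference, the tvg_id derivation in create_dummy_epg_data above is a simple slugification; running it on a made-up source name shows the kind of identifier that gets stored.

# Same transformation as the signal above, applied to a sample name.
name = "Sports (Dummy) EPG"

friendly = name.replace(" ", "_").replace("-", "_")
friendly = "".join(c for c in friendly if c.isalnum() or c == "_")
friendly = friendly.lower()
friendly = f"dummy_{friendly}"

print(friendly)  # dummy_sports_dummy_epg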
@ -24,7 +24,7 @@ from asgiref.sync import async_to_sync
|
|||
from channels.layers import get_channel_layer
|
||||
|
||||
from .models import EPGSource, EPGData, ProgramData
|
||||
from core.utils import acquire_task_lock, release_task_lock, send_websocket_update, cleanup_memory
|
||||
from core.utils import acquire_task_lock, release_task_lock, send_websocket_update, cleanup_memory, log_system_event
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
|
@ -133,8 +133,9 @@ def delete_epg_refresh_task_by_id(epg_id):
|
|||
@shared_task
|
||||
def refresh_all_epg_data():
|
||||
logger.info("Starting refresh_epg_data task.")
|
||||
active_sources = EPGSource.objects.filter(is_active=True)
|
||||
logger.debug(f"Found {active_sources.count()} active EPGSource(s).")
|
||||
# Exclude dummy EPG sources from refresh - they don't need refreshing
|
||||
active_sources = EPGSource.objects.filter(is_active=True).exclude(source_type='dummy')
|
||||
logger.debug(f"Found {active_sources.count()} active EPGSource(s) (excluding dummy EPGs).")
|
||||
|
||||
for source in active_sources:
|
||||
refresh_epg_data(source.id)
|
||||
|
|
@ -180,6 +181,13 @@ def refresh_epg_data(source_id):
|
|||
gc.collect()
|
||||
return
|
||||
|
||||
# Skip refresh for dummy EPG sources - they don't need refreshing
|
||||
if source.source_type == 'dummy':
|
||||
logger.info(f"Skipping refresh for dummy EPG source {source.name} (ID: {source_id})")
|
||||
release_task_lock('refresh_epg_data', source_id)
|
||||
gc.collect()
|
||||
return
|
||||
|
||||
# Continue with the normal processing...
|
||||
logger.info(f"Processing EPGSource: {source.name} (type: {source.source_type})")
|
||||
if source.source_type == 'xmltv':
|
||||
|
|
@ -1149,6 +1157,12 @@ def parse_programs_for_tvg_id(epg_id):
|
|||
epg = EPGData.objects.get(id=epg_id)
|
||||
epg_source = epg.epg_source
|
||||
|
||||
# Skip program parsing for dummy EPG sources - they don't have program data files
|
||||
if epg_source.source_type == 'dummy':
|
||||
logger.info(f"Skipping program parsing for dummy EPG source {epg_source.name} (ID: {epg_id})")
|
||||
release_task_lock('parse_epg_programs', epg_id)
|
||||
return
|
||||
|
||||
if not Channel.objects.filter(epg_data=epg).exists():
|
||||
logger.info(f"No channels matched to EPG {epg.tvg_id}")
|
||||
release_task_lock('parse_epg_programs', epg_id)
|
||||
|
|
@ -1379,11 +1393,23 @@ def parse_programs_for_tvg_id(epg_id):
|
|||
|
||||
|
||||
def parse_programs_for_source(epg_source, tvg_id=None):
|
||||
"""
|
||||
Parse programs for all MAPPED channels from an EPG source in a single pass.
|
||||
|
||||
This is an optimized version that:
|
||||
1. Only processes EPG entries that are actually mapped to channels
|
||||
2. Parses the XML file ONCE instead of once per channel
|
||||
3. Skips programmes for unmapped channels entirely during parsing
|
||||
|
||||
This dramatically improves performance when an EPG source has many channels
|
||||
but only a fraction are mapped.
|
||||
"""
|
||||
# Send initial programs parsing notification
|
||||
send_epg_update(epg_source.id, "parsing_programs", 0)
|
||||
should_log_memory = False
|
||||
process = None
|
||||
initial_memory = 0
|
||||
source_file = None
|
||||
|
||||
# Add memory tracking only in trace mode or higher
|
||||
try:
|
||||
|
|
@ -1403,91 +1429,250 @@ def parse_programs_for_source(epg_source, tvg_id=None):
|
|||
should_log_memory = False
|
||||
|
||||
try:
|
||||
# Process EPG entries in batches rather than all at once
|
||||
batch_size = 20 # Process fewer channels at once to reduce memory usage
|
||||
epg_count = EPGData.objects.filter(epg_source=epg_source).count()
|
||||
# Only get EPG entries that are actually mapped to channels
|
||||
mapped_epg_ids = set(
|
||||
Channel.objects.filter(
|
||||
epg_data__epg_source=epg_source,
|
||||
epg_data__isnull=False
|
||||
).values_list('epg_data_id', flat=True)
|
||||
)
|
||||
|
||||
if epg_count == 0:
|
||||
logger.info(f"No EPG entries found for source: {epg_source.name}")
|
||||
# Update status - this is not an error, just no entries
|
||||
if not mapped_epg_ids:
|
||||
total_epg_count = EPGData.objects.filter(epg_source=epg_source).count()
|
||||
logger.info(f"No channels mapped to any EPG entries from source: {epg_source.name} "
|
||||
f"(source has {total_epg_count} EPG entries, 0 mapped)")
|
||||
# Update status - this is not an error, just no mapped entries
|
||||
epg_source.status = 'success'
|
||||
epg_source.save(update_fields=['status'])
|
||||
epg_source.last_message = f"No channels mapped to this EPG source ({total_epg_count} entries available)"
|
||||
epg_source.save(update_fields=['status', 'last_message'])
|
||||
send_epg_update(epg_source.id, "parsing_programs", 100, status="success")
|
||||
return True
|
||||
|
||||
logger.info(f"Parsing programs for {epg_count} EPG entries from source: {epg_source.name}")
|
||||
# Get the mapped EPG entries with their tvg_ids
|
||||
mapped_epgs = EPGData.objects.filter(id__in=mapped_epg_ids).values('id', 'tvg_id')
|
||||
tvg_id_to_epg_id = {epg['tvg_id']: epg['id'] for epg in mapped_epgs if epg['tvg_id']}
|
||||
mapped_tvg_ids = set(tvg_id_to_epg_id.keys())
|
||||
|
||||
failed_entries = []
|
||||
program_count = 0
|
||||
channel_count = 0
|
||||
updated_count = 0
|
||||
processed = 0
|
||||
# Process in batches using cursor-based approach to limit memory usage
|
||||
last_id = 0
|
||||
while True:
|
||||
# Get a batch of EPG entries
|
||||
batch_entries = list(EPGData.objects.filter(
|
||||
epg_source=epg_source,
|
||||
id__gt=last_id
|
||||
).order_by('id')[:batch_size])
|
||||
total_epg_count = EPGData.objects.filter(epg_source=epg_source).count()
|
||||
mapped_count = len(mapped_tvg_ids)
|
||||
|
||||
if not batch_entries:
|
||||
break # No more entries to process
|
||||
logger.info(f"Parsing programs for {mapped_count} MAPPED channels from source: {epg_source.name} "
|
||||
f"(skipping {total_epg_count - mapped_count} unmapped EPG entries)")
|
||||
|
||||
# Update last_id for next iteration
|
||||
last_id = batch_entries[-1].id
|
||||
# Get the file path
|
||||
file_path = epg_source.extracted_file_path if epg_source.extracted_file_path else epg_source.file_path
|
||||
if not file_path:
|
||||
file_path = epg_source.get_cache_file()
|
||||
|
||||
# Process this batch
|
||||
for epg in batch_entries:
|
||||
if epg.tvg_id:
|
||||
try:
|
||||
result = parse_programs_for_tvg_id(epg.id)
|
||||
if result == "Task already running":
|
||||
logger.info(f"Program parse for {epg.id} already in progress, skipping")
|
||||
# Check if the file exists
|
||||
if not os.path.exists(file_path):
|
||||
logger.error(f"EPG file not found at: {file_path}")
|
||||
|
||||
processed += 1
|
||||
progress = min(95, int((processed / epg_count) * 100)) if epg_count > 0 else 50
|
||||
send_epg_update(epg_source.id, "parsing_programs", progress)
|
||||
except Exception as e:
|
||||
logger.error(f"Error parsing programs for tvg_id={epg.tvg_id}: {e}", exc_info=True)
|
||||
failed_entries.append(f"{epg.tvg_id}: {str(e)}")
|
||||
if epg_source.url:
|
||||
# Update the file path in the database
|
||||
new_path = epg_source.get_cache_file()
|
||||
logger.info(f"Updating file_path from '{file_path}' to '{new_path}'")
|
||||
epg_source.file_path = new_path
|
||||
epg_source.save(update_fields=['file_path'])
|
||||
logger.info(f"Fetching new EPG data from URL: {epg_source.url}")
|
||||
|
||||
# Force garbage collection after each batch
|
||||
batch_entries = None # Remove reference to help garbage collection
|
||||
# Fetch new data before continuing
|
||||
fetch_success = fetch_xmltv(epg_source)
|
||||
|
||||
if not fetch_success:
|
||||
logger.error(f"Failed to fetch EPG data for source: {epg_source.name}")
|
||||
epg_source.status = 'error'
|
||||
epg_source.last_message = f"Failed to download EPG data"
|
||||
epg_source.save(update_fields=['status', 'last_message'])
|
||||
send_epg_update(epg_source.id, "parsing_programs", 100, status="error", error="Failed to download EPG file")
|
||||
return False
|
||||
|
||||
# Update file_path with the new location
|
||||
file_path = epg_source.extracted_file_path if epg_source.extracted_file_path else epg_source.file_path
|
||||
else:
|
||||
logger.error(f"No URL provided for EPG source {epg_source.name}, cannot fetch new data")
|
||||
epg_source.status = 'error'
|
||||
epg_source.last_message = f"No URL provided, cannot fetch EPG data"
|
||||
epg_source.save(update_fields=['status', 'last_message'])
|
||||
send_epg_update(epg_source.id, "parsing_programs", 100, status="error", error="No URL provided")
|
||||
return False
|
||||
|
||||
# SINGLE PASS PARSING: Parse the XML file once and collect all programs in memory
|
||||
# We parse FIRST, then do an atomic delete+insert to avoid race conditions
|
||||
# where clients might see empty/partial EPG data during the transition
|
||||
all_programs_to_create = []
|
||||
programs_by_channel = {tvg_id: 0 for tvg_id in mapped_tvg_ids} # Track count per channel
|
||||
total_programs = 0
|
||||
skipped_programs = 0
|
||||
last_progress_update = 0
|
||||
|
||||
try:
|
||||
logger.debug(f"Opening file for single-pass parsing: {file_path}")
|
||||
source_file = open(file_path, 'rb')
|
||||
|
||||
# Stream parse the file using lxml's iterparse
|
||||
program_parser = etree.iterparse(source_file, events=('end',), tag='programme', remove_blank_text=True, recover=True)
|
||||
|
||||
for _, elem in program_parser:
|
||||
channel_id = elem.get('channel')
|
||||
|
||||
# Skip programmes for unmapped channels immediately
|
||||
if channel_id not in mapped_tvg_ids:
|
||||
skipped_programs += 1
|
||||
# Clear element to free memory
|
||||
clear_element(elem)
|
||||
continue
|
||||
|
||||
# This programme is for a mapped channel - process it
|
||||
try:
|
||||
start_time = parse_xmltv_time(elem.get('start'))
|
||||
end_time = parse_xmltv_time(elem.get('stop'))
|
||||
title = None
|
||||
desc = None
|
||||
sub_title = None
|
||||
|
||||
# Efficiently process child elements
|
||||
for child in elem:
|
||||
if child.tag == 'title':
|
||||
title = child.text or 'No Title'
|
||||
elif child.tag == 'desc':
|
||||
desc = child.text or ''
|
||||
elif child.tag == 'sub-title':
|
||||
sub_title = child.text or ''
|
||||
|
||||
if not title:
|
||||
title = 'No Title'
|
||||
|
||||
# Extract custom properties
|
||||
custom_props = extract_custom_properties(elem)
|
||||
custom_properties_json = custom_props if custom_props else None
|
||||
|
||||
epg_id = tvg_id_to_epg_id[channel_id]
|
||||
all_programs_to_create.append(ProgramData(
|
||||
epg_id=epg_id,
|
||||
start_time=start_time,
|
||||
end_time=end_time,
|
||||
title=title,
|
||||
description=desc,
|
||||
sub_title=sub_title,
|
||||
tvg_id=channel_id,
|
||||
custom_properties=custom_properties_json
|
||||
))
|
||||
total_programs += 1
|
||||
programs_by_channel[channel_id] += 1
|
||||
|
||||
# Clear the element to free memory
|
||||
clear_element(elem)
|
||||
|
||||
# Send progress update (estimate based on programs processed)
|
||||
if total_programs - last_progress_update >= 5000:
|
||||
last_progress_update = total_programs
|
||||
# Cap at 70% during parsing phase (save 30% for DB operations)
|
||||
progress = min(70, 10 + int((total_programs / max(total_programs + 10000, 1)) * 60))
|
||||
send_epg_update(epg_source.id, "parsing_programs", progress,
|
||||
processed=total_programs, channels=mapped_count)
|
||||
|
||||
# Periodic garbage collection during parsing
|
||||
if total_programs % 5000 == 0:
|
||||
gc.collect()
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing program for {channel_id}: {e}", exc_info=True)
|
||||
clear_element(elem)
|
||||
continue
|
||||
|
||||
except etree.XMLSyntaxError as xml_error:
|
||||
logger.error(f"XML syntax error parsing program data: {xml_error}")
|
||||
epg_source.status = EPGSource.STATUS_ERROR
|
||||
epg_source.last_message = f"XML parsing error: {str(xml_error)}"
|
||||
epg_source.save(update_fields=['status', 'last_message'])
|
||||
send_epg_update(epg_source.id, "parsing_programs", 100, status="error", message=str(xml_error))
|
||||
return False
|
||||
except Exception as e:
|
||||
logger.error(f"Error parsing XML for programs: {e}", exc_info=True)
|
||||
raise
|
||||
finally:
|
||||
if source_file:
|
||||
source_file.close()
|
||||
source_file = None
|
||||
|
||||
# Now perform atomic delete + bulk insert
|
||||
# This ensures clients never see empty/partial EPG data
|
||||
logger.info(f"Parsed {total_programs} programs, performing atomic database update...")
|
||||
send_epg_update(epg_source.id, "parsing_programs", 75, message="Updating database...")
|
||||
|
||||
batch_size = 1000
|
||||
try:
|
||||
with transaction.atomic():
|
||||
# Delete existing programs for mapped EPGs
|
||||
deleted_count = ProgramData.objects.filter(epg_id__in=mapped_epg_ids).delete()[0]
|
||||
logger.debug(f"Deleted {deleted_count} existing programs")
|
||||
|
||||
# Clean up orphaned programs for unmapped EPG entries
|
||||
unmapped_epg_ids = list(EPGData.objects.filter(
|
||||
epg_source=epg_source
|
||||
).exclude(id__in=mapped_epg_ids).values_list('id', flat=True))
|
||||
|
||||
if unmapped_epg_ids:
|
||||
orphaned_count = ProgramData.objects.filter(epg_id__in=unmapped_epg_ids).delete()[0]
|
||||
if orphaned_count > 0:
|
||||
logger.info(f"Cleaned up {orphaned_count} orphaned programs for {len(unmapped_epg_ids)} unmapped EPG entries")
|
||||
|
||||
# Bulk insert all new programs in batches within the same transaction
|
||||
for i in range(0, len(all_programs_to_create), batch_size):
|
||||
batch = all_programs_to_create[i:i + batch_size]
|
||||
ProgramData.objects.bulk_create(batch)
|
||||
|
||||
# Update progress during insertion
|
||||
progress = 75 + int((i / len(all_programs_to_create)) * 20) if all_programs_to_create else 95
|
||||
if i % (batch_size * 5) == 0:
|
||||
send_epg_update(epg_source.id, "parsing_programs", min(95, progress),
|
||||
message=f"Inserting programs... {i}/{len(all_programs_to_create)}")
|
||||
|
||||
logger.info(f"Atomic update complete: deleted {deleted_count}, inserted {total_programs} programs")
|
||||
|
||||
except Exception as db_error:
|
||||
logger.error(f"Database error during atomic update: {db_error}", exc_info=True)
|
||||
epg_source.status = EPGSource.STATUS_ERROR
|
||||
epg_source.last_message = f"Database error: {str(db_error)}"
|
||||
epg_source.save(update_fields=['status', 'last_message'])
|
||||
send_epg_update(epg_source.id, "parsing_programs", 100, status="error", message=str(db_error))
|
||||
return False
|
||||
finally:
|
||||
# Clear the large list to free memory
|
||||
all_programs_to_create = None
|
||||
gc.collect()
|
||||
|
||||
# If there were failures, include them in the message but continue
|
||||
if failed_entries:
|
||||
epg_source.status = EPGSource.STATUS_SUCCESS # Still mark as success if some processed
|
||||
error_summary = f"Failed to parse {len(failed_entries)} of {epg_count} entries"
|
||||
stats_summary = f"Processed {program_count} programs across {channel_count} channels. Updated: {updated_count}."
|
||||
epg_source.last_message = f"{stats_summary} Warning: {error_summary}"
|
||||
epg_source.updated_at = timezone.now()
|
||||
epg_source.save(update_fields=['status', 'last_message', 'updated_at'])
|
||||
# Count channels that actually got programs
|
||||
channels_with_programs = sum(1 for count in programs_by_channel.values() if count > 0)
|
||||
|
||||
# Send completion notification with mixed status
|
||||
send_epg_update(epg_source.id, "parsing_programs", 100,
|
||||
status="success",
|
||||
message=epg_source.last_message)
|
||||
|
||||
# Explicitly release memory of large lists before returning
|
||||
del failed_entries
|
||||
gc.collect()
|
||||
|
||||
return True
|
||||
|
||||
# If all successful, set a comprehensive success message
|
||||
# Success message
|
||||
epg_source.status = EPGSource.STATUS_SUCCESS
|
||||
epg_source.last_message = f"Successfully processed {program_count} programs across {channel_count} channels. Updated: {updated_count}."
|
||||
epg_source.last_message = (
|
||||
f"Parsed {total_programs:,} programs for {channels_with_programs} channels "
|
||||
f"(skipped {skipped_programs:,} programmes for {total_epg_count - mapped_count} unmapped channels)"
|
||||
)
|
||||
epg_source.updated_at = timezone.now()
|
||||
epg_source.save(update_fields=['status', 'last_message', 'updated_at'])
|
||||
|
||||
# Log system event for EPG refresh
|
||||
log_system_event(
|
||||
event_type='epg_refresh',
|
||||
source_name=epg_source.name,
|
||||
programs=total_programs,
|
||||
channels=channels_with_programs,
|
||||
skipped_programs=skipped_programs,
|
||||
unmapped_channels=total_epg_count - mapped_count,
|
||||
)
|
||||
|
||||
# Send completion notification with status
|
||||
send_epg_update(epg_source.id, "parsing_programs", 100,
|
||||
status="success",
|
||||
message=epg_source.last_message)
|
||||
|
||||
logger.info(f"Completed parsing all programs for source: {epg_source.name}")
|
||||
logger.info(f"Completed parsing programs for source: {epg_source.name} - "
|
||||
f"{total_programs:,} programs for {channels_with_programs} channels, "
|
||||
f"skipped {skipped_programs:,} programmes for unmapped channels")
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
|
|
@@ -1502,14 +1687,19 @@ def parse_programs_for_source(epg_source, tvg_id=None):
return False
finally:
# Final memory cleanup and tracking
if source_file:
try:
source_file.close()
except:
pass
source_file = None

# Explicitly release any remaining large data structures
failed_entries = None
program_count = None
channel_count = None
updated_count = None
processed = None
programs_to_create = None
programs_by_channel = None
mapped_epg_ids = None
mapped_tvg_ids = None
tvg_id_to_epg_id = None
gc.collect()

# Add comprehensive memory cleanup at the end
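The delete and bulk insert earlier in this function run inside a single transaction so clients never observe an empty or half-refreshed guide. A condensed, hedged sketch of that "parse first, swap atomically" pattern (the model import path and helper name are assumptions, not the exact project API):

```python
# Minimal sketch of the parse-then-atomic-swap pattern used above.
from django.db import transaction
from apps.epg.models import ProgramData  # import path assumed

BATCH_SIZE = 1000

def atomic_program_swap(mapped_epg_ids, new_programs):
    """Replace all programs for the given EPG ids in one transaction.

    `new_programs` is a list of unsaved ProgramData instances built fully in
    memory before this is called, so readers never see partial EPG data.
    """
    with transaction.atomic():
        # Drop the old rows only once the replacement data is ready.
        ProgramData.objects.filter(epg_id__in=mapped_epg_ids).delete()

        # Insert the replacement rows in batches to bound memory and query size.
        for i in range(0, len(new_programs), BATCH_SIZE):
            ProgramData.objects.bulk_create(new_programs[i:i + BATCH_SIZE])
```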
@@ -1943,3 +2133,20 @@ def detect_file_format(file_path=None, content=None):
# If we reach here, we couldn't reliably determine the format
return format_type, is_compressed, file_extension


def generate_dummy_epg(source):
"""
DEPRECATED: This function is no longer used.

Dummy EPG programs are now generated on-demand when they are requested
(during XMLTV export or EPG grid display), rather than being pre-generated
and stored in the database.

See: apps/output/views.py - generate_custom_dummy_programs()

This function remains for backward compatibility but should not be called.
"""
logger.warning(f"generate_dummy_epg() called for {source.name} but this function is deprecated. "
f"Dummy EPG programs are now generated on-demand.")
return True
@ -152,6 +152,46 @@ class M3UAccountViewSet(viewsets.ModelViewSet):
|
|||
and not old_vod_enabled
|
||||
and new_vod_enabled
|
||||
):
|
||||
# Create Uncategorized categories immediately so they're available in the UI
|
||||
from apps.vod.models import VODCategory, M3UVODCategoryRelation
|
||||
|
||||
# Create movie Uncategorized category
|
||||
movie_category, _ = VODCategory.objects.get_or_create(
|
||||
name="Uncategorized",
|
||||
category_type="movie",
|
||||
defaults={}
|
||||
)
|
||||
|
||||
# Create series Uncategorized category
|
||||
series_category, _ = VODCategory.objects.get_or_create(
|
||||
name="Uncategorized",
|
||||
category_type="series",
|
||||
defaults={}
|
||||
)
|
||||
|
||||
# Create relations for both categories (disabled by default until first refresh)
|
||||
account_custom_props = instance.custom_properties or {}
|
||||
auto_enable_new = account_custom_props.get("auto_enable_new_groups_vod", True)
|
||||
|
||||
M3UVODCategoryRelation.objects.get_or_create(
|
||||
category=movie_category,
|
||||
m3u_account=instance,
|
||||
defaults={
|
||||
'enabled': auto_enable_new,
|
||||
'custom_properties': {}
|
||||
}
|
||||
)
|
||||
|
||||
M3UVODCategoryRelation.objects.get_or_create(
|
||||
category=series_category,
|
||||
m3u_account=instance,
|
||||
defaults={
|
||||
'enabled': auto_enable_new,
|
||||
'custom_properties': {}
|
||||
}
|
||||
)
|
||||
|
||||
# Trigger full VOD refresh
|
||||
from apps.vod.tasks import refresh_vod_content
|
||||
|
||||
refresh_vod_content.delay(instance.id)
|
||||
|
|
|
|||
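The get_or_create calls in the hunk above make enabling VOD idempotent: toggling the flag twice does not create duplicate "Uncategorized" categories or relations. A small hedged sketch of the same idiom, reusing the model named in the diff:

```python
# Illustrative only: re-running get_or_create with the same lookup fields
# returns the existing row instead of inserting a duplicate.
from apps.vod.models import VODCategory  # import path taken from the diff above

cat1, created1 = VODCategory.objects.get_or_create(name="Uncategorized", category_type="movie")
cat2, created2 = VODCategory.objects.get_or_create(name="Uncategorized", category_type="movie")
assert cat1.pk == cat2.pk          # same row both times
assert not created2                # only the first call can insert (on a fresh database)
```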
|
|
@ -24,11 +24,13 @@ from core.utils import (
|
|||
acquire_task_lock,
|
||||
release_task_lock,
|
||||
natural_sort_key,
|
||||
log_system_event,
|
||||
)
|
||||
from core.models import CoreSettings, UserAgent
|
||||
from asgiref.sync import async_to_sync
|
||||
from core.xtream_codes import Client as XCClient
|
||||
from core.utils import send_websocket_update
|
||||
from .utils import normalize_stream_url
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
|
@ -219,6 +221,10 @@ def fetch_m3u_lines(account, use_cache=False):
|
|||
# Has HTTP URLs, might be a simple M3U without headers
|
||||
is_valid_m3u = True
|
||||
logger.info("Content validated as M3U: contains HTTP URLs")
|
||||
elif any(line.strip().startswith(('rtsp', 'rtp', 'udp')) for line in content_lines):
|
||||
# Has RTSP/RTP/UDP URLs, might be a simple M3U without headers
|
||||
is_valid_m3u = True
|
||||
logger.info("Content validated as M3U: contains RTSP/RTP/UDP URLs")
|
||||
|
||||
if not is_valid_m3u:
|
||||
# Log what we actually received for debugging
|
||||
|
|
@@ -434,25 +440,51 @@ def get_case_insensitive_attr(attributes, key, default=""):
def parse_extinf_line(line: str) -> dict:
"""
Parse an EXTINF line from an M3U file.
This function removes the "#EXTINF:" prefix, then splits the remaining
string on the first comma that is not enclosed in quotes.
This function removes the "#EXTINF:" prefix, then extracts all key="value" attributes,
and treats everything after the last attribute as the display name.

Returns a dictionary with:
- 'attributes': a dict of attribute key/value pairs (e.g. tvg-id, tvg-logo, group-title)
- 'display_name': the text after the comma (the fallback display name)
- 'display_name': the text after the attributes (the fallback display name)
- 'name': the value from tvg-name (if present) or the display name otherwise.
"""
if not line.startswith("#EXTINF:"):
return None
content = line[len("#EXTINF:") :].strip()
# Split on the first comma that is not inside quotes.
parts = re.split(r',(?=(?:[^"]*"[^"]*")*[^"]*$)', content, maxsplit=1)
if len(parts) != 2:
return None
attributes_part, display_name = parts[0], parts[1].strip()
attrs = dict(re.findall(r'([^\s]+)=["\']([^"\']+)["\']', attributes_part))
# Use tvg-name attribute if available; otherwise, use the display name.
name = get_case_insensitive_attr(attrs, "tvg-name", display_name)

# Single pass: extract all attributes AND track the last attribute position
# This regex matches both key="value" and key='value' patterns
attrs = {}
last_attr_end = 0

# Use a single regex that handles both quote types
for match in re.finditer(r'([^\s]+)=(["\'])([^\2]*?)\2', content):
key = match.group(1)
value = match.group(3)
attrs[key] = value
last_attr_end = match.end()

# Everything after the last attribute (skipping leading comma and whitespace) is the display name
if last_attr_end > 0:
remaining = content[last_attr_end:].strip()
# Remove leading comma if present
if remaining.startswith(','):
remaining = remaining[1:].strip()
display_name = remaining
else:
# No attributes found, try the old comma-split method as fallback
parts = content.split(',', 1)
if len(parts) == 2:
display_name = parts[1].strip()
else:
display_name = content.strip()

# Use tvg-name attribute if available; otherwise try tvc-guide-title, then fall back to display name.
name = get_case_insensitive_attr(attrs, "tvg-name", None)
if not name:
name = get_case_insensitive_attr(attrs, "tvc-guide-title", None)
if not name:
name = display_name
return {"attributes": attrs, "display_name": display_name, "name": name}
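A hedged example of what the new attribute-first parser is expected to return for a typical EXTINF line (the output is inferred from the regex above, not captured from a real run):

```python
# Illustrative input/output for parse_extinf_line(); URL and channel names are placeholders.
line = ('#EXTINF:-1 tvg-id="news.hd" tvg-name="News HD" '
        'tvg-logo="http://logo.example/news.png" group-title="News",News HD')

result = parse_extinf_line(line)
# result["attributes"]   -> {"tvg-id": "news.hd", "tvg-name": "News HD",
#                            "tvg-logo": "http://logo.example/news.png",
#                            "group-title": "News"}
# result["display_name"] -> "News HD"   (text after the last attribute, leading comma stripped)
# result["name"]         -> "News HD"   (taken from tvg-name)
```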
@ -894,6 +926,12 @@ def process_m3u_batch_direct(account_id, batch, groups, hash_keys):
|
|||
for stream_info in batch:
|
||||
try:
|
||||
name, url = stream_info["name"], stream_info["url"]
|
||||
|
||||
# Validate URL length - maximum of 4096 characters
|
||||
if url and len(url) > 4096:
|
||||
logger.warning(f"Skipping stream '{name}': URL too long ({len(url)} characters, max 4096)")
|
||||
continue
|
||||
|
||||
tvg_id, tvg_logo = get_case_insensitive_attr(
|
||||
stream_info["attributes"], "tvg-id", ""
|
||||
), get_case_insensitive_attr(stream_info["attributes"], "tvg-logo", "")
|
||||
|
|
@ -1180,52 +1218,14 @@ def refresh_m3u_groups(account_id, use_cache=False, full_refresh=False):
|
|||
auth_result = xc_client.authenticate()
|
||||
logger.debug(f"Authentication response: {auth_result}")
|
||||
|
||||
# Save account information to all active profiles
|
||||
# Queue async profile refresh task to run in background
|
||||
# This prevents any delay in the main refresh process
|
||||
try:
|
||||
from apps.m3u.models import M3UAccountProfile
|
||||
|
||||
profiles = M3UAccountProfile.objects.filter(
|
||||
m3u_account=account,
|
||||
is_active=True
|
||||
)
|
||||
|
||||
# Update each profile with account information using its own transformed credentials
|
||||
for profile in profiles:
|
||||
try:
|
||||
# Get transformed credentials for this specific profile
|
||||
profile_url, profile_username, profile_password = get_transformed_credentials(account, profile)
|
||||
|
||||
# Create a separate XC client for this profile's credentials
|
||||
with XCClient(
|
||||
profile_url,
|
||||
profile_username,
|
||||
profile_password,
|
||||
user_agent_string
|
||||
) as profile_client:
|
||||
# Authenticate with this profile's credentials
|
||||
if profile_client.authenticate():
|
||||
# Get account information specific to this profile's credentials
|
||||
profile_account_info = profile_client.get_account_info()
|
||||
|
||||
# Merge with existing custom_properties if they exist
|
||||
existing_props = profile.custom_properties or {}
|
||||
existing_props.update(profile_account_info)
|
||||
profile.custom_properties = existing_props
|
||||
profile.save(update_fields=['custom_properties'])
|
||||
|
||||
logger.info(f"Updated account information for profile '{profile.name}' with transformed credentials")
|
||||
else:
|
||||
logger.warning(f"Failed to authenticate profile '{profile.name}' with transformed credentials")
|
||||
|
||||
except Exception as profile_error:
|
||||
logger.error(f"Failed to update account information for profile '{profile.name}': {str(profile_error)}")
|
||||
# Continue with other profiles even if one fails
|
||||
|
||||
logger.info(f"Processed account information for {profiles.count()} profiles for account {account.name}")
|
||||
|
||||
except Exception as save_error:
|
||||
logger.warning(f"Failed to process profile account information: {str(save_error)}")
|
||||
# Don't fail the whole process if saving account info fails
|
||||
logger.info(f"Queueing background profile refresh for account {account.name}")
|
||||
refresh_account_profiles.delay(account.id)
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to queue profile refresh task: {str(e)}")
|
||||
# Don't fail the main refresh if profile refresh can't be queued
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to authenticate with XC server: {str(e)}"
|
||||
|
|
@ -1367,10 +1367,12 @@ def refresh_m3u_groups(account_id, use_cache=False, full_refresh=False):
|
|||
)
|
||||
problematic_lines.append((line_index + 1, line[:200]))
|
||||
|
||||
elif extinf_data and line.startswith("http"):
|
||||
elif extinf_data and (line.startswith("http") or line.startswith("rtsp") or line.startswith("rtp") or line.startswith("udp")):
|
||||
url_count += 1
|
||||
# Normalize UDP URLs only (e.g., remove VLC-specific @ prefix)
|
||||
normalized_url = normalize_stream_url(line) if line.startswith("udp") else line
|
||||
# Associate URL with the last EXTINF line
|
||||
extinf_data[-1]["url"] = line
|
||||
extinf_data[-1]["url"] = normalized_url
|
||||
valid_stream_count += 1
|
||||
|
||||
# Periodically log progress for large files
|
||||
|
|
@ -1548,7 +1550,7 @@ def sync_auto_channels(account_id, scan_start_time=None):
|
|||
|
||||
# Get force_dummy_epg, group_override, and regex patterns from group custom_properties
|
||||
group_custom_props = {}
|
||||
force_dummy_epg = False
|
||||
force_dummy_epg = False # Backward compatibility: legacy option to disable EPG
|
||||
override_group_id = None
|
||||
name_regex_pattern = None
|
||||
name_replace_pattern = None
|
||||
|
|
@ -1557,6 +1559,8 @@ def sync_auto_channels(account_id, scan_start_time=None):
|
|||
channel_sort_order = None
|
||||
channel_sort_reverse = False
|
||||
stream_profile_id = None
|
||||
custom_logo_id = None
|
||||
custom_epg_id = None # New option: select specific EPG source (takes priority over force_dummy_epg)
|
||||
if group_relation.custom_properties:
|
||||
group_custom_props = group_relation.custom_properties
|
||||
force_dummy_epg = group_custom_props.get("force_dummy_epg", False)
|
||||
|
|
@ -1567,11 +1571,13 @@ def sync_auto_channels(account_id, scan_start_time=None):
|
|||
)
|
||||
name_match_regex = group_custom_props.get("name_match_regex")
|
||||
channel_profile_ids = group_custom_props.get("channel_profile_ids")
|
||||
custom_epg_id = group_custom_props.get("custom_epg_id")
|
||||
channel_sort_order = group_custom_props.get("channel_sort_order")
|
||||
channel_sort_reverse = group_custom_props.get(
|
||||
"channel_sort_reverse", False
|
||||
)
|
||||
stream_profile_id = group_custom_props.get("stream_profile_id")
|
||||
custom_logo_id = group_custom_props.get("custom_logo_id")
|
||||
|
||||
# Determine which group to use for created channels
|
||||
target_group = channel_group
|
||||
|
|
@ -1826,7 +1832,25 @@ def sync_auto_channels(account_id, scan_start_time=None):
|
|||
|
||||
# Handle logo updates
|
||||
current_logo = None
|
||||
if stream.logo_url:
|
||||
if custom_logo_id:
|
||||
# Use the custom logo specified in group settings
|
||||
from apps.channels.models import Logo
|
||||
try:
|
||||
current_logo = Logo.objects.get(id=custom_logo_id)
|
||||
except Logo.DoesNotExist:
|
||||
logger.warning(
|
||||
f"Custom logo with ID {custom_logo_id} not found for existing channel, falling back to stream logo"
|
||||
)
|
||||
# Fall back to stream logo if custom logo not found
|
||||
if stream.logo_url:
|
||||
current_logo, _ = Logo.objects.get_or_create(
|
||||
url=stream.logo_url,
|
||||
defaults={
|
||||
"name": stream.name or stream.tvg_id or "Unknown"
|
||||
},
|
||||
)
|
||||
elif stream.logo_url:
|
||||
# No custom logo configured, use stream logo
|
||||
from apps.channels.models import Logo
|
||||
|
||||
current_logo, _ = Logo.objects.get_or_create(
|
||||
|
|
@ -1842,10 +1866,42 @@ def sync_auto_channels(account_id, scan_start_time=None):
|
|||
|
||||
# Handle EPG data updates
|
||||
current_epg_data = None
|
||||
if stream.tvg_id and not force_dummy_epg:
|
||||
if custom_epg_id:
|
||||
# Use the custom EPG specified in group settings (e.g., a dummy EPG)
|
||||
from apps.epg.models import EPGSource
|
||||
try:
|
||||
epg_source = EPGSource.objects.get(id=custom_epg_id)
|
||||
# For dummy EPGs, select the first (and typically only) EPGData entry from this source
|
||||
if epg_source.source_type == 'dummy':
|
||||
current_epg_data = EPGData.objects.filter(
|
||||
epg_source=epg_source
|
||||
).first()
|
||||
if not current_epg_data:
|
||||
logger.warning(
|
||||
f"No EPGData found for dummy EPG source {epg_source.name} (ID: {custom_epg_id})"
|
||||
)
|
||||
else:
|
||||
# For non-dummy sources, try to find existing EPGData by tvg_id
|
||||
if stream.tvg_id:
|
||||
current_epg_data = EPGData.objects.filter(
|
||||
tvg_id=stream.tvg_id,
|
||||
epg_source=epg_source
|
||||
).first()
|
||||
except EPGSource.DoesNotExist:
|
||||
logger.warning(
|
||||
f"Custom EPG source with ID {custom_epg_id} not found for existing channel, falling back to auto-match"
|
||||
)
|
||||
# Fall back to auto-match by tvg_id
|
||||
if stream.tvg_id and not force_dummy_epg:
|
||||
current_epg_data = EPGData.objects.filter(
|
||||
tvg_id=stream.tvg_id
|
||||
).first()
|
||||
elif stream.tvg_id and not force_dummy_epg:
|
||||
# Auto-match EPG by tvg_id (original behavior)
|
||||
current_epg_data = EPGData.objects.filter(
|
||||
tvg_id=stream.tvg_id
|
||||
).first()
|
||||
# If force_dummy_epg is True and no custom_epg_id, current_epg_data stays None
|
||||
|
||||
if existing_channel.epg_data != current_epg_data:
|
||||
existing_channel.epg_data = current_epg_data
|
||||
|
|
@ -1935,19 +1991,81 @@ def sync_auto_channels(account_id, scan_start_time=None):
|
|||
ChannelProfileMembership.objects.bulk_create(memberships)
|
||||
|
||||
# Try to match EPG data
|
||||
if stream.tvg_id and not force_dummy_epg:
|
||||
if custom_epg_id:
|
||||
# Use the custom EPG specified in group settings (e.g., a dummy EPG)
|
||||
from apps.epg.models import EPGSource
|
||||
try:
|
||||
epg_source = EPGSource.objects.get(id=custom_epg_id)
|
||||
# For dummy EPGs, select the first (and typically only) EPGData entry from this source
|
||||
if epg_source.source_type == 'dummy':
|
||||
epg_data = EPGData.objects.filter(
|
||||
epg_source=epg_source
|
||||
).first()
|
||||
if epg_data:
|
||||
channel.epg_data = epg_data
|
||||
channel.save(update_fields=["epg_data"])
|
||||
else:
|
||||
logger.warning(
|
||||
f"No EPGData found for dummy EPG source {epg_source.name} (ID: {custom_epg_id})"
|
||||
)
|
||||
else:
|
||||
# For non-dummy sources, try to find existing EPGData by tvg_id
|
||||
if stream.tvg_id:
|
||||
epg_data = EPGData.objects.filter(
|
||||
tvg_id=stream.tvg_id,
|
||||
epg_source=epg_source
|
||||
).first()
|
||||
if epg_data:
|
||||
channel.epg_data = epg_data
|
||||
channel.save(update_fields=["epg_data"])
|
||||
except EPGSource.DoesNotExist:
|
||||
logger.warning(
|
||||
f"Custom EPG source with ID {custom_epg_id} not found, falling back to auto-match"
|
||||
)
|
||||
# Fall back to auto-match by tvg_id
|
||||
if stream.tvg_id and not force_dummy_epg:
|
||||
epg_data = EPGData.objects.filter(
|
||||
tvg_id=stream.tvg_id
|
||||
).first()
|
||||
if epg_data:
|
||||
channel.epg_data = epg_data
|
||||
channel.save(update_fields=["epg_data"])
|
||||
elif stream.tvg_id and not force_dummy_epg:
|
||||
# Auto-match EPG by tvg_id (original behavior)
|
||||
epg_data = EPGData.objects.filter(
|
||||
tvg_id=stream.tvg_id
|
||||
).first()
|
||||
if epg_data:
|
||||
channel.epg_data = epg_data
|
||||
channel.save(update_fields=["epg_data"])
|
||||
elif stream.tvg_id and force_dummy_epg:
|
||||
elif force_dummy_epg:
|
||||
# Force dummy EPG with no custom EPG selected (set to None)
|
||||
channel.epg_data = None
|
||||
channel.save(update_fields=["epg_data"])
|
||||
|
||||
# Handle logo
|
||||
if stream.logo_url:
|
||||
if custom_logo_id:
|
||||
# Use the custom logo specified in group settings
|
||||
from apps.channels.models import Logo
|
||||
try:
|
||||
custom_logo = Logo.objects.get(id=custom_logo_id)
|
||||
channel.logo = custom_logo
|
||||
channel.save(update_fields=["logo"])
|
||||
except Logo.DoesNotExist:
|
||||
logger.warning(
|
||||
f"Custom logo with ID {custom_logo_id} not found, falling back to stream logo"
|
||||
)
|
||||
# Fall back to stream logo if custom logo not found
|
||||
if stream.logo_url:
|
||||
logo, _ = Logo.objects.get_or_create(
|
||||
url=stream.logo_url,
|
||||
defaults={
|
||||
"name": stream.name or stream.tvg_id or "Unknown"
|
||||
},
|
||||
)
|
||||
channel.logo = logo
|
||||
channel.save(update_fields=["logo"])
|
||||
elif stream.logo_url:
|
||||
from apps.channels.models import Logo
|
||||
|
||||
logo, _ = Logo.objects.get_or_create(
|
||||
|
|
@ -2114,6 +2232,106 @@ def get_transformed_credentials(account, profile=None):
|
|||
return base_url, base_username, base_password
|
||||
|
||||
|
||||
@shared_task
|
||||
def refresh_account_profiles(account_id):
|
||||
"""Refresh account information for all active profiles of an XC account.
|
||||
|
||||
This task runs asynchronously in the background after account refresh completes.
|
||||
It includes rate limiting delays between profile authentications to prevent provider bans.
|
||||
"""
|
||||
from django.conf import settings
|
||||
import time
|
||||
|
||||
try:
|
||||
account = M3UAccount.objects.get(id=account_id, is_active=True)
|
||||
|
||||
if account.account_type != M3UAccount.Types.XC:
|
||||
logger.debug(f"Account {account_id} is not XC type, skipping profile refresh")
|
||||
return f"Account {account_id} is not an XtreamCodes account"
|
||||
|
||||
from apps.m3u.models import M3UAccountProfile
|
||||
|
||||
profiles = M3UAccountProfile.objects.filter(
|
||||
m3u_account=account,
|
||||
is_active=True
|
||||
)
|
||||
|
||||
if not profiles.exists():
|
||||
logger.info(f"No active profiles found for account {account.name}")
|
||||
return f"No active profiles for account {account_id}"
|
||||
|
||||
# Get user agent for this account
|
||||
try:
|
||||
user_agent_string = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
|
||||
if account.user_agent_id:
|
||||
from core.models import UserAgent
|
||||
ua_obj = UserAgent.objects.get(id=account.user_agent_id)
|
||||
if ua_obj and hasattr(ua_obj, "user_agent") and ua_obj.user_agent:
|
||||
user_agent_string = ua_obj.user_agent
|
||||
except Exception as e:
|
||||
logger.warning(f"Error getting user agent, using fallback: {str(e)}")
|
||||
logger.debug(f"Using user agent for profile refresh: {user_agent_string}")
|
||||
# Get rate limiting delay from settings
|
||||
profile_delay = getattr(settings, 'XC_PROFILE_REFRESH_DELAY', 2.5)
|
||||
|
||||
profiles_updated = 0
|
||||
profiles_failed = 0
|
||||
|
||||
logger.info(f"Starting background refresh for {profiles.count()} profiles of account {account.name}")
|
||||
|
||||
for idx, profile in enumerate(profiles):
|
||||
try:
|
||||
# Add delay between profiles to prevent rate limiting (except for first profile)
|
||||
if idx > 0:
|
||||
logger.info(f"Waiting {profile_delay}s before refreshing next profile to avoid rate limiting")
|
||||
time.sleep(profile_delay)
|
||||
|
||||
# Get transformed credentials for this specific profile
|
||||
profile_url, profile_username, profile_password = get_transformed_credentials(account, profile)
|
||||
|
||||
# Create a separate XC client for this profile's credentials
|
||||
with XCClient(
|
||||
profile_url,
|
||||
profile_username,
|
||||
profile_password,
|
||||
user_agent_string
|
||||
) as profile_client:
|
||||
# Authenticate with this profile's credentials
|
||||
if profile_client.authenticate():
|
||||
# Get account information specific to this profile's credentials
|
||||
profile_account_info = profile_client.get_account_info()
|
||||
|
||||
# Merge with existing custom_properties if they exist
|
||||
existing_props = profile.custom_properties or {}
|
||||
existing_props.update(profile_account_info)
|
||||
profile.custom_properties = existing_props
|
||||
profile.save(update_fields=['custom_properties'])
|
||||
|
||||
profiles_updated += 1
|
||||
logger.info(f"Updated account information for profile '{profile.name}' ({profiles_updated}/{profiles.count()})")
|
||||
else:
|
||||
profiles_failed += 1
|
||||
logger.warning(f"Failed to authenticate profile '{profile.name}' with transformed credentials")
|
||||
|
||||
except Exception as profile_error:
|
||||
profiles_failed += 1
|
||||
logger.error(f"Failed to update account information for profile '{profile.name}': {str(profile_error)}")
|
||||
# Continue with other profiles even if one fails
|
||||
|
||||
result_msg = f"Profile refresh complete for account {account.name}: {profiles_updated} updated, {profiles_failed} failed"
|
||||
logger.info(result_msg)
|
||||
return result_msg
|
||||
|
||||
except M3UAccount.DoesNotExist:
|
||||
error_msg = f"Account {account_id} not found"
|
||||
logger.error(error_msg)
|
||||
return error_msg
|
||||
except Exception as e:
|
||||
error_msg = f"Error refreshing profiles for account {account_id}: {str(e)}"
|
||||
logger.error(error_msg)
|
||||
return error_msg
|
||||
|
||||
|
||||
@shared_task
|
||||
def refresh_account_info(profile_id):
|
||||
"""Refresh only the account information for a specific M3U profile."""
|
||||
|
|
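The refresh_account_profiles task above is meant to be queued rather than called inline, and the delay between profile authentications is read from an optional Django setting. A hedged usage sketch (module path assumed; setting name taken from the getattr call above, default 2.5 seconds):

```python
# Queue the background profile refresh once the main M3U refresh has finished.
from apps.m3u.tasks import refresh_account_profiles  # module path assumed

refresh_account_profiles.delay(account.id)

# Optional: slow down per-profile authentication in settings.py to avoid provider bans.
# XC_PROFILE_REFRESH_DELAY = 5.0  # seconds between profile logins (default 2.5)
```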
@ -2623,6 +2841,17 @@ def refresh_single_m3u_account(account_id):
|
|||
account.updated_at = timezone.now()
|
||||
account.save(update_fields=["status", "last_message", "updated_at"])
|
||||
|
||||
# Log system event for M3U refresh
|
||||
log_system_event(
|
||||
event_type='m3u_refresh',
|
||||
account_name=account.name,
|
||||
elapsed_time=round(elapsed_time, 2),
|
||||
streams_created=streams_created,
|
||||
streams_updated=streams_updated,
|
||||
streams_deleted=streams_deleted,
|
||||
total_processed=streams_processed,
|
||||
)
|
||||
|
||||
# Send final update with complete metrics and explicitly include success status
|
||||
send_m3u_update(
|
||||
account_id,
|
||||
|
|
|
|||
|
|
@@ -8,6 +8,34 @@ lock = threading.Lock()
active_streams_map = {}
logger = logging.getLogger(__name__)


def normalize_stream_url(url):
"""
Normalize stream URLs for compatibility with FFmpeg.

Handles VLC-specific syntax like udp://@239.0.0.1:1234 by removing the @ symbol.
FFmpeg doesn't recognize the @ prefix for multicast addresses.

Args:
url (str): The stream URL to normalize

Returns:
str: The normalized URL
"""
if not url:
return url

# Handle VLC-style UDP multicast URLs: udp://@239.0.0.1:1234 -> udp://239.0.0.1:1234
# The @ symbol in VLC means "listen on all interfaces" but FFmpeg doesn't use this syntax
if url.startswith('udp://@'):
normalized = url.replace('udp://@', 'udp://', 1)
logger.debug(f"Normalized VLC-style UDP URL: {url} -> {normalized}")
return normalized

# Could add other normalizations here in the future (rtp://@, etc.)
return url
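A quick usage check for the helper above (expected values are inferred from the function body, not captured output):

```python
# Expected behavior of normalize_stream_url(); URLs are placeholders.
assert normalize_stream_url("udp://@239.0.0.1:1234") == "udp://239.0.0.1:1234"
assert normalize_stream_url("udp://239.0.0.1:1234") == "udp://239.0.0.1:1234"        # already clean
assert normalize_stream_url("http://example.com/stream.ts") == "http://example.com/stream.ts"
assert normalize_stream_url("") == ""                                                # falsy input returned unchanged
```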
def increment_stream_count(account):
with lock:
current_usage = active_streams_map.get(account.id, 0)
@ -14,3 +14,26 @@ class OutputM3UTest(TestCase):
|
|||
self.assertEqual(response.status_code, 200)
|
||||
content = response.content.decode()
|
||||
self.assertIn("#EXTM3U", content)
|
||||
|
||||
def test_generate_m3u_response_post_empty_body(self):
|
||||
"""
|
||||
Test that a POST request with an empty body returns 200 OK.
|
||||
"""
|
||||
url = reverse('output:generate_m3u')
|
||||
|
||||
response = self.client.post(url, data=None, content_type='application/x-www-form-urlencoded')
|
||||
content = response.content.decode()
|
||||
|
||||
self.assertEqual(response.status_code, 200, "POST with empty body should return 200 OK")
|
||||
self.assertIn("#EXTM3U", content)
|
||||
|
||||
def test_generate_m3u_response_post_with_body(self):
|
||||
"""
|
||||
Test that a POST request with a non-empty body returns 403 Forbidden.
|
||||
"""
|
||||
url = reverse('output:generate_m3u')
|
||||
|
||||
response = self.client.post(url, data={'evilstring': 'muhahaha'})
|
||||
|
||||
self.assertEqual(response.status_code, 403, "POST with body should return 403 Forbidden")
|
||||
self.assertIn("POST requests with body are not allowed, body is:", response.content.decode())
|
||||
|
|
|
|||
apps/output/views.py: 1516 lines changed (file diff suppressed because it is too large)
|
|
@ -1,4 +1,6 @@
|
|||
"""Shared configuration between proxy types"""
|
||||
import time
|
||||
from django.db import connection
|
||||
|
||||
class BaseConfig:
|
||||
DEFAULT_USER_AGENT = 'VLC/3.0.20 LibVLC/3.0.20' # Will only be used if connection to settings fail
|
||||
|
|
@@ -12,13 +14,29 @@ class BaseConfig:
BUFFERING_TIMEOUT = 15 # Seconds to wait for buffering before switching streams
BUFFER_SPEED = 1 # What speed to consider the stream buffering, 1x is normal speed, 2x is double speed, etc.

# Cache for proxy settings (class-level, shared across all instances)
_proxy_settings_cache = None
_proxy_settings_cache_time = 0
_proxy_settings_cache_ttl = 10 # Cache for 10 seconds

@classmethod
def get_proxy_settings(cls):
"""Get proxy settings from CoreSettings JSON data with fallback to defaults"""
"""Get proxy settings from CoreSettings JSON data with fallback to defaults (cached)"""
# Check if cache is still valid
now = time.time()
if cls._proxy_settings_cache is not None and (now - cls._proxy_settings_cache_time) < cls._proxy_settings_cache_ttl:
return cls._proxy_settings_cache

# Cache miss or expired - fetch from database
try:
from core.models import CoreSettings
return CoreSettings.get_proxy_settings()
settings = CoreSettings.get_proxy_settings()
cls._proxy_settings_cache = settings
cls._proxy_settings_cache_time = now
return settings

except Exception:
# Return defaults if database query fails
return {
"buffering_timeout": 15,
"buffering_speed": 1.0,
@@ -27,6 +45,13 @@ class BaseConfig:
"channel_init_grace_period": 5,
}

finally:
# Always close the connection after reading settings
try:
connection.close()
except Exception:
pass

@classmethod
def get_redis_chunk_ttl(cls):
"""Get Redis chunk TTL from database or default"""
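The class-level cache above trades at most ten seconds of staleness for far fewer settings queries per worker. A generic sketch of the same time-based cache idiom, independent of Django (all names here are illustrative):

```python
import time

class TTLCache:
    """Tiny class-level TTL cache, mirroring the _proxy_settings_cache fields above."""
    _value = None
    _stored_at = 0.0
    _ttl = 10  # seconds of acceptable staleness

    @classmethod
    def get(cls, fetch):
        now = time.time()
        if cls._value is not None and (now - cls._stored_at) < cls._ttl:
            return cls._value          # still fresh: skip the expensive fetch
        cls._value = fetch()           # refresh from the slow source (e.g. the database)
        cls._stored_at = now
        return cls._value

# Usage: TTLCache.get(lambda: {"buffering_timeout": 15})
```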
@@ -69,10 +94,10 @@ class TSConfig(BaseConfig):
CLEANUP_INTERVAL = 60 # Check for inactive channels every 60 seconds

# Client tracking settings
CLIENT_RECORD_TTL = 5 # How long client records persist in Redis (seconds). Client will be considered MIA after this time.
CLIENT_RECORD_TTL = 60 # How long client records persist in Redis (seconds). Client will be considered MIA after this time.
CLEANUP_CHECK_INTERVAL = 1 # How often to check for disconnected clients (seconds)
CLIENT_HEARTBEAT_INTERVAL = 1 # How often to send client heartbeats (seconds)
GHOST_CLIENT_MULTIPLIER = 5.0 # How many heartbeat intervals before client considered ghost (5 would mean 5 seconds if heartbeat interval is 1)
CLIENT_HEARTBEAT_INTERVAL = 5 # How often to send client heartbeats (seconds)
GHOST_CLIENT_MULTIPLIER = 6.0 # How many heartbeat intervals before client considered ghost (6 would mean 36 seconds if heartbeat interval is 6)
CLIENT_WAIT_TIMEOUT = 30 # Seconds to wait for client to connect

# Stream health and recovery settings
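With the new defaults, the ghost-detection window works out as below (a back-of-envelope check against the constants above, not project documentation):

```python
# Illustrative arithmetic for the settings shown in this hunk.
CLIENT_HEARTBEAT_INTERVAL = 5      # seconds between heartbeats
GHOST_CLIENT_MULTIPLIER = 6.0      # missed intervals before a client counts as a ghost
CLIENT_RECORD_TTL = 60             # Redis key lifetime for the client record

ghost_after = CLIENT_HEARTBEAT_INTERVAL * GHOST_CLIENT_MULTIPLIER   # 30.0 seconds of silence
assert ghost_after < CLIENT_RECORD_TTL   # the record outlives the ghost window, so cleanup still sees it
```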
@ -8,7 +8,7 @@ import gevent
|
|||
from typing import Set, Optional
|
||||
from apps.proxy.config import TSConfig as Config
|
||||
from redis.exceptions import ConnectionError, TimeoutError
|
||||
from .constants import EventType
|
||||
from .constants import EventType, ChannelState, ChannelMetadataField
|
||||
from .config_helper import ConfigHelper
|
||||
from .redis_keys import RedisKeys
|
||||
from .utils import get_logger
|
||||
|
|
@ -26,6 +26,7 @@ class ClientManager:
|
|||
self.lock = threading.Lock()
|
||||
self.last_active_time = time.time()
|
||||
self.worker_id = worker_id # Store worker ID as instance variable
|
||||
self._heartbeat_running = True # Flag to control heartbeat thread
|
||||
|
||||
# STANDARDIZED KEYS: Move client set under channel namespace
|
||||
self.client_set_key = RedisKeys.clients(channel_id)
|
||||
|
|
@ -33,6 +34,10 @@ class ClientManager:
|
|||
self.heartbeat_interval = ConfigHelper.get('CLIENT_HEARTBEAT_INTERVAL', 10)
|
||||
self.last_heartbeat_time = {}
|
||||
|
||||
# Get ProxyServer instance for ownership checks
|
||||
from .server import ProxyServer
|
||||
self.proxy_server = ProxyServer.get_instance()
|
||||
|
||||
# Start heartbeat thread for local clients
|
||||
self._start_heartbeat_thread()
|
||||
self._registered_clients = set() # Track already registered client IDs
|
||||
|
|
@ -77,56 +82,28 @@ class ClientManager:
|
|||
logger.debug(f"Failed to trigger stats update: {e}")
|
||||
|
||||
def _start_heartbeat_thread(self):
|
||||
"""Start thread to regularly refresh client presence in Redis"""
|
||||
"""Start thread to regularly refresh client presence in Redis for local clients"""
|
||||
def heartbeat_task():
|
||||
no_clients_count = 0 # Track consecutive empty cycles
|
||||
max_empty_cycles = 3 # Exit after this many consecutive empty checks
|
||||
|
||||
logger.debug(f"Started heartbeat thread for channel {self.channel_id} (interval: {self.heartbeat_interval}s)")
|
||||
|
||||
while True:
|
||||
while self._heartbeat_running:
|
||||
try:
|
||||
# Wait for the interval
|
||||
gevent.sleep(self.heartbeat_interval)
|
||||
# Wait for the interval, but check stop flag frequently for quick shutdown
|
||||
# Sleep in 1-second increments to allow faster response to stop signal
|
||||
for _ in range(int(self.heartbeat_interval)):
|
||||
if not self._heartbeat_running:
|
||||
break
|
||||
time.sleep(1)
|
||||
|
||||
# Final check before doing work
|
||||
if not self._heartbeat_running:
|
||||
break
|
||||
|
||||
# Send heartbeat for all local clients
|
||||
with self.lock:
|
||||
if not self.clients or not self.redis_client:
|
||||
# No clients left, increment our counter
|
||||
no_clients_count += 1
|
||||
|
||||
# Check if we're in a shutdown delay period before exiting
|
||||
in_shutdown_delay = False
|
||||
if self.redis_client:
|
||||
try:
|
||||
disconnect_key = RedisKeys.last_client_disconnect(self.channel_id)
|
||||
disconnect_time_bytes = self.redis_client.get(disconnect_key)
|
||||
if disconnect_time_bytes:
|
||||
disconnect_time = float(disconnect_time_bytes.decode('utf-8'))
|
||||
elapsed = time.time() - disconnect_time
|
||||
shutdown_delay = ConfigHelper.channel_shutdown_delay()
|
||||
|
||||
if elapsed < shutdown_delay:
|
||||
in_shutdown_delay = True
|
||||
logger.debug(f"Channel {self.channel_id} in shutdown delay: {elapsed:.1f}s of {shutdown_delay}s elapsed")
|
||||
except Exception as e:
|
||||
logger.debug(f"Error checking shutdown delay: {e}")
|
||||
|
||||
# Only exit if we've seen no clients for several consecutive checks AND we're not in shutdown delay
|
||||
if no_clients_count >= max_empty_cycles and not in_shutdown_delay:
|
||||
logger.info(f"No clients for channel {self.channel_id} after {no_clients_count} consecutive checks and not in shutdown delay, exiting heartbeat thread")
|
||||
return # This exits the thread
|
||||
|
||||
# Skip this cycle if we have no clients but continue if in shutdown delay
|
||||
if not in_shutdown_delay:
|
||||
continue
|
||||
else:
|
||||
# Reset counter during shutdown delay to prevent premature exit
|
||||
no_clients_count = 0
|
||||
continue
|
||||
else:
|
||||
# Reset counter when we see clients
|
||||
no_clients_count = 0
|
||||
# Skip this cycle if we have no local clients
|
||||
if not self.clients:
|
||||
continue
|
||||
|
||||
# IMPROVED GHOST DETECTION: Check for stale clients before sending heartbeats
|
||||
current_time = time.time()
|
||||
|
|
@ -197,11 +174,20 @@ class ClientManager:
|
|||
except Exception as e:
|
||||
logger.error(f"Error in client heartbeat thread: {e}")
|
||||
|
||||
logger.debug(f"Heartbeat thread exiting for channel {self.channel_id}")
|
||||
|
||||
thread = threading.Thread(target=heartbeat_task, daemon=True)
|
||||
thread.name = f"client-heartbeat-{self.channel_id}"
|
||||
thread.start()
|
||||
logger.debug(f"Started client heartbeat thread for channel {self.channel_id} (interval: {self.heartbeat_interval}s)")
|
||||
|
||||
def stop(self):
|
||||
"""Stop the heartbeat thread and cleanup"""
|
||||
logger.debug(f"Stopping ClientManager for channel {self.channel_id}")
|
||||
self._heartbeat_running = False
|
||||
# Give the thread a moment to exit gracefully
|
||||
# Note: We don't join() here because it's a daemon thread and will exit on its own
|
||||
|
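The flag-plus-short-sleeps pattern above lets the daemon thread notice a stop request within about a second instead of waiting out a full heartbeat interval. A standalone sketch of the same idea (class and method names are illustrative):

```python
import threading
import time

class StoppableHeartbeat:
    """Minimal version of the heartbeat loop above: long interval, quick shutdown."""

    def __init__(self, interval=5):
        self.interval = interval
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def start(self):
        self._thread.start()

    def _loop(self):
        while self._running:
            # Sleep in one-second slices so a stop request is honored quickly.
            for _ in range(int(self.interval)):
                if not self._running:
                    return
                time.sleep(1)
            if not self._running:
                return
            print("heartbeat")  # real code would refresh Redis client keys here

    def stop(self):
        self._running = False   # no join(): the daemon thread exits on its own
```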
||||
def _execute_redis_command(self, command_func):
|
||||
"""Execute Redis command with error handling"""
|
||||
if not self.redis_client:
|
||||
|
|
@ -355,16 +341,30 @@ class ClientManager:
|
|||
|
||||
self._notify_owner_of_activity()
|
||||
|
||||
# Publish client disconnected event
|
||||
event_data = json.dumps({
|
||||
"event": EventType.CLIENT_DISCONNECTED, # Use constant instead of string
|
||||
"channel_id": self.channel_id,
|
||||
"client_id": client_id,
|
||||
"worker_id": self.worker_id or "unknown",
|
||||
"timestamp": time.time(),
|
||||
"remaining_clients": remaining
|
||||
})
|
||||
self.redis_client.publish(RedisKeys.events_channel(self.channel_id), event_data)
|
||||
# Check if we're the owner - if so, handle locally; if not, publish event
|
||||
am_i_owner = self.proxy_server and self.proxy_server.am_i_owner(self.channel_id)
|
||||
|
||||
if am_i_owner:
|
||||
# We're the owner - handle the disconnect directly
|
||||
logger.debug(f"Owner handling CLIENT_DISCONNECTED for client {client_id} locally (not publishing)")
|
||||
if remaining == 0:
|
||||
# Trigger shutdown check directly via ProxyServer method
|
||||
logger.debug(f"No clients left - triggering immediate shutdown check")
|
||||
# Spawn greenlet to avoid blocking
|
||||
import gevent
|
||||
gevent.spawn(self.proxy_server.handle_client_disconnect, self.channel_id)
|
||||
else:
|
||||
# We're not the owner - publish event so owner can handle it
|
||||
logger.debug(f"Non-owner publishing CLIENT_DISCONNECTED event for client {client_id} on channel {self.channel_id} from worker {self.worker_id}")
|
||||
event_data = json.dumps({
|
||||
"event": EventType.CLIENT_DISCONNECTED,
|
||||
"channel_id": self.channel_id,
|
||||
"client_id": client_id,
|
||||
"worker_id": self.worker_id or "unknown",
|
||||
"timestamp": time.time(),
|
||||
"remaining_clients": remaining
|
||||
})
|
||||
self.redis_client.publish(RedisKeys.events_channel(self.channel_id), event_data)
|
||||
|
||||
# Trigger channel stats update via WebSocket
|
||||
self._trigger_stats_update()
|
||||
|
|
|
|||
|
|
@ -100,3 +100,12 @@ class ConfigHelper:
|
|||
def channel_init_grace_period():
|
||||
"""Get channel initialization grace period in seconds"""
|
||||
return Config.get_channel_init_grace_period()
|
||||
|
||||
@staticmethod
|
||||
def chunk_timeout():
|
||||
"""
|
||||
Get chunk timeout in seconds (used for both socket and HTTP read timeouts).
|
||||
This controls how long we wait for each chunk before timing out.
|
||||
Set this higher (e.g., 30s) for slow providers that may have intermittent delays.
|
||||
"""
|
||||
return ConfigHelper.get('CHUNK_TIMEOUT', 5) # Default 5 seconds
|
||||
|
|
|
|||
|
|
@ -33,6 +33,8 @@ class EventType:
|
|||
# Stream types
|
||||
class StreamType:
|
||||
HLS = "hls"
|
||||
RTSP = "rtsp"
|
||||
UDP = "udp"
|
||||
TS = "ts"
|
||||
UNKNOWN = "unknown"
|
||||
|
||||
|
|
|
|||
apps/proxy/ts_proxy/http_streamer.py: 138 lines (new file)
|
|
@ -0,0 +1,138 @@
|
|||
"""
|
||||
HTTP Stream Reader - Thread-based HTTP stream reader that writes to a pipe.
|
||||
This allows us to use the same fetch_chunk() path for both transcode and HTTP streams.
|
||||
"""
|
||||
|
||||
import threading
|
||||
import os
|
||||
import requests
|
||||
from requests.adapters import HTTPAdapter
|
||||
from .utils import get_logger
|
||||
|
||||
logger = get_logger()
|
||||
|
||||
|
||||
class HTTPStreamReader:
|
||||
"""Thread-based HTTP stream reader that writes to a pipe"""
|
||||
|
||||
def __init__(self, url, user_agent=None, chunk_size=8192):
|
||||
self.url = url
|
||||
self.user_agent = user_agent
|
||||
self.chunk_size = chunk_size
|
||||
self.session = None
|
||||
self.response = None
|
||||
self.thread = None
|
||||
self.pipe_read = None
|
||||
self.pipe_write = None
|
||||
self.running = False
|
||||
|
||||
def start(self):
|
||||
"""Start the HTTP stream reader thread"""
|
||||
# Create a pipe (works on Windows and Unix)
|
||||
self.pipe_read, self.pipe_write = os.pipe()
|
||||
|
||||
# Start the reader thread
|
||||
self.running = True
|
||||
self.thread = threading.Thread(target=self._read_stream, daemon=True)
|
||||
self.thread.start()
|
||||
|
||||
logger.info(f"Started HTTP stream reader thread for {self.url}")
|
||||
return self.pipe_read
|
||||
|
||||
def _read_stream(self):
|
||||
"""Thread worker that reads HTTP stream and writes to pipe"""
|
||||
try:
|
||||
# Build headers
|
||||
headers = {}
|
||||
if self.user_agent:
|
||||
headers['User-Agent'] = self.user_agent
|
||||
|
||||
logger.info(f"HTTP reader connecting to {self.url}")
|
||||
|
||||
# Create session
|
||||
self.session = requests.Session()
|
||||
|
||||
# Disable retries for faster failure detection
|
||||
adapter = HTTPAdapter(max_retries=0, pool_connections=1, pool_maxsize=1)
|
||||
self.session.mount('http://', adapter)
|
||||
self.session.mount('https://', adapter)
|
||||
|
||||
# Stream the URL
|
||||
self.response = self.session.get(
|
||||
self.url,
|
||||
headers=headers,
|
||||
stream=True,
|
||||
timeout=(5, 30) # 5s connect, 30s read
|
||||
)
|
||||
|
||||
if self.response.status_code != 200:
|
||||
logger.error(f"HTTP {self.response.status_code} from {self.url}")
|
||||
return
|
||||
|
||||
logger.info(f"HTTP reader connected successfully, streaming data...")
|
||||
|
||||
# Stream chunks to pipe
|
||||
chunk_count = 0
|
||||
for chunk in self.response.iter_content(chunk_size=self.chunk_size):
|
||||
if not self.running:
|
||||
break
|
||||
|
||||
if chunk:
|
||||
try:
|
||||
# Write binary data to pipe
|
||||
os.write(self.pipe_write, chunk)
|
||||
chunk_count += 1
|
||||
|
||||
# Log progress periodically
|
||||
if chunk_count % 1000 == 0:
|
||||
logger.debug(f"HTTP reader streamed {chunk_count} chunks")
|
||||
except OSError as e:
|
||||
logger.error(f"Pipe write error: {e}")
|
||||
break
|
||||
|
||||
logger.info("HTTP stream ended")
|
||||
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error(f"HTTP reader request error: {e}")
|
||||
except Exception as e:
|
||||
logger.error(f"HTTP reader unexpected error: {e}", exc_info=True)
|
||||
finally:
|
||||
self.running = False
|
||||
# Close write end of pipe to signal EOF
|
||||
try:
|
||||
if self.pipe_write is not None:
|
||||
os.close(self.pipe_write)
|
||||
self.pipe_write = None
|
||||
except:
|
||||
pass
|
||||
|
||||
def stop(self):
|
||||
"""Stop the HTTP stream reader"""
|
||||
logger.info("Stopping HTTP stream reader")
|
||||
self.running = False
|
||||
|
||||
# Close response
|
||||
if self.response:
|
||||
try:
|
||||
self.response.close()
|
||||
except:
|
||||
pass
|
||||
|
||||
# Close session
|
||||
if self.session:
|
||||
try:
|
||||
self.session.close()
|
||||
except:
|
||||
pass
|
||||
|
||||
# Close write end of pipe
|
||||
if self.pipe_write is not None:
|
||||
try:
|
||||
os.close(self.pipe_write)
|
||||
self.pipe_write = None
|
||||
except:
|
||||
pass
|
||||
|
||||
# Wait for thread
|
||||
if self.thread and self.thread.is_alive():
|
||||
self.thread.join(timeout=2.0)
|
||||
|
|
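Because the reader exposes the read end of an os.pipe, callers can consume an HTTP source exactly like a transcoder's stdout, which is the point of sharing the fetch_chunk() path. A hedged usage sketch (the URL, user agent, and read size are placeholders):

```python
# Illustrative use of HTTPStreamReader; integration details are assumptions.
import os

reader = HTTPStreamReader("http://provider.example/stream.ts",
                          user_agent="VLC/3.0.20 LibVLC/3.0.20")
fd = reader.start()                      # file descriptor for the pipe's read end

try:
    while True:
        chunk = os.read(fd, 188 * 64)    # read TS data like any other pipe or process fd
        if not chunk:                    # EOF: the reader thread closed the write end
            break
        # ... hand the chunk to the shared fetch/buffer path ...
finally:
    reader.stop()
    os.close(fd)                         # the caller owns the read end
```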
@ -19,7 +19,7 @@ import gevent # Add gevent import
|
|||
from typing import Dict, Optional, Set
|
||||
from apps.proxy.config import TSConfig as Config
|
||||
from apps.channels.models import Channel, Stream
|
||||
from core.utils import RedisClient
|
||||
from core.utils import RedisClient, log_system_event
|
||||
from redis.exceptions import ConnectionError, TimeoutError
|
||||
from .stream_manager import StreamManager
|
||||
from .stream_buffer import StreamBuffer
|
||||
|
|
@ -194,35 +194,11 @@ class ProxyServer:
|
|||
self.redis_client.delete(disconnect_key)
|
||||
|
||||
elif event_type == EventType.CLIENT_DISCONNECTED:
|
||||
logger.debug(f"Owner received {EventType.CLIENT_DISCONNECTED} event for channel {channel_id}")
|
||||
# Check if any clients remain
|
||||
if channel_id in self.client_managers:
|
||||
# VERIFY REDIS CLIENT COUNT DIRECTLY
|
||||
client_set_key = RedisKeys.clients(channel_id)
|
||||
total = self.redis_client.scard(client_set_key) or 0
|
||||
|
||||
if total == 0:
|
||||
logger.debug(f"No clients left after disconnect event - stopping channel {channel_id}")
|
||||
# Set the disconnect timer for other workers to see
|
||||
disconnect_key = RedisKeys.last_client_disconnect(channel_id)
|
||||
self.redis_client.setex(disconnect_key, 60, str(time.time()))
|
||||
|
||||
# Get configured shutdown delay or default
|
||||
shutdown_delay = ConfigHelper.channel_shutdown_delay()
|
||||
|
||||
if shutdown_delay > 0:
|
||||
logger.info(f"Waiting {shutdown_delay}s before stopping channel...")
|
||||
gevent.sleep(shutdown_delay) # REPLACE: time.sleep(shutdown_delay)
|
||||
|
||||
# Re-check client count before stopping
|
||||
total = self.redis_client.scard(client_set_key) or 0
|
||||
if total > 0:
|
||||
logger.info(f"New clients connected during shutdown delay - aborting shutdown")
|
||||
self.redis_client.delete(disconnect_key)
|
||||
return
|
||||
|
||||
# Stop the channel directly
|
||||
self.stop_channel(channel_id)
|
||||
client_id = data.get("client_id")
|
||||
worker_id = data.get("worker_id")
|
||||
logger.debug(f"Owner received {EventType.CLIENT_DISCONNECTED} event for channel {channel_id}, client {client_id} from worker {worker_id}")
|
||||
# Delegate to dedicated method
|
||||
self.handle_client_disconnect(channel_id)
|
||||
|
||||
|
||||
elif event_type == EventType.STREAM_SWITCH:
|
||||
|
|
@ -495,17 +471,18 @@ class ProxyServer:
|
|||
)
|
||||
return True
|
||||
|
||||
# Create buffer and client manager instances
|
||||
buffer = StreamBuffer(channel_id, redis_client=self.redis_client)
|
||||
client_manager = ClientManager(
|
||||
channel_id,
|
||||
redis_client=self.redis_client,
|
||||
worker_id=self.worker_id
|
||||
)
|
||||
# Create buffer and client manager instances (or reuse if they exist)
|
||||
if channel_id not in self.stream_buffers:
|
||||
buffer = StreamBuffer(channel_id, redis_client=self.redis_client)
|
||||
self.stream_buffers[channel_id] = buffer
|
||||
|
||||
# Store in local tracking
|
||||
self.stream_buffers[channel_id] = buffer
|
||||
self.client_managers[channel_id] = client_manager
|
||||
if channel_id not in self.client_managers:
|
||||
client_manager = ClientManager(
|
||||
channel_id,
|
||||
redis_client=self.redis_client,
|
||||
worker_id=self.worker_id
|
||||
)
|
||||
self.client_managers[channel_id] = client_manager
|
||||
|
||||
# IMPROVED: Set initializing state in Redis BEFORE any other operations
|
||||
if self.redis_client:
|
||||
|
|
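Guarding the buffer and client-manager creation with "if channel_id not in ..." in the hunk above makes channel initialization idempotent: a second client joining the same channel reuses the existing objects instead of replacing them. A small sketch of that create-if-absent registry idiom (names illustrative):

```python
# Illustrative create-if-absent registry, mirroring the guarded creation above.
stream_buffers = {}

def get_or_create_buffer(channel_id, factory):
    # Only build a new buffer the first time a channel is seen;
    # later callers share the one that existing clients already use.
    if channel_id not in stream_buffers:
        stream_buffers[channel_id] = factory(channel_id)
    return stream_buffers[channel_id]
```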
@ -559,13 +536,15 @@ class ProxyServer:
|
|||
logger.info(f"Channel {channel_id} already owned by worker {current_owner}")
|
||||
logger.info(f"This worker ({self.worker_id}) will read from Redis buffer only")
|
||||
|
||||
# Create buffer but not stream manager
|
||||
buffer = StreamBuffer(channel_id=channel_id, redis_client=self.redis_client)
|
||||
self.stream_buffers[channel_id] = buffer
|
||||
# Create buffer but not stream manager (only if not already exists)
|
||||
if channel_id not in self.stream_buffers:
|
||||
buffer = StreamBuffer(channel_id=channel_id, redis_client=self.redis_client)
|
||||
self.stream_buffers[channel_id] = buffer
|
||||
|
||||
# Create client manager with channel_id and redis_client
|
||||
client_manager = ClientManager(channel_id=channel_id, redis_client=self.redis_client, worker_id=self.worker_id)
|
||||
self.client_managers[channel_id] = client_manager
|
||||
# Create client manager with channel_id and redis_client (only if not already exists)
|
||||
if channel_id not in self.client_managers:
|
||||
client_manager = ClientManager(channel_id=channel_id, redis_client=self.redis_client, worker_id=self.worker_id)
|
||||
self.client_managers[channel_id] = client_manager
|
||||
|
||||
return True
|
||||
|
||||
|
|
@ -580,13 +559,15 @@ class ProxyServer:
|
|||
# Another worker just acquired ownership
|
||||
logger.info(f"Another worker just acquired ownership of channel {channel_id}")
|
||||
|
||||
# Create buffer but not stream manager
|
||||
buffer = StreamBuffer(channel_id=channel_id, redis_client=self.redis_client)
|
||||
self.stream_buffers[channel_id] = buffer
|
||||
# Create buffer but not stream manager (only if not already exists)
|
||||
if channel_id not in self.stream_buffers:
|
||||
buffer = StreamBuffer(channel_id=channel_id, redis_client=self.redis_client)
|
||||
self.stream_buffers[channel_id] = buffer
|
||||
|
||||
# Create client manager with channel_id and redis_client
|
||||
client_manager = ClientManager(channel_id=channel_id, redis_client=self.redis_client, worker_id=self.worker_id)
|
||||
self.client_managers[channel_id] = client_manager
|
||||
# Create client manager with channel_id and redis_client (only if not already exists)
|
||||
if channel_id not in self.client_managers:
|
||||
client_manager = ClientManager(channel_id=channel_id, redis_client=self.redis_client, worker_id=self.worker_id)
|
||||
self.client_managers[channel_id] = client_manager
|
||||
|
||||
return True
|
||||
|
||||
|
|
@@ -641,13 +622,37 @@ class ProxyServer:
             logger.info(f"Created StreamManager for channel {channel_id} with stream ID {channel_stream_id}")
             self.stream_managers[channel_id] = stream_manager

-            # Create client manager with channel_id, redis_client AND worker_id
-            client_manager = ClientManager(
-                channel_id=channel_id,
-                redis_client=self.redis_client,
-                worker_id=self.worker_id
-            )
-            self.client_managers[channel_id] = client_manager
+            # Log channel start event
+            try:
+                channel_obj = Channel.objects.get(uuid=channel_id)
+
+                # Get stream name if stream_id is available
+                stream_name = None
+                if channel_stream_id:
+                    try:
+                        stream_obj = Stream.objects.get(id=channel_stream_id)
+                        stream_name = stream_obj.name
+                    except Exception:
+                        pass
+
+                log_system_event(
+                    'channel_start',
+                    channel_id=channel_id,
+                    channel_name=channel_obj.name,
+                    stream_name=stream_name,
+                    stream_id=channel_stream_id
+                )
+            except Exception as e:
+                logger.error(f"Could not log channel start event: {e}")
+
+            # Create client manager with channel_id, redis_client AND worker_id (only if not already exists)
+            if channel_id not in self.client_managers:
+                client_manager = ClientManager(
+                    channel_id=channel_id,
+                    redis_client=self.redis_client,
+                    worker_id=self.worker_id
+                )
+                self.client_managers[channel_id] = client_manager

             # Start stream manager thread only for the owner
             thread = threading.Thread(target=stream_manager.run, daemon=True)
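`log_system_event` (imported from `core.utils`) is called the same way throughout these hunks: a positional event type plus arbitrary keyword context, always wrapped so a logging failure can never interrupt streaming. Its implementation is not part of this diff; a plausible shape, purely for illustration, would be:

```python
# Hypothetical sketch only - the real core.utils.log_system_event is not shown in this diff.
import logging

logger = logging.getLogger("system_events")

def log_system_event(event_type, **context):
    """Record an internal application event with structured context."""
    try:
        # The real helper presumably persists the event (e.g. to a Django model)
        # so the System Events viewer can filter it later; here we just log it.
        logger.info("event=%s context=%s", event_type, context)
    except Exception:
        # Event logging must never break the streaming path.
        pass
```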
@@ -697,9 +702,10 @@ class ProxyServer:
                     state = metadata.get(b'state', b'unknown').decode('utf-8')
                     owner = metadata.get(b'owner', b'').decode('utf-8')

-                    # States that indicate the channel is running properly
+                    # States that indicate the channel is running properly or shutting down
                     valid_states = [ChannelState.ACTIVE, ChannelState.WAITING_FOR_CLIENTS,
-                                    ChannelState.CONNECTING, ChannelState.BUFFERING, ChannelState.INITIALIZING]
+                                    ChannelState.CONNECTING, ChannelState.BUFFERING, ChannelState.INITIALIZING,
+                                    ChannelState.STOPPING]

                     # If the channel is in a valid state, check if the owner is still active
                     if state in valid_states:
@@ -712,12 +718,24 @@ class ProxyServer:
                     else:
                         # This is a zombie channel - owner is gone but metadata still exists
                         logger.warning(f"Detected zombie channel {channel_id} - owner {owner} is no longer active")

+                        # Check if there are any clients connected
+                        client_set_key = RedisKeys.clients(channel_id)
+                        client_count = self.redis_client.scard(client_set_key) or 0
+
+                        if client_count > 0:
+                            logger.warning(f"Zombie channel {channel_id} has {client_count} clients - attempting ownership takeover")
+                            # Could potentially take ownership here in the future
+                            # For now, just clean it up to be safe
+                        else:
+                            logger.warning(f"Zombie channel {channel_id} has no clients - cleaning up")
+
                         self._clean_zombie_channel(channel_id, metadata)
                         return False
-                elif state in [ChannelState.STOPPING, ChannelState.STOPPED, ChannelState.ERROR]:
-                    # These states indicate the channel should be reinitialized
-                    logger.info(f"Channel {channel_id} exists but in terminal state: {state}")
-                    return True
+                elif state in [ChannelState.STOPPED, ChannelState.ERROR]:
+                    # These terminal states indicate the channel should be cleaned up and reinitialized
+                    logger.info(f"Channel {channel_id} in terminal state {state} - returning False to trigger cleanup")
+                    return False
                 else:
                     # Unknown or initializing state, check how long it's been in this state
                     if b'state_changed_at' in metadata:
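The zombie check hinges on whether the owning worker still reports a heartbeat in Redis. A condensed sketch of that liveness test (key names follow the `ts_proxy:` layout used elsewhere in this diff, written out literally here instead of through `RedisKeys`):

```python
# Illustrative sketch: decide whether a channel's owner is still alive.
# Assumes a redis-py client and the ts_proxy key layout shown in this diff.
def is_zombie_channel(redis_client, channel_id):
    metadata = redis_client.hgetall(f"ts_proxy:channel:{channel_id}:metadata")
    if not metadata:
        return True  # no metadata at all is treated as already dead

    owner = metadata.get(b'owner', b'').decode('utf-8')
    owner_alive = bool(owner) and redis_client.exists(f"ts_proxy:worker:{owner}:heartbeat")

    client_count = redis_client.scard(f"ts_proxy:channel:{channel_id}:clients") or 0
    # Owner gone and nobody watching: safe to clean up. Owner gone with clients
    # attached is why the real code distinguishes cleanup from takeover.
    return not owner_alive and client_count == 0
```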
@@ -781,6 +799,44 @@ class ProxyServer:
             logger.error(f"Error cleaning zombie channel {channel_id}: {e}", exc_info=True)
             return False

+    def handle_client_disconnect(self, channel_id):
+        """
+        Handle client disconnect event - check if channel should shut down.
+        Can be called directly by owner or via PubSub from non-owner workers.
+        """
+        if channel_id not in self.client_managers:
+            return
+
+        try:
+            # VERIFY REDIS CLIENT COUNT DIRECTLY
+            client_set_key = RedisKeys.clients(channel_id)
+            total = self.redis_client.scard(client_set_key) or 0
+
+            if total == 0:
+                logger.debug(f"No clients left after disconnect event - stopping channel {channel_id}")
+                # Set the disconnect timer for other workers to see
+                disconnect_key = RedisKeys.last_client_disconnect(channel_id)
+                self.redis_client.setex(disconnect_key, 60, str(time.time()))
+
+                # Get configured shutdown delay or default
+                shutdown_delay = ConfigHelper.channel_shutdown_delay()
+
+                if shutdown_delay > 0:
+                    logger.info(f"Waiting {shutdown_delay}s before stopping channel...")
+                    gevent.sleep(shutdown_delay)
+
+                    # Re-check client count before stopping
+                    total = self.redis_client.scard(client_set_key) or 0
+                    if total > 0:
+                        logger.info(f"New clients connected during shutdown delay - aborting shutdown")
+                        self.redis_client.delete(disconnect_key)
+                        return
+
+                # Stop the channel directly
+                self.stop_channel(channel_id)
+        except Exception as e:
+            logger.error(f"Error handling client disconnect for channel {channel_id}: {e}")
+
     def stop_channel(self, channel_id):
         """Stop a channel with proper ownership handling"""
         try:
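The docstring notes that non-owner workers reach this method through PubSub. The exact event payload is not part of this diff; the publish side would look roughly like the existing `events_channel` usage, for example:

```python
# Hypothetical publish side for a non-owner worker. The event name and payload
# fields are assumptions; only the events_channel key helper is taken from this diff.
import json
import time

def notify_client_disconnect(redis_client, channel_id, client_id):
    event = {
        "event": "client_disconnected",
        "channel_id": channel_id,
        "client_id": client_id,
        "ts": time.time(),
    }
    redis_client.publish(RedisKeys.events_channel(channel_id), json.dumps(event))
```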
@@ -828,6 +884,41 @@ class ProxyServer:
|
|||
self.release_ownership(channel_id)
|
||||
logger.info(f"Released ownership of channel {channel_id}")
|
||||
|
||||
# Log channel stop event (after cleanup, before releasing ownership section ends)
|
||||
try:
|
||||
channel_obj = Channel.objects.get(uuid=channel_id)
|
||||
|
||||
# Calculate runtime and get total bytes from metadata
|
||||
runtime = None
|
||||
total_bytes = None
|
||||
if self.redis_client:
|
||||
metadata_key = RedisKeys.channel_metadata(channel_id)
|
||||
metadata = self.redis_client.hgetall(metadata_key)
|
||||
if metadata:
|
||||
# Calculate runtime from init_time
|
||||
if b'init_time' in metadata:
|
||||
try:
|
||||
init_time = float(metadata[b'init_time'].decode('utf-8'))
|
||||
runtime = round(time.time() - init_time, 2)
|
||||
except Exception:
|
||||
pass
|
||||
# Get total bytes transferred
|
||||
if b'total_bytes' in metadata:
|
||||
try:
|
||||
total_bytes = int(metadata[b'total_bytes'].decode('utf-8'))
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
log_system_event(
|
||||
'channel_stop',
|
||||
channel_id=channel_id,
|
||||
channel_name=channel_obj.name,
|
||||
runtime=runtime,
|
||||
total_bytes=total_bytes
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log channel stop event: {e}")
|
||||
|
||||
# Always clean up local resources - WITH SAFE CHECKS
|
||||
if channel_id in self.stream_managers:
|
||||
del self.stream_managers[channel_id]
|
||||
|
|
@@ -855,6 +946,10 @@ class ProxyServer:
|
|||
# Clean up client manager - SAFE CHECK HERE TOO
|
||||
if channel_id in self.client_managers:
|
||||
try:
|
||||
client_manager = self.client_managers[channel_id]
|
||||
# Stop the heartbeat thread before deleting
|
||||
if hasattr(client_manager, 'stop'):
|
||||
client_manager.stop()
|
||||
del self.client_managers[channel_id]
|
||||
logger.info(f"Removed client manager for channel {channel_id}")
|
||||
except KeyError:
|
||||
|
|
@@ -929,6 +1024,15 @@ class ProxyServer:
|
|||
if channel_id in self.client_managers:
|
||||
client_manager = self.client_managers[channel_id]
|
||||
total_clients = client_manager.get_total_client_count()
|
||||
else:
|
||||
# This can happen during reconnection attempts or crashes
|
||||
# Check Redis directly for any connected clients
|
||||
if self.redis_client:
|
||||
client_set_key = RedisKeys.clients(channel_id)
|
||||
total_clients = self.redis_client.scard(client_set_key) or 0
|
||||
|
||||
if total_clients == 0:
|
||||
logger.warning(f"Channel {channel_id} is missing client_manager but we're the owner with 0 clients - will trigger cleanup")
|
||||
|
||||
# Log client count periodically
|
||||
if time.time() % 30 < 1: # Every ~30 seconds
|
||||
|
|
@@ -936,7 +1040,14 @@ class ProxyServer:
|
|||
|
||||
# If in connecting or waiting_for_clients state, check grace period
|
||||
if channel_state in [ChannelState.CONNECTING, ChannelState.WAITING_FOR_CLIENTS]:
|
||||
# Get connection ready time from metadata
|
||||
# Check if channel is already stopping
|
||||
if self.redis_client:
|
||||
stop_key = RedisKeys.channel_stopping(channel_id)
|
||||
if self.redis_client.exists(stop_key):
|
||||
logger.debug(f"Channel {channel_id} is already stopping - skipping monitor shutdown")
|
||||
continue
|
||||
|
||||
# Get connection_ready_time from metadata (indicates if channel reached ready state)
|
||||
connection_ready_time = None
|
||||
if metadata and b'connection_ready_time' in metadata:
|
||||
try:
|
||||
|
|
@@ -944,17 +1055,60 @@ class ProxyServer:
|
|||
except (ValueError, TypeError):
|
||||
pass
|
||||
|
||||
# If still connecting, give it more time
|
||||
if channel_state == ChannelState.CONNECTING:
|
||||
logger.debug(f"Channel {channel_id} still connecting - not checking for clients yet")
|
||||
continue
|
||||
if total_clients == 0:
|
||||
# Check if we have a connection_attempt timestamp (set when CONNECTING starts)
|
||||
connection_attempt_time = None
|
||||
attempt_key = RedisKeys.connection_attempt(channel_id)
|
||||
if self.redis_client:
|
||||
attempt_value = self.redis_client.get(attempt_key)
|
||||
if attempt_value:
|
||||
try:
|
||||
connection_attempt_time = float(attempt_value.decode('utf-8'))
|
||||
except (ValueError, TypeError):
|
||||
pass
|
||||
|
||||
# If waiting for clients, check grace period
|
||||
if connection_ready_time:
|
||||
# Also get init time as a fallback
|
||||
init_time = None
|
||||
if metadata and b'init_time' in metadata:
|
||||
try:
|
||||
init_time = float(metadata[b'init_time'].decode('utf-8'))
|
||||
except (ValueError, TypeError):
|
||||
pass
|
||||
|
||||
# Use whichever timestamp we have (prefer connection_attempt as it's more recent)
|
||||
start_time = connection_attempt_time or init_time
|
||||
|
||||
if start_time:
|
||||
# Check which timeout to apply based on channel lifecycle
|
||||
if connection_ready_time:
|
||||
# Already reached ready - use shutdown_delay
|
||||
time_since_ready = time.time() - connection_ready_time
|
||||
shutdown_delay = ConfigHelper.channel_shutdown_delay()
|
||||
|
||||
if time_since_ready > shutdown_delay:
|
||||
logger.warning(
|
||||
f"Channel {channel_id} in {channel_state} state with 0 clients for {time_since_ready:.1f}s "
|
||||
f"(after reaching ready, shutdown_delay: {shutdown_delay}s) - stopping channel"
|
||||
)
|
||||
self.stop_channel(channel_id)
|
||||
continue
|
||||
else:
|
||||
# Never reached ready - use grace_period timeout
|
||||
time_since_start = time.time() - start_time
|
||||
connecting_timeout = ConfigHelper.channel_init_grace_period()
|
||||
|
||||
if time_since_start > connecting_timeout:
|
||||
logger.warning(
|
||||
f"Channel {channel_id} stuck in {channel_state} state for {time_since_start:.1f}s "
|
||||
f"with no clients (timeout: {connecting_timeout}s) - stopping channel due to upstream issues"
|
||||
)
|
||||
self.stop_channel(channel_id)
|
||||
continue
|
||||
elif connection_ready_time:
|
||||
# We have clients now, but check grace period for state transition
|
||||
grace_period = ConfigHelper.channel_init_grace_period()
|
||||
time_since_ready = time.time() - connection_ready_time
|
||||
|
||||
# Add this debug log
|
||||
logger.debug(f"GRACE PERIOD CHECK: Channel {channel_id} in {channel_state} state, "
|
||||
f"time_since_ready={time_since_ready:.1f}s, grace_period={grace_period}s, "
|
||||
f"total_clients={total_clients}")
|
||||
|
|
@@ -963,16 +1117,9 @@ class ProxyServer:
|
|||
# Still within grace period
|
||||
logger.debug(f"Channel {channel_id} in grace period - {time_since_ready:.1f}s of {grace_period}s elapsed")
|
||||
continue
|
||||
elif total_clients == 0:
|
||||
# Grace period expired with no clients
|
||||
logger.info(f"Grace period expired ({time_since_ready:.1f}s > {grace_period}s) with no clients - stopping channel {channel_id}")
|
||||
self.stop_channel(channel_id)
|
||||
else:
|
||||
# Grace period expired but we have clients - mark channel as active
|
||||
# Grace period expired with clients - mark channel as active
|
||||
logger.info(f"Grace period expired with {total_clients} clients - marking channel {channel_id} as active")
|
||||
old_state = "unknown"
|
||||
if metadata and b'state' in metadata:
|
||||
old_state = metadata[b'state'].decode('utf-8')
|
||||
if self.update_channel_state(channel_id, ChannelState.ACTIVE, {
|
||||
"grace_period_ended_at": str(time.time()),
|
||||
"clients_at_activation": str(total_clients)
|
||||
|
|
@@ -980,6 +1127,13 @@ class ProxyServer:
|
|||
logger.info(f"Channel {channel_id} activated with {total_clients} clients after grace period")
|
||||
# If active and no clients, start normal shutdown procedure
|
||||
elif channel_state not in [ChannelState.CONNECTING, ChannelState.WAITING_FOR_CLIENTS] and total_clients == 0:
|
||||
# Check if channel is already stopping
|
||||
if self.redis_client:
|
||||
stop_key = RedisKeys.channel_stopping(channel_id)
|
||||
if self.redis_client.exists(stop_key):
|
||||
logger.debug(f"Channel {channel_id} is already stopping - skipping monitor shutdown")
|
||||
continue
|
||||
|
||||
# Check if there's a pending no-clients timeout
|
||||
disconnect_key = RedisKeys.last_client_disconnect(channel_id)
|
||||
disconnect_time = None
|
||||
|
|
@@ -1039,14 +1193,30 @@ class ProxyServer:
|
|||
continue
|
||||
|
||||
# Check for local client count - if zero, clean up our local resources
|
||||
if self.client_managers[channel_id].get_client_count() == 0:
|
||||
# We're not the owner, and we have no local clients - clean up our resources
|
||||
logger.debug(f"Non-owner cleanup: Channel {channel_id} has no local clients, cleaning up local resources")
|
||||
if channel_id in self.client_managers:
|
||||
if self.client_managers[channel_id].get_client_count() == 0:
|
||||
# We're not the owner, and we have no local clients - clean up our resources
|
||||
logger.debug(f"Non-owner cleanup: Channel {channel_id} has no local clients, cleaning up local resources")
|
||||
self._cleanup_local_resources(channel_id)
|
||||
else:
|
||||
# This shouldn't happen, but clean up anyway
|
||||
logger.warning(f"Non-owner cleanup: Channel {channel_id} has no client_manager entry, cleaning up local resources")
|
||||
self._cleanup_local_resources(channel_id)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in cleanup thread: {e}", exc_info=True)
|
||||
|
||||
# Periodically check for orphaned channels (every 30 seconds)
|
||||
if hasattr(self, '_last_orphan_check'):
|
||||
if time.time() - self._last_orphan_check > 30:
|
||||
try:
|
||||
self._check_orphaned_metadata()
|
||||
self._last_orphan_check = time.time()
|
||||
except Exception as orphan_error:
|
||||
logger.error(f"Error checking orphaned metadata: {orphan_error}", exc_info=True)
|
||||
else:
|
||||
self._last_orphan_check = time.time()
|
||||
|
||||
gevent.sleep(ConfigHelper.cleanup_check_interval()) # REPLACE: time.sleep(ConfigHelper.cleanup_check_interval())
|
||||
|
||||
thread = threading.Thread(target=cleanup_task, daemon=True)
|
||||
|
|
@@ -1068,10 +1238,6 @@ class ProxyServer:
|
|||
try:
|
||||
channel_id = key.decode('utf-8').split(':')[2]
|
||||
|
||||
# Skip channels we already have locally
|
||||
if channel_id in self.stream_buffers:
|
||||
continue
|
||||
|
||||
# Check if this channel has an owner
|
||||
owner = self.get_channel_owner(channel_id)
|
||||
|
||||
|
|
@@ -1086,13 +1252,84 @@ class ProxyServer:
|
|||
else:
|
||||
# Orphaned channel with no clients - clean it up
|
||||
logger.info(f"Cleaning up orphaned channel {channel_id}")
|
||||
self._clean_redis_keys(channel_id)
|
||||
|
||||
# If we have it locally, stop it properly to clean up processes
|
||||
if channel_id in self.stream_managers or channel_id in self.client_managers:
|
||||
logger.info(f"Orphaned channel {channel_id} is local - calling stop_channel")
|
||||
self.stop_channel(channel_id)
|
||||
else:
|
||||
# Just clean up Redis keys for remote channels
|
||||
self._clean_redis_keys(channel_id)
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing channel key {key}: {e}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error checking orphaned channels: {e}")
|
||||
|
||||
def _check_orphaned_metadata(self):
|
||||
"""
|
||||
Check for metadata entries that have no owner and no clients.
|
||||
This catches zombie channels that weren't cleaned up properly.
|
||||
"""
|
||||
if not self.redis_client:
|
||||
return
|
||||
|
||||
try:
|
||||
# Get all channel metadata keys
|
||||
channel_pattern = "ts_proxy:channel:*:metadata"
|
||||
channel_keys = self.redis_client.keys(channel_pattern)
|
||||
|
||||
for key in channel_keys:
|
||||
try:
|
||||
channel_id = key.decode('utf-8').split(':')[2]
|
||||
|
||||
# Get metadata first
|
||||
metadata = self.redis_client.hgetall(key)
|
||||
if not metadata:
|
||||
# Empty metadata - clean it up
|
||||
logger.warning(f"Found empty metadata for channel {channel_id} - cleaning up")
|
||||
# If we have it locally, stop it properly
|
||||
if channel_id in self.stream_managers or channel_id in self.client_managers:
|
||||
self.stop_channel(channel_id)
|
||||
else:
|
||||
self._clean_redis_keys(channel_id)
|
||||
continue
|
||||
|
||||
# Get owner
|
||||
owner = metadata.get(b'owner', b'').decode('utf-8') if b'owner' in metadata else ''
|
||||
|
||||
# Check if owner is still alive
|
||||
owner_alive = False
|
||||
if owner:
|
||||
owner_heartbeat_key = f"ts_proxy:worker:{owner}:heartbeat"
|
||||
owner_alive = self.redis_client.exists(owner_heartbeat_key)
|
||||
|
||||
# Check client count
|
||||
client_set_key = RedisKeys.clients(channel_id)
|
||||
client_count = self.redis_client.scard(client_set_key) or 0
|
||||
|
||||
# If no owner and no clients, clean it up
|
||||
if not owner_alive and client_count == 0:
|
||||
state = metadata.get(b'state', b'unknown').decode('utf-8') if b'state' in metadata else 'unknown'
|
||||
logger.warning(f"Found orphaned metadata for channel {channel_id} (state: {state}, owner: {owner}, clients: {client_count}) - cleaning up")
|
||||
|
||||
# If we have it locally, stop it properly to clean up transcode/proxy processes
|
||||
if channel_id in self.stream_managers or channel_id in self.client_managers:
|
||||
logger.info(f"Channel {channel_id} is local - calling stop_channel to clean up processes")
|
||||
self.stop_channel(channel_id)
|
||||
else:
|
||||
# Just clean up Redis keys for remote channels
|
||||
self._clean_redis_keys(channel_id)
|
||||
elif not owner_alive and client_count > 0:
|
||||
# Owner is gone but clients remain - just log for now
|
||||
logger.warning(f"Found orphaned channel {channel_id} with {client_count} clients but no owner - may need ownership takeover")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing metadata key {key}: {e}", exc_info=True)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error checking orphaned metadata: {e}", exc_info=True)
|
||||
|
||||
def _clean_redis_keys(self, channel_id):
|
||||
"""Clean up all Redis keys for a channel more efficiently"""
|
||||
# Release the channel, stream, and profile keys from the channel
|
||||
|
|
|
|||
|
|
@@ -14,6 +14,7 @@ from ..server import ProxyServer
|
|||
from ..redis_keys import RedisKeys
|
||||
from ..constants import EventType, ChannelState, ChannelMetadataField
|
||||
from ..url_utils import get_stream_info_for_switch
|
||||
from core.utils import log_system_event
|
||||
|
||||
logger = logging.getLogger("ts_proxy")
|
||||
|
||||
|
|
@@ -597,6 +598,8 @@ class ChannelService:
|
|||
@staticmethod
|
||||
def _update_stream_stats_in_db(stream_id, **stats):
|
||||
"""Update stream stats in database"""
|
||||
from django.db import connection
|
||||
|
||||
try:
|
||||
from apps.channels.models import Stream
|
||||
from django.utils import timezone
|
||||
|
|
@@ -623,6 +626,13 @@ class ChannelService:
|
|||
logger.error(f"Error updating stream stats in database for stream {stream_id}: {e}")
|
||||
return False
|
||||
|
||||
finally:
|
||||
# Always close database connection after update
|
||||
try:
|
||||
connection.close()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Helper methods for Redis operations
|
||||
|
||||
@staticmethod
|
||||
|
|
@@ -691,6 +701,7 @@ class ChannelService:
|
|||
RedisKeys.events_channel(channel_id),
|
||||
json.dumps(switch_request)
|
||||
)
|
||||
|
||||
return True
|
||||
|
||||
@staticmethod
|
||||
|
|
|
|||
|
|
@@ -303,6 +303,14 @@ class StreamBuffer:
             # Retrieve chunks
             chunks = self.get_chunks_exact(client_index, chunk_count)

+            # Check if we got significantly fewer chunks than expected (likely due to expiration)
+            # Only check if we expected multiple chunks and got none or very few
+            if chunk_count > 3 and len(chunks) == 0 and chunks_behind > 10:
+                # Chunks are missing - likely expired from Redis
+                # Return empty list to signal client should skip forward
+                logger.debug(f"Chunks missing for client at index {client_index}, buffer at {self.index} ({chunks_behind} behind)")
+                return [], client_index
+
             # Check total size
             total_size = sum(len(c) for c in chunks)

@@ -316,7 +324,7 @@ class StreamBuffer:
             additional_size = sum(len(c) for c in more_chunks)
             if total_size + additional_size <= MAX_SIZE:
                 chunks.extend(more_chunks)
-                chunk_count += additional
+                chunk_count += len(more_chunks)  # Fixed: count actual additional chunks retrieved

         return chunks, client_index + chunk_count
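Two related fixes here: the returned index now advances by the number of chunks actually retrieved rather than the number requested, and an empty result signals the consumer that its chunks have expired so it should jump forward instead of crawling through missing indices. The consumer-side reaction, stripped down (the 50-chunk threshold and initial-behind offset are the values used in the StreamGenerator hunk further below):

```python
# Sketch of the consumer-side reaction to expired chunks: reposition a few
# chunks behind the live buffer head instead of iterating over gaps.
def reposition_if_expired(local_index, buffer_index, initial_behind=10):
    chunks_behind = buffer_index - local_index
    if chunks_behind > 50:
        return max(local_index, buffer_index - initial_behind)
    return local_index
```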
@@ -8,6 +8,8 @@ import logging
|
|||
import threading
|
||||
import gevent # Add this import at the top of your file
|
||||
from apps.proxy.config import TSConfig as Config
|
||||
from apps.channels.models import Channel
|
||||
from core.utils import log_system_event
|
||||
from .server import ProxyServer
|
||||
from .utils import create_ts_packet, get_logger
|
||||
from .redis_keys import RedisKeys
|
||||
|
|
@@ -52,6 +54,10 @@ class StreamGenerator:
|
|||
self.last_stats_bytes = 0
|
||||
self.current_rate = 0.0
|
||||
|
||||
# TTL refresh tracking
|
||||
self.last_ttl_refresh = time.time()
|
||||
self.ttl_refresh_interval = 3 # Refresh TTL every 3 seconds of active streaming
|
||||
|
||||
def generate(self):
|
||||
"""
|
||||
Generator function that produces the stream content for the client.
|
||||
|
|
@@ -84,6 +90,20 @@ class StreamGenerator:
|
|||
if not self._setup_streaming():
|
||||
return
|
||||
|
||||
# Log client connect event
|
||||
try:
|
||||
channel_obj = Channel.objects.get(uuid=self.channel_id)
|
||||
log_system_event(
|
||||
'client_connect',
|
||||
channel_id=self.channel_id,
|
||||
channel_name=channel_obj.name,
|
||||
client_ip=self.client_ip,
|
||||
client_id=self.client_id,
|
||||
user_agent=self.client_user_agent[:100] if self.client_user_agent else None
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log client connect event: {e}")
|
||||
|
||||
# Main streaming loop
|
||||
for chunk in self._stream_data_generator():
|
||||
yield chunk
|
||||
|
|
@@ -204,6 +224,18 @@ class StreamGenerator:
|
|||
self.empty_reads += 1
|
||||
self.consecutive_empty += 1
|
||||
|
||||
# Check if we're too far behind (chunks expired from Redis)
|
||||
chunks_behind = self.buffer.index - self.local_index
|
||||
if chunks_behind > 50: # If more than 50 chunks behind, jump forward
|
||||
# Calculate new position: stay a few chunks behind current buffer
|
||||
initial_behind = ConfigHelper.initial_behind_chunks()
|
||||
new_index = max(self.local_index, self.buffer.index - initial_behind)
|
||||
|
||||
logger.warning(f"[{self.client_id}] Client too far behind ({chunks_behind} chunks), jumping from {self.local_index} to {new_index}")
|
||||
self.local_index = new_index
|
||||
self.consecutive_empty = 0 # Reset since we're repositioning
|
||||
continue # Try again immediately with new position
|
||||
|
||||
if self._should_send_keepalive(self.local_index):
|
||||
keepalive_packet = create_ts_packet('keepalive')
|
||||
logger.debug(f"[{self.client_id}] Sending keepalive packet while waiting at buffer head")
|
||||
|
|
@@ -324,7 +356,20 @@ class StreamGenerator:
                         ChannelMetadataField.STATS_UPDATED_AT: str(current_time)
                     }
                     proxy_server.redis_client.hset(client_key, mapping=stats)
                     # No need to set expiration as client heartbeat will refresh this key

+                    # Refresh TTL periodically while actively streaming
+                    # This provides proof-of-life independent of heartbeat thread
+                    if current_time - self.last_ttl_refresh > self.ttl_refresh_interval:
+                        try:
+                            # Refresh TTL on client key
+                            proxy_server.redis_client.expire(client_key, Config.CLIENT_RECORD_TTL)
+                            # Also refresh the client set TTL
+                            client_set_key = f"ts_proxy:channel:{self.channel_id}:clients"
+                            proxy_server.redis_client.expire(client_set_key, Config.CLIENT_RECORD_TTL)
+                            self.last_ttl_refresh = current_time
+                            logger.debug(f"[{self.client_id}] Refreshed client TTL (active streaming)")
+                        except Exception as ttl_error:
+                            logger.debug(f"[{self.client_id}] Failed to refresh TTL: {ttl_error}")
                 except Exception as e:
                     logger.warning(f"[{self.client_id}] Failed to store stats in Redis: {e}")
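The generator refreshes the client key and the channel's client set every few seconds of active streaming, so the Redis entries only expire when data genuinely stops flowing. Factored out, the pattern is a rate-limited `EXPIRE`; a minimal sketch, with `ttl_seconds` standing in for `Config.CLIENT_RECORD_TTL` and `interval` for `ttl_refresh_interval`:

```python
# Minimal sketch of the rate-limited TTL refresh used above.
import time

class TtlRefresher:
    def __init__(self, redis_client, keys, ttl_seconds, interval=3):
        self.redis_client = redis_client
        self.keys = keys
        self.ttl_seconds = ttl_seconds
        self.interval = interval
        self.last_refresh = time.time()

    def touch(self):
        now = time.time()
        if now - self.last_refresh > self.interval:
            for key in self.keys:
                self.redis_client.expire(key, self.ttl_seconds)
            self.last_refresh = now
```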
|
@@ -410,6 +455,22 @@ class StreamGenerator:
|
|||
total_clients = client_manager.get_total_client_count()
|
||||
logger.info(f"[{self.client_id}] Disconnected after {elapsed:.2f}s (local: {local_clients}, total: {total_clients})")
|
||||
|
||||
# Log client disconnect event
|
||||
try:
|
||||
channel_obj = Channel.objects.get(uuid=self.channel_id)
|
||||
log_system_event(
|
||||
'client_disconnect',
|
||||
channel_id=self.channel_id,
|
||||
channel_name=channel_obj.name,
|
||||
client_ip=self.client_ip,
|
||||
client_id=self.client_id,
|
||||
user_agent=self.client_user_agent[:100] if self.client_user_agent else None,
|
||||
duration=round(elapsed, 2),
|
||||
bytes_sent=self.bytes_sent
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log client disconnect event: {e}")
|
||||
|
||||
# Schedule channel shutdown if no clients left
|
||||
if not stream_released: # Only if we haven't already released the stream
|
||||
self._schedule_channel_shutdown_if_needed(local_clients)
|
||||
|
|
|
|||
|
|
@@ -9,11 +9,14 @@ import subprocess
|
|||
import gevent
|
||||
import re
|
||||
from typing import Optional, List
|
||||
from django.db import connection
|
||||
from django.shortcuts import get_object_or_404
|
||||
from urllib3.exceptions import ReadTimeoutError
|
||||
from apps.proxy.config import TSConfig as Config
|
||||
from apps.channels.models import Channel, Stream
|
||||
from apps.m3u.models import M3UAccount, M3UAccountProfile
|
||||
from core.models import UserAgent, CoreSettings
|
||||
from core.utils import log_system_event
|
||||
from .stream_buffer import StreamBuffer
|
||||
from .utils import detect_stream_type, get_logger
|
||||
from .redis_keys import RedisKeys
|
||||
|
|
@@ -91,11 +94,13 @@ class StreamManager:
|
|||
self.tried_stream_ids.add(self.current_stream_id)
|
||||
logger.info(f"Loaded stream ID {self.current_stream_id} from Redis for channel {buffer.channel_id}")
|
||||
else:
|
||||
logger.warning(f"No stream_id found in Redis for channel {channel_id}")
|
||||
logger.warning(f"No stream_id found in Redis for channel {channel_id}. "
|
||||
f"Stream switching will rely on URL comparison to avoid selecting the same stream.")
|
||||
except Exception as e:
|
||||
logger.warning(f"Error loading stream ID from Redis: {e}")
|
||||
else:
|
||||
logger.warning(f"Unable to get stream ID for channel {channel_id} - stream switching may not work correctly")
|
||||
logger.warning(f"Unable to get stream ID for channel {channel_id}. "
|
||||
f"Stream switching will rely on URL comparison to avoid selecting the same stream.")
|
||||
|
||||
logger.info(f"Initialized stream manager for channel {buffer.channel_id}")
|
||||
|
||||
|
|
@@ -111,6 +116,9 @@ class StreamManager:
|
|||
self.stderr_reader_thread = None
|
||||
self.ffmpeg_input_phase = True # Track if we're still reading input info
|
||||
|
||||
# Add HTTP reader thread property
|
||||
self.http_reader = None
|
||||
|
||||
def _create_session(self):
|
||||
"""Create and configure requests session with optimal settings"""
|
||||
session = requests.Session()
|
||||
|
|
@@ -220,11 +228,12 @@ class StreamManager:
             # Continue with normal flow

             # Check stream type before connecting
-            stream_type = detect_stream_type(self.url)
-            if self.transcode == False and stream_type == StreamType.HLS:
-                logger.info(f"Detected HLS stream: {self.url} for channel {self.channel_id}")
-                logger.info(f"HLS streams will be handled with FFmpeg for now - future version will support HLS natively for channel {self.channel_id}")
-                # Enable transcoding for HLS streams
+            self.stream_type = detect_stream_type(self.url)
+            if self.transcode == False and self.stream_type in (StreamType.HLS, StreamType.RTSP, StreamType.UDP):
+                stream_type_name = "HLS" if self.stream_type == StreamType.HLS else ("RTSP/RTP" if self.stream_type == StreamType.RTSP else "UDP")
+                logger.info(f"Detected {stream_type_name} stream: {self.url} for channel {self.channel_id}")
+                logger.info(f"{stream_type_name} streams require FFmpeg for channel {self.channel_id}")
+                # Enable transcoding for HLS, RTSP/RTP, and UDP streams
                 self.transcode = True
                 # We'll override the stream profile selection with ffmpeg in the transcoding section
                 self.force_ffmpeg = True
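The effect of this change is a single gate in front of the proxy path: anything the raw HTTP proxy cannot serve (HLS playlists, RTSP/RTP, UDP) is rerouted through FFmpeg. As a standalone predicate, with `StreamType` and `detect_stream_type` as referenced in this diff and the function name itself illustrative:

```python
# Illustrative predicate mirroring the gate above.
FFMPEG_ONLY_TYPES = (StreamType.HLS, StreamType.RTSP, StreamType.UDP)

def requires_ffmpeg(url, transcode_already_enabled):
    stream_type = detect_stream_type(url)
    # Only force FFmpeg when no transcoding profile was already chosen;
    # otherwise the configured stream profile stays in charge.
    return (not transcode_already_enabled) and stream_type in FFMPEG_ONLY_TYPES
```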
@@ -252,6 +261,20 @@ class StreamManager:
|
|||
# Store connection start time to measure success duration
|
||||
connection_start_time = time.time()
|
||||
|
||||
# Log reconnection event if this is a retry (not first attempt)
|
||||
if self.retry_count > 0:
|
||||
try:
|
||||
channel_obj = Channel.objects.get(uuid=self.channel_id)
|
||||
log_system_event(
|
||||
'channel_reconnect',
|
||||
channel_id=self.channel_id,
|
||||
channel_name=channel_obj.name,
|
||||
attempt=self.retry_count + 1,
|
||||
max_attempts=self.max_retries
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log reconnection event: {e}")
|
||||
|
||||
# Successfully connected - read stream data until disconnect/error
|
||||
self._process_stream_data()
|
||||
# If we get here, the connection was closed/failed
|
||||
|
|
@@ -281,6 +304,20 @@ class StreamManager:
|
|||
if self.retry_count >= self.max_retries:
|
||||
url_failed = True
|
||||
logger.warning(f"Maximum retry attempts ({self.max_retries}) reached for URL: {self.url} for channel: {self.channel_id}")
|
||||
|
||||
# Log connection error event
|
||||
try:
|
||||
channel_obj = Channel.objects.get(uuid=self.channel_id)
|
||||
log_system_event(
|
||||
'channel_error',
|
||||
channel_id=self.channel_id,
|
||||
channel_name=channel_obj.name,
|
||||
error_type='connection_failed',
|
||||
url=self.url[:100] if self.url else None,
|
||||
attempts=self.max_retries
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log connection error event: {e}")
|
||||
else:
|
||||
# Wait with exponential backoff before retrying
|
||||
timeout = min(.25 * self.retry_count, 3) # Cap at 3 seconds
|
||||
|
|
@@ -294,6 +331,21 @@ class StreamManager:
|
|||
|
||||
if self.retry_count >= self.max_retries:
|
||||
url_failed = True
|
||||
|
||||
# Log connection error event with exception details
|
||||
try:
|
||||
channel_obj = Channel.objects.get(uuid=self.channel_id)
|
||||
log_system_event(
|
||||
'channel_error',
|
||||
channel_id=self.channel_id,
|
||||
channel_name=channel_obj.name,
|
||||
error_type='connection_exception',
|
||||
error_message=str(e)[:200],
|
||||
url=self.url[:100] if self.url else None,
|
||||
attempts=self.max_retries
|
||||
)
|
||||
except Exception as log_error:
|
||||
logger.error(f"Could not log connection error event: {log_error}")
|
||||
else:
|
||||
# Wait with exponential backoff before retrying
|
||||
timeout = min(.25 * self.retry_count, 3) # Cap at 3 seconds
|
||||
|
|
@@ -378,6 +430,12 @@ class StreamManager:
|
|||
except Exception as e:
|
||||
logger.error(f"Failed to update channel state in Redis: {e} for channel {self.channel_id}", exc_info=True)
|
||||
|
||||
# Close database connection for this thread
|
||||
try:
|
||||
connection.close()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
logger.info(f"Stream manager stopped for channel {self.channel_id}")
|
||||
|
||||
def _establish_transcode_connection(self):
|
||||
|
|
@@ -407,7 +465,7 @@ class StreamManager:
|
|||
from core.models import StreamProfile
|
||||
try:
|
||||
stream_profile = StreamProfile.objects.get(name='ffmpeg', locked=True)
|
||||
logger.info("Using FFmpeg stream profile for HLS content")
|
||||
logger.info("Using FFmpeg stream profile for unsupported proxy content (HLS/RTSP/UDP)")
|
||||
except StreamProfile.DoesNotExist:
|
||||
# Fall back to channel's profile if FFmpeg not found
|
||||
stream_profile = channel.get_stream_profile()
|
||||
|
|
@ -417,6 +475,13 @@ class StreamManager:
|
|||
|
||||
# Build and start transcode command
|
||||
self.transcode_cmd = stream_profile.build_command(self.url, self.user_agent)
|
||||
|
||||
# For UDP streams, remove any user_agent parameters from the command
|
||||
if hasattr(self, 'stream_type') and self.stream_type == StreamType.UDP:
|
||||
# Filter out any arguments that contain the user_agent value or related headers
|
||||
self.transcode_cmd = [arg for arg in self.transcode_cmd if self.user_agent not in arg and 'user-agent' not in arg.lower() and 'user_agent' not in arg.lower()]
|
||||
logger.debug(f"Removed user_agent parameters from UDP stream command for channel: {self.channel_id}")
|
||||
|
||||
logger.debug(f"Starting transcode process: {self.transcode_cmd} for channel: {self.channel_id}")
|
||||
|
||||
# Modified to capture stderr instead of discarding it
|
||||
|
|
@ -681,6 +746,19 @@ class StreamManager:
|
|||
# Reset buffering state
|
||||
self.buffering = False
|
||||
self.buffering_start_time = None
|
||||
|
||||
# Log failover event
|
||||
try:
|
||||
channel_obj = Channel.objects.get(uuid=self.channel_id)
|
||||
log_system_event(
|
||||
'channel_failover',
|
||||
channel_id=self.channel_id,
|
||||
channel_name=channel_obj.name,
|
||||
reason='buffering_timeout',
|
||||
duration=buffering_duration
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log failover event: {e}")
|
||||
else:
|
||||
logger.error(f"Failed to switch to next stream for channel {self.channel_id} after buffering timeout")
|
||||
else:
|
||||
|
|
@ -688,6 +766,19 @@ class StreamManager:
|
|||
self.buffering = True
|
||||
self.buffering_start_time = time.time()
|
||||
logger.warning(f"Buffering started for channel {self.channel_id} - speed: {ffmpeg_speed}x")
|
||||
|
||||
# Log system event for buffering
|
||||
try:
|
||||
channel_obj = Channel.objects.get(uuid=self.channel_id)
|
||||
log_system_event(
|
||||
'channel_buffering',
|
||||
channel_id=self.channel_id,
|
||||
channel_name=channel_obj.name,
|
||||
speed=ffmpeg_speed
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log buffering event: {e}")
|
||||
|
||||
# Log buffering warning
|
||||
logger.debug(f"FFmpeg speed on channel {self.channel_id} is below {self.buffering_speed} ({ffmpeg_speed}x) - buffering detected")
|
||||
# Set channel state to buffering
|
||||
|
|
@ -737,9 +828,9 @@ class StreamManager:
|
|||
|
||||
|
||||
def _establish_http_connection(self):
|
||||
"""Establish a direct HTTP connection to the stream"""
|
||||
"""Establish HTTP connection using thread-based reader (same as transcode path)"""
|
||||
try:
|
||||
logger.debug(f"Using TS Proxy to connect to stream: {self.url}")
|
||||
logger.debug(f"Using HTTP streamer thread to connect to stream: {self.url}")
|
||||
|
||||
# Check if we already have active HTTP connections
|
||||
if self.current_response or self.current_session:
|
||||
|
|
@ -756,41 +847,39 @@ class StreamManager:
|
|||
logger.debug(f"Closing existing transcode process before establishing HTTP connection for channel {self.channel_id}")
|
||||
self._close_socket()
|
||||
|
||||
# Create new session for each connection attempt
|
||||
session = self._create_session()
|
||||
self.current_session = session
|
||||
# Use HTTPStreamReader to fetch stream and pipe to a readable file descriptor
|
||||
# This allows us to use the same fetch_chunk() path as transcode
|
||||
from .http_streamer import HTTPStreamReader
|
||||
|
||||
# Stream the URL with proper timeout handling
|
||||
response = session.get(
|
||||
self.url,
|
||||
stream=True,
|
||||
timeout=(10, 60) # 10s connect timeout, 60s read timeout
|
||||
# Create and start the HTTP stream reader
|
||||
self.http_reader = HTTPStreamReader(
|
||||
url=self.url,
|
||||
user_agent=self.user_agent,
|
||||
chunk_size=self.chunk_size
|
||||
)
|
||||
self.current_response = response
|
||||
|
||||
if response.status_code == 200:
|
||||
self.connected = True
|
||||
self.healthy = True
|
||||
logger.info(f"Successfully connected to stream source for channel {self.channel_id}")
|
||||
# Start the reader thread and get the read end of the pipe
|
||||
pipe_fd = self.http_reader.start()
|
||||
|
||||
# Store connection start time for stability tracking
|
||||
self.connection_start_time = time.time()
|
||||
# Wrap the file descriptor in a file object (same as transcode stdout)
|
||||
import os
|
||||
self.socket = os.fdopen(pipe_fd, 'rb', buffering=0)
|
||||
self.connected = True
|
||||
self.healthy = True
|
||||
|
||||
# Set channel state to waiting for clients
|
||||
self._set_waiting_for_clients()
|
||||
logger.info(f"Successfully started HTTP streamer thread for channel {self.channel_id}")
|
||||
|
||||
# Store connection start time for stability tracking
|
||||
self.connection_start_time = time.time()
|
||||
|
||||
# Set channel state to waiting for clients
|
||||
self._set_waiting_for_clients()
|
||||
|
||||
return True
|
||||
|
||||
return True
|
||||
else:
|
||||
logger.error(f"Failed to connect to stream for channel {self.channel_id}: HTTP {response.status_code}")
|
||||
self._close_connection()
|
||||
return False
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error(f"HTTP request error: {e}")
|
||||
self._close_connection()
|
||||
return False
|
||||
except Exception as e:
|
||||
logger.error(f"Error establishing HTTP connection for channel {self.channel_id}: {e}", exc_info=True)
|
||||
self._close_connection()
|
||||
self._close_socket()
|
||||
return False
|
||||
|
||||
def _update_bytes_processed(self, chunk_size):
|
||||
|
|
@ -818,48 +907,19 @@ class StreamManager:
|
|||
logger.error(f"Error updating bytes processed: {e}")
|
||||
|
||||
def _process_stream_data(self):
|
||||
"""Process stream data until disconnect or error"""
|
||||
"""Process stream data until disconnect or error - unified path for both transcode and HTTP"""
|
||||
try:
|
||||
if self.transcode:
|
||||
# Handle transcoded stream data
|
||||
while self.running and self.connected and not self.stop_requested and not self.needs_stream_switch:
|
||||
if self.fetch_chunk():
|
||||
self.last_data_time = time.time()
|
||||
else:
|
||||
if not self.running:
|
||||
break
|
||||
gevent.sleep(0.1) # REPLACE time.sleep(0.1)
|
||||
else:
|
||||
# Handle direct HTTP connection
|
||||
chunk_count = 0
|
||||
try:
|
||||
for chunk in self.current_response.iter_content(chunk_size=self.chunk_size):
|
||||
# Check if we've been asked to stop
|
||||
if self.stop_requested or self.url_switching or self.needs_stream_switch:
|
||||
break
|
||||
|
||||
if chunk:
|
||||
# Track chunk size before adding to buffer
|
||||
chunk_size = len(chunk)
|
||||
self._update_bytes_processed(chunk_size)
|
||||
|
||||
# Add chunk to buffer with TS packet alignment
|
||||
success = self.buffer.add_chunk(chunk)
|
||||
|
||||
if success:
|
||||
self.last_data_time = time.time()
|
||||
chunk_count += 1
|
||||
|
||||
# Update last data timestamp in Redis
|
||||
if hasattr(self.buffer, 'redis_client') and self.buffer.redis_client:
|
||||
last_data_key = RedisKeys.last_data(self.buffer.channel_id)
|
||||
self.buffer.redis_client.set(last_data_key, str(time.time()), ex=60)
|
||||
except (AttributeError, ConnectionError) as e:
|
||||
if self.stop_requested or self.url_switching:
|
||||
logger.debug(f"Expected connection error during shutdown/URL switch for channel {self.channel_id}: {e}")
|
||||
else:
|
||||
logger.error(f"Unexpected stream error for channel {self.channel_id}: {e}")
|
||||
raise
|
||||
# Both transcode and HTTP now use the same subprocess/socket approach
|
||||
# This gives us perfect control: check flags between chunks, timeout just returns False
|
||||
while self.running and self.connected and not self.stop_requested and not self.needs_stream_switch:
|
||||
if self.fetch_chunk():
|
||||
self.last_data_time = time.time()
|
||||
else:
|
||||
# fetch_chunk() returned False - could be timeout, no data, or error
|
||||
if not self.running:
|
||||
break
|
||||
# Brief sleep before retry to avoid tight loop
|
||||
gevent.sleep(0.1)
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing stream data for channel {self.channel_id}: {e}", exc_info=True)
|
||||
|
||||
|
|
@ -948,6 +1008,7 @@ class StreamManager:
|
|||
|
||||
# Import both models for proper resource management
|
||||
from apps.channels.models import Stream, Channel
|
||||
from django.db import connection
|
||||
|
||||
# Update stream profile if we're switching streams
|
||||
if self.current_stream_id and stream_id and self.current_stream_id != stream_id:
|
||||
|
|
@ -965,9 +1026,17 @@ class StreamManager:
|
|||
logger.debug(f"Updated m3u profile for channel {self.channel_id} to use profile from stream {stream_id}")
|
||||
else:
|
||||
logger.warning(f"Failed to update stream profile for channel {self.channel_id}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error updating stream profile for channel {self.channel_id}: {e}")
|
||||
|
||||
finally:
|
||||
# Always close database connection after profile update
|
||||
try:
|
||||
connection.close()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# CRITICAL: Set a flag to prevent immediate reconnection with old URL
|
||||
self.url_switching = True
|
||||
self.url_switch_start_time = time.time()
|
||||
|
|
@ -1005,6 +1074,19 @@ class StreamManager:
|
|||
except Exception as e:
|
||||
logger.warning(f"Failed to reset buffer position: {e}")
|
||||
|
||||
# Log stream switch event
|
||||
try:
|
||||
channel_obj = Channel.objects.get(uuid=self.channel_id)
|
||||
log_system_event(
|
||||
'stream_switch',
|
||||
channel_id=self.channel_id,
|
||||
channel_name=channel_obj.name,
|
||||
new_url=new_url[:100] if new_url else None,
|
||||
stream_id=stream_id
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log stream switch event: {e}")
|
||||
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Error during URL update for channel {self.channel_id}: {e}", exc_info=True)
|
||||
|
|
@ -1123,6 +1205,19 @@ class StreamManager:
|
|||
if connection_result:
|
||||
self.connection_start_time = time.time()
|
||||
logger.info(f"Reconnect successful for channel {self.channel_id}")
|
||||
|
||||
# Log reconnection event
|
||||
try:
|
||||
channel_obj = Channel.objects.get(uuid=self.channel_id)
|
||||
log_system_event(
|
||||
'channel_reconnect',
|
||||
channel_id=self.channel_id,
|
||||
channel_name=channel_obj.name,
|
||||
reason='health_monitor'
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Could not log reconnection event: {e}")
|
||||
|
||||
return True
|
||||
else:
|
||||
logger.warning(f"Reconnect failed for channel {self.channel_id}")
|
||||
|
|
@ -1183,6 +1278,15 @@ class StreamManager:
|
|||
if self.current_response or self.current_session:
|
||||
self._close_connection()
|
||||
|
||||
# Stop HTTP reader thread if it exists
|
||||
if hasattr(self, 'http_reader') and self.http_reader:
|
||||
try:
|
||||
logger.debug(f"Stopping HTTP reader thread for channel {self.channel_id}")
|
||||
self.http_reader.stop()
|
||||
self.http_reader = None
|
||||
except Exception as e:
|
||||
logger.debug(f"Error stopping HTTP reader for channel {self.channel_id}: {e}")
|
||||
|
||||
# Otherwise handle socket and transcode resources
|
||||
if self.socket:
|
||||
try:
|
||||
|
|
@ -1191,25 +1295,17 @@ class StreamManager:
|
|||
logger.debug(f"Error closing socket for channel {self.channel_id}: {e}")
|
||||
pass
|
||||
|
||||
# Enhanced transcode process cleanup with more aggressive termination
|
||||
# Enhanced transcode process cleanup with immediate termination
|
||||
if self.transcode_process:
|
||||
try:
|
||||
# First try polite termination
|
||||
logger.debug(f"Terminating transcode process for channel {self.channel_id}")
|
||||
self.transcode_process.terminate()
|
||||
logger.debug(f"Killing transcode process for channel {self.channel_id}")
|
||||
self.transcode_process.kill()
|
||||
|
||||
# Give it a short time to terminate gracefully
|
||||
# Give it a very short time to die
|
||||
try:
|
||||
self.transcode_process.wait(timeout=1.0)
|
||||
self.transcode_process.wait(timeout=0.5)
|
||||
except subprocess.TimeoutExpired:
|
||||
# If it doesn't terminate quickly, kill it
|
||||
logger.warning(f"Transcode process didn't terminate within timeout, killing forcefully for channel {self.channel_id}")
|
||||
self.transcode_process.kill()
|
||||
|
||||
try:
|
||||
self.transcode_process.wait(timeout=1.0)
|
||||
except subprocess.TimeoutExpired:
|
||||
logger.error(f"Failed to kill transcode process even with force for channel {self.channel_id}")
|
||||
logger.error(f"Failed to kill transcode process even with force for channel {self.channel_id}")
|
||||
except Exception as e:
|
||||
logger.debug(f"Error terminating transcode process for channel {self.channel_id}: {e}")
|
||||
|
||||
|
|
@ -1274,7 +1370,7 @@ class StreamManager:
|
|||
|
||||
try:
|
||||
# Set timeout for chunk reads
|
||||
chunk_timeout = ConfigHelper.get('CHUNK_TIMEOUT', 10) # Default 10 seconds
|
||||
chunk_timeout = ConfigHelper.chunk_timeout() # Use centralized timeout configuration
|
||||
|
||||
try:
|
||||
# Handle different socket types with timeout
|
||||
|
|
@ -1357,7 +1453,17 @@ class StreamManager:
|
|||
# Only update if not already past connecting
|
||||
if not current_state or current_state in [ChannelState.INITIALIZING, ChannelState.CONNECTING]:
|
||||
# NEW CODE: Check if buffer has enough chunks
|
||||
current_buffer_index = getattr(self.buffer, 'index', 0)
|
||||
# IMPORTANT: Read from Redis, not local buffer.index, because in multi-worker setup
|
||||
# each worker has its own StreamBuffer instance with potentially stale local index
|
||||
buffer_index_key = RedisKeys.buffer_index(channel_id)
|
||||
current_buffer_index = 0
|
||||
try:
|
||||
redis_index = redis_client.get(buffer_index_key)
|
||||
if redis_index:
|
||||
current_buffer_index = int(redis_index)
|
||||
except Exception as e:
|
||||
logger.error(f"Error reading buffer index from Redis: {e}")
|
||||
|
||||
initial_chunks_needed = ConfigHelper.initial_behind_chunks()
|
||||
|
||||
if current_buffer_index < initial_chunks_needed:
|
||||
|
|
@ -1405,10 +1511,21 @@ class StreamManager:
|
|||
# Clean up completed timers
|
||||
self._buffer_check_timers = [t for t in self._buffer_check_timers if t.is_alive()]
|
||||
|
||||
if hasattr(self.buffer, 'index') and hasattr(self.buffer, 'channel_id'):
|
||||
current_buffer_index = self.buffer.index
|
||||
initial_chunks_needed = getattr(Config, 'INITIAL_BEHIND_CHUNKS', 10)
|
||||
if hasattr(self.buffer, 'channel_id') and hasattr(self.buffer, 'redis_client'):
|
||||
channel_id = self.buffer.channel_id
|
||||
redis_client = self.buffer.redis_client
|
||||
|
||||
# IMPORTANT: Read from Redis, not local buffer.index
|
||||
buffer_index_key = RedisKeys.buffer_index(channel_id)
|
||||
current_buffer_index = 0
|
||||
try:
|
||||
redis_index = redis_client.get(buffer_index_key)
|
||||
if redis_index:
|
||||
current_buffer_index = int(redis_index)
|
||||
except Exception as e:
|
||||
logger.error(f"Error reading buffer index from Redis: {e}")
|
||||
|
||||
initial_chunks_needed = ConfigHelper.initial_behind_chunks() # Use ConfigHelper for consistency
|
||||
|
||||
if current_buffer_index >= initial_chunks_needed:
|
||||
# We now have enough buffer, call _set_waiting_for_clients again
|
||||
|
|
@ -1433,6 +1550,7 @@ class StreamManager:
|
|||
def _try_next_stream(self):
|
||||
"""
|
||||
Try to switch to the next available stream for this channel.
|
||||
Will iterate through multiple alternate streams if needed to find one with a different URL.
|
||||
|
||||
Returns:
|
||||
bool: True if successfully switched to a new stream, False otherwise
|
||||
|
|
@ -1458,60 +1576,71 @@ class StreamManager:
|
|||
logger.warning(f"All {len(alternate_streams)} alternate streams have been tried for channel {self.channel_id}")
|
||||
return False
|
||||
|
||||
# Get the next stream to try
|
||||
next_stream = untried_streams[0]
|
||||
stream_id = next_stream['stream_id']
|
||||
profile_id = next_stream['profile_id'] # This is the M3U profile ID we need
|
||||
# IMPROVED: Try multiple streams until we find one with a different URL
|
||||
for next_stream in untried_streams:
|
||||
stream_id = next_stream['stream_id']
|
||||
profile_id = next_stream['profile_id'] # This is the M3U profile ID we need
|
||||
|
||||
# Add to tried streams
|
||||
self.tried_stream_ids.add(stream_id)
|
||||
# Add to tried streams
|
||||
self.tried_stream_ids.add(stream_id)
|
||||
|
||||
# Get stream info including URL using the profile_id we already have
|
||||
logger.info(f"Trying next stream ID {stream_id} with profile ID {profile_id} for channel {self.channel_id}")
|
||||
stream_info = get_stream_info_for_switch(self.channel_id, stream_id)
|
||||
# Get stream info including URL using the profile_id we already have
|
||||
logger.info(f"Trying next stream ID {stream_id} with profile ID {profile_id} for channel {self.channel_id}")
|
||||
stream_info = get_stream_info_for_switch(self.channel_id, stream_id)
|
||||
|
||||
if 'error' in stream_info or not stream_info.get('url'):
|
||||
logger.error(f"Error getting info for stream {stream_id} for channel {self.channel_id}: {stream_info.get('error', 'No URL')}")
|
||||
return False
|
||||
if 'error' in stream_info or not stream_info.get('url'):
|
||||
logger.error(f"Error getting info for stream {stream_id} for channel {self.channel_id}: {stream_info.get('error', 'No URL')}")
|
||||
continue # Try next stream instead of giving up
|
||||
|
||||
# Update URL and user agent
|
||||
new_url = stream_info['url']
|
||||
new_user_agent = stream_info['user_agent']
|
||||
new_transcode = stream_info['transcode']
|
||||
# Update URL and user agent
|
||||
new_url = stream_info['url']
|
||||
new_user_agent = stream_info['user_agent']
|
||||
new_transcode = stream_info['transcode']
|
||||
|
||||
logger.info(f"Switching from URL {self.url} to {new_url} for channel {self.channel_id}")
|
||||
# CRITICAL FIX: Check if the new URL is the same as current URL
|
||||
# This can happen when current_stream_id is None and we accidentally select the same stream
|
||||
if new_url == self.url:
|
||||
logger.warning(f"Stream ID {stream_id} generates the same URL as current stream ({new_url}). "
|
||||
f"Skipping this stream and trying next alternative.")
|
||||
continue # Try next stream instead of giving up
|
||||
|
||||
# IMPORTANT: Just update the URL, don't stop the channel or release resources
|
||||
switch_result = self.update_url(new_url, stream_id, profile_id)
|
||||
if not switch_result:
|
||||
logger.error(f"Failed to update URL for stream ID {stream_id} for channel {self.channel_id}")
|
||||
return False
|
||||
logger.info(f"Switching from URL {self.url} to {new_url} for channel {self.channel_id}")
|
||||
|
||||
# Update stream ID tracking
|
||||
self.current_stream_id = stream_id
|
||||
# IMPORTANT: Just update the URL, don't stop the channel or release resources
|
||||
switch_result = self.update_url(new_url, stream_id, profile_id)
|
||||
if not switch_result:
|
||||
logger.error(f"Failed to update URL for stream ID {stream_id} for channel {self.channel_id}")
|
||||
continue # Try next stream
|
||||
|
||||
# Store the new user agent and transcode settings
|
||||
self.user_agent = new_user_agent
|
||||
self.transcode = new_transcode
|
||||
# Update stream ID tracking
|
||||
self.current_stream_id = stream_id
|
||||
|
||||
# Update stream metadata in Redis - use the profile_id we got from get_alternate_streams
|
||||
if hasattr(self.buffer, 'redis_client') and self.buffer.redis_client:
|
||||
metadata_key = RedisKeys.channel_metadata(self.channel_id)
|
||||
self.buffer.redis_client.hset(metadata_key, mapping={
|
||||
ChannelMetadataField.URL: new_url,
|
||||
ChannelMetadataField.USER_AGENT: new_user_agent,
|
||||
ChannelMetadataField.STREAM_PROFILE: stream_info['stream_profile'],
|
||||
ChannelMetadataField.M3U_PROFILE: str(profile_id), # Use the profile_id from get_alternate_streams
|
||||
ChannelMetadataField.STREAM_ID: str(stream_id),
|
||||
ChannelMetadataField.STREAM_SWITCH_TIME: str(time.time()),
|
||||
ChannelMetadataField.STREAM_SWITCH_REASON: "max_retries_exceeded"
|
||||
})
|
||||
# Store the new user agent and transcode settings
|
||||
self.user_agent = new_user_agent
|
||||
self.transcode = new_transcode
|
||||
|
||||
# Log the switch
|
||||
logger.info(f"Stream metadata updated for channel {self.channel_id} to stream ID {stream_id} with M3U profile {profile_id}")
|
||||
# Update stream metadata in Redis - use the profile_id we got from get_alternate_streams
|
||||
if hasattr(self.buffer, 'redis_client') and self.buffer.redis_client:
|
||||
metadata_key = RedisKeys.channel_metadata(self.channel_id)
|
||||
self.buffer.redis_client.hset(metadata_key, mapping={
|
||||
ChannelMetadataField.URL: new_url,
|
||||
ChannelMetadataField.USER_AGENT: new_user_agent,
|
||||
ChannelMetadataField.STREAM_PROFILE: stream_info['stream_profile'],
|
||||
ChannelMetadataField.M3U_PROFILE: str(profile_id), # Use the profile_id from get_alternate_streams
|
||||
ChannelMetadataField.STREAM_ID: str(stream_id),
|
||||
ChannelMetadataField.STREAM_SWITCH_TIME: str(time.time()),
|
||||
ChannelMetadataField.STREAM_SWITCH_REASON: "max_retries_exceeded"
|
||||
})
|
||||
|
||||
logger.info(f"Successfully switched to stream ID {stream_id} with URL {new_url} for channel {self.channel_id}")
|
||||
return True
|
||||
# Log the switch
|
||||
logger.info(f"Stream metadata updated for channel {self.channel_id} to stream ID {stream_id} with M3U profile {profile_id}")
|
||||
|
||||
logger.info(f"Successfully switched to stream ID {stream_id} with URL {new_url} for channel {self.channel_id}")
|
||||
return True
|
||||
|
||||
# If we get here, we tried all streams but none worked
|
||||
logger.error(f"Tried {len(untried_streams)} alternate streams but none were suitable for channel {self.channel_id}")
|
||||
return False
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error trying next stream for channel {self.channel_id}: {e}", exc_info=True)
|
||||
|
|
|
|||
|
|
@ -8,7 +8,7 @@ from typing import Optional, Tuple, List
|
|||
from django.shortcuts import get_object_or_404
|
||||
from apps.channels.models import Channel, Stream
|
||||
from apps.m3u.models import M3UAccount, M3UAccountProfile
|
||||
from core.models import UserAgent, CoreSettings
|
||||
from core.models import UserAgent, CoreSettings, StreamProfile
|
||||
from .utils import get_logger
|
||||
from uuid import UUID
|
||||
import requests
|
||||
|
|
@ -26,16 +26,100 @@ def get_stream_object(id: str):
|
|||
|
||||
def generate_stream_url(channel_id: str) -> Tuple[str, str, bool, Optional[int]]:
|
||||
"""
|
||||
Generate the appropriate stream URL for a channel based on its profile settings.
|
||||
Generate the appropriate stream URL for a channel or stream based on its profile settings.
|
||||
|
||||
Args:
|
||||
channel_id: The UUID of the channel
|
||||
channel_id: The UUID of the channel or stream hash
|
||||
|
||||
Returns:
|
||||
Tuple[str, str, bool, Optional[int]]: (stream_url, user_agent, transcode_flag, profile_id)
|
||||
"""
|
||||
try:
|
||||
channel = get_stream_object(channel_id)
|
||||
channel_or_stream = get_stream_object(channel_id)
|
||||
|
||||
# Handle direct stream preview (custom streams)
|
||||
if isinstance(channel_or_stream, Stream):
|
||||
from core.utils import RedisClient
|
||||
|
||||
stream = channel_or_stream
|
||||
logger.info(f"Previewing stream directly: {stream.id} ({stream.name})")
|
||||
|
||||
# For custom streams, we need to get the M3U account and profile
|
||||
m3u_account = stream.m3u_account
|
||||
if not m3u_account:
|
||||
logger.error(f"Stream {stream.id} has no M3U account")
|
||||
return None, None, False, None
|
||||
|
||||
# Get active profiles for this M3U account
|
||||
m3u_profiles = m3u_account.profiles.filter(is_active=True)
|
||||
default_profile = next((obj for obj in m3u_profiles if obj.is_default), None)
|
||||
|
||||
if not default_profile:
|
||||
logger.error(f"No default active profile found for M3U account {m3u_account.id}")
|
||||
return None, None, False, None
|
||||
|
||||
# Check profiles in order: default first, then others
|
||||
profiles = [default_profile] + [obj for obj in m3u_profiles if not obj.is_default]
|
||||
|
||||
# Try to find an available profile with connection capacity
|
||||
redis_client = RedisClient.get_client()
|
||||
selected_profile = None
|
||||
|
||||
for profile in profiles:
|
||||
logger.info(profile)
|
||||
|
||||
# Check connection availability
|
||||
if redis_client:
|
||||
profile_connections_key = f"profile_connections:{profile.id}"
|
||||
current_connections = int(redis_client.get(profile_connections_key) or 0)
|
||||
|
||||
# Check if profile has available slots (or unlimited connections)
|
||||
if profile.max_streams == 0 or current_connections < profile.max_streams:
|
||||
selected_profile = profile
|
||||
logger.debug(f"Selected profile {profile.id} with {current_connections}/{profile.max_streams} connections for stream preview")
|
||||
break
|
||||
else:
|
||||
logger.debug(f"Profile {profile.id} at max connections: {current_connections}/{profile.max_streams}")
|
||||
else:
|
||||
# No Redis available, use first active profile
|
||||
selected_profile = profile
|
||||
break
|
||||
|
||||
if not selected_profile:
|
||||
logger.error(f"No profiles available with connection capacity for M3U account {m3u_account.id}")
|
||||
return None, None, False, None
|
||||
|
||||
# Get the appropriate user agent
|
||||
stream_user_agent = m3u_account.get_user_agent().user_agent
|
||||
if stream_user_agent is None:
|
||||
stream_user_agent = UserAgent.objects.get(id=CoreSettings.get_default_user_agent_id())
|
||||
logger.debug(f"No user agent found for account, using default: {stream_user_agent}")
|
||||
|
||||
# Get stream URL with the selected profile's URL transformation
|
||||
stream_url = transform_url(stream.url, selected_profile.search_pattern, selected_profile.replace_pattern)
|
||||
|
||||
# Check if the stream has its own stream_profile set, otherwise use default
|
||||
if stream.stream_profile:
|
||||
stream_profile = stream.stream_profile
|
||||
logger.debug(f"Using stream's own stream profile: {stream_profile.name}")
|
||||
else:
|
||||
stream_profile = StreamProfile.objects.get(
|
||||
id=CoreSettings.get_default_stream_profile_id()
|
||||
)
|
||||
logger.debug(f"Using default stream profile: {stream_profile.name}")
|
||||
|
||||
# Check if transcoding is needed
|
||||
if stream_profile is None or stream_profile.is_proxy():
|
||||
transcode = False
|
||||
else:
|
||||
transcode = True
|
||||
|
||||
stream_profile_id = stream_profile.id
|
||||
|
||||
return stream_url, stream_user_agent, transcode, stream_profile_id
|
||||
|
||||
# Handle channel preview (existing logic)
|
||||
channel = channel_or_stream
|
||||
|
||||
# Get stream and profile for this channel
|
||||
# Note: get_stream now returns 3 values (stream_id, profile_id, error_reason)
|
||||
|
|
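The profile selection added above can be read as a small standalone routine. The sketch below assumes the same semantics as the diff (max_streams of 0 means unlimited, connection counts live under the profile_connections:<id> Redis keys); the helper itself is illustrative and not part of the commit:

def pick_available_profile(profiles, redis_client):
    # Callers pass the default profile first, then the remaining active profiles.
    # Return the first profile with spare connection capacity.
    for profile in profiles:
        if redis_client is None:
            return profile  # no Redis available: accept the first active profile
        current = int(redis_client.get(f"profile_connections:{profile.id}") or 0)
        if profile.max_streams == 0 or current < profile.max_streams:
            return profile
    return None  # every profile is at its connection limit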
@ -351,6 +435,9 @@ def validate_stream_url(url, user_agent=None, timeout=(5, 5)):
|
|||
"""
|
||||
Validate if a stream URL is accessible without downloading the full content.
|
||||
|
||||
Note: UDP/RTP/RTSP streams are automatically considered valid as they cannot
|
||||
be validated via HTTP methods.
|
||||
|
||||
Args:
|
||||
url (str): The URL to validate
|
||||
user_agent (str): User agent to use for the request
|
||||
|
|
@ -359,6 +446,12 @@ def validate_stream_url(url, user_agent=None, timeout=(5, 5)):
|
|||
Returns:
|
||||
tuple: (is_valid, final_url, status_code, message)
|
||||
"""
|
||||
# Check if URL uses non-HTTP protocols (UDP/RTP/RTSP)
|
||||
# These cannot be validated via HTTP methods, so we skip validation
|
||||
if url.startswith(('udp://', 'rtp://', 'rtsp://')):
|
||||
logger.info(f"Skipping HTTP validation for non-HTTP protocol: {url}")
|
||||
return True, url, 200, "Non-HTTP protocol (UDP/RTP/RTSP) - validation skipped"
|
||||
|
||||
try:
|
||||
# Create session with proper headers
|
||||
session = requests.Session()
|
||||
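Given the signature documented above, a non-HTTP URL short-circuits before any request is made. A hedged illustration of what a caller would see (hypothetical URL):

is_valid, final_url, status_code, message = validate_stream_url('udp://239.1.1.1:5000')
# is_valid    -> True
# final_url   -> 'udp://239.1.1.1:5000'
# status_code -> 200
# message     -> 'Non-HTTP protocol (UDP/RTP/RTSP) - validation skipped'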
@ -7,19 +7,27 @@ logger = logging.getLogger("ts_proxy")
|
|||
|
||||
def detect_stream_type(url):
|
||||
"""
|
||||
Detect if stream URL is HLS or TS format.
|
||||
Detect if stream URL is HLS, RTSP/RTP, UDP, or TS format.
|
||||
|
||||
Args:
|
||||
url (str): The stream URL to analyze
|
||||
|
||||
Returns:
|
||||
str: 'hls' or 'ts' depending on detected format
|
||||
str: 'hls', 'rtsp', 'udp', or 'ts' depending on detected format
|
||||
"""
|
||||
if not url:
|
||||
return 'unknown'
|
||||
|
||||
url_lower = url.lower()
|
||||
|
||||
# Check for UDP streams (requires FFmpeg)
|
||||
if url_lower.startswith('udp://'):
|
||||
return 'udp'
|
||||
|
||||
# Check for RTSP/RTP streams (requires FFmpeg)
|
||||
if url_lower.startswith('rtsp://') or url_lower.startswith('rtp://'):
|
||||
return 'rtsp'
|
||||
|
||||
# Look for common HLS indicators
|
||||
if (url_lower.endswith('.m3u8') or
|
||||
'.m3u8?' in url_lower or
|
||||
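A few illustrative inputs for the scheme checks above (hypothetical URLs; the final case relies on the .m3u8 indicators shown at the end of this hunk):

assert detect_stream_type('udp://239.0.0.1:1234') == 'udp'
assert detect_stream_type('rtsp://camera.local/live') == 'rtsp'
assert detect_stream_type('rtp://239.0.0.2:5004') == 'rtsp'
assert detect_stream_type('http://example.com/stream/index.m3u8') == 'hls'
assert detect_stream_type('') == 'unknown'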
@ -4,7 +4,7 @@ import time
|
|||
import random
|
||||
import re
|
||||
import pathlib
|
||||
from django.http import StreamingHttpResponse, JsonResponse, HttpResponseRedirect
|
||||
from django.http import StreamingHttpResponse, JsonResponse, HttpResponseRedirect, HttpResponse
|
||||
from django.views.decorators.csrf import csrf_exempt
|
||||
from django.shortcuts import get_object_or_404
|
||||
from apps.proxy.config import TSConfig as Config
|
||||
|
|
@ -84,11 +84,18 @@ def stream_ts(request, channel_id):
|
|||
if state_field in metadata:
|
||||
channel_state = metadata[state_field].decode("utf-8")
|
||||
|
||||
if channel_state:
|
||||
# Channel is being initialized or already active - no need for reinitialization
|
||||
# Active/running states - channel is operational, don't reinitialize
|
||||
if channel_state in [
|
||||
ChannelState.ACTIVE,
|
||||
ChannelState.WAITING_FOR_CLIENTS,
|
||||
ChannelState.BUFFERING,
|
||||
ChannelState.INITIALIZING,
|
||||
ChannelState.CONNECTING,
|
||||
ChannelState.STOPPING,
|
||||
]:
|
||||
needs_initialization = False
|
||||
logger.debug(
|
||||
f"[{client_id}] Channel {channel_id} already in state {channel_state}, skipping initialization"
|
||||
f"[{client_id}] Channel {channel_id} in state {channel_state}, skipping initialization"
|
||||
)
|
||||
|
||||
# Special handling for initializing/connecting states
|
||||
|
|
@ -98,19 +105,34 @@ def stream_ts(request, channel_id):
|
|||
]:
|
||||
channel_initializing = True
|
||||
logger.debug(
|
||||
f"[{client_id}] Channel {channel_id} is still initializing, client will wait for completion"
|
||||
f"[{client_id}] Channel {channel_id} is still initializing, client will wait"
|
||||
)
|
||||
# Terminal states - channel needs cleanup before reinitialization
|
||||
elif channel_state in [
|
||||
ChannelState.ERROR,
|
||||
ChannelState.STOPPED,
|
||||
]:
|
||||
needs_initialization = True
|
||||
logger.info(
|
||||
f"[{client_id}] Channel {channel_id} in terminal state {channel_state}, will reinitialize"
|
||||
)
|
||||
# Unknown/empty state - check if owner is alive
|
||||
else:
|
||||
# Only check for owner if channel is in a valid state
|
||||
owner_field = ChannelMetadataField.OWNER.encode("utf-8")
|
||||
if owner_field in metadata:
|
||||
owner = metadata[owner_field].decode("utf-8")
|
||||
owner_heartbeat_key = f"ts_proxy:worker:{owner}:heartbeat"
|
||||
if proxy_server.redis_client.exists(owner_heartbeat_key):
|
||||
# Owner is still active, so we don't need to reinitialize
|
||||
# Owner is still active with unknown state - don't reinitialize
|
||||
needs_initialization = False
|
||||
logger.debug(
|
||||
f"[{client_id}] Channel {channel_id} has active owner {owner}"
|
||||
f"[{client_id}] Channel {channel_id} has active owner {owner}, skipping init"
|
||||
)
|
||||
else:
|
||||
# Owner dead - needs reinitialization
|
||||
needs_initialization = True
|
||||
logger.warning(
|
||||
f"[{client_id}] Channel {channel_id} owner {owner} is dead, will reinitialize"
|
||||
)
|
||||
|
||||
# Start initialization if needed
|
||||
|
|
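The branching above amounts to a small state machine. A condensed sketch follows (the state names match ChannelState in the diff; the helper itself is only illustrative):

def needs_reinitialization(channel_state, owner_alive):
    operational = {ChannelState.ACTIVE, ChannelState.WAITING_FOR_CLIENTS,
                   ChannelState.BUFFERING, ChannelState.INITIALIZING,
                   ChannelState.CONNECTING, ChannelState.STOPPING}
    terminal = {ChannelState.ERROR, ChannelState.STOPPED}
    if channel_state in operational:
        return False   # channel is running or starting up - leave it alone
    if channel_state in terminal:
        return True    # needs cleanup and a fresh start
    return not owner_alive  # unknown/empty state: reinitialize only if the owner worker is dead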
@ -128,7 +150,7 @@ def stream_ts(request, channel_id):
|
|||
ChannelService.stop_channel(channel_id)
|
||||
|
||||
# Use fixed retry interval and timeout
|
||||
retry_timeout = 1.5 # 1.5 seconds total timeout
|
||||
retry_timeout = 3 # 3 seconds total timeout
|
||||
retry_interval = 0.1 # 100ms between attempts
|
||||
wait_start_time = time.time()
|
||||
|
||||
|
|
@ -138,9 +160,10 @@ def stream_ts(request, channel_id):
|
|||
profile_value = None
|
||||
error_reason = None
|
||||
attempt = 0
|
||||
should_retry = True
|
||||
|
||||
# Try to get a stream with fixed interval retries
|
||||
while time.time() - wait_start_time < retry_timeout:
|
||||
while should_retry and time.time() - wait_start_time < retry_timeout:
|
||||
attempt += 1
|
||||
stream_url, stream_user_agent, transcode, profile_value = (
|
||||
generate_stream_url(channel_id)
|
||||
|
|
@ -152,35 +175,53 @@ def stream_ts(request, channel_id):
|
|||
)
|
||||
break
|
||||
|
||||
# If we failed because there are no streams assigned, don't retry
|
||||
_, _, error_reason = channel.get_stream()
|
||||
if error_reason and "maximum connection limits" not in error_reason:
|
||||
logger.warning(
|
||||
f"[{client_id}] Can't retry - error not related to connection limits: {error_reason}"
|
||||
# On first failure, check if the error is retryable
|
||||
if attempt == 1:
|
||||
_, _, error_reason = channel.get_stream()
|
||||
if error_reason and "maximum connection limits" not in error_reason:
|
||||
logger.warning(
|
||||
f"[{client_id}] Can't retry - error not related to connection limits: {error_reason}"
|
||||
)
|
||||
should_retry = False
|
||||
break
|
||||
|
||||
# Check if we have time remaining for another sleep cycle
|
||||
elapsed_time = time.time() - wait_start_time
|
||||
remaining_time = retry_timeout - elapsed_time
|
||||
|
||||
# If we don't have enough time for the next sleep interval, break
|
||||
# but only after we've already made an attempt (the while condition will try one more time)
|
||||
if remaining_time <= retry_interval:
|
||||
logger.info(
|
||||
f"[{client_id}] Insufficient time ({remaining_time:.1f}s) for another sleep cycle, will make one final attempt"
|
||||
)
|
||||
break
|
||||
|
||||
# Wait 100ms before retrying
|
||||
elapsed_time = time.time() - wait_start_time
|
||||
remaining_time = retry_timeout - elapsed_time
|
||||
if remaining_time > retry_interval:
|
||||
# Wait before retrying
|
||||
logger.info(
|
||||
f"[{client_id}] Waiting {retry_interval*1000:.0f}ms for a connection to become available (attempt {attempt}, {remaining_time:.1f}s remaining)"
|
||||
)
|
||||
gevent.sleep(retry_interval)
|
||||
retry_interval += 0.025 # Increase wait time by 25ms for next attempt
|
||||
|
||||
# Make one final attempt if we still don't have a stream, should retry, and haven't exceeded timeout
|
||||
if stream_url is None and should_retry and time.time() - wait_start_time < retry_timeout:
|
||||
attempt += 1
|
||||
logger.info(
|
||||
f"[{client_id}] Making final attempt {attempt} at timeout boundary"
|
||||
)
|
||||
stream_url, stream_user_agent, transcode, profile_value = (
|
||||
generate_stream_url(channel_id)
|
||||
)
|
||||
if stream_url is not None:
|
||||
logger.info(
|
||||
f"[{client_id}] Waiting {retry_interval*1000:.0f}ms for a connection to become available (attempt {attempt}, {remaining_time:.1f}s remaining)"
|
||||
f"[{client_id}] Successfully obtained stream on final attempt for channel {channel_id}"
|
||||
)
|
||||
gevent.sleep(retry_interval)
|
||||
retry_interval += 0.025 # Increase wait time by 25ms for next attempt
|
||||
|
||||
if stream_url is None:
|
||||
# Make sure to release any stream locks that might have been acquired
|
||||
if hasattr(channel, "streams") and channel.streams.exists():
|
||||
for stream in channel.streams.all():
|
||||
try:
|
||||
stream.release_stream()
|
||||
logger.info(
|
||||
f"[{client_id}] Released stream {stream.id} for channel {channel_id}"
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"[{client_id}] Error releasing stream: {e}")
|
||||
# Release the channel's stream lock if one was acquired
|
||||
# Note: Only call this if get_stream() actually assigned a stream
|
||||
# In our case, if stream_url is None, no stream was ever assigned, so don't release
|
||||
|
||||
# Get the specific error message if available
|
||||
wait_duration = f"{int(time.time() - wait_start_time)}s"
|
||||
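With the 3 s budget, a 100 ms starting interval, and 25 ms growth per attempt, the loop gets roughly a dozen wait cycles before the final boundary attempt. A quick illustration of that arithmetic, ignoring the time spent inside generate_stream_url itself:

budget, interval, waited, cycles = 3.0, 0.1, 0.0, 0
while waited + interval <= budget:
    waited += interval
    interval += 0.025
    cycles += 1
# cycles -> 12, waited -> about 2.85 s, after which one last attempt is made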
|
|
@ -189,6 +230,9 @@ def stream_ts(request, channel_id):
|
|||
if error_reason
|
||||
else "No available streams for this channel"
|
||||
)
|
||||
logger.info(
|
||||
f"[{client_id}] Failed to obtain stream after {attempt} attempts over {wait_duration}: {error_msg}"
|
||||
)
|
||||
return JsonResponse(
|
||||
{"error": error_msg, "waited": wait_duration}, status=503
|
||||
) # 503 Service Unavailable is appropriate here
|
||||
|
|
@ -270,6 +314,15 @@ def stream_ts(request, channel_id):
|
|||
logger.info(
|
||||
f"[{client_id}] Redirecting to validated URL: {final_url} ({message})"
|
||||
)
|
||||
|
||||
# For non-HTTP protocols (RTSP/RTP/UDP), we need to manually create the redirect
|
||||
# because Django's HttpResponseRedirect blocks them for security
|
||||
if final_url.startswith(('rtsp://', 'rtp://', 'udp://')):
|
||||
logger.info(f"[{client_id}] Using manual redirect for non-HTTP protocol")
|
||||
response = HttpResponse(status=301)
|
||||
response['Location'] = final_url
|
||||
return response
|
||||
|
||||
return HttpResponseRedirect(final_url)
|
||||
else:
|
||||
logger.error(
|
||||
|
|
@ -474,17 +527,26 @@ def stream_xc(request, username, password, channel_id):
|
|||
|
||||
print(f"Fetchin channel with ID: {channel_id}")
|
||||
if user.user_level < 10:
|
||||
filters = {
|
||||
"id": int(channel_id),
|
||||
"channelprofilemembership__enabled": True,
|
||||
"user_level__lte": user.user_level,
|
||||
}
|
||||
user_profile_count = user.channel_profiles.count()
|
||||
|
||||
if user.channel_profiles.count() > 0:
|
||||
channel_profiles = user.channel_profiles.all()
|
||||
filters["channelprofilemembership__channel_profile__in"] = channel_profiles
|
||||
# If user has ALL profiles or NO profiles, give unrestricted access
|
||||
if user_profile_count == 0:
|
||||
# No profile filtering - user sees all channels based on user_level
|
||||
filters = {
|
||||
"id": int(channel_id),
|
||||
"user_level__lte": user.user_level
|
||||
}
|
||||
channel = Channel.objects.filter(**filters).first()
|
||||
else:
|
||||
# User has specific limited profiles assigned
|
||||
filters = {
|
||||
"id": int(channel_id),
|
||||
"channelprofilemembership__enabled": True,
|
||||
"user_level__lte": user.user_level,
|
||||
"channelprofilemembership__channel_profile__in": user.channel_profiles.all()
|
||||
}
|
||||
channel = Channel.objects.filter(**filters).distinct().first()
|
||||
|
||||
channel = Channel.objects.filter(**filters).distinct().first()
|
||||
if not channel:
|
||||
return JsonResponse({"error": "Not found"}, status=404)
|
||||
else:
|
||||
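A condensed view of the access rule introduced above (hypothetical helper; the real code builds the filters inline):

def xc_channel_filters(user, channel_id):
    # Users with no channel profiles assigned are limited only by user_level;
    # users with assigned profiles additionally need an enabled membership
    # in one of those profiles.
    filters = {"id": int(channel_id), "user_level__lte": user.user_level}
    if user.channel_profiles.count() > 0:
        filters["channelprofilemembership__enabled"] = True
        filters["channelprofilemembership__channel_profile__in"] = user.channel_profiles.all()
    return filters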
|
|
|
|||
|
|
@ -6,6 +6,7 @@ from .api_views import (
|
|||
SeriesViewSet,
|
||||
VODCategoryViewSet,
|
||||
UnifiedContentViewSet,
|
||||
VODLogoViewSet,
|
||||
)
|
||||
|
||||
app_name = 'vod'
|
||||
|
|
@ -16,5 +17,6 @@ router.register(r'episodes', EpisodeViewSet, basename='episode')
|
|||
router.register(r'series', SeriesViewSet, basename='series')
|
||||
router.register(r'categories', VODCategoryViewSet, basename='vodcategory')
|
||||
router.register(r'all', UnifiedContentViewSet, basename='unified-content')
|
||||
router.register(r'vodlogos', VODLogoViewSet, basename='vodlogo')
|
||||
|
||||
urlpatterns = router.urls
|
||||
@ -4,25 +4,31 @@ from rest_framework.response import Response
|
|||
from rest_framework.decorators import action
|
||||
from rest_framework.filters import SearchFilter, OrderingFilter
|
||||
from rest_framework.pagination import PageNumberPagination
|
||||
from rest_framework.permissions import AllowAny
|
||||
from django_filters.rest_framework import DjangoFilterBackend
|
||||
from django.shortcuts import get_object_or_404
|
||||
from django.urls import reverse
|
||||
from django.http import StreamingHttpResponse, HttpResponse, FileResponse
|
||||
from django.db.models import Q
|
||||
import django_filters
|
||||
from django.db.models import Q
|
||||
import logging
|
||||
import os
|
||||
import requests
|
||||
from apps.accounts.permissions import (
|
||||
Authenticated,
|
||||
permission_classes_by_action,
|
||||
)
|
||||
from .models import (
|
||||
Series, VODCategory, Movie, Episode,
|
||||
M3USeriesRelation, M3UMovieRelation, M3UEpisodeRelation
|
||||
Series, VODCategory, Movie, Episode, VODLogo,
|
||||
M3USeriesRelation, M3UMovieRelation, M3UEpisodeRelation, M3UVODCategoryRelation
|
||||
)
|
||||
from .serializers import (
|
||||
MovieSerializer,
|
||||
EpisodeSerializer,
|
||||
SeriesSerializer,
|
||||
VODCategorySerializer,
|
||||
VODLogoSerializer,
|
||||
M3UMovieRelationSerializer,
|
||||
M3USeriesRelationSerializer,
|
||||
M3UEpisodeRelationSerializer
|
||||
|
|
@ -723,6 +729,59 @@ class VODCategoryViewSet(viewsets.ReadOnlyModelViewSet):
|
|||
except KeyError:
|
||||
return [Authenticated()]
|
||||
|
||||
def list(self, request, *args, **kwargs):
|
||||
"""Override list to ensure Uncategorized categories and relations exist for all XC accounts with VOD enabled"""
|
||||
from apps.m3u.models import M3UAccount
|
||||
|
||||
# Ensure Uncategorized categories exist
|
||||
movie_category, _ = VODCategory.objects.get_or_create(
|
||||
name="Uncategorized",
|
||||
category_type="movie",
|
||||
defaults={}
|
||||
)
|
||||
|
||||
series_category, _ = VODCategory.objects.get_or_create(
|
||||
name="Uncategorized",
|
||||
category_type="series",
|
||||
defaults={}
|
||||
)
|
||||
|
||||
# Get all active XC accounts with VOD enabled
|
||||
xc_accounts = M3UAccount.objects.filter(
|
||||
account_type=M3UAccount.Types.XC,
|
||||
is_active=True
|
||||
)
|
||||
|
||||
for account in xc_accounts:
|
||||
if account.custom_properties:
|
||||
custom_props = account.custom_properties or {}
|
||||
vod_enabled = custom_props.get("enable_vod", False)
|
||||
|
||||
if vod_enabled:
|
||||
# Ensure relations exist for this account
|
||||
auto_enable_new = custom_props.get("auto_enable_new_groups_vod", True)
|
||||
|
||||
M3UVODCategoryRelation.objects.get_or_create(
|
||||
category=movie_category,
|
||||
m3u_account=account,
|
||||
defaults={
|
||||
'enabled': auto_enable_new,
|
||||
'custom_properties': {}
|
||||
}
|
||||
)
|
||||
|
||||
M3UVODCategoryRelation.objects.get_or_create(
|
||||
category=series_category,
|
||||
m3u_account=account,
|
||||
defaults={
|
||||
'enabled': auto_enable_new,
|
||||
'custom_properties': {}
|
||||
}
|
||||
)
|
||||
|
||||
# Now proceed with normal list operation
|
||||
return super().list(request, *args, **kwargs)
|
||||
|
||||
|
||||
class UnifiedContentViewSet(viewsets.ReadOnlyModelViewSet):
|
||||
"""ViewSet that combines Movies and Series for unified 'All' view"""
|
||||
|
|
@ -847,8 +906,8 @@ class UnifiedContentViewSet(viewsets.ReadOnlyModelViewSet):
|
|||
logo.url as logo_url,
|
||||
'movie' as content_type
|
||||
FROM vod_movie movies
|
||||
LEFT JOIN dispatcharr_channels_logo logo ON movies.logo_id = logo.id
|
||||
WHERE {movie_where}
|
||||
LEFT JOIN vod_vodlogo logo ON movies.logo_id = logo.id
|
||||
WHERE {where_conditions[0]}
|
||||
|
||||
UNION ALL
|
||||
|
||||
|
|
@ -869,8 +928,8 @@ class UnifiedContentViewSet(viewsets.ReadOnlyModelViewSet):
|
|||
logo.url as logo_url,
|
||||
'series' as content_type
|
||||
FROM vod_series series
|
||||
LEFT JOIN dispatcharr_channels_logo logo ON series.logo_id = logo.id
|
||||
WHERE {series_where}
|
||||
LEFT JOIN vod_vodlogo logo ON series.logo_id = logo.id
|
||||
WHERE {where_conditions[1]}
|
||||
)
|
||||
SELECT * FROM unified_content
|
||||
ORDER BY LOWER(name), id
|
||||
|
|
@ -906,25 +965,11 @@ class UnifiedContentViewSet(viewsets.ReadOnlyModelViewSet):
|
|||
'id': item_dict['logo_id'],
|
||||
'name': item_dict['logo_name'],
|
||||
'url': item_dict['logo_url'],
|
||||
'cache_url': cache_url,
|
||||
'channel_count': 0,
|
||||
'is_used': True,
|
||||
'channel_names': []
|
||||
'cache_url': f"/api/vod/vodlogos/{item_dict['logo_id']}/cache/",
|
||||
'movie_count': 0, # We don't calculate this in raw SQL
|
||||
'series_count': 0, # We don't calculate this in raw SQL
|
||||
'is_used': True
|
||||
}
|
||||
if not poster_candidate:
|
||||
poster_candidate = custom_props.get('poster_url') or custom_props.get('cover')
|
||||
|
||||
backdrop_values = []
|
||||
if isinstance(custom_props.get('backdrop_path'), list):
|
||||
backdrop_values = custom_props['backdrop_path']
|
||||
elif custom_props.get('backdrop_url'):
|
||||
backdrop_values = [custom_props['backdrop_url']]
|
||||
|
||||
rating_value = item_dict['rating']
|
||||
try:
|
||||
rating_parsed = float(rating_value) if rating_value is not None else 0.0
|
||||
except (TypeError, ValueError):
|
||||
rating_parsed = rating_value
|
||||
|
||||
# Convert to the format expected by frontend
|
||||
formatted_item = {
|
||||
|
|
@ -980,3 +1025,172 @@ class UnifiedContentViewSet(viewsets.ReadOnlyModelViewSet):
|
|||
import traceback
|
||||
logger.error(traceback.format_exc())
|
||||
return Response({'error': str(e)}, status=500)
|
||||
|
||||
|
||||
class VODLogoPagination(PageNumberPagination):
|
||||
page_size = 100
|
||||
page_size_query_param = "page_size"
|
||||
max_page_size = 1000
|
||||
|
||||
|
||||
class VODLogoViewSet(viewsets.ModelViewSet):
|
||||
"""ViewSet for VOD Logo management"""
|
||||
queryset = VODLogo.objects.all()
|
||||
serializer_class = VODLogoSerializer
|
||||
pagination_class = VODLogoPagination
|
||||
filter_backends = [SearchFilter, OrderingFilter]
|
||||
search_fields = ['name', 'url']
|
||||
ordering_fields = ['name', 'id']
|
||||
ordering = ['name']
|
||||
|
||||
def get_permissions(self):
|
||||
try:
|
||||
return [perm() for perm in permission_classes_by_action[self.action]]
|
||||
except KeyError:
|
||||
if self.action == 'cache':
|
||||
return [AllowAny()]
|
||||
return [Authenticated()]
|
||||
|
||||
def get_queryset(self):
|
||||
"""Optimize queryset with prefetch and add filtering"""
|
||||
queryset = VODLogo.objects.prefetch_related('movie', 'series').order_by('name')
|
||||
|
||||
# Filter by specific IDs
|
||||
ids = self.request.query_params.getlist('ids')
|
||||
if ids:
|
||||
try:
|
||||
id_list = [int(id_str) for id_str in ids if id_str.isdigit()]
|
||||
if id_list:
|
||||
queryset = queryset.filter(id__in=id_list)
|
||||
except (ValueError, TypeError):
|
||||
queryset = VODLogo.objects.none()
|
||||
|
||||
# Filter by usage
|
||||
used_filter = self.request.query_params.get('used', None)
|
||||
if used_filter == 'true':
|
||||
# Return logos that are used by movies OR series
|
||||
queryset = queryset.filter(
|
||||
Q(movie__isnull=False) | Q(series__isnull=False)
|
||||
).distinct()
|
||||
elif used_filter == 'false':
|
||||
# Return logos that are NOT used by either
|
||||
queryset = queryset.filter(
|
||||
movie__isnull=True,
|
||||
series__isnull=True
|
||||
)
|
||||
elif used_filter == 'movies':
|
||||
# Return logos that are used by movies (may also be used by series)
|
||||
queryset = queryset.filter(movie__isnull=False).distinct()
|
||||
elif used_filter == 'series':
|
||||
# Return logos that are used by series (may also be used by movies)
|
||||
queryset = queryset.filter(series__isnull=False).distinct()
|
||||
|
||||
|
||||
# Filter by name
|
||||
name_query = self.request.query_params.get('name', None)
|
||||
if name_query:
|
||||
queryset = queryset.filter(name__icontains=name_query)
|
||||
|
||||
# No pagination mode
|
||||
if self.request.query_params.get('no_pagination', 'false').lower() == 'true':
|
||||
self.pagination_class = None
|
||||
|
||||
return queryset
|
||||
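Illustrative query strings for the filters above (the paths assume the vodlogos router registration added earlier in this commit):

# GET /api/vod/vodlogos/?used=false            -> logos referenced by no movie or series
# GET /api/vod/vodlogos/?used=movies&name=hd   -> movie logos whose name contains "hd"
# GET /api/vod/vodlogos/?ids=3&ids=7           -> only logos 3 and 7
# GET /api/vod/vodlogos/?no_pagination=true    -> full, unpaginated list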
|
||||
@action(detail=True, methods=["get"], permission_classes=[AllowAny])
|
||||
def cache(self, request, pk=None):
|
||||
"""Streams the VOD logo file, whether it's local or remote."""
|
||||
logo = self.get_object()
|
||||
|
||||
if not logo.url:
|
||||
return HttpResponse(status=404)
|
||||
|
||||
# Check if this is a local file path
|
||||
if logo.url.startswith('/data/'):
|
||||
# It's a local file
|
||||
file_path = logo.url
|
||||
if not os.path.exists(file_path):
|
||||
logger.error(f"VOD logo file not found: {file_path}")
|
||||
return HttpResponse(status=404)
|
||||
|
||||
try:
|
||||
return FileResponse(open(file_path, 'rb'), content_type='image/png')
|
||||
except Exception as e:
|
||||
logger.error(f"Error serving VOD logo file {file_path}: {str(e)}")
|
||||
return HttpResponse(status=500)
|
||||
else:
|
||||
# It's a remote URL - proxy it
|
||||
try:
|
||||
response = requests.get(logo.url, stream=True, timeout=10)
|
||||
response.raise_for_status()
|
||||
|
||||
content_type = response.headers.get('Content-Type', 'image/png')
|
||||
|
||||
return StreamingHttpResponse(
|
||||
response.iter_content(chunk_size=8192),
|
||||
content_type=content_type
|
||||
)
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error(f"Error fetching remote VOD logo {logo.url}: {str(e)}")
|
||||
return HttpResponse(status=404)
|
||||
|
||||
@action(detail=False, methods=["delete"], url_path="bulk-delete")
|
||||
def bulk_delete(self, request):
|
||||
"""Delete multiple VOD logos at once"""
|
||||
logo_ids = request.data.get('logo_ids', [])
|
||||
|
||||
if not logo_ids:
|
||||
return Response(
|
||||
{"error": "No logo IDs provided"},
|
||||
status=status.HTTP_400_BAD_REQUEST
|
||||
)
|
||||
|
||||
try:
|
||||
# Get logos to delete
|
||||
logos = VODLogo.objects.filter(id__in=logo_ids)
|
||||
deleted_count = logos.count()
|
||||
|
||||
# Delete them
|
||||
logos.delete()
|
||||
|
||||
return Response({
|
||||
"deleted_count": deleted_count,
|
||||
"message": f"Successfully deleted {deleted_count} VOD logo(s)"
|
||||
})
|
||||
except Exception as e:
|
||||
logger.error(f"Error during bulk VOD logo deletion: {str(e)}")
|
||||
return Response(
|
||||
{"error": str(e)},
|
||||
status=status.HTTP_500_INTERNAL_SERVER_ERROR
|
||||
)
|
||||
|
||||
@action(detail=False, methods=["post"])
|
||||
def cleanup(self, request):
|
||||
"""Delete all VOD logos that are not used by any movies or series"""
|
||||
try:
|
||||
# Find unused logos
|
||||
unused_logos = VODLogo.objects.filter(
|
||||
movie__isnull=True,
|
||||
series__isnull=True
|
||||
)
|
||||
|
||||
deleted_count = unused_logos.count()
|
||||
logo_names = list(unused_logos.values_list('name', flat=True))
|
||||
|
||||
# Delete them
|
||||
unused_logos.delete()
|
||||
|
||||
logger.info(f"Cleaned up {deleted_count} unused VOD logos: {logo_names}")
|
||||
|
||||
return Response({
|
||||
"deleted_count": deleted_count,
|
||||
"deleted_logos": logo_names,
|
||||
"message": f"Successfully deleted {deleted_count} unused VOD logo(s)"
|
||||
})
|
||||
except Exception as e:
|
||||
logger.error(f"Error during VOD logo cleanup: {str(e)}")
|
||||
return Response(
|
||||
{"error": str(e)},
|
||||
status=status.HTTP_500_INTERNAL_SERVER_ERROR
|
||||
)
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,264 @@
|
|||
# Generated by Django 5.2.4 on 2025-11-06 23:01
|
||||
|
||||
import django.db.models.deletion
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
def migrate_vod_logos_forward(apps, schema_editor):
|
||||
"""
|
||||
Migrate VOD logos from the Logo table to the new VODLogo table.
|
||||
This copies all logos referenced by movies or series to VODLogo.
|
||||
Uses pure SQL for maximum performance.
|
||||
"""
|
||||
from django.db import connection
|
||||
|
||||
print("\n" + "="*80)
|
||||
print("Starting VOD logo migration...")
|
||||
print("="*80)
|
||||
|
||||
with connection.cursor() as cursor:
|
||||
# Step 1: Copy unique logos from Logo table to VODLogo table
|
||||
# Only copy logos that are used by movies or series
|
||||
print("Copying logos to VODLogo table...")
|
||||
cursor.execute("""
|
||||
INSERT INTO vod_vodlogo (name, url)
|
||||
SELECT DISTINCT l.name, l.url
|
||||
FROM dispatcharr_channels_logo l
|
||||
WHERE l.id IN (
|
||||
SELECT DISTINCT logo_id FROM vod_movie WHERE logo_id IS NOT NULL
|
||||
UNION
|
||||
SELECT DISTINCT logo_id FROM vod_series WHERE logo_id IS NOT NULL
|
||||
)
|
||||
ON CONFLICT (url) DO NOTHING
|
||||
""")
|
||||
print(f"Created VODLogo entries")
|
||||
|
||||
# Step 2: Update movies to point to VODLogo IDs using JOIN
|
||||
print("Updating movie references...")
|
||||
cursor.execute("""
|
||||
UPDATE vod_movie m
|
||||
SET logo_id = v.id
|
||||
FROM dispatcharr_channels_logo l
|
||||
INNER JOIN vod_vodlogo v ON l.url = v.url
|
||||
WHERE m.logo_id = l.id
|
||||
AND m.logo_id IS NOT NULL
|
||||
""")
|
||||
movie_count = cursor.rowcount
|
||||
print(f"Updated {movie_count} movies with new VOD logo references")
|
||||
|
||||
# Step 3: Update series to point to VODLogo IDs using JOIN
|
||||
print("Updating series references...")
|
||||
cursor.execute("""
|
||||
UPDATE vod_series s
|
||||
SET logo_id = v.id
|
||||
FROM dispatcharr_channels_logo l
|
||||
INNER JOIN vod_vodlogo v ON l.url = v.url
|
||||
WHERE s.logo_id = l.id
|
||||
AND s.logo_id IS NOT NULL
|
||||
""")
|
||||
series_count = cursor.rowcount
|
||||
print(f"Updated {series_count} series with new VOD logo references")
|
||||
|
||||
print("="*80)
|
||||
print("VOD logo migration completed successfully!")
|
||||
print(f"Summary: Updated {movie_count} movies and {series_count} series")
|
||||
print("="*80 + "\n")
|
||||
|
||||
|
||||
def migrate_vod_logos_backward(apps, schema_editor):
|
||||
"""
|
||||
Reverse migration - moves VODLogos back to Logo table.
|
||||
This recreates Logo entries for all VODLogos and updates Movie/Series references.
|
||||
"""
|
||||
Logo = apps.get_model('dispatcharr_channels', 'Logo')
|
||||
VODLogo = apps.get_model('vod', 'VODLogo')
|
||||
Movie = apps.get_model('vod', 'Movie')
|
||||
Series = apps.get_model('vod', 'Series')
|
||||
|
||||
print("\n" + "="*80)
|
||||
print("REVERSE: Moving VOD logos back to Logo table...")
|
||||
print("="*80)
|
||||
|
||||
# Get all VODLogos
|
||||
vod_logos = VODLogo.objects.all()
|
||||
print(f"Found {vod_logos.count()} VOD logos to reverse migrate")
|
||||
|
||||
# Create Logo entries for each VODLogo
|
||||
logos_to_create = []
|
||||
vod_to_logo_mapping = {} # VODLogo ID -> Logo ID
|
||||
|
||||
for vod_logo in vod_logos:
|
||||
# Check if a Logo with this URL already exists
|
||||
existing_logo = Logo.objects.filter(url=vod_logo.url).first()
|
||||
|
||||
if existing_logo:
|
||||
# Logo already exists, just map to it
|
||||
vod_to_logo_mapping[vod_logo.id] = existing_logo.id
|
||||
print(f"Logo already exists for URL: {vod_logo.url[:50]}... (using existing)")
|
||||
else:
|
||||
# Create new Logo entry
|
||||
new_logo = Logo(name=vod_logo.name, url=vod_logo.url)
|
||||
logos_to_create.append(new_logo)
|
||||
|
||||
# Bulk create new Logo entries
|
||||
if logos_to_create:
|
||||
print(f"Creating {len(logos_to_create)} new Logo entries...")
|
||||
Logo.objects.bulk_create(logos_to_create, ignore_conflicts=True)
|
||||
print("Logo entries created")
|
||||
|
||||
# Get the created Logo instances with their IDs
|
||||
for vod_logo in vod_logos:
|
||||
if vod_logo.id not in vod_to_logo_mapping:
|
||||
try:
|
||||
logo = Logo.objects.get(url=vod_logo.url)
|
||||
vod_to_logo_mapping[vod_logo.id] = logo.id
|
||||
except Logo.DoesNotExist:
|
||||
print(f"Warning: Could not find Logo for URL: {vod_logo.url[:100]}...")
|
||||
|
||||
print(f"Created mapping for {len(vod_to_logo_mapping)} VOD logos -> Logos")
|
||||
|
||||
# Update movies to point back to Logo table
|
||||
movie_count = 0
|
||||
for movie in Movie.objects.exclude(logo__isnull=True):
|
||||
if movie.logo_id in vod_to_logo_mapping:
|
||||
movie.logo_id = vod_to_logo_mapping[movie.logo_id]
|
||||
movie.save(update_fields=['logo_id'])
|
||||
movie_count += 1
|
||||
print(f"Updated {movie_count} movies to use Logo table")
|
||||
|
||||
# Update series to point back to Logo table
|
||||
series_count = 0
|
||||
for series in Series.objects.exclude(logo__isnull=True):
|
||||
if series.logo_id in vod_to_logo_mapping:
|
||||
series.logo_id = vod_to_logo_mapping[series.logo_id]
|
||||
series.save(update_fields=['logo_id'])
|
||||
series_count += 1
|
||||
print(f"Updated {series_count} series to use Logo table")
|
||||
|
||||
# Delete VODLogos (they're now redundant)
|
||||
vod_logo_count = vod_logos.count()
|
||||
vod_logos.delete()
|
||||
print(f"Deleted {vod_logo_count} VOD logos")
|
||||
|
||||
print("="*80)
|
||||
print("Reverse migration completed!")
|
||||
print(f"Summary: Created/reused {len(vod_to_logo_mapping)} logos, updated {movie_count} movies and {series_count} series")
|
||||
print("="*80 + "\n")
|
||||
|
||||
|
||||
def cleanup_migrated_logos(apps, schema_editor):
|
||||
"""
|
||||
Delete Logo entries that were successfully migrated to VODLogo.
|
||||
|
||||
Uses efficient JOIN-based approach with LEFT JOIN to exclude channel usage.
|
||||
"""
|
||||
from django.db import connection
|
||||
|
||||
print("\n" + "="*80)
|
||||
print("Cleaning up migrated Logo entries...")
|
||||
print("="*80)
|
||||
|
||||
with connection.cursor() as cursor:
|
||||
# Single efficient query using JOINs:
|
||||
# - JOIN with vod_vodlogo to find migrated logos
|
||||
# - LEFT JOIN with channels to find which aren't used
|
||||
cursor.execute("""
|
||||
DELETE FROM dispatcharr_channels_logo
|
||||
WHERE id IN (
|
||||
SELECT l.id
|
||||
FROM dispatcharr_channels_logo l
|
||||
INNER JOIN vod_vodlogo v ON l.url = v.url
|
||||
LEFT JOIN dispatcharr_channels_channel c ON c.logo_id = l.id
|
||||
WHERE c.id IS NULL
|
||||
)
|
||||
""")
|
||||
deleted_count = cursor.rowcount
|
||||
|
||||
print(f"✓ Deleted {deleted_count} migrated Logo entries (not used by channels)")
|
||||
print("="*80 + "\n")
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('vod', '0002_add_last_seen_with_default'),
|
||||
('dispatcharr_channels', '0013_alter_logo_url'), # Ensure Logo table exists
|
||||
]
|
||||
|
||||
operations = [
|
||||
# Step 1: Create the VODLogo model
|
||||
migrations.CreateModel(
|
||||
name='VODLogo',
|
||||
fields=[
|
||||
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
|
||||
('name', models.CharField(max_length=255)),
|
||||
('url', models.TextField(unique=True)),
|
||||
],
|
||||
options={
|
||||
'verbose_name': 'VOD Logo',
|
||||
'verbose_name_plural': 'VOD Logos',
|
||||
},
|
||||
),
|
||||
|
||||
# Step 2: Remove foreign key constraints temporarily (so we can change the IDs)
|
||||
# We need to find and drop the actual constraint names dynamically
|
||||
migrations.RunSQL(
|
||||
sql=[
|
||||
# Drop movie logo constraint (find it dynamically)
|
||||
"""
|
||||
DO $$
|
||||
DECLARE
|
||||
constraint_name text;
|
||||
BEGIN
|
||||
SELECT conname INTO constraint_name
|
||||
FROM pg_constraint
|
||||
WHERE conrelid = 'vod_movie'::regclass
|
||||
AND conname LIKE '%logo_id%fk%';
|
||||
|
||||
IF constraint_name IS NOT NULL THEN
|
||||
EXECUTE 'ALTER TABLE vod_movie DROP CONSTRAINT ' || constraint_name;
|
||||
END IF;
|
||||
END $$;
|
||||
""",
|
||||
# Drop series logo constraint (find it dynamically)
|
||||
"""
|
||||
DO $$
|
||||
DECLARE
|
||||
constraint_name text;
|
||||
BEGIN
|
||||
SELECT conname INTO constraint_name
|
||||
FROM pg_constraint
|
||||
WHERE conrelid = 'vod_series'::regclass
|
||||
AND conname LIKE '%logo_id%fk%';
|
||||
|
||||
IF constraint_name IS NOT NULL THEN
|
||||
EXECUTE 'ALTER TABLE vod_series DROP CONSTRAINT ' || constraint_name;
|
||||
END IF;
|
||||
END $$;
|
||||
""",
|
||||
],
|
||||
reverse_sql=[
|
||||
# The AlterField operations will recreate the constraints pointing to VODLogo,
|
||||
# so we don't need to manually recreate them in reverse
|
||||
migrations.RunSQL.noop,
|
||||
],
|
||||
),
|
||||
|
||||
# Step 3: Migrate the data (this copies logos and updates references)
|
||||
migrations.RunPython(migrate_vod_logos_forward, migrate_vod_logos_backward),
|
||||
|
||||
# Step 4: Now we can safely alter the foreign keys to point to VODLogo
|
||||
migrations.AlterField(
|
||||
model_name='movie',
|
||||
name='logo',
|
||||
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='movie', to='vod.vodlogo'),
|
||||
),
|
||||
migrations.AlterField(
|
||||
model_name='series',
|
||||
name='logo',
|
||||
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='series', to='vod.vodlogo'),
|
||||
),
|
||||
|
||||
# Step 5: Clean up migrated Logo entries
|
||||
migrations.RunPython(cleanup_migrated_logos, migrations.RunPython.noop),
|
||||
]
|
||||
|
|
@ -4,10 +4,22 @@ from django.utils import timezone
|
|||
from django.contrib.contenttypes.fields import GenericForeignKey
|
||||
from django.contrib.contenttypes.models import ContentType
|
||||
from apps.m3u.models import M3UAccount
|
||||
from apps.channels.models import Logo
|
||||
import uuid
|
||||
|
||||
|
||||
class VODLogo(models.Model):
|
||||
"""Logo model specifically for VOD content (movies and series)"""
|
||||
name = models.CharField(max_length=255)
|
||||
url = models.TextField(unique=True)
|
||||
|
||||
def __str__(self):
|
||||
return self.name
|
||||
|
||||
class Meta:
|
||||
verbose_name = 'VOD Logo'
|
||||
verbose_name_plural = 'VOD Logos'
|
||||
|
||||
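Because url is unique on this model, the call sites elsewhere in this commit can create logos idempotently (bulk_create with ignore_conflicts). A hedged one-off equivalent, with a hypothetical URL:

logo, created = VODLogo.objects.get_or_create(
    url="http://example.com/poster.jpg",      # hypothetical URL
    defaults={"name": "Example Poster"},
)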
|
||||
class VODCategory(models.Model):
|
||||
"""Categories for organizing VODs (e.g., Action, Comedy, Drama)"""
|
||||
|
||||
|
|
@ -69,7 +81,7 @@ class Series(models.Model):
|
|||
year = models.IntegerField(blank=True, null=True)
|
||||
rating = models.CharField(max_length=10, blank=True, null=True)
|
||||
genre = models.CharField(max_length=255, blank=True, null=True)
|
||||
logo = models.ForeignKey(Logo, on_delete=models.SET_NULL, null=True, blank=True, related_name='series')
|
||||
logo = models.ForeignKey(VODLogo, on_delete=models.SET_NULL, null=True, blank=True, related_name='series')
|
||||
|
||||
# Metadata IDs for deduplication - these should be globally unique when present
|
||||
tmdb_id = models.CharField(max_length=50, blank=True, null=True, unique=True, help_text="TMDB ID for metadata")
|
||||
|
|
@ -108,7 +120,7 @@ class Movie(models.Model):
|
|||
rating = models.CharField(max_length=10, blank=True, null=True)
|
||||
genre = models.CharField(max_length=255, blank=True, null=True)
|
||||
duration_secs = models.IntegerField(blank=True, null=True, help_text="Duration in seconds")
|
||||
logo = models.ForeignKey(Logo, on_delete=models.SET_NULL, null=True, blank=True, related_name='movie')
|
||||
logo = models.ForeignKey(VODLogo, on_delete=models.SET_NULL, null=True, blank=True, related_name='movie')
|
||||
|
||||
# Metadata IDs for deduplication - these should be globally unique when present
|
||||
tmdb_id = models.CharField(max_length=50, blank=True, null=True, unique=True, help_text="TMDB ID for metadata")
|
||||
|
|
|
|||
|
|
@ -1,13 +1,79 @@
|
|||
from django.urls import reverse
|
||||
from rest_framework import serializers
|
||||
from django.urls import reverse
|
||||
from .models import (
|
||||
Series, VODCategory, Movie, Episode,
|
||||
Series, VODCategory, Movie, Episode, VODLogo,
|
||||
M3USeriesRelation, M3UMovieRelation, M3UEpisodeRelation, M3UVODCategoryRelation
|
||||
)
|
||||
from apps.channels.serializers import LogoSerializer
|
||||
from apps.m3u.serializers import M3UAccountSerializer
|
||||
|
||||
|
||||
class VODLogoSerializer(serializers.ModelSerializer):
|
||||
cache_url = serializers.SerializerMethodField()
|
||||
movie_count = serializers.SerializerMethodField()
|
||||
series_count = serializers.SerializerMethodField()
|
||||
is_used = serializers.SerializerMethodField()
|
||||
item_names = serializers.SerializerMethodField()
|
||||
|
||||
class Meta:
|
||||
model = VODLogo
|
||||
fields = ["id", "name", "url", "cache_url", "movie_count", "series_count", "is_used", "item_names"]
|
||||
|
||||
def validate_url(self, value):
|
||||
"""Validate that the URL is unique for creation or update"""
|
||||
if self.instance and self.instance.url == value:
|
||||
return value
|
||||
|
||||
if VODLogo.objects.filter(url=value).exists():
|
||||
raise serializers.ValidationError("A VOD logo with this URL already exists.")
|
||||
|
||||
return value
|
||||
|
||||
def create(self, validated_data):
|
||||
"""Handle logo creation with proper URL validation"""
|
||||
return VODLogo.objects.create(**validated_data)
|
||||
|
||||
def update(self, instance, validated_data):
|
||||
"""Handle logo updates"""
|
||||
for attr, value in validated_data.items():
|
||||
setattr(instance, attr, value)
|
||||
instance.save()
|
||||
return instance
|
||||
|
||||
def get_cache_url(self, obj):
|
||||
request = self.context.get("request")
|
||||
if request:
|
||||
return request.build_absolute_uri(
|
||||
reverse("api:vod:vodlogo-cache", args=[obj.id])
|
||||
)
|
||||
return reverse("api:vod:vodlogo-cache", args=[obj.id])
|
||||
|
||||
def get_movie_count(self, obj):
|
||||
"""Get the number of movies using this logo"""
|
||||
return obj.movie.count() if hasattr(obj, 'movie') else 0
|
||||
|
||||
def get_series_count(self, obj):
|
||||
"""Get the number of series using this logo"""
|
||||
return obj.series.count() if hasattr(obj, 'series') else 0
|
||||
|
||||
def get_is_used(self, obj):
|
||||
"""Check if this logo is used by any movies or series"""
|
||||
return (hasattr(obj, 'movie') and obj.movie.exists()) or (hasattr(obj, 'series') and obj.series.exists())
|
||||
|
||||
def get_item_names(self, obj):
|
||||
"""Get the list of movies and series using this logo"""
|
||||
names = []
|
||||
|
||||
if hasattr(obj, 'movie'):
|
||||
for movie in obj.movie.all()[:10]: # Limit to 10 items for performance
|
||||
names.append(f"Movie: {movie.name}")
|
||||
|
||||
if hasattr(obj, 'series'):
|
||||
for series in obj.series.all()[:10]: # Limit to 10 items for performance
|
||||
names.append(f"Series: {series.name}")
|
||||
|
||||
return names
|
||||
|
||||
|
||||
class M3UVODCategoryRelationSerializer(serializers.ModelSerializer):
|
||||
category = serializers.IntegerField(source="category.id")
|
||||
m3u_account = serializers.IntegerField(source="m3u_account.id")
|
||||
|
|
@ -31,21 +97,9 @@ class VODCategorySerializer(serializers.ModelSerializer):
|
|||
"m3u_accounts",
|
||||
]
|
||||
|
||||
def _build_logo_cache_url(request, logo_id):
|
||||
if not logo_id:
|
||||
return None
|
||||
cache_path = reverse("api:channels:logo-cache", args=[logo_id])
|
||||
if request:
|
||||
return request.build_absolute_uri(cache_path)
|
||||
return cache_path
|
||||
|
||||
|
||||
class SeriesSerializer(serializers.ModelSerializer):
|
||||
logo = LogoSerializer(read_only=True)
|
||||
logo = VODLogoSerializer(read_only=True)
|
||||
episode_count = serializers.SerializerMethodField()
|
||||
library_sources = serializers.SerializerMethodField()
|
||||
series_image = serializers.SerializerMethodField()
|
||||
backdrop_path = serializers.SerializerMethodField()
|
||||
|
||||
class Meta:
|
||||
model = Series
|
||||
|
|
@ -54,129 +108,22 @@ class SeriesSerializer(serializers.ModelSerializer):
|
|||
def get_episode_count(self, obj):
|
||||
return obj.episodes.count()
|
||||
|
||||
def get_library_sources(self, obj):
|
||||
sources = []
|
||||
for item in obj.library_items.select_related("library").filter(library__use_as_vod_source=True):
|
||||
library = item.library
|
||||
sources.append(
|
||||
{
|
||||
"library_id": library.id,
|
||||
"library_name": library.name,
|
||||
"media_item_id": item.id,
|
||||
}
|
||||
)
|
||||
return sources
|
||||
|
||||
def get_series_image(self, obj):
|
||||
request = self.context.get("request")
|
||||
if obj.logo_id:
|
||||
cache_url = _build_logo_cache_url(request, obj.logo_id)
|
||||
return cache_url or (obj.logo.url if obj.logo else None)
|
||||
|
||||
custom = obj.custom_properties or {}
|
||||
return custom.get("poster_url") or custom.get("cover")
|
||||
|
||||
def get_backdrop_path(self, obj):
|
||||
custom = obj.custom_properties or {}
|
||||
if "backdrop_path" in custom and isinstance(custom["backdrop_path"], list):
|
||||
return custom["backdrop_path"]
|
||||
backdrop_url = custom.get("backdrop_url")
|
||||
if backdrop_url:
|
||||
return [backdrop_url]
|
||||
return []
|
||||
|
||||
|
||||
class MovieSerializer(serializers.ModelSerializer):
|
||||
logo = LogoSerializer(read_only=True)
|
||||
library_sources = serializers.SerializerMethodField()
|
||||
movie_image = serializers.SerializerMethodField()
|
||||
backdrop_path = serializers.SerializerMethodField()
|
||||
logo = VODLogoSerializer(read_only=True)
|
||||
|
||||
class Meta:
|
||||
model = Movie
|
||||
fields = '__all__'
|
||||
|
||||
def get_library_sources(self, obj):
|
||||
sources = []
|
||||
for item in obj.library_items.select_related("library").filter(library__use_as_vod_source=True):
|
||||
library = item.library
|
||||
sources.append(
|
||||
{
|
||||
"library_id": library.id,
|
||||
"library_name": library.name,
|
||||
"media_item_id": item.id,
|
||||
}
|
||||
)
|
||||
return sources
|
||||
|
||||
def get_movie_image(self, obj):
|
||||
request = self.context.get("request")
|
||||
if obj.logo_id:
|
||||
cache_url = _build_logo_cache_url(request, obj.logo_id)
|
||||
return cache_url or (obj.logo.url if obj.logo else None)
|
||||
|
||||
custom = obj.custom_properties or {}
|
||||
return custom.get("poster_url") or custom.get("cover")
|
||||
|
||||
def get_backdrop_path(self, obj):
|
||||
custom = obj.custom_properties or {}
|
||||
if "backdrop_path" in custom and isinstance(custom["backdrop_path"], list):
|
||||
return custom["backdrop_path"]
|
||||
backdrop_url = custom.get("backdrop_url")
|
||||
if backdrop_url:
|
||||
return [backdrop_url]
|
||||
return []
|
||||
|
||||
|
||||
class EpisodeSerializer(serializers.ModelSerializer):
|
||||
series = SeriesSerializer(read_only=True)
|
||||
library_sources = serializers.SerializerMethodField()
|
||||
movie_image = serializers.SerializerMethodField()
|
||||
backdrop_path = serializers.SerializerMethodField()
|
||||
|
||||
class Meta:
|
||||
model = Episode
|
||||
fields = '__all__'
|
||||
|
||||
def get_library_sources(self, obj):
|
||||
sources = []
|
||||
for item in obj.library_items.select_related("library").filter(library__use_as_vod_source=True):
|
||||
library = item.library
|
||||
sources.append(
|
||||
{
|
||||
"library_id": library.id,
|
||||
"library_name": library.name,
|
||||
"media_item_id": item.id,
|
||||
}
|
||||
)
|
||||
return sources
|
||||
|
||||
def get_movie_image(self, obj):
|
||||
custom = obj.custom_properties or {}
|
||||
if custom.get("poster_url"):
|
||||
return custom["poster_url"]
|
||||
if obj.series_id and obj.series and obj.series.logo_id:
|
||||
request = self.context.get("request")
|
||||
return _build_logo_cache_url(request, obj.series.logo_id) or (
|
||||
obj.series.logo.url if obj.series.logo else None
|
||||
)
|
||||
return None
|
||||
|
||||
def get_backdrop_path(self, obj):
|
||||
custom = obj.custom_properties or {}
|
||||
if isinstance(custom.get("backdrop_path"), list):
|
||||
return custom["backdrop_path"]
|
||||
backdrop_url = custom.get("backdrop_url")
|
||||
if backdrop_url:
|
||||
return [backdrop_url]
|
||||
if obj.series_id and obj.series:
|
||||
series_custom = obj.series.custom_properties or {}
|
||||
if isinstance(series_custom.get("backdrop_path"), list):
|
||||
return series_custom["backdrop_path"]
|
||||
if series_custom.get("backdrop_url"):
|
||||
return [series_custom["backdrop_url"]]
|
||||
return []
|
||||
|
||||
|
||||
class M3USeriesRelationSerializer(serializers.ModelSerializer):
|
||||
series = SeriesSerializer(read_only=True)
|
||||
|
|
@ -345,7 +292,7 @@ class M3UEpisodeRelationSerializer(serializers.ModelSerializer):
|
|||
|
||||
class EnhancedSeriesSerializer(serializers.ModelSerializer):
|
||||
"""Enhanced serializer for series with provider information"""
|
||||
logo = LogoSerializer(read_only=True)
|
||||
logo = VODLogoSerializer(read_only=True)
|
||||
providers = M3USeriesRelationSerializer(source='m3u_relations', many=True, read_only=True)
|
||||
episode_count = serializers.SerializerMethodField()
|
||||
|
||||
|
|
|
|||
|
|
@ -5,10 +5,9 @@ from django.db.models import Q
|
|||
from apps.m3u.models import M3UAccount
|
||||
from core.xtream_codes import Client as XtreamCodesClient
|
||||
from .models import (
|
||||
VODCategory, Series, Movie, Episode,
|
||||
VODCategory, Series, Movie, Episode, VODLogo,
|
||||
M3USeriesRelation, M3UMovieRelation, M3UEpisodeRelation, M3UVODCategoryRelation
|
||||
)
|
||||
from apps.channels.models import Logo
|
||||
from datetime import datetime
|
||||
import logging
|
||||
import json
|
||||
|
|
@ -128,6 +127,37 @@ def refresh_movies(client, account, categories_by_provider, relations, scan_star
|
|||
"""Refresh movie content using single API call for all movies"""
|
||||
logger.info(f"Refreshing movies for account {account.name}")
|
||||
|
||||
# Ensure "Uncategorized" category exists for movies without a category
|
||||
uncategorized_category, created = VODCategory.objects.get_or_create(
|
||||
name="Uncategorized",
|
||||
category_type="movie",
|
||||
defaults={}
|
||||
)
|
||||
|
||||
# Ensure there's a relation for the Uncategorized category
|
||||
account_custom_props = account.custom_properties or {}
|
||||
auto_enable_new = account_custom_props.get("auto_enable_new_groups_vod", True)
|
||||
|
||||
uncategorized_relation, rel_created = M3UVODCategoryRelation.objects.get_or_create(
|
||||
category=uncategorized_category,
|
||||
m3u_account=account,
|
||||
defaults={
|
||||
'enabled': auto_enable_new,
|
||||
'custom_properties': {}
|
||||
}
|
||||
)
|
||||
|
||||
if created:
|
||||
logger.info(f"Created 'Uncategorized' category for movies")
|
||||
if rel_created:
|
||||
logger.info(f"Created relation for 'Uncategorized' category (enabled={auto_enable_new})")
|
||||
|
||||
# Add uncategorized category to relations dict for easy access
|
||||
relations[uncategorized_category.id] = uncategorized_relation
|
||||
|
||||
# Add to categories_by_provider with a special key for items without category
|
||||
categories_by_provider['__uncategorized__'] = uncategorized_category
|
||||
|
||||
# Get all movies in a single API call
|
||||
logger.info("Fetching all movies from provider...")
|
||||
all_movies_data = client.get_vod_streams() # No category_id = get all movies
|
||||
|
|
@ -151,6 +181,37 @@ def refresh_series(client, account, categories_by_provider, relations, scan_star
|
|||
"""Refresh series content using single API call for all series"""
|
||||
logger.info(f"Refreshing series for account {account.name}")
|
||||
|
||||
# Ensure "Uncategorized" category exists for series without a category
|
||||
uncategorized_category, created = VODCategory.objects.get_or_create(
|
||||
name="Uncategorized",
|
||||
category_type="series",
|
||||
defaults={}
|
||||
)
|
||||
|
||||
# Ensure there's a relation for the Uncategorized category
|
||||
account_custom_props = account.custom_properties or {}
|
||||
auto_enable_new = account_custom_props.get("auto_enable_new_groups_series", True)
|
||||
|
||||
uncategorized_relation, rel_created = M3UVODCategoryRelation.objects.get_or_create(
|
||||
category=uncategorized_category,
|
||||
m3u_account=account,
|
||||
defaults={
|
||||
'enabled': auto_enable_new,
|
||||
'custom_properties': {}
|
||||
}
|
||||
)
|
||||
|
||||
if created:
|
||||
logger.info(f"Created 'Uncategorized' category for series")
|
||||
if rel_created:
|
||||
logger.info(f"Created relation for 'Uncategorized' category (enabled={auto_enable_new})")
|
||||
|
||||
# Add uncategorized category to relations dict for easy access
|
||||
relations[uncategorized_category.id] = uncategorized_relation
|
||||
|
||||
# Add to categories_by_provider with a special key for items without category
|
||||
categories_by_provider['__uncategorized__'] = uncategorized_category
|
||||
|
||||
# Get all series in a single API call
|
||||
logger.info("Fetching all series from provider...")
|
||||
all_series_data = client.get_series() # No category_id = get all series
|
||||
|
|
@ -241,6 +302,7 @@ def batch_create_categories(categories_data, category_type, account):
|
|||
M3UVODCategoryRelation.objects.bulk_create(relations_to_create, ignore_conflicts=True)
|
||||
|
||||
# Delete orphaned category relationships (categories no longer in the M3U source)
|
||||
# Exclude "Uncategorized" from cleanup as it's a special category we manage
|
||||
current_category_ids = set(existing_categories[name].id for name in category_names)
|
||||
existing_relations = M3UVODCategoryRelation.objects.filter(
|
||||
m3u_account=account,
|
||||
|
|
@ -249,7 +311,7 @@ def batch_create_categories(categories_data, category_type, account):
|
|||
|
||||
relations_to_delete = [
|
||||
rel for rel in existing_relations
|
||||
if rel.category_id not in current_category_ids
|
||||
if rel.category_id not in current_category_ids and rel.category.name != "Uncategorized"
|
||||
]
|
||||
|
||||
if relations_to_delete:
|
||||
|
|
@ -332,7 +394,16 @@ def process_movie_batch(account, batch, categories, relations, scan_start_time=N
|
|||
logger.debug("Skipping disabled category")
|
||||
continue
|
||||
else:
|
||||
logger.warning(f"No category ID provided for movie {name}")
|
||||
# Assign to Uncategorized category if no category_id provided
|
||||
logger.debug(f"No category ID provided for movie {name}, assigning to 'Uncategorized'")
|
||||
category = categories.get('__uncategorized__')
|
||||
if category:
|
||||
movie_data['_category_id'] = category.id
|
||||
# Check if uncategorized is disabled
|
||||
relation = relations.get(category.id, None)
|
||||
if relation and not relation.enabled:
|
||||
logger.debug("Skipping disabled 'Uncategorized' category")
|
||||
continue
|
||||
|
||||
# Extract metadata
|
||||
year = extract_year_from_data(movie_data, 'name')
|
||||
|
|
@ -403,7 +474,7 @@ def process_movie_batch(account, batch, categories, relations, scan_start_time=N
|
|||
|
||||
# Get existing logos
|
||||
existing_logos = {
|
||||
logo.url: logo for logo in Logo.objects.filter(url__in=logo_urls)
|
||||
logo.url: logo for logo in VODLogo.objects.filter(url__in=logo_urls)
|
||||
} if logo_urls else {}
|
||||
|
||||
# Create missing logos
|
||||
|
|
@ -411,20 +482,20 @@ def process_movie_batch(account, batch, categories, relations, scan_start_time=N
|
|||
for logo_url in logo_urls:
|
||||
if logo_url not in existing_logos:
|
||||
movie_name = logo_url_to_name.get(logo_url, 'Unknown Movie')
|
||||
logos_to_create.append(Logo(url=logo_url, name=movie_name))
|
||||
logos_to_create.append(VODLogo(url=logo_url, name=movie_name))
|
||||
|
||||
if logos_to_create:
|
||||
try:
|
||||
Logo.objects.bulk_create(logos_to_create, ignore_conflicts=True)
|
||||
VODLogo.objects.bulk_create(logos_to_create, ignore_conflicts=True)
|
||||
# Refresh existing_logos with newly created ones
|
||||
new_logo_urls = [logo.url for logo in logos_to_create]
|
||||
newly_created = {
|
||||
logo.url: logo for logo in Logo.objects.filter(url__in=new_logo_urls)
|
||||
logo.url: logo for logo in VODLogo.objects.filter(url__in=new_logo_urls)
|
||||
}
|
||||
existing_logos.update(newly_created)
|
||||
logger.info(f"Created {len(newly_created)} new logos for movies")
|
||||
logger.info(f"Created {len(newly_created)} new VOD logos for movies")
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to create logos: {e}")
|
||||
logger.warning(f"Failed to create VOD logos: {e}")
|
||||
|
||||
# Get existing movies based on our keys
|
||||
existing_movies = {}
|
||||
|
|
@@ -634,7 +705,16 @@ def process_series_batch(account, batch, categories, relations, scan_start_time=
logger.debug("Skipping disabled category")
continue
else:
logger.warning(f"No category ID provided for series {name}")
# Assign to Uncategorized category if no category_id provided
logger.debug(f"No category ID provided for series {name}, assigning to 'Uncategorized'")
category = categories.get('__uncategorized__')
if category:
series_data['_category_id'] = category.id
# Check if uncategorized is disabled
relation = relations.get(category.id, None)
if relation and not relation.enabled:
logger.debug("Skipping disabled 'Uncategorized' category")
continue

# Extract metadata
year = extract_year(series_data.get('releaseDate', ''))
@@ -725,7 +805,7 @@ def process_series_batch(account, batch, categories, relations, scan_start_time=

# Get existing logos
existing_logos = {
logo.url: logo for logo in Logo.objects.filter(url__in=logo_urls)
logo.url: logo for logo in VODLogo.objects.filter(url__in=logo_urls)
} if logo_urls else {}

# Create missing logos

@@ -733,20 +813,20 @@ def process_series_batch(account, batch, categories, relations, scan_start_time=
for logo_url in logo_urls:
if logo_url not in existing_logos:
series_name = logo_url_to_name.get(logo_url, 'Unknown Series')
logos_to_create.append(Logo(url=logo_url, name=series_name))
logos_to_create.append(VODLogo(url=logo_url, name=series_name))

if logos_to_create:
try:
Logo.objects.bulk_create(logos_to_create, ignore_conflicts=True)
VODLogo.objects.bulk_create(logos_to_create, ignore_conflicts=True)
# Refresh existing_logos with newly created ones
new_logo_urls = [logo.url for logo in logos_to_create]
newly_created = {
logo.url: logo for logo in Logo.objects.filter(url__in=new_logo_urls)
logo.url: logo for logo in VODLogo.objects.filter(url__in=new_logo_urls)
}
existing_logos.update(newly_created)
logger.info(f"Created {len(newly_created)} new logos for series")
logger.info(f"Created {len(newly_created)} new VOD logos for series")
except Exception as e:
logger.warning(f"Failed to create logos: {e}")
logger.warning(f"Failed to create VOD logos: {e}")

# Get existing series based on our keys - same pattern as movies
existing_series = {}
@@ -1424,21 +1504,21 @@ def cleanup_orphaned_vod_content(stale_days=0, scan_start_time=None, account_id=
stale_episode_count = stale_episode_relations.count()
stale_episode_relations.delete()

# Clean up movies with no relations (orphaned) - only if no account_id specified (global cleanup)
if not account_id:
orphaned_movies = Movie.objects.filter(m3u_relations__isnull=True)
orphaned_movie_count = orphaned_movies.count()
# Clean up movies with no relations (orphaned)
# Safe to delete even during account-specific cleanup because if ANY account
# has a relation, m3u_relations will not be null
orphaned_movies = Movie.objects.filter(m3u_relations__isnull=True)
orphaned_movie_count = orphaned_movies.count()
if orphaned_movie_count > 0:
logger.info(f"Deleting {orphaned_movie_count} orphaned movies with no M3U relations")
orphaned_movies.delete()

# Clean up series with no relations (orphaned) - only if no account_id specified (global cleanup)
orphaned_series = Series.objects.filter(m3u_relations__isnull=True)
orphaned_series_count = orphaned_series.count()
# Clean up series with no relations (orphaned)
orphaned_series = Series.objects.filter(m3u_relations__isnull=True)
orphaned_series_count = orphaned_series.count()
if orphaned_series_count > 0:
logger.info(f"Deleting {orphaned_series_count} orphaned series with no M3U relations")
orphaned_series.delete()
else:
# When cleaning up for specific account, we don't remove orphaned content
# as other accounts might still reference it
orphaned_movie_count = 0
orphaned_series_count = 0

# Episodes will be cleaned up via CASCADE when series are deleted
@@ -1999,7 +2079,7 @@ def refresh_movie_advanced_data(m3u_movie_relation_id, force_refresh=False):

def validate_logo_reference(obj, obj_type="object"):
"""
Validate that a logo reference exists in the database.
Validate that a VOD logo reference exists in the database.
If not, set it to None to prevent foreign key constraint violations.

Args:

@@ -2019,9 +2099,9 @@ def validate_logo_reference(obj, obj_type="object"):

try:
# Verify the logo exists in the database
Logo.objects.get(pk=obj.logo.pk)
VODLogo.objects.get(pk=obj.logo.pk)
return True
except Logo.DoesNotExist:
logger.warning(f"Logo with ID {obj.logo.pk} does not exist in database for {obj_type} '{getattr(obj, 'name', 'Unknown')}', setting to None")
except VODLogo.DoesNotExist:
logger.warning(f"VOD Logo with ID {obj.logo.pk} does not exist in database for {obj_type} '{getattr(obj, 'name', 'Unknown')}', setting to None")
obj.logo = None
return False

@@ -2,7 +2,16 @@

from django.urls import path, include
from rest_framework.routers import DefaultRouter
from .api_views import UserAgentViewSet, StreamProfileViewSet, CoreSettingsViewSet, environment, version, rehash_streams_endpoint
from .api_views import (
UserAgentViewSet,
StreamProfileViewSet,
CoreSettingsViewSet,
environment,
version,
rehash_streams_endpoint,
TimezoneListView,
get_system_events
)

router = DefaultRouter()
router.register(r'useragents', UserAgentViewSet, basename='useragent')

@@ -12,5 +21,7 @@ urlpatterns = [
path('settings/env/', environment, name='token_refresh'),
path('version/', version, name='version'),
path('rehash-streams/', rehash_streams_endpoint, name='rehash_streams'),
path('timezones/', TimezoneListView.as_view(), name='timezones'),
path('system-events/', get_system_events, name='system_events'),
path('', include(router.urls)),
]
@@ -6,10 +6,12 @@ import logging
from typing import Optional
from rest_framework import viewsets, status
from rest_framework.response import Response
from rest_framework.views import APIView
from django.shortcuts import get_object_or_404
from rest_framework.permissions import IsAuthenticated
from rest_framework.decorators import api_view, permission_classes, action
from drf_yasg.utils import swagger_auto_schema
from drf_yasg import openapi
from .models import (
UserAgent,
StreamProfile,
@@ -395,25 +397,130 @@ def rehash_streams_endpoint(request):
# Get the current hash keys from settings
hash_key_setting = CoreSettings.objects.get(key=STREAM_HASH_KEY)
hash_keys = hash_key_setting.value.split(",")

# Queue the rehash task
task = rehash_streams.delay(hash_keys)

return Response({
"success": True,
"message": "Stream rehashing task has been queued",
"task_id": task.id
}, status=status.HTTP_200_OK)

except CoreSettings.DoesNotExist:
return Response({
"success": False,
"message": "Hash key settings not found"
}, status=status.HTTP_400_BAD_REQUEST)

except Exception as e:
logger.error(f"Error triggering rehash streams: {e}")
return Response({
"success": False,
"message": "Failed to trigger rehash task"
}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)

# ─────────────────────────────
# Timezone List API
# ─────────────────────────────
class TimezoneListView(APIView):
"""
API endpoint that returns all available timezones supported by pytz.
Returns a list of timezone names grouped by region for easy selection.
This is a general utility endpoint that can be used throughout the application.
"""

def get_permissions(self):
return [Authenticated()]

@swagger_auto_schema(
operation_description="Get list of all supported timezones",
responses={200: openapi.Response('List of timezones with grouping by region')}
)
def get(self, request):
import pytz

# Get all common timezones (excludes deprecated ones)
all_timezones = sorted(pytz.common_timezones)

# Group by region for better UX
grouped = {}
for tz in all_timezones:
if '/' in tz:
region = tz.split('/')[0]
if region not in grouped:
grouped[region] = []
grouped[region].append(tz)
else:
# Handle special zones like UTC, GMT, etc.
if 'Other' not in grouped:
grouped['Other'] = []
grouped['Other'].append(tz)

return Response({
'timezones': all_timezones,
'grouped': grouped,
'count': len(all_timezones)
})

# ─────────────────────────────
# System Events API
# ─────────────────────────────
@api_view(['GET'])
@permission_classes([IsAuthenticated])
def get_system_events(request):
"""
Get recent system events (channel start/stop, buffering, client connections, etc.)

Query Parameters:
limit: Number of events to return per page (default: 100, max: 1000)
offset: Number of events to skip (for pagination, default: 0)
event_type: Filter by specific event type (optional)
"""
from core.models import SystemEvent

try:
# Get pagination params
limit = min(int(request.GET.get('limit', 100)), 1000)
offset = int(request.GET.get('offset', 0))

# Start with all events
events = SystemEvent.objects.all()

# Filter by event_type if provided
event_type = request.GET.get('event_type')
if event_type:
events = events.filter(event_type=event_type)

# Get total count before applying pagination
total_count = events.count()

# Apply offset and limit for pagination
events = events[offset:offset + limit]

# Serialize the data
events_data = [{
'id': event.id,
'event_type': event.event_type,
'event_type_display': event.get_event_type_display(),
'timestamp': event.timestamp.isoformat(),
'channel_id': str(event.channel_id) if event.channel_id else None,
'channel_name': event.channel_name,
'details': event.details
} for event in events]

return Response({
'events': events_data,
'count': len(events_data),
'total': total_count,
'offset': offset,
'limit': limit
})

except Exception as e:
logger.error(f"Error fetching system events: {e}")
return Response({
'error': 'Failed to fetch system events'
}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
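The new events endpoint is a plain paginated GET, so any HTTP client can consume it. A minimal sketch — the host, port, and bearer-token auth are assumptions about the deployment, while the query parameters and response keys come from the view above:

```python
import requests

# Assumed defaults: local install on the default Dispatcharr port with a JWT access token.
BASE = "http://localhost:9191"
TOKEN = "<jwt-access-token>"

resp = requests.get(
    f"{BASE}/api/core/system-events/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"limit": 50, "offset": 0, "event_type": "channel_start"},
    timeout=10,
)
resp.raise_for_status()
payload = resp.json()

# 'total' is the unpaginated count; 'events' holds the current page.
for event in payload["events"]:
    print(event["timestamp"], event["event_type_display"], event["channel_name"])
```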
28
core/migrations/0017_systemevent.py
Normal file

@@ -0,0 +1,28 @@
# Generated by Django 5.2.4 on 2025-11-20 20:47

from django.db import migrations, models


class Migration(migrations.Migration):

dependencies = [
('core', '0016_update_dvr_template_paths'),
]

operations = [
migrations.CreateModel(
name='SystemEvent',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('event_type', models.CharField(choices=[('channel_start', 'Channel Started'), ('channel_stop', 'Channel Stopped'), ('channel_buffering', 'Channel Buffering'), ('channel_failover', 'Channel Failover'), ('channel_reconnect', 'Channel Reconnected'), ('channel_error', 'Channel Error'), ('client_connect', 'Client Connected'), ('client_disconnect', 'Client Disconnected'), ('recording_start', 'Recording Started'), ('recording_end', 'Recording Ended'), ('stream_switch', 'Stream Switched'), ('m3u_refresh', 'M3U Refreshed'), ('m3u_download', 'M3U Downloaded'), ('epg_refresh', 'EPG Refreshed'), ('epg_download', 'EPG Downloaded')], db_index=True, max_length=50)),
('timestamp', models.DateTimeField(auto_now_add=True, db_index=True)),
('channel_id', models.UUIDField(blank=True, db_index=True, null=True)),
('channel_name', models.CharField(blank=True, max_length=255, null=True)),
('details', models.JSONField(blank=True, default=dict)),
],
options={
'ordering': ['-timestamp'],
'indexes': [models.Index(fields=['-timestamp'], name='core_system_timesta_c6c3d1_idx'), models.Index(fields=['event_type', '-timestamp'], name='core_system_event_t_4267d9_idx')],
},
),
]
18
core/migrations/0018_alter_systemevent_event_type.py
Normal file

@@ -0,0 +1,18 @@
# Generated by Django 5.2.4 on 2025-11-21 15:59

from django.db import migrations, models


class Migration(migrations.Migration):

dependencies = [
('core', '0017_systemevent'),
]

operations = [
migrations.AlterField(
model_name='systemevent',
name='event_type',
field=models.CharField(choices=[('channel_start', 'Channel Started'), ('channel_stop', 'Channel Stopped'), ('channel_buffering', 'Channel Buffering'), ('channel_failover', 'Channel Failover'), ('channel_reconnect', 'Channel Reconnected'), ('channel_error', 'Channel Error'), ('client_connect', 'Client Connected'), ('client_disconnect', 'Client Disconnected'), ('recording_start', 'Recording Started'), ('recording_end', 'Recording Ended'), ('stream_switch', 'Stream Switched'), ('m3u_refresh', 'M3U Refreshed'), ('m3u_download', 'M3U Downloaded'), ('epg_refresh', 'EPG Refreshed'), ('epg_download', 'EPG Downloaded'), ('login_success', 'Login Successful'), ('login_failed', 'Login Failed'), ('logout', 'User Logged Out'), ('m3u_blocked', 'M3U Download Blocked'), ('epg_blocked', 'EPG Download Blocked')], db_index=True, max_length=50),
),
]
@@ -375,3 +375,48 @@ class CoreSettings(models.Model):
return rules
except Exception:
return rules


class SystemEvent(models.Model):
"""
Tracks system events like channel start/stop, buffering, failover, client connections.
Maintains a rolling history based on max_system_events setting.
"""
EVENT_TYPES = [
('channel_start', 'Channel Started'),
('channel_stop', 'Channel Stopped'),
('channel_buffering', 'Channel Buffering'),
('channel_failover', 'Channel Failover'),
('channel_reconnect', 'Channel Reconnected'),
('channel_error', 'Channel Error'),
('client_connect', 'Client Connected'),
('client_disconnect', 'Client Disconnected'),
('recording_start', 'Recording Started'),
('recording_end', 'Recording Ended'),
('stream_switch', 'Stream Switched'),
('m3u_refresh', 'M3U Refreshed'),
('m3u_download', 'M3U Downloaded'),
('epg_refresh', 'EPG Refreshed'),
('epg_download', 'EPG Downloaded'),
('login_success', 'Login Successful'),
('login_failed', 'Login Failed'),
('logout', 'User Logged Out'),
('m3u_blocked', 'M3U Download Blocked'),
('epg_blocked', 'EPG Download Blocked'),
]

event_type = models.CharField(max_length=50, choices=EVENT_TYPES, db_index=True)
timestamp = models.DateTimeField(auto_now_add=True, db_index=True)
channel_id = models.UUIDField(null=True, blank=True, db_index=True)
channel_name = models.CharField(max_length=255, null=True, blank=True)
details = models.JSONField(default=dict, blank=True)

class Meta:
ordering = ['-timestamp']
indexes = [
models.Index(fields=['-timestamp']),
models.Index(fields=['event_type', '-timestamp']),
]

def __str__(self):
return f"{self.event_type} - {self.channel_name or 'N/A'} @ {self.timestamp}"
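The two composite indexes line up with how the events are read back: newest-first listing and newest-first filtering by type. A short ORM sketch of those reads, runnable from a Django shell — only the example values are invented:

```python
from core.models import SystemEvent

# Newest-first is the model's default ordering, so a plain slice gives a page.
latest = SystemEvent.objects.all()[:50]

# Filtering by type plus the default ordering matches the (event_type, -timestamp) index.
recent_failovers = SystemEvent.objects.filter(event_type='channel_failover')[:20]

for ev in recent_failovers:
    print(ev.timestamp, ev.channel_name, ev.details)
```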
@@ -377,12 +377,59 @@ def validate_flexible_url(value):
import re

# More flexible pattern for non-FQDN hostnames with paths
# Matches: http://hostname, http://hostname/, http://hostname:port/path/to/file.xml
non_fqdn_pattern = r'^https?://[a-zA-Z0-9]([a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])?(\:[0-9]+)?(/[^\s]*)?$'
# Matches: http://hostname, https://hostname/, http://hostname:port/path/to/file.xml, rtp://192.168.2.1, rtsp://192.168.178.1, udp://239.0.0.1:1234
# Also matches FQDNs for rtsp/rtp/udp protocols: rtsp://FQDN/path?query=value
# Also supports authentication: rtsp://user:pass@hostname/path
non_fqdn_pattern = r'^(rts?p|https?|udp)://([a-zA-Z0-9_\-\.]+:[^\s@]+@)?([a-zA-Z0-9]([a-zA-Z0-9\-\.]{0,61}[a-zA-Z0-9])?|[0-9.]+)?(\:[0-9]+)?(/[^\s]*)?$'
non_fqdn_match = re.match(non_fqdn_pattern, value)

if non_fqdn_match:
return # Accept non-FQDN hostnames
return # Accept non-FQDN hostnames and rtsp/rtp/udp URLs with optional authentication

# If it doesn't match our flexible patterns, raise the original error
raise ValidationError("Enter a valid URL.")


def log_system_event(event_type, channel_id=None, channel_name=None, **details):
"""
Log a system event and maintain the configured max history.

Args:
event_type: Type of event (e.g., 'channel_start', 'client_connect')
channel_id: Optional UUID of the channel
channel_name: Optional name of the channel
**details: Additional details to store in the event (stored as JSON)

Example:
log_system_event('channel_start', channel_id=uuid, channel_name='CNN',
stream_url='http://...', user='admin')
"""
from core.models import SystemEvent, CoreSettings

try:
# Create the event
SystemEvent.objects.create(
event_type=event_type,
channel_id=channel_id,
channel_name=channel_name,
details=details
)

# Get max events from settings (default 100)
try:
max_events_setting = CoreSettings.objects.filter(key='max-system-events').first()
max_events = int(max_events_setting.value) if max_events_setting else 100
except Exception:
max_events = 100

# Delete old events beyond the limit (keep it efficient with a single query)
total_count = SystemEvent.objects.count()
if total_count > max_events:
# Get the ID of the event at the cutoff point
cutoff_event = SystemEvent.objects.values_list('id', flat=True)[max_events]
# Delete all events with ID less than cutoff (older events)
SystemEvent.objects.filter(id__lt=cutoff_event).delete()

except Exception as e:
# Don't let event logging break the main application
logger.error(f"Failed to log system event {event_type}: {e}")
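The widened regex is the whole behavioural change in `validate_flexible_url`, so it helps to see which inputs flip from rejected to accepted. A standalone check using the exact pattern from the hunk — the sample URLs are made up:

```python
import re

NEW = r'^(rts?p|https?|udp)://([a-zA-Z0-9_\-\.]+:[^\s@]+@)?([a-zA-Z0-9]([a-zA-Z0-9\-\.]{0,61}[a-zA-Z0-9])?|[0-9.]+)?(\:[0-9]+)?(/[^\s]*)?$'

samples = [
    "http://tvheadend:9981/playlist.m3u",       # non-FQDN hostname, accepted before and after
    "rtsp://user:secret@camera.local/stream1",  # credentials + rtsp, newly accepted
    "udp://239.0.0.1:1234",                     # multicast udp, newly accepted
    "rtp://192.168.2.1",                        # rtp, newly accepted
    "ftp://example.com/file.m3u",               # unsupported scheme, still rejected
]
for url in samples:
    print(url, "->", bool(re.match(NEW, url)))
```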
@@ -50,13 +50,21 @@ app.conf.update(
)

# Add memory cleanup after task completion
#@task_postrun.connect # Use the imported signal
@task_postrun.connect # Use the imported signal
def cleanup_task_memory(**kwargs):
"""Clean up memory after each task completes"""
"""Clean up memory and database connections after each task completes"""
from django.db import connection

# Get task name from kwargs
task_name = kwargs.get('task').name if kwargs.get('task') else ''

# Only run cleanup for memory-intensive tasks
# Close database connection for this Celery worker process
try:
connection.close()
except Exception:
pass

# Only run memory cleanup for memory-intensive tasks
memory_intensive_tasks = [
'apps.m3u.tasks.refresh_single_m3u_account',
'apps.m3u.tasks.refresh_m3u_accounts',
@@ -52,6 +52,11 @@ EPG_BATCH_SIZE = 1000 # Number of records to process in a batch
EPG_MEMORY_LIMIT = 512 # Memory limit in MB before forcing garbage collection
EPG_ENABLE_MEMORY_MONITORING = True # Whether to monitor memory usage during processing

# XtreamCodes Rate Limiting Settings
# Delay between profile authentications when refreshing multiple profiles
# This prevents providers from temporarily banning users with many profiles
XC_PROFILE_REFRESH_DELAY = float(os.environ.get('XC_PROFILE_REFRESH_DELAY', '2.5')) # seconds between profile refreshes

# Database optimization settings
DATABASE_STATEMENT_TIMEOUT = 300 # Seconds before timing out long-running queries
DATABASE_CONN_MAX_AGE = (

@@ -135,6 +140,7 @@ else:
"PASSWORD": os.environ.get("POSTGRES_PASSWORD", "secret"),
"HOST": os.environ.get("POSTGRES_HOST", "localhost"),
"PORT": int(os.environ.get("POSTGRES_PORT", 5432)),
"CONN_MAX_AGE": DATABASE_CONN_MAX_AGE,
}
}
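The new setting only defines the pause; the refresh code that consumes it is not part of this hunk. A hedged sketch of how a per-profile delay of this kind is typically applied — the loop, the `authenticate` callable, and the function name are assumptions:

```python
import time
from django.conf import settings

def refresh_profiles(account, profiles, authenticate):
    """Authenticate each XC profile, pausing between calls to avoid provider rate limits."""
    delay = getattr(settings, 'XC_PROFILE_REFRESH_DELAY', 2.5)
    for index, profile in enumerate(profiles):
        authenticate(account, profile)
        # Sleep between profiles, but not after the last one.
        if index < len(profiles) - 1:
            time.sleep(delay)
```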
@@ -44,7 +44,7 @@ def network_access_allowed(request, settings_key):
cidrs = (
network_access[settings_key].split(",")
if settings_key in network_access
else ["0.0.0.0/0"]
else ["0.0.0.0/0", "::/0"]
)

network_allowed = False
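Appending `::/0` to the default CIDR list is what lets unconfigured installs accept IPv6 clients, since an IPv6 address never falls inside `0.0.0.0/0`. A small standard-library illustration — the client address is an example value:

```python
import ipaddress

client = ipaddress.ip_address("2001:db8::10")   # example IPv6 client address
defaults = ["0.0.0.0/0", "::/0"]

allowed = any(client in ipaddress.ip_network(cidr) for cidr in defaults)
print(allowed)  # True: ::/0 matches; 0.0.0.0/0 alone would leave IPv6 clients blocked
```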
|||
|
|
@ -14,6 +14,15 @@ services:
|
|||
- REDIS_HOST=localhost
|
||||
- CELERY_BROKER_URL=redis://localhost:6379/0
|
||||
- DISPATCHARR_LOG_LEVEL=info
|
||||
# Process Priority Configuration (Optional)
|
||||
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
|
||||
# Negative values require cap_add: SYS_NICE (uncomment below)
|
||||
#- UWSGI_NICE_LEVEL=-5 # uWSGI/FFmpeg/Streaming (default: 0, recommended: -5 for high priority)
|
||||
#- CELERY_NICE_LEVEL=5 # Celery/EPG/Background tasks (default: 5, low priority)
|
||||
#
|
||||
# Uncomment to enable high priority for streaming (required if UWSGI_NICE_LEVEL < 0)
|
||||
#cap_add:
|
||||
# - SYS_NICE
|
||||
# Optional for hardware acceleration
|
||||
#devices:
|
||||
# - /dev/dri:/dev/dri # For Intel/AMD GPU acceleration (VA-API)
|
||||
|
|
|
|||
|
|
@ -18,3 +18,12 @@ services:
|
|||
- REDIS_HOST=localhost
|
||||
- CELERY_BROKER_URL=redis://localhost:6379/0
|
||||
- DISPATCHARR_LOG_LEVEL=trace
|
||||
# Process Priority Configuration (Optional)
|
||||
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
|
||||
# Negative values require cap_add: SYS_NICE (uncomment below)
|
||||
#- UWSGI_NICE_LEVEL=-5 # uWSGI/FFmpeg/Streaming (default: 0, recommended: -5 for high priority)
|
||||
#- CELERY_NICE_LEVEL=5 # Celery/EPG/Background tasks (default: 5, low priority)
|
||||
#
|
||||
# Uncomment to enable high priority for streaming (required if UWSGI_NICE_LEVEL < 0)
|
||||
#cap_add:
|
||||
# - SYS_NICE
|
||||
|
|
|
|||
|
|
@ -17,6 +17,15 @@ services:
|
|||
- REDIS_HOST=localhost
|
||||
- CELERY_BROKER_URL=redis://localhost:6379/0
|
||||
- DISPATCHARR_LOG_LEVEL=debug
|
||||
# Process Priority Configuration (Optional)
|
||||
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
|
||||
# Negative values require cap_add: SYS_NICE (uncomment below)
|
||||
#- UWSGI_NICE_LEVEL=-5 # uWSGI/FFmpeg/Streaming (default: 0, recommended: -5 for high priority)
|
||||
#- CELERY_NICE_LEVEL=5 # Celery/EPG/Background tasks (default: 5, low priority)
|
||||
#
|
||||
# Uncomment to enable high priority for streaming (required if UWSGI_NICE_LEVEL < 0)
|
||||
#cap_add:
|
||||
# - SYS_NICE
|
||||
|
||||
pgadmin:
|
||||
image: dpage/pgadmin4
|
||||
|
|
|
|||
|
|
@ -17,6 +17,15 @@ services:
|
|||
- REDIS_HOST=redis
|
||||
- CELERY_BROKER_URL=redis://redis:6379/0
|
||||
- DISPATCHARR_LOG_LEVEL=info
|
||||
# Process Priority Configuration (Optional)
|
||||
# Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
|
||||
# Negative values require cap_add: SYS_NICE (uncomment below)
|
||||
#- UWSGI_NICE_LEVEL=-5 # uWSGI/FFmpeg/Streaming (default: 0, recommended: -5 for high priority)
|
||||
#- CELERY_NICE_LEVEL=5 # Celery/EPG/Background tasks (default: 5, low priority)
|
||||
#
|
||||
# Uncomment to enable high priority for streaming (required if UWSGI_NICE_LEVEL < 0)
|
||||
#cap_add:
|
||||
# - SYS_NICE
|
||||
# Optional for hardware acceleration
|
||||
#group_add:
|
||||
# - video
|
||||
|
|
|
|||
|
|
@ -40,6 +40,18 @@ export REDIS_DB=${REDIS_DB:-0}
|
|||
export DISPATCHARR_PORT=${DISPATCHARR_PORT:-9191}
|
||||
export LIBVA_DRIVERS_PATH='/usr/local/lib/x86_64-linux-gnu/dri'
|
||||
export LD_LIBRARY_PATH='/usr/local/lib'
|
||||
|
||||
# Process priority configuration
|
||||
# UWSGI_NICE_LEVEL: Absolute nice value for uWSGI/streaming (default: 0 = normal priority)
|
||||
# CELERY_NICE_LEVEL: Absolute nice value for Celery/background tasks (default: 5 = low priority)
|
||||
# Note: The script will automatically calculate the relative offset for Celery since it's spawned by uWSGI
|
||||
export UWSGI_NICE_LEVEL=${UWSGI_NICE_LEVEL:-0}
|
||||
CELERY_NICE_ABSOLUTE=${CELERY_NICE_LEVEL:-5}
|
||||
|
||||
# Calculate relative nice value for Celery (since nice is relative to parent process)
|
||||
# Celery is spawned by uWSGI, so we need to add the offset to reach the desired absolute value
|
||||
export CELERY_NICE_LEVEL=$((CELERY_NICE_ABSOLUTE - UWSGI_NICE_LEVEL))
|
||||
|
||||
# Set LIBVA_DRIVER_NAME if user has specified it
|
||||
if [ -v LIBVA_DRIVER_NAME ]; then
|
||||
export LIBVA_DRIVER_NAME
|
||||
|
|
@ -78,6 +90,7 @@ if [[ ! -f /etc/profile.d/dispatcharr.sh ]]; then
|
|||
DISPATCHARR_ENV DISPATCHARR_DEBUG DISPATCHARR_LOG_LEVEL
|
||||
REDIS_HOST REDIS_DB POSTGRES_DIR DISPATCHARR_PORT
|
||||
DISPATCHARR_VERSION DISPATCHARR_TIMESTAMP LIBVA_DRIVERS_PATH LIBVA_DRIVER_NAME LD_LIBRARY_PATH
|
||||
CELERY_NICE_LEVEL UWSGI_NICE_LEVEL
|
||||
)
|
||||
|
||||
# Process each variable for both profile.d and environment
|
||||
|
|
@ -96,7 +109,16 @@ fi
|
|||
|
||||
chmod +x /etc/profile.d/dispatcharr.sh
|
||||
|
||||
pip install django-filter
|
||||
# Ensure root's .bashrc sources the profile.d scripts for interactive non-login shells
|
||||
if ! grep -q "profile.d/dispatcharr.sh" /root/.bashrc 2>/dev/null; then
|
||||
cat >> /root/.bashrc << 'EOF'
|
||||
|
||||
# Source Dispatcharr environment variables
|
||||
if [ -f /etc/profile.d/dispatcharr.sh ]; then
|
||||
. /etc/profile.d/dispatcharr.sh
|
||||
fi
|
||||
EOF
|
||||
fi
|
||||
|
||||
# Run init scripts
|
||||
echo "Starting user setup..."
|
||||
|
|
@ -161,10 +183,12 @@ if [ "$DISPATCHARR_DEBUG" != "true" ]; then
|
|||
uwsgi_args+=" --disable-logging"
|
||||
fi
|
||||
|
||||
# Launch uwsgi -p passes environment variables to the process
|
||||
su -p - $POSTGRES_USER -c "cd /app && uwsgi $uwsgi_args &"
|
||||
uwsgi_pid=$(pgrep uwsgi | sort | head -n1)
|
||||
echo "✅ uwsgi started with PID $uwsgi_pid"
|
||||
# Launch uwsgi with configurable nice level (default: 0 for normal priority)
|
||||
# Users can override via UWSGI_NICE_LEVEL environment variable in docker-compose
|
||||
# Start with nice as root, then use setpriv to drop privileges to dispatch user
|
||||
# This preserves both the nice value and environment variables
|
||||
nice -n $UWSGI_NICE_LEVEL su -p - "$POSTGRES_USER" -c "cd /app && exec uwsgi $uwsgi_args" & uwsgi_pid=$!
|
||||
echo "✅ uwsgi started with PID $uwsgi_pid (nice $UWSGI_NICE_LEVEL)"
|
||||
pids+=("$uwsgi_pid")
|
||||
|
||||
# sed -i 's/protected-mode yes/protected-mode no/g' /etc/redis/redis.conf
|
||||
|
|
@ -209,7 +233,7 @@ echo "🔍 Running hardware acceleration check..."
|
|||
|
||||
# Wait for at least one process to exit and log the process that exited first
|
||||
if [ ${#pids[@]} -gt 0 ]; then
|
||||
echo "⏳ Waiting for processes to exit..."
|
||||
echo "⏳ Dispatcharr is running. Monitoring processes..."
|
||||
while kill -0 "${pids[@]}" 2>/dev/null; do
|
||||
sleep 1 # Wait for a second before checking again
|
||||
done
|
||||
|
|
|
|||
|
|
@ -1,25 +1,67 @@
|
|||
#!/bin/bash
|
||||
|
||||
mkdir -p /data/logos
|
||||
mkdir -p /data/recordings
|
||||
mkdir -p /data/uploads/m3us
|
||||
mkdir -p /data/uploads/epgs
|
||||
mkdir -p /data/m3us
|
||||
mkdir -p /data/epgs
|
||||
mkdir -p /data/plugins
|
||||
mkdir -p /app/logo_cache
|
||||
mkdir -p /app/media
|
||||
# Define directories that need to exist and be owned by PUID:PGID
|
||||
DATA_DIRS=(
|
||||
"/data/logos"
|
||||
"/data/recordings"
|
||||
"/data/uploads/m3us"
|
||||
"/data/uploads/epgs"
|
||||
"/data/m3us"
|
||||
"/data/epgs"
|
||||
"/data/plugins"
|
||||
"/data/models"
|
||||
)
|
||||
|
||||
APP_DIRS=(
|
||||
"/app/logo_cache"
|
||||
"/app/media"
|
||||
)
|
||||
|
||||
# Create all directories
|
||||
for dir in "${DATA_DIRS[@]}" "${APP_DIRS[@]}"; do
|
||||
mkdir -p "$dir"
|
||||
done
|
||||
|
||||
# Ensure /app itself is owned by PUID:PGID (needed for uwsgi socket creation)
|
||||
if [ "$(id -u)" = "0" ] && [ -d "/app" ]; then
|
||||
if [ "$(stat -c '%u:%g' /app)" != "$PUID:$PGID" ]; then
|
||||
echo "Fixing ownership for /app (non-recursive)"
|
||||
chown $PUID:$PGID /app
|
||||
fi
|
||||
fi
|
||||
|
||||
sed -i "s/NGINX_PORT/${DISPATCHARR_PORT}/g" /etc/nginx/sites-enabled/default
|
||||
|
||||
# NOTE: mac doesn't run as root, so only manage permissions
|
||||
# if this script is running as root
|
||||
if [ "$(id -u)" = "0" ]; then
|
||||
# Needs to own ALL of /data except db, we handle that below
|
||||
chown -R $PUID:$PGID /data
|
||||
chown -R $PUID:$PGID /app
|
||||
# Fix data directories (non-recursive to avoid touching user files)
|
||||
for dir in "${DATA_DIRS[@]}"; do
|
||||
if [ -d "$dir" ] && [ "$(stat -c '%u:%g' "$dir")" != "$PUID:$PGID" ]; then
|
||||
echo "Fixing ownership for $dir"
|
||||
chown $PUID:$PGID "$dir"
|
||||
fi
|
||||
done
|
||||
|
||||
# Fix app directories (recursive since they're managed by the app)
|
||||
for dir in "${APP_DIRS[@]}"; do
|
||||
if [ -d "$dir" ] && [ "$(stat -c '%u:%g' "$dir")" != "$PUID:$PGID" ]; then
|
||||
echo "Fixing ownership for $dir (recursive)"
|
||||
chown -R $PUID:$PGID "$dir"
|
||||
fi
|
||||
done
|
||||
|
||||
# Database permissions
|
||||
if [ -d /data/db ] && [ "$(stat -c '%u' /data/db)" != "$(id -u postgres)" ]; then
|
||||
echo "Fixing ownership for /data/db"
|
||||
chown -R postgres:postgres /data/db
|
||||
fi
|
||||
|
||||
# Fix /data directory ownership (non-recursive)
|
||||
if [ -d "/data" ] && [ "$(stat -c '%u:%g' /data)" != "$PUID:$PGID" ]; then
|
||||
echo "Fixing ownership for /data (non-recursive)"
|
||||
chown $PUID:$PGID /data
|
||||
fi
|
||||
|
||||
# Permissions
|
||||
chown -R postgres:postgres /data/db
|
||||
chmod +x /data
|
||||
fi
|
||||
fi
|
||||
|
|
@ -3,6 +3,7 @@ proxy_cache_path /app/logo_cache levels=1:2 keys_zone=logo_cache:10m
|
|||
|
||||
server {
|
||||
listen NGINX_PORT;
|
||||
listen [::]:NGINX_PORT;
|
||||
|
||||
proxy_connect_timeout 75;
|
||||
proxy_send_timeout 300;
|
||||
|
|
|
|||
|
|
@ -7,9 +7,10 @@ exec-before = python /app/scripts/wait_for_redis.py
|
|||
|
||||
; Start Redis first
|
||||
attach-daemon = redis-server
|
||||
; Then start other services
|
||||
attach-daemon = nice -n 5 celery -A dispatcharr worker --autoscale=6,1
|
||||
attach-daemon = nice -n 5 celery -A dispatcharr beat
|
||||
; Then start other services with configurable nice level (default: 5 for low priority)
|
||||
; Users can override via CELERY_NICE_LEVEL environment variable in docker-compose
|
||||
attach-daemon = nice -n $(CELERY_NICE_LEVEL) celery -A dispatcharr worker --autoscale=6,1
|
||||
attach-daemon = nice -n $(CELERY_NICE_LEVEL) celery -A dispatcharr beat
|
||||
attach-daemon = daphne -b 0.0.0.0 -p 8001 dispatcharr.asgi:application
|
||||
attach-daemon = cd /app/frontend && npm run dev
|
||||
|
||||
|
|
|
|||
|
|
@ -9,9 +9,10 @@ exec-pre = python /app/scripts/wait_for_redis.py
|
|||
|
||||
; Start Redis first
|
||||
attach-daemon = redis-server
|
||||
; Then start other services
|
||||
attach-daemon = nice -n 5 celery -A dispatcharr worker --autoscale=6,1
|
||||
attach-daemon = nice -n 5 celery -A dispatcharr beat
|
||||
; Then start other services with configurable nice level (default: 5 for low priority)
|
||||
; Users can override via CELERY_NICE_LEVEL environment variable in docker-compose
|
||||
attach-daemon = nice -n $(CELERY_NICE_LEVEL) celery -A dispatcharr worker --autoscale=6,1
|
||||
attach-daemon = nice -n $(CELERY_NICE_LEVEL) celery -A dispatcharr beat
|
||||
attach-daemon = daphne -b 0.0.0.0 -p 8001 dispatcharr.asgi:application
|
||||
attach-daemon = cd /app/frontend && npm run dev
|
||||
|
||||
|
|
|
|||
|
|
@ -9,9 +9,10 @@ exec-pre = python /app/scripts/wait_for_redis.py
|
|||
|
||||
; Start Redis first
|
||||
attach-daemon = redis-server
|
||||
; Then start other services
|
||||
attach-daemon = nice -n 5 celery -A dispatcharr worker --autoscale=6,1
|
||||
attach-daemon = nice -n 5 celery -A dispatcharr beat
|
||||
; Then start other services with configurable nice level (default: 5 for low priority)
|
||||
; Users can override via CELERY_NICE_LEVEL environment variable in docker-compose
|
||||
attach-daemon = nice -n $(CELERY_NICE_LEVEL) celery -A dispatcharr worker --autoscale=6,1
|
||||
attach-daemon = nice -n $(CELERY_NICE_LEVEL) celery -A dispatcharr beat
|
||||
attach-daemon = daphne -b 0.0.0.0 -p 8001 dispatcharr.asgi:application
|
||||
|
||||
# Core settings
|
||||
|
|
|
|||
|
|
@ -113,15 +113,21 @@ const App = () => {
|
|||
height: 0,
|
||||
}}
|
||||
navbar={{
|
||||
width: open ? drawerWidth : miniDrawerWidth,
|
||||
width: isAuthenticated
|
||||
? open
|
||||
? drawerWidth
|
||||
: miniDrawerWidth
|
||||
: 0,
|
||||
}}
|
||||
>
|
||||
<Sidebar
|
||||
drawerWidth
|
||||
miniDrawerWidth
|
||||
collapsed={!open}
|
||||
toggleDrawer={toggleDrawer}
|
||||
/>
|
||||
{isAuthenticated && (
|
||||
<Sidebar
|
||||
drawerWidth={drawerWidth}
|
||||
miniDrawerWidth={miniDrawerWidth}
|
||||
collapsed={!open}
|
||||
toggleDrawer={toggleDrawer}
|
||||
/>
|
||||
)}
|
||||
|
||||
<AppShell.Main>
|
||||
<Box
|
||||
|
|
|
|||
|
|
@ -615,14 +615,22 @@ export const WebsocketProvider = ({ children }) => {
|
|||
break;
|
||||
|
||||
case 'epg_refresh':
|
||||
// Update the store with progress information
|
||||
updateEPGProgress(parsedEvent.data);
|
||||
|
||||
// If we have source_id/account info, update the EPG source status
|
||||
if (parsedEvent.data.source_id || parsedEvent.data.account) {
|
||||
// If we have source/account info, check if EPG exists before processing
|
||||
if (parsedEvent.data.source || parsedEvent.data.account) {
|
||||
const sourceId =
|
||||
parsedEvent.data.source_id || parsedEvent.data.account;
|
||||
parsedEvent.data.source || parsedEvent.data.account;
|
||||
const epg = epgs[sourceId];
|
||||
|
||||
// Only update progress if the EPG still exists in the store
|
||||
// This prevents crashes when receiving updates for deleted EPGs
|
||||
if (epg) {
|
||||
// Update the store with progress information
|
||||
updateEPGProgress(parsedEvent.data);
|
||||
} else {
|
||||
// EPG was deleted, ignore this update
|
||||
console.debug(`Ignoring EPG refresh update for deleted EPG ${sourceId}`);
|
||||
break;
|
||||
}
|
||||
|
||||
if (epg) {
|
||||
// Check for any indication of an error (either via status or error field)
|
||||
|
|
@ -688,6 +696,16 @@ export const WebsocketProvider = ({ children }) => {
|
|||
}
|
||||
break;
|
||||
|
||||
case 'epg_data_created':
|
||||
// A new EPG data entry was created (e.g., for a dummy EPG)
|
||||
// Fetch EPG data so the channel form can immediately assign it
|
||||
try {
|
||||
await fetchEPGData();
|
||||
} catch (e) {
|
||||
console.warn('Failed to refresh EPG data after creation:', e);
|
||||
}
|
||||
break;
|
||||
|
||||
case 'stream_rehash':
|
||||
// Handle stream rehash progress updates
|
||||
if (parsedEvent.data.action === 'starting') {
|
||||
|
|
|
|||
|
|
@ -170,7 +170,7 @@ export default class API {
|
|||
|
||||
static async logout() {
|
||||
return await request(`${host}/api/accounts/auth/logout/`, {
|
||||
auth: false,
|
||||
auth: true, // Send JWT token so backend can identify the user
|
||||
method: 'POST',
|
||||
});
|
||||
}
|
||||
|
|
@ -462,7 +462,16 @@ export default class API {
|
|||
}
|
||||
);
|
||||
|
||||
// Don't automatically update the store here - let the caller handle it
|
||||
// Show success notification
|
||||
if (response.message) {
|
||||
notifications.show({
|
||||
title: 'Channels Updated',
|
||||
message: response.message,
|
||||
color: 'green',
|
||||
autoClose: 4000,
|
||||
});
|
||||
}
|
||||
|
||||
return response;
|
||||
} catch (e) {
|
||||
errorNotification('Failed to update channels', e);
|
||||
|
|
@ -1044,8 +1053,20 @@ export default class API {
|
|||
}
|
||||
|
||||
static async updateEPG(values, isToggle = false) {
|
||||
// Validate that values is an object
|
||||
if (!values || typeof values !== 'object') {
|
||||
console.error('updateEPG called with invalid values:', values);
|
||||
return;
|
||||
}
|
||||
|
||||
const { id, ...payload } = values;
|
||||
|
||||
// Validate that we have an ID and payload is an object
|
||||
if (!id || typeof payload !== 'object') {
|
||||
console.error('updateEPG: invalid id or payload', { id, payload });
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
// If this is just toggling the active state, make a simpler request
|
||||
if (
|
||||
|
|
@ -1118,6 +1139,21 @@ export default class API {
|
|||
}
|
||||
}
|
||||
|
||||
static async getTimezones() {
|
||||
try {
|
||||
const response = await request(`${host}/api/core/timezones/`);
|
||||
return response;
|
||||
} catch (e) {
|
||||
errorNotification('Failed to retrieve timezones', e);
|
||||
// Return fallback data instead of throwing
|
||||
return {
|
||||
timezones: ['UTC', 'US/Eastern', 'US/Central', 'US/Mountain', 'US/Pacific'],
|
||||
grouped: {},
|
||||
count: 5
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
static async getStreamProfiles() {
|
||||
try {
|
||||
const response = await request(`${host}/api/core/streamprofiles/`);
|
||||
|
|
@ -1802,6 +1838,77 @@ export default class API {
|
|||
}
|
||||
}
|
||||
|
||||
// VOD Logo Methods
|
||||
static async getVODLogos(params = {}) {
|
||||
try {
|
||||
// Transform usage filter to match backend expectations
|
||||
const apiParams = { ...params };
|
||||
if (apiParams.usage === 'used') {
|
||||
apiParams.used = 'true';
|
||||
delete apiParams.usage;
|
||||
} else if (apiParams.usage === 'unused') {
|
||||
apiParams.used = 'false';
|
||||
delete apiParams.usage;
|
||||
} else if (apiParams.usage === 'movies') {
|
||||
apiParams.used = 'movies';
|
||||
delete apiParams.usage;
|
||||
} else if (apiParams.usage === 'series') {
|
||||
apiParams.used = 'series';
|
||||
delete apiParams.usage;
|
||||
}
|
||||
|
||||
const queryParams = new URLSearchParams(apiParams);
|
||||
const response = await request(
|
||||
`${host}/api/vod/vodlogos/?${queryParams.toString()}`
|
||||
);
|
||||
|
||||
return response;
|
||||
} catch (e) {
|
||||
errorNotification('Failed to retrieve VOD logos', e);
|
||||
throw e;
|
||||
}
|
||||
}
|
||||
|
||||
static async deleteVODLogo(id) {
|
||||
try {
|
||||
await request(`${host}/api/vod/vodlogos/${id}/`, {
|
||||
method: 'DELETE',
|
||||
});
|
||||
|
||||
return true;
|
||||
} catch (e) {
|
||||
errorNotification('Failed to delete VOD logo', e);
|
||||
throw e;
|
||||
}
|
||||
}
|
||||
|
||||
static async deleteVODLogos(ids) {
|
||||
try {
|
||||
await request(`${host}/api/vod/vodlogos/bulk-delete/`, {
|
||||
method: 'DELETE',
|
||||
body: { logo_ids: ids },
|
||||
});
|
||||
|
||||
return true;
|
||||
} catch (e) {
|
||||
errorNotification('Failed to delete VOD logos', e);
|
||||
throw e;
|
||||
}
|
||||
}
|
||||
|
||||
static async cleanupUnusedVODLogos() {
|
||||
try {
|
||||
const response = await request(`${host}/api/vod/vodlogos/cleanup/`, {
|
||||
method: 'POST',
|
||||
});
|
||||
|
||||
return response;
|
||||
} catch (e) {
|
||||
errorNotification('Failed to cleanup unused VOD logos', e);
|
||||
throw e;
|
||||
}
|
||||
}
|
||||
|
||||
static async getChannelProfiles() {
|
||||
try {
|
||||
const response = await request(`${host}/api/channels/profiles/`);
|
||||
|
|
@ -2146,9 +2253,15 @@ export default class API {
|
|||
|
||||
// If successful, requery channels to update UI
|
||||
if (response.success) {
|
||||
// Build message based on whether EPG sources need refreshing
|
||||
let message = `Updated ${response.channels_updated} channel${response.channels_updated !== 1 ? 's' : ''}`;
|
||||
if (response.programs_refreshed > 0) {
|
||||
message += `, refreshing ${response.programs_refreshed} EPG source${response.programs_refreshed !== 1 ? 's' : ''}`;
|
||||
}
|
||||
|
||||
notifications.show({
|
||||
title: 'EPG Association',
|
||||
message: `Updated ${response.channels_updated} channels, refreshing ${response.programs_refreshed} EPG sources.`,
|
||||
message: message,
|
||||
color: 'blue',
|
||||
});
|
||||
|
||||
|
|
@ -2797,4 +2910,21 @@ export default class API {
|
|||
throw e;
|
||||
}
|
||||
}
|
||||
|
||||
static async getSystemEvents(limit = 100, offset = 0, eventType = null) {
|
||||
try {
|
||||
const params = new URLSearchParams();
|
||||
params.append('limit', limit);
|
||||
params.append('offset', offset);
|
||||
if (eventType) {
|
||||
params.append('event_type', eventType);
|
||||
}
|
||||
const response = await request(
|
||||
`${host}/api/core/system-events/?${params.toString()}`
|
||||
);
|
||||
return response;
|
||||
} catch (e) {
|
||||
errorNotification('Failed to retrieve system events', e);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -17,7 +17,9 @@ import {
|
|||
Table,
|
||||
Divider,
|
||||
} from '@mantine/core';
|
||||
import { Play } from 'lucide-react';
|
||||
import { Play, Copy } from 'lucide-react';
|
||||
import { notifications } from '@mantine/notifications';
|
||||
import { copyToClipboard } from '../utils';
|
||||
import useVODStore from '../store/useVODStore';
|
||||
import useVideoStore from '../store/useVideoStore';
|
||||
import useSettingsStore from '../store/settings';
|
||||
|
|
@ -332,6 +334,39 @@ const SeriesModal = ({ series, opened, onClose }) => {
|
|||
showVideo(streamUrl, 'vod', episode);
|
||||
};
|
||||
|
||||
const getEpisodeStreamUrl = (episode) => {
|
||||
let streamUrl = `/proxy/vod/episode/${episode.uuid}`;
|
||||
|
||||
// Add selected provider as query parameter if available
|
||||
if (selectedProvider) {
|
||||
// Use stream_id for most specific selection, fallback to account_id
|
||||
if (selectedProvider.stream_id) {
|
||||
streamUrl += `?stream_id=${encodeURIComponent(selectedProvider.stream_id)}`;
|
||||
} else {
|
||||
streamUrl += `?m3u_account_id=${selectedProvider.m3u_account.id}`;
|
||||
}
|
||||
}
|
||||
|
||||
if (env_mode === 'dev') {
|
||||
streamUrl = `${window.location.protocol}//${window.location.hostname}:5656${streamUrl}`;
|
||||
} else {
|
||||
streamUrl = `${window.location.origin}${streamUrl}`;
|
||||
}
|
||||
return streamUrl;
|
||||
};
|
||||
|
||||
const handleCopyEpisodeLink = async (episode) => {
|
||||
const streamUrl = getEpisodeStreamUrl(episode);
|
||||
const success = await copyToClipboard(streamUrl);
|
||||
notifications.show({
|
||||
title: success ? 'Link Copied!' : 'Copy Failed',
|
||||
message: success
|
||||
? 'Episode link copied to clipboard'
|
||||
: 'Failed to copy link to clipboard',
|
||||
color: success ? 'green' : 'red',
|
||||
});
|
||||
};
|
||||
|
||||
const handleEpisodeRowClick = (episode) => {
|
||||
setExpandedEpisode(expandedEpisode === episode.id ? null : episode.id);
|
||||
};
|
||||
|
|
@ -690,20 +725,34 @@ const SeriesModal = ({ series, opened, onClose }) => {
|
|||
</Text>
|
||||
</Table.Td>
|
||||
<Table.Td>
|
||||
<ActionIcon
|
||||
variant="filled"
|
||||
color="blue"
|
||||
size="sm"
|
||||
disabled={
|
||||
providers.length > 0 && !selectedProvider
|
||||
}
|
||||
onClick={(e) => {
|
||||
e.stopPropagation();
|
||||
handlePlayEpisode(episode);
|
||||
}}
|
||||
>
|
||||
<Play size={12} />
|
||||
</ActionIcon>
|
||||
<Group spacing="xs">
|
||||
<ActionIcon
|
||||
variant="filled"
|
||||
color="blue"
|
||||
size="sm"
|
||||
disabled={
|
||||
providers.length > 0 &&
|
||||
!selectedProvider
|
||||
}
|
||||
onClick={(e) => {
|
||||
e.stopPropagation();
|
||||
handlePlayEpisode(episode);
|
||||
}}
|
||||
>
|
||||
<Play size={12} />
|
||||
</ActionIcon>
|
||||
<ActionIcon
|
||||
variant="outline"
|
||||
color="gray"
|
||||
size="sm"
|
||||
onClick={(e) => {
|
||||
e.stopPropagation();
|
||||
handleCopyEpisodeLink(episode);
|
||||
}}
|
||||
>
|
||||
<Copy size={12} />
|
||||
</ActionIcon>
|
||||
</Group>
|
||||
</Table.Td>
|
||||
</Table.Tr>
|
||||
{expandedEpisode === episode.id && (
|
||||
|
|
@ -965,7 +1014,8 @@ const SeriesModal = ({ series, opened, onClose }) => {
|
|||
src={trailerUrl}
|
||||
title="YouTube Trailer"
|
||||
frameBorder="0"
|
||||
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
|
||||
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||
referrerPolicy="strict-origin-when-cross-origin"
|
||||
allowFullScreen
|
||||
style={{
|
||||
position: 'absolute',
|
||||
|
|
|
|||
|
|
@ -211,8 +211,8 @@ const Sidebar = ({ collapsed, toggleDrawer, drawerWidth, miniDrawerWidth }) => {
|
|||
}
|
||||
};
|
||||
|
||||
const onLogout = () => {
|
||||
logout();
|
||||
const onLogout = async () => {
|
||||
await logout();
|
||||
window.location.reload();
|
||||
};
|
||||
|
||||
|
|
|
|||
333
frontend/src/components/SystemEvents.jsx
Normal file
|
|
@ -0,0 +1,333 @@
|
|||
import React, { useState, useEffect, useCallback } from 'react';
|
||||
import {
|
||||
ActionIcon,
|
||||
Box,
|
||||
Button,
|
||||
Card,
|
||||
Group,
|
||||
NumberInput,
|
||||
Pagination,
|
||||
Select,
|
||||
Stack,
|
||||
Text,
|
||||
Title,
|
||||
} from '@mantine/core';
|
||||
import { useElementSize } from '@mantine/hooks';
|
||||
import {
|
||||
ChevronDown,
|
||||
CirclePlay,
|
||||
Download,
|
||||
Gauge,
|
||||
HardDriveDownload,
|
||||
List,
|
||||
LogIn,
|
||||
LogOut,
|
||||
RefreshCw,
|
||||
Shield,
|
||||
ShieldAlert,
|
||||
SquareX,
|
||||
Timer,
|
||||
Users,
|
||||
Video,
|
||||
XCircle,
|
||||
} from 'lucide-react';
|
||||
import dayjs from 'dayjs';
|
||||
import API from '../api';
|
||||
import useLocalStorage from '../hooks/useLocalStorage';
|
||||
|
||||
const SystemEvents = () => {
|
||||
const [events, setEvents] = useState([]);
|
||||
const [totalEvents, setTotalEvents] = useState(0);
|
||||
const [isExpanded, setIsExpanded] = useState(false);
|
||||
const { ref: cardRef, width: cardWidth } = useElementSize();
|
||||
const isNarrow = cardWidth < 650;
|
||||
const [isLoading, setIsLoading] = useState(false);
|
||||
const [dateFormatSetting] = useLocalStorage('date-format', 'mdy');
|
||||
const dateFormat = dateFormatSetting === 'mdy' ? 'MM/DD' : 'DD/MM';
|
||||
const [eventsRefreshInterval, setEventsRefreshInterval] = useLocalStorage(
|
||||
'events-refresh-interval',
|
||||
0
|
||||
);
|
||||
const [eventsLimit, setEventsLimit] = useLocalStorage('events-limit', 100);
|
||||
const [currentPage, setCurrentPage] = useState(1);
|
||||
|
||||
// Calculate offset based on current page and limit
|
||||
const offset = (currentPage - 1) * eventsLimit;
|
||||
const totalPages = Math.ceil(totalEvents / eventsLimit);
|
||||
|
||||
const fetchEvents = useCallback(async () => {
|
||||
try {
|
||||
setIsLoading(true);
|
||||
const response = await API.getSystemEvents(eventsLimit, offset);
|
||||
if (response && response.events) {
|
||||
setEvents(response.events);
|
||||
setTotalEvents(response.total || 0);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Error fetching system events:', error);
|
||||
} finally {
|
||||
setIsLoading(false);
|
||||
}
|
||||
}, [eventsLimit, offset]);
|
||||
|
||||
// Fetch events on mount and when eventsRefreshInterval changes
|
||||
useEffect(() => {
|
||||
fetchEvents();
|
||||
|
||||
// Set up polling if interval is set and events section is expanded
|
||||
if (eventsRefreshInterval > 0 && isExpanded) {
|
||||
const interval = setInterval(fetchEvents, eventsRefreshInterval * 1000);
|
||||
return () => clearInterval(interval);
|
||||
}
|
||||
}, [fetchEvents, eventsRefreshInterval, isExpanded]);
|
||||
|
||||
// Reset to first page when limit changes
|
||||
useEffect(() => {
|
||||
setCurrentPage(1);
|
||||
}, [eventsLimit]);
|
||||
|
||||
const getEventIcon = (eventType) => {
|
||||
switch (eventType) {
|
||||
case 'channel_start':
|
||||
return <CirclePlay size={16} />;
|
||||
case 'channel_stop':
|
||||
return <SquareX size={16} />;
|
||||
case 'channel_reconnect':
|
||||
return <RefreshCw size={16} />;
|
||||
case 'channel_buffering':
|
||||
return <Timer size={16} />;
|
||||
case 'channel_failover':
|
||||
return <HardDriveDownload size={16} />;
|
||||
case 'client_connect':
|
||||
return <Users size={16} />;
|
||||
case 'client_disconnect':
|
||||
return <Users size={16} />;
|
||||
case 'recording_start':
|
||||
return <Video size={16} />;
|
||||
case 'recording_end':
|
||||
return <Video size={16} />;
|
||||
case 'stream_switch':
|
||||
return <HardDriveDownload size={16} />;
|
||||
case 'm3u_refresh':
|
||||
return <RefreshCw size={16} />;
|
||||
case 'm3u_download':
|
||||
return <Download size={16} />;
|
||||
case 'epg_refresh':
|
||||
return <RefreshCw size={16} />;
|
||||
case 'epg_download':
|
||||
return <Download size={16} />;
|
||||
case 'login_success':
|
||||
return <LogIn size={16} />;
|
||||
case 'login_failed':
|
||||
return <ShieldAlert size={16} />;
|
||||
case 'logout':
|
||||
return <LogOut size={16} />;
|
||||
case 'm3u_blocked':
|
||||
return <XCircle size={16} />;
|
||||
case 'epg_blocked':
|
||||
return <XCircle size={16} />;
|
||||
default:
|
||||
return <Gauge size={16} />;
|
||||
}
|
||||
};
|
||||
|
||||
const getEventColor = (eventType) => {
|
||||
switch (eventType) {
|
||||
case 'channel_start':
|
||||
case 'client_connect':
|
||||
case 'recording_start':
|
||||
case 'login_success':
|
||||
return 'green';
|
||||
case 'channel_reconnect':
|
||||
return 'yellow';
|
||||
case 'channel_stop':
|
||||
case 'client_disconnect':
|
||||
case 'recording_end':
|
||||
case 'logout':
|
||||
return 'gray';
|
||||
case 'channel_buffering':
|
||||
return 'yellow';
|
||||
case 'channel_failover':
|
||||
case 'channel_error':
|
||||
return 'orange';
|
||||
case 'stream_switch':
|
||||
return 'blue';
|
||||
case 'm3u_refresh':
|
||||
case 'epg_refresh':
|
||||
return 'cyan';
|
||||
case 'm3u_download':
|
||||
case 'epg_download':
|
||||
return 'teal';
|
||||
case 'login_failed':
|
||||
case 'm3u_blocked':
|
||||
case 'epg_blocked':
|
||||
return 'red';
|
||||
default:
|
||||
return 'gray';
|
||||
}
|
||||
};
|
||||
|
||||
return (
|
||||
<Card
|
||||
ref={cardRef}
|
||||
shadow="sm"
|
||||
padding="sm"
|
||||
radius="md"
|
||||
withBorder
|
||||
style={{
|
||||
color: '#fff',
|
||||
backgroundColor: '#27272A',
|
||||
width: '100%',
|
||||
maxWidth: isExpanded ? '100%' : '800px',
|
||||
marginLeft: 'auto',
|
||||
marginRight: 'auto',
|
||||
transition: 'max-width 0.3s ease',
|
||||
}}
|
||||
>
|
||||
<Group justify="space-between" mb={isExpanded ? 'sm' : 0}>
|
||||
<Group gap="xs">
|
||||
<Gauge size={20} />
|
||||
<Title order={4}>System Events</Title>
|
||||
</Group>
|
||||
<Group gap="xs">
|
||||
{(isExpanded || !isNarrow) && (
|
||||
<>
|
||||
<NumberInput
|
||||
size="xs"
|
||||
label="Events Per Page"
|
||||
value={eventsLimit}
|
||||
onChange={(value) => setEventsLimit(value || 10)}
|
||||
min={10}
|
||||
max={1000}
|
||||
step={10}
|
||||
style={{ width: 130 }}
|
||||
/>
|
||||
<Select
|
||||
size="xs"
|
||||
label="Auto Refresh"
|
||||
value={eventsRefreshInterval.toString()}
|
||||
onChange={(value) => setEventsRefreshInterval(parseInt(value))}
|
||||
data={[
|
||||
{ value: '0', label: 'Manual' },
|
||||
{ value: '5', label: '5s' },
|
||||
{ value: '10', label: '10s' },
|
||||
{ value: '30', label: '30s' },
|
||||
{ value: '60', label: '1m' },
|
||||
]}
|
||||
style={{ width: 120 }}
|
||||
/>
|
||||
<Button
|
||||
size="xs"
|
||||
variant="subtle"
|
||||
onClick={fetchEvents}
|
||||
loading={isLoading}
|
||||
style={{ marginTop: 'auto' }}
|
||||
>
|
||||
Refresh
|
||||
</Button>
|
||||
</>
|
||||
)}
|
||||
<ActionIcon
|
||||
variant="subtle"
|
||||
onClick={() => setIsExpanded(!isExpanded)}
|
||||
>
|
||||
<ChevronDown
|
||||
size={18}
|
||||
style={{
|
||||
transform: isExpanded ? 'rotate(180deg)' : 'rotate(0deg)',
|
||||
transition: 'transform 0.2s',
|
||||
}}
|
||||
/>
|
||||
</ActionIcon>
|
||||
</Group>
|
||||
</Group>
|
||||
|
||||
{isExpanded && (
|
||||
<>
|
||||
{totalEvents > eventsLimit && (
|
||||
<Group justify="space-between" align="center" mt="sm" mb="xs">
|
||||
<Text size="xs" c="dimmed">
|
||||
Showing {offset + 1}-
|
||||
{Math.min(offset + eventsLimit, totalEvents)} of {totalEvents}
|
||||
</Text>
|
||||
<Pagination
|
||||
total={totalPages}
|
||||
value={currentPage}
|
||||
onChange={setCurrentPage}
|
||||
size="sm"
|
||||
/>
|
||||
</Group>
|
||||
)}
|
||||
<Stack
|
||||
gap="xs"
|
||||
mt="sm"
|
||||
style={{
|
||||
maxHeight: '60vh',
|
||||
overflowY: 'auto',
|
||||
}}
|
||||
>
|
||||
{events.length === 0 ? (
|
||||
<Text size="sm" c="dimmed" ta="center" py="xl">
|
||||
No events recorded yet
|
||||
</Text>
|
||||
) : (
|
||||
events.map((event) => (
|
||||
<Box
|
||||
key={event.id}
|
||||
p="xs"
|
||||
style={{
|
||||
backgroundColor: '#1A1B1E',
|
||||
borderRadius: '4px',
|
||||
borderLeft: `3px solid var(--mantine-color-${getEventColor(event.event_type)}-6)`,
|
||||
}}
|
||||
>
|
||||
<Group justify="space-between" wrap="nowrap">
|
||||
<Group gap="xs" style={{ flex: 1, minWidth: 0 }}>
|
||||
<Box c={`${getEventColor(event.event_type)}.6`}>
|
||||
{getEventIcon(event.event_type)}
|
||||
</Box>
|
||||
<Stack gap={2} style={{ flex: 1, minWidth: 0 }}>
|
||||
<Group gap="xs" wrap="nowrap">
|
||||
<Text size="sm" fw={500}>
|
||||
{event.event_type_display || event.event_type}
|
||||
</Text>
|
||||
{event.channel_name && (
|
||||
<Text
|
||||
size="sm"
|
||||
c="dimmed"
|
||||
truncate
|
||||
style={{ maxWidth: '300px' }}
|
||||
>
|
||||
{event.channel_name}
|
||||
</Text>
|
||||
)}
|
||||
</Group>
|
||||
{event.details &&
|
||||
Object.keys(event.details).length > 0 && (
|
||||
<Text size="xs" c="dimmed">
|
||||
{Object.entries(event.details)
|
||||
.filter(
|
||||
([key]) =>
|
||||
!['stream_url', 'new_url'].includes(key)
|
||||
)
|
||||
.map(([key, value]) => `${key}: ${value}`)
|
||||
.join(', ')}
|
||||
</Text>
|
||||
)}
|
||||
</Stack>
|
||||
</Group>
|
||||
<Text size="xs" c="dimmed" style={{ whiteSpace: 'nowrap' }}>
|
||||
{dayjs(event.timestamp).format(`${dateFormat} HH:mm:ss`)}
|
||||
</Text>
|
||||
</Group>
|
||||
</Box>
|
||||
))
|
||||
)}
|
||||
</Stack>
|
||||
</>
|
||||
)}
|
||||
</Card>
|
||||
);
|
||||
};
|
||||
|
||||
export default SystemEvents;
|
||||
|
|
@@ -13,11 +13,12 @@ import {
Stack,
Modal,
} from '@mantine/core';
import { Play } from 'lucide-react';
import { Play, Copy } from 'lucide-react';
import { notifications } from '@mantine/notifications';
import { copyToClipboard } from '../utils';
import useVODStore from '../store/useVODStore';
import useVideoStore from '../store/useVideoStore';
import useSettingsStore from '../store/settings';
import API from '../api';

const imdbUrl = (imdb_id) =>
imdb_id ? `https://www.imdb.com/title/${imdb_id}` : '';
@@ -32,17 +33,9 @@ const formatDuration = (seconds) => {
};

const formatStreamLabel = (relation) => {
if (relation?.provider_type === 'library' || relation?.type === 'library') {
const libraryName =
relation?.library?.name ||
relation?.m3u_account?.name ||
'Library';
return `${libraryName} - Local Library`;
}

// Create a label for the stream that includes provider name and stream-specific info
const provider = relation?.m3u_account?.name || 'Provider';
const streamId = relation?.stream_id;
const provider = relation.m3u_account.name;
const streamId = relation.stream_id;

// Try to extract quality info - prioritizing the new quality_info field from backend
let qualityInfo = '';
@@ -241,62 +234,9 @@ const VODModal = ({ vod, opened, onClose }) => {
}
}, [opened]);

const resolveLibraryMediaItemId = () => {
if (selectedProvider?.library_item_id) return selectedProvider.library_item_id;
if (selectedProvider?.custom_properties?.media_item_id) {
return selectedProvider.custom_properties.media_item_id;
}
const fallback = detailedVOD?.library_sources || vod?.library_sources || [];
if (fallback.length > 0) {
return fallback[0].media_item_id;
}
return null;
};

const handlePlayVOD = async () => {
const getStreamUrl = () => {
const vodToPlay = detailedVOD || vod;
if (!vodToPlay) return;

if (
selectedProvider?.provider_type === 'library' ||
selectedProvider?.type === 'library' ||
(vodToPlay.library_sources && vodToPlay.library_sources.length > 0)
) {
const mediaItemId = resolveLibraryMediaItemId();
if (!mediaItemId) {
console.warn('No library media item available for playback');
return;
}

try {
const streamInfo = await API.streamMediaItem(mediaItemId);
if (!streamInfo?.url) {
console.error('Library stream did not return a URL');
return;
}
const playbackMeta = {
...vodToPlay,
provider_type: 'library',
library_media_item_id: mediaItemId,
mediaItemId,
fileId: streamInfo?.file_id,
resumePositionMs: 0,
resumeHandledByServer: Boolean(streamInfo?.start_offset_ms),
startOffsetMs: streamInfo?.start_offset_ms ?? 0,
requiresTranscode: Boolean(streamInfo?.requires_transcode),
transcodeStatus: streamInfo?.transcode_status ?? null,
durationMs:
streamInfo?.duration_ms ??
vodToPlay?.runtime_ms ??
vodToPlay?.files?.[0]?.duration_ms ??
null,
};
showVideo(streamInfo.url, 'library', playbackMeta);
} catch (error) {
console.error('Failed to start library stream:', error);
}
return;
}
if (!vodToPlay) return null;

let streamUrl = `/proxy/vod/movie/${vod.uuid}`;

@@ -305,7 +245,7 @@ const VODModal = ({ vod, opened, onClose }) => {
// Use stream_id for most specific selection, fallback to account_id
if (selectedProvider.stream_id) {
streamUrl += `?stream_id=${encodeURIComponent(selectedProvider.stream_id)}`;
} else if (selectedProvider?.m3u_account?.id) {
} else {
streamUrl += `?m3u_account_id=${selectedProvider.m3u_account.id}`;
}
}
@@ -315,9 +255,29 @@ const VODModal = ({ vod, opened, onClose }) => {
} else {
streamUrl = `${window.location.origin}${streamUrl}`;
}
return streamUrl;
};

const handlePlayVOD = () => {
const streamUrl = getStreamUrl();
if (!streamUrl) return;
const vodToPlay = detailedVOD || vod;
showVideo(streamUrl, 'vod', vodToPlay);
};

const handleCopyLink = async () => {
const streamUrl = getStreamUrl();
if (!streamUrl) return;
const success = await copyToClipboard(streamUrl);
notifications.show({
title: success ? 'Link Copied!' : 'Copy Failed',
message: success
? 'Stream link copied to clipboard'
: 'Failed to copy link to clipboard',
color: success ? 'green' : 'red',
});
};

// Helper to get embeddable YouTube URL
const getEmbedUrl = (url) => {
if (!url) return '';
@@ -331,12 +291,6 @@ const VODModal = ({ vod, opened, onClose }) => {

// Use detailed data if available, otherwise use basic vod data
const displayVOD = detailedVOD || vod;
const posterSrc =
displayVOD?.movie_image ||
displayVOD?.logo?.cache_url ||
displayVOD?.logo?.url ||
displayVOD?.custom_properties?.poster_url ||
null;

return (
<>
@@ -398,10 +352,10 @@ const VODModal = ({ vod, opened, onClose }) => {
{/* Movie poster and basic info */}
<Flex gap="md">
{/* Use movie_image or logo */}
{posterSrc ? (
{displayVOD.movie_image || displayVOD.logo?.url ? (
<Box style={{ flexShrink: 0 }}>
<Image
src={posterSrc}
src={displayVOD.movie_image || displayVOD.logo.url}
width={200}
height={300}
alt={displayVOD.name}
@@ -554,6 +508,16 @@ const VODModal = ({ vod, opened, onClose }) => {
Watch Trailer
</Button>
)}
<Button
leftSection={<Copy size={16} />}
variant="outline"
color="gray"
size="sm"
onClick={handleCopyLink}
style={{ alignSelf: 'flex-start' }}
>
Copy Link
</Button>
</Group>
</Stack>
</Flex>
@@ -572,7 +536,7 @@ const VODModal = ({ vod, opened, onClose }) => {
{providers.length === 1 ? (
<Group spacing="md">
<Badge color="blue" variant="light">
{formatStreamLabel(providers[0])}
{providers[0].m3u_account.name}
</Badge>
</Group>
) : (
@@ -599,16 +563,14 @@ const VODModal = ({ vod, opened, onClose }) => {
{/* Fallback provider info if no providers loaded yet */}
{providers.length === 0 &&
!loadingProviders &&
(vod?.m3u_account || (vod?.library_sources?.length ?? 0) > 0) && (
vod?.m3u_account && (
<Box>
<Text size="sm" weight={500} mb={8}>
Stream Selection
</Text>
<Group spacing="md">
<Badge color="blue" variant="light">
{vod?.m3u_account?.name ||
vod?.library_sources?.[0]?.library_name ||
'Library'}
{vod.m3u_account.name}
</Badge>
</Group>
</Box>
@@ -732,7 +694,8 @@ const VODModal = ({ vod, opened, onClose }) => {
src={trailerUrl}
title="YouTube Trailer"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerPolicy="strict-origin-when-cross-origin"
allowFullScreen
style={{
position: 'absolute',
@@ -1048,8 +1048,10 @@ const ChannelForm = ({ channel = null, isOpen, onClose }) => {
type="submit"
variant="default"
disabled={formik.isSubmitting}
loading={formik.isSubmitting}
loaderProps={{ type: 'dots' }}
>
Submit
{formik.isSubmitting ? 'Saving...' : 'Submit'}
</Button>
</Flex>
</form>
@@ -2,6 +2,7 @@ import React, { useState, useEffect, useMemo, useRef } from 'react';
import useChannelsStore from '../../store/channels';
import API from '../../api';
import useStreamProfilesStore from '../../store/streamProfiles';
import useEPGsStore from '../../store/epgs';
import ChannelGroupForm from './ChannelGroup';
import {
Box,
@@ -53,6 +54,9 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
}, [ensureLogosLoaded]);

const streamProfiles = useStreamProfilesStore((s) => s.profiles);
const epgs = useEPGsStore((s) => s.epgs);
const tvgs = useEPGsStore((s) => s.tvgs);
const fetchEPGs = useEPGsStore((s) => s.fetchEPGs);

const [channelGroupModelOpen, setChannelGroupModalOpen] = useState(false);
const [selectedChannelGroup, setSelectedChannelGroup] = useState('-1');
@@ -60,6 +64,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
const [isSubmitting, setIsSubmitting] = useState(false);
const [regexFind, setRegexFind] = useState('');
const [regexReplace, setRegexReplace] = useState('');
const [selectedDummyEpgId, setSelectedDummyEpgId] = useState(null);

const [groupPopoverOpened, setGroupPopoverOpened] = useState(false);
const [groupFilter, setGroupFilter] = useState('');
@@ -71,10 +76,22 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
const [confirmSetNamesOpen, setConfirmSetNamesOpen] = useState(false);
const [confirmSetLogosOpen, setConfirmSetLogosOpen] = useState(false);
const [confirmSetTvgIdsOpen, setConfirmSetTvgIdsOpen] = useState(false);
const [confirmClearEpgsOpen, setConfirmClearEpgsOpen] = useState(false);
const [confirmBatchUpdateOpen, setConfirmBatchUpdateOpen] = useState(false);
const isWarningSuppressed = useWarningsStore((s) => s.isWarningSuppressed);
const suppressWarning = useWarningsStore((s) => s.suppressWarning);

// Fetch EPG sources when modal opens
useEffect(() => {
if (isOpen) {
fetchEPGs();
}
}, [isOpen, fetchEPGs]);

// Get dummy EPG sources
const dummyEpgSources = useMemo(() => {
return Object.values(epgs).filter((epg) => epg.source_type === 'dummy');
}, [epgs]);

const form = useForm({
mode: 'uncontrolled',
initialValues: {
@@ -85,7 +102,88 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
},
});

// Build confirmation message based on selected changes
const getConfirmationMessage = () => {
const changes = [];
const values = form.getValues();

// Check for regex name changes
if (regexFind.trim().length > 0) {
changes.push(
`• Name Change: Apply regex find "${regexFind}" replace with "${regexReplace || ''}"`
);
}

// Check channel group
if (selectedChannelGroup && selectedChannelGroup !== '-1') {
const groupName = channelGroups[selectedChannelGroup]?.name || 'Unknown';
changes.push(`• Channel Group: ${groupName}`);
}

// Check logo
if (selectedLogoId && selectedLogoId !== '-1') {
if (selectedLogoId === '0') {
changes.push(`• Logo: Use Default`);
} else {
const logoName = channelLogos[selectedLogoId]?.name || 'Selected Logo';
changes.push(`• Logo: ${logoName}`);
}
}

// Check stream profile
if (values.stream_profile_id && values.stream_profile_id !== '-1') {
if (values.stream_profile_id === '0') {
changes.push(`• Stream Profile: Use Default`);
} else {
const profileName =
streamProfiles[values.stream_profile_id]?.name || 'Selected Profile';
changes.push(`• Stream Profile: ${profileName}`);
}
}

// Check user level
if (values.user_level && values.user_level !== '-1') {
const userLevelLabel =
USER_LEVEL_LABELS[values.user_level] || values.user_level;
changes.push(`• User Level: ${userLevelLabel}`);
}

// Check dummy EPG
if (selectedDummyEpgId) {
if (selectedDummyEpgId === 'clear') {
changes.push(`• EPG: Clear Assignment (use default dummy)`);
} else {
const epgName = epgs[selectedDummyEpgId]?.name || 'Selected EPG';
changes.push(`• Dummy EPG: ${epgName}`);
}
}

return changes;
};

const handleSubmit = () => {
const changes = getConfirmationMessage();

// If no changes detected, show notification
if (changes.length === 0) {
notifications.show({
title: 'No Changes',
message: 'Please select at least one field to update.',
color: 'orange',
});
return;
}

// Skip warning if suppressed
if (isWarningSuppressed('batch-update-channels')) {
return onSubmit();
}

setConfirmBatchUpdateOpen(true);
};

const onSubmit = async () => {
setConfirmBatchUpdateOpen(false);
setIsSubmitting(true);

const values = {
@@ -126,6 +224,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
try {
const applyRegex = regexFind.trim().length > 0;

// First, handle standard field updates (name, group, logo, etc.)
if (applyRegex) {
// Build per-channel updates to apply unique names via regex
let flags = 'g';
@@ -153,10 +252,48 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
});

await API.bulkUpdateChannels(updates);
} else {
} else if (Object.keys(values).length > 0) {
await API.updateChannels(channelIds, values);
}

// Then, handle EPG assignment if a dummy EPG was selected
if (selectedDummyEpgId) {
if (selectedDummyEpgId === 'clear') {
// Clear EPG assignments
const associations = channelIds.map((id) => ({
channel_id: id,
epg_data_id: null,
}));
await API.batchSetEPG(associations);
} else {
// Assign the selected dummy EPG
const selectedEpg = epgs[selectedDummyEpgId];
if (selectedEpg && selectedEpg.epg_data_count > 0) {
// Convert to number for comparison since Select returns string
const epgSourceId = parseInt(selectedDummyEpgId, 10);

// Check if we already have EPG data loaded in the store
let epgData = tvgs.find((data) => data.epg_source === epgSourceId);

// If not in store, fetch it
if (!epgData) {
const epgDataList = await API.getEPGData();
epgData = epgDataList.find(
(data) => data.epg_source === epgSourceId
);
}

if (epgData) {
const associations = channelIds.map((id) => ({
channel_id: id,
epg_data_id: epgData.id,
}));
await API.batchSetEPG(associations);
}
}
}
}

// Refresh both the channels table data and the main channels store
await Promise.all([
API.requeryChannels(),
@@ -305,49 +442,6 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
}
};

const handleClearEpgs = async () => {
if (!channelIds || channelIds.length === 0) {
notifications.show({
title: 'No Channels Selected',
message: 'No channels to update.',
color: 'orange',
});
return;
}

// Skip warning if suppressed
if (isWarningSuppressed('batch-clear-epgs')) {
return executeClearEpgs();
}

setConfirmClearEpgsOpen(true);
};

const executeClearEpgs = async () => {
try {
// Clear EPG assignments (set to null/dummy) using existing batchSetEPG API
const associations = channelIds.map((id) => ({
channel_id: id,
epg_data_id: null,
}));

await API.batchSetEPG(associations);

// batchSetEPG already shows a notification and refreshes channels
// Close the modal
setConfirmClearEpgsOpen(false);
onClose();
} catch (error) {
console.error('Failed to clear EPG assignments:', error);
notifications.show({
title: 'Error',
message: 'Failed to clear EPG assignments.',
color: 'red',
});
setConfirmClearEpgsOpen(false);
}
};

// useEffect(() => {
// // const sameStreamProfile = channels.every(
// // (channel) => channel.stream_profile_id == channels[0].stream_profile_id
@@ -418,7 +512,7 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
}
styles={{ content: { '--mantine-color-body': '#27272A' } }}
>
<form onSubmit={form.onSubmit(onSubmit)}>
<form onSubmit={form.onSubmit(handleSubmit)}>
<Group justify="space-between" align="top">
<Stack gap="5" style={{ flex: 1 }}>
<Paper withBorder p="xs" radius="md">
@@ -484,20 +578,30 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
Set TVG-IDs from EPG
</Button>
</Group>
<Group gap="xs" wrap="nowrap" mt="xs">
<Button
<Divider my="xs" />
<Stack gap="xs">
<Text size="xs" fw={600}>
Assign Dummy EPG
</Text>
<Select
size="xs"
variant="light"
color="red"
onClick={handleClearEpgs}
style={{ flex: 1 }}
>
Clear EPG (Set to Dummy)
</Button>
</Group>
placeholder="Select a dummy EPG..."
data={[
{ value: 'clear', label: '(Clear EPG Assignment)' },
...dummyEpgSources.map((epg) => ({
value: String(epg.id),
label: epg.name,
})),
]}
value={selectedDummyEpgId}
onChange={setSelectedDummyEpgId}
clearable
/>
</Stack>
<Text size="xs" c="dimmed" mt="xs">
Updates channel names, logos, and TVG-IDs based on their
assigned EPG data, or clear EPG assignments to use dummy EPG
assigned EPG data, or assign a custom dummy EPG to selected
channels
</Text>
</Paper>

@@ -819,8 +923,14 @@ const ChannelBatchForm = ({ channelIds, isOpen, onClose }) => {
</Stack>
</Group>
<Flex mih={50} gap="xs" justify="flex-end" align="flex-end">
<Button type="submit" variant="default" disabled={isSubmitting}>
Submit
<Button
type="submit"
variant="default"
disabled={isSubmitting}
loading={isSubmitting}
loaderProps={{ type: 'dots' }}
>
{isSubmitting ? 'Saving...' : 'Submit'}
</Button>
</Flex>
</form>
@@ -895,22 +1005,42 @@ This action cannot be undone.`}
/>

<ConfirmationDialog
opened={confirmClearEpgsOpen}
onClose={() => setConfirmClearEpgsOpen(false)}
onConfirm={executeClearEpgs}
title="Confirm Clear EPG Assignments"
opened={confirmBatchUpdateOpen}
onClose={() => setConfirmBatchUpdateOpen(false)}
onConfirm={onSubmit}
title="Confirm Batch Update"
message={
<div style={{ whiteSpace: 'pre-line' }}>
{`Are you sure you want to clear EPG assignments for ${channelIds?.length || 0} selected channels?

This will set all selected channels to use dummy EPG data.

This action cannot be undone.`}
<div>
<Text mb="md">
You are about to apply the following changes to{' '}
<strong>{channelIds?.length || 0}</strong> selected channel
{(channelIds?.length || 0) !== 1 ? 's' : ''}:
</Text>
<Paper
withBorder
p="sm"
style={{ backgroundColor: 'rgba(0, 0, 0, 0.2)' }}
>
<Stack gap="xs">
{getConfirmationMessage().map((change, index) => (
<Text
key={index}
size="sm"
style={{ fontFamily: 'monospace' }}
>
{change}
</Text>
))}
</Stack>
</Paper>
<Text mt="md" size="sm" c="dimmed">
This action cannot be undone.
</Text>
</div>
}
confirmLabel="Clear EPGs"
confirmLabel="Apply Changes"
cancelLabel="Cancel"
actionKey="batch-clear-epgs"
actionKey="batch-update-channels"
onSuppressChange={suppressWarning}
size="md"
/>
@ -1,729 +0,0 @@
|
|||
import React, { useState, useEffect, useRef, useMemo } from 'react';
|
||||
import { useFormik } from 'formik';
|
||||
import * as Yup from 'yup';
|
||||
import useChannelsStore from '../../store/channels';
|
||||
import API from '../../api';
|
||||
import useStreamProfilesStore from '../../store/streamProfiles';
|
||||
import useStreamsStore from '../../store/streams';
|
||||
import { useChannelLogoSelection } from '../../hooks/useSmartLogos';
|
||||
import LazyLogo from '../LazyLogo';
|
||||
import ChannelGroupForm from './ChannelGroup';
|
||||
import usePlaylistsStore from '../../store/playlists';
|
||||
import logo from '../../images/logo.png';
|
||||
import {
|
||||
Box,
|
||||
Button,
|
||||
Modal,
|
||||
TextInput,
|
||||
NativeSelect,
|
||||
Text,
|
||||
Group,
|
||||
ActionIcon,
|
||||
Center,
|
||||
Grid,
|
||||
Flex,
|
||||
Select,
|
||||
Divider,
|
||||
Stack,
|
||||
useMantineTheme,
|
||||
Popover,
|
||||
ScrollArea,
|
||||
Tooltip,
|
||||
NumberInput,
|
||||
Image,
|
||||
UnstyledButton,
|
||||
} from '@mantine/core';
|
||||
import { ListOrdered, SquarePlus, SquareX, X } from 'lucide-react';
|
||||
import useEPGsStore from '../../store/epgs';
|
||||
import { Dropzone } from '@mantine/dropzone';
|
||||
import { notifications } from '@mantine/notifications';
|
||||
import { FixedSizeList as List } from 'react-window';
|
||||
|
||||
const ChannelsForm = ({ channel = null, isOpen, onClose }) => {
|
||||
const theme = useMantineTheme();
|
||||
|
||||
const listRef = useRef(null);
|
||||
const logoListRef = useRef(null);
|
||||
const groupListRef = useRef(null);
|
||||
|
||||
const channelGroups = useChannelsStore((s) => s.channelGroups);
|
||||
const { logos, ensureLogosLoaded } = useChannelLogoSelection();
|
||||
const streams = useStreamsStore((state) => state.streams);
|
||||
const streamProfiles = useStreamProfilesStore((s) => s.profiles);
|
||||
const playlists = usePlaylistsStore((s) => s.playlists);
|
||||
const epgs = useEPGsStore((s) => s.epgs);
|
||||
const tvgs = useEPGsStore((s) => s.tvgs);
|
||||
const tvgsById = useEPGsStore((s) => s.tvgsById);
|
||||
|
||||
const [logoPreview, setLogoPreview] = useState(null);
|
||||
const [channelStreams, setChannelStreams] = useState([]);
|
||||
const [channelGroupModelOpen, setChannelGroupModalOpen] = useState(false);
|
||||
const [epgPopoverOpened, setEpgPopoverOpened] = useState(false);
|
||||
const [logoPopoverOpened, setLogoPopoverOpened] = useState(false);
|
||||
const [selectedEPG, setSelectedEPG] = useState('');
|
||||
const [tvgFilter, setTvgFilter] = useState('');
|
||||
const [logoFilter, setLogoFilter] = useState('');
|
||||
|
||||
const [groupPopoverOpened, setGroupPopoverOpened] = useState(false);
|
||||
const [groupFilter, setGroupFilter] = useState('');
|
||||
const groupOptions = Object.values(channelGroups);
|
||||
|
||||
const addStream = (stream) => {
|
||||
const streamSet = new Set(channelStreams);
|
||||
streamSet.add(stream);
|
||||
setChannelStreams(Array.from(streamSet));
|
||||
};
|
||||
|
||||
const removeStream = (stream) => {
|
||||
const streamSet = new Set(channelStreams);
|
||||
streamSet.delete(stream);
|
||||
setChannelStreams(Array.from(streamSet));
|
||||
};
|
||||
|
||||
const handleLogoChange = async (files) => {
|
||||
if (files.length === 1) {
|
||||
const file = files[0];
|
||||
|
||||
// Validate file size on frontend first
|
||||
if (file.size > 5 * 1024 * 1024) {
|
||||
// 5MB
|
||||
notifications.show({
|
||||
title: 'Error',
|
||||
message: 'File too large. Maximum size is 5MB.',
|
||||
color: 'red',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
const retval = await API.uploadLogo(file);
|
||||
// Note: API.uploadLogo already adds the logo to the store, no need to fetch
|
||||
setLogoPreview(retval.cache_url);
|
||||
formik.setFieldValue('logo_id', retval.id);
|
||||
} catch (error) {
|
||||
console.error('Logo upload failed:', error);
|
||||
// Error notification is already handled in API.uploadLogo
|
||||
}
|
||||
} else {
|
||||
setLogoPreview(null);
|
||||
}
|
||||
};
|
||||
|
||||
const formik = useFormik({
|
||||
initialValues: {
|
||||
name: '',
|
||||
channel_number: '', // Change from 0 to empty string for consistency
|
||||
channel_group_id:
|
||||
Object.keys(channelGroups).length > 0
|
||||
? Object.keys(channelGroups)[0]
|
||||
: '',
|
||||
stream_profile_id: '0',
|
||||
tvg_id: '',
|
||||
tvc_guide_stationid: '',
|
||||
epg_data_id: '',
|
||||
logo_id: '',
|
||||
},
|
||||
validationSchema: Yup.object({
|
||||
name: Yup.string().required('Name is required'),
|
||||
channel_group_id: Yup.string().required('Channel group is required'),
|
||||
}),
|
||||
onSubmit: async (values, { setSubmitting }) => {
|
||||
let response;
|
||||
|
||||
try {
|
||||
const formattedValues = { ...values };
|
||||
|
||||
// Convert empty or "0" stream_profile_id to null for the API
|
||||
if (
|
||||
!formattedValues.stream_profile_id ||
|
||||
formattedValues.stream_profile_id === '0'
|
||||
) {
|
||||
formattedValues.stream_profile_id = null;
|
||||
}
|
||||
|
||||
// Ensure tvg_id is properly included (no empty strings)
|
||||
formattedValues.tvg_id = formattedValues.tvg_id || null;
|
||||
|
||||
// Ensure tvc_guide_stationid is properly included (no empty strings)
|
||||
formattedValues.tvc_guide_stationid =
|
||||
formattedValues.tvc_guide_stationid || null;
|
||||
|
||||
if (channel) {
|
||||
// If there's an EPG to set, use our enhanced endpoint
|
||||
if (values.epg_data_id !== (channel.epg_data_id ?? '')) {
|
||||
// Use the special endpoint to set EPG and trigger refresh
|
||||
const epgResponse = await API.setChannelEPG(
|
||||
channel.id,
|
||||
values.epg_data_id
|
||||
);
|
||||
|
||||
// Remove epg_data_id from values since we've handled it separately
|
||||
const { epg_data_id, ...otherValues } = formattedValues;
|
||||
|
||||
// Update other channel fields if needed
|
||||
if (Object.keys(otherValues).length > 0) {
|
||||
response = await API.updateChannel({
|
||||
id: channel.id,
|
||||
...otherValues,
|
||||
streams: channelStreams.map((stream) => stream.id),
|
||||
});
|
||||
}
|
||||
} else {
|
||||
// No EPG change, regular update
|
||||
response = await API.updateChannel({
|
||||
id: channel.id,
|
||||
...formattedValues,
|
||||
streams: channelStreams.map((stream) => stream.id),
|
||||
});
|
||||
}
|
||||
} else {
|
||||
// New channel creation - use the standard method
|
||||
response = await API.addChannel({
|
||||
...formattedValues,
|
||||
streams: channelStreams.map((stream) => stream.id),
|
||||
});
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Error saving channel:', error);
|
||||
}
|
||||
|
||||
formik.resetForm();
|
||||
API.requeryChannels();
|
||||
|
||||
// Refresh channel profiles to update the membership information
|
||||
useChannelsStore.getState().fetchChannelProfiles();
|
||||
|
||||
setSubmitting(false);
|
||||
setTvgFilter('');
|
||||
setLogoFilter('');
|
||||
onClose();
|
||||
},
|
||||
});
|
||||
|
||||
useEffect(() => {
|
||||
if (channel) {
|
||||
if (channel.epg_data_id) {
|
||||
const epgSource = epgs[tvgsById[channel.epg_data_id]?.epg_source];
|
||||
setSelectedEPG(epgSource ? `${epgSource.id}` : '');
|
||||
}
|
||||
|
||||
formik.setValues({
|
||||
name: channel.name || '',
|
||||
channel_number:
|
||||
channel.channel_number !== null ? channel.channel_number : '',
|
||||
channel_group_id: channel.channel_group_id
|
||||
? `${channel.channel_group_id}`
|
||||
: '',
|
||||
stream_profile_id: channel.stream_profile_id
|
||||
? `${channel.stream_profile_id}`
|
||||
: '0',
|
||||
tvg_id: channel.tvg_id || '',
|
||||
tvc_guide_stationid: channel.tvc_guide_stationid || '',
|
||||
epg_data_id: channel.epg_data_id ?? '',
|
||||
logo_id: channel.logo_id ? `${channel.logo_id}` : '',
|
||||
});
|
||||
|
||||
setChannelStreams(channel.streams || []);
|
||||
} else {
|
||||
formik.resetForm();
|
||||
setTvgFilter('');
|
||||
setLogoFilter('');
|
||||
}
|
||||
}, [channel, tvgsById, channelGroups]);
|
||||
|
||||
// Memoize logo options to prevent infinite re-renders during background loading
|
||||
const logoOptions = useMemo(() => {
|
||||
return [{ id: '0', name: 'Default' }].concat(Object.values(logos));
|
||||
}, [logos]); // Only depend on logos object
|
||||
|
||||
const renderLogoOption = ({ option, checked }) => {
|
||||
return (
|
||||
<Center style={{ width: '100%' }}>
|
||||
<img src={logos[option.value].cache_url} width="30" />
|
||||
</Center>
|
||||
);
|
||||
};
|
||||
|
||||
// Update the handler for when channel group modal is closed
|
||||
const handleChannelGroupModalClose = (newGroup) => {
|
||||
setChannelGroupModalOpen(false);
|
||||
|
||||
// If a new group was created and returned, update the form with it
|
||||
if (newGroup && newGroup.id) {
|
||||
// Preserve all current form values while updating just the channel_group_id
|
||||
formik.setValues({
|
||||
...formik.values,
|
||||
channel_group_id: `${newGroup.id}`,
|
||||
});
|
||||
}
|
||||
};
|
||||
|
||||
if (!isOpen) {
|
||||
return <></>;
|
||||
}
|
||||
|
||||
const filteredTvgs = tvgs
|
||||
.filter((tvg) => tvg.epg_source == selectedEPG)
|
||||
.filter(
|
||||
(tvg) =>
|
||||
tvg.name.toLowerCase().includes(tvgFilter.toLowerCase()) ||
|
||||
tvg.tvg_id.toLowerCase().includes(tvgFilter.toLowerCase())
|
||||
);
|
||||
|
||||
const filteredLogos = logoOptions.filter((logo) =>
|
||||
logo.name.toLowerCase().includes(logoFilter.toLowerCase())
|
||||
);
|
||||
|
||||
const filteredGroups = groupOptions.filter((group) =>
|
||||
group.name.toLowerCase().includes(groupFilter.toLowerCase())
|
||||
);
|
||||
|
||||
return (
|
||||
<Modal
|
||||
opened={isOpen}
|
||||
onClose={onClose}
|
||||
size={1000}
|
||||
title={
|
||||
<Group gap="5">
|
||||
<ListOrdered size="20" />
|
||||
<Text>Channels</Text>
|
||||
</Group>
|
||||
}
|
||||
styles={{ content: { '--mantine-color-body': '#27272A' } }}
|
||||
>
|
||||
<form onSubmit={formik.handleSubmit}>
|
||||
<Group justify="space-between" align="top">
|
||||
<Stack gap="5" style={{ flex: 1 }}>
|
||||
<TextInput
|
||||
id="name"
|
||||
name="name"
|
||||
label="Channel Name"
|
||||
value={formik.values.name}
|
||||
onChange={formik.handleChange}
|
||||
error={formik.errors.name ? formik.touched.name : ''}
|
||||
size="xs"
|
||||
/>
|
||||
|
||||
<Flex gap="sm">
|
||||
<Popover
|
||||
opened={groupPopoverOpened}
|
||||
onChange={setGroupPopoverOpened}
|
||||
// position="bottom-start"
|
||||
withArrow
|
||||
>
|
||||
<Popover.Target>
|
||||
<TextInput
|
||||
id="channel_group_id"
|
||||
name="channel_group_id"
|
||||
label="Channel Group"
|
||||
readOnly
|
||||
value={
|
||||
channelGroups[formik.values.channel_group_id]
|
||||
? channelGroups[formik.values.channel_group_id].name
|
||||
: ''
|
||||
}
|
||||
onClick={() => setGroupPopoverOpened(true)}
|
||||
size="xs"
|
||||
/>
|
||||
</Popover.Target>
|
||||
|
||||
<Popover.Dropdown onMouseDown={(e) => e.stopPropagation()}>
|
||||
<Group>
|
||||
<TextInput
|
||||
placeholder="Filter"
|
||||
value={groupFilter}
|
||||
onChange={(event) =>
|
||||
setGroupFilter(event.currentTarget.value)
|
||||
}
|
||||
mb="xs"
|
||||
size="xs"
|
||||
/>
|
||||
</Group>
|
||||
|
||||
<ScrollArea style={{ height: 200 }}>
|
||||
<List
|
||||
height={200} // Set max height for visible items
|
||||
itemCount={filteredGroups.length}
|
||||
itemSize={20} // Adjust row height for each item
|
||||
width={200}
|
||||
ref={groupListRef}
|
||||
>
|
||||
{({ index, style }) => (
|
||||
<Box
|
||||
style={{ ...style, height: 20, overflow: 'hidden' }}
|
||||
>
|
||||
<Tooltip
|
||||
openDelay={500}
|
||||
label={filteredGroups[index].name}
|
||||
size="xs"
|
||||
>
|
||||
<UnstyledButton
|
||||
onClick={() => {
|
||||
formik.setFieldValue(
|
||||
'channel_group_id',
|
||||
filteredGroups[index].id
|
||||
);
|
||||
setGroupPopoverOpened(false);
|
||||
}}
|
||||
>
|
||||
<Text
|
||||
size="xs"
|
||||
style={{
|
||||
whiteSpace: 'nowrap',
|
||||
overflow: 'hidden',
|
||||
textOverflow: 'ellipsis',
|
||||
}}
|
||||
>
|
||||
{filteredGroups[index].name}
|
||||
</Text>
|
||||
</UnstyledButton>
|
||||
</Tooltip>
|
||||
</Box>
|
||||
)}
|
||||
</List>
|
||||
</ScrollArea>
|
||||
</Popover.Dropdown>
|
||||
</Popover>
|
||||
|
||||
{/* <Select
|
||||
id="channel_group_id"
|
||||
name="channel_group_id"
|
||||
label="Channel Group"
|
||||
value={formik.values.channel_group_id}
|
||||
searchable
|
||||
onChange={(value) => {
|
||||
formik.setFieldValue('channel_group_id', value); // Update Formik's state with the new value
|
||||
}}
|
||||
error={
|
||||
formik.errors.channel_group_id
|
||||
? formik.touched.channel_group_id
|
||||
: ''
|
||||
}
|
||||
data={Object.values(channelGroups).map((option, index) => ({
|
||||
value: `${option.id}`,
|
||||
label: option.name,
|
||||
}))}
|
||||
size="xs"
|
||||
style={{ flex: 1 }}
|
||||
/> */}
|
||||
<Flex align="flex-end">
|
||||
<ActionIcon
|
||||
color={theme.tailwind.green[5]}
|
||||
onClick={() => setChannelGroupModalOpen(true)}
|
||||
title="Create new group"
|
||||
size="small"
|
||||
variant="transparent"
|
||||
style={{ marginBottom: 5 }}
|
||||
>
|
||||
<SquarePlus size="20" />
|
||||
</ActionIcon>
|
||||
</Flex>
|
||||
</Flex>
|
||||
|
||||
<Select
|
||||
id="stream_profile_id"
|
||||
label="Stream Profile"
|
||||
name="stream_profile_id"
|
||||
value={formik.values.stream_profile_id}
|
||||
onChange={(value) => {
|
||||
formik.setFieldValue('stream_profile_id', value); // Update Formik's state with the new value
|
||||
}}
|
||||
error={
|
||||
formik.errors.stream_profile_id
|
||||
? formik.touched.stream_profile_id
|
||||
: ''
|
||||
}
|
||||
data={[{ value: '0', label: '(use default)' }].concat(
|
||||
streamProfiles.map((option) => ({
|
||||
value: `${option.id}`,
|
||||
label: option.name,
|
||||
}))
|
||||
)}
|
||||
size="xs"
|
||||
/>
|
||||
</Stack>
|
||||
|
||||
<Divider size="sm" orientation="vertical" />
|
||||
|
||||
<Stack justify="flex-start" style={{ flex: 1 }}>
|
||||
<Group justify="space-between">
|
||||
<Popover
|
||||
opened={logoPopoverOpened}
|
||||
onChange={(opened) => {
|
||||
setLogoPopoverOpened(opened);
|
||||
if (opened) {
|
||||
ensureLogosLoaded();
|
||||
}
|
||||
}}
|
||||
// position="bottom-start"
|
||||
withArrow
|
||||
>
|
||||
<Popover.Target>
|
||||
<TextInput
|
||||
id="logo_id"
|
||||
name="logo_id"
|
||||
label="Logo"
|
||||
readOnly
|
||||
value={logos[formik.values.logo_id]?.name || 'Default'}
|
||||
onClick={() => setLogoPopoverOpened(true)}
|
||||
size="xs"
|
||||
/>
|
||||
</Popover.Target>
|
||||
|
||||
<Popover.Dropdown onMouseDown={(e) => e.stopPropagation()}>
|
||||
<Group>
|
||||
<TextInput
|
||||
placeholder="Filter"
|
||||
value={logoFilter}
|
||||
onChange={(event) =>
|
||||
setLogoFilter(event.currentTarget.value)
|
||||
}
|
||||
mb="xs"
|
||||
size="xs"
|
||||
/>
|
||||
</Group>
|
||||
|
||||
<ScrollArea style={{ height: 200 }}>
|
||||
<List
|
||||
height={200} // Set max height for visible items
|
||||
itemCount={filteredLogos.length}
|
||||
itemSize={20} // Adjust row height for each item
|
||||
width="100%"
|
||||
ref={logoListRef}
|
||||
>
|
||||
{({ index, style }) => (
|
||||
<div style={style}>
|
||||
<Center>
|
||||
<img
|
||||
src={filteredLogos[index].cache_url || logo}
|
||||
height="20"
|
||||
style={{ maxWidth: 80 }}
|
||||
onClick={() => {
|
||||
formik.setFieldValue(
|
||||
'logo_id',
|
||||
filteredLogos[index].id
|
||||
);
|
||||
}}
|
||||
/>
|
||||
</Center>
|
||||
</div>
|
||||
)}
|
||||
</List>
|
||||
</ScrollArea>
|
||||
</Popover.Dropdown>
|
||||
</Popover>
|
||||
|
||||
<LazyLogo
|
||||
logoId={formik.values.logo_id}
|
||||
alt="channel logo"
|
||||
style={{ height: 40 }}
|
||||
/>
|
||||
</Group>
|
||||
|
||||
<Group>
|
||||
<Divider size="xs" style={{ flex: 1 }} />
|
||||
<Text size="xs" c="dimmed">
|
||||
OR
|
||||
</Text>
|
||||
<Divider size="xs" style={{ flex: 1 }} />
|
||||
</Group>
|
||||
|
||||
<Stack>
|
||||
<Text size="sm">Upload Logo</Text>
|
||||
<Dropzone
|
||||
onDrop={handleLogoChange}
|
||||
onReject={(files) => console.log('rejected files', files)}
|
||||
maxSize={5 * 1024 ** 2}
|
||||
>
|
||||
<Group
|
||||
justify="center"
|
||||
gap="xl"
|
||||
mih={40}
|
||||
style={{ pointerEvents: 'none' }}
|
||||
>
|
||||
<Text size="sm" inline>
|
||||
Drag images here or click to select files
|
||||
</Text>
|
||||
</Group>
|
||||
</Dropzone>
|
||||
|
||||
<Center></Center>
|
||||
</Stack>
|
||||
</Stack>
|
||||
|
||||
<Divider size="sm" orientation="vertical" />
|
||||
|
||||
<Stack gap="5" style={{ flex: 1 }} justify="flex-start">
|
||||
<NumberInput
|
||||
id="channel_number"
|
||||
name="channel_number"
|
||||
label="Channel # (blank to auto-assign)"
|
||||
value={formik.values.channel_number}
|
||||
onChange={(value) =>
|
||||
formik.setFieldValue('channel_number', value)
|
||||
}
|
||||
error={
|
||||
formik.errors.channel_number
|
||||
? formik.touched.channel_number
|
||||
: ''
|
||||
}
|
||||
size="xs"
|
||||
/>
|
||||
|
||||
<TextInput
|
||||
id="tvg_id"
|
||||
name="tvg_id"
|
||||
label="TVG-ID"
|
||||
value={formik.values.tvg_id}
|
||||
onChange={formik.handleChange}
|
||||
error={formik.errors.tvg_id ? formik.touched.tvg_id : ''}
|
||||
size="xs"
|
||||
/>
|
||||
|
||||
<TextInput
|
||||
id="tvc_guide_stationid"
|
||||
name="tvc_guide_stationid"
|
||||
label="Gracenote StationId"
|
||||
value={formik.values.tvc_guide_stationid}
|
||||
onChange={formik.handleChange}
|
||||
error={
|
||||
formik.errors.tvc_guide_stationid
|
||||
? formik.touched.tvc_guide_stationid
|
||||
: ''
|
||||
}
|
||||
size="xs"
|
||||
/>
|
||||
|
||||
<Popover
|
||||
opened={epgPopoverOpened}
|
||||
onChange={setEpgPopoverOpened}
|
||||
// position="bottom-start"
|
||||
withArrow
|
||||
>
|
||||
<Popover.Target>
|
||||
<TextInput
|
||||
id="epg_data_id"
|
||||
name="epg_data_id"
|
||||
label={
|
||||
<Group style={{ width: '100%' }}>
|
||||
<Box>EPG</Box>
|
||||
<Button
|
||||
size="xs"
|
||||
variant="transparent"
|
||||
onClick={() =>
|
||||
formik.setFieldValue('epg_data_id', null)
|
||||
}
|
||||
>
|
||||
Use Dummy
|
||||
</Button>
|
||||
</Group>
|
||||
}
|
||||
readOnly
|
||||
value={
|
||||
formik.values.epg_data_id
|
||||
? tvgsById[formik.values.epg_data_id].name
|
||||
: 'Dummy'
|
||||
}
|
||||
onClick={() => setEpgPopoverOpened(true)}
|
||||
size="xs"
|
||||
rightSection={
|
||||
<Tooltip label="Use dummy EPG">
|
||||
<ActionIcon
|
||||
// color={theme.tailwind.green[5]}
|
||||
color="white"
|
||||
onClick={(e) => {
|
||||
e.stopPropagation();
|
||||
formik.setFieldValue('epg_data_id', null);
|
||||
}}
|
||||
title="Create new group"
|
||||
size="small"
|
||||
variant="transparent"
|
||||
>
|
||||
<X size="20" />
|
||||
</ActionIcon>
|
||||
</Tooltip>
|
||||
}
|
||||
/>
|
||||
</Popover.Target>
|
||||
|
||||
<Popover.Dropdown onMouseDown={(e) => e.stopPropagation()}>
|
||||
<Group>
|
||||
<Select
|
||||
label="Source"
|
||||
value={selectedEPG}
|
||||
onChange={setSelectedEPG}
|
||||
data={Object.values(epgs).map((epg) => ({
|
||||
value: `${epg.id}`,
|
||||
label: epg.name,
|
||||
}))}
|
||||
size="xs"
|
||||
mb="xs"
|
||||
/>
|
||||
|
||||
{/* Filter Input */}
|
||||
<TextInput
|
||||
label="Filter"
|
||||
value={tvgFilter}
|
||||
onChange={(event) =>
|
||||
setTvgFilter(event.currentTarget.value)
|
||||
}
|
||||
mb="xs"
|
||||
size="xs"
|
||||
/>
|
||||
</Group>
|
||||
|
||||
<ScrollArea style={{ height: 200 }}>
|
||||
<List
|
||||
height={200} // Set max height for visible items
|
||||
itemCount={filteredTvgs.length}
|
||||
itemSize={40} // Adjust row height for each item
|
||||
width="100%"
|
||||
ref={listRef}
|
||||
>
|
||||
{({ index, style }) => (
|
||||
<div style={style}>
|
||||
<Button
|
||||
key={filteredTvgs[index].id}
|
||||
variant="subtle"
|
||||
color="gray"
|
||||
fullWidth
|
||||
justify="left"
|
||||
size="xs"
|
||||
onClick={() => {
|
||||
if (filteredTvgs[index].id == '0') {
|
||||
formik.setFieldValue('epg_data_id', null);
|
||||
} else {
|
||||
formik.setFieldValue(
|
||||
'epg_data_id',
|
||||
filteredTvgs[index].id
|
||||
);
|
||||
}
|
||||
setEpgPopoverOpened(false);
|
||||
}}
|
||||
>
|
||||
{filteredTvgs[index].tvg_id}
|
||||
</Button>
|
||||
</div>
|
||||
)}
|
||||
</List>
|
||||
</ScrollArea>
|
||||
</Popover.Dropdown>
|
||||
</Popover>
|
||||
</Stack>
|
||||
</Group>
|
||||
|
||||
<Flex mih={50} gap="xs" justify="flex-end" align="flex-end">
|
||||
<Button
|
||||
type="submit"
|
||||
variant="default"
|
||||
disabled={formik.isSubmitting}
|
||||
>
|
||||
Submit
|
||||
</Button>
|
||||
</Flex>
|
||||
</form>
|
||||
</Modal>
|
||||
);
|
||||
};
|
||||
|
||||
export default ChannelsForm;
|
||||
1497
frontend/src/components/forms/DummyEPG.jsx
Normal file
File diff suppressed because it is too large
@@ -1,31 +1,23 @@
// Modal.js
import React, { useState, useEffect } from 'react';
import API from '../../api';
import useEPGsStore from '../../store/epgs';
import {
LoadingOverlay,
TextInput,
Button,
Checkbox,
Modal,
Flex,
NativeSelect,
NumberInput,
Space,
Grid,
Group,
FileInput,
Title,
Text,
Divider,
Stack,
Group,
Divider,
Box,
Text,
} from '@mantine/core';
import { isNotEmpty, useForm } from '@mantine/form';
import { notifications } from '@mantine/notifications';

const EPG = ({ epg = null, isOpen, onClose }) => {
const epgs = useEPGsStore((state) => state.epgs);
// Remove the file state and handler since we're not supporting file uploads
const [sourceType, setSourceType] = useState('xmltv');

const form = useForm({
@@ -49,13 +41,19 @@ const EPG = ({ epg = null, isOpen, onClose }) => {
const values = form.getValues();

if (epg?.id) {
// Remove file from API call
// Validate that we have a valid EPG object before updating
if (!epg || typeof epg !== 'object' || !epg.id) {
notifications.show({
title: 'Error',
message: 'Invalid EPG data. Please close and reopen this form.',
color: 'red',
});
return;
}

await API.updateEPG({ id: epg.id, ...values });
} else {
// Remove file from API call
await API.addEPG({
...values,
});
await API.addEPG(values);
}

form.reset();
@@ -73,11 +71,12 @@ const EPG = ({ epg = null, isOpen, onClose }) => {
refresh_interval: epg.refresh_interval,
};
form.setValues(values);
setSourceType(epg.source_type); // Update source type state
setSourceType(epg.source_type);
} else {
form.reset();
setSourceType('xmltv'); // Reset to xmltv
setSourceType('xmltv');
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [epg]);

// Function to handle source type changes
@@ -156,7 +155,7 @@ const EPG = ({ epg = null, isOpen, onClose }) => {
description="API key for services that require authentication"
{...form.getInputProps('api_key')}
key={form.key('api_key')}
disabled={sourceType !== 'schedules_direct'} // Use the state variable
disabled={sourceType !== 'schedules_direct'}
/>

{/* Put checkbox at the same level as Refresh Interval */}
@@ -171,8 +170,8 @@ const EPG = ({ epg = null, isOpen, onClose }) => {
style={{
display: 'flex',
alignItems: 'center',
height: '30px', // Reduced height
marginTop: '-4px', // Slight negative margin to move it up
height: '30px',
marginTop: '-4px',
}}
>
<Checkbox
@ -16,11 +16,20 @@ import {
|
|||
Box,
|
||||
MultiSelect,
|
||||
Tooltip,
|
||||
Popover,
|
||||
ScrollArea,
|
||||
Center,
|
||||
} from '@mantine/core';
|
||||
import { Info } from 'lucide-react';
|
||||
import useChannelsStore from '../../store/channels';
|
||||
import useStreamProfilesStore from '../../store/streamProfiles';
|
||||
import { CircleCheck, CircleX } from 'lucide-react';
|
||||
import { useChannelLogoSelection } from '../../hooks/useSmartLogos';
|
||||
import { FixedSizeList as List } from 'react-window';
|
||||
import LazyLogo from '../LazyLogo';
|
||||
import LogoForm from './Logo';
|
||||
import logo from '../../images/logo.png';
|
||||
import API from '../../api';
|
||||
|
||||
// Custom item component for MultiSelect with tooltip
|
||||
const OptionWithTooltip = forwardRef(
|
||||
|
|
@ -45,6 +54,21 @@ const LiveGroupFilter = ({
|
|||
const streamProfiles = useStreamProfilesStore((s) => s.profiles);
|
||||
const fetchStreamProfiles = useStreamProfilesStore((s) => s.fetchProfiles);
|
||||
const [groupFilter, setGroupFilter] = useState('');
|
||||
const [epgSources, setEpgSources] = useState([]);
|
||||
|
||||
// Logo selection functionality
|
||||
const {
|
||||
logos: channelLogos,
|
||||
ensureLogosLoaded,
|
||||
isLoading: logosLoading,
|
||||
} = useChannelLogoSelection();
|
||||
const [logoModalOpen, setLogoModalOpen] = useState(false);
|
||||
const [currentEditingGroupId, setCurrentEditingGroupId] = useState(null);
|
||||
|
||||
// Ensure logos are loaded when component mounts
|
||||
useEffect(() => {
|
||||
ensureLogosLoaded();
|
||||
}, [ensureLogosLoaded]);
|
||||
|
||||
// Fetch stream profiles when component mounts
|
||||
useEffect(() => {
|
||||
|
|
@ -53,6 +77,19 @@ const LiveGroupFilter = ({
|
|||
}
|
||||
}, [streamProfiles.length, fetchStreamProfiles]);
|
||||
|
||||
// Fetch EPG sources when component mounts
|
||||
useEffect(() => {
|
||||
const fetchEPGSources = async () => {
|
||||
try {
|
||||
const sources = await API.getEPGs();
|
||||
setEpgSources(sources || []);
|
||||
} catch (error) {
|
||||
console.error('Failed to fetch EPG sources:', error);
|
||||
}
|
||||
};
|
||||
fetchEPGSources();
|
||||
}, []);
|
||||
|
||||
useEffect(() => {
|
||||
if (Object.keys(channelGroups).length === 0) {
|
||||
return;
|
||||
|
|
@ -68,7 +105,7 @@ const LiveGroupFilter = ({
|
|||
typeof group.custom_properties === 'string'
|
||||
? JSON.parse(group.custom_properties)
|
||||
: group.custom_properties;
|
||||
} catch (e) {
|
||||
} catch {
|
||||
customProps = {};
|
||||
}
|
||||
}
|
||||
|
|
@ -115,21 +152,27 @@ const LiveGroupFilter = ({
|
|||
);
|
||||
};
|
||||
|
||||
// Toggle force_dummy_epg in custom_properties for a group
|
||||
const toggleForceDummyEPG = (id) => {
|
||||
setGroupStates(
|
||||
groupStates.map((state) => {
|
||||
if (state.channel_group == id) {
|
||||
const customProps = { ...(state.custom_properties || {}) };
|
||||
customProps.force_dummy_epg = !customProps.force_dummy_epg;
|
||||
return {
|
||||
...state,
|
||||
custom_properties: customProps,
|
||||
};
|
||||
}
|
||||
return state;
|
||||
})
|
||||
);
|
||||
// Handle logo selection from LogoForm
|
||||
const handleLogoSuccess = ({ logo }) => {
|
||||
if (logo && logo.id && currentEditingGroupId !== null) {
|
||||
setGroupStates(
|
||||
groupStates.map((state) => {
|
||||
if (state.channel_group === currentEditingGroupId) {
|
||||
return {
|
||||
...state,
|
||||
custom_properties: {
|
||||
...state.custom_properties,
|
||||
custom_logo_id: logo.id,
|
||||
},
|
||||
};
|
||||
}
|
||||
return state;
|
||||
})
|
||||
);
|
||||
ensureLogosLoaded(); // Refresh logos
|
||||
}
|
||||
setLogoModalOpen(false);
|
||||
setCurrentEditingGroupId(null);
|
||||
};
|
||||
|
||||
const selectAll = () => {
|
||||
|
|
@ -270,10 +313,10 @@ const LiveGroupFilter = ({
|
|||
placeholder="Select options..."
|
||||
data={[
|
||||
{
|
||||
value: 'force_dummy_epg',
|
||||
label: 'Force Dummy EPG',
|
||||
value: 'force_epg',
|
||||
label: 'Force EPG Source',
|
||||
description:
|
||||
'Assign a dummy EPG to all channels in this group if no EPG is matched',
|
||||
'Force a specific EPG source for all auto-synced channels, or disable EPG assignment entirely',
|
||||
},
|
||||
{
|
||||
value: 'group_override',
|
||||
|
|
@ -311,12 +354,22 @@ const LiveGroupFilter = ({
|
|||
description:
|
||||
'Assign a specific stream profile to all channels in this group during auto sync',
|
||||
},
|
||||
{
|
||||
value: 'custom_logo',
|
||||
label: 'Custom Logo',
|
||||
description:
|
||||
'Assign a custom logo to all auto-synced channels in this group',
|
||||
},
|
||||
]}
|
||||
itemComponent={OptionWithTooltip}
|
||||
value={(() => {
|
||||
const selectedValues = [];
|
||||
if (group.custom_properties?.force_dummy_epg) {
|
||||
selectedValues.push('force_dummy_epg');
|
||||
if (
|
||||
group.custom_properties?.custom_epg_id !==
|
||||
undefined ||
|
||||
group.custom_properties?.force_dummy_epg
|
||||
) {
|
||||
selectedValues.push('force_epg');
|
||||
}
|
||||
if (
|
||||
group.custom_properties?.group_override !==
|
||||
|
|
@ -356,6 +409,12 @@ const LiveGroupFilter = ({
|
|||
) {
|
||||
selectedValues.push('stream_profile_assignment');
|
||||
}
|
||||
if (
|
||||
group.custom_properties?.custom_logo_id !==
|
||||
undefined
|
||||
) {
|
||||
selectedValues.push('custom_logo');
|
||||
}
|
||||
return selectedValues;
|
||||
})()}
|
||||
onChange={(values) => {
|
||||
|
|
@ -369,13 +428,25 @@ const LiveGroupFilter = ({
|
|||
...(state.custom_properties || {}),
|
||||
};
|
||||
|
||||
// Handle force_dummy_epg
|
||||
if (
|
||||
selectedOptions.includes('force_dummy_epg')
|
||||
) {
|
||||
newCustomProps.force_dummy_epg = true;
|
||||
// Handle force_epg
|
||||
if (selectedOptions.includes('force_epg')) {
|
||||
// Migrate from old force_dummy_epg if present
|
||||
if (
|
||||
newCustomProps.force_dummy_epg &&
|
||||
newCustomProps.custom_epg_id === undefined
|
||||
) {
|
||||
// Migrate: force_dummy_epg=true becomes custom_epg_id=null
|
||||
newCustomProps.custom_epg_id = null;
|
||||
delete newCustomProps.force_dummy_epg;
|
||||
} else if (
|
||||
newCustomProps.custom_epg_id === undefined
|
||||
) {
|
||||
// New configuration: initialize with null (no EPG/default dummy)
|
||||
newCustomProps.custom_epg_id = null;
|
||||
}
|
||||
} else {
|
||||
delete newCustomProps.force_dummy_epg;
|
||||
// Only remove custom_epg_id when deselected
|
||||
delete newCustomProps.custom_epg_id;
|
||||
}
|
||||
|
||||
// Handle group_override
|
||||
|
|
@ -475,6 +546,17 @@ const LiveGroupFilter = ({
|
|||
delete newCustomProps.stream_profile_id;
|
||||
}
|
||||
|
||||
// Handle custom_logo
|
||||
if (selectedOptions.includes('custom_logo')) {
|
||||
if (
|
||||
newCustomProps.custom_logo_id === undefined
|
||||
) {
|
||||
newCustomProps.custom_logo_id = null;
|
||||
}
|
||||
} else {
|
||||
delete newCustomProps.custom_logo_id;
|
||||
}
|
||||
|
||||
return {
|
||||
...state,
|
||||
custom_properties: newCustomProps,
|
||||
|
|
@ -801,6 +883,317 @@ const LiveGroupFilter = ({
|
|||
/>
|
||||
</Tooltip>
|
||||
)}
|
||||
|
||||
{/* Show logo selector only if custom_logo is selected */}
|
||||
{group.custom_properties?.custom_logo_id !==
|
||||
undefined && (
|
||||
<Box>
|
||||
<Group justify="space-between">
|
||||
<Popover
|
||||
opened={group.logoPopoverOpened || false}
|
||||
onChange={(opened) => {
|
||||
setGroupStates(
|
||||
groupStates.map((state) => {
|
||||
if (
|
||||
state.channel_group ===
|
||||
group.channel_group
|
||||
) {
|
||||
return {
|
||||
...state,
|
||||
logoPopoverOpened: opened,
|
||||
};
|
||||
}
|
||||
return state;
|
||||
})
|
||||
);
|
||||
if (opened) {
|
||||
ensureLogosLoaded();
|
||||
}
|
||||
}}
|
||||
withArrow
|
||||
>
|
||||
<Popover.Target>
|
||||
<TextInput
|
||||
label="Custom Logo"
|
||||
readOnly
|
||||
value={
|
||||
channelLogos[
|
||||
group.custom_properties?.custom_logo_id
|
||||
]?.name || 'Default'
|
||||
}
|
||||
onClick={() => {
|
||||
setGroupStates(
|
||||
groupStates.map((state) => {
|
||||
if (
|
||||
state.channel_group ===
|
||||
group.channel_group
|
||||
) {
|
||||
return {
|
||||
...state,
|
||||
logoPopoverOpened: true,
|
||||
};
|
||||
}
|
||||
return {
|
||||
...state,
|
||||
logoPopoverOpened: false,
|
||||
};
|
||||
})
|
||||
);
|
||||
}}
|
||||
size="xs"
|
||||
/>
|
||||
</Popover.Target>
|
||||
|
||||
<Popover.Dropdown
|
||||
onMouseDown={(e) => e.stopPropagation()}
|
||||
>
|
||||
<Group>
|
||||
<TextInput
|
||||
placeholder="Filter logos..."
|
||||
size="xs"
|
||||
value={group.logoFilter || ''}
|
||||
onChange={(e) => {
|
||||
const val = e.currentTarget.value;
|
||||
setGroupStates(
|
||||
groupStates.map((state) =>
|
||||
state.channel_group ===
|
||||
group.channel_group
|
||||
? {
|
||||
...state,
|
||||
logoFilter: val,
|
||||
}
|
||||
: state
|
||||
)
|
||||
);
|
||||
}}
|
||||
/>
|
||||
{logosLoading && (
|
||||
<Text size="xs" c="dimmed">
|
||||
Loading...
|
||||
</Text>
|
||||
)}
|
||||
</Group>
|
||||
|
||||
<ScrollArea style={{ height: 200 }}>
|
||||
{(() => {
|
||||
const logoOptions = [
|
||||
{ id: '0', name: 'Default' },
|
||||
...Object.values(channelLogos),
|
||||
];
|
||||
const filteredLogos = logoOptions.filter(
|
||||
(logo) =>
|
||||
logo.name
|
||||
.toLowerCase()
|
||||
.includes(
|
||||
(
|
||||
group.logoFilter || ''
|
||||
).toLowerCase()
|
||||
)
|
||||
);
|
||||
|
||||
if (filteredLogos.length === 0) {
|
||||
return (
|
||||
<Center style={{ height: 200 }}>
|
||||
<Text size="sm" c="dimmed">
|
||||
{group.logoFilter
|
||||
? 'No logos match your filter'
|
||||
: 'No logos available'}
|
||||
</Text>
|
||||
</Center>
|
||||
);
|
||||
}
|
||||
|
||||
return (
|
||||
<List
|
||||
height={200}
|
||||
itemCount={filteredLogos.length}
|
||||
itemSize={55}
|
||||
style={{ width: '100%' }}
|
||||
>
|
||||
{({ index, style }) => {
|
||||
const logoItem = filteredLogos[index];
|
||||
return (
|
||||
<div
|
||||
style={{
|
||||
...style,
|
||||
cursor: 'pointer',
|
||||
padding: '5px',
|
||||
borderRadius: '4px',
|
||||
}}
|
||||
onClick={() => {
|
||||
setGroupStates(
|
||||
groupStates.map((state) => {
|
||||
if (
|
||||
state.channel_group ===
|
||||
group.channel_group
|
||||
) {
|
||||
return {
|
||||
...state,
|
||||
custom_properties: {
|
||||
...state.custom_properties,
|
||||
custom_logo_id:
|
||||
logoItem.id,
|
||||
},
|
||||
logoPopoverOpened: false,
|
||||
};
|
||||
}
|
||||
return state;
|
||||
})
|
||||
);
|
||||
}}
|
||||
onMouseEnter={(e) => {
|
||||
e.currentTarget.style.backgroundColor =
|
||||
'rgb(68, 68, 68)';
|
||||
}}
|
||||
onMouseLeave={(e) => {
|
||||
e.currentTarget.style.backgroundColor =
|
||||
'transparent';
|
||||
}}
|
||||
>
|
||||
<Center
|
||||
style={{
|
||||
flexDirection: 'column',
|
||||
gap: '2px',
|
||||
}}
|
||||
>
|
||||
<img
|
||||
src={
|
||||
logoItem.cache_url || logo
|
||||
}
|
||||
height="30"
|
||||
style={{
|
||||
maxWidth: 80,
|
||||
objectFit: 'contain',
|
||||
}}
|
||||
alt={logoItem.name || 'Logo'}
|
||||
onError={(e) => {
|
||||
if (e.target.src !== logo) {
|
||||
e.target.src = logo;
|
||||
}
|
||||
}}
|
||||
/>
|
||||
<Text
|
||||
size="xs"
|
||||
c="dimmed"
|
||||
ta="center"
|
||||
style={{
|
||||
maxWidth: 80,
|
||||
overflow: 'hidden',
|
||||
textOverflow: 'ellipsis',
|
||||
whiteSpace: 'nowrap',
|
||||
}}
|
||||
>
|
||||
{logoItem.name || 'Default'}
|
||||
</Text>
|
||||
</Center>
|
||||
</div>
|
||||
);
|
||||
}}
|
||||
</List>
|
||||
);
|
||||
})()}
|
||||
</ScrollArea>
|
||||
</Popover.Dropdown>
|
||||
</Popover>
|
||||
|
||||
<Stack gap="xs" align="center">
|
||||
<LazyLogo
|
||||
logoId={group.custom_properties?.custom_logo_id}
|
||||
alt="custom logo"
|
||||
style={{ height: 40 }}
|
||||
/>
|
||||
</Stack>
|
||||
</Group>
|
||||
|
||||
<Button
|
||||
onClick={() => {
|
||||
setCurrentEditingGroupId(group.channel_group);
|
||||
setLogoModalOpen(true);
|
||||
}}
|
||||
fullWidth
|
||||
variant="default"
|
||||
size="xs"
|
||||
mt="xs"
|
||||
>
|
||||
Upload or Create Logo
|
||||
</Button>
|
||||
</Box>
|
||||
)}
|
||||
|
||||
{/* Show EPG selector when force_epg is selected */}
|
||||
{(group.custom_properties?.custom_epg_id !== undefined ||
|
||||
group.custom_properties?.force_dummy_epg) && (
|
||||
<Tooltip
|
||||
label="Force a specific EPG source for all auto-synced channels in this group. For dummy EPGs, all channels will share the same EPG data. For regular EPG sources (XMLTV, Schedules Direct), channels will be matched by their tvg_id within that source. Select 'No EPG' to disable EPG assignment."
|
||||
withArrow
|
||||
>
|
||||
<Select
|
||||
label="EPG Source"
|
||||
placeholder="No EPG (Disabled)"
|
||||
value={(() => {
|
||||
// Handle migration from force_dummy_epg
|
||||
if (
|
||||
group.custom_properties?.custom_epg_id !==
|
||||
undefined
|
||||
) {
|
||||
// Convert to string, use '0' for null/no EPG
|
||||
return group.custom_properties.custom_epg_id ===
|
||||
null
|
||||
? '0'
|
||||
: group.custom_properties.custom_epg_id.toString();
|
||||
} else if (
|
||||
group.custom_properties?.force_dummy_epg
|
||||
) {
|
||||
// Show "No EPG" for old force_dummy_epg configs
|
||||
return '0';
|
||||
}
|
||||
return '0';
|
||||
})()}
|
||||
onChange={(value) => {
|
||||
// Convert back: '0' means no EPG (null)
|
||||
const newValue =
|
||||
value === '0' ? null : parseInt(value);
|
||||
setGroupStates(
|
||||
groupStates.map((state) => {
|
||||
if (
|
||||
state.channel_group === group.channel_group
|
||||
) {
|
||||
return {
|
||||
...state,
|
||||
custom_properties: {
|
||||
...state.custom_properties,
|
||||
custom_epg_id: newValue,
|
||||
},
|
||||
};
|
||||
}
|
||||
return state;
|
||||
})
|
||||
);
|
||||
}}
|
||||
data={[
|
||||
{ value: '0', label: 'No EPG (Disabled)' },
|
||||
...[...epgSources]
|
||||
.sort((a, b) => a.name.localeCompare(b.name))
|
||||
.map((source) => ({
|
||||
value: source.id.toString(),
|
||||
label: `${source.name} (${
|
||||
source.source_type === 'dummy'
|
||||
? 'Dummy'
|
||||
: source.source_type === 'xmltv'
|
||||
? 'XMLTV'
|
||||
: source.source_type ===
|
||||
'schedules_direct'
|
||||
? 'Schedules Direct'
|
||||
: source.source_type
|
||||
})`,
|
||||
})),
|
||||
]}
|
||||
clearable
|
||||
searchable
|
||||
size="xs"
|
||||
/>
|
||||
</Tooltip>
|
||||
)}
|
||||
</>
|
||||
)}
|
||||
</Stack>
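
The EPG-source Select above folds three cases (an explicit `custom_epg_id`, the legacy `force_dummy_epg` flag, and "no EPG") into string values, with `'0'` as the sentinel for null. A minimal sketch of that mapping, assuming hypothetical helper names that are not part of the diff:

```javascript
// Sketch only: helper names are illustrative, the '0' sentinel matches the diff above.
const toSelectValue = (customProps) => {
  if (customProps?.custom_epg_id !== undefined) {
    // null means "No EPG", otherwise the numeric source id as a string
    return customProps.custom_epg_id === null
      ? '0'
      : String(customProps.custom_epg_id);
  }
  // Legacy force_dummy_epg configs fall back to "No EPG"
  return '0';
};

const fromSelectValue = (value) => (value === '0' ? null : parseInt(value, 10));
```
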
|
||||
|
|
@@ -808,6 +1201,16 @@ const LiveGroupFilter = ({
|
|||
))}
|
||||
</SimpleGrid>
|
||||
</Box>
|
||||
|
||||
{/* Logo Upload Modal */}
|
||||
<LogoForm
|
||||
isOpen={logoModalOpen}
|
||||
onClose={() => {
|
||||
setLogoModalOpen(false);
|
||||
setCurrentEditingGroupId(null);
|
||||
}}
|
||||
onSuccess={handleLogoSuccess}
|
||||
/>
|
||||
</Stack>
|
||||
);
|
||||
};
|
||||
|
|
|
|||
|
|
@@ -1,7 +1,24 @@
|
|||
import React, { useState, useEffect } from 'react';
|
||||
import { useNavigate } from 'react-router-dom';
|
||||
import useAuthStore from '../../store/auth';
|
||||
import { Paper, Title, TextInput, Button, Center, Stack } from '@mantine/core';
|
||||
import API from '../../api';
|
||||
import {
|
||||
Paper,
|
||||
Title,
|
||||
TextInput,
|
||||
Button,
|
||||
Center,
|
||||
Stack,
|
||||
Text,
|
||||
Image,
|
||||
Group,
|
||||
Divider,
|
||||
Modal,
|
||||
Anchor,
|
||||
Code,
|
||||
Checkbox,
|
||||
} from '@mantine/core';
|
||||
import logo from '../../assets/logo.png';
|
||||
|
||||
const LoginForm = () => {
|
||||
const login = useAuthStore((s) => s.login);
|
||||
|
|
@@ -11,12 +28,69 @@ const LoginForm = () => {
|
|||
|
||||
const navigate = useNavigate(); // Hook to navigate to other routes
|
||||
const [formData, setFormData] = useState({ username: '', password: '' });
|
||||
const [rememberMe, setRememberMe] = useState(false);
|
||||
const [savePassword, setSavePassword] = useState(false);
|
||||
const [forgotPasswordOpened, setForgotPasswordOpened] = useState(false);
|
||||
const [version, setVersion] = useState(null);
|
||||
const [isLoading, setIsLoading] = useState(false);
|
||||
|
||||
// useEffect(() => {
|
||||
// if (isAuthenticated) {
|
||||
// navigate('/channels');
|
||||
// }
|
||||
// }, [isAuthenticated, navigate]);
|
||||
// Simple base64 encoding/decoding for localStorage
|
||||
// Note: This is obfuscation, not encryption. Use browser's password manager for real security.
|
||||
const encodePassword = (password) => {
|
||||
try {
|
||||
return btoa(password);
|
||||
} catch (error) {
|
||||
console.error('Encoding error:', error);
|
||||
return null;
|
||||
}
|
||||
};
|
||||
|
||||
const decodePassword = (encoded) => {
|
||||
try {
|
||||
return atob(encoded);
|
||||
} catch (error) {
|
||||
console.error('Decoding error:', error);
|
||||
return '';
|
||||
}
|
||||
};
|
||||
|
||||
useEffect(() => {
|
||||
// Fetch version info
|
||||
API.getVersion().then((data) => {
|
||||
setVersion(data?.version);
|
||||
});
|
||||
}, []);
|
||||
|
||||
useEffect(() => {
|
||||
// Load saved username if it exists
|
||||
const savedUsername = localStorage.getItem(
|
||||
'dispatcharr_remembered_username'
|
||||
);
|
||||
const savedPassword = localStorage.getItem('dispatcharr_saved_password');
|
||||
|
||||
if (savedUsername) {
|
||||
setFormData((prev) => ({ ...prev, username: savedUsername }));
|
||||
setRememberMe(true);
|
||||
|
||||
if (savedPassword) {
|
||||
try {
|
||||
const decrypted = decodePassword(savedPassword);
|
||||
if (decrypted) {
|
||||
setFormData((prev) => ({ ...prev, password: decrypted }));
|
||||
setSavePassword(true);
|
||||
}
|
||||
} catch {
|
||||
// If decoding fails, just skip
|
||||
}
|
||||
}
|
||||
}
|
||||
}, []);
|
||||
|
||||
useEffect(() => {
|
||||
if (isAuthenticated) {
|
||||
navigate('/channels');
|
||||
}
|
||||
}, [isAuthenticated, navigate]);
|
||||
|
||||
const handleInputChange = (e) => {
|
||||
setFormData({
|
||||
|
|
@@ -27,13 +101,38 @@ const LoginForm = () => {
|
|||
|
||||
const handleSubmit = async (e) => {
|
||||
e.preventDefault();
|
||||
await login(formData);
|
||||
setIsLoading(true);
|
||||
|
||||
try {
|
||||
await login(formData);
|
||||
|
||||
// Save username if remember me is checked
|
||||
if (rememberMe) {
|
||||
localStorage.setItem(
|
||||
'dispatcharr_remembered_username',
|
||||
formData.username
|
||||
);
|
||||
|
||||
// Save password if save password is checked
|
||||
if (savePassword) {
|
||||
const encoded = encodePassword(formData.password);
|
||||
if (encoded) {
|
||||
localStorage.setItem('dispatcharr_saved_password', encoded);
|
||||
}
|
||||
} else {
|
||||
localStorage.removeItem('dispatcharr_saved_password');
|
||||
}
|
||||
} else {
|
||||
localStorage.removeItem('dispatcharr_remembered_username');
|
||||
localStorage.removeItem('dispatcharr_saved_password');
|
||||
}
|
||||
|
||||
await initData();
|
||||
navigate('/channels');
|
||||
// Navigation will happen automatically via the useEffect or route protection
|
||||
} catch (e) {
|
||||
console.log(`Failed to login: ${e}`);
|
||||
await logout();
|
||||
setIsLoading(false);
|
||||
}
|
||||
};
|
||||
|
||||
|
|
@@ -45,11 +144,29 @@ const LoginForm = () => {
|
|||
>
|
||||
<Paper
|
||||
elevation={3}
|
||||
style={{ padding: 30, width: '100%', maxWidth: 400 }}
|
||||
style={{
|
||||
padding: 30,
|
||||
width: '100%',
|
||||
maxWidth: 500,
|
||||
position: 'relative',
|
||||
}}
|
||||
>
|
||||
<Title order={4} align="center">
|
||||
Login
|
||||
</Title>
|
||||
<Stack align="center" spacing="lg">
|
||||
<Image
|
||||
src={logo}
|
||||
alt="Dispatcharr Logo"
|
||||
width={120}
|
||||
height={120}
|
||||
fit="contain"
|
||||
/>
|
||||
<Title order={2} align="center">
|
||||
Dispatcharr
|
||||
</Title>
|
||||
<Text size="sm" color="dimmed" align="center">
|
||||
Welcome back! Please log in to continue.
|
||||
</Text>
|
||||
<Divider style={{ width: '100%' }} />
|
||||
</Stack>
|
||||
<form onSubmit={handleSubmit}>
|
||||
<Stack>
|
||||
<TextInput
|
||||
|
|
@@ -69,12 +186,124 @@ const LoginForm = () => {
|
|||
// required
|
||||
/>
|
||||
|
||||
<Button type="submit" mt="sm">
|
||||
Login
|
||||
<Group justify="space-between" align="center">
|
||||
<Group align="center" spacing="xs">
|
||||
<Checkbox
|
||||
label="Remember me"
|
||||
checked={rememberMe}
|
||||
onChange={(e) => setRememberMe(e.currentTarget.checked)}
|
||||
size="sm"
|
||||
/>
|
||||
{rememberMe && (
|
||||
<Checkbox
|
||||
label="Save password"
|
||||
checked={savePassword}
|
||||
onChange={(e) => setSavePassword(e.currentTarget.checked)}
|
||||
size="sm"
|
||||
/>
|
||||
)}
|
||||
</Group>
|
||||
<Anchor
|
||||
size="sm"
|
||||
component="button"
|
||||
type="button"
|
||||
onClick={(e) => {
|
||||
e.preventDefault();
|
||||
setForgotPasswordOpened(true);
|
||||
}}
|
||||
>
|
||||
Forgot password?
|
||||
</Anchor>
|
||||
</Group>
|
||||
|
||||
<div
|
||||
style={{
|
||||
position: 'relative',
|
||||
height: '0',
|
||||
overflow: 'visible',
|
||||
marginBottom: '-4px',
|
||||
}}
|
||||
>
|
||||
{savePassword && (
|
||||
<Text
|
||||
size="xs"
|
||||
color="red"
|
||||
style={{
|
||||
marginTop: '-10px',
|
||||
marginBottom: '0',
|
||||
lineHeight: '1.2',
|
||||
}}
|
||||
>
|
||||
⚠ Password will be stored locally without encryption. Only
|
||||
use on trusted devices.
|
||||
</Text>
|
||||
)}
|
||||
</div>
|
||||
|
||||
<Button
|
||||
type="submit"
|
||||
fullWidth
|
||||
loading={isLoading}
|
||||
disabled={isLoading}
|
||||
loaderProps={{ type: 'dots' }}
|
||||
>
|
||||
{isLoading ? 'Logging you in...' : 'Login'}
|
||||
</Button>
|
||||
</Stack>
|
||||
</form>
|
||||
|
||||
{version && (
|
||||
<Text
|
||||
size="xs"
|
||||
color="dimmed"
|
||||
style={{
|
||||
position: 'absolute',
|
||||
bottom: 6,
|
||||
right: 30,
|
||||
}}
|
||||
>
|
||||
v{version}
|
||||
</Text>
|
||||
)}
|
||||
</Paper>
|
||||
|
||||
<Modal
|
||||
opened={forgotPasswordOpened}
|
||||
onClose={() => setForgotPasswordOpened(false)}
|
||||
title="Reset Your Password"
|
||||
centered
|
||||
>
|
||||
<Stack spacing="md">
|
||||
<Text>
|
||||
To reset your password, your administrator needs to run a Django
|
||||
management command:
|
||||
</Text>
|
||||
<div>
|
||||
<Text weight={500} size="sm" mb={8}>
|
||||
If running with Docker:
|
||||
</Text>
|
||||
<Code block>
|
||||
docker exec <container_name> python manage.py changepassword
|
||||
<username>
|
||||
</Code>
|
||||
</div>
|
||||
<div>
|
||||
<Text weight={500} size="sm" mb={8}>
|
||||
If running locally:
|
||||
</Text>
|
||||
<Code block>python manage.py changepassword <username></Code>
|
||||
</div>
|
||||
<Text size="sm" color="dimmed">
|
||||
The command will prompt for a new password. Replace
|
||||
<code><container_name></code> with your Docker container name
|
||||
and <code><username></code> with the account username.
|
||||
</Text>
|
||||
<Text size="sm" color="dimmed" italic>
|
||||
Please contact your system administrator to perform a password
|
||||
reset.
|
||||
</Text>
|
||||
</Stack>
|
||||
</Modal>
|
||||
</Center>
|
||||
);
|
||||
};
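
The remember-me flow above persists the username, and optionally a base64-obfuscated password, in localStorage. A condensed sketch of that storage round trip, assuming the key names shown in the diff; the wrapper functions are illustrative, not part of the component:

```javascript
// Obfuscation only, not encryption - mirrors the note in the component above.
const KEYS = {
  username: 'dispatcharr_remembered_username',
  password: 'dispatcharr_saved_password',
};

function saveCredentials({ username, password }, rememberMe, savePassword) {
  if (!rememberMe) {
    localStorage.removeItem(KEYS.username);
    localStorage.removeItem(KEYS.password);
    return;
  }
  localStorage.setItem(KEYS.username, username);
  if (savePassword) {
    localStorage.setItem(KEYS.password, btoa(password));
  } else {
    localStorage.removeItem(KEYS.password);
  }
}

function loadCredentials() {
  const username = localStorage.getItem(KEYS.username) || '';
  const encoded = localStorage.getItem(KEYS.password);
  let password = '';
  try {
    password = encoded ? atob(encoded) : '';
  } catch {
    password = ''; // ignore corrupt values, as the form does
  }
  return { username, password };
}
```
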
|
||||
|
|
|
|||
|
|
@@ -226,7 +226,17 @@ const M3U = ({
|
|||
|
||||
return (
|
||||
<>
|
||||
<Modal size={700} opened={isOpen} onClose={close} title="M3U Account">
|
||||
<Modal
|
||||
size={700}
|
||||
opened={isOpen}
|
||||
onClose={close}
|
||||
title="M3U Account"
|
||||
scrollAreaComponent={Modal.NativeScrollArea}
|
||||
lockScroll={false}
|
||||
withinPortal={true}
|
||||
trapFocus={false}
|
||||
yOffset="2vh"
|
||||
>
|
||||
<LoadingOverlay
|
||||
visible={form.submitting}
|
||||
overlayBlur={2}
|
||||
|
|
|
|||
|
|
@@ -253,7 +253,16 @@ const M3UFilters = ({ playlist, isOpen, onClose }) => {
|
|||
|
||||
return (
|
||||
<>
|
||||
<Modal opened={isOpen} onClose={onClose} title="Filters" size="lg">
|
||||
<Modal
|
||||
opened={isOpen}
|
||||
onClose={onClose}
|
||||
title="Filters"
|
||||
size="lg"
|
||||
scrollAreaComponent={Modal.NativeScrollArea}
|
||||
lockScroll={false}
|
||||
withinPortal={true}
|
||||
yOffset="2vh"
|
||||
>
|
||||
<Alert
|
||||
icon={<Info size={16} />}
|
||||
color="blue"
|
||||
|
|
|
|||
|
|
@@ -77,27 +77,29 @@ const M3UGroupFilter = ({ playlist = null, isOpen, onClose }) => {
|
|||
}
|
||||
|
||||
setGroupStates(
|
||||
playlist.channel_groups.map((group) => {
|
||||
// Parse custom_properties if present
|
||||
let customProps = {};
|
||||
if (group.custom_properties) {
|
||||
try {
|
||||
customProps =
|
||||
typeof group.custom_properties === 'string'
|
||||
? JSON.parse(group.custom_properties)
|
||||
: group.custom_properties;
|
||||
} catch (e) {
|
||||
customProps = {};
|
||||
playlist.channel_groups
|
||||
.filter((group) => channelGroups[group.channel_group]) // Filter out groups that don't exist
|
||||
.map((group) => {
|
||||
// Parse custom_properties if present
|
||||
let customProps = {};
|
||||
if (group.custom_properties) {
|
||||
try {
|
||||
customProps =
|
||||
typeof group.custom_properties === 'string'
|
||||
? JSON.parse(group.custom_properties)
|
||||
: group.custom_properties;
|
||||
} catch (e) {
|
||||
customProps = {};
|
||||
}
|
||||
}
|
||||
}
|
||||
return {
|
||||
...group,
|
||||
name: channelGroups[group.channel_group].name,
|
||||
auto_channel_sync: group.auto_channel_sync || false,
|
||||
auto_sync_channel_start: group.auto_sync_channel_start || 1.0,
|
||||
custom_properties: customProps,
|
||||
};
|
||||
})
|
||||
return {
|
||||
...group,
|
||||
name: channelGroups[group.channel_group].name,
|
||||
auto_channel_sync: group.auto_channel_sync || false,
|
||||
auto_sync_channel_start: group.auto_sync_channel_start || 1.0,
|
||||
custom_properties: customProps,
|
||||
};
|
||||
})
|
||||
);
|
||||
}, [playlist, channelGroups]);
|
||||
|
||||
|
|
@@ -184,6 +186,10 @@ const M3UGroupFilter = ({ playlist = null, isOpen, onClose }) => {
|
|||
title="M3U Group Filter & Auto Channel Sync"
|
||||
size={1000}
|
||||
styles={{ content: { '--mantine-color-body': '#27272A' } }}
|
||||
scrollAreaComponent={Modal.NativeScrollArea}
|
||||
lockScroll={false}
|
||||
withinPortal={true}
|
||||
yOffset="2vh"
|
||||
>
|
||||
<LoadingOverlay visible={isLoading} overlayBlur={2} />
|
||||
<Stack>
|
||||
|
|
|
|||
|
|
@@ -192,7 +192,15 @@ const M3UProfiles = ({ playlist = null, isOpen, onClose }) => {
|
|||
|
||||
return (
|
||||
<>
|
||||
<Modal opened={isOpen} onClose={onClose} title="Profiles">
|
||||
<Modal
|
||||
opened={isOpen}
|
||||
onClose={onClose}
|
||||
title="Profiles"
|
||||
scrollAreaComponent={Modal.NativeScrollArea}
|
||||
lockScroll={false}
|
||||
withinPortal={true}
|
||||
yOffset="2vh"
|
||||
>
|
||||
{profilesArray
|
||||
.sort((a, b) => {
|
||||
// Always put default profile first
|
||||
|
|
|
|||
|
|
@@ -25,10 +25,22 @@ const Stream = ({ stream = null, isOpen, onClose }) => {
|
|||
}),
|
||||
onSubmit: async (values, { setSubmitting, resetForm }) => {
|
||||
console.log(values);
|
||||
|
||||
// Convert string IDs back to integers for the API
|
||||
const payload = {
|
||||
...values,
|
||||
channel_group: values.channel_group
|
||||
? parseInt(values.channel_group, 10)
|
||||
: null,
|
||||
stream_profile_id: values.stream_profile_id
|
||||
? parseInt(values.stream_profile_id, 10)
|
||||
: null,
|
||||
};
|
||||
|
||||
if (stream?.id) {
|
||||
await API.updateStream({ id: stream.id, ...values });
|
||||
await API.updateStream({ id: stream.id, ...payload });
|
||||
} else {
|
||||
await API.addStream(values);
|
||||
await API.addStream(payload);
|
||||
}
|
||||
|
||||
resetForm();
|
||||
|
|
@@ -42,12 +54,18 @@ const Stream = ({ stream = null, isOpen, onClose }) => {
|
|||
formik.setValues({
|
||||
name: stream.name,
|
||||
url: stream.url,
|
||||
channel_group: stream.channel_group,
|
||||
stream_profile_id: stream.stream_profile_id,
|
||||
// Convert IDs to strings to match Select component values
|
||||
channel_group: stream.channel_group
|
||||
? String(stream.channel_group)
|
||||
: null,
|
||||
stream_profile_id: stream.stream_profile_id
|
||||
? String(stream.stream_profile_id)
|
||||
: '',
|
||||
});
|
||||
} else {
|
||||
formik.resetForm();
|
||||
}
|
||||
// eslint-disable-next-line react-hooks/exhaustive-deps
|
||||
}, [stream]);
|
||||
|
||||
if (!isOpen) {
|
||||
|
|
|
|||
|
|
@@ -23,6 +23,7 @@ import {
|
|||
ArrowUpNarrowWide,
|
||||
ArrowUpDown,
|
||||
ArrowDownWideNarrow,
|
||||
Search,
|
||||
} from 'lucide-react';
|
||||
import {
|
||||
Box,
|
||||
|
|
@@ -307,6 +308,7 @@ const ChannelsTable = ({}) => {
|
|||
const [channelToDelete, setChannelToDelete] = useState(null);
|
||||
|
||||
// Column sizing state for resizable columns
|
||||
// Store in localStorage but with empty object as default
|
||||
const [columnSizing, setColumnSizing] = useLocalStorage(
|
||||
'channels-table-column-sizing',
|
||||
{}
|
||||
|
|
@@ -882,7 +884,12 @@ const ChannelsTable = ({}) => {
|
|||
),
|
||||
},
|
||||
],
|
||||
[selectedProfileId, channelGroups, logos, theme, columnSizing]
|
||||
// Note: columnSizing is intentionally excluded from dependencies to prevent
|
||||
// columns from being recreated during drag operations (which causes infinite loops).
|
||||
// The column.size values are only used for INITIAL sizing - TanStack Table manages
|
||||
// the actual sizes through its own state after initialization.
|
||||
// eslint-disable-next-line react-hooks/exhaustive-deps
|
||||
[selectedProfileId, channelGroups, logos, theme]
|
||||
);
|
||||
|
||||
const renderHeaderCell = (header) => {
|
||||
|
|
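
The dependency-array comment above describes a common TanStack Table pattern: column definitions carry only the initial sizes, while the table instance owns the live sizing state. A minimal sketch of that wiring, assuming @tanstack/react-table v8; the hook and variable names are illustrative, not the project's actual code:

```javascript
import { useMemo, useState } from 'react';
import { useReactTable, getCoreRowModel } from '@tanstack/react-table';

function useResizableTable(data, buildColumns) {
  // Live sizes live here, not in the column defs, so the memo below
  // never needs columnSizing as a dependency.
  const [columnSizing, setColumnSizing] = useState({});
  const columns = useMemo(() => buildColumns(), [buildColumns]);

  return useReactTable({
    data,
    columns,
    state: { columnSizing },
    onColumnSizingChange: setColumnSizing,
    columnResizeMode: 'onChange',
    getCoreRowModel: getCoreRowModel(),
  });
}
```
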
@@ -943,6 +950,7 @@ const ChannelsTable = ({}) => {
|
|||
size="xs"
|
||||
variant="unstyled"
|
||||
className="table-input-header"
|
||||
leftSection={<Search size={14} opacity={0.5} />}
|
||||
/>
|
||||
<Center>
|
||||
{React.createElement(sortingIcon, {
|
||||
|
|
@@ -979,17 +987,18 @@ const ChannelsTable = ({}) => {
|
|||
filters,
|
||||
pagination,
|
||||
sorting,
|
||||
columnSizing,
|
||||
setColumnSizing,
|
||||
manualPagination: true,
|
||||
manualSorting: true,
|
||||
manualFiltering: true,
|
||||
enableRowSelection: true,
|
||||
onRowSelectionChange: onRowSelectionChange,
|
||||
state: {
|
||||
columnSizing,
|
||||
pagination,
|
||||
sorting,
|
||||
},
|
||||
onColumnSizingChange: setColumnSizing,
|
||||
columnResizeMode: 'onChange',
|
||||
getExpandedRowHeight: (row) => {
|
||||
return 20 + 28 * row.original.streams.length;
|
||||
},
|
||||
|
|
|
|||
|
|
@@ -105,6 +105,7 @@ const CustomTableHeader = ({
|
|||
...(header.column.columnDef.style &&
|
||||
header.column.columnDef.style),
|
||||
height: '100%',
|
||||
width: '100%',
|
||||
paddingRight: header.column.getCanResize() ? '8px' : '0px', // Add padding for resize handle
|
||||
}}
|
||||
>
|
||||
|
|
|
|||
|
|
@@ -2,6 +2,7 @@ import React, { useEffect, useMemo, useRef, useState } from 'react';
|
|||
import API from '../../api';
|
||||
import useEPGsStore from '../../store/epgs';
|
||||
import EPGForm from '../forms/EPG';
|
||||
import DummyEPGForm from '../forms/DummyEPG';
|
||||
import { TableHelper } from '../../helpers';
|
||||
import {
|
||||
ActionIcon,
|
||||
|
|
@@ -17,6 +18,7 @@ import {
|
|||
Progress,
|
||||
Stack,
|
||||
Group,
|
||||
Menu,
|
||||
} from '@mantine/core';
|
||||
import { notifications } from '@mantine/notifications';
|
||||
import {
|
||||
|
|
@@ -27,6 +29,7 @@ import {
|
|||
SquareMinus,
|
||||
SquarePen,
|
||||
SquarePlus,
|
||||
ChevronDown,
|
||||
} from 'lucide-react';
|
||||
import dayjs from 'dayjs';
|
||||
import useSettingsStore from '../../store/settings';
|
||||
|
|
@@ -62,6 +65,7 @@ const getStatusColor = (status) => {
|
|||
const RowActions = ({ tableSize, row, editEPG, deleteEPG, refreshEPG }) => {
|
||||
const iconSize =
|
||||
tableSize == 'default' ? 'sm' : tableSize == 'compact' ? 'xs' : 'md';
|
||||
const isDummyEPG = row.original.source_type === 'dummy';
|
||||
|
||||
return (
|
||||
<>
|
||||
|
|
@@ -88,7 +92,7 @@ const RowActions = ({ tableSize, row, editEPG, deleteEPG, refreshEPG }) => {
|
|||
size={iconSize} // Use standardized icon size
|
||||
color="blue.5" // Red color for delete actions
|
||||
onClick={() => refreshEPG(row.original.id)}
|
||||
disabled={!row.original.is_active}
|
||||
disabled={!row.original.is_active || isDummyEPG}
|
||||
>
|
||||
<RefreshCcw size={tableSize === 'compact' ? 16 : 18} />{' '}
|
||||
{/* Small icon size */}
|
||||
|
|
@@ -100,6 +104,7 @@ const RowActions = ({ tableSize, row, editEPG, deleteEPG, refreshEPG }) => {
|
|||
const EPGsTable = () => {
|
||||
const [epg, setEPG] = useState(null);
|
||||
const [epgModalOpen, setEPGModalOpen] = useState(false);
|
||||
const [dummyEpgModalOpen, setDummyEpgModalOpen] = useState(false);
|
||||
const [rowSelection, setRowSelection] = useState([]);
|
||||
const [confirmDeleteOpen, setConfirmDeleteOpen] = useState(false);
|
||||
const [deleteTarget, setDeleteTarget] = useState(null);
|
||||
|
|
@@ -126,6 +131,12 @@ const EPGsTable = () => {
|
|||
|
||||
const toggleActive = async (epg) => {
|
||||
try {
|
||||
// Validate that epg is a valid object with an id
|
||||
if (!epg || typeof epg !== 'object' || !epg.id) {
|
||||
console.error('toggleActive called with invalid epg:', epg);
|
||||
return;
|
||||
}
|
||||
|
||||
// Send only the is_active field to trigger our special handling
|
||||
await API.updateEPG(
|
||||
{
|
||||
|
|
@@ -176,8 +187,6 @@ const EPGsTable = () => {
|
|||
);
|
||||
};
|
||||
|
||||
console.log(epgs);
|
||||
|
||||
const columns = useMemo(
|
||||
//column definitions...
|
||||
() => [
|
||||
|
|
@@ -224,11 +233,14 @@ const EPGsTable = () => {
|
|||
size: 100,
|
||||
cell: ({ row }) => {
|
||||
const data = row.original;
|
||||
const isDummyEPG = data.source_type === 'dummy';
|
||||
|
||||
// Dummy EPGs always show idle status
|
||||
const displayStatus = isDummyEPG ? 'idle' : data.status;
|
||||
|
||||
// Always show status text, even when there's progress happening
|
||||
return (
|
||||
<Text size="sm" fw={500} c={getStatusColor(data.status)}>
|
||||
{formatStatusText(data.status)}
|
||||
<Text size="sm" fw={500} c={getStatusColor(displayStatus)}>
|
||||
{formatStatusText(displayStatus)}
|
||||
</Text>
|
||||
);
|
||||
},
|
||||
|
|
@@ -241,6 +253,12 @@ const EPGsTable = () => {
|
|||
grow: true,
|
||||
cell: ({ row }) => {
|
||||
const data = row.original;
|
||||
const isDummyEPG = data.source_type === 'dummy';
|
||||
|
||||
// Dummy EPGs don't have status messages
|
||||
if (isDummyEPG) {
|
||||
return null;
|
||||
}
|
||||
|
||||
// Check if there's an active progress for this EPG - show progress first if active
|
||||
if (
|
||||
|
|
@@ -305,15 +323,19 @@ const EPGsTable = () => {
|
|||
mantineTableBodyCellProps: {
|
||||
align: 'left',
|
||||
},
|
||||
cell: ({ row, cell }) => (
|
||||
<Box sx={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<Switch
|
||||
size="xs"
|
||||
checked={cell.getValue()}
|
||||
onChange={() => toggleActive(row.original)}
|
||||
/>
|
||||
</Box>
|
||||
),
|
||||
cell: ({ row, cell }) => {
|
||||
const isDummyEPG = row.original.source_type === 'dummy';
|
||||
return (
|
||||
<Box sx={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<Switch
|
||||
size="xs"
|
||||
checked={cell.getValue()}
|
||||
onChange={() => toggleActive(row.original)}
|
||||
disabled={isDummyEPG}
|
||||
/>
|
||||
</Box>
|
||||
);
|
||||
},
|
||||
},
|
||||
{
|
||||
id: 'actions',
|
||||
|
|
@@ -329,9 +351,24 @@ const EPGsTable = () => {
|
|||
|
||||
const editEPG = async (epg = null) => {
|
||||
setEPG(epg);
|
||||
// Open the appropriate modal based on source type
|
||||
if (epg?.source_type === 'dummy') {
|
||||
setDummyEpgModalOpen(true);
|
||||
} else {
|
||||
setEPGModalOpen(true);
|
||||
}
|
||||
};
|
||||
|
||||
const createStandardEPG = () => {
|
||||
setEPG(null);
|
||||
setEPGModalOpen(true);
|
||||
};
|
||||
|
||||
const createDummyEPG = () => {
|
||||
setEPG(null);
|
||||
setDummyEpgModalOpen(true);
|
||||
};
|
||||
|
||||
const deleteEPG = async (id) => {
|
||||
// Get EPG details for the confirmation dialog
|
||||
const epgObj = epgs[id];
|
||||
|
|
@@ -365,6 +402,11 @@ const EPGsTable = () => {
|
|||
setEPGModalOpen(false);
|
||||
};
|
||||
|
||||
const closeDummyEPGForm = () => {
|
||||
setEPG(null);
|
||||
setDummyEpgModalOpen(false);
|
||||
};
|
||||
|
||||
useEffect(() => {
|
||||
setData(
|
||||
Object.values(epgs).sort((a, b) => {
|
||||
|
|
@@ -522,21 +564,31 @@ const EPGsTable = () => {
|
|||
>
|
||||
EPGs
|
||||
</Text>
|
||||
<Button
|
||||
leftSection={<SquarePlus size={18} />}
|
||||
variant="light"
|
||||
size="xs"
|
||||
onClick={() => editEPG()}
|
||||
p={5}
|
||||
color="green"
|
||||
style={{
|
||||
borderWidth: '1px',
|
||||
borderColor: 'green',
|
||||
color: 'white',
|
||||
}}
|
||||
>
|
||||
Add EPG
|
||||
</Button>
|
||||
<Menu shadow="md" width={200}>
|
||||
<Menu.Target>
|
||||
<Button
|
||||
leftSection={<SquarePlus size={18} />}
|
||||
rightSection={<ChevronDown size={16} />}
|
||||
variant="light"
|
||||
size="xs"
|
||||
p={5}
|
||||
color="green"
|
||||
style={{
|
||||
borderWidth: '1px',
|
||||
borderColor: 'green',
|
||||
color: 'white',
|
||||
}}
|
||||
>
|
||||
Add EPG
|
||||
</Button>
|
||||
</Menu.Target>
|
||||
<Menu.Dropdown>
|
||||
<Menu.Item onClick={createStandardEPG}>
|
||||
Standard EPG Source
|
||||
</Menu.Item>
|
||||
<Menu.Item onClick={createDummyEPG}>Dummy EPG Source</Menu.Item>
|
||||
</Menu.Dropdown>
|
||||
</Menu>
|
||||
</Flex>
|
||||
|
||||
<Paper
|
||||
|
|
@@ -579,6 +631,11 @@ const EPGsTable = () => {
|
|||
</Box>
|
||||
|
||||
<EPGForm epg={epg} isOpen={epgModalOpen} onClose={closeEPGForm} />
|
||||
<DummyEPGForm
|
||||
epg={epg}
|
||||
isOpen={dummyEpgModalOpen}
|
||||
onClose={closeDummyEPGForm}
|
||||
/>
|
||||
|
||||
<ConfirmationDialog
|
||||
opened={confirmDeleteOpen}
|
||||
|
|
|
|||
|
|
@@ -115,6 +115,7 @@ const LogosTable = () => {
|
|||
pageSize: pageSize,
|
||||
});
|
||||
const [paginationString, setPaginationString] = useState('');
|
||||
const tableRef = React.useRef(null);
|
||||
|
||||
// Debounce the name filter
|
||||
useEffect(() => {
|
||||
|
|
@@ -162,6 +163,14 @@ const LogosTable = () => {
|
|||
/**
|
||||
* Functions
|
||||
*/
|
||||
const clearSelections = useCallback(() => {
|
||||
setSelectedRows(new Set());
|
||||
// Clear table's internal selection state if table is initialized
|
||||
if (tableRef.current?.setSelectedTableIds) {
|
||||
tableRef.current.setSelectedTableIds([]);
|
||||
}
|
||||
}, []);
|
||||
|
||||
const executeDeleteLogo = useCallback(
|
||||
async (id, deleteFile = false) => {
|
||||
setIsLoading(true);
|
||||
|
|
@@ -185,10 +194,10 @@ const LogosTable = () => {
|
|||
setDeleteTarget(null);
|
||||
setLogoToDelete(null);
|
||||
setIsBulkDelete(false);
|
||||
setSelectedRows(new Set()); // Clear selections
|
||||
clearSelections(); // Clear selections
|
||||
}
|
||||
},
|
||||
[fetchAllLogos]
|
||||
[fetchAllLogos, clearSelections]
|
||||
);
|
||||
|
||||
const executeBulkDelete = useCallback(
|
||||
|
|
@@ -215,10 +224,10 @@ const LogosTable = () => {
|
|||
setIsLoading(false);
|
||||
setConfirmDeleteOpen(false);
|
||||
setIsBulkDelete(false);
|
||||
setSelectedRows(new Set()); // Clear selections
|
||||
clearSelections(); // Clear selections
|
||||
}
|
||||
},
|
||||
[selectedRows, fetchAllLogos]
|
||||
[selectedRows, fetchAllLogos, clearSelections]
|
||||
);
|
||||
|
||||
const executeCleanupUnused = useCallback(
|
||||
|
|
@@ -226,7 +235,6 @@ const LogosTable = () => {
|
|||
setIsCleaningUp(true);
|
||||
try {
|
||||
const result = await API.cleanupUnusedLogos(deleteFiles);
|
||||
await fetchAllLogos(); // Refresh all logos to maintain full view
|
||||
|
||||
let message = `Successfully deleted ${result.deleted_count} unused logos`;
|
||||
if (result.local_files_deleted > 0) {
|
||||
|
|
@@ -238,6 +246,9 @@ const LogosTable = () => {
|
|||
message: message,
|
||||
color: 'green',
|
||||
});
|
||||
|
||||
// Force refresh all logos after cleanup to maintain full view
|
||||
await fetchAllLogos(true);
|
||||
} catch (error) {
|
||||
notifications.show({
|
||||
title: 'Cleanup Failed',
|
||||
|
|
@@ -247,10 +258,10 @@ const LogosTable = () => {
|
|||
} finally {
|
||||
setIsCleaningUp(false);
|
||||
setConfirmCleanupOpen(false);
|
||||
setSelectedRows(new Set()); // Clear selections after cleanup
|
||||
clearSelections(); // Clear selections after cleanup
|
||||
}
|
||||
},
|
||||
[fetchAllLogos]
|
||||
[fetchAllLogos, clearSelections]
|
||||
);
|
||||
|
||||
const editLogo = useCallback(async (logo = null) => {
|
||||
|
|
@@ -287,10 +298,10 @@ const LogosTable = () => {
|
|||
if (checked) {
|
||||
setSelectedRows(new Set(data.map((logo) => logo.id)));
|
||||
} else {
|
||||
setSelectedRows(new Set());
|
||||
clearSelections();
|
||||
}
|
||||
},
|
||||
[data]
|
||||
[data, clearSelections]
|
||||
);
|
||||
|
||||
const deleteBulkLogos = useCallback(() => {
|
||||
|
|
@@ -308,8 +319,8 @@ const LogosTable = () => {
|
|||
|
||||
// Clear selections when logos data changes (e.g., after filtering)
|
||||
useEffect(() => {
|
||||
setSelectedRows(new Set());
|
||||
}, [data.length]);
|
||||
clearSelections();
|
||||
}, [data.length, clearSelections]);
|
||||
|
||||
// Update pagination when pageSize changes
|
||||
useEffect(() => {
|
||||
|
|
@@ -614,6 +625,11 @@ const LogosTable = () => {
|
|||
},
|
||||
});
|
||||
|
||||
// Store table reference for clearing selections
|
||||
React.useEffect(() => {
|
||||
tableRef.current = table;
|
||||
}, [table]);
|
||||
|
||||
return (
|
||||
<>
|
||||
<Box
|
||||
|
|
@@ -626,25 +642,6 @@ const LogosTable = () => {
|
|||
}}
|
||||
>
|
||||
<Stack gap="md" style={{ maxWidth: '1200px', width: '100%' }}>
|
||||
<Flex style={{ alignItems: 'center', paddingBottom: 10 }} gap={15}>
|
||||
<Text
|
||||
style={{
|
||||
fontFamily: 'Inter, sans-serif',
|
||||
fontWeight: 500,
|
||||
fontSize: '20px',
|
||||
lineHeight: 1,
|
||||
letterSpacing: '-0.3px',
|
||||
color: 'gray.6',
|
||||
marginBottom: 0,
|
||||
}}
|
||||
>
|
||||
Logos
|
||||
</Text>
|
||||
<Text size="sm" c="dimmed">
|
||||
({data.length} logo{data.length !== 1 ? 's' : ''})
|
||||
</Text>
|
||||
</Flex>
|
||||
|
||||
<Paper
|
||||
style={{
|
||||
backgroundColor: '#27272A',
|
||||
|
|
|
|||
|
|
@@ -3,7 +3,6 @@ import API from '../../api';
|
|||
import StreamForm from '../forms/Stream';
|
||||
import usePlaylistsStore from '../../store/playlists';
|
||||
import useChannelsStore from '../../store/channels';
|
||||
import useLogosStore from '../../store/logos';
|
||||
import { copyToClipboard, useDebounce } from '../../utils';
|
||||
import {
|
||||
SquarePlus,
|
||||
|
|
@@ -14,6 +13,7 @@ import {
|
|||
ArrowUpDown,
|
||||
ArrowUpNarrowWide,
|
||||
ArrowDownWideNarrow,
|
||||
Search,
|
||||
} from 'lucide-react';
|
||||
import {
|
||||
TextInput,
|
||||
|
|
@@ -51,6 +51,7 @@ import useChannelsTableStore from '../../store/channelsTable';
|
|||
import useWarningsStore from '../../store/warnings';
|
||||
import { CustomTable, useTable } from './CustomTable';
|
||||
import useLocalStorage from '../../hooks/useLocalStorage';
|
||||
import ConfirmationDialog from '../ConfirmationDialog';
|
||||
|
||||
const StreamRowActions = ({
|
||||
theme,
|
||||
|
|
@@ -66,7 +67,6 @@ const StreamRowActions = ({
|
|||
(state) =>
|
||||
state.channels.find((chan) => chan.id === selectedChannelIds[0])?.streams
|
||||
);
|
||||
const fetchLogos = useLogosStore((s) => s.fetchLogos);
|
||||
|
||||
const addStreamToChannel = async () => {
|
||||
await API.updateChannel({
|
||||
|
|
@@ -183,7 +183,7 @@ const StreamsTable = () => {
|
|||
const [pageCount, setPageCount] = useState(0);
|
||||
const [paginationString, setPaginationString] = useState('');
|
||||
const [isLoading, setIsLoading] = useState(true);
|
||||
const [sorting, setSorting] = useState([{ id: 'name', desc: '' }]);
|
||||
const [sorting, setSorting] = useState([{ id: 'name', desc: false }]);
|
||||
const [selectedStreamIds, setSelectedStreamIds] = useState([]);
|
||||
|
||||
// Channel numbering modal state
|
||||
|
|
@@ -200,6 +200,12 @@ const StreamsTable = () => {
|
|||
const [rememberSingleChoice, setRememberSingleChoice] = useState(false);
|
||||
const [currentStreamForChannel, setCurrentStreamForChannel] = useState(null);
|
||||
|
||||
// Confirmation dialog state
|
||||
const [confirmDeleteOpen, setConfirmDeleteOpen] = useState(false);
|
||||
const [deleteTarget, setDeleteTarget] = useState(null);
|
||||
const [streamToDelete, setStreamToDelete] = useState(null);
|
||||
const [isBulkDelete, setIsBulkDelete] = useState(false);
|
||||
|
||||
// const [allRowsSelected, setAllRowsSelected] = useState(false);
|
||||
|
||||
// Add local storage for page size
|
||||
|
|
@@ -243,7 +249,6 @@ const StreamsTable = () => {
|
|||
const channelGroups = useChannelsStore((s) => s.channelGroups);
|
||||
|
||||
const selectedChannelIds = useChannelsTableStore((s) => s.selectedChannelIds);
|
||||
const fetchLogos = useChannelsStore((s) => s.fetchLogos);
|
||||
const channelSelectionStreams = useChannelsTableStore(
|
||||
(state) =>
|
||||
state.channels.find((chan) => chan.id === selectedChannelIds[0])?.streams
|
||||
|
|
@@ -280,18 +285,21 @@ const StreamsTable = () => {
|
|||
grow: true,
|
||||
size: columnSizing.name || 200,
|
||||
cell: ({ getValue }) => (
|
||||
<Box
|
||||
style={{
|
||||
whiteSpace: 'pre',
|
||||
overflow: 'hidden',
|
||||
textOverflow: 'ellipsis',
|
||||
}}
|
||||
>
|
||||
{getValue()}
|
||||
</Box>
|
||||
<Tooltip label={getValue()} openDelay={500}>
|
||||
<Box
|
||||
style={{
|
||||
whiteSpace: 'pre',
|
||||
overflow: 'hidden',
|
||||
textOverflow: 'ellipsis',
|
||||
}}
|
||||
>
|
||||
{getValue()}
|
||||
</Box>
|
||||
</Tooltip>
|
||||
),
|
||||
},
|
||||
{
|
||||
header: 'Group',
|
||||
id: 'group',
|
||||
accessorFn: (row) =>
|
||||
channelGroups[row.channel_group]
|
||||
|
|
@@ -299,34 +307,37 @@ const StreamsTable = () => {
|
|||
: '',
|
||||
size: columnSizing.group || 150,
|
||||
cell: ({ getValue }) => (
|
||||
<Box
|
||||
style={{
|
||||
whiteSpace: 'pre',
|
||||
overflow: 'hidden',
|
||||
textOverflow: 'ellipsis',
|
||||
}}
|
||||
>
|
||||
{getValue()}
|
||||
</Box>
|
||||
<Tooltip label={getValue()} openDelay={500}>
|
||||
<Box
|
||||
style={{
|
||||
whiteSpace: 'pre',
|
||||
overflow: 'hidden',
|
||||
textOverflow: 'ellipsis',
|
||||
}}
|
||||
>
|
||||
{getValue()}
|
||||
</Box>
|
||||
</Tooltip>
|
||||
),
|
||||
},
|
||||
{
|
||||
header: 'M3U',
|
||||
id: 'm3u',
|
||||
size: columnSizing.m3u || 150,
|
||||
accessorFn: (row) =>
|
||||
playlists.find((playlist) => playlist.id === row.m3u_account)?.name,
|
||||
cell: ({ getValue }) => (
|
||||
<Box
|
||||
style={{
|
||||
whiteSpace: 'nowrap',
|
||||
overflow: 'hidden',
|
||||
textOverflow: 'ellipsis',
|
||||
}}
|
||||
>
|
||||
<Tooltip label={getValue()} openDelay={500}>
|
||||
<Box>{getValue()}</Box>
|
||||
</Tooltip>
|
||||
</Box>
|
||||
<Tooltip label={getValue()} openDelay={500}>
|
||||
<Box
|
||||
style={{
|
||||
whiteSpace: 'nowrap',
|
||||
overflow: 'hidden',
|
||||
textOverflow: 'ellipsis',
|
||||
}}
|
||||
>
|
||||
{getValue()}
|
||||
</Box>
|
||||
</Tooltip>
|
||||
),
|
||||
},
|
||||
],
|
||||
|
|
@@ -377,7 +388,14 @@ const StreamsTable = () => {
|
|||
|
||||
// Apply sorting
|
||||
if (sorting.length > 0) {
|
||||
const sortField = sorting[0].id;
|
||||
const columnId = sorting[0].id;
|
||||
// Map frontend column IDs to backend field names
|
||||
const fieldMapping = {
|
||||
name: 'name',
|
||||
group: 'channel_group__name',
|
||||
m3u: 'm3u_account__name',
|
||||
};
|
||||
const sortField = fieldMapping[columnId] || columnId;
|
||||
const sortDirection = sorting[0].desc ? '-' : '';
|
||||
params.append('ordering', `${sortDirection}${sortField}`);
|
||||
}
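
The mapping above translates frontend column ids into Django-style ordering fields before they are appended to the request's query string. A small sketch of the resulting behaviour; the helper function is illustrative, while the field names come from the diff:

```javascript
const fieldMapping = {
  name: 'name',
  group: 'channel_group__name',
  m3u: 'm3u_account__name',
};

// Returns the value used for the `ordering` query parameter.
function buildOrdering(sorting) {
  if (!sorting.length) return null;
  const { id, desc } = sorting[0];
  const field = fieldMapping[id] || id;
  return `${desc ? '-' : ''}${field}`;
}

// buildOrdering([{ id: 'group', desc: true }]) -> '-channel_group__name'
```
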
|
||||
|
|
@@ -510,13 +528,49 @@ const StreamsTable = () => {
|
|||
};
|
||||
|
||||
const deleteStream = async (id) => {
|
||||
// Get stream details for the confirmation dialog
|
||||
const streamObj = data.find((s) => s.id === id);
|
||||
setStreamToDelete(streamObj);
|
||||
setDeleteTarget(id);
|
||||
setIsBulkDelete(false);
|
||||
|
||||
// Skip warning if it's been suppressed
|
||||
if (isWarningSuppressed('delete-stream')) {
|
||||
return executeDeleteStream(id);
|
||||
}
|
||||
|
||||
setConfirmDeleteOpen(true);
|
||||
};
|
||||
|
||||
const executeDeleteStream = async (id) => {
|
||||
await API.deleteStream(id);
|
||||
fetchData();
|
||||
// Clear the selection for the deleted stream
|
||||
setSelectedStreamIds([]);
|
||||
table.setSelectedTableIds([]);
|
||||
setConfirmDeleteOpen(false);
|
||||
};
|
||||
|
||||
const deleteStreams = async () => {
|
||||
setIsBulkDelete(true);
|
||||
setStreamToDelete(null);
|
||||
|
||||
// Skip warning if it's been suppressed
|
||||
if (isWarningSuppressed('delete-streams')) {
|
||||
return executeDeleteStreams();
|
||||
}
|
||||
|
||||
setConfirmDeleteOpen(true);
|
||||
};
|
||||
|
||||
const executeDeleteStreams = async () => {
|
||||
setIsLoading(true);
|
||||
await API.deleteStreams(selectedStreamIds);
|
||||
setIsLoading(false);
|
||||
fetchData();
|
||||
setSelectedStreamIds([]);
|
||||
table.setSelectedTableIds([]);
|
||||
setConfirmDeleteOpen(false);
|
||||
};
|
||||
|
||||
const closeStreamForm = () => {
|
||||
|
|
@@ -647,8 +701,8 @@ const StreamsTable = () => {
|
|||
const sortField = sorting[0]?.id;
|
||||
const sortDirection = sorting[0]?.desc;
|
||||
|
||||
if (sortField == column) {
|
||||
if (sortDirection == false) {
|
||||
if (sortField === column) {
|
||||
if (sortDirection === false) {
|
||||
setSorting([
|
||||
{
|
||||
id: column,
|
||||
|
|
@@ -656,7 +710,8 @@ const StreamsTable = () => {
|
|||
},
|
||||
]);
|
||||
} else {
|
||||
setSorting([]);
|
||||
// Reset to default sort (name ascending) instead of clearing
|
||||
setSorting([{ id: 'name', desc: false }]);
|
||||
}
|
||||
} else {
|
||||
setSorting([
|
||||
|
|
@@ -681,7 +736,7 @@ const StreamsTable = () => {
|
|||
switch (header.id) {
|
||||
case 'name':
|
||||
return (
|
||||
<Flex gap="sm">
|
||||
<Flex align="center" style={{ width: '100%', flex: 1 }}>
|
||||
<TextInput
|
||||
name="name"
|
||||
placeholder="Name"
|
||||
|
|
@@ -691,19 +746,24 @@ const StreamsTable = () => {
|
|||
size="xs"
|
||||
variant="unstyled"
|
||||
className="table-input-header"
|
||||
/>
|
||||
<Center>
|
||||
{React.createElement(sortingIcon, {
|
||||
onClick: () => onSortingChange('name'),
|
||||
leftSection={<Search size={14} opacity={0.5} />}
|
||||
style={{ flex: 1, minWidth: 0 }}
|
||||
rightSectionPointerEvents="auto"
|
||||
rightSection={React.createElement(sortingIcon, {
|
||||
onClick: (e) => {
|
||||
e.stopPropagation();
|
||||
onSortingChange('name');
|
||||
},
|
||||
size: 14,
|
||||
style: { cursor: 'pointer' },
|
||||
})}
|
||||
</Center>
|
||||
/>
|
||||
</Flex>
|
||||
);
|
||||
|
||||
case 'group':
|
||||
return (
|
||||
<Box onClick={handleSelectClick} style={{ width: '100%' }}>
|
||||
<Flex align="center" style={{ width: '100%', flex: 1 }}>
|
||||
<MultiSelect
|
||||
placeholder="Group"
|
||||
searchable
|
||||
|
|
@@ -715,13 +775,23 @@ const StreamsTable = () => {
|
|||
variant="unstyled"
|
||||
className="table-input-header custom-multiselect"
|
||||
clearable
|
||||
style={{ flex: 1, minWidth: 0 }}
|
||||
rightSectionPointerEvents="auto"
|
||||
rightSection={React.createElement(sortingIcon, {
|
||||
onClick: (e) => {
|
||||
e.stopPropagation();
|
||||
onSortingChange('group');
|
||||
},
|
||||
size: 14,
|
||||
style: { cursor: 'pointer' },
|
||||
})}
|
||||
/>
|
||||
</Box>
|
||||
</Flex>
|
||||
);
|
||||
|
||||
case 'm3u':
|
||||
return (
|
||||
<Box onClick={handleSelectClick}>
|
||||
<Flex align="center" style={{ width: '100%', flex: 1 }}>
|
||||
<Select
|
||||
placeholder="M3U"
|
||||
searchable
|
||||
|
|
@@ -736,8 +806,18 @@ const StreamsTable = () => {
|
|||
}))}
|
||||
variant="unstyled"
|
||||
className="table-input-header"
|
||||
style={{ flex: 1, minWidth: 0 }}
|
||||
rightSectionPointerEvents="auto"
|
||||
rightSection={React.createElement(sortingIcon, {
|
||||
onClick: (e) => {
|
||||
e.stopPropagation();
|
||||
onSortingChange('m3u');
|
||||
},
|
||||
size: 14,
|
||||
style: { cursor: 'pointer' },
|
||||
})}
|
||||
/>
|
||||
</Box>
|
||||
</Flex>
|
||||
);
|
||||
}
|
||||
};
|
||||
|
|
@@ -831,8 +911,14 @@ const StreamsTable = () => {
|
|||
}}
|
||||
>
|
||||
{/* Top toolbar with Remove, Assign, Auto-match, and Add buttons */}
|
||||
<Group justify="space-between" style={{ paddingLeft: 10 }}>
|
||||
<Box>
|
||||
<Flex
|
||||
justify="space-between"
|
||||
align="center"
|
||||
wrap="nowrap"
|
||||
style={{ padding: 10 }}
|
||||
gap={6}
|
||||
>
|
||||
<Flex gap={6} wrap="nowrap" style={{ flexShrink: 0 }}>
|
||||
<Button
|
||||
leftSection={<SquarePlus size={18} />}
|
||||
variant={
|
||||
|
|
@@ -866,55 +952,47 @@ const StreamsTable = () => {
|
|||
>
|
||||
Add Streams to Channel
|
||||
</Button>
|
||||
</Box>
|
||||
|
||||
<Box
|
||||
style={{
|
||||
display: 'flex',
|
||||
justifyContent: 'flex-end',
|
||||
padding: 10,
|
||||
}}
|
||||
>
|
||||
<Flex gap={6}>
|
||||
<Button
|
||||
leftSection={<SquareMinus size={18} />}
|
||||
variant="default"
|
||||
size="xs"
|
||||
onClick={deleteStreams}
|
||||
disabled={selectedStreamIds.length == 0}
|
||||
>
|
||||
Remove
|
||||
</Button>
|
||||
<Button
|
||||
leftSection={<SquarePlus size={18} />}
|
||||
variant="default"
|
||||
size="xs"
|
||||
onClick={createChannelsFromStreams}
|
||||
p={5}
|
||||
disabled={selectedStreamIds.length == 0}
|
||||
>
|
||||
{`Create Channels (${selectedStreamIds.length})`}
|
||||
</Button>
|
||||
</Flex>
|
||||
|
||||
<Button
|
||||
leftSection={<SquarePlus size={18} />}
|
||||
variant="default"
|
||||
size="xs"
|
||||
onClick={createChannelsFromStreams}
|
||||
p={5}
|
||||
disabled={selectedStreamIds.length == 0}
|
||||
>
|
||||
{`Create Channels (${selectedStreamIds.length})`}
|
||||
</Button>
|
||||
<Flex gap={6} wrap="nowrap" style={{ flexShrink: 0 }}>
|
||||
<Button
|
||||
leftSection={<SquarePlus size={18} />}
|
||||
variant="light"
|
||||
size="xs"
|
||||
onClick={() => editStream()}
|
||||
p={5}
|
||||
color={theme.tailwind.green[5]}
|
||||
style={{
|
||||
borderWidth: '1px',
|
||||
borderColor: theme.tailwind.green[5],
|
||||
color: 'white',
|
||||
}}
|
||||
>
|
||||
Create Stream
|
||||
</Button>
|
||||
|
||||
<Button
|
||||
leftSection={<SquarePlus size={18} />}
|
||||
variant="light"
|
||||
size="xs"
|
||||
onClick={() => editStream()}
|
||||
p={5}
|
||||
color={theme.tailwind.green[5]}
|
||||
style={{
|
||||
borderWidth: '1px',
|
||||
borderColor: theme.tailwind.green[5],
|
||||
color: 'white',
|
||||
}}
|
||||
>
|
||||
Create Stream
|
||||
</Button>
|
||||
</Flex>
|
||||
</Box>
|
||||
</Group>
|
||||
<Button
|
||||
leftSection={<SquareMinus size={18} />}
|
||||
variant="default"
|
||||
size="xs"
|
||||
onClick={deleteStreams}
|
||||
disabled={selectedStreamIds.length == 0}
|
||||
>
|
||||
Remove
|
||||
</Button>
|
||||
</Flex>
|
||||
</Flex>
|
||||
|
||||
{initialDataCount === 0 && (
|
||||
<Center style={{ paddingTop: 20 }}>
|
||||
|
|
@@ -1175,6 +1253,39 @@ const StreamsTable = () => {
|
|||
</Group>
|
||||
</Stack>
|
||||
</Modal>
|
||||
|
||||
<ConfirmationDialog
|
||||
opened={confirmDeleteOpen}
|
||||
onClose={() => setConfirmDeleteOpen(false)}
|
||||
onConfirm={() =>
|
||||
isBulkDelete
|
||||
? executeDeleteStreams()
|
||||
: executeDeleteStream(deleteTarget)
|
||||
}
|
||||
title={`Confirm ${isBulkDelete ? 'Bulk ' : ''}Stream Deletion`}
|
||||
message={
|
||||
isBulkDelete ? (
|
||||
`Are you sure you want to delete ${selectedStreamIds.length} stream${selectedStreamIds.length !== 1 ? 's' : ''}? This action cannot be undone.`
|
||||
) : streamToDelete ? (
|
||||
<div style={{ whiteSpace: 'pre-line' }}>
|
||||
{`Are you sure you want to delete the following stream?
|
||||
|
||||
Name: ${streamToDelete.name}
|
||||
${streamToDelete.channel_group ? `Group: ${channelGroups[streamToDelete.channel_group]?.name || 'Unknown'}` : ''}
|
||||
${streamToDelete.m3u_account ? `M3U Account: ${playlists.find((p) => p.id === streamToDelete.m3u_account)?.name || 'Unknown'}` : ''}
|
||||
|
||||
This action cannot be undone.`}
|
||||
</div>
|
||||
) : (
|
||||
'Are you sure you want to delete this stream? This action cannot be undone.'
|
||||
)
|
||||
}
|
||||
confirmLabel="Delete"
|
||||
cancelLabel="Cancel"
|
||||
actionKey={isBulkDelete ? 'delete-streams' : 'delete-stream'}
|
||||
onSuppressChange={suppressWarning}
|
||||
size="md"
|
||||
/>
|
||||
</>
|
||||
);
|
||||
};
|
||||
|
|
|
|||
659
frontend/src/components/tables/VODLogosTable.jsx
Normal file
659
frontend/src/components/tables/VODLogosTable.jsx
Normal file
|
|
@@ -0,0 +1,659 @@
|
|||
import React, { useCallback, useEffect, useMemo, useState } from 'react';
|
||||
import {
|
||||
ActionIcon,
|
||||
Badge,
|
||||
Box,
|
||||
Button,
|
||||
Center,
|
||||
Checkbox,
|
||||
Flex,
|
||||
Group,
|
||||
Image,
|
||||
LoadingOverlay,
|
||||
NativeSelect,
|
||||
Pagination,
|
||||
Paper,
|
||||
Select,
|
||||
Stack,
|
||||
Text,
|
||||
TextInput,
|
||||
Tooltip,
|
||||
useMantineTheme,
|
||||
} from '@mantine/core';
|
||||
import { ExternalLink, Search, Trash2, Trash, SquareMinus } from 'lucide-react';
|
||||
import useVODLogosStore from '../../store/vodLogos';
|
||||
import useLocalStorage from '../../hooks/useLocalStorage';
|
||||
import { CustomTable, useTable } from './CustomTable';
|
||||
import ConfirmationDialog from '../ConfirmationDialog';
|
||||
import { notifications } from '@mantine/notifications';
|
||||
|
||||
const VODLogoRowActions = ({ theme, row, deleteLogo }) => {
|
||||
const [tableSize] = useLocalStorage('table-size', 'default');
|
||||
|
||||
const onDelete = useCallback(() => {
|
||||
deleteLogo(row.original.id);
|
||||
}, [row.original.id, deleteLogo]);
|
||||
|
||||
const iconSize =
|
||||
tableSize === 'default' ? 'sm' : tableSize === 'compact' ? 'xs' : 'md';
|
||||
|
||||
return (
|
||||
<Box style={{ width: '100%', justifyContent: 'left' }}>
|
||||
<Group gap={2} justify="center">
|
||||
<ActionIcon
|
||||
size={iconSize}
|
||||
variant="transparent"
|
||||
color={theme.tailwind.red[6]}
|
||||
onClick={onDelete}
|
||||
>
|
||||
<SquareMinus size="18" />
|
||||
</ActionIcon>
|
||||
</Group>
|
||||
</Box>
|
||||
);
|
||||
};
|
||||
|
||||
export default function VODLogosTable() {
|
||||
const theme = useMantineTheme();
|
||||
|
||||
const {
|
||||
logos,
|
||||
totalCount,
|
||||
isLoading,
|
||||
fetchVODLogos,
|
||||
deleteVODLogo,
|
||||
deleteVODLogos,
|
||||
cleanupUnusedVODLogos,
|
||||
} = useVODLogosStore();
|
||||
|
||||
const [currentPage, setCurrentPage] = useState(1);
|
||||
const [pageSize, setPageSize] = useState(25);
|
||||
const [nameFilter, setNameFilter] = useState('');
|
||||
const [usageFilter, setUsageFilter] = useState('all');
|
||||
const [selectedRows, setSelectedRows] = useState(new Set());
|
||||
const [confirmDeleteOpen, setConfirmDeleteOpen] = useState(false);
|
||||
const [deleteTarget, setDeleteTarget] = useState(null);
|
||||
const [confirmCleanupOpen, setConfirmCleanupOpen] = useState(false);
|
||||
const [paginationString, setPaginationString] = useState('');
|
||||
const [isCleaningUp, setIsCleaningUp] = useState(false);
|
||||
const tableRef = React.useRef(null);
|
||||
|
||||
// Calculate unused logos count
|
||||
const unusedLogosCount = useMemo(() => {
|
||||
return logos.filter(
|
||||
(logo) => logo.movie_count === 0 && logo.series_count === 0
|
||||
).length;
|
||||
}, [logos]);
|
||||
useEffect(() => {
|
||||
fetchVODLogos({
|
||||
page: currentPage,
|
||||
page_size: pageSize,
|
||||
name: nameFilter,
|
||||
usage: usageFilter === 'all' ? undefined : usageFilter,
|
||||
});
|
||||
}, [currentPage, pageSize, nameFilter, usageFilter, fetchVODLogos]);
|
||||
|
||||
const handleSelectAll = useCallback(
|
||||
(checked) => {
|
||||
if (checked) {
|
||||
setSelectedRows(new Set(logos.map((logo) => logo.id)));
|
||||
} else {
|
||||
setSelectedRows(new Set());
|
||||
}
|
||||
},
|
||||
[logos]
|
||||
);
|
||||
|
||||
const handleSelectRow = useCallback((id, checked) => {
|
||||
setSelectedRows((prev) => {
|
||||
const newSet = new Set(prev);
|
||||
if (checked) {
|
||||
newSet.add(id);
|
||||
} else {
|
||||
newSet.delete(id);
|
||||
}
|
||||
return newSet;
|
||||
});
|
||||
}, []);
|
||||
|
||||
const deleteLogo = useCallback((id) => {
|
||||
setDeleteTarget([id]);
|
||||
setConfirmDeleteOpen(true);
|
||||
}, []);
|
||||
|
||||
const handleDeleteSelected = useCallback(() => {
|
||||
setDeleteTarget(Array.from(selectedRows));
|
||||
setConfirmDeleteOpen(true);
|
||||
}, [selectedRows]);
|
||||
|
||||
const onRowSelectionChange = useCallback((newSelection) => {
|
||||
setSelectedRows(new Set(newSelection));
|
||||
}, []);
|
||||
|
||||
const clearSelections = useCallback(() => {
|
||||
setSelectedRows(new Set());
|
||||
// Clear table's internal selection state if table is initialized
|
||||
if (tableRef.current?.setSelectedTableIds) {
|
||||
tableRef.current.setSelectedTableIds([]);
|
||||
}
|
||||
}, []);
|
||||
|
||||
const handleConfirmDelete = async () => {
|
||||
try {
|
||||
if (deleteTarget.length === 1) {
|
||||
await deleteVODLogo(deleteTarget[0]);
|
||||
notifications.show({
|
||||
title: 'Success',
|
||||
message: 'VOD logo deleted successfully',
|
||||
color: 'green',
|
||||
});
|
||||
} else {
|
||||
await deleteVODLogos(deleteTarget);
|
||||
notifications.show({
|
||||
title: 'Success',
|
||||
message: `${deleteTarget.length} VOD logos deleted successfully`,
|
||||
color: 'green',
|
||||
});
|
||||
}
|
||||
} catch (error) {
|
||||
notifications.show({
|
||||
title: 'Error',
|
||||
message: error.message || 'Failed to delete VOD logos',
|
||||
color: 'red',
|
||||
});
|
||||
} finally {
|
||||
// Always clear selections and close dialog, even on error
|
||||
clearSelections();
|
||||
setConfirmDeleteOpen(false);
|
||||
setDeleteTarget(null);
|
||||
}
|
||||
};
|
||||
|
||||
const handleCleanupUnused = useCallback(() => {
|
||||
setConfirmCleanupOpen(true);
|
||||
}, []);
|
||||
|
||||
const handleConfirmCleanup = async () => {
|
||||
setIsCleaningUp(true);
|
||||
try {
|
||||
const result = await cleanupUnusedVODLogos();
|
||||
notifications.show({
|
||||
title: 'Success',
|
||||
message: `Cleaned up ${result.deleted_count} unused VOD logos`,
|
||||
color: 'green',
|
||||
});
|
||||
} catch (error) {
|
||||
notifications.show({
|
||||
title: 'Error',
|
||||
message: error.message || 'Failed to cleanup unused VOD logos',
|
||||
color: 'red',
|
||||
});
|
||||
} finally {
|
||||
setIsCleaningUp(false);
|
||||
setConfirmCleanupOpen(false);
|
||||
clearSelections(); // Clear selections after cleanup
|
||||
}
|
||||
};
|
||||
|
||||
// Clear selections only when filters change (not on every data fetch)
|
||||
useEffect(() => {
|
||||
clearSelections();
|
||||
}, [nameFilter, usageFilter, clearSelections]);
|
||||
|
||||
useEffect(() => {
|
||||
const startItem = (currentPage - 1) * pageSize + 1;
|
||||
const endItem = Math.min(currentPage * pageSize, totalCount);
|
||||
setPaginationString(`${startItem} to ${endItem} of ${totalCount}`);
|
||||
}, [currentPage, pageSize, totalCount]);
|
||||
|
||||
const pageCount = useMemo(() => {
|
||||
return Math.ceil(totalCount / pageSize);
|
||||
}, [totalCount, pageSize]);
|
||||
|
||||
const columns = useMemo(
|
||||
() => [
|
||||
{
|
||||
id: 'select',
|
||||
header: () => (
|
||||
<Checkbox
|
||||
checked={
|
||||
selectedRows.size > 0 && selectedRows.size === logos.length
|
||||
}
|
||||
indeterminate={
|
||||
selectedRows.size > 0 && selectedRows.size < logos.length
|
||||
}
|
||||
onChange={(event) => handleSelectAll(event.currentTarget.checked)}
|
||||
size="sm"
|
||||
/>
|
||||
),
|
||||
cell: ({ row }) => (
|
||||
<Checkbox
|
||||
checked={selectedRows.has(row.original.id)}
|
||||
onChange={(event) =>
|
||||
handleSelectRow(row.original.id, event.currentTarget.checked)
|
||||
}
|
||||
size="sm"
|
||||
/>
|
||||
),
|
||||
size: 50,
|
||||
enableSorting: false,
|
||||
},
|
||||
{
|
||||
header: 'Preview',
|
||||
accessorKey: 'cache_url',
|
||||
size: 80,
|
||||
enableSorting: false,
|
||||
cell: ({ getValue, row }) => (
|
||||
<Center style={{ width: '100%', padding: '4px' }}>
|
||||
<Image
|
||||
src={getValue()}
|
||||
alt={row.original.name}
|
||||
width={40}
|
||||
height={30}
|
||||
fit="contain"
|
||||
fallbackSrc="/logo.png"
|
||||
style={{
|
||||
transition: 'transform 0.3s ease',
|
||||
cursor: 'pointer',
|
||||
}}
|
||||
onMouseEnter={(e) => {
|
||||
e.target.style.transform = 'scale(1.5)';
|
||||
}}
|
||||
onMouseLeave={(e) => {
|
||||
e.target.style.transform = 'scale(1)';
|
||||
}}
|
||||
/>
|
||||
</Center>
|
||||
),
|
||||
},
|
||||
{
|
||||
header: 'Name',
|
||||
accessorKey: 'name',
|
||||
size: 250,
|
||||
cell: ({ getValue }) => (
|
||||
<Text fw={500} size="sm">
|
||||
{getValue()}
|
||||
</Text>
|
||||
),
|
||||
},
|
||||
{
|
||||
header: 'Usage',
|
||||
accessorKey: 'usage',
|
||||
size: 120,
|
||||
cell: ({ row }) => {
|
||||
const { movie_count, series_count, item_names } = row.original;
|
||||
const totalUsage = movie_count + series_count;
|
||||
|
||||
if (totalUsage === 0) {
|
||||
return (
|
||||
<Badge size="sm" variant="light" color="gray">
|
||||
Unused
|
||||
</Badge>
|
||||
);
|
||||
}
|
||||
|
||||
// Build usage description
|
||||
const usageParts = [];
|
||||
if (movie_count > 0) {
|
||||
usageParts.push(
|
||||
`${movie_count} movie${movie_count !== 1 ? 's' : ''}`
|
||||
);
|
||||
}
|
||||
if (series_count > 0) {
|
||||
usageParts.push(`${series_count} series`);
|
||||
}
|
||||
|
||||
const label =
|
||||
usageParts.length === 1
|
||||
? usageParts[0]
|
||||
: `${totalUsage} item${totalUsage !== 1 ? 's' : ''}`;
|
||||
|
||||
return (
|
||||
<Tooltip
|
||||
label={
|
||||
<div>
|
||||
<Text size="xs" fw={600}>
|
||||
Used by {usageParts.join(' & ')}:
|
||||
</Text>
|
||||
{item_names &&
|
||||
item_names.map((name, index) => (
|
||||
<Text key={index} size="xs">
|
||||
• {name}
|
||||
</Text>
|
||||
))}
|
||||
</div>
|
||||
}
|
||||
multiline
|
||||
width={220}
|
||||
>
|
||||
<Badge size="sm" variant="light" color="blue">
|
||||
{label}
|
||||
</Badge>
|
||||
</Tooltip>
|
||||
);
|
||||
},
|
||||
},
|
||||
{
|
||||
header: 'URL',
|
||||
accessorKey: 'url',
|
||||
grow: true,
|
||||
cell: ({ getValue }) => (
|
||||
<Group gap={4} style={{ alignItems: 'center' }}>
|
||||
<Box
|
||||
style={{
|
||||
whiteSpace: 'nowrap',
|
||||
overflow: 'hidden',
|
||||
textOverflow: 'ellipsis',
|
||||
maxWidth: 300,
|
||||
}}
|
||||
>
|
||||
<Text size="sm" c="dimmed">
|
||||
{getValue()}
|
||||
</Text>
|
||||
</Box>
|
||||
{getValue()?.startsWith('http') && (
|
||||
<ActionIcon
|
||||
size="xs"
|
||||
variant="transparent"
|
||||
color="gray"
|
||||
onClick={() => window.open(getValue(), '_blank')}
|
||||
>
|
||||
<ExternalLink size={12} />
|
||||
</ActionIcon>
|
||||
)}
|
||||
</Group>
|
||||
),
|
||||
},
|
||||
{
|
||||
id: 'actions',
|
||||
size: 80,
|
||||
header: 'Actions',
|
||||
enableSorting: false,
|
||||
cell: ({ row }) => (
|
||||
<VODLogoRowActions theme={theme} row={row} deleteLogo={deleteLogo} />
|
||||
),
|
||||
},
|
||||
],
|
||||
[theme, deleteLogo, selectedRows, handleSelectAll, handleSelectRow, logos]
|
||||
);
|
||||
|
||||
const renderHeaderCell = (header) => {
|
||||
return (
|
||||
<Text size="sm" name={header.id}>
|
||||
{header.column.columnDef.header}
|
||||
</Text>
|
||||
);
|
||||
};
|
||||
|
||||
const table = useTable({
|
||||
data: logos,
|
||||
columns,
|
||||
manualPagination: true,
|
||||
pageCount: pageCount,
|
||||
allRowIds: logos.map((logo) => logo.id),
|
||||
enablePagination: false,
|
||||
enableRowSelection: true,
|
||||
enableRowVirtualization: false,
|
||||
renderTopToolbar: false,
|
||||
manualSorting: false,
|
||||
manualFiltering: false,
|
||||
onRowSelectionChange: onRowSelectionChange,
|
||||
headerCellRenderFns: {
|
||||
actions: renderHeaderCell,
|
||||
cache_url: renderHeaderCell,
|
||||
name: renderHeaderCell,
|
||||
url: renderHeaderCell,
|
||||
usage: renderHeaderCell,
|
||||
},
|
||||
});
|
||||
|
||||
// Store table reference for clearing selections
|
||||
React.useEffect(() => {
|
||||
tableRef.current = table;
|
||||
}, [table]);
|
||||
|
||||
// Helper to get single logo when confirming single-delete
|
||||
const logoToDelete =
|
||||
deleteTarget && deleteTarget.length === 1
|
||||
? logos.find((l) => l.id === deleteTarget[0])
|
||||
: null;
|
||||
return (
|
||||
<Box
|
||||
style={{
|
||||
display: 'flex',
|
||||
justifyContent: 'center',
|
||||
padding: '0px',
|
||||
minHeight: 'calc(100vh - 200px)',
|
||||
minWidth: '900px',
|
||||
}}
|
||||
>
|
||||
<Stack gap="md" style={{ maxWidth: '1200px', width: '100%' }}>
|
||||
<Paper
|
||||
style={{
|
||||
backgroundColor: '#27272A',
|
||||
border: '1px solid #3f3f46',
|
||||
borderRadius: 'var(--mantine-radius-md)',
|
||||
}}
|
||||
>
|
||||
{/* Top toolbar */}
|
||||
<Box
|
||||
style={{
|
||||
display: 'flex',
|
||||
justifyContent: 'space-between',
|
||||
alignItems: 'center',
|
||||
padding: '16px',
|
||||
borderBottom: '1px solid #3f3f46',
|
||||
}}
|
||||
>
|
||||
<Group gap="sm">
|
||||
<TextInput
|
||||
placeholder="Filter by name..."
|
||||
value={nameFilter}
|
||||
onChange={(event) => {
|
||||
const value = event.target.value;
|
||||
setNameFilter(value);
|
||||
}}
|
||||
size="xs"
|
||||
style={{ width: 200 }}
|
||||
/>
|
||||
<Select
|
||||
placeholder="All"
|
||||
value={usageFilter}
|
||||
onChange={(value) => setUsageFilter(value)}
|
||||
data={[
|
||||
{ value: 'all', label: 'All logos' },
|
||||
{ value: 'used', label: 'Used only' },
|
||||
{ value: 'unused', label: 'Unused only' },
|
||||
{ value: 'movies', label: 'Movies logos' },
|
||||
{ value: 'series', label: 'Series logos' },
|
||||
]}
|
||||
size="xs"
|
||||
style={{ width: 120 }}
|
||||
/>
|
||||
</Group>
|
||||
|
||||
<Group gap="sm">
|
||||
<Button
|
||||
leftSection={<Trash size={16} />}
|
||||
variant="light"
|
||||
size="xs"
|
||||
color="orange"
|
||||
onClick={handleCleanupUnused}
|
||||
loading={isCleaningUp}
|
||||
disabled={unusedLogosCount === 0}
|
||||
>
|
||||
Cleanup Unused{' '}
|
||||
{unusedLogosCount > 0 ? `(${unusedLogosCount})` : ''}
|
||||
</Button>
|
||||
|
||||
<Button
|
||||
leftSection={<SquareMinus size={18} />}
|
||||
variant="default"
|
||||
size="xs"
|
||||
onClick={handleDeleteSelected}
|
||||
disabled={selectedRows.size === 0}
|
||||
>
|
||||
Delete {selectedRows.size > 0 ? `(${selectedRows.size})` : ''}
|
||||
</Button>
|
||||
</Group>
|
||||
</Box>
|
||||
|
||||
{/* Table container */}
|
||||
<Box
|
||||
style={{
|
||||
position: 'relative',
|
||||
borderRadius:
|
||||
'0 0 var(--mantine-radius-md) var(--mantine-radius-md)',
|
||||
}}
|
||||
>
|
||||
<Box
|
||||
style={{
|
||||
overflow: 'auto',
|
||||
height: 'calc(100vh - 200px)',
|
||||
}}
|
||||
>
|
||||
<div>
|
||||
<LoadingOverlay visible={isLoading} />
|
||||
<CustomTable table={table} />
|
||||
</div>
|
||||
</Box>
|
||||
|
||||
{/* Pagination Controls */}
|
||||
<Box
|
||||
style={{
|
||||
position: 'sticky',
|
||||
bottom: 0,
|
||||
zIndex: 3,
|
||||
backgroundColor: '#27272A',
|
||||
borderTop: '1px solid #3f3f46',
|
||||
}}
|
||||
>
|
||||
<Group
|
||||
gap={5}
|
||||
justify="center"
|
||||
style={{
|
||||
padding: 8,
|
||||
}}
|
||||
>
|
||||
<Text size="xs">Page Size</Text>
|
||||
<NativeSelect
|
||||
size="xxs"
|
||||
value={String(pageSize)}
|
||||
data={['25', '50', '100', '250']}
|
||||
onChange={(event) => {
|
||||
setPageSize(Number(event.target.value));
|
||||
setCurrentPage(1);
|
||||
}}
|
||||
style={{ paddingRight: 20 }}
|
||||
/>
|
||||
<Pagination
|
||||
total={pageCount}
|
||||
value={currentPage}
|
||||
onChange={setCurrentPage}
|
||||
size="xs"
|
||||
withEdges
|
||||
style={{ paddingRight: 20 }}
|
||||
/>
|
||||
<Text size="xs">{paginationString}</Text>
|
||||
</Group>
|
||||
</Box>
|
||||
</Box>
|
||||
</Paper>
|
||||
</Stack>
|
||||
|
||||
<ConfirmationDialog
|
||||
opened={confirmDeleteOpen}
|
||||
onClose={() => {
|
||||
setConfirmDeleteOpen(false);
|
||||
setDeleteTarget(null);
|
||||
}}
|
||||
onConfirm={(deleteFiles) => {
|
||||
// pass deleteFiles option through
|
||||
handleConfirmDelete(deleteFiles);
|
||||
}}
|
||||
title={
|
||||
deleteTarget && deleteTarget.length > 1
|
||||
? 'Delete Multiple Logos'
|
||||
: 'Delete Logo'
|
||||
}
|
||||
message={
|
||||
deleteTarget && deleteTarget.length > 1 ? (
|
||||
<div>
|
||||
Are you sure you want to delete {deleteTarget.length} selected
|
||||
logos?
|
||||
<Text size="sm" c="dimmed" mt="xs">
|
||||
Any movies or series using these logos will have their logo
|
||||
removed.
|
||||
</Text>
|
||||
<Text size="sm" c="dimmed" mt="xs">
|
||||
This action cannot be undone.
|
||||
</Text>
|
||||
</div>
|
||||
) : logoToDelete ? (
|
||||
<div>
|
||||
Are you sure you want to delete the logo "{logoToDelete.name}"?
|
||||
{logoToDelete.movie_count + logoToDelete.series_count > 0 && (
|
||||
<Text size="sm" c="orange" mt="xs">
|
||||
This logo is currently used by{' '}
|
||||
{logoToDelete.movie_count + logoToDelete.series_count} item
|
||||
{logoToDelete.movie_count + logoToDelete.series_count !== 1
|
||||
? 's'
|
||||
: ''}
|
||||
. They will have their logo removed.
|
||||
</Text>
|
||||
)}
|
||||
<Text size="sm" c="dimmed" mt="xs">
|
||||
This action cannot be undone.
|
||||
</Text>
|
||||
</div>
|
||||
) : (
|
||||
'Are you sure you want to delete this logo?'
|
||||
)
|
||||
}
|
||||
confirmLabel="Delete"
|
||||
cancelLabel="Cancel"
|
||||
size="md"
|
||||
showDeleteFileOption={
|
||||
deleteTarget && deleteTarget.length > 1
|
||||
? Array.from(deleteTarget).some((id) => {
|
||||
const logo = logos.find((l) => l.id === id);
|
||||
return logo && logo.url && logo.url.startsWith('/data/logos');
|
||||
})
|
||||
: logoToDelete &&
|
||||
logoToDelete.url &&
|
||||
logoToDelete.url.startsWith('/data/logos')
|
||||
}
|
||||
deleteFileLabel={
|
||||
deleteTarget && deleteTarget.length > 1
|
||||
? 'Also delete local logo files from disk'
|
||||
: 'Also delete logo file from disk'
|
||||
}
|
||||
/>
|
||||
|
||||
<ConfirmationDialog
|
||||
opened={confirmCleanupOpen}
|
||||
onClose={() => setConfirmCleanupOpen(false)}
|
||||
onConfirm={handleConfirmCleanup}
|
||||
title="Cleanup Unused Logos"
|
||||
message={
|
||||
<div>
|
||||
Are you sure you want to cleanup {unusedLogosCount} unused logo
|
||||
{unusedLogosCount !== 1 ? 's' : ''}?
|
||||
<Text size="sm" c="dimmed" mt="xs">
|
||||
This will permanently delete all logos that are not currently used
|
||||
by any series or movies.
|
||||
</Text>
|
||||
<Text size="sm" c="dimmed" mt="xs">
|
||||
This action cannot be undone.
|
||||
</Text>
|
||||
</div>
|
||||
}
|
||||
confirmLabel="Cleanup"
|
||||
cancelLabel="Cancel"
|
||||
size="md"
|
||||
showDeleteFileOption={true}
|
||||
deleteFileLabel="Also delete local logo files from disk"
|
||||
/>
|
||||
</Box>
|
||||
);
|
||||
}
|
||||
|
|
@ -38,8 +38,7 @@ export const useLogoSelection = () => {
|
|||
};
|
||||
|
||||
/**
|
||||
* Hook for channel forms that need only channel-assignable logos
|
||||
* (unused + channel-used, excluding VOD-only logos)
|
||||
* Hook for channel forms that need channel logos
|
||||
*/
|
||||
export const useChannelLogoSelection = () => {
|
||||
const [isInitialized, setIsInitialized] = useState(false);
|
||||
|
|
@ -65,7 +64,7 @@ export const useChannelLogoSelection = () => {
|
|||
await fetchChannelAssignableLogos();
|
||||
setIsInitialized(true);
|
||||
} catch (error) {
|
||||
console.error('Failed to load channel-assignable logos:', error);
|
||||
console.error('Failed to load channel logos:', error);
|
||||
}
|
||||
}, [
|
||||
backgroundLoading,
|
||||
|
|
|
|||
|
|
@ -1,13 +1,10 @@
|
|||
import React, { useState } from 'react';
|
||||
import useUserAgentsStore from '../store/userAgents';
|
||||
import M3UsTable from '../components/tables/M3UsTable';
|
||||
import EPGsTable from '../components/tables/EPGsTable';
|
||||
import { Box, Stack } from '@mantine/core';
|
||||
|
||||
const M3UPage = () => {
|
||||
const isLoading = useUserAgentsStore((state) => state.isLoading);
|
||||
const error = useUserAgentsStore((state) => state.error);
|
||||
if (isLoading) return <div>Loading...</div>;
|
||||
if (error) return <div>Error: {error}</div>;
|
||||
return (
|
||||
<Stack
|
||||
|
|
|
|||
|
|
@ -102,6 +102,16 @@ const RECURRING_DAY_OPTIONS = [
|
|||
{ value: 5, label: 'Sat' },
|
||||
];
|
||||
|
||||
const useDateTimeFormat = () => {
|
||||
const [timeFormatSetting] = useLocalStorage('time-format', '12h');
|
||||
const [dateFormatSetting] = useLocalStorage('date-format', 'mdy');
|
||||
// Use user preferences for both time and date formats
|
||||
const timeFormat = timeFormatSetting === '12h' ? 'h:mma' : 'HH:mm';
|
||||
const dateFormat = dateFormatSetting === 'mdy' ? 'MMM D' : 'D MMM';
|
||||
|
||||
return [timeFormat, dateFormat];
|
||||
};
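// Illustrative outputs (dayjs tokens; sample values assumed, not from this diff):
// '12h' + 'mdy'              -> time "3:05pm" ('h:mma'), date "Dec 2" ('MMM D')
// '24h' + any other date fmt -> time "15:05" ('HH:mm'), date "2 Dec" ('D MMM')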
|
||||
|
||||
// Short preview that triggers the details modal when clicked
|
||||
const RecordingSynopsis = ({ description, onOpen }) => {
|
||||
const truncated = description?.length > 140;
|
||||
|
|
@ -139,6 +149,7 @@ const RecordingDetailsModal = ({
|
|||
const { toUserTime, userNow } = useTimeHelpers();
|
||||
const [childOpen, setChildOpen] = React.useState(false);
|
||||
const [childRec, setChildRec] = React.useState(null);
|
||||
const [timeformat, dateformat] = useDateTimeFormat();
|
||||
|
||||
const safeRecording = recording || {};
|
||||
const customProps = safeRecording.custom_properties || {};
|
||||
|
|
@ -320,7 +331,7 @@ const RecordingDetailsModal = ({
|
|||
)}
|
||||
</Group>
|
||||
<Text size="xs">
|
||||
{start.format('MMM D, YYYY h:mma')} – {end.format('h:mma')}
|
||||
{start.format(`${dateformat}, YYYY ${timeformat}`)} – {end.format(timeformat)}
|
||||
</Text>
|
||||
</Stack>
|
||||
<Group gap={6}>
|
||||
|
|
@ -498,7 +509,7 @@ const RecordingDetailsModal = ({
|
|||
</Group>
|
||||
</Group>
|
||||
<Text size="sm">
|
||||
{start.format('MMM D, YYYY h:mma')} – {end.format('h:mma')}
|
||||
{start.format(`${dateformat}, YYYY ${timeformat}`)} – {end.format(timeformat)}
|
||||
</Text>
|
||||
{rating && (
|
||||
<Group gap={8}>
|
||||
|
|
@ -558,6 +569,7 @@ const RecurringRuleModal = ({ opened, onClose, ruleId, onEditOccurrence }) => {
|
|||
const fetchRecordings = useChannelsStore((s) => s.fetchRecordings);
|
||||
const recordings = useChannelsStore((s) => s.recordings);
|
||||
const { toUserTime, userNow } = useTimeHelpers();
|
||||
const [timeformat, dateformat] = useDateTimeFormat();
|
||||
|
||||
const [saving, setSaving] = useState(false);
|
||||
const [deleting, setDeleting] = useState(false);
|
||||
|
|
@ -892,10 +904,10 @@ const RecurringRuleModal = ({ opened, onClose, ruleId, onEditOccurrence }) => {
|
|||
<Group justify="space-between" align="center">
|
||||
<Stack gap={2} style={{ flex: 1 }}>
|
||||
<Text fw={600} size="sm">
|
||||
{occStart.format('MMM D, YYYY')}
|
||||
{occStart.format(`${dateformat}, YYYY`)}
|
||||
</Text>
|
||||
<Text size="xs" c="dimmed">
|
||||
{occStart.format('h:mma')} – {occEnd.format('h:mma')}
|
||||
{occStart.format(timeformat)} – {occEnd.format(timeformat)}
|
||||
</Text>
|
||||
</Stack>
|
||||
<Group gap={6}>
|
||||
|
|
@ -937,6 +949,7 @@ const RecordingCard = ({ recording, onOpenDetails, onOpenRecurring }) => {
|
|||
const showVideo = useVideoStore((s) => s.showVideo);
|
||||
const fetchRecordings = useChannelsStore((s) => s.fetchRecordings);
|
||||
const { toUserTime, userNow } = useTimeHelpers();
|
||||
const [timeformat, dateformat] = useDateTimeFormat();
|
||||
|
||||
const channel = channels?.[recording.channel];
|
||||
|
||||
|
|
@ -1221,7 +1234,7 @@ const RecordingCard = ({ recording, onOpenDetails, onOpenRecurring }) => {
|
|||
{isSeriesGroup ? 'Next recording' : 'Time'}
|
||||
</Text>
|
||||
<Text size="sm">
|
||||
{start.format('MMM D, YYYY h:mma')} – {end.format('h:mma')}
|
||||
{start.format(`${dateformat}, YYYY ${timeformat}`)} – {end.format(timeformat)}
|
||||
</Text>
|
||||
</Group>
|
||||
|
||||
|
|
@ -1698,4 +1711,4 @@ const DVRPage = () => {
|
|||
);
|
||||
};
|
||||
|
||||
export default DVRPage;
|
||||
export default DVRPage;
|
||||
|
|
@ -250,6 +250,7 @@ export default function TVChannelGuide({ startDate, endDate }) {
|
|||
const logos = useLogosStore((s) => s.logos);
|
||||
|
||||
const tvgsById = useEPGsStore((s) => s.tvgsById);
|
||||
const epgs = useEPGsStore((s) => s.epgs);
|
||||
|
||||
const [programs, setPrograms] = useState([]);
|
||||
const [guideChannels, setGuideChannels] = useState([]);
|
||||
|
|
@ -274,6 +275,7 @@ export default function TVChannelGuide({ startDate, endDate }) {
|
|||
const guideRef = useRef(null);
|
||||
const timelineRef = useRef(null); // New ref for timeline scrolling
|
||||
const listRef = useRef(null);
|
||||
const tvGuideRef = useRef(null); // Ref for the main tv-guide wrapper
|
||||
const isSyncingScroll = useRef(false);
|
||||
const guideScrollLeftRef = useRef(0);
|
||||
const {
|
||||
|
|
@ -400,8 +402,8 @@ export default function TVChannelGuide({ startDate, endDate }) {
|
|||
: defaultEnd;
|
||||
|
||||
const channelIdByTvgId = useMemo(
|
||||
() => buildChannelIdMap(guideChannels, tvgsById),
|
||||
[guideChannels, tvgsById]
|
||||
() => buildChannelIdMap(guideChannels, tvgsById, epgs),
|
||||
[guideChannels, tvgsById, epgs]
|
||||
);
|
||||
|
||||
const channelById = useMemo(() => {
|
||||
|
|
@ -505,37 +507,39 @@ export default function TVChannelGuide({ startDate, endDate }) {
|
|||
if (!node) return undefined;
|
||||
|
||||
const handleScroll = () => {
|
||||
const { scrollLeft } = node;
|
||||
if (scrollLeft === guideScrollLeftRef.current) {
|
||||
return;
|
||||
}
|
||||
|
||||
guideScrollLeftRef.current = scrollLeft;
|
||||
setGuideScrollLeft(scrollLeft);
|
||||
|
||||
if (isSyncingScroll.current) {
|
||||
return;
|
||||
}
|
||||
|
||||
const { scrollLeft } = node;
|
||||
|
||||
// Always sync if timeline is out of sync, even if ref matches
|
||||
if (
|
||||
timelineRef.current &&
|
||||
timelineRef.current.scrollLeft !== scrollLeft
|
||||
) {
|
||||
isSyncingScroll.current = true;
|
||||
timelineRef.current.scrollLeft = scrollLeft;
|
||||
guideScrollLeftRef.current = scrollLeft;
|
||||
setGuideScrollLeft(scrollLeft);
|
||||
requestAnimationFrame(() => {
|
||||
isSyncingScroll.current = false;
|
||||
});
|
||||
} else if (scrollLeft !== guideScrollLeftRef.current) {
|
||||
// Update ref even if timeline was already synced
|
||||
guideScrollLeftRef.current = scrollLeft;
|
||||
setGuideScrollLeft(scrollLeft);
|
||||
}
|
||||
};
|
||||
|
||||
node.addEventListener('scroll', handleScroll, { passive: true });
|
||||
|
||||
return () => {
|
||||
node.removeEventListener('scroll', handleScroll);
|
||||
};
|
||||
}, []);
|
||||
|
||||
// Update “now” every second
|
||||
// Update "now" every second
|
||||
useEffect(() => {
|
||||
const interval = setInterval(() => {
|
||||
setNow(dayjs());
|
||||
|
|
@ -543,13 +547,191 @@ export default function TVChannelGuide({ startDate, endDate }) {
|
|||
return () => clearInterval(interval);
|
||||
}, []);
|
||||
|
||||
// Pixel offset for the “now” vertical line
|
||||
// Pixel offset for the "now" vertical line
|
||||
const nowPosition = useMemo(() => {
|
||||
if (now.isBefore(start) || now.isAfter(end)) return -1;
|
||||
const minutesSinceStart = now.diff(start, 'minute');
|
||||
return (minutesSinceStart / MINUTE_INCREMENT) * MINUTE_BLOCK_WIDTH;
|
||||
}, [now, start, end]);
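// Worked example (constants assumed, not taken from this diff): if
// MINUTE_INCREMENT were 30 and MINUTE_BLOCK_WIDTH were 150, a "now" that is
// 45 minutes past `start` would place the line at (45 / 30) * 150 = 225px
// from the left edge of the guide.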
|
||||
|
||||
useEffect(() => {
|
||||
const tvGuide = tvGuideRef.current;
|
||||
|
||||
if (!tvGuide) return undefined;
|
||||
|
||||
const handleContainerWheel = (event) => {
|
||||
const guide = guideRef.current;
|
||||
const timeline = timelineRef.current;
|
||||
|
||||
if (!guide) {
|
||||
return;
|
||||
}
|
||||
|
||||
if (event.deltaX !== 0 || (event.shiftKey && event.deltaY !== 0)) {
|
||||
event.preventDefault();
|
||||
event.stopPropagation();
|
||||
|
||||
const delta = event.deltaX !== 0 ? event.deltaX : event.deltaY;
|
||||
const newScrollLeft = guide.scrollLeft + delta;
|
||||
|
||||
// Set both guide and timeline scroll positions
|
||||
if (typeof guide.scrollTo === 'function') {
|
||||
guide.scrollTo({ left: newScrollLeft, behavior: 'auto' });
|
||||
} else {
|
||||
guide.scrollLeft = newScrollLeft;
|
||||
}
|
||||
|
||||
// Also sync timeline immediately
|
||||
if (timeline) {
|
||||
if (typeof timeline.scrollTo === 'function') {
|
||||
timeline.scrollTo({ left: newScrollLeft, behavior: 'auto' });
|
||||
} else {
|
||||
timeline.scrollLeft = newScrollLeft;
|
||||
}
|
||||
}
|
||||
|
||||
// Update the ref to keep state in sync
|
||||
guideScrollLeftRef.current = newScrollLeft;
|
||||
setGuideScrollLeft(newScrollLeft);
|
||||
}
|
||||
};
|
||||
|
||||
tvGuide.addEventListener('wheel', handleContainerWheel, {
|
||||
passive: false,
|
||||
capture: true,
|
||||
});
|
||||
|
||||
return () => {
|
||||
tvGuide.removeEventListener('wheel', handleContainerWheel, {
|
||||
capture: true,
|
||||
});
|
||||
};
|
||||
}, []);
|
||||
|
||||
// Fallback: continuously monitor for any scroll changes
|
||||
useEffect(() => {
|
||||
let rafId = null;
|
||||
let lastCheck = 0;
|
||||
|
||||
const checkSync = (timestamp) => {
|
||||
// Throttle to check every 100ms instead of every frame
|
||||
if (timestamp - lastCheck > 100) {
|
||||
const guide = guideRef.current;
|
||||
const timeline = timelineRef.current;
|
||||
|
||||
if (guide && timeline && guide.scrollLeft !== timeline.scrollLeft) {
|
||||
timeline.scrollLeft = guide.scrollLeft;
|
||||
guideScrollLeftRef.current = guide.scrollLeft;
|
||||
setGuideScrollLeft(guide.scrollLeft);
|
||||
}
|
||||
lastCheck = timestamp;
|
||||
}
|
||||
|
||||
rafId = requestAnimationFrame(checkSync);
|
||||
};
|
||||
|
||||
rafId = requestAnimationFrame(checkSync);
|
||||
|
||||
return () => {
|
||||
if (rafId) cancelAnimationFrame(rafId);
|
||||
};
|
||||
}, []);
|
||||
|
||||
useEffect(() => {
|
||||
const tvGuide = tvGuideRef.current;
|
||||
if (!tvGuide) return;
|
||||
|
||||
let lastTouchX = null;
|
||||
let isTouching = false;
|
||||
let rafId = null;
|
||||
let lastScrollLeft = 0;
|
||||
let stableFrames = 0;
|
||||
|
||||
const syncScrollPositions = () => {
|
||||
const guide = guideRef.current;
|
||||
const timeline = timelineRef.current;
|
||||
|
||||
if (!guide || !timeline) return false;
|
||||
|
||||
const currentScroll = guide.scrollLeft;
|
||||
|
||||
// Check if scroll position has changed
|
||||
if (currentScroll !== lastScrollLeft) {
|
||||
timeline.scrollLeft = currentScroll;
|
||||
guideScrollLeftRef.current = currentScroll;
|
||||
setGuideScrollLeft(currentScroll);
|
||||
lastScrollLeft = currentScroll;
|
||||
stableFrames = 0;
|
||||
return true; // Still scrolling
|
||||
} else {
|
||||
stableFrames++;
|
||||
return stableFrames < 10; // Continue for 10 stable frames to catch late updates
|
||||
}
|
||||
};
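// Timing note (assumption, not stated in this diff): at roughly 60fps the 10
// extra stable frames keep polling for about another 160ms after the last
// scroll change, which is what lets the sync catch momentum-scroll updates
// that land after touchend.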
|
||||
|
||||
const startPolling = () => {
|
||||
if (rafId) return; // Already polling
|
||||
|
||||
const poll = () => {
|
||||
const shouldContinue = isTouching || syncScrollPositions();
|
||||
|
||||
if (shouldContinue) {
|
||||
rafId = requestAnimationFrame(poll);
|
||||
} else {
|
||||
rafId = null;
|
||||
}
|
||||
};
|
||||
|
||||
rafId = requestAnimationFrame(poll);
|
||||
};
|
||||
|
||||
const handleTouchStart = (e) => {
|
||||
if (e.touches.length === 1) {
|
||||
const guide = guideRef.current;
|
||||
if (guide) {
|
||||
lastTouchX = e.touches[0].clientX;
|
||||
lastScrollLeft = guide.scrollLeft;
|
||||
isTouching = true;
|
||||
stableFrames = 0;
|
||||
startPolling();
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
const handleTouchMove = (e) => {
|
||||
if (!isTouching || e.touches.length !== 1) return;
|
||||
const guide = guideRef.current;
|
||||
if (!guide) return;
|
||||
|
||||
const touchX = e.touches[0].clientX;
|
||||
const deltaX = lastTouchX - touchX;
|
||||
lastTouchX = touchX;
|
||||
|
||||
if (Math.abs(deltaX) > 0) {
|
||||
guide.scrollLeft += deltaX;
|
||||
}
|
||||
};
|
||||
|
||||
const handleTouchEnd = () => {
|
||||
isTouching = false;
|
||||
lastTouchX = null;
|
||||
// Polling continues until scroll stabilizes
|
||||
};
|
||||
|
||||
tvGuide.addEventListener('touchstart', handleTouchStart, { passive: true });
|
||||
tvGuide.addEventListener('touchmove', handleTouchMove, { passive: false });
|
||||
tvGuide.addEventListener('touchend', handleTouchEnd, { passive: true });
|
||||
tvGuide.addEventListener('touchcancel', handleTouchEnd, { passive: true });
|
||||
|
||||
return () => {
|
||||
if (rafId) cancelAnimationFrame(rafId);
|
||||
tvGuide.removeEventListener('touchstart', handleTouchStart);
|
||||
tvGuide.removeEventListener('touchmove', handleTouchMove);
|
||||
tvGuide.removeEventListener('touchend', handleTouchEnd);
|
||||
tvGuide.removeEventListener('touchcancel', handleTouchEnd);
|
||||
};
|
||||
}, []);
|
||||
|
||||
const syncScrollLeft = useCallback((nextLeft, behavior = 'auto') => {
|
||||
const guideNode = guideRef.current;
|
||||
const timelineNode = timelineRef.current;
|
||||
|
|
@ -779,18 +961,18 @@ export default function TVChannelGuide({ startDate, endDate }) {
|
|||
}, [now, nowPosition, start, syncScrollLeft]);
|
||||
|
||||
const handleTimelineScroll = useCallback(() => {
|
||||
if (!timelineRef.current) {
|
||||
if (!timelineRef.current || isSyncingScroll.current) {
|
||||
return;
|
||||
}
|
||||
|
||||
const nextLeft = timelineRef.current.scrollLeft;
|
||||
guideScrollLeftRef.current = nextLeft;
|
||||
setGuideScrollLeft(nextLeft);
|
||||
|
||||
if (isSyncingScroll.current) {
|
||||
if (nextLeft === guideScrollLeftRef.current) {
|
||||
return;
|
||||
}
|
||||
|
||||
guideScrollLeftRef.current = nextLeft;
|
||||
setGuideScrollLeft(nextLeft);
|
||||
|
||||
isSyncingScroll.current = true;
|
||||
if (guideRef.current) {
|
||||
if (typeof guideRef.current.scrollTo === 'function') {
|
||||
|
|
@ -1177,6 +1359,7 @@ export default function TVChannelGuide({ startDate, endDate }) {
|
|||
|
||||
return (
|
||||
<Box
|
||||
ref={tvGuideRef}
|
||||
className="tv-guide"
|
||||
style={{
|
||||
overflow: 'hidden',
|
||||
|
|
@ -1476,6 +1659,7 @@ export default function TVChannelGuide({ startDate, endDate }) {
|
|||
|
||||
{filteredChannels.length > 0 ? (
|
||||
<VariableSizeList
|
||||
className="guide-list-outer"
|
||||
height={virtualizedHeight}
|
||||
width={virtualizedWidth}
|
||||
itemCount={filteredChannels.length}
|
||||
|
|
|
|||
|
|
@ -1,13 +1,20 @@
|
|||
import React, { useEffect, useCallback } from 'react';
|
||||
import { Box, Loader, Center, Text, Stack } from '@mantine/core';
|
||||
import React, { useEffect, useCallback, useState } from 'react';
|
||||
import { Box, Tabs, Flex, Text } from '@mantine/core';
|
||||
import { notifications } from '@mantine/notifications';
|
||||
import useLogosStore from '../store/logos';
|
||||
import useVODLogosStore from '../store/vodLogos';
|
||||
import LogosTable from '../components/tables/LogosTable';
|
||||
import VODLogosTable from '../components/tables/VODLogosTable';
|
||||
|
||||
const LogosPage = () => {
|
||||
const { fetchAllLogos, isLoading, needsAllLogos } = useLogosStore();
|
||||
const { fetchAllLogos, needsAllLogos, logos } = useLogosStore();
|
||||
const { totalCount } = useVODLogosStore();
|
||||
const [activeTab, setActiveTab] = useState('channel');
|
||||
|
||||
const loadLogos = useCallback(async () => {
|
||||
const channelLogosCount = Object.keys(logos).length;
|
||||
const vodLogosCount = totalCount;
|
||||
|
||||
const loadChannelLogos = useCallback(async () => {
|
||||
try {
|
||||
// Only fetch all logos if we haven't loaded them yet
|
||||
if (needsAllLogos()) {
|
||||
|
|
@ -16,30 +23,74 @@ const LogosPage = () => {
|
|||
} catch (err) {
|
||||
notifications.show({
|
||||
title: 'Error',
|
||||
message: 'Failed to load logos',
|
||||
message: 'Failed to load channel logos',
|
||||
color: 'red',
|
||||
});
|
||||
console.error('Failed to load logos:', err);
|
||||
console.error('Failed to load channel logos:', err);
|
||||
}
|
||||
}, [fetchAllLogos, needsAllLogos]);
|
||||
|
||||
useEffect(() => {
|
||||
loadLogos();
|
||||
}, [loadLogos]);
|
||||
// Always load channel logos on mount
|
||||
loadChannelLogos();
|
||||
}, [loadChannelLogos]);
|
||||
|
||||
return (
|
||||
<Box style={{ padding: 10 }}>
|
||||
{isLoading && (
|
||||
<Center style={{ marginBottom: 20 }}>
|
||||
<Stack align="center" spacing="sm">
|
||||
<Loader size="sm" />
|
||||
<Text size="sm" color="dimmed">
|
||||
Loading all logos...
|
||||
<Box>
|
||||
{/* Header with title and tabs */}
|
||||
<Box
|
||||
style={{
|
||||
display: 'flex',
|
||||
justifyContent: 'center',
|
||||
padding: '10px 0',
|
||||
}}
|
||||
>
|
||||
<Flex
|
||||
style={{
|
||||
alignItems: 'center',
|
||||
justifyContent: 'space-between',
|
||||
width: '100%',
|
||||
maxWidth: '1200px',
|
||||
paddingBottom: 10,
|
||||
}}
|
||||
>
|
||||
<Flex gap={8} align="center">
|
||||
<Text
|
||||
style={{
|
||||
fontFamily: 'Inter, sans-serif',
|
||||
fontWeight: 500,
|
||||
fontSize: '20px',
|
||||
lineHeight: 1,
|
||||
letterSpacing: '-0.3px',
|
||||
color: 'gray.6',
|
||||
marginBottom: 0,
|
||||
}}
|
||||
>
|
||||
Logos
|
||||
</Text>
|
||||
</Stack>
|
||||
</Center>
|
||||
)}
|
||||
<LogosTable />
|
||||
<Text size="sm" c="dimmed">
|
||||
({activeTab === 'channel' ? channelLogosCount : vodLogosCount}{' '}
|
||||
logo
|
||||
{(activeTab === 'channel' ? channelLogosCount : vodLogosCount) !==
|
||||
1
|
||||
? 's'
|
||||
: ''}
|
||||
)
|
||||
</Text>
|
||||
</Flex>
|
||||
|
||||
<Tabs value={activeTab} onChange={setActiveTab} variant="pills">
|
||||
<Tabs.List>
|
||||
<Tabs.Tab value="channel">Channel Logos</Tabs.Tab>
|
||||
<Tabs.Tab value="vod">VOD Logos</Tabs.Tab>
|
||||
</Tabs.List>
|
||||
</Tabs>
|
||||
</Flex>
|
||||
</Box>
|
||||
|
||||
{/* Content based on active tab */}
|
||||
{activeTab === 'channel' && <LogosTable />}
|
||||
{activeTab === 'vod' && <VODLogosTable />}
|
||||
</Box>
|
||||
);
|
||||
};
|
||||
|
|
|
|||
|
|
@ -11,19 +11,13 @@ import useUserAgentsStore from '../store/userAgents';
|
|||
import useStreamProfilesStore from '../store/streamProfiles';
|
||||
import {
|
||||
Accordion,
|
||||
ActionIcon,
|
||||
Alert,
|
||||
Anchor,
|
||||
Badge,
|
||||
Box,
|
||||
Button,
|
||||
Center,
|
||||
Flex,
|
||||
Group,
|
||||
Loader,
|
||||
FileInput,
|
||||
List,
|
||||
Modal,
|
||||
MultiSelect,
|
||||
Select,
|
||||
Stack,
|
||||
|
|
@ -31,7 +25,6 @@ import {
|
|||
Text,
|
||||
TextInput,
|
||||
NumberInput,
|
||||
Tooltip,
|
||||
} from '@mantine/core';
|
||||
import { isNotEmpty, useForm } from '@mantine/form';
|
||||
import { notifications } from '@mantine/notifications';
|
||||
|
|
@ -47,11 +40,6 @@ import {
|
|||
} from '../constants';
|
||||
import ConfirmationDialog from '../components/ConfirmationDialog';
|
||||
import useWarningsStore from '../store/warnings';
|
||||
import { shallow } from 'zustand/shallow';
|
||||
import useLibraryStore from '../store/library';
|
||||
import LibraryFormModal from '../components/library/LibraryFormModal';
|
||||
import { Pencil, Plus, RefreshCcw, Trash2 } from 'lucide-react';
|
||||
import tmdbLogoUrl from '../assets/tmdb-logo-blue.svg?url';
|
||||
|
||||
const TIMEZONE_FALLBACKS = [
|
||||
'UTC',
|
||||
|
|
@ -195,14 +183,6 @@ const SettingsPage = () => {
|
|||
const suppressWarning = useWarningsStore((s) => s.suppressWarning);
|
||||
const isWarningSuppressed = useWarningsStore((s) => s.isWarningSuppressed);
|
||||
|
||||
const mediaLibraries = useLibraryStore((state) => state.libraries);
|
||||
const librariesLoading = useLibraryStore((state) => state.loading);
|
||||
const fetchMediaLibraries = useLibraryStore((state) => state.fetchLibraries);
|
||||
const createMediaLibrary = useLibraryStore((state) => state.createLibrary);
|
||||
const updateMediaLibrary = useLibraryStore((state) => state.updateLibrary);
|
||||
const deleteMediaLibrary = useLibraryStore((state) => state.deleteLibrary);
|
||||
const triggerLibraryScan = useLibraryStore((state) => state.triggerScan);
|
||||
|
||||
const [accordianValue, setAccordianValue] = useState(null);
|
||||
const [networkAccessSaved, setNetworkAccessSaved] = useState(false);
|
||||
const [networkAccessError, setNetworkAccessError] = useState(null);
|
||||
|
|
@ -212,6 +192,7 @@ const SettingsPage = () => {
|
|||
useState([]);
|
||||
|
||||
const [proxySettingsSaved, setProxySettingsSaved] = useState(false);
|
||||
const [generalSettingsSaved, setGeneralSettingsSaved] = useState(false);
|
||||
const [rehashingStreams, setRehashingStreams] = useState(false);
|
||||
const [rehashSuccess, setRehashSuccess] = useState(false);
|
||||
const [rehashConfirmOpen, setRehashConfirmOpen] = useState(false);
|
||||
|
|
@ -219,22 +200,6 @@ const SettingsPage = () => {
|
|||
// Add a new state to track the dialog type
|
||||
const [rehashDialogType, setRehashDialogType] = useState(null); // 'save' or 'rehash'
|
||||
|
||||
const [libraryModalOpen, setLibraryModalOpen] = useState(false);
|
||||
const [editingLibrarySettings, setEditingLibrarySettings] = useState(null);
|
||||
const [librarySubmitting, setLibrarySubmitting] = useState(false);
|
||||
const tmdbSetting = settings['tmdb-api-key'];
|
||||
const TMDB_REQUIREMENT_MESSAGE =
|
||||
'Metadata uses TMDB when available and falls back to Movie-DB when necessary.';
|
||||
const [tmdbKey, setTmdbKey] = useState('');
|
||||
const [metadataSourcesAvailable, setMetadataSourcesAvailable] = useState(false);
|
||||
const [tmdbValidating, setTmdbValidating] = useState(false);
|
||||
const [tmdbValidationState, setTmdbValidationState] = useState('info');
|
||||
const [tmdbValidationMessage, setTmdbValidationMessage] = useState(
|
||||
TMDB_REQUIREMENT_MESSAGE
|
||||
);
|
||||
const [activeMetadataSource, setActiveMetadataSource] = useState('unavailable');
|
||||
const [savingTmdbKey, setSavingTmdbKey] = useState(false);
|
||||
const [tmdbHintOpen, setTmdbHintOpen] = useState(false);
|
||||
// Store pending changed settings when showing the dialog
|
||||
const [pendingChangedSettings, setPendingChangedSettings] = useState(null);
|
||||
const [comskipFile, setComskipFile] = useState(null);
|
||||
|
|
@ -313,14 +278,17 @@ const SettingsPage = () => {
|
|||
const networkAccessForm = useForm({
|
||||
mode: 'controlled',
|
||||
initialValues: Object.keys(NETWORK_ACCESS_OPTIONS).reduce((acc, key) => {
|
||||
acc[key] = '0.0.0.0/0';
|
||||
acc[key] = '0.0.0.0/0,::/0';
|
||||
return acc;
|
||||
}, {}),
|
||||
validate: Object.keys(NETWORK_ACCESS_OPTIONS).reduce((acc, key) => {
|
||||
acc[key] = (value) => {
|
||||
const cidrs = value.split(',');
|
||||
const ipv4CidrRegex = /^([0-9]{1,3}\.){3}[0-9]{1,3}\/\d+$/;
|
||||
const ipv6CidrRegex =
|
||||
/(?:(?:(?:[A-F0-9]{1,4}:){6}|(?=(?:[A-F0-9]{0,4}:){0,6}(?:[0-9]{1,3}\.){3}[0-9]{1,3}(?![:.\w]))(([0-9A-F]{1,4}:){0,5}|:)((:[0-9A-F]{1,4}){1,5}:|:)|::(?:[A-F0-9]{1,4}:){5})(?:(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)|(?:[A-F0-9]{1,4}:){7}[A-F0-9]{1,4}|(?=(?:[A-F0-9]{0,4}:){0,7}[A-F0-9]{0,4}(?![:.\w]))(([0-9A-F]{1,4}:){1,7}|:)((:[0-9A-F]{1,4}){1,7}|:)|(?:[A-F0-9]{1,4}:){7}:|:(:[A-F0-9]{1,4}){7})(?![:.\w])\/(?:12[0-8]|1[01][0-9]|[1-9]?[0-9])/;
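// Sanity check of the default above: '0.0.0.0/0,::/0' splits into two CIDRs;
// '0.0.0.0/0' should satisfy ipv4CidrRegex and '::/0' should satisfy
// ipv6CidrRegex (both are checked in the loop below). Note the IPv6 pattern
// only matches uppercase hex digits as written.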
|
||||
for (const cidr of cidrs) {
|
||||
if (cidr.match(/^([0-9]{1,3}\.){3}[0-9]{1,3}\/\d+$/)) {
|
||||
if (cidr.match(ipv4CidrRegex) || cidr.match(ipv6CidrRegex)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
|
|
@ -358,7 +326,8 @@ const SettingsPage = () => {
|
|||
let val = null;
|
||||
switch (key) {
|
||||
case 'm3u-hash-key':
|
||||
val = value.value.split(',');
|
||||
// Split comma-separated string, filter out empty strings
|
||||
val = value.value ? value.value.split(',').filter((v) => v) : [];
|
||||
break;
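// Illustrative (sample value assumed, not from this diff):
// 'name,url'.split(',').filter((v) => v) yields ['name', 'url'], while an
// empty setting now yields [] instead of [''].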
|
||||
case 'dvr-pre-offset-minutes':
|
||||
case 'dvr-post-offset-minutes':
|
||||
|
|
@ -389,7 +358,7 @@ const SettingsPage = () => {
|
|||
);
|
||||
networkAccessForm.setValues(
|
||||
Object.keys(NETWORK_ACCESS_OPTIONS).reduce((acc, key) => {
|
||||
acc[key] = networkAccessSettings[key] || '0.0.0.0/0';
|
||||
acc[key] = networkAccessSettings[key] || '0.0.0.0/0,::/0';
|
||||
return acc;
|
||||
}, {})
|
||||
);
|
||||
|
|
@ -436,13 +405,17 @@ const SettingsPage = () => {
|
|||
loadComskipConfig();
|
||||
}, []);
|
||||
|
||||
// Clear success states when switching accordion panels
|
||||
useEffect(() => {
|
||||
if (authUser?.user_level === USER_LEVELS.ADMIN) {
|
||||
fetchMediaLibraries();
|
||||
}
|
||||
}, [authUser?.user_level, fetchMediaLibraries]);
|
||||
setGeneralSettingsSaved(false);
|
||||
setProxySettingsSaved(false);
|
||||
setNetworkAccessSaved(false);
|
||||
setRehashSuccess(false);
|
||||
}, [accordianValue]);
|
||||
|
||||
const onSubmit = async () => {
|
||||
setGeneralSettingsSaved(false);
|
||||
|
||||
const values = form.getValues();
|
||||
const changedSettings = {};
|
||||
let m3uHashKeyChanged = false;
|
||||
|
|
@ -450,12 +423,26 @@ const SettingsPage = () => {
|
|||
for (const settingKey in values) {
|
||||
// Only compare against existing value if the setting exists
|
||||
const existing = settings[settingKey];
|
||||
|
||||
// Convert array values (like m3u-hash-key) to comma-separated strings
|
||||
let stringValue;
|
||||
if (Array.isArray(values[settingKey])) {
|
||||
stringValue = values[settingKey].join(',');
|
||||
} else {
|
||||
stringValue = `${values[settingKey]}`;
|
||||
}
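// Illustrative (sample values assumed, not from this diff):
// ['name', 'url'] -> 'name,url'; 15 -> '15'; true -> 'true'. Plain strings
// pass through unchanged before the comparison below.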
|
||||
|
||||
// Skip empty values to avoid validation errors
|
||||
if (!stringValue) {
|
||||
continue;
|
||||
}
|
||||
|
||||
if (!existing) {
|
||||
// Create new setting on save
|
||||
changedSettings[settingKey] = `${values[settingKey]}`;
|
||||
} else if (String(values[settingKey]) !== String(existing.value)) {
|
||||
changedSettings[settingKey] = stringValue;
|
||||
} else if (stringValue !== String(existing.value)) {
|
||||
// If the user changed the setting's value from what's in the DB:
|
||||
changedSettings[settingKey] = `${values[settingKey]}`;
|
||||
changedSettings[settingKey] = stringValue;
|
||||
|
||||
// Check if M3U hash key was changed
|
||||
if (settingKey === 'm3u-hash-key') {
|
||||
|
|
@ -474,260 +461,39 @@ const SettingsPage = () => {
|
|||
}
|
||||
|
||||
// Update each changed setting in the backend (create if missing)
|
||||
for (const updatedKey in changedSettings) {
|
||||
const existing = settings[updatedKey];
|
||||
if (existing && existing.id) {
|
||||
await API.updateSetting({
|
||||
...existing,
|
||||
value: changedSettings[updatedKey],
|
||||
});
|
||||
} else {
|
||||
await API.createSetting({
|
||||
key: updatedKey,
|
||||
name: updatedKey.replace(/-/g, ' '),
|
||||
value: changedSettings[updatedKey],
|
||||
});
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
const handleLibrarySettingsSubmit = async (values) => {
|
||||
setLibrarySubmitting(true);
|
||||
try {
|
||||
if (editingLibrarySettings) {
|
||||
await updateMediaLibrary(editingLibrarySettings.id, values);
|
||||
notifications.show({
|
||||
title: 'Library updated',
|
||||
message: 'Changes saved.',
|
||||
color: 'green',
|
||||
});
|
||||
} else {
|
||||
await createMediaLibrary(values);
|
||||
notifications.show({
|
||||
title: 'Library created',
|
||||
message: 'New library added.',
|
||||
color: 'green',
|
||||
});
|
||||
}
|
||||
setLibraryModalOpen(false);
|
||||
setEditingLibrarySettings(null);
|
||||
fetchMediaLibraries();
|
||||
} catch (error) {
|
||||
console.error('Failed to save library', error);
|
||||
notifications.show({
|
||||
title: 'Library error',
|
||||
message: error?.body?.detail || 'Unable to save library changes.',
|
||||
color: 'red',
|
||||
});
|
||||
} finally {
|
||||
setLibrarySubmitting(false);
|
||||
}
|
||||
};
|
||||
|
||||
const handleLibrarySettingsDelete = async (library) => {
|
||||
if (!window.confirm(`Delete library "${library.name}"?`)) return;
|
||||
try {
|
||||
await deleteMediaLibrary(library.id);
|
||||
notifications.show({
|
||||
title: 'Library deleted',
|
||||
message: 'Library removed successfully.',
|
||||
color: 'green',
|
||||
});
|
||||
fetchMediaLibraries();
|
||||
} catch (error) {
|
||||
console.error('Failed to delete library', error);
|
||||
notifications.show({
|
||||
title: 'Library error',
|
||||
message: 'Unable to delete library.',
|
||||
color: 'red',
|
||||
});
|
||||
}
|
||||
};
|
||||
|
||||
const handleLibrarySettingsScan = async (library) => {
|
||||
try {
|
||||
await triggerLibraryScan(library.id, { full: false });
|
||||
notifications.show({
|
||||
title: 'Scan started',
|
||||
message: `Library ${library.name} queued for scanning.`,
|
||||
color: 'blue',
|
||||
});
|
||||
} catch (error) {
|
||||
console.error('Failed to trigger scan', error);
|
||||
notifications.show({
|
||||
title: 'Scan error',
|
||||
message: 'Unable to start scan.',
|
||||
color: 'red',
|
||||
});
|
||||
}
|
||||
};
|
||||
|
||||
const validateTmdbKeyValue = useCallback(
|
||||
async (value) => {
|
||||
const trimmed = (value || '').trim();
|
||||
setTmdbValidating(true);
|
||||
setTmdbValidationState('info');
|
||||
setTmdbValidationMessage('Checking metadata providers…');
|
||||
try {
|
||||
const result = await API.validateTmdbApiKey(trimmed);
|
||||
const overallValid = Boolean(result?.overall_valid);
|
||||
const provider = result?.provider || 'unavailable';
|
||||
const message =
|
||||
result?.message ||
|
||||
(overallValid
|
||||
? provider === 'tmdb'
|
||||
? 'TMDB key verified successfully. Metadata and artwork will load for your libraries.'
|
||||
: 'Using Movie-DB fallback for metadata.'
|
||||
: 'All metadata sources are unavailable.');
|
||||
|
||||
setMetadataSourcesAvailable(overallValid);
|
||||
setActiveMetadataSource(provider);
|
||||
|
||||
if (provider === 'tmdb') {
|
||||
setTmdbValidationState('valid');
|
||||
} else if (provider === 'movie-db' && overallValid) {
|
||||
setTmdbValidationState('fallback');
|
||||
} else {
|
||||
setTmdbValidationState('invalid');
|
||||
}
|
||||
|
||||
setTmdbValidationMessage(message);
|
||||
return result ?? { overall_valid: overallValid, provider, message };
|
||||
} catch (error) {
|
||||
setMetadataSourcesAvailable(false);
|
||||
setActiveMetadataSource('unavailable');
|
||||
setTmdbValidationState('error');
|
||||
setTmdbValidationMessage('Unable to reach metadata services right now.');
|
||||
throw error;
|
||||
} finally {
|
||||
setTmdbValidating(false);
|
||||
}
|
||||
},
|
||||
[TMDB_REQUIREMENT_MESSAGE]
|
||||
);
|
||||
|
||||
useEffect(() => {
|
||||
const currentValue =
|
||||
tmdbSetting && tmdbSetting.value !== undefined ? tmdbSetting.value : '';
|
||||
setTmdbKey(currentValue);
|
||||
validateTmdbKeyValue(currentValue).catch((error) => {
|
||||
console.error('Failed to validate TMDB API key', error);
|
||||
});
|
||||
}, [tmdbSetting?.value, validateTmdbKeyValue]);
|
||||
|
||||
const persistTmdbKey = async (rawValue) => {
|
||||
const trimmedKey = (rawValue || '').trim();
|
||||
setTmdbKey(trimmedKey);
|
||||
setSavingTmdbKey(true);
|
||||
try {
|
||||
const validationResult = await validateTmdbKeyValue(trimmedKey);
|
||||
|
||||
if (trimmedKey && !validationResult?.overall_valid) {
|
||||
notifications.show({
|
||||
title: 'Invalid TMDB key',
|
||||
message:
|
||||
validationResult?.message ||
|
||||
'Metadata providers are unavailable. TMDB rejected the API key and Movie-DB is unreachable.',
|
||||
color: 'red',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
if (trimmedKey) {
|
||||
if (tmdbSetting && tmdbSetting.id) {
|
||||
await API.updateSetting({
|
||||
...tmdbSetting,
|
||||
value: trimmedKey,
|
||||
for (const updatedKey in changedSettings) {
|
||||
const existing = settings[updatedKey];
|
||||
if (existing && existing.id) {
|
||||
const result = await API.updateSetting({
|
||||
...existing,
|
||||
value: changedSettings[updatedKey],
|
||||
});
|
||||
// API functions return undefined on error
|
||||
if (!result) {
|
||||
throw new Error('Failed to update setting');
|
||||
}
|
||||
} else {
|
||||
await API.createSetting({
|
||||
key: 'tmdb-api-key',
|
||||
name: 'TMDB API Key',
|
||||
value: trimmedKey,
|
||||
const result = await API.createSetting({
|
||||
key: updatedKey,
|
||||
name: updatedKey.replace(/-/g, ' '),
|
||||
value: changedSettings[updatedKey],
|
||||
});
|
||||
// API functions return undefined on error
|
||||
if (!result) {
|
||||
throw new Error('Failed to create setting');
|
||||
}
|
||||
}
|
||||
} else if (tmdbSetting && tmdbSetting.id) {
|
||||
await API.deleteSetting(tmdbSetting);
|
||||
}
|
||||
|
||||
const provider = validationResult?.provider || activeMetadataSource;
|
||||
const usingFallback = provider === 'movie-db';
|
||||
const title = trimmedKey
|
||||
? provider === 'tmdb'
|
||||
? 'TMDB key saved'
|
||||
: 'Saved with fallback'
|
||||
: usingFallback
|
||||
? 'TMDB key removed'
|
||||
: 'Metadata unavailable';
|
||||
const message = trimmedKey
|
||||
? provider === 'tmdb'
|
||||
? 'TMDB API key saved and verified.'
|
||||
: usingFallback
|
||||
? 'Movie-DB fallback will be used for metadata until TMDB becomes available.'
|
||||
: 'Metadata providers are currently unavailable. Libraries may fail to scan.'
|
||||
: usingFallback
|
||||
? 'TMDB API key removed. Movie-DB fallback will be used for metadata.'
|
||||
: 'TMDB API key removed, but no metadata providers are currently available.';
|
||||
const color = trimmedKey
|
||||
? provider === 'tmdb'
|
||||
? 'green'
|
||||
: usingFallback
|
||||
? 'blue'
|
||||
: 'red'
|
||||
: usingFallback
|
||||
? 'blue'
|
||||
: 'red';
|
||||
|
||||
notifications.show({
|
||||
title,
|
||||
message,
|
||||
color,
|
||||
});
|
||||
setGeneralSettingsSaved(true);
|
||||
} catch (error) {
|
||||
console.error('Failed to save TMDB key', error);
|
||||
notifications.show({
|
||||
title: 'Error',
|
||||
message: 'Unable to update TMDB API key.',
|
||||
color: 'red',
|
||||
});
|
||||
} finally {
|
||||
setSavingTmdbKey(false);
|
||||
// Error notifications are already shown by API functions
|
||||
// Just don't show the success message
|
||||
console.error('Error saving settings:', error);
|
||||
}
|
||||
};
|
||||
|
||||
const handleSaveTmdbKey = () => persistTmdbKey(tmdbKey);
|
||||
|
||||
const handleDeleteTmdbKey = async () => {
|
||||
if (!tmdbKey) return;
|
||||
const confirmed =
|
||||
typeof window === 'undefined'
|
||||
? true
|
||||
: window.confirm(
|
||||
'Remove the TMDB API key? Metadata will fall back to Movie-DB when possible.'
|
||||
);
|
||||
if (!confirmed) return;
|
||||
try {
|
||||
await persistTmdbKey('');
|
||||
} catch (error) {
|
||||
console.error('Failed to remove TMDB key', error);
|
||||
}
|
||||
};
|
||||
|
||||
const libraryActionsDisabled = tmdbValidating || !metadataSourcesAvailable;
|
||||
const tmdbMessageColor =
|
||||
tmdbValidationState === 'valid'
|
||||
? 'teal.6'
|
||||
: tmdbValidationState === 'fallback'
|
||||
? 'blue.4'
|
||||
: tmdbValidationState === 'error' || tmdbValidationState === 'invalid'
|
||||
? 'red.6'
|
||||
: 'orange.6';
|
||||
const addLibraryTooltipLabel = metadataSourcesAvailable
|
||||
? activeMetadataSource === 'tmdb'
|
||||
? 'TMDB metadata is available.'
|
||||
: 'Using Movie-DB fallback for metadata.'
|
||||
: 'Metadata sources are unavailable. Configure TMDB or try again later.';
|
||||
|
||||
const onNetworkAccessSubmit = async () => {
|
||||
setNetworkAccessSaved(false);
|
||||
setNetworkAccessError(null);
|
||||
|
|
@ -754,12 +520,19 @@ const SettingsPage = () => {
|
|||
const onProxySettingsSubmit = async () => {
|
||||
setProxySettingsSaved(false);
|
||||
|
||||
await API.updateSetting({
|
||||
...settings['proxy-settings'],
|
||||
value: JSON.stringify(proxySettingsForm.getValues()),
|
||||
});
|
||||
|
||||
setProxySettingsSaved(true);
|
||||
try {
|
||||
const result = await API.updateSetting({
|
||||
...settings['proxy-settings'],
|
||||
value: JSON.stringify(proxySettingsForm.getValues()),
|
||||
});
|
||||
// API functions return undefined on error
|
||||
if (result) {
|
||||
setProxySettingsSaved(true);
|
||||
}
|
||||
} catch (error) {
|
||||
// Error notifications are already shown by API functions
|
||||
console.error('Error saving proxy settings:', error);
|
||||
}
|
||||
};
|
||||
|
||||
const onComskipUpload = async () => {
|
||||
|
|
@ -847,29 +620,46 @@ const SettingsPage = () => {
|
|||
|
||||
const executeSettingsSaveAndRehash = async () => {
|
||||
setRehashConfirmOpen(false);
|
||||
setGeneralSettingsSaved(false);
|
||||
|
||||
// Use the stored pending values that were captured before the dialog was shown
|
||||
const changedSettings = pendingChangedSettings || {};
|
||||
|
||||
// Update each changed setting in the backend (create if missing)
|
||||
for (const updatedKey in changedSettings) {
|
||||
const existing = settings[updatedKey];
|
||||
if (existing && existing.id) {
|
||||
await API.updateSetting({
|
||||
...existing,
|
||||
value: changedSettings[updatedKey],
|
||||
});
|
||||
} else {
|
||||
await API.createSetting({
|
||||
key: updatedKey,
|
||||
name: updatedKey.replace(/-/g, ' '),
|
||||
value: changedSettings[updatedKey],
|
||||
});
|
||||
try {
|
||||
for (const updatedKey in changedSettings) {
|
||||
const existing = settings[updatedKey];
|
||||
if (existing && existing.id) {
|
||||
const result = await API.updateSetting({
|
||||
...existing,
|
||||
value: changedSettings[updatedKey],
|
||||
});
|
||||
// API functions return undefined on error
|
||||
if (!result) {
|
||||
throw new Error('Failed to update setting');
|
||||
}
|
||||
} else {
|
||||
const result = await API.createSetting({
|
||||
key: updatedKey,
|
||||
name: updatedKey.replace(/-/g, ' '),
|
||||
value: changedSettings[updatedKey],
|
||||
});
|
||||
// API functions return undefined on error
|
||||
if (!result) {
|
||||
throw new Error('Failed to create setting');
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Clear the pending values
|
||||
setPendingChangedSettings(null);
|
||||
// Clear the pending values
|
||||
setPendingChangedSettings(null);
|
||||
setGeneralSettingsSaved(true);
|
||||
} catch (error) {
|
||||
// Error notifications are already shown by API functions
|
||||
// Just don't show the success message
|
||||
console.error('Error saving settings:', error);
|
||||
setPendingChangedSettings(null);
|
||||
}
|
||||
};
|
||||
|
||||
const executeRehashStreamsOnly = async () => {
|
||||
|
|
@ -985,196 +775,18 @@ const SettingsPage = () => {
|
|||
|
||||
{authUser.user_level == USER_LEVELS.ADMIN && (
|
||||
<>
|
||||
<Accordion.Item value="media-libraries">
|
||||
<Accordion.Control>Media Libraries</Accordion.Control>
|
||||
<Accordion.Panel>
|
||||
<Stack spacing="md">
|
||||
<Group justify="space-between" align="center">
|
||||
<Text c="dimmed" size="sm">
|
||||
Configure local media libraries used for scanning and playback.
|
||||
</Text>
|
||||
<Tooltip
|
||||
label={addLibraryTooltipLabel}
|
||||
disabled={!libraryActionsDisabled}
|
||||
withArrow
|
||||
>
|
||||
<span>
|
||||
<Button
|
||||
size="xs"
|
||||
leftSection={<Plus size={14} />}
|
||||
disabled={libraryActionsDisabled}
|
||||
onClick={() => {
|
||||
setEditingLibrarySettings(null);
|
||||
setLibraryModalOpen(true);
|
||||
}}
|
||||
>
|
||||
Add Library
|
||||
</Button>
|
||||
</span>
|
||||
</Tooltip>
|
||||
</Group>
|
||||
|
||||
<Stack spacing="xs">
|
||||
<TextInput
|
||||
label="TMDB API Key"
|
||||
placeholder="Enter TMDB API key"
|
||||
value={tmdbKey}
|
||||
onChange={(event) => setTmdbKey(event.currentTarget.value)}
|
||||
rightSection={
|
||||
tmdbKey && tmdbKey.trim().length > 0 ? (
|
||||
<Tooltip label="Remove TMDB API key" withArrow>
|
||||
<ActionIcon
|
||||
variant="subtle"
|
||||
color="red"
|
||||
size="sm"
|
||||
onClick={handleDeleteTmdbKey}
|
||||
disabled={savingTmdbKey || tmdbValidating}
|
||||
aria-label="Remove TMDB API key"
|
||||
>
|
||||
<Trash2 size={14} />
|
||||
</ActionIcon>
|
||||
</Tooltip>
|
||||
) : null
|
||||
}
|
||||
rightSectionPointerEvents="auto"
|
||||
description="Used for metadata and artwork lookups."
|
||||
/>
|
||||
<Group justify="space-between" align="center">
|
||||
<Button
|
||||
variant="subtle"
|
||||
size="xs"
|
||||
color="gray"
|
||||
onClick={() => setTmdbHintOpen(true)}
|
||||
>
|
||||
Where do I get this?
|
||||
</Button>
|
||||
<Button
|
||||
size="xs"
|
||||
variant="light"
|
||||
onClick={handleSaveTmdbKey}
|
||||
loading={savingTmdbKey || tmdbValidating}
|
||||
>
|
||||
Save Metadata Settings
|
||||
</Button>
|
||||
</Group>
|
||||
{tmdbValidationMessage && (
|
||||
<Group gap="xs" align="center">
|
||||
{tmdbValidating && <Loader size="xs" />}
|
||||
<Text size="xs" c={tmdbMessageColor}>
|
||||
{tmdbValidationMessage}
|
||||
</Text>
|
||||
</Group>
|
||||
)}
|
||||
</Stack>
|
||||
|
||||
{!metadataSourcesAvailable && !tmdbValidating && (
|
||||
<Alert color="yellow" variant="light" radius="md">
|
||||
All metadata sources are currently unavailable. Configure a working TMDB
|
||||
key or try again once Movie-DB fallback is reachable.
|
||||
</Alert>
|
||||
)}
|
||||
|
||||
{librariesLoading ? (
|
||||
<Group justify="center" py="md">
|
||||
<Loader size="sm" />
|
||||
</Group>
|
||||
) : mediaLibraries.length === 0 ? (
|
||||
<Text c="dimmed" size="sm">
|
||||
No libraries configured yet.
|
||||
</Text>
|
||||
) : (
|
||||
<Stack spacing="sm">
|
||||
{mediaLibraries.map((library) => (
|
||||
<Group
|
||||
key={library.id}
|
||||
justify="space-between"
|
||||
align="center"
|
||||
p="sm"
|
||||
style={{
|
||||
border: '1px solid rgba(148, 163, 184, 0.2)',
|
||||
borderRadius: 8,
|
||||
}}
|
||||
>
|
||||
<Stack spacing={4} style={{ flex: 1 }}>
|
||||
<Group gap="sm">
|
||||
<Text fw={600}>{library.name}</Text>
|
||||
<Badge color="violet" variant="light">
|
||||
{library.library_type}
|
||||
</Badge>
|
||||
<Badge
|
||||
color={library.auto_scan_enabled ? 'green' : 'gray'}
|
||||
variant="outline"
|
||||
>
|
||||
{library.auto_scan_enabled ? 'Auto-scan' : 'Manual'}
|
||||
</Badge>
|
||||
</Group>
|
||||
<Text size="xs" c="dimmed">
|
||||
Last scan:{' '}
|
||||
{library.last_scan_at
|
||||
? new Date(library.last_scan_at).toLocaleString()
|
||||
: 'Never'}
|
||||
</Text>
|
||||
</Stack>
|
||||
<Group gap="xs">
|
||||
<Tooltip label="Trigger scan">
|
||||
<ActionIcon
|
||||
variant="light"
|
||||
onClick={() => handleLibrarySettingsScan(library)}
|
||||
>
|
||||
<RefreshCcw size={16} />
|
||||
</ActionIcon>
|
||||
</Tooltip>
|
||||
<Tooltip label="Edit">
|
||||
<ActionIcon
|
||||
variant="light"
|
||||
onClick={() => {
|
||||
setEditingLibrarySettings(library);
|
||||
setLibraryModalOpen(true);
|
||||
}}
|
||||
>
|
||||
<Pencil size={16} />
|
||||
</ActionIcon>
|
||||
</Tooltip>
|
||||
<Tooltip label="Delete">
|
||||
<ActionIcon
|
||||
variant="light"
|
||||
color="red"
|
||||
onClick={() => handleLibrarySettingsDelete(library)}
|
||||
>
|
||||
<Trash2 size={16} />
|
||||
</ActionIcon>
|
||||
</Tooltip>
|
||||
</Group>
|
||||
</Group>
|
||||
))}
|
||||
</Stack>
|
||||
)}
|
||||
<Center py="md">
|
||||
<Anchor
|
||||
href="https://www.themoviedb.org/"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
>
|
||||
<img
|
||||
src={tmdbLogoUrl}
|
||||
alt="Powered by TMDB"
|
||||
style={{
|
||||
width: 180,
|
||||
height: 'auto',
|
||||
display: 'block',
|
||||
}}
|
||||
/>
|
||||
</Anchor>
|
||||
</Center>
|
||||
</Stack>
|
||||
</Accordion.Panel>
|
||||
</Accordion.Item>
|
||||
|
||||
<Accordion.Item value="dvr-settings">
|
||||
<Accordion.Control>DVR</Accordion.Control>
|
||||
<Accordion.Panel>
|
||||
<form onSubmit={form.onSubmit(onSubmit)}>
|
||||
<Stack gap="sm">
|
||||
{generalSettingsSaved && (
|
||||
<Alert
|
||||
variant="light"
|
||||
color="green"
|
||||
title="Saved Successfully"
|
||||
/>
|
||||
)}
|
||||
<Switch
|
||||
label="Enable Comskip (remove commercials after recording)"
|
||||
{...form.getInputProps('dvr-comskip-enabled', {
|
||||
|
|
@ -1338,6 +950,13 @@ const SettingsPage = () => {
|
|||
<Accordion.Control>Stream Settings</Accordion.Control>
|
||||
<Accordion.Panel>
|
||||
<form onSubmit={form.onSubmit(onSubmit)}>
|
||||
{generalSettingsSaved && (
|
||||
<Alert
|
||||
variant="light"
|
||||
color="green"
|
||||
title="Saved Successfully"
|
||||
/>
|
||||
)}
|
||||
<Select
|
||||
searchable
|
||||
{...form.getInputProps('default-user-agent')}
|
||||
|
|
@ -1475,6 +1094,46 @@ const SettingsPage = () => {
|
|||
</Accordion.Panel>
|
||||
</Accordion.Item>
|
||||
|
||||
<Accordion.Item value="system-settings">
|
||||
<Accordion.Control>System Settings</Accordion.Control>
|
||||
<Accordion.Panel>
|
||||
<Stack gap="md">
|
||||
{generalSettingsSaved && (
|
||||
<Alert
|
||||
variant="light"
|
||||
color="green"
|
||||
title="Saved Successfully"
|
||||
/>
|
||||
)}
|
||||
<Text size="sm" c="dimmed">
|
||||
Configure how many system events (channel start/stop,
|
||||
buffering, etc.) to keep in the database. Events are
|
||||
displayed on the Stats page.
|
||||
</Text>
|
||||
<NumberInput
|
||||
label="Maximum System Events"
|
||||
description="Number of events to retain (minimum: 10, maximum: 1000)"
|
||||
value={form.values['max-system-events'] || 100}
|
||||
onChange={(value) => {
|
||||
form.setFieldValue('max-system-events', value);
|
||||
}}
|
||||
min={10}
|
||||
max={1000}
|
||||
step={10}
|
||||
/>
|
||||
<Flex mih={50} gap="xs" justify="flex-end" align="flex-end">
|
||||
<Button
|
||||
onClick={form.onSubmit(onSubmit)}
|
||||
disabled={form.submitting}
|
||||
variant="default"
|
||||
>
|
||||
Save
|
||||
</Button>
|
||||
</Flex>
|
||||
</Stack>
|
||||
</Accordion.Panel>
|
||||
</Accordion.Item>
|
||||
|
||||
<Accordion.Item value="user-agents">
|
||||
<Accordion.Control>User-Agents</Accordion.Control>
|
||||
<Accordion.Panel>
|
||||
|
|
@ -1648,66 +1307,9 @@ const SettingsPage = () => {
|
|||
</Accordion.Panel>
|
||||
</Accordion.Item>
|
||||
</>
|
||||
)}
|
||||
</Accordion>
|
||||
</Box>
|
||||
|
||||
<Modal
|
||||
opened={tmdbHintOpen}
|
||||
onClose={() => setTmdbHintOpen(false)}
|
||||
title="How to get a TMDB API key"
|
||||
size="lg"
|
||||
overlayProps={{ backgroundOpacity: 0.55, blur: 2 }}
|
||||
>
|
||||
<Stack spacing="sm">
|
||||
<Text size="sm">
|
||||
Dispatcharr uses TMDB (The Movie Database) for artwork and metadata. You can create
|
||||
a key in just a couple of minutes:
|
||||
</Text>
|
||||
<List size="sm" spacing="xs">
|
||||
<List.Item>
|
||||
Visit{' '}
|
||||
<Anchor
|
||||
href="https://www.themoviedb.org/"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
>
|
||||
themoviedb.org
|
||||
</Anchor>{' '}
|
||||
and sign in or create a free account.
|
||||
</List.Item>
|
||||
<List.Item>
|
||||
Open your{' '}
|
||||
<Anchor
|
||||
href="https://www.themoviedb.org/settings/api"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
>
|
||||
TMDB account settings
|
||||
</Anchor>{' '}
|
||||
and choose <Text component="span" fw={500}>API</Text> from the sidebar.
|
||||
</List.Item>
|
||||
<List.Item>
|
||||
Complete the short API application and copy the generated v3 API key into the field
|
||||
above.
|
||||
</List.Item>
|
||||
</List>
|
||||
<Text size="sm" c="dimmed">
|
||||
TMDB issues separate v3 and v4 keys—Dispatcharr only needs the v3 key for metadata lookups.
|
||||
</Text>
|
||||
</Stack>
|
||||
</Modal>
|
||||
|
||||
<LibraryFormModal
|
||||
opened={libraryModalOpen}
|
||||
onClose={() => {
|
||||
setLibraryModalOpen(false);
|
||||
setEditingLibrarySettings(null);
|
||||
}}
|
||||
library={editingLibrarySettings}
|
||||
onSubmit={handleLibrarySettingsSubmit}
|
||||
submitting={librarySubmitting}
|
||||
/>
|
||||
)}
|
||||
</Accordion>
|
||||
</Box>
|
||||
|
||||
<ConfirmationDialog
|
||||
opened={rehashConfirmOpen}
|
||||
|
|
|
|||
|
|
@ -8,6 +8,7 @@ import {
|
|||
Container,
|
||||
Flex,
|
||||
Group,
|
||||
Pagination,
|
||||
Progress,
|
||||
SimpleGrid,
|
||||
Stack,
|
||||
|
|
@ -25,9 +26,11 @@ import useLogosStore from '../store/logos';
|
|||
import logo from '../images/logo.png';
|
||||
import {
|
||||
ChevronDown,
|
||||
CirclePlay,
|
||||
Gauge,
|
||||
HardDriveDownload,
|
||||
HardDriveUpload,
|
||||
RefreshCw,
|
||||
SquareX,
|
||||
Timer,
|
||||
Users,
|
||||
|
|
@ -44,6 +47,7 @@ import { useLocation } from 'react-router-dom';
|
|||
import { notifications } from '@mantine/notifications';
|
||||
import { CustomTable, useTable } from '../components/tables/CustomTable';
|
||||
import useLocalStorage from '../hooks/useLocalStorage';
|
||||
import SystemEvents from '../components/SystemEvents';
|
||||
|
||||
dayjs.extend(duration);
|
||||
dayjs.extend(relativeTime);
|
||||
|
|
@@ -1482,108 +1486,132 @@ const ChannelsPage = () => {
}, [channelHistory, vodConnections]);

return (
<Box style={{ overflowX: 'auto' }}>
<Box style={{ padding: '10px', borderBottom: '1px solid #444' }}>
<Group justify="space-between" align="center">
<Title order={3}>Active Connections</Title>
<Group align="center">
<Text size="sm" c="dimmed">
{Object.keys(channelHistory).length} stream
{Object.keys(channelHistory).length !== 1 ? 's' : ''} •{' '}
{vodConnections.reduce(
(total, vodContent) =>
total + (vodContent.connections?.length || 0),
0
)}{' '}
VOD connection
{vodConnections.reduce(
(total, vodContent) =>
total + (vodContent.connections?.length || 0),
0
) !== 1
? 's'
: ''}
</Text>
<Group align="center" gap="xs">
<Text size="sm">Refresh Interval (seconds):</Text>
<NumberInput
value={refreshIntervalSeconds}
onChange={(value) => setRefreshIntervalSeconds(value || 0)}
min={0}
max={300}
step={1}
size="xs"
style={{ width: 120 }}
/>
{refreshIntervalSeconds === 0 && (
<>
<Box style={{ overflowX: 'auto' }}>
<Box style={{ minWidth: '520px' }}>
<Box style={{ padding: '10px', borderBottom: '1px solid #444' }}>
<Group justify="space-between" align="center">
<Title order={3}>Active Connections</Title>
<Group align="center">
<Text size="sm" c="dimmed">
Refreshing disabled
{Object.keys(channelHistory).length} stream
{Object.keys(channelHistory).length !== 1 ? 's' : ''} •{' '}
{vodConnections.reduce(
(total, vodContent) =>
total + (vodContent.connections?.length || 0),
0
)}{' '}
VOD connection
{vodConnections.reduce(
(total, vodContent) =>
total + (vodContent.connections?.length || 0),
0
) !== 1
? 's'
: ''}
</Text>
)}
<Group align="center" gap="xs">
<Text size="sm">Refresh Interval (seconds):</Text>
<NumberInput
value={refreshIntervalSeconds}
onChange={(value) => setRefreshIntervalSeconds(value || 0)}
min={0}
max={300}
step={1}
size="xs"
style={{ width: 120 }}
/>
{refreshIntervalSeconds === 0 && (
<Text size="sm" c="dimmed">
Refreshing disabled
</Text>
)}
</Group>
{isPollingActive && refreshInterval > 0 && (
<Text size="sm" c="dimmed">
Refreshing every {refreshIntervalSeconds}s
</Text>
)}
<Button
size="xs"
variant="subtle"
onClick={() => {
fetchChannelStats();
fetchVODStats();
}}
loading={false}
>
Refresh Now
</Button>
</Group>
</Group>
{isPollingActive && refreshInterval > 0 && (
<Text size="sm" c="dimmed">
Refreshing every {refreshIntervalSeconds}s
</Text>
)}
<Button
size="xs"
variant="subtle"
onClick={() => {
fetchChannelStats();
fetchVODStats();
}}
loading={false}
>
Refresh Now
</Button>
</Group>
</Group>
</Box>
<div
style={{
display: 'grid',
gap: '1rem',
padding: '10px',
gridTemplateColumns: 'repeat(auto-fill, minmax(500px, 1fr))',
}}
>
{combinedConnections.length === 0 ? (
</Box>
<Box
style={{
gridColumn: '1 / -1',
textAlign: 'center',
padding: '40px',
display: 'grid',
gap: '1rem',
padding: '10px',
paddingBottom: '120px',
gridTemplateColumns: 'repeat(auto-fill, minmax(500px, 1fr))',
minHeight: 'calc(100vh - 250px)',
alignContent: 'start',
}}
>
<Text size="xl" color="dimmed">
No active connections
</Text>
{combinedConnections.length === 0 ? (
<Box
style={{
gridColumn: '1 / -1',
textAlign: 'center',
padding: '40px',
}}
>
<Text size="xl" color="dimmed">
No active connections
</Text>
</Box>
) : (
combinedConnections.map((connection) => {
if (connection.type === 'stream') {
return (
<ChannelCard
key={connection.id}
channel={connection.data}
clients={clients}
stopClient={stopClient}
stopChannel={stopChannel}
logos={logos}
channelsByUUID={channelsByUUID}
/>
);
} else if (connection.type === 'vod') {
return (
<VODCard key={connection.id} vodContent={connection.data} />
);
}
return null;
})
)}
</Box>
) : (
combinedConnections.map((connection) => {
if (connection.type === 'stream') {
return (
<ChannelCard
key={connection.id}
channel={connection.data}
clients={clients}
stopClient={stopClient}
stopChannel={stopChannel}
logos={logos}
channelsByUUID={channelsByUUID}
/>
);
} else if (connection.type === 'vod') {
return (
<VODCard key={connection.id} vodContent={connection.data} />
);
}
return null;
})
)}
</div>
</Box>
</Box>
</Box>

{/* System Events Section - Fixed at bottom */}
<Box
style={{
position: 'fixed',
bottom: 0,
left: 'var(--app-shell-navbar-width, 0)',
right: 0,
zIndex: 100,
padding: '0 1rem 1rem 1rem',
pointerEvents: 'none',
}}
>
<Box style={{ pointerEvents: 'auto' }}>
<SystemEvents />
</Box>
</Box>
</>
);
};

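The header in the hunk above derives its stream and VOD totals from `channelHistory` and `vodConnections`; a minimal standalone sketch of that counting logic, assuming the same data shapes as in the hunk (the helper name is hypothetical):

// Hypothetical helper mirroring the counts rendered in the header above.
const countConnections = (channelHistory, vodConnections) => {
  const streamCount = Object.keys(channelHistory).length;
  const vodCount = vodConnections.reduce(
    (total, vodContent) => total + (vodContent.connections?.length || 0),
    0
  );
  return { streamCount, vodCount };
};
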
@@ -67,3 +67,16 @@
.tv-guide {
position: relative;
}

/* Hide bottom horizontal scrollbar for the guide's virtualized list only */
.tv-guide .guide-list-outer {
/* Allow horizontal scrolling but hide the scrollbar visually */
overflow-x: auto !important;
scrollbar-width: none; /* Firefox */
-ms-overflow-style: none; /* IE and Edge */
}

/* Also hide scrollbars visually across browsers for the outer container */
.tv-guide .guide-list-outer::-webkit-scrollbar {
display: none; /* Chrome, Safari, Opera */
}

@@ -3,13 +3,30 @@ import dayjs from 'dayjs';
export const PROGRAM_HEIGHT = 90;
export const EXPANDED_PROGRAM_HEIGHT = 180;

export function buildChannelIdMap(channels, tvgsById) {
export function buildChannelIdMap(channels, tvgsById, epgs = {}) {
const map = new Map();
channels.forEach((channel) => {
const tvgRecord = channel.epg_data_id
? tvgsById[channel.epg_data_id]
: null;
const tvgId = tvgRecord?.tvg_id ?? channel.uuid;

// For dummy EPG sources, ALWAYS use channel UUID to ensure unique programs per channel
// This prevents multiple channels with the same dummy EPG from showing identical data
let tvgId;
if (tvgRecord?.epg_source) {
const epgSource = epgs[tvgRecord.epg_source];
if (epgSource?.source_type === 'dummy') {
// Dummy EPG: use channel UUID for uniqueness
tvgId = channel.uuid;
} else {
// Regular EPG: use tvg_id from EPG data, or fall back to channel UUID
tvgId = tvgRecord.tvg_id ?? channel.uuid;
}
} else {
// No EPG data: use channel UUID
tvgId = channel.uuid;
}

if (tvgId) {
const tvgKey = String(tvgId);
if (!map.has(tvgKey)) {

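A condensed sketch of the key-selection rule introduced in the hunk above, assuming `epgs` is keyed by EPG source id and dummy sources report `source_type === 'dummy'` (the helper name is hypothetical):

// Dummy EPG -> channel UUID; regular EPG -> tvg_id with UUID fallback; no EPG -> channel UUID.
const guideKeyFor = (channel, tvgRecord, epgs = {}) => {
  if (!tvgRecord?.epg_source) return channel.uuid; // no EPG data
  const epgSource = epgs[tvgRecord.epg_source];
  if (epgSource?.source_type === 'dummy') return channel.uuid; // unique programs per channel
  return tvgRecord.tvg_id ?? channel.uuid; // regular EPG
};
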
@@ -134,13 +134,21 @@ const useAuthStore = create((set, get) => ({
return false; // Add explicit return for when data.access is not available
} catch (error) {
console.error('Token refresh failed:', error);
get().logout();
await get().logout();
return false; // Add explicit return after error
}
},

// Action to logout
logout: () => {
logout: async () => {
// Call backend logout endpoint to log the event
try {
await API.logout();
} catch (error) {
// Continue with logout even if API call fails
console.error('Logout API call failed:', error);
}

set({
accessToken: null,
refreshToken: null,

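Because `logout` is now async and reports the event to the backend before clearing tokens, callers that depend on the event being recorded would await it; a minimal usage sketch, assuming the `useAuthStore` shown in the hunk header and a hypothetical redirect target:

// Hypothetical click handler; awaiting logout lets the backend log the event first.
const handleLogoutClick = async () => {
  await useAuthStore.getState().logout();
  window.location.assign('/login'); // hypothetical redirect
};
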
@@ -50,9 +50,17 @@ const useEPGsStore = create((set) => ({
})),

updateEPG: (epg) =>
set((state) => ({
epgs: { ...state.epgs, [epg.id]: epg },
})),
set((state) => {
// Validate that epg is an object with an id
if (!epg || typeof epg !== 'object' || !epg.id) {
console.error('updateEPG called with invalid epg:', epg);
return state;
}

return {
epgs: { ...state.epgs, [epg.id]: epg },
};
}),

removeEPGs: (epgIds) =>
set((state) => {

@@ -66,6 +74,12 @@ const useEPGsStore = create((set) => ({

updateEPGProgress: (data) =>
set((state) => {
// Validate that data is an object with a source
if (!data || typeof data !== 'object' || !data.source) {
console.error('updateEPGProgress called with invalid data:', data);
return state;
}

// Early exit if source doesn't exist in our EPGs store
if (!state.epgs[data.source] && !data.status) {
return state;

@@ -97,18 +111,29 @@ const useEPGsStore = create((set) => ({
? 'success' // Mark as success when progress is 100%
: state.epgs[data.source]?.status || 'idle';

// Create a new epgs object with the updated source status
const newEpgs = {
...state.epgs,
[data.source]: {
...state.epgs[data.source],
status: sourceStatus,
last_message:
data.status === 'error'
? data.error || 'Unknown error'
: state.epgs[data.source]?.last_message,
},
};
// Only update epgs object if status or last_message actually changed
// This prevents unnecessary re-renders on every progress update
const currentEpg = state.epgs[data.source];
const newLastMessage =
data.status === 'error'
? data.error || 'Unknown error'
: currentEpg?.last_message;

let newEpgs = state.epgs;
if (
currentEpg &&
(currentEpg.status !== sourceStatus ||
currentEpg.last_message !== newLastMessage)
) {
newEpgs = {
...state.epgs,
[data.source]: {
...currentEpg,
status: sourceStatus,
last_message: newLastMessage,
},
};
}

return {
refreshProgress: newRefreshProgress,

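The progress handler above now compares the derived status and message against the current entry and only builds a new `epgs` object when something actually changed; a minimal sketch of that compare-before-copy pattern in isolation (names are illustrative):

// Return the same reference when nothing observable changed, so subscribers skip a re-render.
const withUpdatedSource = (epgs, sourceId, status, lastMessage) => {
  const current = epgs[sourceId];
  if (!current || (current.status === status && current.last_message === lastMessage)) {
    return epgs;
  }
  return { ...epgs, [sourceId]: { ...current, status, last_message: lastMessage } };
};
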
@@ -3,11 +3,11 @@ import api from '../api';

const useLogosStore = create((set, get) => ({
logos: {},
channelLogos: {}, // Keep this for simplicity, but we'll be more careful about when we populate it
channelLogos: {}, // Separate cache for channel forms to avoid reloading
isLoading: false,
backgroundLoading: false,
hasLoadedAll: false, // Track if we've loaded all logos
hasLoadedChannelLogos: false, // Track if we've loaded channel-assignable logos
hasLoadedChannelLogos: false, // Track if we've loaded channel logos
error: null,

// Basic CRUD operations

@@ -27,10 +27,9 @@ const useLogosStore = create((set, get) => ({
...state.logos,
[newLogo.id]: { ...newLogo },
};

// Add to channelLogos if the user has loaded channel-assignable logos

// Add to channelLogos if the user has loaded channel logos
// This means they're using channel forms and the new logo should be available there
// Newly created logos are channel-assignable (they start unused)
let newChannelLogos = state.channelLogos;
if (state.hasLoadedChannelLogos) {
newChannelLogos = {

@@ -96,11 +95,14 @@ const useLogosStore = create((set, get) => ({
}
},

fetchAllLogos: async () => {
fetchAllLogos: async (force = false) => {
const { isLoading, hasLoadedAll, logos } = get();

// Prevent unnecessary reloading if we already have all logos
if (isLoading || (hasLoadedAll && Object.keys(logos).length > 0)) {
if (
!force &&
(isLoading || (hasLoadedAll && Object.keys(logos).length > 0))
) {
return Object.values(logos);
}

@@ -173,16 +175,15 @@ const useLogosStore = create((set, get) => ({

set({ backgroundLoading: true, error: null });
try {
// Load logos suitable for channel assignment (unused + channel-used, exclude VOD-only)
// Load all channel logos (no special filtering needed - all Logo entries are for channels)
const response = await api.getLogos({
channel_assignable: 'true',
no_pagination: 'true', // Get all channel-assignable logos
no_pagination: 'true', // Get all channel logos
});

// Handle both paginated and non-paginated responses
const logos = Array.isArray(response) ? response : response.results || [];

console.log(`Fetched ${logos.length} channel-assignable logos`);
console.log(`Fetched ${logos.length} channel logos`);

// Store in both places, but this is intentional and only when specifically requested
set({

@@ -203,9 +204,9 @@ const useLogosStore = create((set, get) => ({

return logos;
} catch (error) {
console.error('Failed to fetch channel-assignable logos:', error);
console.error('Failed to fetch channel logos:', error);
set({
error: 'Failed to load channel-assignable logos.',
error: 'Failed to load channel logos.',
backgroundLoading: false,
});
throw error;

@@ -327,7 +328,7 @@ const useLogosStore = create((set, get) => ({
}, 0); // Execute immediately but asynchronously
},

// Background loading specifically for channel-assignable logos after login
// Background loading for channel logos after login
backgroundLoadChannelLogos: async () => {
const { backgroundLoading, channelLogos, hasLoadedChannelLogos } = get();

@@ -342,10 +343,10 @@ const useLogosStore = create((set, get) => ({

set({ backgroundLoading: true });
try {
console.log('Background loading channel-assignable logos...');
console.log('Background loading channel logos...');
await get().fetchChannelAssignableLogos();
console.log(
`Background loaded ${Object.keys(get().channelLogos).length} channel-assignable logos`
`Background loaded ${Object.keys(get().channelLogos).length} channel logos`
);
} catch (error) {
console.error('Background channel logo loading failed:', error);

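With the new `force` parameter, callers can bypass the has-loaded-all guard when they know the backend state changed; a minimal usage sketch, assuming the `useLogosStore` shown in the hunk headers (the wrapper function is hypothetical):

// Illustrative: cached call vs. forced reload.
async function reloadLogos() {
  const cached = await useLogosStore.getState().fetchAllLogos();    // returns cached set once loaded
  const fresh = await useLogosStore.getState().fetchAllLogos(true); // bypasses the guard and refetches
  return { cached, fresh };
}
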
@@ -210,17 +210,6 @@ const useVODStore = create((set, get) => ({
bitrate: response.bitrate || 0,
video: response.video || {},
audio: response.audio || {},
library_sources:
response.library_sources ||
(response.library_item_id
? [
{
library_id: response.m3u_account?.id || null,
library_name: response.m3u_account?.name || 'Library',
media_item_id: response.library_item_id,
},
]
: []),
};

set({ loading: false }); // Only update loading state

@@ -343,8 +332,6 @@ const useVODStore = create((set, get) => ({
tmdb_id: response.tmdb_id || '',
imdb_id: response.imdb_id || '',
episode_count: response.episode_count || 0,
library_sources: response.library_sources || [],
library_item_id: response.library_item_id || null,
// Additional provider fields
backdrop_path: response.custom_properties?.backdrop_path || [],
release_date: response.release_date || '',

@@ -364,9 +351,9 @@ const useVODStore = create((set, get) => ({
seasonEpisodes.forEach((episode) => {
const episodeData = {
id: episode.id,
stream_id: episode.stream_id || episode.id,
name: episode.title || episode.name || '',
description: episode.plot || episode.description || '',
stream_id: episode.id,
name: episode.title || '',
description: episode.plot || '',
season_number: parseInt(seasonNumber) || 0,
episode_number: episode.episode_number || 0,
duration_secs: episode.duration_secs || null,

@@ -377,15 +364,12 @@ const useVODStore = create((set, get) => ({
name: seriesInfo.name,
},
type: 'episode',
uuid: episode.uuid || episode.id,
uuid: episode.uuid,
logo: episode.movie_image ? { url: episode.movie_image } : null,
air_date: episode.air_date || null,
movie_image: episode.movie_image || null,
tmdb_id: episode.tmdb_id || '',
imdb_id: episode.imdb_id || '',
library_media_item_ids:
episode.library_media_item_ids || [],
providers: episode.providers || [],
};
episodesData[episode.id] = episodeData;
});

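After the simplification above, each episode entry reads `id`, `title`, `plot`, and `uuid` straight from the API response rather than falling back to alternate fields; an abridged sketch of the resulting shape (field list shortened, names as in the hunk, helper name hypothetical):

// Abridged mapping of one episode, mirroring the fields shown above.
const toEpisodeData = (episode, seasonNumber) => ({
  id: episode.id,
  stream_id: episode.id,
  name: episode.title || '',
  description: episode.plot || '',
  season_number: parseInt(seasonNumber) || 0,
  episode_number: episode.episode_number || 0,
  type: 'episode',
  uuid: episode.uuid,
  logo: episode.movie_image ? { url: episode.movie_image } : null,
});
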
Some files were not shown because too many files have changed in this diff.