56 Commits

Author SHA1 Message Date
7da5084337 fix: prevent duplicate release assets on re-triggered CI workflows
All checks were successful
Build Release / create-release (push) Successful in 1s
Build Release / build-docker (push) Successful in 37s
Lint & Test / test (push) Successful in 2m15s
Build Release / build-linux (push) Successful in 1m50s
Build Release / build-windows (push) Successful in 3m24s
Gitea silently accepts assets with duplicate names on re-upload. Added
delete-before-upload logic to both Windows and Linux asset upload steps.
2026-03-25 13:20:05 +03:00
d9cb1eb225 fix: replace emoji characters with SVG icons in buttons and labels
All checks were successful
Lint & Test / test (push) Successful in 1m19s
- Endpoint copy buttons: 📋 → ICON_CLONE (3 places)
- Scene preset "used by": 🔗 → ICON_LINK
- Automation webhook condition: 🔗 → ICON_WEB
- Add "No Emoji" rule to contexts/frontend.md
2026-03-25 13:16:46 +03:00
9a3433a733 fix: remove destructive DELETE+INSERT shutdown save that caused progressive data loss
_save_all() in BaseSqliteStore did DELETE FROM table + INSERT all in-memory items
on every shutdown. Since SQLite stores use write-through caching (every CRUD writes
immediately), this was redundant. Worse, if in-memory state had fewer items than
the DB, the DELETE wiped rows and only partial data was reinserted.

- Make _save_all() a no-op (DB is always up to date via write-through)
- Replace self._save() with self._save_item() in 6 seed/default creation methods
- Remove _save_all_stores() shutdown hook (replaced with log-only message)
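The write-through scheme that makes _save_all() safe to drop can be sketched like this (illustrative table and method names, not the project's actual BaseSqliteStore API):

```python
import sqlite3


class WriteThroughStore:
    def __init__(self, db: sqlite3.Connection):
        self._db = db
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS items (id TEXT PRIMARY KEY, data TEXT)"
        )
        self._cache: dict[str, str] = {}

    def save_item(self, item_id: str, data: str) -> None:
        # Every CRUD call writes through to SQLite immediately
        self._cache[item_id] = data
        self._db.execute(
            "INSERT INTO items (id, data) VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET data = excluded.data",
            (item_id, data),
        )
        self._db.commit()

    def save_all(self) -> None:
        # No-op: the DB is already current, and a DELETE+INSERT here could
        # wipe rows the in-memory cache never loaded
        pass
```

Because each mutation commits immediately, a shutdown-time bulk rewrite has nothing left to do and can only lose data.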
2026-03-25 13:16:35 +03:00
382a42755d feat: add auto-update system with release checking, notification UI, and install-type-aware apply
- Abstract ReleaseProvider interface (Gitea impl, swappable for GitHub/GitLab)
- Background UpdateService with periodic checks, debounce, dismissed version persistence
- Install type detection (installer/portable/docker/dev) with platform-aware asset matching
- Download with progress events, silent NSIS reinstall, portable ZIP/tarball swap scripts
- Version badge pulse animation, dismissible banner with icon buttons, Settings > Updates tab
- Single source of truth: pyproject.toml version via importlib.metadata, CI stamps tag with sed
- API: GET/POST status, check, dismiss, apply, GET/PUT settings
- i18n: en, ru, zh (27+ keys each)
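The single-source-of-truth version read can be sketched as follows (the distribution name "ledgrab" and the dev fallback string are assumptions):

```python
from importlib import metadata


def app_version(dist_name: str = "ledgrab") -> str:
    """Read the version stamped into pyproject.toml at build time.

    Falls back to a placeholder when running from an unbuilt checkout,
    where the distribution metadata is not installed.
    """
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return "0.0.0-dev"
```

CI rewriting the pyproject.toml version with sed before building means this one call serves installer, portable, and Docker builds alike.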
2026-03-25 13:16:18 +03:00
d2b3fdf786 fix: remove unused Path import in test_device_store
All checks were successful
Lint & Test / test (push) Successful in 1m22s
2026-03-25 11:40:47 +03:00
2da5c047f9 fix: update test fixtures for SQLite storage migration
Some checks failed
Lint & Test / test (push) Failing after 1m33s
All store tests were passing file paths instead of Database objects
after the JSON-to-SQLite migration. Updated fixtures to create temp
Database instances, rewrote backup e2e tests for binary .db format,
and fixed config tests for the simplified StorageConfig.
2026-03-25 11:38:07 +03:00
9dfd2365f4 feat: migrate storage from JSON files to SQLite
Some checks failed
Lint & Test / test (push) Failing after 28s
Replace 22 individual JSON store files with a single SQLite database
(data/ledgrab.db). All entity stores now use BaseSqliteStore backed by
SQLite with WAL mode, write-through caching, and thread-safe access.

- Add Database class with SQLite backup/restore API
- Add BaseSqliteStore as drop-in replacement for BaseJsonStore
- Convert all 16 entity stores to SQLite
- Move global settings (MQTT, external URL, auto-backup) to SQLite
  settings table
- Replace JSON backup/restore with SQLite snapshot backups (.db files)
- Remove partial export/import feature (backend + frontend)
- Update demo seed to write directly to SQLite
- Add "Backup Now" button to settings UI
- Remove StorageConfig file path fields (single database_file remains)
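The snapshot backups described above map onto the stdlib sqlite3 backup API; a minimal sketch (function name and paths are illustrative):

```python
import sqlite3


def snapshot(db_path: str, backup_path: str) -> None:
    """Copy a live SQLite database to a .db snapshot file.

    Connection.backup is consistent even with concurrent writers, unlike
    copying the file directly while WAL pages are still unflushed.
    """
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    try:
        src.backup(dst)
    finally:
        dst.close()
        src.close()
```

Restore is the same call in the opposite direction, followed by reloading the in-memory caches.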
2026-03-25 00:03:19 +03:00
29fb944494 fix: resolve test pollution from freeze_saves and fix os_listener toggle
All checks were successful
Lint & Test / test (push) Successful in 27s
- Add unfreeze_saves() to base_store.py and call it in e2e cleanup.
  The backup restore flow calls freeze_saves() which sets a module-level
  flag that silently disables all store _save() calls. Without reset,
  this poisoned all subsequent persistence tests (9 failures).
- Fix os_listener toggle to use toggle-switch/toggle-slider CSS classes
  instead of nonexistent switch/slider classes (was showing plain checkbox).
- Add mandatory test run to CLAUDE.md pre-commit checks.

All 341 tests now pass.
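The freeze/unfreeze pair is a module-level flag consulted by every save; a sketch of the pattern (freeze_saves/unfreeze_saves names come from the commit, the surrounding save helper is illustrative):

```python
_saves_frozen = False


def freeze_saves() -> None:
    """Disable persistence, e.g. while a restore is replacing files."""
    global _saves_frozen
    _saves_frozen = True


def unfreeze_saves() -> None:
    """Re-enable persistence; test cleanup must call this."""
    global _saves_frozen
    _saves_frozen = False


def save(write) -> bool:
    """Run the write callback unless saves are frozen; report the outcome."""
    if _saves_frozen:
        return False
    write()
    return True
```

Because the flag is module-level, one test leaving it set silently disables persistence for every test that follows, which is exactly the pollution this commit fixes.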
2026-03-24 22:26:57 +03:00
ea5dc47641 fix: add missing os_listener toggle to notification CSS editor
Some checks failed
Lint & Test / test (push) Failing after 27s
The os_listener field existed in the backend model but was never
exposed in the UI. It defaulted to false, so OS notifications were
captured to history but never fired the visual effect. Now shows
a toggle at the top of the notification section, defaults to ON
for new sources.
2026-03-24 22:01:53 +03:00
347b252f06 feat: add auto-name generation to all remaining creation modals
Some checks failed
Lint & Test / test (push) Failing after 27s
Audio sources: type + device/parent/channel/band detail
Weather sources: provider + coordinates (updates on geolocation)
Sync clocks: "Sync Clocks · Nx" (updates on speed slider)
Automations: scene name + condition count/logic
Scene presets: "Scenes · N targets" (updates on add/remove)
Pattern templates: "Pattern Templates · N rects" (updates on add/remove)

All follow the same pattern: name auto-generates on create, stops
when user manually edits the name field.
2026-03-24 21:44:28 +03:00
d6f796a499 feat: use IconSelect grid for frequency band selector in band extract editor
Some checks failed
Lint & Test / test (push) Failing after 1m1s
Replace plain HTML select with icon grid picker (bass/mid/treble/custom)
using volume, music, zap, and sliders icons for visual differentiation.
2026-03-24 19:59:55 +03:00
c1940dadb7 refactor: split audio sources into 3 separate navtree subtabs
Some checks failed
Lint & Test / test (push) Failing after 31s
Multichannel, Mono, and Band Extract each get their own subtab and
panel within the Audio group, replacing the single combined Audio
Sources tab. Cross-links from CSS, value sources, and command palette
updated to navigate to the correct subtab.
2026-03-24 19:43:33 +03:00
ae0a5cb160 feat: add band_extract audio source type for frequency band filtering
Some checks failed
Lint & Test / test (push) Failing after 29s
New audio source type that filters a parent source to a specific frequency
band (bass 20-250Hz, mid 250-4kHz, treble 4k-20kHz, or custom range).
Supports chaining with frequency range intersection and cycle detection.
Band filtering applied in both CSS audio streams and test WebSocket.
2026-03-24 19:36:11 +03:00
a62e2f474d fix: add crosslink from weather CSS card to weather source entity
Some checks failed
Lint & Test / test (push) Failing after 29s
2026-03-24 18:58:29 +03:00
ef33935188 feat: add weather source entity and weather-reactive CSS source type
Some checks failed
Lint & Test / test (push) Failing after 34s
New standalone WeatherSource entity with pluggable provider architecture
(Open-Meteo v1, free, no API key). Full CRUD, test endpoint, browser
geolocation, IconSelect provider picker, CardSection with test/clone/edit.

WeatherColorStripStream maps WMO weather codes to ambient color palettes
with temperature hue shifting and thunderstorm flash effects. Ref-counted
WeatherManager polls API and caches data per source.

CSS editor integration: weather type with EntitySelect source picker,
speed and temperature influence sliders. Backup/restore support.

i18n for en/ru/zh.
2026-03-24 18:52:46 +03:00
0723c5c68c feat: add overlay, soft light, hard light, difference, exclusion blend modes to composite
Some checks failed
Lint & Test / test (push) Failing after 27s
Integer-math implementations with pre-allocated scratch buffers.
IconSelect picker updated with 10 blend modes. i18n for en/ru/zh.
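Overlay illustrates the integer-math approach; the other modes follow the same pattern (per-channel sketch, assuming 8-bit channels, not the project's exact buffer code):

```python
def overlay(base: int, top: int) -> int:
    """Overlay blend for one 8-bit channel using integer math only.

    Multiply in the shadows, screen in the highlights; //255 keeps the
    result in range without any float conversion per pixel.
    """
    if base < 128:
        return (2 * base * top) // 255
    return 255 - (2 * (255 - base) * (255 - top)) // 255
```

With pre-allocated scratch buffers this runs per frame without per-pixel allocations.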
2026-03-24 17:24:39 +03:00
bbef7e5869 feat: add per-layer LED range and collapsible layers to composite source
Some checks failed
Lint & Test / test (push) Failing after 33s
Composite layers now support optional start/end LED range (toggleable)
and reverse flag, making composite a superset of mapped source. Layers
are collapsible with animated expand/collapse and consistent 0.85rem
font sizing. Delete button restyled as ghost icon button.

Also includes minor dashboard CSS overflow fixes.
2026-03-24 17:15:22 +03:00
4caafbb78f fix: remove counter badges from main tab headers
Some checks failed
Lint & Test / test (push) Failing after 28s
2026-03-24 15:57:17 +03:00
9b4dbac088 feat: graceful shutdown with store persistence and restart overlay
Some checks failed
Lint & Test / test (push) Failing after 29s
- Add /api/v1/system/shutdown endpoint that triggers clean uvicorn exit
- Persist all 15 stores to disk during shutdown via _save_all_stores()
- Add force parameter to BaseJsonStore._save() to bypass restore freeze
- Restart script now requests graceful shutdown via API (15s timeout),
  falls back to force-kill only if server doesn't exit in time
- Broadcast server_restarting event over WebSocket before shutdown
- Frontend shows "Server restarting..." overlay instantly on WS event,
  replacing the old dynamically-created overlay from settings.ts
- Add server_ref module to share uvicorn Server + TrayManager refs
- Add i18n keys for restart overlay (en/ru/zh)
2026-03-24 15:50:32 +03:00
73947eb6cb refactor: replace type-dispatch if/elif chains with registry patterns and handler maps
Some checks failed
Lint & Test / test (push) Failing after 30s
Backend: add registry dicts (_CONDITION_MAP, _VALUE_SOURCE_MAP, _PICTURE_SOURCE_MAP)
and per-subclass from_dict() methods to eliminate ~300 lines of if/elif in factory
functions. Convert automation engine dispatch (condition eval, match_mode, match_type,
deactivation_mode) to dict-based lookup.

Frontend: extract CSS_CARD_RENDERERS, CSS_SECTION_MAP, CSS_TYPE_SETUP,
CONDITION_PILL_RENDERERS, and PICTURE_SOURCE_CARD_RENDERERS handler maps to replace
scattered type-check chains in color-strips.ts, automations.ts, and streams.ts.
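The registry pattern replacing the if/elif chains can be sketched like this (the _CONDITION_MAP name comes from the commit; the decorator and threshold handler are illustrative):

```python
from typing import Any, Callable

_CONDITION_MAP: dict[str, Callable[[dict, Any], bool]] = {}


def condition(kind: str):
    """Register an evaluator for one condition type string."""
    def deco(fn):
        _CONDITION_MAP[kind] = fn
        return fn
    return deco


@condition("threshold")
def _eval_threshold(cond: dict, value: Any) -> bool:
    return value >= cond["min"]


def eval_condition(cond: dict, value: Any) -> bool:
    """Dict lookup replaces a growing if/elif dispatch chain."""
    handler = _CONDITION_MAP.get(cond["type"])
    if handler is None:
        raise ValueError(f"unknown condition type: {cond['type']!r}")
    return handler(cond, value)
```

Adding a new condition type is then one decorated function instead of another branch in the engine.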
2026-03-24 14:51:27 +03:00
b63944bb34 fix: skip browser open on tray restart, add WLED rename TODO
Some checks failed
Lint & Test / test (push) Failing after 30s
Set WLED_RESTART=1 env var on tray restart so __main__.py skips
opening a new browser tab — the user already has one open.

Add important TODO item to eliminate WLED naming throughout the app
(package rename, i18n, build scripts, etc.).
2026-03-24 14:21:27 +03:00
40e951c882 fix: gradient/effect/audio palette pickers not showing items
Some checks failed
Lint & Test / test (push) Failing after 33s
Populate <select> <option> elements from gradient entities before
creating IconSelect — the trigger display needs matching options to
sync correctly. Add _syncSelectOptions helper used by all three
palette pickers (gradient, effect, audio).
2026-03-24 14:14:04 +03:00
524f910cf0 feat: add Restart and Shutdown buttons to system tray
Some checks failed
Lint & Test / test (push) Failing after 29s
Both actions show a confirmation dialog before proceeding.
Restart uses os.execv to re-launch the process in-place.
Shutdown stops the server and exits the tray.
2026-03-24 14:10:46 +03:00
178d115cc5 refactor: remove inline gradient editor from CSS modal, use entity picker
Some checks failed
Lint & Test / test (push) Failing after 30s
Replace the gradient stop editor (canvas, markers, stop list) in the
CSS editor gradient section with a simple gradient entity selector.
Gradients are now created/edited exclusively in the Gradients tab.

Fix effect and audio palette pickers to populate from gradient entities
dynamically instead of hardcoded HTML options.
Unify all gradient/palette pickers via _buildGradientEntityItems().

Also: rename "None (use own speed)" → "None (no sync)" for sync clocks.
Add i18n keys for gradient selector and missing error messages.
2026-03-24 14:09:49 +03:00
fc62d5d3b1 chore: sync CI/CD with upstream guide and update context docs
Some checks failed
Lint & Test / test (push) Failing after 30s
release.yml: add fallback for existing releases on tag re-push.
installer.nsi: add .onInit file lock check, use LaunchApp function
instead of RUN_PARAMETERS to fix NSIS quoting bug.
build-dist.ps1: copy start-hidden.vbs to dist scripts/.
start-hidden.vbs: embedded Python fallback for installed/dev envs.

Update ci-cd.md with version detection, NSIS best practices, local
build testing, Gitea vs GitHub differences, troubleshooting.
Update frontend.md with full entity type checklist and common pitfalls.
2026-03-24 13:59:27 +03:00
1111ab7355 fix: resolve all TypeScript strict null check errors
Fix ~68 pre-existing strict null errors across 13 feature modules.
Add non-null assertions for DOM element lookups, null coalescing for
optional values, and type guards for nullable properties. Zero tsc
errors now with --noEmit.
2026-03-24 13:59:07 +03:00
c0d0d839dc feat: add gradient entity modal and fix color picker clipping
Add full gradient editor modal with name, description, visual stop
editor, tags, and dirty checking. Gradient editor now supports ID
prefix to avoid DOM conflicts between CSS editor and standalone modal.

Fix color picker popover clipped by template-card overflow:hidden.
Fix gradient canvas not sizing correctly in standalone modal.
2026-03-24 13:58:51 +03:00
227b82f522 feat: expand color strip sources with gradient references and effect improvements
Add gradient_id field to color strip sources for referencing reusable
gradient entities. Improve audio stream processing and effect stream
with new parameters.
2026-03-24 13:58:33 +03:00
6a881f8fdd feat: add system tray and __main__ entry point
Add pystray-based system tray icon with Open UI / Restart / Quit
actions. Add __main__.py for `python -m wled_controller` support.
Update start-hidden.vbs with embedded Python fallback for both
installed and dev environments.
2026-03-24 13:58:19 +03:00
c26aec916e feat: add gradient entity with CRUD API and storage
Reusable gradient definitions with built-in presets (rainbow, sunset,
ocean, etc.) and user-created gradients. Includes model, JSON store,
Pydantic schemas, REST routes (list/create/update/clone/delete), and
backup/restore integration.
2026-03-24 13:58:04 +03:00
2c3f08344c feat: add chase and gradient flash notification effects with priority queue
Some checks failed
Lint & Test / test (push) Failing after 28s
New notification effects:
- Chase: light bounces across strip with Gaussian glow tail
- Gradient flash: bright center fades to edges with exponential decay

Queue priority: notifications with color_override get high priority and
interrupt the current effect.

Also fixes transient preview for notification sources — adds WebSocket
"fire" command so inline preview works without a saved source, plus
auto-fires on preview open so the effect is visible immediately.
2026-03-24 00:00:49 +03:00
9b80076b5b feat: expand color strip sources with new effects, gradient improvements, and daylight/candlelight enhancements
Some checks failed
Lint & Test / test (push) Failing after 29s
Effects: add 7 new procedural effects (rain, comet, bouncing ball, fireworks,
sparkle rain, lava lamp, wave interference) and custom palette support via
user-defined [[pos,R,G,B],...] stops.

Gradient: add easing functions (linear, ease_in_out, step, cubic) for stop
interpolation, plus noise_perturb and hue_rotate animation types.
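The easing curves plug into stop interpolation roughly like this (ease_in_out is assumed to be smoothstep; the names match the commit, the bodies are a sketch):

```python
def linear(t: float) -> float:
    return t


def ease_in_out(t: float) -> float:
    # Smoothstep: zero slope at both ends
    return t * t * (3.0 - 2.0 * t)


def step(t: float) -> float:
    return 0.0 if t < 0.5 else 1.0


def lerp_stops(a: float, b: float, t: float, easing=linear) -> float:
    """Interpolate between two gradient stop values with an easing curve."""
    return a + (b - a) * easing(t)
```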

Daylight: add longitude field and NOAA solar equations for accurate
sunrise/sunset based on latitude, longitude, and day of year.
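The NOAA equations are longer, but a simplified declination approximation shows the day-of-year dependence they build on (this cosine model is a common stand-in, not the project's exact code):

```python
import math


def solar_declination_deg(day_of_year: int) -> float:
    """Approximate solar declination for a given day of the year.

    Peaks near +23.44 degrees at the June solstice; the full NOAA
    equations add corrections (equation of time, orbital eccentricity)
    on top of this basic shape.
    """
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
```

Combining declination with latitude gives the hour angle of sunrise/sunset; longitude then shifts the result into local clock time.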

Candlelight: add wind simulation (correlated gusts), candle type presets
(taper/votive/bonfire), and wax drip effect with localized brightness dips.

Also fixes editor preview to include all new fields for inline LED test.
2026-03-23 22:40:55 +03:00
c4dce19b2e feat: add HSL shift, contrast, and temporal blur CSPT filters
Some checks failed
Lint & Test / test (push) Failing after 29s
Three new processing template filters for both picture and color strip sources:
- HSL Shift: hue rotation (0-359°) + lightness multiplier via vectorized RGB↔HSL
- Contrast: LUT-based contrast adjustment around mid-gray (0.0-3.0)
- Temporal Blur: exponential moving average across frames for motion smoothing
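The temporal blur is a per-channel exponential moving average; a minimal sketch (list-of-floats frames stand in for the actual image buffers):

```python
def temporal_blur(prev: list[float], cur: list[float],
                  alpha: float = 0.3) -> list[float]:
    """Exponential moving average across frames.

    alpha near 1.0 follows the current frame closely; near 0.0 it
    smooths hard and trails behind fast motion.
    """
    return [alpha * c + (1.0 - alpha) * p for p, c in zip(prev, cur)]
```

The output of each frame becomes prev for the next, so only one extra buffer is needed regardless of the blur strength.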
2026-03-23 21:59:13 +03:00
b27ac8783b feat: expand appearance with shader effects and new style presets
Some checks failed
Lint & Test / test (push) Failing after 29s
Add 5 WebGL shader background effects (Aurora, Plasma, Digital Rain,
Starfield, Warp Tunnel) via a new bg-shaders.ts engine that shares
a dedicated canvas. Add 5 style presets (Sakura, Ocean, Copper, Vapor,
Monolith) with distinctive font pairings. Remove CSS particles effect
in favor of shader-based alternatives. Fix dot grid visibility and
tune all shader intensities for subtle ambient appearance.
2026-03-23 18:31:20 +03:00
73b2ee6222 feat: add visual customization presets to Settings > Appearance tab
Some checks failed
Lint & Test / test (push) Failing after 30s
Add style presets (font + color combinations) and background effect
presets as a new Appearance tab in the settings modal. Style presets
include Default, Midnight, Ember, Arctic, Terminal, and Neon — each
with curated dark/light theme colors and Google Font pairings.
Background effects (Dot Grid, Gradient Mesh, Scanlines, Particles)
use a dedicated overlay div alongside the existing WebGL Noise Field.
All choices persist to localStorage and restore on page load.
2026-03-23 15:42:08 +03:00
1b5b04afaa feat: add scene crosslinks to automation cards
Some checks failed
Lint & Test / test (push) Failing after 31s
Scene name and fallback scene in automation cards are now clickable,
navigating to the corresponding scene preset card. Also renders the
deactivation mode label which was previously set but never displayed.
2026-03-23 15:01:45 +03:00
4975a74ff3 feat: optional auth + backup/restore reliability fixes
Some checks failed
Lint & Test / test (push) Failing after 29s
Auth is now optional: when `auth.api_keys` is empty, all endpoints are
open (no login screen, no Bearer tokens). Health endpoint reports
`auth_required` so the frontend knows which mode to use.

Backup/restore fixes:
- Auto-backup uses atomic writes (was `write_text`, risked corruption)
- Startup backup skipped if recent backup exists (<5 min cooldown),
  preventing rapid restarts from rotating out good backups
- Restore rejects all-empty backups to prevent accidental data wipes
- Store saves frozen after restore to prevent stale in-memory data
  from overwriting freshly-restored files before restart completes
- Missing stores during restore logged as warnings
- STORE_MAP completeness verified at startup against StorageConfig
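The atomic-write fix follows the standard temp-file-plus-rename pattern; a sketch (helper name is illustrative):

```python
import os
import tempfile


def atomic_write_bytes(path: str, data: bytes) -> None:
    """Write a file so readers never observe a partial write.

    The temp file lives in the destination directory so os.replace is a
    same-filesystem rename, which is atomic on POSIX and Windows.
    """
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```

By contrast, a plain write_text truncates the target first, so a crash mid-write leaves a corrupt backup, which is the failure mode this commit closes.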
2026-03-23 14:50:25 +03:00
cd3137b0ec refactor: make OpenCV an optional dependency
All checks were successful
Lint & Test / test (push) Successful in 25s
Camera engine and video stream support now gracefully degrade when
opencv-python-headless is not installed. The app starts fine without
it — camera engine simply doesn't register and video streams raise
a clear ImportError with install instructions.

Saves ~45MB for users who don't need camera/video capture.
2026-03-23 02:44:46 +03:00
e391346b4b docs: reference Gitea CI/CD guide from claude-code-facts repo
All checks were successful
Lint & Test / test (push) Successful in 29s
2026-03-23 01:17:34 +03:00
f376622482 fix: UI polish — notification history buttons, CSS test error handling, schedule time picker, empty state labels
All checks were successful
Lint & Test / test (push) Successful in 31s
- Notification history: replace text buttons with icon buttons, use modal-footer for proper positioning
- CSS test: reject 0-LED picture sources with clear error message, show WS close reason in UI
- Calibration: distribute LEDs by aspect ratio (16:9 default) instead of evenly across edges
- Value source schedule: replace native time input with custom HH:MM picker matching automation style
- Remove "No ... yet" empty state labels from all CardSection instances
2026-03-22 20:58:13 +03:00
52c8614a3c fix: escape release body via Python to avoid YAML parsing errors
All checks were successful
Lint & Test / test (push) Successful in 31s
Build Release / create-release (push) Successful in 1s
Build Release / build-linux (push) Successful in 1m14s
Build Release / build-docker (push) Successful in 7s
Build Release / build-windows (push) Successful in 2m41s
2026-03-22 13:59:14 +03:00
5c814a64a7 fix: remove extraneous f-string prefixes in startup banner
All checks were successful
Lint & Test / test (push) Successful in 35s
2026-03-22 13:53:30 +03:00
0716d602e2 docs: add CI/CD context file and pre-commit lint rule
Some checks failed
Lint & Test / test (push) Failing after 13s
Create contexts/ci-cd.md documenting release pipeline, build scripts,
CI runners, and versioning. Reference it from CLAUDE.md context table.
Add mandatory pre-commit lint check rule to CLAUDE.md.
2026-03-22 13:52:47 +03:00
42bc05c968 fix: update release description to match current build artifacts
Some checks failed
Lint & Test / test (push) Failing after 14s
Add Windows installer, Docker volume mount, and first-time setup
instructions to the Gitea release body. Fix Docker registry URL.
Add CI/Release sync rule to CLAUDE.md.
2026-03-22 13:50:46 +03:00
8bed09a401 perf: pre-compile Python bytecode in portable build
Some checks failed
Build Release / create-release (push) Successful in 1s
Lint & Test / test (push) Failing after 24s
Build Release / build-linux (push) Successful in 1m11s
Build Release / build-docker (push) Successful in 8s
Build Release / build-windows (push) Successful in 2m51s
Add compileall step to build-dist-windows.sh that generates .pyc files
for both app source and site-packages. Saves ~100-200ms on startup by
skipping parse/compile on first import.
2026-03-22 13:38:15 +03:00
6a6c8b2c52 perf: reduce portable build size ~40MB — replace winsdk/wmi, migrate cv2 to Pillow
Some checks failed
Lint & Test / test (push) Failing after 19s
- Replace winsdk (~35MB) with winrt packages (~2.5MB) for OS notification
  listener. API is identical, 93% size reduction.
- Replace wmi (~3-5MB) with ctypes for monitor names (EnumDisplayDevicesW)
  and camera names (SetupAPI). Zero external dependency.
- Migrate cv2.resize/imencode/LUT to Pillow/numpy in 5 files (filters,
  preview helpers, kc_target_processor). OpenCV only needed for camera
  and video stream now.
- Fix DefWindowProcW ctypes overflow on 64-bit Python (pre-existing bug
  in platform_detector display power listener).
- Fix openLightbox import in streams-capture-templates.ts (was using
  broken window cast instead of direct import).
- Add mandatory data migration policy to CLAUDE.md after silent data
  loss incident from storage file rename without migration.
2026-03-22 13:35:01 +03:00
4aa209f7d1 perf: strip OpenCV ffmpeg DLL and PyWin32 help from portable build
Some checks failed
Lint & Test / test (push) Failing after 14s
- Remove opencv_videoio_ffmpeg (28MB) — only needed for video file I/O,
  camera capture uses cv2.VideoCapture which links directly to DirectShow
- Remove PyWin32.chm help file (2.6MB)
- Keep cv2.pyd intact (needed for resize, cvtColor, camera)

Future: migrate non-camera cv2 usage to Pillow, replace winsdk (37MB
monolithic binary) with lighter notification API.
2026-03-22 03:49:55 +03:00
14adc8172b perf: reduce Windows portable build size by ~80MB
Some checks failed
Lint & Test / test (push) Failing after 28s
Strip unnecessary files from site-packages:
- Remove pip, setuptools, pythonwin (not needed at runtime)
- OpenCV: remove unused extra modules and data
- numpy: remove tests, f2py, typing stubs
- Remove all .dist-info directories and .pyi type stubs
- Remove winsdk type stubs
2026-03-22 03:46:20 +03:00
0e54616000 fix: use msiextract for tkinter, fix step numbering, graceful Docker fallback
Some checks failed
Build Release / create-release (push) Successful in 0s
Lint & Test / test (push) Failing after 14s
Build Release / build-linux (push) Successful in 1m27s
Build Release / build-docker (push) Successful in 6s
Build Release / build-windows (push) Successful in 3m26s
- Replace 7z with msiextract (msitools) to extract tkinter from
  python.org's individual MSI packages (tcltk.msi + lib.msi)
- Fix build step numbering to /9
- Docker job continues on login failure (registry may not be enabled)
- Show makensis output for debugging
2026-03-22 03:44:37 +03:00
3633793972 fix: extract tkinter from Python installer via 7z, fix NSIS icon path
Some checks failed
Build Release / create-release (push) Successful in 1s
Lint & Test / test (push) Failing after 15s
Build Release / build-linux (push) Successful in 1m20s
Build Release / build-docker (push) Failing after 9s
Build Release / build-windows (push) Successful in 3m19s
- Replace nuget approach (doesn't contain tkinter) with extracting
  from the official Python amd64.exe installer using 7z
- Remove MUI_ICON/MUI_UNICON (no .ico file available, use NSIS default)
- Add p7zip-full to CI dependencies
2026-03-22 03:40:06 +03:00
7f799a914d feat: add NSIS Windows installer to release workflow
Some checks failed
Build Release / create-release (push) Successful in 1s
Lint & Test / test (push) Failing after 15s
Build Release / build-linux (push) Successful in 1m21s
Build Release / build-docker (push) Failing after 9s
Build Release / build-windows (push) Failing after 1m37s
- installer.nsi: per-user install to AppData, Start Menu shortcuts,
  optional desktop shortcut and autostart, clean uninstall (preserves
  data/), Add/Remove Programs registration
- build-dist-windows.sh: runs makensis after ZIP if available
- release.yml: install nsis in CI, upload both ZIP and setup.exe
- Fix Docker registry login (sed -E for https:// stripping)
2026-03-22 03:35:34 +03:00
d5b5c255e8 feat: bundle tkinter into Windows portable build
Some checks failed
Build Release / create-release (push) Successful in 5s
Lint & Test / test (push) Failing after 21s
Build Release / build-linux (push) Successful in 1m7s
Build Release / build-docker (push) Failing after 4s
Build Release / build-windows (push) Successful in 1m50s
Download _tkinter.pyd, tkinter package, and Tcl/Tk DLLs from the
official Python nuget package and copy them into the embedded Python
directory. This enables the screen overlay visualization during
calibration in the portable build.
2026-03-22 03:32:02 +03:00
564e4c9c9c fix: accurate port banner and tkinter graceful fallback
Some checks failed
Build Release / build-linux (push) Successful in 1m24s
Build Release / create-release (push) Successful in 1s
Lint & Test / test (push) Failing after 13s
Build Release / build-windows (push) Successful in 1m35s
Build Release / build-docker (push) Failing after 9s
- Move startup banner into main.py so it shows the actual configured
  port instead of a hardcoded 8080 in the launcher scripts
- Wrap tkinter import in try/except so embedded Python (which lacks
  tkinter) logs a warning instead of crashing the overlay thread
2026-03-22 03:30:19 +03:00
7c80500d48 feat: add autostart scripts and fix port configuration in launchers
Some checks failed
Build Release / create-release (push) Successful in 1s
Build Release / build-windows (push) Successful in 1m16s
Build Release / build-docker (push) Failing after 8s
Build Release / build-linux (push) Successful in 59s
Lint & Test / test (push) Successful in 1m52s
Windows: install-autostart.bat (Startup folder shortcut),
uninstall-autostart.bat. Linux: install-service.sh (systemd unit),
uninstall-service.sh.

Both launchers now use python -m wled_controller.main so port is
read from config/env instead of being hardcoded to 8080.
2026-03-22 03:25:05 +03:00
39e3d64654 fix: replace Docker Buildx with plain docker build/push
All checks were successful
Lint & Test / test (push) Successful in 1m51s
Buildx requires container networking that fails on TrueNAS runners.
Plain docker build + docker push works without Buildx setup.
2026-03-22 03:20:37 +03:00
47a62b1aed fix: add app/src to embedded Python ._pth for module discovery
Some checks failed
Build Release / create-release (push) Successful in 1s
Build Release / build-windows (push) Successful in 1m14s
Lint & Test / test (push) Successful in 1m50s
Build Release / build-docker (push) Failing after 46s
Build Release / build-linux (push) Successful in 1m37s
Windows embedded Python ignores PYTHONPATH when a ._pth file exists.
Add ../app/src to the ._pth so wled_controller is importable.
Fixes ModuleNotFoundError on portable builds.
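For reference, the resulting ._pth would look roughly like this (the exact file name tracks the embedded Python version, e.g. python311._pth, and the zip name is an assumption; the ../app/src entry is the part this fix adds):

```
python311.zip
.
../app/src
```

Each line is a sys.path entry relative to the interpreter directory; an `import site` line may also be present depending on the build.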
2026-03-22 03:14:12 +03:00
185 changed files with 12961 additions and 2460 deletions


@@ -25,20 +25,65 @@ jobs:
IS_PRE="true" IS_PRE="true"
fi fi
# Build registry path for Docker instructions
SERVER_HOST=$(echo "${{ gitea.server_url }}" | sed -E 's|https?://||')
REPO=$(echo "${{ gitea.repository }}" | tr '[:upper:]' '[:lower:]')
DOCKER_IMAGE="${SERVER_HOST}/${REPO}"
# Build release body via Python to avoid YAML escaping issues
BODY_JSON=$(python3 -c "
import json, sys
tag = '$TAG'
image = '$DOCKER_IMAGE'
body = f'''## Downloads
| Platform | File | Description |
|----------|------|-------------|
| Windows (installer) | \`LedGrab-{tag}-setup.exe\` | Install with Start Menu shortcut, optional autostart, uninstaller |
| Windows (portable) | \`LedGrab-{tag}-win-x64.zip\` | Unzip anywhere, run LedGrab.bat |
| Linux | \`LedGrab-{tag}-linux-x64.tar.gz\` | Extract, run ./run.sh |
| Docker | See below | docker pull + docker run |
After starting, open **http://localhost:8080** in your browser.
### Docker
\`\`\`bash
docker pull {image}:{tag}
docker run -d --name ledgrab -p 8080:8080 -v ledgrab-data:/app/data {image}:{tag}
\`\`\`
### First-time setup
1. Change the default API key in config/default_config.yaml
2. Open http://localhost:8080 and discover your WLED devices
3. See INSTALLATION.md for detailed configuration
'''
import textwrap
print(json.dumps(textwrap.dedent(body).strip()))
")
RELEASE=$(curl -s -X POST "$BASE_URL/releases" \ RELEASE=$(curl -s -X POST "$BASE_URL/releases" \
-H "Authorization: token $GITEA_TOKEN" \ -H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \ -H "Content-Type: application/json" \
-d "{ -d "{
\"tag_name\": \"$TAG\", \"tag_name\": \"$TAG\",
\"name\": \"LedGrab $TAG\", \"name\": \"LedGrab $TAG\",
\"body\": \"## Downloads\\n\\n| Platform | File | How to run |\\n|----------|------|------------|\\n| Windows | \`LedGrab-${TAG}-win-x64.zip\` | Unzip → run \`LedGrab.bat\` → open http://localhost:8080 |\\n| Linux | \`LedGrab-${TAG}-linux-x64.tar.gz\` | Extract → run \`./run.sh\` → open http://localhost:8080 |\\n| Docker | See below | \`docker pull\` → \`docker run\` |\\n\\n### Docker\\n\\n\`\`\`bash\\ndocker pull ${{ gitea.server_url }}/${{ gitea.repository }}:${TAG}\\ndocker run -d -p 8080:8080 ${{ gitea.server_url }}/${{ gitea.repository }}:${TAG}\\n\`\`\`\", \"body\": $BODY_JSON,
\"draft\": false, \"draft\": false,
\"prerelease\": $IS_PRE \"prerelease\": $IS_PRE
}") }")
RELEASE_ID=$(echo "$RELEASE" | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])") # Fallback: if release already exists for this tag, fetch it instead
RELEASE_ID=$(echo "$RELEASE" | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])" 2>/dev/null)
if [ -z "$RELEASE_ID" ]; then
echo "::warning::Release already exists for tag $TAG — reusing existing release"
RELEASE=$(curl -s "$BASE_URL/releases/tags/$TAG" \
-H "Authorization: token $GITEA_TOKEN")
RELEASE_ID=$(echo "$RELEASE" | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])")
fi
echo "release_id=$RELEASE_ID" >> "$GITHUB_OUTPUT" echo "release_id=$RELEASE_ID" >> "$GITHUB_OUTPUT"
echo "Created release ID: $RELEASE_ID" echo "Release ID: $RELEASE_ID"
# ── Windows portable ZIP (cross-built from Linux) ───────── # ── Windows portable ZIP (cross-built from Linux) ─────────
build-windows: build-windows:
@@ -63,37 +108,54 @@ jobs:
- name: Install system dependencies
  run: |
    sudo apt-get update
    sudo apt-get install -y --no-install-recommends zip libportaudio2 nsis msitools
- name: Cross-build Windows distribution
  run: |
    chmod +x build-dist-windows.sh
    ./build-dist-windows.sh "${{ gitea.ref_name }}"
- name: Upload build artifacts
  uses: actions/upload-artifact@v3
  with:
    name: LedGrab-${{ gitea.ref_name }}-win-x64
    path: |
      build/LedGrab-*.zip
      build/LedGrab-*-setup.exe
    retention-days: 90
- name: Attach assets to release
  env:
    GITEA_TOKEN: ${{ secrets.GITEA_TOKEN }}
  run: |
TAG="${{ gitea.ref_name }}"
RELEASE_ID="${{ needs.create-release.outputs.release_id }}"
BASE_URL="${{ gitea.server_url }}/api/v1/repos/${{ gitea.repository }}"
# Upload helper — deletes existing asset with same name to prevent duplicates on re-run
upload_asset() {
local FILE="$1"
local NAME=$(basename "$FILE")
EXISTING_ID=$(curl -s "$BASE_URL/releases/$RELEASE_ID/assets" \
-H "Authorization: token $GITEA_TOKEN" \
| python3 -c "import sys,json; assets=json.load(sys.stdin); print(next((str(a['id']) for a in assets if a['name']=='$NAME'),''))" 2>/dev/null)
if [ -n "$EXISTING_ID" ]; then
curl -s -X DELETE "$BASE_URL/releases/$RELEASE_ID/assets/$EXISTING_ID" \
-H "Authorization: token $GITEA_TOKEN"
echo "Replaced existing asset: $NAME"
fi
curl -s -X POST \
"$BASE_URL/releases/$RELEASE_ID/assets?name=$NAME" \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/octet-stream" \
--data-binary "@$FILE"
echo "Uploaded: $NAME"
}
ZIP_FILE=$(ls build/LedGrab-*.zip | head -1)
[ -f "$ZIP_FILE" ] && upload_asset "$ZIP_FILE"
SETUP_FILE=$(ls build/LedGrab-*-setup.exe 2>/dev/null | head -1)
# Installer is optional (only built when makensis is available),
# so don't let a failed [ -f ] test fail the step
if [ -f "$SETUP_FILE" ]; then upload_asset "$SETUP_FILE"; fi
# ── Linux tarball ──────────────────────────────────────────
build-linux:
@@ -136,18 +198,26 @@ jobs:
env:
  GITEA_TOKEN: ${{ secrets.GITEA_TOKEN }}
run: |
TAG="${{ gitea.ref_name }}"
RELEASE_ID="${{ needs.create-release.outputs.release_id }}"
BASE_URL="${{ gitea.server_url }}/api/v1/repos/${{ gitea.repository }}"
TAR_FILE=$(ls build/LedGrab-*.tar.gz | head -1)
TAR_NAME=$(basename "$TAR_FILE")
# Delete existing asset with same name to prevent duplicates on re-run
EXISTING_ID=$(curl -s "$BASE_URL/releases/$RELEASE_ID/assets" \
-H "Authorization: token $GITEA_TOKEN" \
| python3 -c "import sys,json; assets=json.load(sys.stdin); print(next((str(a['id']) for a in assets if a['name']=='$TAR_NAME'),''))" 2>/dev/null)
if [ -n "$EXISTING_ID" ]; then
curl -s -X DELETE "$BASE_URL/releases/$RELEASE_ID/assets/$EXISTING_ID" \
-H "Authorization: token $GITEA_TOKEN"
echo "Replaced existing asset: $TAR_NAME"
fi
curl -s -X POST \
  "$BASE_URL/releases/$RELEASE_ID/assets?name=$TAR_NAME" \
  -H "Authorization: token $GITEA_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary "@$TAR_FILE"
echo "Uploaded: $TAR_NAME"

# ── Docker image ───────────────────────────────────────────
@@ -160,43 +230,56 @@ jobs:
with:
  fetch-depth: 0
- name: Extract version metadata
  id: meta
  run: |
    TAG="${{ gitea.ref_name }}"
    VERSION="${TAG#v}"
    # Strip protocol and lowercase for Docker registry path
    SERVER_HOST=$(echo "${{ gitea.server_url }}" | sed -E 's|https?://||')
    REPO=$(echo "${{ gitea.repository }}" | tr '[:upper:]' '[:lower:]')
    REGISTRY="${SERVER_HOST}/${REPO}"
    echo "version=$VERSION" >> "$GITHUB_OUTPUT"
    echo "registry=$REGISTRY" >> "$GITHUB_OUTPUT"
    echo "server_host=$SERVER_HOST" >> "$GITHUB_OUTPUT"
- name: Login to Gitea Container Registry
  id: docker-login
  continue-on-error: true
  run: |
    echo "${{ secrets.GITEA_TOKEN }}" | docker login \
      "${{ steps.meta.outputs.server_host }}" \
      -u "${{ gitea.actor }}" --password-stdin
- name: Build Docker image
  if: steps.docker-login.outcome == 'success'
  run: |
    TAG="${{ gitea.ref_name }}"
    REGISTRY="${{ steps.meta.outputs.registry }}"
    docker build \
      --build-arg APP_VERSION="${{ steps.meta.outputs.version }}" \
      --label "org.opencontainers.image.version=${{ steps.meta.outputs.version }}" \
      --label "org.opencontainers.image.revision=${{ gitea.sha }}" \
      -t "$REGISTRY:$TAG" \
      -t "$REGISTRY:${{ steps.meta.outputs.version }}" \
      ./server
    # Tag as latest only for stable releases
    if ! echo "$TAG" | grep -qE '(alpha|beta|rc)'; then
      docker tag "$REGISTRY:$TAG" "$REGISTRY:latest"
    fi
- name: Push Docker image
  if: steps.docker-login.outcome == 'success'
  run: |
    TAG="${{ gitea.ref_name }}"
    REGISTRY="${{ steps.meta.outputs.registry }}"
    docker push "$REGISTRY:$TAG"
    docker push "$REGISTRY:${{ steps.meta.outputs.version }}"
    if ! echo "$TAG" | grep -qE '(alpha|beta|rc)'; then
      docker push "$REGISTRY:latest"
    fi
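Both the tag and push steps gate `:latest` on the same `grep -qE '(alpha|beta|rc)'` test. The equivalent predicate, as a sketch (hypothetical `is_stable` helper mirroring the shell guard):

```python
import re

def is_stable(tag: str) -> bool:
    """True when the tag carries no pre-release marker — the
    condition under which the workflow also tags/pushes :latest."""
    return re.search(r"(alpha|beta|rc)", tag) is None
```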


@@ -38,6 +38,8 @@ ast-index changed --base master # Show symbols changed in current bran
| [contexts/graph-editor.md](contexts/graph-editor.md) | Visual graph editor changes |
| [contexts/server-operations.md](contexts/server-operations.md) | Server restart, startup modes, demo mode |
| [contexts/chrome-tools.md](contexts/chrome-tools.md) | Chrome MCP tool usage for testing |
| [contexts/ci-cd.md](contexts/ci-cd.md) | CI/CD pipelines, release workflow, build scripts |
| [Gitea Python CI/CD Guide](https://git.dolgolyov-family.by/alexei.dolgolyov/claude-code-facts/src/branch/main/gitea-python-ci-cd.md) | Reusable CI/CD patterns: Gitea Actions, cross-build, NSIS, Docker |
| [server/CLAUDE.md](server/CLAUDE.md) | Backend architecture, API patterns, common tasks |
## Task Tracking via TODO.md
@@ -48,6 +50,41 @@ Use `TODO.md` in the project root as the primary task tracker. **Do NOT use the
**Use context7 MCP tools for library/framework documentation lookups** (FastAPI, OpenCV, Pydantic, yt-dlp, etc.) instead of relying on potentially outdated training data.
## Data Migration Policy (CRITICAL)
**NEVER rename a storage file path, store key, entity ID prefix, or JSON field name without writing a migration.** User data lives in JSON files under `data/`. If the code starts reading from a new filename while the old file still has user data, THAT DATA IS SILENTLY LOST.
When renaming any storage-related identifier:
1. **Add migration logic in `BaseJsonStore.__init__`** (or the specific store) that detects the old file/key and migrates data to the new name automatically on startup
2. **Log a clear warning** when migration happens so the user knows
3. **Keep the old file as a backup** after migration (rename to `.migrated` or similar)
4. **Test the migration** with both old-format and new-format data files
5. **Document the migration** in the commit message
This applies to: file paths in `StorageConfig`, JSON root keys (e.g. `picture_targets``output_targets`), entity ID prefixes (e.g. `pt_``ot_`), and any field renames in dataclass models.
**Incident context:** A past rename of `picture_targets.json``output_targets.json` was done without migration. The app created a new empty `output_targets.json` while the user's 7 targets sat unread in the old file. Data was silently lost.
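A minimal sketch of the startup migration this policy asks for — a hypothetical helper for illustration, not the actual `BaseJsonStore` code:

```python
import logging
from pathlib import Path

def migrate_store_file(data_dir: Path, old_name: str, new_name: str) -> None:
    # If the old file still holds data and the new one doesn't exist yet,
    # copy the contents across, keep the old file as a .migrated backup,
    # and log loudly so the user knows a migration happened.
    old, new = data_dir / old_name, data_dir / new_name
    if old.exists() and not new.exists():
        new.write_text(old.read_text(encoding="utf-8"), encoding="utf-8")
        old.rename(old.with_name(old.name + ".migrated"))
        logging.warning("Migrated %s -> %s (backup kept)", old_name, new_name)
```

Under this scheme the `picture_targets.json` → `output_targets.json` incident would have been a logged rename instead of silent data loss.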
## UI Component Rules (CRITICAL)
**NEVER use plain HTML `<select>` elements.** The project uses custom selector components:
- **IconSelect** (icon grid) — for predefined items (effect types, palettes, easing modes, animation types)
- **EntitySelect** (entity picker) — for entity references (sources, templates, devices)
Plain HTML selects break the visual consistency of the UI.
## Pre-Commit Checks (MANDATORY)
Before every commit, run the relevant checks and fix any issues:
- **Python changes**: `cd server && ruff check src/ tests/ --fix`
- **TypeScript changes**: `cd server && npx tsc --noEmit && npm run build`
- **Both**: Run both checks
- **Always run tests**: `cd server && py -3.13 -m pytest tests/ --no-cov -q` — all tests MUST pass before committing. Do NOT commit code that fails tests.
Do NOT commit code that fails linting or tests. Fix the issues first.
## General Guidelines

- Always test changes before marking as complete

TODO-css-improvements.md (new file, 114 lines)

@@ -0,0 +1,114 @@
# TODO
## IMPORTANT: Remove WLED naming throughout the app
- [ ] Rename all references to "WLED" in user-facing strings, class names, module names, config keys, file paths, and documentation
- [ ] The app is **LedGrab** — not tied to WLED specifically. WLED is just one of many supported output protocols
- [ ] Audit: i18n keys, page titles, tray labels, installer text, pyproject.toml description, README, CLAUDE.md, context files, API docs
- [ ] Rename `wled_controller` package → decide on new package name (e.g. `ledgrab`)
- [ ] Update import paths, entry points, config references, build scripts, Docker, CI/CD
- [ ] **Migration required** if renaming storage paths or config keys (see data migration policy in CLAUDE.md)
---
## Donation / Open-Source Banner
- [ ] Add a persistent but dismissible banner or notification in the dashboard UI informing users that the project is open-source and under active development, and that donations are highly appreciated
- [ ] Include a link to the donation page (GitHub Sponsors, Ko-fi, or similar — decide on platform)
- [ ] Remember dismissal in localStorage so it doesn't reappear every session
- [ ] Add i18n keys for the banner text (`en.json`, `ru.json`, `zh.json`)
---
# Color Strip Source Improvements
## New Source Types
- [x] **`weather`** — Weather-reactive ambient: maps weather conditions (rain, snow, clear, storm) to colors/animations via API
- [ ] **`music_sync`** — Beat-synced patterns: BPM detection, energy envelope, drop detection (higher-level than raw `audio`)
- [ ] **`math_wave`** — Mathematical wave generator: user-defined sine/triangle/sawtooth expressions, superposition
- [ ] **`text_scroll`** — Scrolling text marquee: bitmap font rendering, static text or RSS/API data source *(delayed)*
### Discuss: `home_assistant`
Need to research HAOS communication options first (WebSocket API, REST API, MQTT, etc.) before deciding scope.
### Deferred
- `image` — Static image sampler *(not now)*
- `clock` — Time display *(not now)*
## Improvements to Existing Sources
### `effect` (now 12 types)
- [x] Add effects: rain, comet, bouncing ball, fireworks, sparkle rain, lava lamp, wave interference
- [x] Custom palette support: user-defined [[pos,R,G,B],...] stops via JSON textarea
### `gradient`
- [x] Noise-perturbed gradient: value noise displacement on stop positions (`noise_perturb` animation type)
- [x] Gradient hue rotation: `hue_rotate` animation type — preserves S/V, rotates H
- [x] Easing functions between stops: linear, ease_in_out (smoothstep), step, cubic
### `audio`
- [x] New audio source type: band extractor (bass/mid/treble split) — responsibility of audio source layer, not CSS
- [ ] Peak hold indicator: global option on audio source (not per-mode), configurable decay time
### `daylight`
- [x] Longitude support for accurate solar position (NOAA solar equations)
- [x] Season awareness (day-of-year drives sunrise/sunset via solar declination)
### `candlelight`
- [x] Wind simulation: correlated flicker bursts across all candles (wind_strength 0.0-2.0)
- [x] Candle type presets: taper (steady), votive (flickery), bonfire (chaotic) — applied at render time
- [x] Wax drip effect: localized brightness dips with fade-in/fade-out recovery
### `composite`
- [ ] Allow nested composites (with cycle detection)
- [x] More blend modes: overlay, soft light, hard light, difference, exclusion
- [x] Per-layer LED range masks (optional start/end/reverse on each composite layer)
### `notification`
- [x] Chase effect (light bounces across strip with glowing tail)
- [x] Gradient flash (bright center fades to edges, exponential decay)
- [x] Queue priority levels (color_override = high priority, interrupts current)
### `api_input`
- [ ] Crossfade transition when new data arrives
- [ ] Interpolation when incoming LED count differs from strip count
- [ ] Last-write-wins from any client (no multi-source blending)
## Architectural / Pipeline
### Processing Templates (CSPT)
- [x] HSL shift filter (hue rotation + lightness adjustment)
- [x] ~~Color temperature filter~~ — already exists as `color_correction`
- [x] Contrast filter
- [x] ~~Saturation filter~~ — already exists
- [x] ~~Pixelation filter~~ — already exists as `pixelate`
- [x] Temporal blur filter (blend frames over time)
### Transition Engine
Needs deeper design discussion. Likely a new entity type `ColorStripSourceTransition` that defines how source switches happen (crossfade, wipe, etc.). Interacts with automations when they switch a target's active source.
### Deferred
- Global BPM sync *(not sure)*
- Recording/playback *(not now)*
- Source preview in editor modal *(not needed — overlay preview on devices is sufficient)*
---
## Remaining Open Discussion
1. **`home_assistant` source** — Need to research HAOS communication protocols first
2. **Transition engine** — Design as `ColorStripSourceTransition` entity: what transition types? (crossfade, wipe, dissolve?) How does a target reference its transition config? How do automations trigger it?

TODO.md (new file, 26 lines)

@@ -0,0 +1,26 @@
# Auto-Update Phase 1: Check & Notify
## Backend
- [ ] Add `packaging` to pyproject.toml dependencies
- [ ] Create `core/update/__init__.py`
- [ ] Create `core/update/release_provider.py` — ABC + data models
- [ ] Create `core/update/gitea_provider.py` — Gitea REST API implementation
- [ ] Create `core/update/version_check.py` — semver normalization + comparison
- [ ] Create `core/update/update_service.py` — background service + state machine
- [ ] Create `api/schemas/update.py` — Pydantic request/response models
- [ ] Create `api/routes/update.py` — REST endpoints
- [ ] Wire into `api/__init__.py`, `dependencies.py`, `main.py`
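For `core/update/version_check.py`, a minimal normalization/comparison sketch — an assumption for illustration; the real module would presumably lean on `packaging.version` (hence the `packaging` dependency above) rather than hand-rolled tuples:

```python
def normalize(tag: str) -> tuple[int, ...]:
    """Strip the leading 'v' and any pre-release suffix, then
    compare release components numerically (not lexicographically)."""
    core = tag.lstrip("v").split("-")[0]
    return tuple(int(p) for p in core.split("."))

def update_available(current: str, latest: str) -> bool:
    return normalize(latest) > normalize(current)
```

Numeric comparison matters: a string compare would rank `v1.9.0` above `v1.10.0`.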
## Frontend
- [ ] Add update banner HTML to `index.html`
- [ ] Add Updates tab to `settings.html`
- [ ] Add `has-update` CSS styles for version badge in `layout.css`
- [ ] Add update banner CSS styles in `components.css`
- [ ] Create `features/update.ts` — update check/settings/banner logic
- [ ] Wire exports in `app.ts`
- [ ] Add i18n keys to `en.json`, `ru.json`, `zh.json`
## Verification
- [ ] Lint check: `ruff check src/ tests/ --fix`
- [ ] TypeScript check: `npx tsc --noEmit && npm run build`
- [ ] Tests pass: `py -3.13 -m pytest tests/ --no-cov -q`


@@ -33,12 +33,16 @@ if [ -z "$VERSION" ]; then
VERSION="${GITEA_REF_NAME:-${GITHUB_REF_NAME:-}}"
fi
if [ -z "$VERSION" ]; then
  VERSION=$(grep -oP '^version\s*=\s*"\K[^"]+' "$SERVER_DIR/pyproject.toml" 2>/dev/null || echo "0.0.0")
fi
VERSION_CLEAN="${VERSION#v}"
ZIP_NAME="LedGrab-v${VERSION_CLEAN}-win-x64.zip"
# Stamp the resolved version into pyproject.toml so that
# importlib.metadata reads the correct value at runtime.
sed -i "s/^version = .*/version = \"${VERSION_CLEAN}\"/" "$SERVER_DIR/pyproject.toml"
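The `sed` stamp above can be expressed as a pure function for testing — a sketch, assuming pyproject.toml carries the standard single `version = "..."` line:

```python
import re

def stamp_version(pyproject_text: str, version: str) -> str:
    """Rewrite the `version = "..."` line so importlib.metadata
    reports the CI-resolved version at runtime."""
    return re.sub(r'(?m)^version = .*$', f'version = "{version}"', pyproject_text)
```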
echo "=== Cross-building LedGrab v${VERSION_CLEAN} (Windows from Linux) ==="
echo "  Embedded Python: $PYTHON_VERSION"
echo "  Output: build/$ZIP_NAME"
@@ -47,7 +51,7 @@ echo ""
# ── Clean ────────────────────────────────────────────────────
if [ -d "$DIST_DIR" ]; then
  echo "[1/9] Cleaning previous build..."
  rm -rf "$DIST_DIR"
fi
mkdir -p "$DIST_DIR"
@@ -57,7 +61,7 @@ mkdir -p "$DIST_DIR"
PYTHON_ZIP_URL="https://www.python.org/ftp/python/${PYTHON_VERSION}/python-${PYTHON_VERSION}-embed-amd64.zip"
PYTHON_ZIP_PATH="$BUILD_DIR/python-embed-win.zip"
echo "[2/9] Downloading Windows embedded Python ${PYTHON_VERSION}..."
if [ ! -f "$PYTHON_ZIP_PATH" ]; then
  curl -sL "$PYTHON_ZIP_URL" -o "$PYTHON_ZIP_PATH"
fi
@@ -66,23 +70,105 @@ unzip -qo "$PYTHON_ZIP_PATH" -d "$PYTHON_DIR"
# ── Patch ._pth to enable site-packages ──────────────────────
echo "[3/9] Patching Python path configuration..."
PTH_FILE=$(ls "$PYTHON_DIR"/python*._pth 2>/dev/null | head -1)
if [ -z "$PTH_FILE" ]; then
  echo "ERROR: Could not find python*._pth in $PYTHON_DIR" >&2
  exit 1
fi

# Uncomment 'import site', add Lib\site-packages and app source path
sed -i 's/^#\s*import site/import site/' "$PTH_FILE"
if ! grep -q 'Lib\\site-packages' "$PTH_FILE"; then
  echo 'Lib\site-packages' >> "$PTH_FILE"
fi
# Embedded Python ._pth overrides PYTHONPATH, so we must add the app
# source directory here for wled_controller to be importable
if ! grep -q '\.\./app/src' "$PTH_FILE"; then
echo '../app/src' >> "$PTH_FILE"
fi
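The combined `._pth` edits (this hunk plus the site-packages one above) amount to: uncomment `import site`, then append two path entries idempotently. A sketch of the end state, as a hypothetical `patch_pth` helper:

```python
def patch_pth(content: str) -> str:
    """Uncomment 'import site' and append the site-packages and
    app-source entries exactly once. Embedded Python's ._pth
    overrides PYTHONPATH, so the app path must live here."""
    lines = [
        l.lstrip("# ").rstrip()
        if l.lstrip().startswith("#") and "import site" in l
        else l.rstrip()
        for l in content.splitlines()
    ]
    for entry in (r"Lib\site-packages", "../app/src"):
        if entry not in lines:  # idempotent: safe to re-run on a patched file
            lines.append(entry)
    return "\n".join(lines) + "\n"
```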
echo "  Patched $(basename "$PTH_FILE")"
# ── Bundle tkinter into embedded Python ───────────────────────
# Embedded Python doesn't include tkinter. We download the individual
# MSI packages from python.org (tcltk.msi + lib.msi) and extract them
# using msiextract (from msitools).
echo "[4/9] Bundling tkinter for screen overlay support..."
TK_EXTRACT="$BUILD_DIR/tk-extract"
rm -rf "$TK_EXTRACT"
mkdir -p "$TK_EXTRACT"
MSI_BASE="https://www.python.org/ftp/python/${PYTHON_VERSION}/amd64"
# Download tcltk.msi (contains _tkinter.pyd, tcl/tk DLLs, tcl8.6/, tk8.6/)
TCLTK_MSI="$BUILD_DIR/tcltk.msi"
if [ ! -f "$TCLTK_MSI" ]; then
curl -sL "$MSI_BASE/tcltk.msi" -o "$TCLTK_MSI"
fi
# Download lib.msi (contains tkinter/ Python package in the stdlib)
LIB_MSI="$BUILD_DIR/lib.msi"
if [ ! -f "$LIB_MSI" ]; then
curl -sL "$MSI_BASE/lib.msi" -o "$LIB_MSI"
fi
if command -v msiextract &>/dev/null; then
# Extract both MSIs
(cd "$TK_EXTRACT" && msiextract "$TCLTK_MSI" 2>/dev/null)
(cd "$TK_EXTRACT" && msiextract "$LIB_MSI" 2>/dev/null)
# Copy _tkinter.pyd
TKINTER_PYD=$(find "$TK_EXTRACT" -name "_tkinter.pyd" 2>/dev/null | head -1)
if [ -n "$TKINTER_PYD" ]; then
cp "$TKINTER_PYD" "$PYTHON_DIR/"
echo " Copied _tkinter.pyd"
else
echo " WARNING: _tkinter.pyd not found in tcltk.msi"
fi
# Copy Tcl/Tk DLLs
for dll in tcl86t.dll tk86t.dll; do
DLL_PATH=$(find "$TK_EXTRACT" -name "$dll" 2>/dev/null | head -1)
if [ -n "$DLL_PATH" ]; then
cp "$DLL_PATH" "$PYTHON_DIR/"
echo " Copied $dll"
fi
done
# Copy tkinter Python package
TKINTER_PKG=$(find "$TK_EXTRACT" -type d -name "tkinter" 2>/dev/null | head -1)
if [ -n "$TKINTER_PKG" ]; then
mkdir -p "$PYTHON_DIR/Lib"
cp -r "$TKINTER_PKG" "$PYTHON_DIR/Lib/tkinter"
echo " Copied tkinter/ package"
fi
# Copy tcl/tk data directories
for tcldir in tcl8.6 tk8.6; do
TCL_PATH=$(find "$TK_EXTRACT" -type d -name "$tcldir" 2>/dev/null | head -1)
if [ -n "$TCL_PATH" ]; then
cp -r "$TCL_PATH" "$PYTHON_DIR/$tcldir"
echo " Copied $tcldir/"
fi
done
echo " tkinter bundled successfully"
else
echo " WARNING: msiextract not found — skipping tkinter (install msitools)"
fi
# Add Lib to ._pth so tkinter package is importable
if ! grep -q '^Lib$' "$PTH_FILE"; then
echo 'Lib' >> "$PTH_FILE"
fi
rm -rf "$TK_EXTRACT"
# ── Download pip and install into embedded Python ────────────
echo "[5/9] Installing pip into embedded Python..."
SITE_PACKAGES="$PYTHON_DIR/Lib/site-packages"
mkdir -p "$SITE_PACKAGES"
@@ -104,7 +190,7 @@ done
# ── Download Windows wheels for all dependencies ─────────────
echo "[6/9] Downloading Windows dependencies..."
WHEEL_DIR="$BUILD_DIR/win-wheels"
mkdir -p "$WHEEL_DIR"
@@ -140,9 +226,14 @@ DEPS=(
# Windows-only deps
WIN_DEPS=(
  "wmi>=1.5.1"
  "PyAudioWPatch>=0.2.12"
  "winrt-Windows.UI.Notifications>=3.0.0"
"winrt-Windows.UI.Notifications.Management>=3.0.0"
"winrt-Windows.Foundation>=3.0.0"
"winrt-Windows.Foundation.Collections>=3.0.0"
"winrt-Windows.ApplicationModel>=3.0.0"
# System tray
"pystray>=0.19.0"
)

# Download cross-platform deps (prefer binary, allow source for pure Python)
@@ -194,27 +285,65 @@ for sdist in "$WHEEL_DIR"/*.tar.gz; do
rm -rf "$TMPDIR"
done

# ── Reduce package size ────────────────────────────────────────
echo " Cleaning up to reduce size..."
# Remove caches, tests, docs, type stubs
find "$SITE_PACKAGES" -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
find "$SITE_PACKAGES" -type d -name tests -exec rm -rf {} + 2>/dev/null || true
find "$SITE_PACKAGES" -type d -name test -exec rm -rf {} + 2>/dev/null || true
find "$SITE_PACKAGES" -type d -name "*.dist-info" -exec rm -rf {} + 2>/dev/null || true
find "$SITE_PACKAGES" -name "*.pyi" -delete 2>/dev/null || true
# Remove pip and setuptools (not needed at runtime)
rm -rf "$SITE_PACKAGES"/pip "$SITE_PACKAGES"/pip-* 2>/dev/null || true
rm -rf "$SITE_PACKAGES"/setuptools "$SITE_PACKAGES"/setuptools-* "$SITE_PACKAGES"/pkg_resources 2>/dev/null || true
rm -rf "$SITE_PACKAGES"/_distutils_hack 2>/dev/null || true
# Remove pythonwin GUI IDE and help file (ships with pywin32 but not needed)
rm -rf "$SITE_PACKAGES"/pythonwin 2>/dev/null || true
rm -f "$SITE_PACKAGES"/PyWin32.chm 2>/dev/null || true
# OpenCV: remove ffmpeg DLL (28MB, only for video file I/O, not camera),
# Haar cascades (2.6MB), and misc dev files
CV2_DIR="$SITE_PACKAGES/cv2"
if [ -d "$CV2_DIR" ]; then
rm -f "$CV2_DIR"/opencv_videoio_ffmpeg*.dll 2>/dev/null || true
rm -rf "$CV2_DIR/data" "$CV2_DIR/gapi" "$CV2_DIR/misc" "$CV2_DIR/utils" 2>/dev/null || true
rm -rf "$CV2_DIR/typing_stubs" "$CV2_DIR/typing" 2>/dev/null || true
fi
# numpy: remove tests, f2py, typing stubs
rm -rf "$SITE_PACKAGES/numpy/tests" "$SITE_PACKAGES"/numpy/*/tests 2>/dev/null || true
rm -rf "$SITE_PACKAGES/numpy/f2py" 2>/dev/null || true
rm -rf "$SITE_PACKAGES/numpy/typing" 2>/dev/null || true
rm -rf "$SITE_PACKAGES/numpy/_pyinstaller" 2>/dev/null || true
# Pillow: remove unused image plugins' test data
rm -rf "$SITE_PACKAGES/PIL/tests" 2>/dev/null || true
# winrt: remove type stubs
find "$SITE_PACKAGES/winrt" -name "*.pyi" -delete 2>/dev/null || true
# Remove wled_controller if it got installed
rm -rf "$SITE_PACKAGES"/wled_controller* "$SITE_PACKAGES"/wled*.dist-info 2>/dev/null || true
CLEANED_SIZE=$(du -sh "$SITE_PACKAGES" | cut -f1)
echo " Site-packages after cleanup: $CLEANED_SIZE"
WHEEL_COUNT=$(ls "$WHEEL_DIR"/*.whl 2>/dev/null | wc -l)
echo "  Installed $WHEEL_COUNT packages"

# ── Build frontend ───────────────────────────────────────────
echo "[7/9] Building frontend bundle..."
(cd "$SERVER_DIR" && npm ci --loglevel error && npm run build) 2>&1 | {
  grep -v 'RemoteException' || true
}
# ── Copy application files ───────────────────────────────────
echo "[8/9] Copying application files..."
mkdir -p "$APP_DIR"
cp -r "$SERVER_DIR/src" "$APP_DIR/src"
@@ -225,13 +354,17 @@ mkdir -p "$DIST_DIR/data" "$DIST_DIR/logs"
find "$APP_DIR" -name "*.map" -delete 2>/dev/null || true
find "$APP_DIR" -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
# Pre-compile Python bytecode for faster startup
echo " Pre-compiling Python bytecode..."
python -m compileall -b -q "$APP_DIR/src" 2>/dev/null || true
python -m compileall -b -q "$SITE_PACKAGES" 2>/dev/null || true
# ── Create launcher ──────────────────────────────────────────
echo "[8b/9] Creating launcher and packaging..."
cat > "$DIST_DIR/LedGrab.bat" << LAUNCHER
@echo off
title LedGrab v${VERSION_CLEAN}
cd /d "%~dp0"

:: Set paths
@@ -242,23 +375,75 @@ set WLED_CONFIG_PATH=%~dp0app\config\default_config.yaml
if not exist "%~dp0data" mkdir "%~dp0data"
if not exist "%~dp0logs" mkdir "%~dp0logs"

:: Start the server (tray icon handles UI and exit)
"%~dp0python\pythonw.exe" -m wled_controller
LAUNCHER
# Convert launcher to Windows line endings
sed -i 's/$/\r/' "$DIST_DIR/LedGrab.bat"
# Copy hidden launcher VBS
mkdir -p "$DIST_DIR/scripts"
cp server/scripts/start-hidden.vbs "$DIST_DIR/scripts/"
# ── Create autostart scripts ─────────────────────────────────
cat > "$DIST_DIR/install-autostart.bat" << 'AUTOSTART'
@echo off
:: Install LedGrab to start automatically on Windows login
:: Creates a shortcut in the Startup folder
set SHORTCUT_NAME=LedGrab
set STARTUP_DIR=%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup
set TARGET=%~dp0LedGrab.bat
set SHORTCUT=%STARTUP_DIR%\%SHORTCUT_NAME%.lnk
echo Installing LedGrab autostart...
:: Use PowerShell to create a proper shortcut
powershell -NoProfile -Command ^
"$ws = New-Object -ComObject WScript.Shell; ^
$sc = $ws.CreateShortcut('%SHORTCUT%'); ^
$sc.TargetPath = '%TARGET%'; ^
$sc.WorkingDirectory = '%~dp0'; ^
$sc.WindowStyle = 7; ^
$sc.Description = 'LedGrab ambient lighting server'; ^
$sc.Save()"
if exist "%SHORTCUT%" (
echo.
echo [OK] LedGrab will start automatically on login.
echo Shortcut: %SHORTCUT%
echo.
echo To remove: run uninstall-autostart.bat
) else (
echo.
echo [ERROR] Failed to create shortcut.
)
pause
AUTOSTART
sed -i 's/$/\r/' "$DIST_DIR/install-autostart.bat"
cat > "$DIST_DIR/uninstall-autostart.bat" << 'UNAUTOSTART'
@echo off
:: Remove LedGrab from Windows startup
set SHORTCUT=%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup\LedGrab.lnk
if exist "%SHORTCUT%" (
del "%SHORTCUT%"
echo.
echo [OK] LedGrab autostart removed.
) else (
echo.
echo LedGrab autostart was not installed.
)
pause
UNAUTOSTART
sed -i 's/$/\r/' "$DIST_DIR/uninstall-autostart.bat"
# ── Create ZIP ───────────────────────────────────────────────
ZIP_PATH="$BUILD_DIR/$ZIP_NAME"
@@ -267,8 +452,29 @@ rm -f "$ZIP_PATH"
(cd "$BUILD_DIR" && zip -rq "$ZIP_NAME" "$DIST_NAME")
ZIP_SIZE=$(du -h "$ZIP_PATH" | cut -f1)
# ── Build NSIS installer (if makensis is available) ──────────
SETUP_NAME="LedGrab-v${VERSION_CLEAN}-win-x64-setup.exe"
SETUP_PATH="$BUILD_DIR/$SETUP_NAME"
if command -v makensis &>/dev/null; then
echo "[9/9] Building NSIS installer..."
makensis -DVERSION="${VERSION_CLEAN}" "$SCRIPT_DIR/installer.nsi"
if [ -f "$SETUP_PATH" ]; then
SETUP_SIZE=$(du -h "$SETUP_PATH" | cut -f1)
echo " Installer: $SETUP_PATH ($SETUP_SIZE)"
else
echo " WARNING: makensis ran but installer not found at $SETUP_PATH"
fi
else
echo "[9/9] Skipping installer (makensis not found — install nsis to enable)"
fi
echo ""
echo "=== Build complete ==="
echo " ZIP: $ZIP_PATH ($ZIP_SIZE)"
if [ -f "$SETUP_PATH" ]; then
echo " Installer: $SETUP_PATH ($SETUP_SIZE)"
fi
echo ""


@@ -106,6 +106,11 @@ $pthContent = $pthContent -replace '#\s*import site', 'import site'
if ($pthContent -notmatch 'Lib\\site-packages') {
$pthContent = $pthContent.TrimEnd() + "`nLib\site-packages`n"
}
# Embedded Python ._pth overrides PYTHONPATH, so add the app source path
# directly for wled_controller to be importable
if ($pthContent -notmatch '\.\.[/\\]app[/\\]src') {
$pthContent = $pthContent.TrimEnd() + "`n..\app\src`n"
}
Set-Content -Path $pthFile.FullName -Value $pthContent -NoNewline
Write-Host " Patched $($pthFile.Name)"
@@ -125,7 +130,7 @@ if ($LASTEXITCODE -ne 0) { throw "Failed to install pip" }
# ── Install dependencies ──────────────────────────────────────
Write-Host "[5/8] Installing dependencies..."
$extras = "camera,notifications,tray"
if (-not $SkipPerf) { $extras += ",perf" }
# Install the project (pulls all deps via pyproject.toml), then remove
@@ -197,7 +202,6 @@ Write-Host "[8/8] Creating launcher..."
$launcherContent = @'
@echo off
cd /d "%~dp0"
:: Set paths
@@ -208,24 +212,19 @@ set WLED_CONFIG_PATH=%~dp0app\config\default_config.yaml
if not exist "%~dp0data" mkdir "%~dp0data"
if not exist "%~dp0logs" mkdir "%~dp0logs"
:: Start the server (tray icon handles UI and exit)
"%~dp0python\pythonw.exe" -m wled_controller
'@
$launcherContent = $launcherContent -replace '%VERSION%', $VersionClean
$launcherPath = Join-Path $DistDir "LedGrab.bat"
Set-Content -Path $launcherPath -Value $launcherContent -Encoding ASCII
# Copy hidden launcher VBS
$scriptsDir = Join-Path $DistDir "scripts"
New-Item -ItemType Directory -Path $scriptsDir -Force | Out-Null
Copy-Item -Path (Join-Path $ServerDir "scripts\start-hidden.vbs") -Destination $scriptsDir
# ── Create ZIP ─────────────────────────────────────────────────
$ZipPath = Join-Path $BuildDir $ZipName


@@ -28,12 +28,16 @@ if [ -z "$VERSION" ]; then
VERSION="${GITEA_REF_NAME:-${GITHUB_REF_NAME:-}}"
fi
if [ -z "$VERSION" ]; then
VERSION=$(grep -oP '^version\s*=\s*"\K[^"]+' "$SERVER_DIR/pyproject.toml" 2>/dev/null || echo "0.0.0")
fi
VERSION_CLEAN="${VERSION#v}"
TAR_NAME="LedGrab-v${VERSION_CLEAN}-linux-x64.tar.gz"
# Stamp the resolved version into pyproject.toml so that
# importlib.metadata reads the correct value at runtime.
sed -i "s/^version = .*/version = \"${VERSION_CLEAN}\"/" "$SERVER_DIR/pyproject.toml"
echo "=== Building LedGrab v${VERSION_CLEAN} (Linux) ==="
echo " Output: build/$TAR_NAME"
echo ""
@@ -103,20 +107,100 @@ export WLED_CONFIG_PATH="$SCRIPT_DIR/app/config/default_config.yaml"
mkdir -p "$SCRIPT_DIR/data" "$SCRIPT_DIR/logs"
source "$SCRIPT_DIR/venv/bin/activate"
exec python -m wled_controller.main
LAUNCHER
sed -i "s/VERSION_PLACEHOLDER/${VERSION_CLEAN}/" "$DIST_DIR/run.sh"
chmod +x "$DIST_DIR/run.sh"
# ── Create autostart scripts ─────────────────────────────────
cat > "$DIST_DIR/install-service.sh" << 'SERVICE_INSTALL'
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SERVICE_NAME="ledgrab"
SERVICE_FILE="/etc/systemd/system/${SERVICE_NAME}.service"
RUN_SCRIPT="$SCRIPT_DIR/run.sh"
CURRENT_USER="$(whoami)"
if [ "$EUID" -ne 0 ] && [ "$CURRENT_USER" != "root" ]; then
echo "This script requires root privileges. Re-running with sudo..."
exec sudo "$0" "$@"
fi
# Resolve the actual user (not root) when run via sudo
ACTUAL_USER="${SUDO_USER:-$CURRENT_USER}"
ACTUAL_HOME=$(eval echo "~$ACTUAL_USER")
echo "Installing LedGrab systemd service..."
cat > "$SERVICE_FILE" << EOF
[Unit]
Description=LedGrab ambient lighting server
After=network.target
[Service]
Type=simple
User=$ACTUAL_USER
WorkingDirectory=$SCRIPT_DIR
ExecStart=$RUN_SCRIPT
Restart=on-failure
RestartSec=5
Environment=HOME=$ACTUAL_HOME
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable "$SERVICE_NAME"
systemctl start "$SERVICE_NAME"
echo ""
echo " [OK] LedGrab service installed and started."
echo ""
echo " Commands:"
echo " sudo systemctl status $SERVICE_NAME # Check status"
echo " sudo systemctl stop $SERVICE_NAME # Stop"
echo " sudo systemctl restart $SERVICE_NAME # Restart"
echo " sudo journalctl -u $SERVICE_NAME -f # View logs"
echo ""
echo " To remove: run ./uninstall-service.sh"
SERVICE_INSTALL
chmod +x "$DIST_DIR/install-service.sh"
cat > "$DIST_DIR/uninstall-service.sh" << 'SERVICE_UNINSTALL'
#!/usr/bin/env bash
set -euo pipefail
SERVICE_NAME="ledgrab"
SERVICE_FILE="/etc/systemd/system/${SERVICE_NAME}.service"
if [ "$EUID" -ne 0 ] && [ "$(whoami)" != "root" ]; then
echo "This script requires root privileges. Re-running with sudo..."
exec sudo "$0" "$@"
fi
if [ ! -f "$SERVICE_FILE" ]; then
echo "LedGrab service is not installed."
exit 0
fi
echo "Removing LedGrab systemd service..."
systemctl stop "$SERVICE_NAME" 2>/dev/null || true
systemctl disable "$SERVICE_NAME" 2>/dev/null || true
rm -f "$SERVICE_FILE"
systemctl daemon-reload
echo ""
echo " [OK] LedGrab service removed."
SERVICE_UNINSTALL
chmod +x "$DIST_DIR/uninstall-service.sh"
# ── Create tarball ───────────────────────────────────────────
echo "[7/7] Creating $TAR_NAME..."


@@ -0,0 +1,143 @@
# Auto-Update Plan — Phase 1: Check & Notify
> Created: 2026-03-25. Status: **planned, not started.**
## Backend Architecture
### Release Provider Abstraction
```
core/update/
release_provider.py — ABC: get_releases(), get_releases_page_url()
gitea_provider.py — Gitea REST API implementation
version_check.py — normalize_version(), is_newer() using packaging.version
update_service.py — Background asyncio task + state machine
```
**`ReleaseProvider` interface** — two methods:
- `get_releases(limit) → list[ReleaseInfo]` — fetch releases (newest first)
- `get_releases_page_url() → str` — link for "view on web"
**`GiteaReleaseProvider`** calls `GET {base_url}/api/v1/repos/{repo}/releases`. Swapping to GitHub later means implementing the same interface against `api.github.com`.
**Data models:**
```python
@dataclass(frozen=True)
class AssetInfo:
name: str # "LedGrab-v0.3.0-win-x64.zip"
size: int # bytes
download_url: str
@dataclass(frozen=True)
class ReleaseInfo:
tag: str # "v0.3.0"
version: str # "0.3.0"
name: str # "LedGrab v0.3.0"
body: str # release notes markdown
prerelease: bool
published_at: str # ISO 8601
assets: tuple[AssetInfo, ...]
```
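Under these models, the Gitea provider mostly reduces to fetching `GET {base_url}/api/v1/repos/{repo}/releases` and mapping each JSON object onto `ReleaseInfo`. A minimal parsing sketch follows; the JSON field names (`tag_name`, `browser_download_url`, etc.) are taken from Gitea's release API and should be verified against your instance:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AssetInfo:
    name: str
    size: int  # bytes
    download_url: str


@dataclass(frozen=True)
class ReleaseInfo:
    tag: str
    version: str
    name: str
    body: str
    prerelease: bool
    published_at: str
    assets: tuple


def parse_release(obj: dict) -> ReleaseInfo:
    """Map one JSON object from GET /api/v1/repos/{repo}/releases."""
    tag = obj["tag_name"]
    return ReleaseInfo(
        tag=tag,
        version=tag.removeprefix("v"),
        name=obj.get("name") or tag,
        body=obj.get("body", ""),
        prerelease=obj.get("prerelease", False),
        published_at=obj.get("published_at", ""),
        assets=tuple(
            AssetInfo(a["name"], a["size"], a["browser_download_url"])
            for a in obj.get("assets", [])
        ),
    )
```

A GitHub provider would implement the same `parse_release` shape against `api.github.com`, which uses near-identical field names.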
### Version Comparison
`version_check.py` — normalize Gitea tags to PEP 440:
- `v0.3.0-alpha.1` → `0.3.0a1`
- `v0.3.0-beta.2` → `0.3.0b2`
- `v0.3.0-rc.3` → `0.3.0rc3`
Uses `packaging.version.Version` for comparison.
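A sketch of that normalization (the regex shape is an assumption based on the tag scheme above; `packaging` does the actual ordering, so prereleases correctly sort before their stable release):

```python
import re

from packaging.version import Version

_PRE_MAP = {"alpha": "a", "beta": "b", "rc": "rc"}


def normalize_version(tag: str) -> str:
    """Normalize a tag like 'v0.3.0-alpha.1' to PEP 440 '0.3.0a1'."""
    v = tag.removeprefix("v")
    m = re.fullmatch(r"(\d+\.\d+\.\d+)-(alpha|beta|rc)\.(\d+)", v)
    if m:
        return f"{m.group(1)}{_PRE_MAP[m.group(2)]}{m.group(3)}"
    return v


def is_newer(candidate_tag: str, current: str) -> bool:
    """True when the candidate tag is strictly newer than the running version."""
    return Version(normalize_version(candidate_tag)) > Version(normalize_version(current))
```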
### Update Service
Follows the **AutoBackupEngine pattern**:
- Settings in `Database.get_setting("auto_update")`
- asyncio.Task for periodic checks
- 30s startup delay (avoid slowing boot)
- 60s debounce on manual checks
**State machine (Phase 1):** `IDLE → CHECKING → UPDATE_AVAILABLE`
No download/apply in Phase 1 — just detection and notification.
**Settings:** `enabled` (bool), `check_interval_hours` (float), `channel` ("stable" | "pre-release")
**Persisted state:** `dismissed_version`, `last_check` (survives restarts)
### API Endpoints
| Method | Path | Purpose |
|--------|------|---------|
| `GET` | `/api/v1/system/update/status` | Current state + available version |
| `POST` | `/api/v1/system/update/check` | Trigger immediate check |
| `POST` | `/api/v1/system/update/dismiss` | Dismiss notification for current version |
| `GET` | `/api/v1/system/update/settings` | Get settings |
| `PUT` | `/api/v1/system/update/settings` | Update settings |
### Wiring
- New `get_update_service()` in `dependencies.py`
- `UpdateService` created in `main.py` lifespan, `start()`/`stop()` alongside other engines
- Router registered in `api/__init__.py`
- WebSocket event: `update_available` fired via `processor_manager.fire_event()`
## Frontend
### Version badge highlight
The existing `#server-version` pill in the header gets a CSS class `has-update` when a newer version exists — changes the background to `var(--warning-color)` with a subtle pulse, making it clickable to open the update panel in settings.
### Notification popup
On `server:update_available` WebSocket event (and on page load if status says `has_update` and not dismissed):
- A **persistent dismissible banner** slides in below the header (not the ephemeral 3s toast)
- Shows: "Version {x.y.z} is available" + [View Release Notes] + [Dismiss]
- Dismiss calls `POST /dismiss` and hides the bar for that version
- Stored in `localStorage` so it doesn't re-show after page refresh for dismissed versions
### Settings tab: "Updates"
New 5th tab in the settings modal:
- Current version display
- "Check for updates" button + spinner
- Channel selector (stable / pre-release) via IconSelect
- Auto-check toggle + interval selector
- When update available: release name, notes preview, link to releases page
### i18n keys
New `update.*` keys in `en.json`, `ru.json`, `zh.json` for all labels.
## Files to Create
| File | Purpose |
|------|---------|
| `core/update/__init__.py` | Package init |
| `core/update/release_provider.py` | Abstract provider interface + data models |
| `core/update/gitea_provider.py` | Gitea API implementation |
| `core/update/version_check.py` | Semver normalization + comparison |
| `core/update/update_service.py` | Background service + state machine |
| `api/routes/update.py` | REST endpoints |
| `api/schemas/update.py` | Pydantic request/response models |
## Files to Modify
| File | Change |
|------|--------|
| `api/__init__.py` | Register update router |
| `api/dependencies.py` | Add `get_update_service()` |
| `main.py` | Create & start/stop UpdateService in lifespan |
| `templates/modals/settings.html` | Add Updates tab |
| `static/js/features/settings.ts` | Update check/settings UI logic |
| `static/js/core/api.ts` | Version badge highlight on health check |
| `static/css/layout.css` | `.has-update` styles for version badge |
| `static/locales/en.json` | i18n keys |
| `static/locales/ru.json` | i18n keys |
| `static/locales/zh.json` | i18n keys |
## Future Phases (not in scope)
- **Phase 2**: Download & stage artifacts
- **Phase 3**: Apply update & restart (external updater script, NSIS silent mode)
- **Phase 4**: Checksums, "What's new" dialog, update history

contexts/ci-cd.md

@@ -0,0 +1,154 @@
# CI/CD & Release Workflow
> **Reference guide:** [Gitea Python CI/CD Guide](https://git.dolgolyov-family.by/alexei.dolgolyov/claude-code-facts/src/branch/main/gitea-python-ci-cd.md) — reusable patterns for Gitea Actions, cross-build, NSIS, Docker. When modifying workflows or build scripts, consult this guide to stay in sync with established patterns.
## Workflows
| File | Trigger | Purpose |
|------|---------|---------|
| `.gitea/workflows/test.yml` | Push/PR to master | Lint (ruff) + pytest |
| `.gitea/workflows/release.yml` | Tag `v*` | Build artifacts + create Gitea release |
## Release Pipeline (`release.yml`)
Four parallel jobs triggered by pushing a `v*` tag:
### 1. `create-release`
Creates the Gitea release with a description table listing all artifacts. **The description must stay in sync with actual build outputs** — if you add/remove/rename an artifact, update the body template here.
### 2. `build-windows` (cross-built from Linux)
- Runs `build-dist-windows.sh` on Ubuntu with NSIS + msitools
- Downloads Windows embedded Python 3.11 + pip wheels cross-platform
- Bundles tkinter from Python MSI via msiextract
- Builds frontend (`npm run build`)
- Pre-compiles Python bytecode (`compileall`)
- Produces: **`LedGrab-{tag}-win-x64.zip`** (portable) and **`LedGrab-{tag}-setup.exe`** (NSIS installer)
### 3. `build-linux`
- Runs `build-dist.sh` on Ubuntu
- Creates a venv, installs deps, builds frontend
- Produces: **`LedGrab-{tag}-linux-x64.tar.gz`**
### 4. `build-docker`
- Plain `docker build` + `docker push` (no Buildx — TrueNAS runners lack nested networking)
- Registry: `{gitea_host}/{repo}:{tag}`
- Tags: `v0.x.x`, `0.x.x`, and `latest` (stable only, not alpha/beta/rc)
## Build Scripts
| Script | Platform | Output |
|--------|----------|--------|
| `build-dist-windows.sh` | Linux → Windows cross-build | ZIP + NSIS installer |
| `build-dist.sh` | Linux native | tarball |
| `server/Dockerfile` | Docker | Container image |
## Release Versioning
- Tags: `v{major}.{minor}.{patch}` for stable, `v{major}.{minor}.{patch}-alpha.{n}` for pre-release
- Pre-release tags set `prerelease: true` on the Gitea release
- Docker `latest` tag only applied to stable releases
- Version in `server/pyproject.toml` should match the tag (without `v` prefix)
## CI Runners
- Two TrueNAS Gitea runners with `ubuntu` tags
- No Windows runner available — Windows builds are cross-compiled from Linux
- Docker Buildx not available (networking limitations) — use plain `docker build`
## Test Pipeline (`test.yml`)
- Installs `opencv-python-headless` and `libportaudio2` for CI compatibility
- Display-dependent tests are skipped via `@requires_display` marker
- Uses `python` not `python3` (Git Bash on Windows resolves `python3` to MS Store stub)
## Version Detection Pattern
Build scripts use a fallback chain: CLI argument → exact git tag → CI env var (`GITEA_REF_NAME` / `GITHUB_REF_NAME`) → hardcoded in source. Always strip leading `v` for clean version strings.
## NSIS Installer Best Practices
- **User-scoped install** (`$LOCALAPPDATA`, `RequestExecutionLevel user`) — no admin required
- **Launch after install**: Use `MUI_FINISHPAGE_RUN_FUNCTION` (not `MUI_FINISHPAGE_RUN_PARAMETERS` — NSIS `Exec` chokes on quoting). Still requires `MUI_FINISHPAGE_RUN ""` defined for checkbox visibility
- **Detect running instance**: `.onInit` checks file lock on `python.exe`, offers to kill process before install
- **Uninstall preserves user data**: Remove `python/`, `app/`, `logs/` but NOT `data/`
- **CI build**: `sudo apt-get install -y nsis msitools zip` then `makensis -DVERSION="${VERSION}" installer.nsi`
## Hidden Launcher (VBS)
All shortcuts and the installer finish page use `scripts/start-hidden.vbs` instead of `.bat` to avoid console window flash. The VBS launcher must include an embedded Python fallback — installed distributions don't have Python on PATH, dev environment uses system Python.
## Gitea vs GitHub Actions Differences
| Feature | GitHub Actions | Gitea Actions |
| ------- | -------------- | ------------- |
| Context prefix | `github.*` | `gitea.*` |
| Ref name | `${{ github.ref_name }}` | `${{ gitea.ref_name }}` |
| Server URL | `${{ github.server_url }}` | `${{ gitea.server_url }}` |
| Output vars | `$GITHUB_OUTPUT` | `$GITHUB_OUTPUT` (same) |
| Secrets | `${{ secrets.NAME }}` | `${{ secrets.NAME }}` (same) |
| Docker Buildx | Available | May not work (runner networking) |
## Common Tasks
### Creating a release
```bash
git tag v0.2.0
git push origin v0.2.0
```
### Creating a pre-release
```bash
git tag v0.2.0-alpha.1
git push origin v0.2.0-alpha.1
```
### Adding a new build artifact
1. Update the build script to produce the new file
2. Add upload step in the relevant `build-*` job
3. **Update the release description** in `create-release` job body template
4. Test with a pre-release tag first
### Re-triggering a failed release workflow
```bash
# Option A: Delete and re-push the same tag
git push origin :refs/tags/v0.1.0-alpha.2
# Delete the release in Gitea UI or via API
git tag -f v0.1.0-alpha.2
git push origin v0.1.0-alpha.2
# Option B: Just bump the version (simpler)
git tag v0.1.0-alpha.3
git push origin v0.1.0-alpha.3
```
The `create-release` job has fallback logic — if the release already exists for a tag, it fetches and reuses the existing release ID.
## Local Build Testing (Windows)
### Prerequisites
- NSIS: `& "$env:LOCALAPPDATA\Microsoft\WindowsApps\winget.exe" install NSIS.NSIS`
- Installs to `C:\Program Files (x86)\NSIS\makensis.exe`
### Build steps
```bash
npm ci && npm run build # frontend
bash build-dist-windows.sh v1.0.0 # Windows dist
"/c/Program Files (x86)/NSIS/makensis.exe" -DVERSION="1.0.0" installer.nsi # installer
```
### Iterating on installer only
If only `installer.nsi` changed (not app code), skip the full rebuild — just re-run `makensis`. If app code changed, re-run `build-dist-windows.sh` first since `dist/` is a snapshot.
### Common issues
| Issue | Fix |
| ----- | --- |
| `zip: command not found` | Git Bash doesn't include `zip` — harmless for installer builds |
| `Exec expects 1 parameters, got 2` | Use `MUI_FINISHPAGE_RUN_FUNCTION` instead of `MUI_FINISHPAGE_RUN_PARAMETERS` |
| `Error opening file for writing: python\_asyncio.pyd` | Server is running — stop it before installing |
| App doesn't start after install | VBS must use embedded Python fallback, not bare `python` |
| `winget` not recognized | Use full path: `$env:LOCALAPPDATA\Microsoft\WindowsApps\winget.exe` |
| `dist/` has stale files | Re-run full build script — `dist/` doesn't auto-update |


@@ -32,6 +32,16 @@ Defined in `server/src/wled_controller/static/css/base.css`.
| `--success-color` | `#28a745` | `#2e7d32` | Success indicators |
| `--shadow-color` | `rgba(0,0,0,0.3)` | `rgba(0,0,0,0.12)` | Box shadows |
## Icons — No Emoji (IMPORTANT)
**NEVER use emoji characters (`🔗`, `📋`, `🔍`, etc.) in buttons, labels, or card metadata.** Always use SVG icons from `core/icons.ts` (which wraps Lucide icon paths from `core/icon-paths.ts`).
- Import the constant: `import { ICON_CLONE } from '../core/icons.ts'`
- Use in template literals: `` `<button class="btn btn-icon">${ICON_CLONE}</button>` ``
- To add a new icon: copy inner SVG elements from [Lucide](https://lucide.dev) into `icon-paths.ts`, then export a named constant in `icons.ts`
Emoji render inconsistently across OS, break monochrome icon themes, and cannot be styled with CSS `color`/`stroke`.
## UI Conventions for Dialogs
### Hints
@@ -70,13 +80,17 @@ For `EntitySelect` with `allowNone: true`, pass the same i18n string as `noneLab
### Enhanced selectors (IconSelect & EntitySelect)
**IMPORTANT:** Always use icon grid or entity pickers instead of plain `<select>` dropdowns wherever appropriate. Plain HTML selects break the visual consistency of the UI. Any selector with a small fixed set of options (types, modes, presets, bands) should use `IconSelect`; any selector referencing dynamic entities should use `EntitySelect`.
Plain `<select>` dropdowns should be enhanced with visual selectors depending on the data type:
- **Predefined options** (source types, effect types, palettes, waveforms, viz modes) → use `IconSelect` from `js/core/icon-select.ts`. This replaces the `<select>` with a visual grid of icon+label+description cells. See `_ensureCSSTypeIconSelect()`, `_ensureEffectTypeIconSelect()`, `_ensureInterpolationIconSelect()` in `color-strips.ts` for examples.
- **Entity references** (picture sources, audio sources, devices, templates, clocks) → use `EntitySelect` from `js/core/entity-palette.ts`. This replaces the `<select>` with a searchable command-palette-style picker. See `_cssPictureSourceEntitySelect` in `color-strips.ts` or `_lineSourceEntitySelect` in `advanced-calibration.ts` for examples.
Both widgets hide the native `<select>` but keep it in the DOM with its value in sync. **The `<select>` and the visual widget are two separate things — changing one does NOT automatically update the other.** After programmatically changing the `<select>` value, call `.refresh()` (EntitySelect) or `.setValue(val)` (IconSelect) to update the trigger display. Call `.destroy()` when the modal closes.
**Common pitfall:** Using a preset/palette selector (e.g. gradient preset dropdown or effect type picker) that changes the underlying `<select>` value but forgets to call `.setValue()` on the IconSelect — the visual grid still shows the old selection.
**IMPORTANT:** For `IconSelect` item icons, use SVG icons from `js/core/icon-paths.ts` (via `_icon(P.iconName)`) or styled `<span>` elements (e.g., `<span style="font-weight:bold">A</span>`). **Never use emoji** — they render inconsistently across platforms and themes.
@@ -190,6 +204,17 @@ document.addEventListener('languageChanged', () => {
Static HTML using `data-i18n` attributes is handled automatically by the i18n system. Only dynamically generated HTML needs this pattern.
## API Calls (CRITICAL)
**ALWAYS use `fetchWithAuth()` from `core/api.ts` for authenticated API requests.** It auto-prepends `API_BASE` (`/api/v1`) and attaches the auth token.
- **Paths are relative to `/api/v1`** — pass `/gradients`, NOT `/api/v1/gradients`
- `fetchWithAuth('/gradients')` → `GET /api/v1/gradients` with auth header
- `fetchWithAuth('/devices/dev_123', { method: 'DELETE' })` → `DELETE /api/v1/devices/dev_123`
- Passing `/api/v1/gradients` results in **double prefix**: `/api/v1/api/v1/gradients` (404)
For raw `fetch()` without auth (rare), use the full path manually.
## Bundling & Development Workflow
The frontend uses **esbuild** to bundle all JS modules and CSS files into single files for production.
@@ -262,6 +287,315 @@ Reference: `.dashboard-metric-value` in `dashboard.css` uses `font-family: var(-
Use `createFpsSparkline(canvasId, actualHistory, currentHistory, fpsTarget)` from `core/chart-utils.ts`. Wrap the canvas in `.target-fps-sparkline` (36px height, `position: relative`, `overflow: hidden`). Show the value in `.target-fps-label` with `.metric-value` and `.target-fps-avg`.
## Adding a New Entity Type (Full Checklist)
This section documents the complete pattern for adding a new entity type to the frontend, covering tabs, cards, modals, CRUD operations, and all required wiring.
### 1. DataCache (state.ts)
Register a cache for the new entity's API endpoint in `static/js/core/state.ts`:
```typescript
export const myEntitiesCache = new DataCache<MyEntity[]>({
endpoint: '/my-entities',
extractData: json => json.entities || [],
});
```
- `endpoint` is relative to `/api/v1` (fetchWithAuth prepends it)
- `extractData` unwraps the response envelope
- Subscribe to sync into a legacy variable if needed: `myEntitiesCache.subscribe(v => { _cachedMyEntities = v; });`
### 2. CardSection instance
Create a global CardSection in the feature module (e.g. `features/my-entities.ts`):
```typescript
const csMyEntities = new CardSection('my-entities', {
titleKey: 'my_entity.section_title', // i18n key for section header
gridClass: 'templates-grid', // CSS grid class
addCardOnclick: "showMyEntityEditor()", // onclick for the "+" card
keyAttr: 'data-my-id', // attribute used to match cards during reconcile
emptyKey: 'section.empty.my_entities', // i18n key shown when no cards
bulkActions: _myEntityBulkActions, // optional bulk action definitions
});
```
### 3. Tab registration (streams.ts)
Add the tab entry in the `tabs` array inside `loadSourcesTab()`:
```typescript
{ key: 'my_entities', icon: ICON_MY_ENTITY, titleKey: 'streams.group.my_entities', count: myEntities.length },
```
Then add the rendering block for the tab content:
```typescript
// First load: full render
if (!csMyEntities.isMounted()) {
const items = myEntities.map(e => ({ key: e.id, html: createMyEntityCard(e) }));
html += csMyEntities.render(csMyEntities.applySortOrder(items));
} else {
// Incremental update
csMyEntities.reconcile(myEntities.map(e => ({ key: e.id, html: createMyEntityCard(e) })));
}
```
After `innerHTML` assignment, call `csMyEntities.bind()` for first mount.
### 4. Card builder function
Build cards using `wrapCard()` from `core/card-colors.ts`:
```typescript
function createMyEntityCard(entity: MyEntity): string {
return wrapCard({
dataAttr: 'data-my-id',
id: entity.id,
removeOnclick: `deleteMyEntity('${entity.id}')`,
removeTitle: t('common.delete'),
content: `
<div class="card-header">
<div class="card-title" title="${escapeHtml(entity.name)}">
${ICON_MY_ENTITY} <span class="card-title-text">${escapeHtml(entity.name)}</span>
</div>
</div>
<div class="stream-card-props">
<span class="stream-card-prop">${escapeHtml(entity.description || '')}</span>
</div>
${renderTagChips(entity.tags)}`,
actions: `
<button class="btn btn-icon btn-secondary" onclick="cloneMyEntity('${entity.id}')" title="${t('common.clone')}">${ICON_CLONE}</button>
<button class="btn btn-icon btn-secondary" onclick="showMyEntityEditor('${entity.id}')" title="${t('common.edit')}">${ICON_EDIT}</button>`,
});
}
```
**Required HTML classes:**
- `.template-card` — root (auto-added by wrapCard)
- `.card-header` > `.card-title` > `.card-title-text` — title with icon
- `.stream-card-props` > `.stream-card-prop` — property badges
- `.template-card-actions` — button row (auto-added by wrapCard)
- `.card-remove-btn` — delete X button (auto-added by wrapCard)
### 5. Modal HTML template
Create `templates/modals/my-entity-editor.html`:
```html
<div id="my-entity-modal" class="modal" role="dialog" aria-modal="true" aria-labelledby="my-entity-title">
<div class="modal-content">
<div class="modal-header">
<h2 id="my-entity-title" data-i18n="my_entity.add">Add Entity</h2>
<button class="modal-close-btn" onclick="closeMyEntityModal()" data-i18n-aria-label="aria.close">&times;</button>
</div>
<div class="modal-body">
<input type="hidden" id="my-entity-id">
<div id="my-entity-error" class="modal-error" style="display:none"></div>
<div class="form-group">
<div class="label-row">
<label for="my-entity-name" data-i18n="my_entity.name">Name:</label>
<button type="button" class="hint-toggle" onclick="toggleHint(this)" title="?">?</button>
</div>
<small class="input-hint" style="display:none" data-i18n="my_entity.name.hint">...</small>
<input type="text" id="my-entity-name" required>
</div>
<!-- Type-specific fields here -->
<div id="my-entity-tags-container"></div>
</div>
<div class="modal-footer">
<button class="btn btn-icon btn-secondary" onclick="closeMyEntityModal()" title="Cancel" data-i18n-title="settings.button.cancel">&times;</button>
<button class="btn btn-icon btn-primary" onclick="saveMyEntity()" title="Save" data-i18n-title="settings.button.save">&#x2713;</button>
</div>
</div>
</div>
```
Include it in `templates/index.html`: `{% include 'modals/my-entity-editor.html' %}`
### 6. Modal class (dirty checking)
```typescript
class MyEntityModal extends Modal {
constructor() { super('my-entity-modal'); }
snapshotValues() {
return {
name: (document.getElementById('my-entity-name') as HTMLInputElement).value,
// ... all tracked fields, serialize complex state as JSON strings
tags: JSON.stringify(_tagsInput ? _tagsInput.getValue() : []),
};
}
onForceClose() {
// Cleanup: destroy tag inputs, entity selects, etc.
if (_tagsInput) { _tagsInput.destroy(); _tagsInput = null; }
}
}
const myEntityModal = new MyEntityModal();
```
### 7. CRUD functions
**Create / Edit (unified):**
```typescript
export async function showMyEntityEditor(editId: string | null = null) {
  const titleEl = document.getElementById('my-entity-title')!;
  const idInput = document.getElementById('my-entity-id') as HTMLInputElement;
  const nameInput = document.getElementById('my-entity-name') as HTMLInputElement;
  idInput.value = '';
  nameInput.value = '';
  if (editId) {
    // Edit mode: populate from cache
    const entities = await myEntitiesCache.fetch();
    const entity = entities.find(e => e.id === editId);
    if (!entity) return;
    idInput.value = entity.id;
    nameInput.value = entity.name;
    titleEl.innerHTML = `${ICON_MY_ENTITY} ${t('my_entity.edit')}`;
  } else {
    titleEl.innerHTML = `${ICON_MY_ENTITY} ${t('my_entity.add')}`;
  }
  myEntityModal.open();
  myEntityModal.snapshot();
}
```
**Clone:** Fetch existing entity, open editor with its data but no ID (creates new):
```typescript
export async function cloneMyEntity(entityId: string) {
  const entities = await myEntitiesCache.fetch();
  const source = entities.find(e => e.id === entityId);
  if (!source) return;
  // Open editor as "create" with pre-filled data
  await showMyEntityEditor(null);
  (document.getElementById('my-entity-name') as HTMLInputElement).value = source.name + ' (Copy)';
  // ... populate other fields from source
  myEntityModal.snapshot(); // Re-snapshot after populating clone data
}
```
**Save:** POST (new) or PUT (edit) based on hidden ID field:
```typescript
export async function saveMyEntity() {
  const id = (document.getElementById('my-entity-id') as HTMLInputElement).value;
  const name = (document.getElementById('my-entity-name') as HTMLInputElement).value.trim();
  if (!name) { myEntityModal.showError(t('my_entity.error.name_required')); return; }
  const payload = { name, /* ... other fields */ };
  try {
    const url = id ? `/my-entities/${id}` : '/my-entities';
    const method = id ? 'PUT' : 'POST';
    const res = await fetchWithAuth(url, { method, body: JSON.stringify(payload) });
    if (!res!.ok) { const err = await res!.json(); throw new Error(err.detail); }
    showToast(id ? t('my_entity.updated') : t('my_entity.created'), 'success');
    myEntitiesCache.invalidate();
    myEntityModal.forceClose();
    if (window.loadSourcesTab) await window.loadSourcesTab(); // await the reload (see pitfall 3)
  } catch (e: any) {
    if (e.isAuth) return;
    myEntityModal.showError(e.message);
  }
}
```
**Delete:** Confirm, API call, invalidate cache, reload:
```typescript
export async function deleteMyEntity(entityId: string) {
  const ok = await showConfirm(t('my_entity.confirm_delete'));
  if (!ok) return;
  try {
    await fetchWithAuth(`/my-entities/${entityId}`, { method: 'DELETE' });
    showToast(t('my_entity.deleted'), 'success');
    myEntitiesCache.invalidate();
    if (window.loadSourcesTab) await window.loadSourcesTab(); // await the reload (see pitfall 3)
  } catch (e: any) {
    if (e.isAuth) return;
    showToast(e.message || t('my_entity.error.delete_failed'), 'error');
  }
}
```
### 8. Window exports (app.ts)
Import and expose all onclick handlers:
```typescript
import { showMyEntityEditor, saveMyEntity, closeMyEntityModal, cloneMyEntity, deleteMyEntity } from './features/my-entities.ts';
Object.assign(window, {
  showMyEntityEditor, saveMyEntity, closeMyEntityModal, cloneMyEntity, deleteMyEntity,
});
```
**Critical:** Functions used in `onclick="..."` HTML attributes MUST appear in `Object.assign(window, ...)` or they will be undefined at runtime.
### 9. global.d.ts
Add the window function declarations so TypeScript doesn't complain:
```typescript
showMyEntityEditor?: (id?: string | null) => void;
cloneMyEntity?: (id: string) => void;
deleteMyEntity?: (id: string) => void;
```
### 10. i18n keys
Add keys to all three locale files (`en.json`, `ru.json`, `zh.json`):
```json
"my_entity.section_title": "My Entities",
"my_entity.add": "Add Entity",
"my_entity.edit": "Edit Entity",
"my_entity.created": "Entity created",
"my_entity.updated": "Entity updated",
"my_entity.deleted": "Entity deleted",
"my_entity.confirm_delete": "Delete this entity?",
"my_entity.error.name_required": "Name is required",
"my_entity.name": "Name:",
"my_entity.name.hint": "A descriptive name for this entity",
"section.empty.my_entities": "No entities yet. Click + to create one."
```
### 11. Cross-references
After adding the entity:
- **Backup/restore:** Add to `STORE_MAP` in `api/routes/system.py`
- **Graph editor:** Update entity maps in graph editor files (see graph-editor.md)
- **Tutorials:** Update tutorial steps if adding a new tab
### CRITICAL: Common Pitfalls (MUST READ)
These mistakes have been made repeatedly. **Check every one before considering the entity complete:**
1. **DOM ID conflicts:** If your modal reuses a shared component (e.g. gradient stop editor, color picker) that uses hardcoded `document.getElementById()` calls, **both modals exist in the DOM simultaneously**. The shared component will render into whichever element it finds first (the wrong one). Fix: add an ID prefix mechanism to the shared component, set it before init, reset it on modal close.
2. **Tags go under the name input:** The tags container `<div>` goes **inside the same `form-group` as the name `<input>`**, directly after it — NOT in a separate section or at the bottom of the modal. Look at any existing modal (css-editor.html, audio-source-editor.html, device-settings.html) for the pattern.
3. **Cache reload after save/delete/clone:** Use `cache.invalidate()` then `await loadPictureSources()` (imported directly from streams.ts). Do NOT use `window.loadSourcesTab()` without `await` — it reads the stale cache before the invalidation takes effect, so the new entity won't appear until page reload.
4. **IconSelect / EntitySelect sync:** When programmatically changing a `<select>` value (e.g. loading a preset, populating for edit), you MUST also call `.setValue(val)` (IconSelect) or `.refresh()` (EntitySelect). The native `<select>` and the visual widget are **separate** — changing one does NOT update the other.
5. **Never use `window.prompt()`:** Always use a proper Modal subclass with `snapshotValues()` for dirty checking. Prompts break the UX and have no validation, no hint text, no i18n.
6. **Never do API-only clone:** Clone should open the editor modal pre-filled with the source entity's data (name + " (Copy)"), with an empty ID field so saving creates a new entity. Do NOT call a `/clone` endpoint and refresh — the user must be able to edit before saving.
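Pitfall 3 in code form: a minimal, DOM-free sketch (the `EntityCache` class and backend stand-in below are hypothetical, not the real cache implementation) showing why the reload must be awaited after `invalidate()`. `fetch()` only re-reads the backend when the cached value was dropped first.

```typescript
// Hypothetical cache: fetch() returns the in-memory list until invalidate()
// clears it, forcing the next fetch() to reload from the backend.
class EntityCache {
  private data: string[] | null = null;
  constructor(private load: () => Promise<string[]>) {}
  async fetch(): Promise<string[]> {
    if (this.data === null) this.data = await this.load();
    return this.data;
  }
  invalidate(): void { this.data = null; }
}

let serverState = ['existing'];                 // stand-in for the REST backend
const cache = new EntityCache(async () => [...serverState]);

async function saveEntity(name: string): Promise<string[]> {
  serverState = [...serverState, name];         // the POST/PUT succeeded
  cache.invalidate();                           // drop the stale in-memory copy
  return await cache.fetch();                   // awaited reload sees the new entity
}
```

Skipping the `await`, or triggering the reload before `invalidate()`, reads the old cached list. That is exactly the "new entity missing until page reload" symptom described above.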
## Visual Graph Editor
See [`contexts/graph-editor.md`](graph-editor.md) for full graph editor architecture and conventions.


```diff
@@ -8,7 +8,7 @@ Two independent server modes with separate configs, ports, and data directories:
 | Mode | Command | Config | Port | API Key | Data |
 | ---- | ------- | ------ | ---- | ------- | ---- |
-| **Real** | `python -m wled_controller.main` | `config/default_config.yaml` | 8080 | `development-key-change-in-production` | `data/` |
+| **Real** | `python -m wled_controller` | `config/default_config.yaml` | 8080 | `development-key-change-in-production` | `data/` |
 | **Demo** | `python -m wled_controller.demo` | `config/demo_config.yaml` | 8081 | `demo` | `data/demo/` |
 Both can run simultaneously on different ports.
```


```diff
@@ -21,7 +21,7 @@ STEP_USER_DATA_SCHEMA = vol.Schema(
     {
         vol.Optional(CONF_SERVER_NAME, default="LED Screen Controller"): str,
         vol.Required(CONF_SERVER_URL, default="http://localhost:8080"): str,
-        vol.Required(CONF_API_KEY): str,
+        vol.Optional(CONF_API_KEY, default=""): str,
     }
 )
@@ -57,21 +57,25 @@ async def validate_server(
     except aiohttp.ClientError as err:
         raise ConnectionError(f"Cannot connect to server: {err}") from err
-    # Step 2: Validate API key via authenticated endpoint
-    headers = {"Authorization": f"Bearer {api_key}"}
-    try:
-        async with session.get(
-            f"{server_url}/api/v1/output-targets",
-            headers=headers,
-            timeout=timeout,
-        ) as resp:
-            if resp.status == 401:
-                raise PermissionError("Invalid API key")
-            resp.raise_for_status()
-    except PermissionError:
-        raise
-    except aiohttp.ClientError as err:
-        raise ConnectionError(f"API request failed: {err}") from err
+    # Step 2: Validate API key via authenticated endpoint (skip if no key and auth not required)
+    auth_required = data.get("auth_required", True)
+    if api_key:
+        headers = {"Authorization": f"Bearer {api_key}"}
+        try:
+            async with session.get(
+                f"{server_url}/api/v1/output-targets",
+                headers=headers,
+                timeout=timeout,
+            ) as resp:
+                if resp.status == 401:
+                    raise PermissionError("Invalid API key")
+                resp.raise_for_status()
+        except PermissionError:
+            raise
+        except aiohttp.ClientError as err:
+            raise ConnectionError(f"API request failed: {err}") from err
+    elif auth_required:
+        raise PermissionError("Server requires an API key")
     return {"version": version}
```


```diff
@@ -36,7 +36,7 @@ class WLEDScreenControllerCoordinator(DataUpdateCoordinator):
         self.session = session
         self.api_key = api_key
         self.server_version = "unknown"
-        self._auth_headers = {"Authorization": f"Bearer {api_key}"}
+        self._auth_headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
         self._timeout = aiohttp.ClientTimeout(total=DEFAULT_TIMEOUT)
         super().__init__(
```

**installer.nsi** (new file, 185 lines):
; LedGrab NSIS Installer Script
; Cross-compilable on Linux: apt install nsis && makensis installer.nsi
;
; Expects the portable build to already exist at build/LedGrab/
; (run build-dist-windows.sh first)
!include "MUI2.nsh"
!include "FileFunc.nsh"
; ── Metadata ────────────────────────────────────────────────
!define APPNAME "LedGrab"
!define VBSNAME "start-hidden.vbs"
!define DESCRIPTION "Ambient lighting system — captures screen content and drives LED strips in real time"
!define VERSIONMAJOR 0
!define VERSIONMINOR 1
!define VERSIONBUILD 0
; Set from command line: makensis -DVERSION=0.1.0 installer.nsi
!ifndef VERSION
!define VERSION "${VERSIONMAJOR}.${VERSIONMINOR}.${VERSIONBUILD}"
!endif
Name "${APPNAME} v${VERSION}"
OutFile "build\${APPNAME}-v${VERSION}-win-x64-setup.exe"
InstallDir "$LOCALAPPDATA\${APPNAME}"
InstallDirRegKey HKCU "Software\${APPNAME}" "InstallDir"
RequestExecutionLevel user
SetCompressor /SOLID lzma
; ── Modern UI Configuration ─────────────────────────────────
!define MUI_ABORTWARNING
; ── Pages ───────────────────────────────────────────────────
; Use MUI_FINISHPAGE_RUN_FUNCTION instead of MUI_FINISHPAGE_RUN_PARAMETERS —
; NSIS Exec command chokes on the quoting with RUN_PARAMETERS.
!define MUI_FINISHPAGE_RUN ""
!define MUI_FINISHPAGE_RUN_TEXT "Launch ${APPNAME}"
!define MUI_FINISHPAGE_RUN_FUNCTION LaunchApp
!insertmacro MUI_PAGE_WELCOME
!insertmacro MUI_PAGE_DIRECTORY
!insertmacro MUI_PAGE_COMPONENTS
!insertmacro MUI_PAGE_INSTFILES
!insertmacro MUI_PAGE_FINISH
!insertmacro MUI_UNPAGE_CONFIRM
!insertmacro MUI_UNPAGE_INSTFILES
!insertmacro MUI_LANGUAGE "English"
; ── Functions ─────────────────────────────────────────────
Function LaunchApp
ExecShell "open" "wscript.exe" '"$INSTDIR\scripts\${VBSNAME}"'
Sleep 2000
ExecShell "open" "http://localhost:8080/"
FunctionEnd
; Detect running instance before install (file lock check on python.exe)
Function .onInit
IfFileExists "$INSTDIR\python\python.exe" 0 done
ClearErrors
FileOpen $0 "$INSTDIR\python\python.exe" a
IfErrors locked
FileClose $0
Goto done
locked:
MessageBox MB_YESNOCANCEL|MB_ICONEXCLAMATION \
"${APPNAME} is currently running.$\n$\nYes = Stop and continue$\nNo = Continue anyway (may cause errors)$\nCancel = Abort" \
IDYES kill IDNO done
Abort
kill:
nsExec::ExecToLog 'wmic process where "ExecutablePath like $\'%${APPNAME}%python%$\'" call terminate'
Sleep 2000
done:
FunctionEnd
; ── Installer Sections ──────────────────────────────────────
Section "!${APPNAME} (required)" SecCore
SectionIn RO
SetOutPath "$INSTDIR"
; Copy the entire portable build
File /r "build\LedGrab\python"
File /r "build\LedGrab\app"
File /r "build\LedGrab\scripts"
File "build\LedGrab\LedGrab.bat"
; Create data and logs directories
CreateDirectory "$INSTDIR\data"
CreateDirectory "$INSTDIR\logs"
; Create uninstaller
WriteUninstaller "$INSTDIR\uninstall.exe"
; Start Menu shortcuts
CreateDirectory "$SMPROGRAMS\${APPNAME}"
CreateShortcut "$SMPROGRAMS\${APPNAME}\${APPNAME}.lnk" \
"wscript.exe" '"$INSTDIR\scripts\${VBSNAME}"' \
"$INSTDIR\python\pythonw.exe" 0
CreateShortcut "$SMPROGRAMS\${APPNAME}\Uninstall.lnk" "$INSTDIR\uninstall.exe"
; Registry: install location + Add/Remove Programs entry
WriteRegStr HKCU "Software\${APPNAME}" "InstallDir" "$INSTDIR"
WriteRegStr HKCU "Software\${APPNAME}" "Version" "${VERSION}"
WriteRegStr HKCU "Software\Microsoft\Windows\CurrentVersion\Uninstall\${APPNAME}" \
"DisplayName" "${APPNAME}"
WriteRegStr HKCU "Software\Microsoft\Windows\CurrentVersion\Uninstall\${APPNAME}" \
"DisplayVersion" "${VERSION}"
WriteRegStr HKCU "Software\Microsoft\Windows\CurrentVersion\Uninstall\${APPNAME}" \
"UninstallString" '"$INSTDIR\uninstall.exe"'
WriteRegStr HKCU "Software\Microsoft\Windows\CurrentVersion\Uninstall\${APPNAME}" \
"InstallLocation" "$INSTDIR"
WriteRegStr HKCU "Software\Microsoft\Windows\CurrentVersion\Uninstall\${APPNAME}" \
"Publisher" "Alexei Dolgolyov"
WriteRegStr HKCU "Software\Microsoft\Windows\CurrentVersion\Uninstall\${APPNAME}" \
"URLInfoAbout" "https://git.dolgolyov-family.by/alexei.dolgolyov/wled-screen-controller-mixed"
WriteRegDWORD HKCU "Software\Microsoft\Windows\CurrentVersion\Uninstall\${APPNAME}" \
"NoModify" 1
WriteRegDWORD HKCU "Software\Microsoft\Windows\CurrentVersion\Uninstall\${APPNAME}" \
"NoRepair" 1
; Calculate installed size for Add/Remove Programs
${GetSize} "$INSTDIR" "/S=0K" $0 $1 $2
IntFmt $0 "0x%08X" $0
WriteRegDWORD HKCU "Software\Microsoft\Windows\CurrentVersion\Uninstall\${APPNAME}" \
"EstimatedSize" "$0"
SectionEnd
Section "Desktop shortcut" SecDesktop
CreateShortcut "$DESKTOP\${APPNAME}.lnk" \
"wscript.exe" '"$INSTDIR\scripts\${VBSNAME}"' \
"$INSTDIR\python\pythonw.exe" 0
SectionEnd
Section "Start with Windows" SecAutostart
CreateShortcut "$SMSTARTUP\${APPNAME}.lnk" \
"wscript.exe" '"$INSTDIR\scripts\${VBSNAME}"' \
"$INSTDIR\python\pythonw.exe" 0
SectionEnd
; ── Section Descriptions ────────────────────────────────────
!insertmacro MUI_FUNCTION_DESCRIPTION_BEGIN
!insertmacro MUI_DESCRIPTION_TEXT ${SecCore} \
"Install ${APPNAME} server and all required files."
!insertmacro MUI_DESCRIPTION_TEXT ${SecDesktop} \
"Create a shortcut on your desktop."
!insertmacro MUI_DESCRIPTION_TEXT ${SecAutostart} \
"Start ${APPNAME} automatically when you log in."
!insertmacro MUI_FUNCTION_DESCRIPTION_END
; ── Uninstaller ─────────────────────────────────────────────
Section "Uninstall"
; Remove shortcuts
Delete "$SMPROGRAMS\${APPNAME}\${APPNAME}.lnk"
Delete "$SMPROGRAMS\${APPNAME}\Uninstall.lnk"
RMDir "$SMPROGRAMS\${APPNAME}"
Delete "$DESKTOP\${APPNAME}.lnk"
Delete "$SMSTARTUP\${APPNAME}.lnk"
; Remove application files (but NOT data/ — preserve user config)
RMDir /r "$INSTDIR\python"
RMDir /r "$INSTDIR\app"
RMDir /r "$INSTDIR\scripts"
Delete "$INSTDIR\LedGrab.bat"
Delete "$INSTDIR\uninstall.exe"
; Remove logs (but keep data/)
RMDir /r "$INSTDIR\logs"
; Try to remove install dir (only succeeds if empty — data/ may remain)
RMDir "$INSTDIR"
; Remove registry keys
DeleteRegKey HKCU "Software\${APPNAME}"
DeleteRegKey HKCU "Software\Microsoft\Windows\CurrentVersion\Uninstall\${APPNAME}"
SectionEnd


```diff
@@ -23,7 +23,8 @@ Server uses API key authentication via Bearer token in `Authorization` header.
 - Config: `config/default_config.yaml` under `auth.api_keys`
 - Env var: `WLED_AUTH__API_KEYS`
-- Dev key: `development-key-change-in-production`
+- When `api_keys` is empty (default), auth is disabled — all endpoints are open
+- To enable auth, add key entries (e.g. `dev: "your-secret-key"`)
 ## Common Tasks
```


```diff
@@ -10,10 +10,12 @@ RUN npm run build
 ## Stage 2: Python application
 FROM python:3.11.11-slim AS runtime
+ARG APP_VERSION=0.0.0
+
 LABEL maintainer="Alexei Dolgolyov <dolgolyov.alexei@gmail.com>"
 LABEL org.opencontainers.image.title="LED Grab"
 LABEL org.opencontainers.image.description="Ambient lighting system that captures screen content and drives LED strips in real time"
-LABEL org.opencontainers.image.version="0.2.0"
+LABEL org.opencontainers.image.version="${APP_VERSION}"
 LABEL org.opencontainers.image.url="https://git.dolgolyov-family.by/alexei.dolgolyov/wled-screen-controller-mixed"
 LABEL org.opencontainers.image.source="https://git.dolgolyov-family.by/alexei.dolgolyov/wled-screen-controller-mixed"
 LABEL org.opencontainers.image.licenses="MIT"
@@ -34,7 +36,8 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
 # Copy pyproject.toml with a minimal package stub so pip can resolve deps.
 # The real source is copied afterward, keeping the dep layer cached.
 COPY pyproject.toml .
-RUN mkdir -p src/wled_controller && touch src/wled_controller/__init__.py \
+RUN sed -i "s/^version = .*/version = \"${APP_VERSION}\"/" pyproject.toml \
+    && mkdir -p src/wled_controller && touch src/wled_controller/__init__.py \
     && pip install --no-cache-dir ".[notifications]" \
     && rm -rf src/wled_controller
```


```diff
@@ -8,19 +8,14 @@ server:
     - "http://localhost:8080"
 auth:
-  # API keys are REQUIRED - authentication is always enforced
-  # Format: label: "api-key"
+  # API keys — when empty, authentication is disabled (open access).
+  # To enable auth, add one or more label: "api-key" entries.
+  # Generate secure keys: openssl rand -hex 32
   api_keys:
-    # Generate secure keys: openssl rand -hex 32
-    dev: "development-key-change-in-production"  # Development key - CHANGE THIS!
+    dev: "development-key-change-in-production"
 storage:
-  devices_file: "data/devices.json"
-  templates_file: "data/capture_templates.json"
-  postprocessing_templates_file: "data/postprocessing_templates.json"
-  picture_sources_file: "data/picture_sources.json"
-  output_targets_file: "data/output_targets.json"
-  pattern_templates_file: "data/pattern_templates.json"
+  database_file: "data/ledgrab.db"
 mqtt:
   enabled: false
```


```diff
@@ -19,12 +19,7 @@ auth:
     demo: "demo"
 storage:
-  devices_file: "data/devices.json"
-  templates_file: "data/capture_templates.json"
-  postprocessing_templates_file: "data/postprocessing_templates.json"
-  picture_sources_file: "data/picture_sources.json"
-  output_targets_file: "data/output_targets.json"
-  pattern_templates_file: "data/pattern_templates.json"
+  database_file: "data/ledgrab.db"
 mqtt:
   enabled: false
```


```diff
@@ -11,12 +11,7 @@ auth:
     test_client: "eb8a89cfd33ab067751fd0e38f74ddf7ac3d75ff012fbab35a616c45c12e0c8d"
 storage:
-  devices_file: "data/test_devices.json"
-  templates_file: "data/capture_templates.json"
-  postprocessing_templates_file: "data/postprocessing_templates.json"
-  picture_sources_file: "data/picture_sources.json"
-  output_targets_file: "data/output_targets.json"
-  pattern_templates_file: "data/pattern_templates.json"
+  database_file: "data/test_ledgrab.db"
 logging:
   format: "text"
```


```diff
@@ -26,6 +26,7 @@ dependencies = [
     "fastapi>=0.115.0",
     "uvicorn[standard]>=0.32.0",
     "httpx>=0.27.2",
+    "packaging>=23.0",
     "mss>=9.0.2",
     "Pillow>=10.4.0",
     "numpy>=2.1.3",
@@ -37,7 +38,6 @@ dependencies = [
     "python-dateutil>=2.9.0",
     "python-multipart>=0.0.12",
     "jinja2>=3.1.0",
-    "wmi>=1.5.1; sys_platform == 'win32'",
     "zeroconf>=0.131.0",
     "pyserial>=3.5",
     "psutil>=5.9.0",
@@ -61,9 +61,13 @@ dev = [
 camera = [
     "opencv-python-headless>=4.8.0",
 ]
-# OS notification capture
+# OS notification capture (winrt packages are ~2.5MB total vs winsdk's ~35MB)
 notifications = [
-    "winsdk>=1.0.0b10; sys_platform == 'win32'",
+    "winrt-Windows.UI.Notifications>=3.0.0; sys_platform == 'win32'",
+    "winrt-Windows.UI.Notifications.Management>=3.0.0; sys_platform == 'win32'",
+    "winrt-Windows.Foundation>=3.0.0; sys_platform == 'win32'",
+    "winrt-Windows.Foundation.Collections>=3.0.0; sys_platform == 'win32'",
+    "winrt-Windows.ApplicationModel>=3.0.0; sys_platform == 'win32'",
     "dbus-next>=0.2.3; sys_platform == 'linux'",
 ]
 # High-performance screen capture engines (Windows only)
@@ -72,6 +76,9 @@ perf = [
     "bettercam>=1.0.0; sys_platform == 'win32'",
     "windows-capture>=1.5.0; sys_platform == 'win32'",
 ]
+tray = [
+    "pystray>=0.19.0; sys_platform == 'win32'",
+]
 [project.urls]
 Homepage = "https://git.dolgolyov-family.by/alexei.dolgolyov/wled-screen-controller-mixed"
```


```diff
@@ -1,12 +1,76 @@
 # Restart the WLED Screen Controller server
-# Stop any running instance
-$procs = Get-CimInstance Win32_Process -Filter "Name='python.exe'" |
-    Where-Object { $_.CommandLine -like '*wled_controller.main*' }
-foreach ($p in $procs) {
-    Write-Host "Stopping server (PID $($p.ProcessId))..."
-    Stop-Process -Id $p.ProcessId -Force -ErrorAction SilentlyContinue
-}
-if ($procs) { Start-Sleep -Seconds 2 }
+# Uses graceful shutdown first (lets the server persist data to disk),
+# then force-kills as a fallback.
+$serverRoot = 'c:\Users\Alexei\Documents\wled-screen-controller\server'
+# Read API key from config for authenticated shutdown request
+$configPath = Join-Path $serverRoot 'config\default_config.yaml'
+$apiKey = $null
+if (Test-Path $configPath) {
+    $inKeys = $false
+    foreach ($line in Get-Content $configPath) {
+        if ($line -match '^\s*api_keys:') { $inKeys = $true; continue }
+        if ($inKeys -and $line -match '^\s+\w+:\s*"(.+)"') {
+            $apiKey = $Matches[1]; break
+        }
+        if ($inKeys -and $line -match '^\S') { break } # left the api_keys block
+    }
+}
+# Find running server processes
+$procs = Get-CimInstance Win32_Process -Filter "Name='python.exe'" |
+    Where-Object { $_.CommandLine -like '*wled_controller*' -and $_.CommandLine -notlike '*demo*' -and $_.CommandLine -notlike '*vscode*' -and $_.CommandLine -notlike '*isort*' }
+if ($procs) {
+    # Step 1: Request graceful shutdown via API (triggers lifespan shutdown + store save)
+    $shutdownOk = $false
+    if ($apiKey) {
+        Write-Host "Requesting graceful shutdown..."
+        try {
+            $headers = @{ Authorization = "Bearer $apiKey" }
+            Invoke-RestMethod -Uri 'http://localhost:8080/api/v1/system/shutdown' `
+                -Method Post -Headers $headers -TimeoutSec 5 -ErrorAction Stop | Out-Null
+            $shutdownOk = $true
+        } catch {
+            Write-Host "  API shutdown failed ($($_.Exception.Message)), falling back to process kill"
+        }
+    }
+    if ($shutdownOk) {
+        # Step 2: Wait for the server to exit gracefully (up to 15 seconds)
+        # The server needs time to stop processors, disconnect devices, and persist stores.
+        Write-Host "Waiting for graceful shutdown..."
+        $waited = 0
+        while ($waited -lt 15) {
+            Start-Sleep -Seconds 1
+            $waited++
+            $still = Get-CimInstance Win32_Process -Filter "Name='python.exe'" |
+                Where-Object { $_.CommandLine -like '*wled_controller*' -and $_.CommandLine -notlike '*demo*' -and $_.CommandLine -notlike '*vscode*' -and $_.CommandLine -notlike '*isort*' }
+            if (-not $still) {
+                Write-Host "  Server exited cleanly after ${waited}s"
+                break
+            }
+        }
+        # Step 3: Force-kill stragglers
+        $still = Get-CimInstance Win32_Process -Filter "Name='python.exe'" |
+            Where-Object { $_.CommandLine -like '*wled_controller*' -and $_.CommandLine -notlike '*demo*' -and $_.CommandLine -notlike '*vscode*' -and $_.CommandLine -notlike '*isort*' }
+        if ($still) {
+            Write-Host "  Force-killing remaining processes..."
+            foreach ($p in $still) {
+                Stop-Process -Id $p.ProcessId -Force -ErrorAction SilentlyContinue
+            }
+            Start-Sleep -Seconds 1
+        }
+    } else {
+        # No API key or API call failed — force-kill directly
+        foreach ($p in $procs) {
+            Write-Host "Stopping server (PID $($p.ProcessId))..."
+            Stop-Process -Id $p.ProcessId -Force -ErrorAction SilentlyContinue
+        }
+        Start-Sleep -Seconds 2
+    }
+}
 # Merge registry PATH with current PATH so newly-installed tools (e.g. scrcpy) are visible
 $regUser = [Environment]::GetEnvironmentVariable('PATH', 'User')
@@ -19,17 +83,23 @@ if ($regUser) {
     }
 }
-# Start server detached
+# Start server detached (set WLED_RESTART=1 to skip browser open)
 Write-Host "Starting server..."
-Start-Process -FilePath python -ArgumentList '-m', 'wled_controller.main' `
-    -WorkingDirectory 'c:\Users\Alexei\Documents\wled-screen-controller\server' `
+$env:WLED_RESTART = "1"
+$pythonExe = (Get-Command python -ErrorAction SilentlyContinue).Source
+if (-not $pythonExe) {
+    # Fallback to known install location
+    $pythonExe = "$env:LOCALAPPDATA\Programs\Python\Python313\python.exe"
+}
+Start-Process -FilePath $pythonExe -ArgumentList '-m', 'wled_controller' `
+    -WorkingDirectory $serverRoot `
     -WindowStyle Hidden
 Start-Sleep -Seconds 3
 # Verify it's running
 $check = Get-CimInstance Win32_Process -Filter "Name='python.exe'" |
-    Where-Object { $_.CommandLine -like '*wled_controller.main*' }
+    Where-Object { $_.CommandLine -like '*wled_controller*' -and $_.CommandLine -notlike '*demo*' -and $_.CommandLine -notlike '*vscode*' -and $_.CommandLine -notlike '*isort*' }
 if ($check) {
     Write-Host "Server started (PID $($check[0].ProcessId))"
 } else {
```


```diff
@@ -18,7 +18,7 @@ cd /d "%~dp0\.."
 REM Start the server
 echo.
 echo [2/2] Starting server...
-python -m uvicorn wled_controller.main:app --host 0.0.0.0 --port 8080
+python -m wled_controller
 REM If the server exits, pause to show any error messages
 pause
```


@@ -0,0 +1,13 @@
Set fso = CreateObject("Scripting.FileSystemObject")
Set WshShell = CreateObject("WScript.Shell")
' Get the directory of this script (scripts\), then go up to app root
scriptDir = fso.GetParentFolderName(WScript.ScriptFullName)
appRoot = fso.GetParentFolderName(scriptDir)
WshShell.CurrentDirectory = appRoot
' Use embedded Python if present (installed dist), otherwise system Python
embeddedPython = appRoot & "\python\pythonw.exe"
If fso.FileExists(embeddedPython) Then
WshShell.Run """" & embeddedPython & """ -m wled_controller", 0, False
Else
WshShell.Run "python -m wled_controller", 0, False
End If


```diff
@@ -2,6 +2,6 @@ Set WshShell = CreateObject("WScript.Shell")
 Set FSO = CreateObject("Scripting.FileSystemObject")
 ' Get parent folder of scripts folder (server root)
 WshShell.CurrentDirectory = FSO.GetParentFolderName(FSO.GetParentFolderName(WScript.ScriptFullName))
-WshShell.Run "python -m uvicorn wled_controller.main:app --host 0.0.0.0 --port 8080", 0, False
+WshShell.Run "python -m wled_controller", 0, False
 Set FSO = Nothing
 Set WshShell = Nothing
```


```diff
@@ -9,7 +9,7 @@ REM Change to the server directory (parent of scripts folder)
 cd /d "%~dp0\.."
 REM Start the server
-python -m uvicorn wled_controller.main:app --host 0.0.0.0 --port 8080
+python -m wled_controller
 REM If the server exits, pause to show any error messages
 pause
```


```diff
@@ -1,5 +1,12 @@
 """LED Grab - Ambient lighting based on screen content."""
-__version__ = "0.1.0"
+from importlib.metadata import version, PackageNotFoundError
+
+try:
+    __version__ = version("wled-screen-controller")
+except PackageNotFoundError:
+    # Running from source without pip install (e.g. dev, embedded Python)
+    __version__ = "0.0.0-dev"
 __author__ = "Alexei Dolgolyov"
 __email__ = "dolgolyov.alexei@gmail.com"
```


@@ -0,0 +1,111 @@
"""Entry point for ``python -m wled_controller``.

Starts the uvicorn server and, on Windows when *pystray* is installed,
shows a system-tray icon with **Show UI** / **Exit** actions.
"""
import asyncio
import os
import sys
import threading
import time
import webbrowser
from pathlib import Path

import uvicorn

from wled_controller.config import get_config
from wled_controller.server_ref import set_server, set_tray
from wled_controller.tray import PYSTRAY_AVAILABLE, TrayManager
from wled_controller.utils import setup_logging, get_logger

setup_logging()
logger = get_logger(__name__)

_ICON_PATH = Path(__file__).parent / "static" / "icons" / "icon-192.png"


def _run_server(server: uvicorn.Server) -> None:
    """Run uvicorn in a dedicated asyncio event loop (background thread)."""
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(server.serve())


def _open_browser(port: int, delay: float = 2.0) -> None:
    """Open the UI in the default browser after a short delay."""
    time.sleep(delay)
    webbrowser.open(f"http://localhost:{port}")


def _is_restart() -> bool:
    """Detect if this is a restart (vs first launch)."""
    return os.environ.get("WLED_RESTART", "") == "1"


def main() -> None:
    config = get_config()
    uv_config = uvicorn.Config(
        "wled_controller.main:app",
        host=config.server.host,
        port=config.server.port,
        log_level=config.server.log_level.lower(),
    )
    server = uvicorn.Server(uv_config)
    set_server(server)

    use_tray = PYSTRAY_AVAILABLE and (
        sys.platform == "win32" or _force_tray()
    )

    if use_tray:
        logger.info("Starting with system tray icon")
        # Uvicorn in a background thread
        server_thread = threading.Thread(
            target=_run_server, args=(server,), daemon=True,
        )
        server_thread.start()

        # Browser after a short delay (skip on restart — user already has a tab)
        if not _is_restart():
            threading.Thread(
                target=_open_browser,
                args=(config.server.port,),
                daemon=True,
            ).start()

        # Tray on main thread (blocking)
        tray = TrayManager(
            icon_path=_ICON_PATH,
            port=config.server.port,
            on_exit=lambda: _request_shutdown(server),
        )
        set_tray(tray)
        tray.run()

        # Tray exited — wait for server to finish its graceful shutdown
        server_thread.join(timeout=10)
    else:
        if not PYSTRAY_AVAILABLE:
            logger.info(
                "System tray not available (install pystray for tray support)"
            )
        server.run()


def _request_shutdown(server: uvicorn.Server) -> None:
    """Signal uvicorn to perform a graceful shutdown."""
    server.should_exit = True


def _force_tray() -> bool:
    """Allow forcing tray on non-Windows via WLED_TRAY=1."""
    import os
    return os.environ.get("WLED_TRAY", "").strip() in ("1", "true", "yes")


if __name__ == "__main__":
    main()

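The tray/server split above hinges on one flag: the tray blocks the main thread, uvicorn runs in a daemon thread, and exit is signalled by flipping `server.should_exit`. A framework-free sketch of that coordination pattern (`StoppableServer` is a stand-in for `uvicorn.Server`, not the project's code):

```python
import threading
import time

class StoppableServer:
    """Stand-in for uvicorn.Server: loops until should_exit is set."""
    def __init__(self) -> None:
        self.should_exit = False
        self.finished = False

    def serve(self) -> None:
        while not self.should_exit:
            time.sleep(0.01)
        self.finished = True  # graceful shutdown complete

def run_with_ui(server: StoppableServer, ui_loop) -> None:
    # Server in a daemon thread, blocking UI loop on the main thread
    t = threading.Thread(target=server.serve, daemon=True)
    t.start()
    ui_loop()                 # e.g. tray.run(); returns when the user picks Exit
    server.should_exit = True # same signal _request_shutdown() sends
    t.join(timeout=10)        # bounded wait, as in __main__.py

server = StoppableServer()
run_with_ui(server, ui_loop=lambda: time.sleep(0.05))
print(server.finished)
```

The `join(timeout=10)` keeps a wedged shutdown from hanging the process forever, at the cost of possibly exiting before cleanup finishes.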
View File

@@ -23,6 +23,9 @@ from .routes.scene_presets import router as scene_presets_router
from .routes.webhooks import router as webhooks_router
from .routes.sync_clocks import router as sync_clocks_router
from .routes.color_strip_processing import router as cspt_router
+from .routes.gradients import router as gradients_router
+from .routes.weather_sources import router as weather_sources_router
+from .routes.update import router as update_router

router = APIRouter()
router.include_router(system_router)
@@ -46,5 +49,8 @@ router.include_router(scene_presets_router)
router.include_router(webhooks_router)
router.include_router(sync_clocks_router)
router.include_router(cspt_router)
+router.include_router(gradients_router)
+router.include_router(weather_sources_router)
+router.include_router(update_router)
__all__ = ["router"]

View File

@@ -15,11 +15,19 @@ logger = get_logger(__name__)
security = HTTPBearer(auto_error=False)

+def is_auth_enabled() -> bool:
+    """Return True when at least one API key is configured."""
+    return bool(get_config().auth.api_keys)

def verify_api_key(
    credentials: Annotated[HTTPAuthorizationCredentials | None, Security(security)]
) -> str:
    """Verify API key from Authorization header.

+    When no API keys are configured, authentication is disabled and all
+    requests are allowed through as "anonymous".

    Args:
        credentials: HTTP authorization credentials
@@ -31,6 +39,10 @@ def verify_api_key(
    """
    config = get_config()
+    # No keys configured → auth disabled, allow all requests
+    if not config.auth.api_keys:
+        return "anonymous"
    # Check if credentials are provided
    if not credentials:
        logger.warning("Request missing Authorization header")
@@ -43,14 +55,6 @@ def verify_api_key(
    # Extract token
    token = credentials.credentials
-    # Verify against configured API keys
-    if not config.auth.api_keys:
-        logger.error("No API keys configured - server misconfiguration")
-        raise HTTPException(
-            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
-            detail="Server authentication not configured properly",
-        )
    # Find matching key and return its label using constant-time comparison
    authenticated_as = None
    for label, api_key in config.auth.api_keys.items():
@@ -80,10 +84,14 @@ AuthRequired = Annotated[str, Depends(verify_api_key)]
def verify_ws_token(token: str) -> bool:
    """Check a WebSocket query-param token against configured API keys.

-    Use this for WebSocket endpoints where FastAPI's Depends() isn't available.
+    When no API keys are configured, authentication is disabled and all
+    WebSocket connections are allowed.
    """
    config = get_config()
-    if token and config.auth.api_keys:
+    # No keys configured → auth disabled, allow all connections
+    if not config.auth.api_keys:
+        return True
+    if token:
        for _label, api_key in config.auth.api_keys.items():
            if secrets.compare_digest(token, api_key):
                return True

View File

@@ -7,6 +7,7 @@ All getter function signatures remain unchanged for FastAPI Depends() compatibil
from typing import Any, Dict, TypeVar

from wled_controller.core.processing.processor_manager import ProcessorManager
+from wled_controller.storage.database import Database
from wled_controller.storage import DeviceStore
from wled_controller.storage.template_store import TemplateStore
from wled_controller.storage.postprocessing_template_store import PostprocessingTemplateStore
@@ -21,9 +22,13 @@ from wled_controller.storage.automation_store import AutomationStore
from wled_controller.storage.scene_preset_store import ScenePresetStore
from wled_controller.storage.sync_clock_store import SyncClockStore
from wled_controller.storage.color_strip_processing_template_store import ColorStripProcessingTemplateStore
+from wled_controller.storage.gradient_store import GradientStore
+from wled_controller.storage.weather_source_store import WeatherSourceStore
from wled_controller.core.automations.automation_engine import AutomationEngine
+from wled_controller.core.weather.weather_manager import WeatherManager
from wled_controller.core.backup.auto_backup import AutoBackupEngine
from wled_controller.core.processing.sync_clock_manager import SyncClockManager
+from wled_controller.core.update.update_service import UpdateService

T = TypeVar("T")
@@ -114,6 +119,26 @@ def get_cspt_store() -> ColorStripProcessingTemplateStore:
    return _get("cspt_store", "Color strip processing template store")

+def get_gradient_store() -> GradientStore:
+    return _get("gradient_store", "Gradient store")
+
+def get_weather_source_store() -> WeatherSourceStore:
+    return _get("weather_source_store", "Weather source store")
+
+def get_weather_manager() -> WeatherManager:
+    return _get("weather_manager", "Weather manager")
+
+def get_database() -> Database:
+    return _get("database", "Database")
+
+def get_update_service() -> UpdateService:
+    return _get("update_service", "Update service")

# ── Event helper ────────────────────────────────────────────────────────
@@ -142,6 +167,7 @@ def init_dependencies(
    device_store: DeviceStore,
    template_store: TemplateStore,
    processor_manager: ProcessorManager,
+    database: Database | None = None,
    pp_template_store: PostprocessingTemplateStore | None = None,
    pattern_template_store: PatternTemplateStore | None = None,
    picture_source_store: PictureSourceStore | None = None,
@@ -157,9 +183,14 @@ def init_dependencies(
    sync_clock_store: SyncClockStore | None = None,
    sync_clock_manager: SyncClockManager | None = None,
    cspt_store: ColorStripProcessingTemplateStore | None = None,
+    gradient_store: GradientStore | None = None,
+    weather_source_store: WeatherSourceStore | None = None,
+    weather_manager: WeatherManager | None = None,
+    update_service: UpdateService | None = None,
):
    """Initialize global dependencies."""
    _deps.update({
+        "database": database,
        "device_store": device_store,
        "template_store": template_store,
        "processor_manager": processor_manager,
@@ -178,4 +209,8 @@ def init_dependencies(
        "sync_clock_store": sync_clock_store,
        "sync_clock_manager": sync_clock_manager,
        "cspt_store": cspt_store,
+        "gradient_store": gradient_store,
+        "weather_source_store": weather_source_store,
+        "weather_manager": weather_manager,
+        "update_service": update_service,
    })

View File

@@ -43,15 +43,14 @@ def _encode_jpeg(pil_image: Image.Image, quality: int = 85) -> str:
def encode_preview_frame(image: np.ndarray, max_width: int = None, quality: int = 80) -> bytes:
    """Encode a numpy RGB image to JPEG bytes, optionally downscaling."""
-    import cv2
+    pil_img = Image.fromarray(image)
    if max_width and image.shape[1] > max_width:
        scale = max_width / image.shape[1]
        new_h = int(image.shape[0] * scale)
-        image = cv2.resize(image, (max_width, new_h), interpolation=cv2.INTER_AREA)
-    # RGB -> BGR for OpenCV JPEG encoding
-    bgr = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
-    _, buf = cv2.imencode('.jpg', bgr, [cv2.IMWRITE_JPEG_QUALITY, quality])
-    return buf.tobytes()
+        pil_img = pil_img.resize((max_width, new_h), Image.LANCZOS)
+    buf = io.BytesIO()
+    pil_img.save(buf, format="JPEG", quality=quality)
+    return buf.getvalue()

def _make_thumbnail(pil_image: Image.Image, max_width: int) -> Image.Image:

View File

@@ -42,6 +42,9 @@ def _to_response(source: AudioSource) -> AudioSourceResponse:
        audio_template_id=getattr(source, "audio_template_id", None),
        audio_source_id=getattr(source, "audio_source_id", None),
        channel=getattr(source, "channel", None),
+        band=getattr(source, "band", None),
+        freq_low=getattr(source, "freq_low", None),
+        freq_high=getattr(source, "freq_high", None),
        description=source.description,
        tags=source.tags,
        created_at=source.created_at,
@@ -52,7 +55,7 @@ def _to_response(source: AudioSource) -> AudioSourceResponse:
@router.get("/api/v1/audio-sources", response_model=AudioSourceListResponse, tags=["Audio Sources"])
async def list_audio_sources(
    _auth: AuthRequired,
-    source_type: Optional[str] = Query(None, description="Filter by source_type: multichannel or mono"),
+    source_type: Optional[str] = Query(None, description="Filter by source_type: multichannel, mono, or band_extract"),
    store: AudioSourceStore = Depends(get_audio_source_store),
):
    """List all audio sources, optionally filtered by type."""
@@ -83,6 +86,9 @@ async def create_audio_source(
        description=data.description,
        audio_template_id=data.audio_template_id,
        tags=data.tags,
+        band=data.band,
+        freq_low=data.freq_low,
+        freq_high=data.freq_high,
    )
    fire_entity_event("audio_source", "created", source.id)
    return _to_response(source)
@@ -126,6 +132,9 @@ async def update_audio_source(
        description=data.description,
        audio_template_id=data.audio_template_id,
        tags=data.tags,
+        band=data.band,
+        freq_low=data.freq_low,
+        freq_high=data.freq_high,
    )
    fire_entity_event("audio_source", "updated", source_id)
    return _to_response(source)
@@ -182,17 +191,28 @@ async def test_audio_source_ws(
        await websocket.close(code=4001, reason="Unauthorized")
        return

-    # Resolve source → device info
+    # Resolve source → device info + optional band filter
    store = get_audio_source_store()
    template_store = get_audio_template_store()
    manager = get_processor_manager()
    try:
-        device_index, is_loopback, channel, audio_template_id = store.resolve_audio_source(source_id)
+        resolved = store.resolve_audio_source(source_id)
    except ValueError as e:
        await websocket.close(code=4004, reason=str(e))
        return
+    device_index = resolved.device_index
+    is_loopback = resolved.is_loopback
+    channel = resolved.channel
+    audio_template_id = resolved.audio_template_id
+
+    # Precompute band mask if this is a band_extract source
+    band_mask = None
+    if resolved.freq_low is not None and resolved.freq_high is not None:
+        from wled_controller.core.audio.band_filter import compute_band_mask
+        band_mask = compute_band_mask(resolved.freq_low, resolved.freq_high)

    # Resolve template → engine_type + config
    engine_type = None
    engine_config = None
@@ -233,6 +253,11 @@ async def test_audio_source_ws(
        spectrum = analysis.spectrum
        rms = analysis.rms

+        # Apply band filter if present
+        if band_mask is not None:
+            from wled_controller.core.audio.band_filter import apply_band_filter
+            spectrum, rms = apply_band_filter(spectrum, rms, band_mask)

        await websocket.send_json({
            "spectrum": spectrum.tolist(),
            "rms": round(rms, 4),

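The diff calls `compute_band_mask` and `apply_band_filter` from `band_filter` but doesn't show them. A plausible pure-Python sketch of the idea — mark which FFT bins fall inside `[freq_low, freq_high]` once, then zero everything else per frame — with signatures and defaults (`sample_rate`, `n_bins`) that are assumptions, not the project's actual API:

```python
def compute_band_mask(freq_low: float, freq_high: float,
                      sample_rate: int = 44100, n_bins: int = 512) -> list[bool]:
    """True for every FFT bin whose center frequency falls inside the band."""
    # Bin k of an rfft over 2*n_bins samples sits at k * sample_rate / (2 * n_bins) Hz
    bin_hz = sample_rate / (2 * n_bins)
    return [freq_low <= k * bin_hz <= freq_high for k in range(n_bins)]

def apply_band_filter(spectrum: list[float], mask: list[bool]) -> list[float]:
    """Zero out bins outside the band; RMS can then be recomputed from the result."""
    return [v if keep else 0.0 for v, keep in zip(spectrum, mask)]

# A 20–250 Hz bass band keeps only a handful of low bins
mask = compute_band_mask(20.0, 250.0, sample_rate=44100, n_bins=512)
print(sum(mask))
filtered = apply_band_filter([1.0] * 512, mask)
print(sum(filtered) == sum(mask))
```

Precomputing the mask outside the per-frame loop is the point of the `band_mask = ...` hoist in the WebSocket handler: the frequency-to-bin mapping never changes while the stream runs.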
View File

@@ -42,40 +42,37 @@ router = APIRouter()
# ===== Helpers =====

def _condition_from_schema(s: ConditionSchema) -> Condition:
-    if s.condition_type == "always":
-        return AlwaysCondition()
-    if s.condition_type == "application":
-        return ApplicationCondition(
+    _SCHEMA_TO_CONDITION = {
+        "always": lambda: AlwaysCondition(),
+        "application": lambda: ApplicationCondition(
            apps=s.apps or [],
            match_type=s.match_type or "running",
-        )
+        ),
-    if s.condition_type == "time_of_day":
-        return TimeOfDayCondition(
+        "time_of_day": lambda: TimeOfDayCondition(
            start_time=s.start_time or "00:00",
            end_time=s.end_time or "23:59",
-        )
+        ),
-    if s.condition_type == "system_idle":
-        return SystemIdleCondition(
+        "system_idle": lambda: SystemIdleCondition(
            idle_minutes=s.idle_minutes if s.idle_minutes is not None else 5,
            when_idle=s.when_idle if s.when_idle is not None else True,
-        )
+        ),
-    if s.condition_type == "display_state":
-        return DisplayStateCondition(
+        "display_state": lambda: DisplayStateCondition(
            state=s.state or "on",
-        )
+        ),
-    if s.condition_type == "mqtt":
-        return MQTTCondition(
+        "mqtt": lambda: MQTTCondition(
            topic=s.topic or "",
            payload=s.payload or "",
            match_mode=s.match_mode or "exact",
-        )
+        ),
-    if s.condition_type == "webhook":
-        return WebhookCondition(
+        "webhook": lambda: WebhookCondition(
            token=s.token or secrets.token_hex(16),
-        )
+        ),
-    if s.condition_type == "startup":
-        return StartupCondition()
-    raise ValueError(f"Unknown condition type: {s.condition_type}")
+        "startup": lambda: StartupCondition(),
+    }
+    factory = _SCHEMA_TO_CONDITION.get(s.condition_type)
+    if factory is None:
+        raise ValueError(f"Unknown condition type: {s.condition_type}")
+    return factory()

def _condition_to_schema(c: Condition) -> ConditionSchema:

View File

@@ -1,23 +1,20 @@
-"""System routes: backup, restore, export, import, auto-backup.
+"""System routes: backup, restore, auto-backup.

-Extracted from system.py to keep files under 800 lines.
+All backups are SQLite database snapshots (.db files).
"""
import asyncio
import io
-import json
import subprocess
import sys
import threading
-from datetime import datetime, timezone
from pathlib import Path

-from fastapi import APIRouter, Depends, File, HTTPException, Query, UploadFile
+from fastapi import APIRouter, Depends, File, HTTPException, UploadFile
from fastapi.responses import StreamingResponse

-from wled_controller import __version__
from wled_controller.api.auth import AuthRequired
-from wled_controller.api.dependencies import get_auto_backup_engine
+from wled_controller.api.dependencies import get_auto_backup_engine, get_database
from wled_controller.api.schemas.system import (
    AutoBackupSettings,
    AutoBackupStatusResponse,
@@ -26,35 +23,13 @@ from wled_controller.api.schemas.system import (
    RestoreResponse,
)
from wled_controller.core.backup.auto_backup import AutoBackupEngine
-from wled_controller.config import get_config
-from wled_controller.utils import atomic_write_json, get_logger
+from wled_controller.storage.database import Database, freeze_writes
+from wled_controller.utils import get_logger

logger = get_logger(__name__)
router = APIRouter()

-# ---------------------------------------------------------------------------
-# Configuration backup / restore
-# ---------------------------------------------------------------------------
-
-# Mapping: logical store name -> StorageConfig attribute name
-STORE_MAP = {
-    "devices": "devices_file",
-    "capture_templates": "templates_file",
-    "postprocessing_templates": "postprocessing_templates_file",
-    "picture_sources": "picture_sources_file",
-    "output_targets": "output_targets_file",
-    "pattern_templates": "pattern_templates_file",
-    "color_strip_sources": "color_strip_sources_file",
-    "audio_sources": "audio_sources_file",
-    "audio_templates": "audio_templates_file",
-    "value_sources": "value_sources_file",
-    "sync_clocks": "sync_clocks_file",
-    "color_strip_processing_templates": "color_strip_processing_templates_file",
-    "automations": "automations_file",
-    "scene_presets": "scene_presets_file",
-}

_SERVER_DIR = Path(__file__).resolve().parents[4]
@@ -79,225 +54,94 @@ def _schedule_restart() -> None:
    threading.Thread(target=_restart, daemon=True).start()

-@router.get("/api/v1/system/export/{store_key}", tags=["System"])
-def export_store(store_key: str, _: AuthRequired):
-    """Download a single entity store as a JSON file."""
+# ---------------------------------------------------------------------------
+# Backup / restore (SQLite snapshots)
+# ---------------------------------------------------------------------------
-    if store_key not in STORE_MAP:
-        raise HTTPException(
-            status_code=404,
-            detail=f"Unknown store '{store_key}'. Valid keys: {sorted(STORE_MAP.keys())}",
-        )
-    config = get_config()
-    file_path = Path(getattr(config.storage, STORE_MAP[store_key]))
-    if file_path.exists():
-        with open(file_path, "r", encoding="utf-8") as f:
-            data = json.load(f)
-    else:
-        data = {}
-    export = {
-        "meta": {
-            "format": "ledgrab-partial-export",
-            "format_version": 1,
-            "store_key": store_key,
-            "app_version": __version__,
-            "created_at": datetime.now(timezone.utc).isoformat() + "Z",
-        },
-        "store": data,
-    }
-    content = json.dumps(export, indent=2, ensure_ascii=False)
-    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%S")
-    filename = f"ledgrab-{store_key}-{timestamp}.json"
-    return StreamingResponse(
-        io.BytesIO(content.encode("utf-8")),
-        media_type="application/json",
-        headers={"Content-Disposition": f'attachment; filename="{filename}"'},
-    )
-
-@router.post("/api/v1/system/import/{store_key}", tags=["System"])
-async def import_store(
-    store_key: str,
-    _: AuthRequired,
-    file: UploadFile = File(...),
-    merge: bool = Query(False, description="Merge into existing data instead of replacing"),
-):
-    """Upload a partial export file to replace or merge one entity store. Triggers server restart."""
-    if store_key not in STORE_MAP:
-        raise HTTPException(
-            status_code=404,
-            detail=f"Unknown store '{store_key}'. Valid keys: {sorted(STORE_MAP.keys())}",
-        )
-    try:
-        raw = await file.read()
-        if len(raw) > 10 * 1024 * 1024:
-            raise HTTPException(status_code=400, detail="File too large (max 10 MB)")
-        payload = json.loads(raw)
-    except json.JSONDecodeError as e:
-        raise HTTPException(status_code=400, detail=f"Invalid JSON: {e}")
-
-    # Support both full-backup format and partial-export format
-    if "stores" in payload and isinstance(payload.get("meta"), dict):
-        # Full backup: extract the specific store
-        if payload["meta"].get("format") not in ("ledgrab-backup",):
-            raise HTTPException(status_code=400, detail="Not a valid LED Grab backup or partial export file")
-        stores = payload.get("stores", {})
-        if store_key not in stores:
-            raise HTTPException(status_code=400, detail=f"Backup does not contain store '{store_key}'")
-        incoming = stores[store_key]
-    elif isinstance(payload.get("meta"), dict) and payload["meta"].get("format") == "ledgrab-partial-export":
-        # Partial export format
-        if payload["meta"].get("store_key") != store_key:
-            raise HTTPException(
-                status_code=400,
-                detail=f"File is for store '{payload['meta']['store_key']}', not '{store_key}'",
-            )
-        incoming = payload.get("store", {})
-    else:
-        raise HTTPException(status_code=400, detail="Not a valid LED Grab backup or partial export file")
-
-    if not isinstance(incoming, dict):
-        raise HTTPException(status_code=400, detail="Store data must be a JSON object")
-
-    config = get_config()
-    file_path = Path(getattr(config.storage, STORE_MAP[store_key]))
-
-    def _write():
-        if merge and file_path.exists():
-            with open(file_path, "r", encoding="utf-8") as f:
-                existing = json.load(f)
-            if isinstance(existing, dict):
-                existing.update(incoming)
-                atomic_write_json(file_path, existing)
-                return len(existing)
-        atomic_write_json(file_path, incoming)
-        return len(incoming)
-
-    count = await asyncio.to_thread(_write)
-    logger.info(f"Imported store '{store_key}' ({count} entries, merge={merge}). Scheduling restart...")
-    _schedule_restart()
-    return {
-        "status": "imported",
-        "store_key": store_key,
-        "entries": count,
-        "merge": merge,
-        "restart_scheduled": True,
-        "message": f"Imported {count} entries for '{store_key}'. Server restarting...",
-    }
@router.get("/api/v1/system/backup", tags=["System"])
-def backup_config(_: AuthRequired):
-    """Download all configuration as a single JSON backup file."""
-    config = get_config()
-    stores = {}
-    for store_key, config_attr in STORE_MAP.items():
-        file_path = Path(getattr(config.storage, config_attr))
-        if file_path.exists():
-            with open(file_path, "r", encoding="utf-8") as f:
-                stores[store_key] = json.load(f)
-        else:
-            stores[store_key] = {}
-    backup = {
-        "meta": {
-            "format": "ledgrab-backup",
-            "format_version": 1,
-            "app_version": __version__,
-            "created_at": datetime.now(timezone.utc).isoformat() + "Z",
-            "store_count": len(stores),
-        },
-        "stores": stores,
-    }
-    content = json.dumps(backup, indent=2, ensure_ascii=False)
+def backup_config(_: AuthRequired, db: Database = Depends(get_database)):
+    """Download a full database backup as a .db file."""
+    import tempfile
+    with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp:
+        tmp_path = Path(tmp.name)
+    try:
+        db.backup_to(tmp_path)
+        content = tmp_path.read_bytes()
+    finally:
+        tmp_path.unlink(missing_ok=True)
+
+    from datetime import datetime, timezone
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%S")
-    filename = f"ledgrab-backup-{timestamp}.json"
+    filename = f"ledgrab-backup-{timestamp}.db"
    return StreamingResponse(
-        io.BytesIO(content.encode("utf-8")),
+        io.BytesIO(content),
-        media_type="application/json",
+        media_type="application/octet-stream",
        headers={"Content-Disposition": f'attachment; filename="{filename}"'},
    )
-@router.post("/api/v1/system/restart", tags=["System"])
-def restart_server(_: AuthRequired):
-    """Schedule a server restart and return immediately."""
-    _schedule_restart()
-    return {"status": "restarting"}
@router.post("/api/v1/system/restore", response_model=RestoreResponse, tags=["System"])
async def restore_config(
    _: AuthRequired,
    file: UploadFile = File(...),
+    db: Database = Depends(get_database),
):
-    """Upload a backup file to restore all configuration. Triggers server restart."""
+    """Upload a .db backup file to restore all configuration. Triggers server restart."""
-    # Read and parse
-    try:
-        raw = await file.read()
-        if len(raw) > 10 * 1024 * 1024:  # 10 MB limit
-            raise HTTPException(status_code=400, detail="Backup file too large (max 10 MB)")
-        backup = json.loads(raw)
-    except json.JSONDecodeError as e:
-        raise HTTPException(status_code=400, detail=f"Invalid JSON file: {e}")
-
-    # Validate envelope
-    meta = backup.get("meta")
-    if not isinstance(meta, dict) or meta.get("format") != "ledgrab-backup":
-        raise HTTPException(status_code=400, detail="Not a valid LED Grab backup file")
-    fmt_version = meta.get("format_version", 0)
-    if fmt_version > 1:
-        raise HTTPException(
-            status_code=400,
-            detail=f"Backup format version {fmt_version} is not supported by this server version",
-        )
-    stores = backup.get("stores")
-    if not isinstance(stores, dict):
-        raise HTTPException(status_code=400, detail="Backup file missing 'stores' section")
-    known_keys = set(STORE_MAP.keys())
-    present_keys = known_keys & set(stores.keys())
-    if not present_keys:
-        raise HTTPException(status_code=400, detail="Backup contains no recognized store data")
-    for key in present_keys:
-        if not isinstance(stores[key], dict):
-            raise HTTPException(status_code=400, detail=f"Store '{key}' in backup is not a valid JSON object")
-
-    # Write store files atomically (in thread to avoid blocking event loop)
-    config = get_config()
-
-    def _write_stores():
-        count = 0
-        for store_key, config_attr in STORE_MAP.items():
-            if store_key in stores:
-                file_path = Path(getattr(config.storage, config_attr))
-                atomic_write_json(file_path, stores[store_key])
-                count += 1
-                logger.info(f"Restored store: {store_key} -> {file_path}")
-        return count
-
-    written = await asyncio.to_thread(_write_stores)
-    logger.info(f"Restore complete: {written}/{len(STORE_MAP)} stores written. Scheduling restart...")
+    raw = await file.read()
+    if len(raw) > 50 * 1024 * 1024:  # 50 MB limit
+        raise HTTPException(status_code=400, detail="Backup file too large (max 50 MB)")
+    if len(raw) < 100:
+        raise HTTPException(status_code=400, detail="File too small to be a valid SQLite database")
+    # SQLite files start with "SQLite format 3\000"
+    if not raw[:16].startswith(b"SQLite format 3"):
+        raise HTTPException(status_code=400, detail="Not a valid SQLite database file")
+
+    import tempfile
+    with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp:
+        tmp.write(raw)
+        tmp_path = Path(tmp.name)
+    try:
+        def _restore():
+            db.restore_from(tmp_path)
+
+        await asyncio.to_thread(_restore)
+    finally:
+        tmp_path.unlink(missing_ok=True)
+
+    freeze_writes()
+    logger.info("Database restored from uploaded backup. Scheduling restart...")
    _schedule_restart()
-    missing = known_keys - present_keys
    return RestoreResponse(
        status="restored",
-        stores_written=written,
-        stores_total=len(STORE_MAP),
-        missing_stores=sorted(missing) if missing else [],
        restart_scheduled=True,
-        message=f"Restored {written} stores. Server restarting...",
+        message="Database restored from backup. Server restarting...",
    )
+@router.post("/api/v1/system/restart", tags=["System"])
+def restart_server(_: AuthRequired):
+    """Schedule a server restart and return immediately."""
+    from wled_controller.server_ref import _broadcast_restarting
+    _broadcast_restarting()
+    _schedule_restart()
+    return {"status": "restarting"}
+
+@router.post("/api/v1/system/shutdown", tags=["System"])
+def shutdown_server(_: AuthRequired):
+    """Gracefully shut down the server."""
+    from wled_controller.server_ref import request_shutdown
+    request_shutdown()
+    return {"status": "shutting_down"}
# ---------------------------------------------------------------------------
# Auto-backup settings & saved backups
# ---------------------------------------------------------------------------
@@ -376,7 +220,7 @@ def download_saved_backup(
    content = path.read_bytes()
    return StreamingResponse(
        io.BytesIO(content),
-        media_type="application/json",
+        media_type="application/octet-stream",
        headers={"Content-Disposition": f'attachment; filename="(unknown)"'},
    )

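The `db.backup_to` / `db.restore_from` helpers are project internals not shown here, but a consistent snapshot of a *live* SQLite database — which is what the new endpoint needs, since the server keeps its connection open — is exactly what the stdlib `sqlite3.Connection.backup` API provides. A sketch with illustrative names:

```python
import sqlite3
import tempfile
from pathlib import Path

def backup_to(conn: sqlite3.Connection, dest: Path) -> None:
    """Snapshot a live database into dest using SQLite's online backup API."""
    target = sqlite3.connect(dest)
    try:
        conn.backup(target)  # page-by-page copy, consistent even mid-write
    finally:
        target.close()

# Demo on an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (id TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO devices VALUES ('d1', 'desk-strip')")
conn.commit()

with tempfile.TemporaryDirectory() as d:
    snap = Path(d) / "backup.db"
    backup_to(conn, snap)
    magic = snap.read_bytes()[:15]  # the header the restore endpoint checks for
    restored = sqlite3.connect(snap)
    row = restored.execute("SELECT name FROM devices").fetchone()
    restored.close()

print(magic)   # b'SQLite format 3'
print(row[0])  # desk-strip
```

This also shows why the restore endpoint's `raw[:16].startswith(b"SQLite format 3")` check is a cheap but effective sanity gate: every valid SQLite file begins with that fixed 16-byte header.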
View File

@@ -507,7 +507,7 @@ async def os_notification_history(_auth: AuthRequired):
# ── Transient Preview WebSocket ────────────────────────────────────────

-_PREVIEW_ALLOWED_TYPES = {"static", "gradient", "color_cycle", "effect", "daylight", "candlelight"}
+_PREVIEW_ALLOWED_TYPES = {"static", "gradient", "color_cycle", "effect", "daylight", "candlelight", "notification"}

@router.websocket("/api/v1/color-strip-sources/preview/ws")
@@ -567,6 +567,13 @@ async def preview_color_strip_ws(
    if not stream_cls:
        raise ValueError(f"Unsupported preview source_type: {source.source_type}")
    s = stream_cls(source)
+    # Inject gradient store for palette resolution
+    if hasattr(s, "set_gradient_store"):
+        try:
+            from wled_controller.api.dependencies import get_gradient_store
+            s.set_gradient_store(get_gradient_store())
+        except Exception:
+            pass
    if hasattr(s, "configure"):
        s.configure(led_count)
    # Inject sync clock if requested
@@ -648,6 +655,17 @@ async def preview_color_strip_ws(
if msg is not None: if msg is not None:
try: try:
new_config = _json.loads(msg) new_config = _json.loads(msg)
# Handle "fire" command for notification streams
if new_config.get("action") == "fire":
from wled_controller.core.processing.notification_stream import NotificationColorStripStream
if isinstance(stream, NotificationColorStripStream):
stream.fire(
app_name=new_config.get("app", ""),
color_override=new_config.get("color"),
)
continue
new_type = new_config.get("source_type") new_type = new_config.get("source_type")
if new_type not in _PREVIEW_ALLOWED_TYPES: if new_type not in _PREVIEW_ALLOWED_TYPES:
await websocket.send_text(_json.dumps({"type": "error", "detail": f"source_type must be one of {sorted(_PREVIEW_ALLOWED_TYPES)}"})) await websocket.send_text(_json.dumps({"type": "error", "detail": f"source_type must be one of {sorted(_PREVIEW_ALLOWED_TYPES)}"}))
@@ -829,6 +847,15 @@ async def test_color_strip_ws(
if hasattr(stream, "configure"): if hasattr(stream, "configure"):
stream.configure(max(1, led_count)) stream.configure(max(1, led_count))
# Reject picture sources with 0 calibration LEDs (no edges configured)
if stream.led_count <= 0:
csm.release(source_id, consumer_id)
await websocket.close(
code=4005,
reason="No LEDs configured. Open Calibration and set LED counts for each edge.",
)
return
# Clamp FPS to sane range # Clamp FPS to sane range
fps = max(1, min(60, fps)) fps = max(1, min(60, fps))
_frame_interval = 1.0 / fps _frame_interval = 1.0 / fps

@@ -0,0 +1,153 @@
"""Gradient routes: CRUD for reusable gradient definitions."""
from fastapi import APIRouter, Depends, HTTPException

from wled_controller.api.auth import AuthRequired
from wled_controller.api.dependencies import (
    fire_entity_event,
    get_color_strip_store,
    get_gradient_store,
)
from wled_controller.api.schemas.gradients import (
    GradientCreate,
    GradientListResponse,
    GradientResponse,
    GradientUpdate,
)
from wled_controller.storage.gradient import Gradient
from wled_controller.storage.gradient_store import GradientStore
from wled_controller.storage.color_strip_store import ColorStripStore
from wled_controller.storage.base_store import EntityNotFoundError
from wled_controller.utils import get_logger

logger = get_logger(__name__)

router = APIRouter()


def _to_response(gradient: Gradient) -> GradientResponse:
    return GradientResponse(
        id=gradient.id,
        name=gradient.name,
        stops=[{"position": s["position"], "color": s["color"]} for s in gradient.stops],
        is_builtin=gradient.is_builtin,
        description=gradient.description,
        tags=gradient.tags,
        created_at=gradient.created_at,
        updated_at=gradient.updated_at,
    )


@router.get("/api/v1/gradients", response_model=GradientListResponse, tags=["Gradients"])
async def list_gradients(
    _auth: AuthRequired,
    store: GradientStore = Depends(get_gradient_store),
):
    """List all gradients (built-in + user-created)."""
    gradients = store.get_all_gradients()
    return GradientListResponse(
        gradients=[_to_response(g) for g in gradients],
        count=len(gradients),
    )


@router.post("/api/v1/gradients", response_model=GradientResponse, status_code=201, tags=["Gradients"])
async def create_gradient(
    data: GradientCreate,
    _auth: AuthRequired,
    store: GradientStore = Depends(get_gradient_store),
):
    """Create a new user-defined gradient."""
    try:
        gradient = store.create_gradient(
            name=data.name,
            stops=[s.model_dump() for s in data.stops],
            description=data.description,
            tags=data.tags,
        )
        fire_entity_event("gradient", "created", gradient.id)
        return _to_response(gradient)
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))


@router.get("/api/v1/gradients/{gradient_id}", response_model=GradientResponse, tags=["Gradients"])
async def get_gradient(
    gradient_id: str,
    _auth: AuthRequired,
    store: GradientStore = Depends(get_gradient_store),
):
    """Get a gradient by ID."""
    try:
        gradient = store.get_gradient(gradient_id)
        return _to_response(gradient)
    except (ValueError, EntityNotFoundError) as e:
        raise HTTPException(status_code=404, detail=str(e))


@router.put("/api/v1/gradients/{gradient_id}", response_model=GradientResponse, tags=["Gradients"])
async def update_gradient(
    gradient_id: str,
    data: GradientUpdate,
    _auth: AuthRequired,
    store: GradientStore = Depends(get_gradient_store),
):
    """Update a gradient (built-in gradients are read-only)."""
    try:
        stops = [s.model_dump() for s in data.stops] if data.stops is not None else None
        gradient = store.update_gradient(
            gradient_id=gradient_id,
            name=data.name,
            stops=stops,
            description=data.description,
            tags=data.tags,
        )
        fire_entity_event("gradient", "updated", gradient_id)
        return _to_response(gradient)
    except (ValueError, EntityNotFoundError) as e:
        status = 404 if "not found" in str(e).lower() else 400
        raise HTTPException(status_code=status, detail=str(e))


@router.post("/api/v1/gradients/{gradient_id}/clone", response_model=GradientResponse, status_code=201, tags=["Gradients"])
async def clone_gradient(
    gradient_id: str,
    _auth: AuthRequired,
    store: GradientStore = Depends(get_gradient_store),
):
    """Clone a gradient (useful for customizing built-in gradients)."""
    try:
        original = store.get_gradient(gradient_id)
        clone = store.create_gradient(
            name=f"{original.name} (copy)",
            stops=original.stops,
            description=original.description,
            tags=original.tags,
        )
        fire_entity_event("gradient", "created", clone.id)
        return _to_response(clone)
    except (ValueError, EntityNotFoundError) as e:
        status = 404 if "not found" in str(e).lower() else 400
        raise HTTPException(status_code=status, detail=str(e))


@router.delete("/api/v1/gradients/{gradient_id}", status_code=204, tags=["Gradients"])
async def delete_gradient(
    gradient_id: str,
    _auth: AuthRequired,
    store: GradientStore = Depends(get_gradient_store),
    css_store: ColorStripStore = Depends(get_color_strip_store),
):
    """Delete a gradient (fails if built-in or referenced by sources)."""
    try:
        # Check references
        for source in css_store.get_all_sources():
            if getattr(source, "gradient_id", None) == gradient_id:
                raise ValueError(
                    f"Cannot delete: referenced by color strip source '{source.name}'"
                )
        store.delete_gradient(gradient_id)
        fire_entity_event("gradient", "deleted", gradient_id)
    except (ValueError, EntityNotFoundError) as e:
        status = 404 if "not found" in str(e).lower() else 400
        raise HTTPException(status_code=status, detail=str(e))
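The routes above treat `stops` as a list of `{"position", "color"}` dicts. How a renderer samples such a gradient is not part of this commit, but a plain linear-interpolation sketch makes the data shape concrete (the interpolation itself is an assumption):

```python
def sample_gradient(stops: list[dict], t: float) -> list[int]:
    """Interpolate an [R, G, B] color at position t in [0, 1].

    `stops` follows the shape used by the gradient routes:
    [{"position": float, "color": [r, g, b]}, ...].
    """
    stops = sorted(stops, key=lambda s: s["position"])
    if t <= stops[0]["position"]:
        return list(stops[0]["color"])
    for lo, hi in zip(stops, stops[1:]):
        if t <= hi["position"]:
            span = hi["position"] - lo["position"] or 1.0
            f = (t - lo["position"]) / span
            return [round(a + (b - a) * f) for a, b in zip(lo["color"], hi["color"])]
    return list(stops[-1]["color"])  # past the last stop: clamp
```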

@@ -14,7 +14,7 @@ import psutil
 from fastapi import APIRouter, Depends, HTTPException, Query
 from wled_controller import __version__
-from wled_controller.api.auth import AuthRequired
+from wled_controller.api.auth import AuthRequired, is_auth_enabled
 from wled_controller.api.dependencies import (
     get_audio_source_store,
     get_audio_template_store,
@@ -45,8 +45,7 @@ from wled_controller.core.capture.screen_capture import get_available_displays
 from wled_controller.utils import get_logger
 from wled_controller.storage.base_store import EntityNotFoundError

-# Re-export STORE_MAP and load_external_url so existing callers still work
-from wled_controller.api.routes.backup import STORE_MAP  # noqa: F401
+# Re-export load_external_url so existing callers still work
 from wled_controller.api.routes.system_settings import load_external_url  # noqa: F401

 logger = get_logger(__name__)
@@ -107,6 +106,7 @@ async def health_check():
         timestamp=datetime.now(timezone.utc),
         version=__version__,
         demo_mode=get_config().demo,
+        auth_required=is_auth_enabled(),
     )

@@ -4,15 +4,14 @@ Extracted from system.py to keep files under 800 lines.
 """
 import asyncio
-import json
 import logging
 import re
-from pathlib import Path

-from fastapi import APIRouter, HTTPException, Query, WebSocket, WebSocketDisconnect
+from fastapi import APIRouter, Depends, HTTPException, Query, WebSocket, WebSocketDisconnect
 from pydantic import BaseModel

 from wled_controller.api.auth import AuthRequired
+from wled_controller.api.dependencies import get_database
 from wled_controller.api.schemas.system import (
     ExternalUrlRequest,
     ExternalUrlResponse,
@@ -22,6 +21,7 @@ from wled_controller.api.schemas.system import (
     MQTTSettingsResponse,
 )
 from wled_controller.config import get_config
+from wled_controller.storage.database import Database
 from wled_controller.utils import get_logger

 logger = get_logger(__name__)
@@ -33,21 +33,9 @@ router = APIRouter()
 # MQTT settings
 # ---------------------------------------------------------------------------

-_MQTT_SETTINGS_FILE: Path | None = None
-
-
-def _get_mqtt_settings_path() -> Path:
-    global _MQTT_SETTINGS_FILE
-    if _MQTT_SETTINGS_FILE is None:
-        cfg = get_config()
-        # Derive the data directory from any known storage file path
-        data_dir = Path(cfg.storage.devices_file).parent
-        _MQTT_SETTINGS_FILE = data_dir / "mqtt_settings.json"
-    return _MQTT_SETTINGS_FILE
-
-
-def _load_mqtt_settings() -> dict:
-    """Load MQTT settings: YAML config defaults overridden by JSON overrides file."""
+def _load_mqtt_settings(db: Database) -> dict:
+    """Load MQTT settings: YAML config defaults overridden by DB settings."""
     cfg = get_config()
     defaults = {
         "enabled": cfg.mqtt.enabled,
@@ -58,31 +46,20 @@ def _load_mqtt_settings() -> dict:
         "client_id": cfg.mqtt.client_id,
         "base_topic": cfg.mqtt.base_topic,
     }
-    path = _get_mqtt_settings_path()
-    if path.exists():
-        try:
-            with open(path, "r", encoding="utf-8") as f:
-                overrides = json.load(f)
-            defaults.update(overrides)
-        except Exception as e:
-            logger.warning(f"Failed to load MQTT settings override file: {e}")
+    overrides = db.get_setting("mqtt")
+    if overrides:
+        defaults.update(overrides)
     return defaults


-def _save_mqtt_settings(settings: dict) -> None:
-    """Persist MQTT settings to the JSON override file."""
-    from wled_controller.utils import atomic_write_json
-    atomic_write_json(_get_mqtt_settings_path(), settings)
-
-
 @router.get(
     "/api/v1/system/mqtt/settings",
     response_model=MQTTSettingsResponse,
     tags=["System"],
 )
-async def get_mqtt_settings(_: AuthRequired):
+async def get_mqtt_settings(_: AuthRequired, db: Database = Depends(get_database)):
     """Get current MQTT broker settings. Password is masked."""
-    s = _load_mqtt_settings()
+    s = _load_mqtt_settings(db)
     return MQTTSettingsResponse(
         enabled=s["enabled"],
         broker_host=s["broker_host"],
@@ -99,9 +76,9 @@ async def get_mqtt_settings(_: AuthRequired):
     response_model=MQTTSettingsResponse,
     tags=["System"],
 )
-async def update_mqtt_settings(_: AuthRequired, body: MQTTSettingsRequest):
+async def update_mqtt_settings(_: AuthRequired, body: MQTTSettingsRequest, db: Database = Depends(get_database)):
     """Update MQTT broker settings. If password is empty string, the existing password is preserved."""
-    current = _load_mqtt_settings()
+    current = _load_mqtt_settings(db)

     # If caller sends an empty password, keep the existing one
     password = body.password if body.password else current.get("password", "")
@@ -115,7 +92,7 @@ async def update_mqtt_settings(_: AuthRequired, body: MQTTSettingsRequest):
         "client_id": body.client_id,
         "base_topic": body.base_topic,
     }
-    _save_mqtt_settings(new_settings)
+    db.set_setting("mqtt", new_settings)

     logger.info("MQTT settings updated")
     return MQTTSettingsResponse(
@@ -133,44 +110,25 @@ async def update_mqtt_settings(_: AuthRequired, body: MQTTSettingsRequest):
 # External URL setting
 # ---------------------------------------------------------------------------

-_EXTERNAL_URL_FILE: Path | None = None
-
-
-def _get_external_url_path() -> Path:
-    global _EXTERNAL_URL_FILE
-    if _EXTERNAL_URL_FILE is None:
-        cfg = get_config()
-        data_dir = Path(cfg.storage.devices_file).parent
-        _EXTERNAL_URL_FILE = data_dir / "external_url.json"
-    return _EXTERNAL_URL_FILE
-
-
-def load_external_url() -> str:
+def load_external_url(db: Database | None = None) -> str:
     """Load the external URL setting. Returns empty string if not set."""
-    path = _get_external_url_path()
-    if path.exists():
-        try:
-            with open(path, "r", encoding="utf-8") as f:
-                data = json.load(f)
-            return data.get("external_url", "")
-        except Exception:
-            pass
+    if db is None:
+        from wled_controller.api.dependencies import get_database
+        db = get_database()
+    data = db.get_setting("external_url")
+    if data:
+        return data.get("external_url", "")
     return ""


-def _save_external_url(url: str) -> None:
-    from wled_controller.utils import atomic_write_json
-    atomic_write_json(_get_external_url_path(), {"external_url": url})
-
-
 @router.get(
     "/api/v1/system/external-url",
     response_model=ExternalUrlResponse,
     tags=["System"],
 )
-async def get_external_url(_: AuthRequired):
+async def get_external_url(_: AuthRequired, db: Database = Depends(get_database)):
     """Get the configured external base URL."""
-    return ExternalUrlResponse(external_url=load_external_url())
+    return ExternalUrlResponse(external_url=load_external_url(db))


 @router.put(
@@ -178,10 +136,10 @@ async def get_external_url(_: AuthRequired):
     response_model=ExternalUrlResponse,
     tags=["System"],
 )
-async def update_external_url(_: AuthRequired, body: ExternalUrlRequest):
+async def update_external_url(_: AuthRequired, body: ExternalUrlRequest, db: Database = Depends(get_database)):
     """Set the external base URL used in webhook URLs and other user-visible URLs."""
     url = body.external_url.strip().rstrip("/")
-    _save_external_url(url)
+    db.set_setting("external_url", {"external_url": url})
     logger.info("External URL updated: %s", url or "(cleared)")
     return ExternalUrlResponse(external_url=url)

@@ -0,0 +1,81 @@
"""API routes for the auto-update system."""
from fastapi import APIRouter, Depends
from fastapi.responses import JSONResponse

from wled_controller.api.dependencies import get_update_service
from wled_controller.api.schemas.update import (
    DismissRequest,
    UpdateSettingsRequest,
    UpdateSettingsResponse,
    UpdateStatusResponse,
)
from wled_controller.core.update.update_service import UpdateService
from wled_controller.utils import get_logger

logger = get_logger(__name__)

router = APIRouter(prefix="/api/v1/system/update", tags=["update"])


@router.get("/status", response_model=UpdateStatusResponse)
async def get_update_status(
    service: UpdateService = Depends(get_update_service),
):
    return service.get_status()


@router.post("/check", response_model=UpdateStatusResponse)
async def check_for_updates(
    service: UpdateService = Depends(get_update_service),
):
    return await service.check_now()


@router.post("/dismiss")
async def dismiss_update(
    body: DismissRequest,
    service: UpdateService = Depends(get_update_service),
):
    service.dismiss(body.version)
    return {"ok": True}


@router.post("/apply")
async def apply_update(
    service: UpdateService = Depends(get_update_service),
):
    """Download (if needed) and apply the available update."""
    status = service.get_status()
    if not status["has_update"]:
        return JSONResponse(status_code=400, content={"detail": "No update available"})
    if not status["can_auto_update"]:
        return JSONResponse(
            status_code=400,
            content={"detail": f"Auto-update not supported for install type: {status['install_type']}"},
        )
    try:
        await service.apply_update()
        return {"ok": True, "message": "Update applied, server shutting down"}
    except Exception as exc:
        logger.error("Failed to apply update: %s", exc, exc_info=True)
        return JSONResponse(status_code=500, content={"detail": str(exc)})


@router.get("/settings", response_model=UpdateSettingsResponse)
async def get_update_settings(
    service: UpdateService = Depends(get_update_service),
):
    return service.get_settings()


@router.put("/settings", response_model=UpdateSettingsResponse)
async def update_update_settings(
    body: UpdateSettingsRequest,
    service: UpdateService = Depends(get_update_service),
):
    return await service.update_settings(
        enabled=body.enabled,
        check_interval_hours=body.check_interval_hours,
        include_prerelease=body.include_prerelease,
    )
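The `/apply` route guards on two status flags before touching the installer. The same decision logic as a pure sketch (status keys taken from the route above; the function itself is illustrative, not part of the commit):

```python
def apply_decision(status: dict) -> tuple[int, str]:
    """Mirror the /apply guard clauses: 400 when nothing to do or unsupported."""
    if not status["has_update"]:
        return 400, "No update available"
    if not status["can_auto_update"]:
        return 400, f"Auto-update not supported for install type: {status['install_type']}"
    return 200, "ok"  # proceed to service.apply_update()
```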

@@ -0,0 +1,157 @@
"""Weather source routes: CRUD + test endpoint."""
from fastapi import APIRouter, Depends, HTTPException

from wled_controller.api.auth import AuthRequired
from wled_controller.api.dependencies import (
    fire_entity_event,
    get_weather_manager,
    get_weather_source_store,
)
from wled_controller.api.schemas.weather_sources import (
    WeatherSourceCreate,
    WeatherSourceListResponse,
    WeatherSourceResponse,
    WeatherSourceUpdate,
    WeatherTestResponse,
)
from wled_controller.core.weather.weather_manager import WeatherManager
from wled_controller.core.weather.weather_provider import WMO_CONDITION_NAMES
from wled_controller.storage.base_store import EntityNotFoundError
from wled_controller.storage.weather_source import WeatherSource
from wled_controller.storage.weather_source_store import WeatherSourceStore
from wled_controller.utils import get_logger

logger = get_logger(__name__)

router = APIRouter()


def _to_response(source: WeatherSource) -> WeatherSourceResponse:
    d = source.to_dict()
    return WeatherSourceResponse(
        id=d["id"],
        name=d["name"],
        provider=d["provider"],
        provider_config=d.get("provider_config", {}),
        latitude=d["latitude"],
        longitude=d["longitude"],
        update_interval=d["update_interval"],
        description=d.get("description"),
        tags=d.get("tags", []),
        created_at=source.created_at,
        updated_at=source.updated_at,
    )


@router.get("/api/v1/weather-sources", response_model=WeatherSourceListResponse, tags=["Weather Sources"])
async def list_weather_sources(
    _auth: AuthRequired,
    store: WeatherSourceStore = Depends(get_weather_source_store),
):
    sources = store.get_all_sources()
    return WeatherSourceListResponse(
        sources=[_to_response(s) for s in sources],
        count=len(sources),
    )


@router.post("/api/v1/weather-sources", response_model=WeatherSourceResponse, status_code=201, tags=["Weather Sources"])
async def create_weather_source(
    data: WeatherSourceCreate,
    _auth: AuthRequired,
    store: WeatherSourceStore = Depends(get_weather_source_store),
):
    try:
        source = store.create_source(
            name=data.name,
            provider=data.provider,
            provider_config=data.provider_config,
            latitude=data.latitude,
            longitude=data.longitude,
            update_interval=data.update_interval,
            description=data.description,
            tags=data.tags,
        )
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    fire_entity_event("weather_source", "created", source.id)
    return _to_response(source)


@router.get("/api/v1/weather-sources/{source_id}", response_model=WeatherSourceResponse, tags=["Weather Sources"])
async def get_weather_source(
    source_id: str,
    _auth: AuthRequired,
    store: WeatherSourceStore = Depends(get_weather_source_store),
):
    try:
        return _to_response(store.get_source(source_id))
    except EntityNotFoundError:
        raise HTTPException(status_code=404, detail=f"Weather source {source_id} not found")


@router.put("/api/v1/weather-sources/{source_id}", response_model=WeatherSourceResponse, tags=["Weather Sources"])
async def update_weather_source(
    source_id: str,
    data: WeatherSourceUpdate,
    _auth: AuthRequired,
    store: WeatherSourceStore = Depends(get_weather_source_store),
    manager: WeatherManager = Depends(get_weather_manager),
):
    try:
        source = store.update_source(
            source_id,
            name=data.name,
            provider=data.provider,
            provider_config=data.provider_config,
            latitude=data.latitude,
            longitude=data.longitude,
            update_interval=data.update_interval,
            description=data.description,
            tags=data.tags,
        )
    except EntityNotFoundError:
        raise HTTPException(status_code=404, detail=f"Weather source {source_id} not found")
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    manager.update_source(source_id)
    fire_entity_event("weather_source", "updated", source.id)
    return _to_response(source)


@router.delete("/api/v1/weather-sources/{source_id}", status_code=204, tags=["Weather Sources"])
async def delete_weather_source(
    source_id: str,
    _auth: AuthRequired,
    store: WeatherSourceStore = Depends(get_weather_source_store),
):
    try:
        store.delete_source(source_id)
    except EntityNotFoundError:
        raise HTTPException(status_code=404, detail=f"Weather source {source_id} not found")
    fire_entity_event("weather_source", "deleted", source_id)


@router.post("/api/v1/weather-sources/{source_id}/test", response_model=WeatherTestResponse, tags=["Weather Sources"])
async def test_weather_source(
    source_id: str,
    _auth: AuthRequired,
    store: WeatherSourceStore = Depends(get_weather_source_store),
    manager: WeatherManager = Depends(get_weather_manager),
):
    """Force-fetch current weather and return the result."""
    try:
        store.get_source(source_id)  # validate exists
    except EntityNotFoundError:
        raise HTTPException(status_code=404, detail=f"Weather source {source_id} not found")
    data = manager.fetch_now(source_id)
    condition = WMO_CONDITION_NAMES.get(data.code, f"Unknown ({data.code})")
    return WeatherTestResponse(
        code=data.code,
        condition=condition,
        temperature=data.temperature,
        wind_speed=data.wind_speed,
        cloud_cover=data.cloud_cover,
    )
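The test endpoint resolves WMO weather codes through `WMO_CONDITION_NAMES` with an explicit `Unknown (N)` fallback. A sketch of that lookup against a small assumed subset of the table (the real mapping lives in `weather_provider`; the subset below is an assumption drawn from the standard WMO interpretation codes):

```python
# Small assumed subset of the WMO interpretation-code table.
WMO_SUBSET = {0: "Clear sky", 3: "Overcast", 61: "Slight rain", 95: "Thunderstorm"}

def condition_name(code: int, table: dict = WMO_SUBSET) -> str:
    """Display name for a WMO code; unknown codes keep the raw code visible."""
    return table.get(code, f"Unknown ({code})")
```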

@@ -10,14 +10,18 @@ class AudioSourceCreate(BaseModel):
     """Request to create an audio source."""
     name: str = Field(description="Source name", min_length=1, max_length=100)
-    source_type: Literal["multichannel", "mono"] = Field(description="Source type")
+    source_type: Literal["multichannel", "mono", "band_extract"] = Field(description="Source type")
     # multichannel fields
     device_index: Optional[int] = Field(None, description="Audio device index (-1 = default)")
     is_loopback: Optional[bool] = Field(None, description="True for system audio (WASAPI loopback)")
     audio_template_id: Optional[str] = Field(None, description="Audio capture template ID")
     # mono fields
-    audio_source_id: Optional[str] = Field(None, description="Parent multichannel audio source ID")
+    audio_source_id: Optional[str] = Field(None, description="Parent audio source ID")
     channel: Optional[str] = Field(None, description="Channel: mono|left|right")
+    # band_extract fields
+    band: Optional[str] = Field(None, description="Band preset: bass|mid|treble|custom")
+    freq_low: Optional[float] = Field(None, description="Low frequency bound (Hz)", ge=20, le=20000)
+    freq_high: Optional[float] = Field(None, description="High frequency bound (Hz)", ge=20, le=20000)
     description: Optional[str] = Field(None, description="Optional description", max_length=500)
     tags: List[str] = Field(default_factory=list, description="User-defined tags")
@@ -29,8 +33,11 @@ class AudioSourceUpdate(BaseModel):
     device_index: Optional[int] = Field(None, description="Audio device index (-1 = default)")
     is_loopback: Optional[bool] = Field(None, description="True for system audio (WASAPI loopback)")
     audio_template_id: Optional[str] = Field(None, description="Audio capture template ID")
-    audio_source_id: Optional[str] = Field(None, description="Parent multichannel audio source ID")
+    audio_source_id: Optional[str] = Field(None, description="Parent audio source ID")
     channel: Optional[str] = Field(None, description="Channel: mono|left|right")
+    band: Optional[str] = Field(None, description="Band preset: bass|mid|treble|custom")
+    freq_low: Optional[float] = Field(None, description="Low frequency bound (Hz)", ge=20, le=20000)
+    freq_high: Optional[float] = Field(None, description="High frequency bound (Hz)", ge=20, le=20000)
     description: Optional[str] = Field(None, description="Optional description", max_length=500)
     tags: Optional[List[str]] = None
@@ -40,12 +47,15 @@ class AudioSourceResponse(BaseModel):
     id: str = Field(description="Source ID")
     name: str = Field(description="Source name")
-    source_type: str = Field(description="Source type: multichannel or mono")
+    source_type: str = Field(description="Source type: multichannel, mono, or band_extract")
     device_index: Optional[int] = Field(None, description="Audio device index")
     is_loopback: Optional[bool] = Field(None, description="WASAPI loopback mode")
     audio_template_id: Optional[str] = Field(None, description="Audio capture template ID")
-    audio_source_id: Optional[str] = Field(None, description="Parent multichannel source ID")
+    audio_source_id: Optional[str] = Field(None, description="Parent audio source ID")
     channel: Optional[str] = Field(None, description="Channel: mono|left|right")
+    band: Optional[str] = Field(None, description="Band preset: bass|mid|treble|custom")
+    freq_low: Optional[float] = Field(None, description="Low frequency bound (Hz)")
+    freq_high: Optional[float] = Field(None, description="High frequency bound (Hz)")
     description: Optional[str] = Field(None, description="Description")
     tags: List[str] = Field(default_factory=list, description="User-defined tags")
     created_at: datetime = Field(description="Creation timestamp")
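The schemas accept `band`, `freq_low`, and `freq_high` independently; nothing in this diff enforces that a custom band's bounds are ordered. A hypothetical cross-field check, if one were wanted downstream (not part of the commit; the 20 Hz / 20 kHz limits mirror the `ge`/`le` constraints above):

```python
def validate_band(band, freq_low, freq_high):
    """Cross-field check for band_extract sources (hypothetical helper)."""
    if band == "custom":
        if freq_low is None or freq_high is None:
            raise ValueError("custom band requires freq_low and freq_high")
        if not (20 <= freq_low < freq_high <= 20000):
            raise ValueError("require 20 Hz <= freq_low < freq_high <= 20000 Hz")
    return band, freq_low, freq_high
```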

@@ -36,6 +36,9 @@ class CompositeLayer(BaseModel):
enabled: bool = Field(default=True, description="Whether this layer is active") enabled: bool = Field(default=True, description="Whether this layer is active")
brightness_source_id: Optional[str] = Field(None, description="Optional value source ID for dynamic brightness") brightness_source_id: Optional[str] = Field(None, description="Optional value source ID for dynamic brightness")
processing_template_id: Optional[str] = Field(None, description="Optional color strip processing template ID") processing_template_id: Optional[str] = Field(None, description="Optional color strip processing template ID")
start: int = Field(default=0, ge=0, description="First LED index for range (0 = full strip)")
end: int = Field(default=0, ge=0, description="Last LED index exclusive for range (0 = full strip)")
reverse: bool = Field(default=False, description="Reverse layer output within its range")
class MappedZone(BaseModel): class MappedZone(BaseModel):
@@ -51,7 +54,7 @@ class ColorStripSourceCreate(BaseModel):
"""Request to create a color strip source.""" """Request to create a color strip source."""
name: str = Field(description="Source name", min_length=1, max_length=100) name: str = Field(description="Source name", min_length=1, max_length=100)
-source_type: Literal["picture", "picture_advanced", "static", "gradient", "color_cycle", "effect", "composite", "mapped", "audio", "api_input", "notification", "daylight", "candlelight", "processed"] = Field(default="picture", description="Source type")
+source_type: Literal["picture", "picture_advanced", "static", "gradient", "color_cycle", "effect", "composite", "mapped", "audio", "api_input", "notification", "daylight", "candlelight", "processed", "weather"] = Field(default="picture", description="Source type")
# picture-type fields
picture_source_id: str = Field(default="", description="Picture source ID (for picture type)")
smoothing: float = Field(default=0.3, description="Temporal smoothing (0.0=none, 1.0=full)", ge=0.0, le=1.0)
@@ -64,11 +67,16 @@ class ColorStripSourceCreate(BaseModel):
# color_cycle-type fields
colors: Optional[List[List[int]]] = Field(None, description="List of [R,G,B] colors to cycle (color_cycle type)")
# effect-type fields
-effect_type: Optional[str] = Field(None, description="Effect algorithm: fire|meteor|plasma|noise|aurora")
+effect_type: Optional[str] = Field(None, description="Effect algorithm: fire|meteor|plasma|noise|aurora|rain|comet|bouncing_ball|fireworks|sparkle_rain|lava_lamp|wave_interference")
-palette: Optional[str] = Field(None, description="Named palette (fire/ocean/lava/forest/rainbow/aurora/sunset/ice)")
+palette: Optional[str] = Field(None, description="Named palette (fire/ocean/lava/forest/rainbow/aurora/sunset/ice) or 'custom'")
intensity: Optional[float] = Field(None, description="Effect intensity 0.1-2.0", ge=0.1, le=2.0)
scale: Optional[float] = Field(None, description="Spatial scale 0.5-5.0", ge=0.5, le=5.0)
-mirror: Optional[bool] = Field(None, description="Mirror/bounce mode (meteor)")
+mirror: Optional[bool] = Field(None, description="Mirror/bounce mode (meteor/comet)")
+custom_palette: Optional[List[List[float]]] = Field(None, description="Custom palette stops [[pos,R,G,B],...]")
+# gradient entity reference (effect, gradient, audio types)
+gradient_id: Optional[str] = Field(None, description="Gradient entity ID (overrides palette/inline stops)")
+# gradient-type easing
+easing: Optional[str] = Field(None, description="Gradient interpolation easing: linear|ease_in_out|step|cubic")
# composite-type fields
layers: Optional[List[CompositeLayer]] = Field(None, description="Layers for composite type")
# mapped-type fields
@@ -97,11 +105,17 @@ class ColorStripSourceCreate(BaseModel):
speed: Optional[float] = Field(None, description="Cycle/flicker speed multiplier", ge=0.1, le=10.0)
use_real_time: Optional[bool] = Field(None, description="Use wall-clock time for daylight cycle")
latitude: Optional[float] = Field(None, description="Latitude for daylight timing (-90 to 90)", ge=-90.0, le=90.0)
+longitude: Optional[float] = Field(None, description="Longitude for daylight timing (-180 to 180)", ge=-180.0, le=180.0)
# candlelight-type fields
num_candles: Optional[int] = Field(None, description="Number of independent candle sources (1-20)", ge=1, le=20)
+wind_strength: Optional[float] = Field(None, description="Wind simulation strength (0.0-2.0)", ge=0.0, le=2.0)
+candle_type: Optional[str] = Field(None, description="Candle type preset: default|taper|votive|bonfire")
# processed-type fields
input_source_id: Optional[str] = Field(None, description="Input color strip source ID (for processed type)")
processing_template_id: Optional[str] = Field(None, description="Color strip processing template ID (for processed type)")
+# weather-type fields
+weather_source_id: Optional[str] = Field(None, description="Weather source entity ID (for weather type)")
+temperature_influence: Optional[float] = Field(None, description="Temperature color shift strength (0.0-1.0)", ge=0.0, le=1.0)
# sync clock
clock_id: Optional[str] = Field(None, description="Optional sync clock ID for synchronized animation")
tags: List[str] = Field(default_factory=list, description="User-defined tags")
@@ -123,11 +137,16 @@ class ColorStripSourceUpdate(BaseModel):
# color_cycle-type fields
colors: Optional[List[List[int]]] = Field(None, description="List of [R,G,B] colors to cycle (color_cycle type)")
# effect-type fields
-effect_type: Optional[str] = Field(None, description="Effect algorithm: fire|meteor|plasma|noise|aurora")
+effect_type: Optional[str] = Field(None, description="Effect algorithm")
palette: Optional[str] = Field(None, description="Named palette")
intensity: Optional[float] = Field(None, description="Effect intensity 0.1-2.0", ge=0.1, le=2.0)
scale: Optional[float] = Field(None, description="Spatial scale 0.5-5.0", ge=0.5, le=5.0)
mirror: Optional[bool] = Field(None, description="Mirror/bounce mode")
+custom_palette: Optional[List[List[float]]] = Field(None, description="Custom palette stops [[pos,R,G,B],...]")
+# gradient entity reference (effect, gradient, audio types)
+gradient_id: Optional[str] = Field(None, description="Gradient entity ID (overrides palette/inline stops)")
+# gradient-type easing
+easing: Optional[str] = Field(None, description="Gradient interpolation easing: linear|ease_in_out|step|cubic")
# composite-type fields
layers: Optional[List[CompositeLayer]] = Field(None, description="Layers for composite type")
# mapped-type fields
@@ -156,11 +175,17 @@ class ColorStripSourceUpdate(BaseModel):
speed: Optional[float] = Field(None, description="Cycle/flicker speed multiplier", ge=0.1, le=10.0)
use_real_time: Optional[bool] = Field(None, description="Use wall-clock time for daylight cycle")
latitude: Optional[float] = Field(None, description="Latitude for daylight timing (-90 to 90)", ge=-90.0, le=90.0)
+longitude: Optional[float] = Field(None, description="Longitude for daylight timing (-180 to 180)", ge=-180.0, le=180.0)
# candlelight-type fields
num_candles: Optional[int] = Field(None, description="Number of independent candle sources (1-20)", ge=1, le=20)
+wind_strength: Optional[float] = Field(None, description="Wind simulation strength (0.0-2.0)", ge=0.0, le=2.0)
+candle_type: Optional[str] = Field(None, description="Candle type preset: default|taper|votive|bonfire")
# processed-type fields
input_source_id: Optional[str] = Field(None, description="Input color strip source ID (for processed type)")
processing_template_id: Optional[str] = Field(None, description="Color strip processing template ID (for processed type)")
+# weather-type fields
+weather_source_id: Optional[str] = Field(None, description="Weather source entity ID (for weather type)")
+temperature_influence: Optional[float] = Field(None, description="Temperature color shift strength (0.0-1.0)", ge=0.0, le=1.0)
# sync clock
clock_id: Optional[str] = Field(None, description="Optional sync clock ID for synchronized animation")
tags: Optional[List[str]] = None
@@ -189,6 +214,10 @@ class ColorStripSourceResponse(BaseModel):
intensity: Optional[float] = Field(None, description="Effect intensity")
scale: Optional[float] = Field(None, description="Spatial scale")
mirror: Optional[bool] = Field(None, description="Mirror/bounce mode")
+custom_palette: Optional[List[List[float]]] = Field(None, description="Custom palette stops")
+gradient_id: Optional[str] = Field(None, description="Gradient entity ID")
+# gradient-type easing
+easing: Optional[str] = Field(None, description="Gradient interpolation easing")
# composite-type fields
layers: Optional[List[dict]] = Field(None, description="Layers for composite type")
# mapped-type fields
@@ -217,11 +246,17 @@ class ColorStripSourceResponse(BaseModel):
speed: Optional[float] = Field(None, description="Cycle/flicker speed multiplier")
use_real_time: Optional[bool] = Field(None, description="Use wall-clock time for daylight cycle")
latitude: Optional[float] = Field(None, description="Latitude for daylight timing")
+longitude: Optional[float] = Field(None, description="Longitude for daylight timing")
# candlelight-type fields
num_candles: Optional[int] = Field(None, description="Number of independent candle sources")
+wind_strength: Optional[float] = Field(None, description="Wind simulation strength")
+candle_type: Optional[str] = Field(None, description="Candle type preset")
# processed-type fields
input_source_id: Optional[str] = Field(None, description="Input color strip source ID")
processing_template_id: Optional[str] = Field(None, description="Color strip processing template ID")
+# weather-type fields
+weather_source_id: Optional[str] = Field(None, description="Weather source entity ID")
+temperature_influence: Optional[float] = Field(None, description="Temperature color shift strength")
# sync clock
clock_id: Optional[str] = Field(None, description="Optional sync clock ID for synchronized animation")
tags: List[str] = Field(default_factory=list, description="User-defined tags")

View File

@@ -0,0 +1,51 @@
"""Gradient schemas (CRUD)."""
from datetime import datetime
from typing import List, Optional
from pydantic import BaseModel, Field
class GradientStopSchema(BaseModel):
"""A single gradient color stop."""
position: float = Field(description="Position along gradient (0.0-1.0)", ge=0.0, le=1.0)
color: List[int] = Field(description="RGB color [R, G, B]", min_length=3, max_length=3)
class GradientCreate(BaseModel):
"""Request to create a gradient."""
name: str = Field(description="Gradient name", min_length=1, max_length=100)
stops: List[GradientStopSchema] = Field(description="Color stops", min_length=2)
description: Optional[str] = Field(None, description="Optional description", max_length=500)
tags: List[str] = Field(default_factory=list, description="User-defined tags")
class GradientUpdate(BaseModel):
"""Request to update a gradient."""
name: Optional[str] = Field(None, description="Gradient name", min_length=1, max_length=100)
stops: Optional[List[GradientStopSchema]] = Field(None, description="Color stops", min_length=2)
description: Optional[str] = Field(None, description="Optional description", max_length=500)
tags: Optional[List[str]] = None
class GradientResponse(BaseModel):
"""Gradient response."""
id: str = Field(description="Gradient ID")
name: str = Field(description="Gradient name")
stops: List[GradientStopSchema] = Field(description="Color stops")
is_builtin: bool = Field(description="Whether this is a built-in gradient")
description: Optional[str] = Field(None, description="Description")
tags: List[str] = Field(default_factory=list, description="User-defined tags")
created_at: datetime = Field(description="Creation timestamp")
updated_at: datetime = Field(description="Last update timestamp")
class GradientListResponse(BaseModel):
"""List of gradients."""
gradients: List[GradientResponse] = Field(description="List of gradients")
count: int = Field(description="Number of gradients")
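The gradient schema stores stops as position/color pairs along 0.0-1.0, and the color strip schemas reference a `linear` easing for interpolation. As a minimal illustrative sketch (not the project's actual renderer; `sample_gradient` is a hypothetical name), linear sampling between adjacent stops looks like this:

```python
from bisect import bisect_right

def sample_gradient(stops, t):
    """Linearly interpolate an RGB color at position t from sorted (position, [r, g, b]) stops."""
    positions = [p for p, _ in stops]
    if t <= positions[0]:
        return list(stops[0][1])
    if t >= positions[-1]:
        return list(stops[-1][1])
    i = bisect_right(positions, t) - 1          # index of the stop at or before t
    (p0, c0), (p1, c1) = stops[i], stops[i + 1]
    f = (t - p0) / (p1 - p0)                    # fractional distance between the two stops
    return [c0[k] + (c1[k] - c0[k]) * f for k in range(3)]

# Example stops in the same shape GradientStopSchema describes
fire = [(0.0, [0, 0, 0]), (0.5, [255, 64, 0]), (1.0, [255, 220, 64])]
```

The `min_length=2` constraint on `stops` makes sense in this light: with fewer than two stops there is nothing to interpolate between.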

View File

@@ -13,6 +13,7 @@ class HealthResponse(BaseModel):
timestamp: datetime = Field(description="Current server time")
version: str = Field(description="Application version")
demo_mode: bool = Field(default=False, description="Whether demo mode is active")
+auth_required: bool = Field(default=True, description="Whether API key authentication is required")
class VersionResponse(BaseModel):
@@ -74,12 +75,9 @@ class PerformanceResponse(BaseModel):
class RestoreResponse(BaseModel):
-"""Response after restoring configuration backup."""
+"""Response after restoring database backup."""
status: str = Field(description="Status of restore operation")
-stores_written: int = Field(description="Number of stores successfully written")
-stores_total: int = Field(description="Total number of known stores")
-missing_stores: List[str] = Field(default_factory=list, description="Store keys not found in backup")
restart_scheduled: bool = Field(description="Whether server restart was scheduled")
message: str = Field(description="Human-readable status message")

View File

@@ -0,0 +1,44 @@
"""Pydantic schemas for the update API."""
from pydantic import BaseModel, Field
class UpdateReleaseInfo(BaseModel):
version: str
tag: str
name: str
body: str
prerelease: bool
published_at: str
class UpdateStatusResponse(BaseModel):
current_version: str
has_update: bool
checking: bool
last_check: float | None
last_error: str | None
releases_url: str
install_type: str
can_auto_update: bool
downloading: bool
download_progress: float
applying: bool
release: UpdateReleaseInfo | None
dismissed_version: str
class UpdateSettingsResponse(BaseModel):
enabled: bool
check_interval_hours: float
include_prerelease: bool
class UpdateSettingsRequest(BaseModel):
enabled: bool
check_interval_hours: float = Field(ge=0.5, le=168)
include_prerelease: bool
class DismissRequest(BaseModel):
version: str
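`has_update` implies comparing `current_version` against the latest release tag. One common approach is numeric tuple comparison, which avoids the string-comparison trap where "1.10.0" sorts before "1.2.3"; this is a sketch of that idea, not necessarily what UpdateService actually does:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Parse '1.2.3' (optionally prefixed with 'v', as release tags often are) into an int tuple."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_newer(current: str, latest: str) -> bool:
    # Tuples compare element-wise, so (1, 10, 0) > (1, 2, 3) as expected.
    return parse_version(latest) > parse_version(current)
```

Note this sketch assumes purely numeric dotted tags; the `prerelease` flag in the schema suggests the real service also has to account for pre-release identifiers.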

View File

@@ -0,0 +1,65 @@
"""Weather source schemas (CRUD)."""
from datetime import datetime
from typing import Dict, List, Literal, Optional
from pydantic import BaseModel, Field
class WeatherSourceCreate(BaseModel):
"""Request to create a weather source."""
name: str = Field(description="Source name", min_length=1, max_length=100)
provider: Literal["open_meteo"] = Field(default="open_meteo", description="Weather data provider")
provider_config: Optional[Dict] = Field(None, description="Provider-specific configuration")
latitude: float = Field(default=50.0, description="Geographic latitude (-90 to 90)", ge=-90.0, le=90.0)
longitude: float = Field(default=0.0, description="Geographic longitude (-180 to 180)", ge=-180.0, le=180.0)
update_interval: int = Field(default=600, description="API poll interval in seconds (60-3600)", ge=60, le=3600)
description: Optional[str] = Field(None, description="Optional description", max_length=500)
tags: List[str] = Field(default_factory=list, description="User-defined tags")
class WeatherSourceUpdate(BaseModel):
"""Request to update a weather source."""
name: Optional[str] = Field(None, description="Source name", min_length=1, max_length=100)
provider: Optional[Literal["open_meteo"]] = Field(None, description="Weather data provider")
provider_config: Optional[Dict] = Field(None, description="Provider-specific configuration")
latitude: Optional[float] = Field(None, description="Geographic latitude (-90 to 90)", ge=-90.0, le=90.0)
longitude: Optional[float] = Field(None, description="Geographic longitude (-180 to 180)", ge=-180.0, le=180.0)
update_interval: Optional[int] = Field(None, description="API poll interval in seconds (60-3600)", ge=60, le=3600)
description: Optional[str] = Field(None, description="Optional description", max_length=500)
tags: Optional[List[str]] = None
class WeatherSourceResponse(BaseModel):
"""Weather source response."""
id: str = Field(description="Source ID")
name: str = Field(description="Source name")
provider: str = Field(description="Weather data provider")
provider_config: Dict = Field(default_factory=dict, description="Provider-specific configuration")
latitude: float = Field(description="Geographic latitude")
longitude: float = Field(description="Geographic longitude")
update_interval: int = Field(description="API poll interval in seconds")
description: Optional[str] = Field(None, description="Description")
tags: List[str] = Field(default_factory=list, description="User-defined tags")
created_at: datetime = Field(description="Creation timestamp")
updated_at: datetime = Field(description="Last update timestamp")
class WeatherSourceListResponse(BaseModel):
"""List of weather sources."""
sources: List[WeatherSourceResponse] = Field(description="List of weather sources")
count: int = Field(description="Number of sources")
class WeatherTestResponse(BaseModel):
"""Weather test/fetch result."""
code: int = Field(description="WMO weather code")
condition: str = Field(description="Human-readable condition name")
temperature: float = Field(description="Temperature in Celsius")
wind_speed: float = Field(description="Wind speed in km/h")
cloud_cover: int = Field(description="Cloud cover percentage (0-100)")
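`WeatherTestResponse` exposes both the raw WMO code and a human-readable condition. A hedged sketch of the code-to-condition mapping: the bucket boundaries below follow the WMO weather-interpretation convention used by Open-Meteo, but the exact condition names the backend emits are assumptions.

```python
def wmo_condition(code: int) -> str:
    """Map a WMO weather interpretation code to a coarse condition bucket (illustrative names)."""
    buckets = [
        ((0, 0), "clear"),
        ((1, 3), "partly_cloudy"),
        ((45, 48), "fog"),
        ((51, 57), "drizzle"),
        ((61, 67), "rain"),
        ((71, 77), "snow"),
        ((80, 82), "rain_showers"),
        ((85, 86), "snow_showers"),
        ((95, 99), "thunderstorm"),
    ]
    for (lo, hi), name in buckets:
        if lo <= code <= hi:
            return name
    return "unknown"
```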

View File

@@ -21,26 +21,13 @@ class ServerConfig(BaseSettings):
class AuthConfig(BaseSettings):
"""Authentication configuration."""
-api_keys: dict[str, str] = {} # label: key mapping (required for security)
+api_keys: dict[str, str] = {} # label: key mapping (empty = auth disabled)
class StorageConfig(BaseSettings):
"""Storage configuration."""
-devices_file: str = "data/devices.json"
+database_file: str = "data/ledgrab.db"
-templates_file: str = "data/capture_templates.json"
-postprocessing_templates_file: str = "data/postprocessing_templates.json"
-picture_sources_file: str = "data/picture_sources.json"
-output_targets_file: str = "data/output_targets.json"
-pattern_templates_file: str = "data/pattern_templates.json"
-color_strip_sources_file: str = "data/color_strip_sources.json"
-audio_sources_file: str = "data/audio_sources.json"
-audio_templates_file: str = "data/audio_templates.json"
-value_sources_file: str = "data/value_sources.json"
-automations_file: str = "data/automations.json"
-scene_presets_file: str = "data/scene_presets.json"
-color_strip_processing_templates_file: str = "data/color_strip_processing_templates.json"
-sync_clocks_file: str = "data/sync_clocks.json"
class MQTTConfig(BaseSettings):

View File

@@ -0,0 +1,63 @@
"""Frequency band filtering for audio spectrum data.
Computes masks that select specific frequency ranges from the 64-band
log-spaced spectrum, and applies them to filter spectrum + RMS data.
"""
import math
from typing import Tuple
import numpy as np
from wled_controller.core.audio.analysis import NUM_BANDS
def compute_band_mask(freq_low: float, freq_high: float) -> np.ndarray:
"""Compute a boolean-style float mask for the 64 log-spaced spectrum bands.
Each band's center frequency is computed using the same log-spacing as
analysis._build_log_bands (20 Hz to 20 kHz). Bands whose center falls
within [freq_low, freq_high] get mask=1.0, others get 0.0.
Returns:
float32 array of shape (NUM_BANDS,) with 1.0/0.0 values.
"""
min_freq = 20.0
max_freq = 20000.0
log_min = math.log10(min_freq)
log_max = math.log10(max_freq)
# Band edge frequencies (NUM_BANDS + 1 edges)
edges = np.logspace(log_min, log_max, NUM_BANDS + 1)
mask = np.zeros(NUM_BANDS, dtype=np.float32)
for i in range(NUM_BANDS):
center = math.sqrt(edges[i] * edges[i + 1]) # geometric mean
if freq_low <= center <= freq_high:
mask[i] = 1.0
return mask
def apply_band_filter(
spectrum: np.ndarray,
rms: float,
mask: np.ndarray,
) -> Tuple[np.ndarray, float]:
"""Apply a band mask to spectrum data, returning filtered spectrum and RMS.
Args:
spectrum: float32 array of shape (NUM_BANDS,) — normalized 0-1 amplitudes.
rms: Original RMS value from the full spectrum.
mask: float32 array from compute_band_mask().
Returns:
(filtered_spectrum, filtered_rms) — spectrum with out-of-band zeroed,
RMS recomputed from in-band values only.
"""
filtered = spectrum * mask
active = mask > 0
if active.any():
filtered_rms = float(np.sqrt(np.mean(filtered[active] ** 2)))
else:
filtered_rms = 0.0
return filtered, filtered_rms
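The mask computation above does not need numpy to follow. A dependency-free sketch of the same geometric-mean band-center test (constants mirror the module; returning a plain list instead of a float32 array):

```python
import math

NUM_BANDS = 64  # matches wled_controller.core.audio.analysis.NUM_BANDS

def band_mask(freq_low: float, freq_high: float) -> list[float]:
    """1.0 for bands whose log-spaced center falls inside [freq_low, freq_high], else 0.0."""
    log_min, log_max = math.log10(20.0), math.log10(20000.0)
    step = (log_max - log_min) / NUM_BANDS
    # NUM_BANDS + 1 edge frequencies, evenly spaced in log10
    edges = [10 ** (log_min + i * step) for i in range(NUM_BANDS + 1)]
    mask = []
    for i in range(NUM_BANDS):
        center = math.sqrt(edges[i] * edges[i + 1])  # geometric mean of the band's edges
        mask.append(1.0 if freq_low <= center <= freq_high else 0.0)
    return mask
```

A bass range like 20-250 Hz selects roughly the bottom third of the 64 bands, since log spacing packs bands densely at low frequencies.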

View File

@@ -205,21 +205,20 @@ class AutomationEngine:
fullscreen_procs: Set[str],
idle_seconds: Optional[float], display_state: Optional[str],
) -> bool:
-if isinstance(condition, (AlwaysCondition, StartupCondition)):
-return True
-if isinstance(condition, ApplicationCondition):
-return self._evaluate_app_condition(condition, running_procs, topmost_proc, topmost_fullscreen, fullscreen_procs)
-if isinstance(condition, TimeOfDayCondition):
-return self._evaluate_time_of_day(condition)
-if isinstance(condition, SystemIdleCondition):
-return self._evaluate_idle(condition, idle_seconds)
-if isinstance(condition, DisplayStateCondition):
-return self._evaluate_display_state(condition, display_state)
-if isinstance(condition, MQTTCondition):
-return self._evaluate_mqtt(condition)
-if isinstance(condition, WebhookCondition):
-return self._webhook_states.get(condition.token, False)
-return False
+dispatch = {
+AlwaysCondition: lambda c: True,
+StartupCondition: lambda c: True,
+ApplicationCondition: lambda c: self._evaluate_app_condition(c, running_procs, topmost_proc, topmost_fullscreen, fullscreen_procs),
+TimeOfDayCondition: lambda c: self._evaluate_time_of_day(c),
+SystemIdleCondition: lambda c: self._evaluate_idle(c, idle_seconds),
+DisplayStateCondition: lambda c: self._evaluate_display_state(c, display_state),
+MQTTCondition: lambda c: self._evaluate_mqtt(c),
+WebhookCondition: lambda c: self._webhook_states.get(c.token, False),
+}
+handler = dispatch.get(type(condition))
+if handler is None:
+return False
+return handler(condition)
@staticmethod
def _evaluate_time_of_day(condition: TimeOfDayCondition) -> bool:
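The hunk above replaces an isinstance chain with a type-keyed dispatch table. The pattern in isolation (dummy classes, illustrative only), with the one behavioral caveat the table introduces: `dict.get(type(x))` requires an exact type match, whereas `isinstance` would also have accepted subclasses.

```python
class Always:
    pass

class Webhook:
    def __init__(self, token):
        self.token = token

webhook_states = {"abc": True}  # stand-in for self._webhook_states

dispatch = {
    Always: lambda c: True,
    Webhook: lambda c: webhook_states.get(c.token, False),
}

def evaluate(condition) -> bool:
    handler = dispatch.get(type(condition))  # exact type lookup, no subclass fallback
    if handler is None:
        return False
    return handler(condition)
```

This exact-match behavior is why the table lists `AlwaysCondition` and `StartupCondition` as two separate keys where the old code handled both in one `isinstance(..., (A, B))` check.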
@@ -253,16 +252,18 @@ class AutomationEngine:
value = self._mqtt_service.get_last_value(condition.topic)
if value is None:
return False
-if condition.match_mode == "exact":
-return value == condition.payload
-if condition.match_mode == "contains":
-return condition.payload in value
-if condition.match_mode == "regex":
-try:
-return bool(re.search(condition.payload, value))
-except re.error:
-return False
-return False
+matchers = {
+"exact": lambda: value == condition.payload,
+"contains": lambda: condition.payload in value,
+"regex": lambda: bool(re.search(condition.payload, value)),
+}
+matcher = matchers.get(condition.match_mode)
+if matcher is None:
+return False
+try:
+return matcher()
+except re.error:
+return False
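A detail worth noting in the matcher table above: the lambdas defer evaluation, so building the dict never compiles the regex; the pattern only runs inside `matcher()`, which is why the single try/except around that call still catches `re.error` from a malformed pattern. A minimal reproduction with stand-in values:

```python
import re

def match(mode: str, payload: str, value: str) -> bool:
    """Stand-alone version of the match-mode table; matches the structure, not the real method."""
    matchers = {
        "exact": lambda: value == payload,
        "contains": lambda: payload in value,
        "regex": lambda: bool(re.search(payload, value)),
    }
    matcher = matchers.get(mode)
    if matcher is None:
        return False
    try:
        return matcher()  # the regex compiles here, inside the try
    except re.error:
        return False
```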
def _evaluate_app_condition(
self,
@@ -277,19 +278,21 @@ class AutomationEngine:
apps_lower = [a.lower() for a in condition.apps]
-if condition.match_type == "fullscreen":
-return any(app in fullscreen_procs for app in apps_lower)
-if condition.match_type == "topmost_fullscreen":
-if topmost_proc is None or not topmost_fullscreen:
-return False
-return any(app == topmost_proc for app in apps_lower)
-if condition.match_type == "topmost":
-if topmost_proc is None:
-return False
-return any(app == topmost_proc for app in apps_lower)
+match_handlers = {
+"fullscreen": lambda: any(app in fullscreen_procs for app in apps_lower),
+"topmost_fullscreen": lambda: (
+topmost_proc is not None
+and topmost_fullscreen
+and any(app == topmost_proc for app in apps_lower)
+),
+"topmost": lambda: (
+topmost_proc is not None
+and any(app == topmost_proc for app in apps_lower)
+),
+}
+handler = match_handlers.get(condition.match_type)
+if handler is not None:
+return handler()
# Default: "running"
return any(app in running_procs for app in apps_lower)
@@ -352,38 +355,10 @@ class AutomationEngine:
deactivation_mode = automation.deactivation_mode if automation else "none"
if deactivation_mode == "revert":
-snapshot = self._pre_activation_snapshots.pop(automation_id, None)
-if snapshot and self._target_store:
-from wled_controller.core.scenes.scene_activator import apply_scene_state
-status, errors = await apply_scene_state(
-snapshot, self._target_store, self._manager,
-)
-if errors:
-logger.warning(f"Automation {automation_id} revert errors: {errors}")
-else:
-logger.info(f"Automation {automation_id} deactivated (reverted to previous state)")
-else:
-logger.warning(f"Automation {automation_id}: no snapshot available for revert")
+await self._deactivate_revert(automation_id)
elif deactivation_mode == "fallback_scene":
-fallback_id = automation.deactivation_scene_preset_id if automation else None
-if fallback_id and self._scene_preset_store and self._target_store:
-try:
-fallback = self._scene_preset_store.get_preset(fallback_id)
-from wled_controller.core.scenes.scene_activator import apply_scene_state
-status, errors = await apply_scene_state(
-fallback, self._target_store, self._manager,
-)
-if errors:
-logger.warning(f"Automation {automation_id} fallback errors: {errors}")
-else:
-logger.info(f"Automation {automation_id} deactivated (fallback scene '{fallback.name}' applied)")
-except ValueError:
-logger.warning(f"Automation {automation_id}: fallback scene {fallback_id} not found")
-else:
-logger.info(f"Automation {automation_id} deactivated (no fallback scene configured)")
+await self._deactivate_fallback(automation_id, automation)
else:
+# "none" mode — just clear active state
logger.info(f"Automation {automation_id} deactivated")
self._last_deactivated[automation_id] = datetime.now(timezone.utc)
@@ -391,6 +366,40 @@ class AutomationEngine:
# Clean up any leftover snapshot # Clean up any leftover snapshot
self._pre_activation_snapshots.pop(automation_id, None) self._pre_activation_snapshots.pop(automation_id, None)
async def _deactivate_revert(self, automation_id: str) -> None:
"""Revert to pre-activation snapshot."""
snapshot = self._pre_activation_snapshots.pop(automation_id, None)
if snapshot and self._target_store:
from wled_controller.core.scenes.scene_activator import apply_scene_state
status, errors = await apply_scene_state(
snapshot, self._target_store, self._manager,
)
if errors:
logger.warning(f"Automation {automation_id} revert errors: {errors}")
else:
logger.info(f"Automation {automation_id} deactivated (reverted to previous state)")
else:
logger.warning(f"Automation {automation_id}: no snapshot available for revert")
async def _deactivate_fallback(self, automation_id: str, automation) -> None:
"""Activate fallback scene on deactivation."""
fallback_id = automation.deactivation_scene_preset_id if automation else None
if fallback_id and self._scene_preset_store and self._target_store:
try:
fallback = self._scene_preset_store.get_preset(fallback_id)
from wled_controller.core.scenes.scene_activator import apply_scene_state
status, errors = await apply_scene_state(
fallback, self._target_store, self._manager,
)
if errors:
logger.warning(f"Automation {automation_id} fallback errors: {errors}")
else:
logger.info(f"Automation {automation_id} deactivated (fallback scene '{fallback.name}' applied)")
except ValueError:
logger.warning(f"Automation {automation_id}: fallback scene {fallback_id} not found")
else:
logger.info(f"Automation {automation_id} deactivated (no fallback scene configured)")
def _fire_event(self, automation_id: str, action: str) -> None: def _fire_event(self, automation_id: str, action: str) -> None:
try: try:
self._manager.fire_event({ self._manager.fire_event({

View File

@@ -1,6 +1,6 @@
"""Platform-specific process and window detection.
-Windows: uses wmi for process listing, ctypes for foreground window detection.
+Windows: uses ctypes for process listing and foreground window detection.
Non-Windows: graceful degradation (returns empty results).
"""
@@ -37,7 +37,7 @@ class PlatformDetector:
user32 = ctypes.windll.user32
WNDPROC = ctypes.WINFUNCTYPE(
-ctypes.c_long,
+ctypes.c_ssize_t, # LRESULT (64-bit on x64)
ctypes.wintypes.HWND,
ctypes.c_uint,
ctypes.wintypes.WPARAM,
@@ -60,6 +60,12 @@ class PlatformDetector:
0x8F, 0x24, 0xC2, 0x8D, 0x93, 0x6F, 0xDA, 0x47,
)
+user32.DefWindowProcW.argtypes = [
+ctypes.wintypes.HWND, ctypes.c_uint,
+ctypes.wintypes.WPARAM, ctypes.wintypes.LPARAM,
+]
+user32.DefWindowProcW.restype = ctypes.c_ssize_t
def wnd_proc(hwnd, msg, wparam, lparam):
if msg == WM_POWERBROADCAST and wparam == PBT_POWERSETTINGCHANGE:
try:
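The hunk above swaps `c_long` for `c_ssize_t` in the window-procedure signature. The reason: Windows' LRESULT is LONG_PTR (pointer-sized), while `c_long` is always 32 bits under Windows' LLP64 model, so returning it from a WNDPROC on x64 can truncate pointer-sized results. The size relationship holds portably and can be checked anywhere:

```python
import ctypes

# c_ssize_t tracks pointer width on every platform, which is exactly
# what LRESULT (LONG_PTR) requires on both 32- and 64-bit Windows.
ssize_width = ctypes.sizeof(ctypes.c_ssize_t)
ptr_width = ctypes.sizeof(ctypes.c_void_p)
long_width = ctypes.sizeof(ctypes.c_long)  # 4 on Windows even for x64 builds

print(f"c_ssize_t={ssize_width}, c_void_p={ptr_width}, c_long={long_width}")
```

On a 64-bit Windows build this prints widths 8, 8, 4, which is the mismatch the fix addresses (on 64-bit Linux `c_long` happens to be 8, masking the bug in cross-platform testing).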

View File

@@ -1,14 +1,13 @@
-"""Auto-backup engine — periodic background backups of all configuration stores."""
+"""Auto-backup engine — periodic SQLite snapshot backups."""
import asyncio
-import json
import os
-from datetime import datetime, timezone
+from datetime import datetime, timedelta, timezone
from pathlib import Path
-from typing import Any, Dict, List, Optional
+from typing import List, Optional
-from wled_controller import __version__
+from wled_controller.storage.database import Database
-from wled_controller.utils import atomic_write_json, get_logger
+from wled_controller.utils import get_logger
logger = get_logger(__name__)
@@ -18,21 +17,22 @@ DEFAULT_SETTINGS = {
"max_backups": 10,
}
+# Skip the immediate-on-start backup if a recent backup exists within this window.
+_STARTUP_BACKUP_COOLDOWN = timedelta(minutes=5)
+_BACKUP_EXT = ".db"
class AutoBackupEngine:
-"""Creates periodic backups of all configuration stores."""
+"""Creates periodic SQLite snapshot backups of the database."""
def __init__(
self,
-settings_path: Path,
backup_dir: Path,
-store_map: Dict[str, str],
+db: Database,
-storage_config: Any,
):
-self._settings_path = Path(settings_path)
self._backup_dir = Path(backup_dir)
-self._store_map = store_map
+self._db = db
-self._storage_config = storage_config
self._task: Optional[asyncio.Task] = None
self._last_backup_time: Optional[datetime] = None
@@ -42,17 +42,13 @@ class AutoBackupEngine:
     # ─── Settings persistence ──────────────────────────────────
     def _load_settings(self) -> dict:
-        if self._settings_path.exists():
-            try:
-                with open(self._settings_path, "r", encoding="utf-8") as f:
-                    data = json.load(f)
-                return {**DEFAULT_SETTINGS, **data}
-            except Exception as e:
-                logger.warning(f"Failed to load auto-backup settings: {e}")
+        data = self._db.get_setting("auto_backup")
+        if data:
+            return {**DEFAULT_SETTINGS, **data}
         return dict(DEFAULT_SETTINGS)
     def _save_settings(self) -> None:
-        atomic_write_json(self._settings_path, {
+        self._db.set_setting("auto_backup", {
             "enabled": self._settings["enabled"],
             "interval_hours": self._settings["interval_hours"],
             "max_backups": self._settings["max_backups"],
@@ -83,11 +79,25 @@ class AutoBackupEngine:
         self._task.cancel()
         self._task = None
+    def _most_recent_backup_age(self) -> timedelta | None:
+        """Return the age of the newest backup file, or None if no backups exist."""
+        files = list(self._backup_dir.glob(f"*{_BACKUP_EXT}"))
+        if not files:
+            return None
+        newest = max(files, key=lambda p: p.stat().st_mtime)
+        mtime = datetime.fromtimestamp(newest.stat().st_mtime, tz=timezone.utc)
+        return datetime.now(timezone.utc) - mtime
     async def _backup_loop(self) -> None:
         try:
-            # Perform first backup immediately on start
-            await self._perform_backup()
-            self._prune_old_backups()
+            age = self._most_recent_backup_age()
+            if age is None or age > _STARTUP_BACKUP_COOLDOWN:
+                await self._perform_backup()
+                self._prune_old_backups()
+            else:
+                logger.info(
+                    f"Skipping startup backup — most recent is only {age.total_seconds():.0f}s old"
+                )
             interval_secs = self._settings["interval_hours"] * 3600
             while True:
@@ -103,45 +113,22 @@ class AutoBackupEngine:
     # ─── Backup operations ─────────────────────────────────────
     async def _perform_backup(self) -> None:
-        loop = asyncio.get_event_loop()
-        await loop.run_in_executor(None, self._perform_backup_sync)
+        await asyncio.to_thread(self._perform_backup_sync)
     def _perform_backup_sync(self) -> None:
-        stores = {}
-        for store_key, config_attr in self._store_map.items():
-            file_path = Path(getattr(self._storage_config, config_attr))
-            if file_path.exists():
-                with open(file_path, "r", encoding="utf-8") as f:
-                    stores[store_key] = json.load(f)
-            else:
-                stores[store_key] = {}
         now = datetime.now(timezone.utc)
-        backup = {
-            "meta": {
-                "format": "ledgrab-backup",
-                "format_version": 1,
-                "app_version": __version__,
-                "created_at": now.isoformat(),
-                "store_count": len(stores),
-                "auto_backup": True,
-            },
-            "stores": stores,
-        }
         timestamp = now.strftime("%Y-%m-%dT%H%M%S")
-        filename = f"ledgrab-autobackup-{timestamp}.json"
+        filename = f"ledgrab-backup-{timestamp}{_BACKUP_EXT}"
         file_path = self._backup_dir / filename
-        content = json.dumps(backup, indent=2, ensure_ascii=False)
-        file_path.write_text(content, encoding="utf-8")
+        self._db.backup_to(file_path)
         self._last_backup_time = now
-        logger.info(f"Auto-backup created: (unknown)")
+        logger.info(f"Backup created: (unknown)")
     def _prune_old_backups(self) -> None:
         max_backups = self._settings["max_backups"]
-        files = sorted(self._backup_dir.glob("*.json"), key=lambda p: p.stat().st_mtime)
+        files = sorted(self._backup_dir.glob(f"*{_BACKUP_EXT}"), key=lambda p: p.stat().st_mtime)
         excess = len(files) - max_backups
         if excess > 0:
             for f in files[:excess]:
@@ -156,7 +143,6 @@ class AutoBackupEngine:
     def get_settings(self) -> dict:
         next_backup = None
         if self._settings["enabled"] and self._last_backup_time:
-            from datetime import timedelta
             next_backup = (
                 self._last_backup_time + timedelta(hours=self._settings["interval_hours"])
             ).isoformat()
@@ -175,7 +161,6 @@ class AutoBackupEngine:
self._settings["max_backups"] = max_backups self._settings["max_backups"] = max_backups
self._save_settings() self._save_settings()
# Restart or stop the loop
if enabled: if enabled:
self._start_loop() self._start_loop()
logger.info( logger.info(
@@ -185,14 +170,12 @@ class AutoBackupEngine:
             self._cancel_loop()
             logger.info("Auto-backup disabled")
-        # Prune if max_backups was reduced
         self._prune_old_backups()
         return self.get_settings()
     def list_backups(self) -> List[dict]:
         backups = []
-        for f in sorted(self._backup_dir.glob("*.json"), key=lambda p: p.stat().st_mtime, reverse=True):
+        for f in sorted(self._backup_dir.glob(f"*{_BACKUP_EXT}"), key=lambda p: p.stat().st_mtime, reverse=True):
             stat = f.stat()
             backups.append({
                 "filename": f.name,
@@ -206,7 +189,6 @@ class AutoBackupEngine:
         if not filename or os.sep in filename or "/" in filename or ".." in filename:
             raise ValueError("Invalid filename")
         target = (self._backup_dir / filename).resolve()
-        # Ensure resolved path is still inside the backup directory
         if not target.is_relative_to(self._backup_dir.resolve()):
             raise ValueError("Invalid filename")
         return target
@@ -215,7 +197,6 @@ class AutoBackupEngine:
"""Manually trigger a backup and prune old ones. Returns the created backup info.""" """Manually trigger a backup and prune old ones. Returns the created backup info."""
await self._perform_backup() await self._perform_backup()
self._prune_old_backups() self._prune_old_backups()
# Return the most recent backup entry
backups = self.list_backups() backups = self.list_backups()
return backups[0] if backups else {} return backups[0] if backups else {}

View File

@@ -668,14 +668,20 @@ def create_pixel_mapper(
     return PixelMapper(calibration, interpolation_mode)
-def create_default_calibration(led_count: int) -> CalibrationConfig:
+def create_default_calibration(
+    led_count: int,
+    aspect_width: int = 16,
+    aspect_height: int = 9,
+) -> CalibrationConfig:
     """Create a default calibration for a rectangular screen.
-    Assumes LEDs are evenly distributed around the screen edges in clockwise order
-    starting from bottom-left.
+    Distributes LEDs proportionally to the screen aspect ratio so that
+    horizontal and vertical edges have equal LED density.
     Args:
         led_count: Total number of LEDs
+        aspect_width: Screen width component of the aspect ratio (default 16)
+        aspect_height: Screen height component of the aspect ratio (default 9)
     Returns:
         Default calibration configuration
@@ -683,15 +689,48 @@ def create_default_calibration(led_count: int) -> CalibrationConfig:
     if led_count < 4:
         raise ValueError("Need at least 4 LEDs for default calibration")
-    # Distribute LEDs evenly across 4 edges
-    leds_per_edge = led_count // 4
-    remainder = led_count % 4
-    # Distribute remainder to longer edges (bottom and top)
-    bottom_count = leds_per_edge + (1 if remainder > 0 else 0)
-    right_count = leds_per_edge
-    top_count = leds_per_edge + (1 if remainder > 1 else 0)
-    left_count = leds_per_edge + (1 if remainder > 2 else 0)
+    # Distribute LEDs proportionally to aspect ratio (same density per edge)
+    perimeter = 2 * (aspect_width + aspect_height)
+    h_frac = aspect_width / perimeter   # fraction for each horizontal edge
+    v_frac = aspect_height / perimeter  # fraction for each vertical edge
+    # Float counts, then round so total == led_count
+    raw_h = led_count * h_frac
+    raw_v = led_count * v_frac
+    bottom_count = round(raw_h)
+    top_count = round(raw_h)
+    right_count = round(raw_v)
+    left_count = round(raw_v)
+    # Fix rounding error
+    diff = led_count - (bottom_count + top_count + right_count + left_count)
+    # Distribute remainder to horizontal edges first (longer edges)
+    if diff > 0:
+        bottom_count += 1
+        diff -= 1
+    if diff > 0:
+        top_count += 1
+        diff -= 1
+    if diff > 0:
+        right_count += 1
+        diff -= 1
+    if diff > 0:
+        left_count += 1
+        diff -= 1
+    # If we over-counted, remove from shorter edges first
+    if diff < 0:
+        left_count += diff  # diff is negative
+        diff = 0
+        if left_count < 0:
+            diff = left_count
+            left_count = 0
+            right_count += diff
+    # Ensure each edge has at least 1 LED
+    bottom_count = max(1, bottom_count)
+    top_count = max(1, top_count)
+    right_count = max(1, right_count)
+    left_count = max(1, left_count)
     config = CalibrationConfig(
         layout="clockwise",
@@ -703,7 +742,8 @@ def create_default_calibration(led_count: int) -> CalibrationConfig:
     )
     logger.info(
-        f"Created default calibration for {led_count} LEDs: "
+        f"Created default calibration for {led_count} LEDs "
+        f"(aspect {aspect_width}:{aspect_height}): "
         f"bottom={bottom_count}, right={right_count}, "
         f"top={top_count}, left={left_count}"
     )
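The proportional distribution above can be condensed into a standalone sketch for checking the arithmetic (the helper name `distribute_leds` is illustrative, and the remainder handling is slightly simplified relative to the diff):

```python
def distribute_leds(led_count: int, aspect_width: int = 16, aspect_height: int = 9) -> dict:
    """Split a clockwise LED run across 4 edges with equal per-edge density."""
    if led_count < 4:
        raise ValueError("Need at least 4 LEDs")
    perimeter = 2 * (aspect_width + aspect_height)
    raw_h = led_count * aspect_width / perimeter   # LEDs per horizontal edge
    raw_v = led_count * aspect_height / perimeter  # LEDs per vertical edge
    # order: bottom, right, top, left (clockwise from bottom-left)
    counts = [round(raw_h), round(raw_v), round(raw_h), round(raw_v)]
    diff = led_count - sum(counts)
    priority = [0, 2, 1, 3]  # hand surplus to longer (horizontal) edges first
    i = 0
    while diff > 0:
        counts[priority[i % 4]] += 1
        diff -= 1
        i += 1
    while diff < 0 and any(v > 1 for v in counts):
        idx = priority[3 - (i % 4)]  # take deficit from shorter edges first
        if counts[idx] > 1:
            counts[idx] -= 1
            diff += 1
        i += 1
    return dict(zip(("bottom", "right", "top", "left"), counts))
```

For 100 LEDs at 16:9 this gives 32/18/32/18, i.e. each edge gets exactly its share of the perimeter.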

View File

@@ -278,7 +278,12 @@ class OverlayManager:
     def _start_tk_thread(self) -> None:
         def _run():
-            import tkinter as tk  # lazy import — tkinter unavailable in headless CI
+            try:
+                import tkinter as tk  # lazy import — tkinter unavailable in embedded Python / headless CI
+            except ImportError:
+                logger.warning("tkinter not available — screen overlay disabled")
+                self._tk_ready.set()
+                return
             try:
                 self._tk_root = tk.Tk()

View File

@@ -12,16 +12,23 @@ from wled_controller.core.capture_engines.dxcam_engine import DXcamEngine, DXcam
 from wled_controller.core.capture_engines.bettercam_engine import BetterCamEngine, BetterCamCaptureStream
 from wled_controller.core.capture_engines.wgc_engine import WGCEngine, WGCCaptureStream
 from wled_controller.core.capture_engines.scrcpy_engine import ScrcpyEngine, ScrcpyCaptureStream
-from wled_controller.core.capture_engines.camera_engine import CameraEngine, CameraCaptureStream
 from wled_controller.core.capture_engines.demo_engine import DemoCaptureEngine, DemoCaptureStream
+# Camera engine requires OpenCV — optional dependency
+try:
+    from wled_controller.core.capture_engines.camera_engine import CameraEngine, CameraCaptureStream
+    _has_camera = True
+except ImportError:
+    _has_camera = False
 # Auto-register available engines
 EngineRegistry.register(MSSEngine)
 EngineRegistry.register(DXcamEngine)
 EngineRegistry.register(BetterCamEngine)
 EngineRegistry.register(WGCEngine)
 EngineRegistry.register(ScrcpyEngine)
-EngineRegistry.register(CameraEngine)
+if _has_camera:
+    EngineRegistry.register(CameraEngine)
 EngineRegistry.register(DemoCaptureEngine)
 __all__ = [
@@ -40,8 +47,9 @@ __all__ = [
"WGCCaptureStream", "WGCCaptureStream",
"ScrcpyEngine", "ScrcpyEngine",
"ScrcpyCaptureStream", "ScrcpyCaptureStream",
"CameraEngine",
"CameraCaptureStream",
"DemoCaptureEngine", "DemoCaptureEngine",
"DemoCaptureStream", "DemoCaptureStream",
] ]
if _has_camera:
__all__ += ["CameraEngine", "CameraCaptureStream"]

View File

@@ -56,21 +56,115 @@ def _cv2_backend_id(backend_name: str) -> Optional[int]:
 def _get_camera_friendly_names() -> Dict[int, str]:
     """Get friendly names for cameras from OS.
-    On Windows, queries WMI for PnP camera devices.
-    Returns a dict mapping sequential index → friendly name.
+    On Windows, enumerates camera devices via the SetupAPI (pure ctypes,
+    no third-party dependencies). Uses the camera device class GUID
+    ``{ca3e7ab9-b4c3-4ae6-8251-579ef933890f}``.
+    Returns a dict mapping sequential index to friendly name.
     """
     if platform.system() != "Windows":
         return {}
     try:
-        import wmi
-        c = wmi.WMI()
-        cameras = c.query(
-            "SELECT Name FROM Win32_PnPEntity WHERE PNPClass = 'Camera'"
-        )
-        return {i: cam.Name for i, cam in enumerate(cameras)}
+        import ctypes
+        from ctypes import wintypes
+        # ── SetupAPI types ────────────────────────────────────────
+        class GUID(ctypes.Structure):
+            _fields_ = [
+                ("Data1", wintypes.DWORD),
+                ("Data2", wintypes.WORD),
+                ("Data3", wintypes.WORD),
+                ("Data4", ctypes.c_ubyte * 8),
+            ]
+        class SP_DEVINFO_DATA(ctypes.Structure):
+            _fields_ = [
+                ("cbSize", wintypes.DWORD),
+                ("ClassGuid", GUID),
+                ("DevInst", wintypes.DWORD),
+                ("Reserved", ctypes.POINTER(ctypes.c_ulong)),
+            ]
+        setupapi = ctypes.windll.setupapi
+        # Camera device class GUID: {ca3e7ab9-b4c3-4ae6-8251-579ef933890f}
+        GUID_DEVCLASS_CAMERA = GUID(
+            0xCA3E7AB9, 0xB4C3, 0x4AE6,
+            (ctypes.c_ubyte * 8)(0x82, 0x51, 0x57, 0x9E, 0xF9, 0x33, 0x89, 0x0F),
+        )
+        DIGCF_PRESENT = 0x00000002
+        SPDRP_FRIENDLYNAME = 0x0000000C
+        SPDRP_DEVICEDESC = 0x00000000
+        INVALID_HANDLE_VALUE = ctypes.c_void_p(-1).value
+        # SetupDiGetClassDevsW → HDEVINFO
+        setupapi.SetupDiGetClassDevsW.restype = ctypes.c_void_p
+        setupapi.SetupDiGetClassDevsW.argtypes = [
+            ctypes.POINTER(GUID), ctypes.c_wchar_p,
+            ctypes.c_void_p, wintypes.DWORD,
+        ]
+        # SetupDiEnumDeviceInfo → BOOL
+        setupapi.SetupDiEnumDeviceInfo.restype = wintypes.BOOL
+        setupapi.SetupDiEnumDeviceInfo.argtypes = [
+            ctypes.c_void_p, wintypes.DWORD, ctypes.POINTER(SP_DEVINFO_DATA),
+        ]
+        # SetupDiGetDeviceRegistryPropertyW → BOOL
+        setupapi.SetupDiGetDeviceRegistryPropertyW.restype = wintypes.BOOL
+        setupapi.SetupDiGetDeviceRegistryPropertyW.argtypes = [
+            ctypes.c_void_p, ctypes.POINTER(SP_DEVINFO_DATA),
+            wintypes.DWORD, ctypes.POINTER(wintypes.DWORD),
+            ctypes.c_void_p, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD),
+        ]
+        # SetupDiDestroyDeviceInfoList → BOOL
+        setupapi.SetupDiDestroyDeviceInfoList.restype = wintypes.BOOL
+        setupapi.SetupDiDestroyDeviceInfoList.argtypes = [ctypes.c_void_p]
+        # ── Enumerate cameras ─────────────────────────────────────
+        hdevinfo = setupapi.SetupDiGetClassDevsW(
+            ctypes.byref(GUID_DEVCLASS_CAMERA), None, None, DIGCF_PRESENT,
+        )
+        if hdevinfo == INVALID_HANDLE_VALUE:
+            return {}
+        cameras: Dict[int, str] = {}
+        idx = 0
+        try:
+            while True:
+                devinfo = SP_DEVINFO_DATA()
+                devinfo.cbSize = ctypes.sizeof(SP_DEVINFO_DATA)
+                if not setupapi.SetupDiEnumDeviceInfo(hdevinfo, idx, ctypes.byref(devinfo)):
+                    break  # ERROR_NO_MORE_ITEMS
+                # Try SPDRP_FRIENDLYNAME first, fall back to SPDRP_DEVICEDESC
+                name = None
+                buf = ctypes.create_unicode_buffer(256)
+                buf_size = wintypes.DWORD(ctypes.sizeof(buf))
+                for prop in (SPDRP_FRIENDLYNAME, SPDRP_DEVICEDESC):
+                    if setupapi.SetupDiGetDeviceRegistryPropertyW(
+                        hdevinfo, ctypes.byref(devinfo), prop,
+                        None, buf, buf_size, None,
+                    ):
+                        name = buf.value.strip()
+                        if name:
+                            break
+                cameras[idx] = name if name else f"Camera {idx}"
+                idx += 1
+        finally:
+            setupapi.SetupDiDestroyDeviceInfoList(hdevinfo)
+        return cameras
     except Exception as e:
-        logger.debug(f"WMI camera enumeration failed: {e}")
+        logger.debug(f"SetupAPI camera enumeration failed: {e}")
         return {}
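As a sanity check on the struct literal, the `GUID` fields map back to the registry string form `{Data1-Data2-Data3-Data4[0:2]-Data4[2:8]}`. This sketch uses plain `c_uint32`/`c_uint16` in place of `wintypes.DWORD`/`WORD` (binary-identical) so it runs on any platform:

```python
import ctypes


class GUID(ctypes.Structure):
    # DWORD/WORD equivalents, so the check is portable
    _fields_ = [
        ("Data1", ctypes.c_uint32),
        ("Data2", ctypes.c_uint16),
        ("Data3", ctypes.c_uint16),
        ("Data4", ctypes.c_ubyte * 8),
    ]

    def __str__(self) -> str:
        d4 = bytes(self.Data4)
        return "{%08x-%04x-%04x-%s-%s}" % (
            self.Data1, self.Data2, self.Data3, d4[:2].hex(), d4[2:].hex()
        )


CAMERA_CLASS = GUID(
    0xCA3E7AB9, 0xB4C3, 0x4AE6,
    (ctypes.c_ubyte * 8)(0x82, 0x51, 0x57, 0x9E, 0xF9, 0x33, 0x89, 0x0F),
)
```

Printing `CAMERA_CLASS` reproduces the GUID quoted in the docstring, confirming the byte layout of the literal.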

View File

@@ -1,15 +1,14 @@
"""Seed data generator for demo mode. """Seed data generator for demo mode.
Populates the demo data directory with sample entities on first run, Populates the demo SQLite database with sample entities on first run,
giving new users a realistic out-of-the-box experience without needing giving new users a realistic out-of-the-box experience without needing
real hardware. real hardware.
""" """
import json import json
from datetime import datetime, timezone from datetime import datetime, timezone
from pathlib import Path
from wled_controller.config import StorageConfig from wled_controller.storage.database import Database
from wled_controller.utils import get_logger from wled_controller.utils import get_logger
logger = get_logger(__name__) logger = get_logger(__name__)
@@ -50,63 +49,48 @@ _SCENE_ID = "scene_demo0001"
 _NOW = datetime.now(timezone.utc).isoformat()
-def _write_store(path: Path, json_key: str, items: dict) -> None:
-    """Write a store JSON file with version wrapper."""
-    path.parent.mkdir(parents=True, exist_ok=True)
-    data = {
-        "version": "1.0.0",
-        json_key: items,
-    }
-    path.write_text(json.dumps(data, indent=2), encoding="utf-8")
-    logger.info(f"Seeded {len(items)} {json_key} -> {path}")
-def _has_data(storage_config: StorageConfig) -> bool:
-    """Check if any demo store file already has entities."""
-    for field_name in storage_config.model_fields:
-        value = getattr(storage_config, field_name)
-        if not isinstance(value, str):
-            continue
-        p = Path(value)
-        if p.exists() and p.stat().st_size > 20:
-            # File exists and is non-trivial — check if it has entities
-            try:
-                raw = json.loads(p.read_text(encoding="utf-8"))
-                for key, val in raw.items():
-                    if key != "version" and isinstance(val, dict) and val:
-                        return True
-            except Exception:
-                pass
-    return False
-def seed_demo_data(storage_config: StorageConfig) -> None:
-    """Populate demo data directory with sample entities.
-    Only runs when the demo data directory is empty (no existing entities).
+def _insert_entities(db: Database, table: str, items: dict) -> None:
+    """Insert entity dicts into a SQLite table."""
+    rows = []
+    for entity_id, entity_data in items.items():
+        name = entity_data.get("name", "")
+        data_json = json.dumps(entity_data, ensure_ascii=False)
+        rows.append((entity_id, name, data_json))
+    if rows:
+        db.bulk_insert(table, rows)
+        logger.info(f"Seeded {len(rows)} entities into {table}")
+def seed_demo_data(db: Database) -> None:
+    """Populate demo database with sample entities.
+    Only runs when the database has no entities in any table.
     Must be called BEFORE store constructors run so they load the seeded data.
     """
-    if _has_data(storage_config):
-        logger.info("Demo data already exists — skipping seed")
-        return
+    # Check if any table already has data
+    for table in ["devices", "output_targets", "color_strip_sources",
+                  "picture_sources", "audio_sources", "scene_presets"]:
+        if db.table_exists_with_data(table):
+            logger.info("Demo data already exists — skipping seed")
+            return
     logger.info("Seeding demo data for first-run experience")
-    _seed_devices(Path(storage_config.devices_file))
-    _seed_capture_templates(Path(storage_config.templates_file))
-    _seed_output_targets(Path(storage_config.output_targets_file))
-    _seed_picture_sources(Path(storage_config.picture_sources_file))
-    _seed_color_strip_sources(Path(storage_config.color_strip_sources_file))
-    _seed_audio_sources(Path(storage_config.audio_sources_file))
-    _seed_scene_presets(Path(storage_config.scene_presets_file))
+    _insert_entities(db, "devices", _build_devices())
+    _insert_entities(db, "capture_templates", _build_capture_templates())
+    _insert_entities(db, "output_targets", _build_output_targets())
+    _insert_entities(db, "picture_sources", _build_picture_sources())
+    _insert_entities(db, "color_strip_sources", _build_color_strip_sources())
+    _insert_entities(db, "audio_sources", _build_audio_sources())
+    _insert_entities(db, "scene_presets", _build_scene_presets())
     logger.info("Demo seed data complete")
 # ── Devices ────────────────────────────────────────────────────────
-def _seed_devices(path: Path) -> None:
-    devices = {
+def _build_devices() -> dict:
+    return {
         _DEVICE_IDS["strip"]: {
             "id": _DEVICE_IDS["strip"],
             "name": "Demo LED Strip",
@@ -138,13 +122,12 @@ def _seed_devices(path: Path) -> None:
"updated_at": _NOW, "updated_at": _NOW,
}, },
} }
_write_store(path, "devices", devices)
# ── Capture Templates ────────────────────────────────────────────── # ── Capture Templates ──────────────────────────────────────────────
def _seed_capture_templates(path: Path) -> None: def _build_capture_templates() -> dict:
templates = { return {
_TPL_ID: { _TPL_ID: {
"id": _TPL_ID, "id": _TPL_ID,
"name": "Demo Capture", "name": "Demo Capture",
@@ -156,13 +139,12 @@ def _seed_capture_templates(path: Path) -> None:
"updated_at": _NOW, "updated_at": _NOW,
}, },
} }
_write_store(path, "templates", templates)
# ── Output Targets ───────────────────────────────────────────────── # ── Output Targets ─────────────────────────────────────────────────
def _seed_output_targets(path: Path) -> None: def _build_output_targets() -> dict:
targets = { return {
_TARGET_IDS["strip"]: { _TARGET_IDS["strip"]: {
"id": _TARGET_IDS["strip"], "id": _TARGET_IDS["strip"],
"name": "Strip — Gradient", "name": "Strip — Gradient",
@@ -200,13 +182,12 @@ def _seed_output_targets(path: Path) -> None:
"updated_at": _NOW, "updated_at": _NOW,
}, },
} }
_write_store(path, "output_targets", targets)
# ── Picture Sources ──────────────────────────────────────────────── # ── Picture Sources ────────────────────────────────────────────────
def _seed_picture_sources(path: Path) -> None: def _build_picture_sources() -> dict:
sources = { return {
_PS_IDS["main"]: { _PS_IDS["main"]: {
"id": _PS_IDS["main"], "id": _PS_IDS["main"],
"name": "Demo Display 1080p", "name": "Demo Display 1080p",
@@ -218,7 +199,6 @@ def _seed_picture_sources(path: Path) -> None:
"tags": ["demo"], "tags": ["demo"],
"created_at": _NOW, "created_at": _NOW,
"updated_at": _NOW, "updated_at": _NOW,
# Nulls for non-applicable subclass fields
"source_stream_id": None, "source_stream_id": None,
"postprocessing_template_id": None, "postprocessing_template_id": None,
"image_source": None, "image_source": None,
@@ -253,13 +233,12 @@ def _seed_picture_sources(path: Path) -> None:
"clock_id": None, "clock_id": None,
}, },
} }
_write_store(path, "picture_sources", sources)
# ── Color Strip Sources ──────────────────────────────────────────── # ── Color Strip Sources ────────────────────────────────────────────
def _seed_color_strip_sources(path: Path) -> None: def _build_color_strip_sources() -> dict:
sources = { return {
_CSS_IDS["gradient"]: { _CSS_IDS["gradient"]: {
"id": _CSS_IDS["gradient"], "id": _CSS_IDS["gradient"],
"name": "Rainbow Gradient", "name": "Rainbow Gradient",
@@ -338,13 +317,12 @@ def _seed_color_strip_sources(path: Path) -> None:
"updated_at": _NOW, "updated_at": _NOW,
}, },
} }
_write_store(path, "color_strip_sources", sources)
# ── Audio Sources ────────────────────────────────────────────────── # ── Audio Sources ──────────────────────────────────────────────────
def _seed_audio_sources(path: Path) -> None: def _build_audio_sources() -> dict:
sources = { return {
_AS_IDS["system"]: { _AS_IDS["system"]: {
"id": _AS_IDS["system"], "id": _AS_IDS["system"],
"name": "Demo System Audio", "name": "Demo System Audio",
@@ -356,7 +334,6 @@ def _seed_audio_sources(path: Path) -> None:
"tags": ["demo"], "tags": ["demo"],
"created_at": _NOW, "created_at": _NOW,
"updated_at": _NOW, "updated_at": _NOW,
# Forward-compat null fields
"audio_source_id": None, "audio_source_id": None,
"channel": None, "channel": None,
}, },
@@ -370,19 +347,17 @@ def _seed_audio_sources(path: Path) -> None:
"tags": ["demo"], "tags": ["demo"],
"created_at": _NOW, "created_at": _NOW,
"updated_at": _NOW, "updated_at": _NOW,
# Forward-compat null fields
"device_index": None, "device_index": None,
"is_loopback": None, "is_loopback": None,
"audio_template_id": None, "audio_template_id": None,
}, },
} }
_write_store(path, "audio_sources", sources)
# ── Scene Presets ────────────────────────────────────────────────── # ── Scene Presets ──────────────────────────────────────────────────
def _seed_scene_presets(path: Path) -> None: def _build_scene_presets() -> dict:
presets = { return {
_SCENE_ID: { _SCENE_ID: {
"id": _SCENE_ID, "id": _SCENE_ID,
"name": "Demo Ambient", "name": "Demo Ambient",
@@ -409,4 +384,3 @@ def _seed_scene_presets(path: Path) -> None:
"updated_at": _NOW, "updated_at": _NOW,
}, },
} }
_write_store(path, "scene_presets", presets)

View File

@@ -24,6 +24,9 @@ import wled_controller.core.filters.css_filter_template # noqa: F401
 import wled_controller.core.filters.noise_gate  # noqa: F401
 import wled_controller.core.filters.palette_quantization  # noqa: F401
 import wled_controller.core.filters.reverse  # noqa: F401
+import wled_controller.core.filters.hsl_shift  # noqa: F401
+import wled_controller.core.filters.contrast  # noqa: F401
+import wled_controller.core.filters.temporal_blur  # noqa: F401
 __all__ = [
     "FilterOptionDef",

View File

@@ -3,7 +3,6 @@
 import math
 from typing import Any, Dict, List, Optional
-import cv2
 import numpy as np
 from wled_controller.core.filters.base import FilterOptionDef, PostprocessingFilter
@@ -69,12 +68,12 @@ class ColorCorrectionFilter(PostprocessingFilter):
         g_mult = (tg / _REF_G) * gg
         b_mult = (tb / _REF_B) * bg
-        # Build merged (256, 1, 3) LUT for single-pass cv2.LUT
+        # Build merged (256, 3) LUT for single-pass numpy fancy-index lookup
         src = np.arange(256, dtype=np.float32)
         lut_r = np.clip(src * r_mult, 0, 255).astype(np.uint8)
         lut_g = np.clip(src * g_mult, 0, 255).astype(np.uint8)
         lut_b = np.clip(src * b_mult, 0, 255).astype(np.uint8)
-        self._lut = np.stack([lut_r, lut_g, lut_b], axis=-1).reshape(256, 1, 3)
+        self._lut = np.stack([lut_r, lut_g, lut_b], axis=-1)  # (256, 3)
         self._is_neutral = (temp == 6500 and rg == 1.0 and gg == 1.0 and bg == 1.0)
@@ -122,5 +121,5 @@ class ColorCorrectionFilter(PostprocessingFilter):
     def process_image(self, image: np.ndarray, image_pool: ImagePool) -> Optional[np.ndarray]:
         if self._is_neutral:
             return None
-        cv2.LUT(image, self._lut, dst=image)
+        image[:] = self._lut[image, np.arange(3)]
         return None
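Indexing a `(256, 3)` LUT needs a channel index alongside the value index: `lut[image]` on its own would produce shape `(H, W, 3, 3)`, not a per-channel lookup. A self-contained check of the fancy-index form, verified against applying each channel's LUT separately:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

# lut[v, c] is the output for input value v in channel c
lut = np.stack([
    np.clip(np.arange(256) * 1.2, 0, 255),  # boost red
    np.arange(256),                          # green unchanged
    np.clip(np.arange(256) * 0.8, 0, 255),  # cut blue
], axis=-1).astype(np.uint8)

# First index array picks the input value, second picks the channel;
# (H, W, 3) broadcast against (3,) gives an (H, W, 3) result.
out = lut[image, np.arange(3)]

# Reference: apply each channel's LUT separately
ref = np.stack([lut[image[..., c], c] for c in range(3)], axis=-1)
assert np.array_equal(out, ref)
```

This single advanced-indexing expression is what replaces the `cv2.LUT` call, dropping the OpenCV dependency for this filter.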

View File

@@ -0,0 +1,49 @@
"""Contrast postprocessing filter."""
from typing import Any, Dict, List, Optional
import numpy as np
from wled_controller.core.filters.base import FilterOptionDef, PostprocessingFilter
from wled_controller.core.filters.image_pool import ImagePool
from wled_controller.core.filters.registry import FilterRegistry
@FilterRegistry.register
class ContrastFilter(PostprocessingFilter):
"""Adjusts contrast around mid-gray (128) using a lookup table.
value < 1.0 = reduced contrast (washed out)
value = 1.0 = unchanged
value > 1.0 = increased contrast (punchier)
"""
filter_id = "contrast"
filter_name = "Contrast"
def __init__(self, options: Dict[str, Any]):
super().__init__(options)
value = self.options["value"]
# LUT: output = clamp(128 + (input - 128) * value, 0, 255)
lut = np.clip(128.0 + (np.arange(256, dtype=np.float32) - 128.0) * value, 0, 255)
self._lut = lut.astype(np.uint8)
@classmethod
def get_options_schema(cls) -> List[FilterOptionDef]:
return [
FilterOptionDef(
key="value",
label="Contrast",
option_type="float",
default=1.0,
min_value=0.0,
max_value=3.0,
step=0.05,
),
]
def process_image(self, image: np.ndarray, image_pool: ImagePool) -> Optional[np.ndarray]:
if self.options["value"] == 1.0:
return None
image[:] = self._lut[image]
return None
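The LUT formula can be checked numerically in isolation: mid-gray stays fixed, values above it move up, values below it move down, and extremes clamp.

```python
import numpy as np


def contrast_lut(value: float) -> np.ndarray:
    """Build the 256-entry contrast LUT: out = clamp(128 + (in - 128) * value, 0, 255)."""
    src = np.arange(256, dtype=np.float32)
    return np.clip(128.0 + (src - 128.0) * value, 0, 255).astype(np.uint8)
```

At `value=2.0`: input 128 maps to 128, input 192 saturates at 255 (128 + 64·2 = 256), and input 96 drops to 64; `value=1.0` is the identity mapping, which is why `process_image` can return early without touching the image.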

View File

@@ -2,8 +2,8 @@
 from typing import List, Optional
-import cv2
 import numpy as np
+from PIL import Image
 from wled_controller.core.filters.base import FilterOptionDef, PostprocessingFilter
 from wled_controller.core.filters.image_pool import ImagePool
@@ -44,7 +44,8 @@ class DownscalerFilter(PostprocessingFilter):
         if new_h == h and new_w == w:
             return None
-        downscaled = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_AREA)
+        pil_img = Image.fromarray(image)
+        downscaled = np.array(pil_img.resize((new_w, new_h), Image.LANCZOS))
         result = image_pool.acquire(new_h, new_w, image.shape[2] if image.ndim == 3 else 3)
         np.copyto(result, downscaled)
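For context on what was replaced: when the target size divides the source evenly, `cv2.INTER_AREA` reduces to a plain box average over each block, which is easy to sketch in NumPy (PIL's `LANCZOS`, used above instead, is a higher-quality windowed-sinc resampler and will give slightly different values):

```python
import numpy as np


def area_downscale(image: np.ndarray, factor: int) -> np.ndarray:
    """Box-filter downscale by an integer factor: average each factor x factor block."""
    h, w, c = image.shape
    assert h % factor == 0 and w % factor == 0
    blocks = image.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3)).astype(image.dtype)
```

Averaging over whole blocks is what makes area-style downscaling alias-free for LED sampling; any pure-interpolation mode would skip source pixels at large reduction ratios.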

View File

@@ -0,0 +1,146 @@
"""HSL shift postprocessing filter — hue rotation and lightness adjustment."""

from typing import Any, Dict, List, Optional

import numpy as np

from wled_controller.core.filters.base import FilterOptionDef, PostprocessingFilter
from wled_controller.core.filters.image_pool import ImagePool
from wled_controller.core.filters.registry import FilterRegistry


@FilterRegistry.register
class HslShiftFilter(PostprocessingFilter):
    """Shifts hue and lightness of all pixels via vectorized float math.

    Hue is rotated by a fixed offset (0-360 degrees).
    Lightness is scaled by a multiplier (0.0 = black, 1.0 = unchanged, 2.0 = bright).
    """

    filter_id = "hsl_shift"
    filter_name = "HSL Shift"

    def __init__(self, options: Dict[str, Any]):
        super().__init__(options)
        self._f32_buf: Optional[np.ndarray] = None

    @classmethod
    def get_options_schema(cls) -> List[FilterOptionDef]:
        return [
            FilterOptionDef(
                key="hue",
                label="Hue Shift",
                option_type="int",
                default=0,
                min_value=0,
                max_value=359,
                step=1,
            ),
            FilterOptionDef(
                key="lightness",
                label="Lightness",
                option_type="float",
                default=1.0,
                min_value=0.0,
                max_value=2.0,
                step=0.05,
            ),
        ]

    def process_image(self, image: np.ndarray, image_pool: ImagePool) -> Optional[np.ndarray]:
        hue_shift = self.options["hue"]
        lightness = self.options["lightness"]
        if hue_shift == 0 and lightness == 1.0:
            return None
        h, w, c = image.shape
        n = h * w
        # Flatten to (N, 3) float32
        flat = image.reshape(n, c)
        if self._f32_buf is None or self._f32_buf.shape[0] != n:
            self._f32_buf = np.empty((n, 3), dtype=np.float32)
        buf = self._f32_buf
        np.copyto(buf, flat, casting="unsafe")
        buf *= (1.0 / 255.0)
        r = buf[:, 0]
        g = buf[:, 1]
        b = buf[:, 2]
        # RGB -> HSL (vectorized)
        cmax = np.maximum(np.maximum(r, g), b)
        cmin = np.minimum(np.minimum(r, g), b)
        delta = cmax - cmin
        light = (cmax + cmin) * 0.5
        # Hue calculation
        hue = np.zeros(n, dtype=np.float32)
        mask_nonzero = delta > 1e-6
        if np.any(mask_nonzero):
            d = delta[mask_nonzero]
            rm = r[mask_nonzero]
            gm = g[mask_nonzero]
            bm = b[mask_nonzero]
            cm = cmax[mask_nonzero]
            h_val = np.zeros_like(d)
            mr = cm == rm
            mg = (~mr) & (cm == gm)
            mb = (~mr) & (~mg)
            h_val[mr] = ((gm[mr] - bm[mr]) / d[mr]) % 6.0
            h_val[mg] = (bm[mg] - rm[mg]) / d[mg] + 2.0
            h_val[mb] = (rm[mb] - gm[mb]) / d[mb] + 4.0
            h_val *= 60.0
            h_val[h_val < 0] += 360.0
            hue[mask_nonzero] = h_val
        # Saturation
        sat = np.zeros(n, dtype=np.float32)
        mask_sat = mask_nonzero & (light > 1e-6) & (light < 1.0 - 1e-6)
        if np.any(mask_sat):
            sat[mask_sat] = delta[mask_sat] / (1.0 - np.abs(2.0 * light[mask_sat] - 1.0))
        # Apply shifts
        if hue_shift != 0:
            hue = (hue + hue_shift) % 360.0
        if lightness != 1.0:
            light = np.clip(light * lightness, 0.0, 1.0)
        # HSL -> RGB (vectorized)
        c_val = (1.0 - np.abs(2.0 * light - 1.0)) * sat
        x_val = c_val * (1.0 - np.abs((hue / 60.0) % 2.0 - 1.0))
        m_val = light - c_val * 0.5
        sector = (hue / 60.0).astype(np.int32) % 6
        ro = np.empty(n, dtype=np.float32)
        go = np.empty(n, dtype=np.float32)
        bo = np.empty(n, dtype=np.float32)
        for s, rv, gv, bv in (
            (0, c_val, x_val, 0.0),
            (1, x_val, c_val, 0.0),
            (2, 0.0, c_val, x_val),
            (3, 0.0, x_val, c_val),
            (4, x_val, 0.0, c_val),
            (5, c_val, 0.0, x_val),
        ):
            mask_s = sector == s
            if not np.any(mask_s):
                continue
            rv_arr = rv[mask_s] if not isinstance(rv, float) else rv
            gv_arr = gv[mask_s] if not isinstance(gv, float) else gv
            bv_arr = bv[mask_s] if not isinstance(bv, float) else bv
            ro[mask_s] = rv_arr + m_val[mask_s]
            go[mask_s] = gv_arr + m_val[mask_s]
            bo[mask_s] = bv_arr + m_val[mask_s]
        buf[:, 0] = ro
        buf[:, 1] = go
        buf[:, 2] = bo
        np.clip(buf, 0.0, 1.0, out=buf)
        buf *= 255.0
        np.copyto(flat, buf, casting="unsafe")
        return None
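Editor's note: the filter's RGB→HSL→RGB round trip can be cross-checked against the standard library. Below is a minimal scalar sketch of the same operation using `colorsys` (which uses H, L, S ordering); the helper name `hsl_shift_pixel` is ours for illustration, not part of the codebase.

```python
import colorsys

def hsl_shift_pixel(rgb, hue_shift_deg=0, lightness_mul=1.0):
    """Scalar reference for the vectorized filter: rotate hue, scale lightness."""
    r, g, b = (c / 255.0 for c in rgb)
    # colorsys orders the tuple as (hue, lightness, saturation)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + hue_shift_deg / 360.0) % 1.0
    l = min(1.0, max(0.0, l * lightness_mul))
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r2, g2, b2))
```

Rotating pure red by 120 degrees should land exactly on pure green, which makes this easy to sanity-check.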


@@ -2,8 +2,8 @@
 from typing import List, Optional
-import cv2
 import numpy as np
+from PIL import Image
 from wled_controller.core.filters.base import FilterOptionDef, PostprocessingFilter
 from wled_controller.core.filters.image_pool import ImagePool
@@ -42,8 +42,9 @@ class PixelateFilter(PostprocessingFilter):
         # vectorized C++ instead of per-block Python loop
         small_w = max(1, w // block_size)
         small_h = max(1, h // block_size)
-        small = cv2.resize(image, (small_w, small_h), interpolation=cv2.INTER_AREA)
-        pixelated = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
+        pil_img = Image.fromarray(image)
+        small = pil_img.resize((small_w, small_h), Image.LANCZOS)
+        pixelated = np.array(small.resize((w, h), Image.NEAREST))
        np.copyto(image, pixelated)
         return None


@@ -0,0 +1,77 @@
"""Temporal blur postprocessing filter — blends current frame with history."""

from typing import Any, Dict, List, Optional

import numpy as np

from wled_controller.core.filters.base import FilterOptionDef, PostprocessingFilter
from wled_controller.core.filters.image_pool import ImagePool
from wled_controller.core.filters.registry import FilterRegistry


@FilterRegistry.register
class TemporalBlurFilter(PostprocessingFilter):
    """Blends each frame with a running accumulator for motion smoothing.

    Uses exponential moving average: acc = (1 - strength) * frame + strength * acc
    Higher strength = more blur / longer trails.
    """

    filter_id = "temporal_blur"
    filter_name = "Temporal Blur"

    def __init__(self, options: Dict[str, Any]):
        super().__init__(options)
        self._acc: Optional[np.ndarray] = None

    @classmethod
    def get_options_schema(cls) -> List[FilterOptionDef]:
        return [
            FilterOptionDef(
                key="strength",
                label="Strength",
                option_type="float",
                default=0.5,
                min_value=0.0,
                max_value=0.95,
                step=0.05,
            ),
        ]

    def process_image(self, image: np.ndarray, image_pool: ImagePool) -> Optional[np.ndarray]:
        strength = self.options["strength"]
        if strength == 0.0:
            self._acc = None
            return None
        h, w, c = image.shape
        shape = (h, w, c)
        if self._acc is None or self._acc.shape != shape:
            self._acc = image.astype(np.float32)
            return None
        # EMA: acc = strength * acc + (1 - strength) * current
        new_weight = 1.0 - strength
        self._acc *= strength
        self._acc += new_weight * image
        np.copyto(image, self._acc, casting="unsafe")
        return None

    def process_strip(self, strip: np.ndarray) -> Optional[np.ndarray]:
        """Optimized strip path — avoids reshape overhead."""
        strength = self.options["strength"]
        if strength == 0.0:
            self._acc = None
            return None
        shape = strip.shape
        if self._acc is None or self._acc.shape != shape:
            self._acc = strip.astype(np.float32)
            return None
        new_weight = 1.0 - strength
        self._acc *= strength
        self._acc += new_weight * strip
        np.copyto(strip, self._acc, casting="unsafe")
        return None
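Editor's note: the exponential moving average used here is easy to verify by hand. A self-contained sketch of the same recurrence over a list of frames (our helper name, not the filter's API):

```python
import numpy as np

def ema_blur(frames, strength=0.5):
    """Fold frames through acc = strength * acc + (1 - strength) * frame."""
    acc = None
    out = []
    for f in frames:
        f = np.asarray(f, dtype=np.float32)
        # First frame seeds the accumulator, matching the filter's warm-up path
        acc = f.copy() if acc is None else strength * acc + (1.0 - strength) * f
        out.append(acc.copy())
    return out
```

With strength 0.5, a step from 0 to 100 produces 50 then 75: each frame closes half the remaining gap, which is the "trail" behaviour the docstring describes.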


@@ -17,6 +17,7 @@ import numpy as np
 from wled_controller.core.audio.analysis import NUM_BANDS
 from wled_controller.core.audio.audio_capture import AudioCaptureManager
+from wled_controller.core.audio.band_filter import apply_band_filter, compute_band_mask
 from wled_controller.core.processing.color_strip_stream import ColorStripStream
 from wled_controller.core.processing.effect_stream import _build_palette_lut
 from wled_controller.utils import get_logger
@@ -58,14 +59,32 @@ class AudioColorStripStream(ColorStripStream):
         self._prev_spectrum: Optional[np.ndarray] = None
         self._prev_rms = 0.0
+        self._gradient_store = None  # injected by stream manager
         self._update_from_source(source)

+    def set_gradient_store(self, gradient_store) -> None:
+        """Inject gradient store for palette resolution."""
+        self._gradient_store = gradient_store
+        self._resolve_palette_lut()
+
+    def _resolve_palette_lut(self) -> None:
+        """Build palette LUT from gradient_id or legacy palette name."""
+        gradient_id = self._gradient_id
+        if gradient_id and self._gradient_store:
+            stops = self._gradient_store.resolve_stops(gradient_id)
+            if stops:
+                custom = [[s["position"], *s["color"]] for s in stops]
+                self._palette_lut = _build_palette_lut("custom", custom)
+                return
+        self._palette_lut = _build_palette_lut(self._palette_name)
+
     def _update_from_source(self, source) -> None:
         self._visualization_mode = getattr(source, "visualization_mode", "spectrum")
         self._sensitivity = float(getattr(source, "sensitivity", 1.0))
         self._smoothing = float(getattr(source, "smoothing", 0.3))
+        self._gradient_id = getattr(source, "gradient_id", None)
         self._palette_name = getattr(source, "palette", "rainbow")
-        self._palette_lut = _build_palette_lut(self._palette_name)
+        self._resolve_palette_lut()
         color = getattr(source, "color", None)
         self._color = color if isinstance(color, list) and len(color) == 3 else [0, 255, 0]
         color_peak = getattr(source, "color_peak", None)
@@ -82,17 +101,18 @@ class AudioColorStripStream(ColorStripStream):
         self._audio_source_id = audio_source_id
         self._audio_engine_type = None
         self._audio_engine_config = None
+        self._band_mask = None  # precomputed band filter mask (None = full range)
         if audio_source_id and self._audio_source_store:
             try:
-                device_index, is_loopback, channel, template_id = (
-                    self._audio_source_store.resolve_audio_source(audio_source_id)
-                )
-                self._audio_device_index = device_index
-                self._audio_loopback = is_loopback
-                self._audio_channel = channel
-                if template_id and self._audio_template_store:
+                resolved = self._audio_source_store.resolve_audio_source(audio_source_id)
+                self._audio_device_index = resolved.device_index
+                self._audio_loopback = resolved.is_loopback
+                self._audio_channel = resolved.channel
+                if resolved.freq_low is not None and resolved.freq_high is not None:
+                    self._band_mask = compute_band_mask(resolved.freq_low, resolved.freq_high)
+                if resolved.audio_template_id and self._audio_template_store:
                     try:
-                        tpl = self._audio_template_store.get_template(template_id)
+                        tpl = self._audio_template_store.get_template(resolved.audio_template_id)
                         self._audio_engine_type = tpl.engine_type
                         self._audio_engine_config = tpl.engine_config
                     except ValueError:
@@ -302,12 +322,16 @@ class AudioColorStripStream(ColorStripStream):
     # ── Channel selection ─────────────────────────────────────────

     def _pick_channel(self, analysis):
-        """Return (spectrum, rms) for the configured audio channel."""
+        """Return (spectrum, rms) for the configured audio channel, with band filtering."""
         if self._audio_channel == "left":
-            return analysis.left_spectrum, analysis.left_rms
+            spectrum, rms = analysis.left_spectrum, analysis.left_rms
         elif self._audio_channel == "right":
-            return analysis.right_spectrum, analysis.right_rms
-        return analysis.spectrum, analysis.rms
+            spectrum, rms = analysis.right_spectrum, analysis.right_rms
+        else:
+            spectrum, rms = analysis.spectrum, analysis.rms
+        if self._band_mask is not None:
+            spectrum, rms = apply_band_filter(spectrum, rms, self._band_mask)
+        return spectrum, rms

     # ── Spectrum Analyzer ──────────────────────────────────────────
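Editor's note: `compute_band_mask` and `apply_band_filter` themselves are not shown in this diff. One plausible shape for them is sketched below; the band count, the geometric spacing of band centre frequencies, and the RMS rescaling rule are all assumptions for illustration, not the project's actual implementation.

```python
import numpy as np

NUM_BANDS = 16  # assumed band count
# Assumed per-band centre frequencies, log-spaced across the audible range
BAND_FREQS = np.geomspace(20.0, 20000.0, NUM_BANDS)

def compute_band_mask(freq_low: float, freq_high: float) -> np.ndarray:
    """Boolean mask selecting bands whose centre lies inside [freq_low, freq_high]."""
    return (BAND_FREQS >= freq_low) & (BAND_FREQS <= freq_high)

def apply_band_filter(spectrum: np.ndarray, rms: float, mask: np.ndarray):
    """Zero out-of-range bands; scale RMS by the surviving energy share."""
    filtered = np.where(mask, spectrum, 0.0)
    total = float(np.sum(spectrum))
    share = float(np.sum(filtered)) / total if total > 0 else 0.0
    return filtered, rms * share
```

Precomputing the mask once in `_update_from_source` (as the diff does) keeps the per-frame cost to a single masked multiply.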


@@ -4,12 +4,17 @@ Implements CandlelightColorStripStream which produces warm, organic
 flickering across all LEDs using layered sine waves and value noise.
 Each "candle" is an independent flicker source that illuminates
 nearby LEDs with smooth falloff.
+
+Features:
+- Wind simulation: correlated brightness drops across all candles
+- Candle type presets: taper / votive / bonfire
+- Wax drip effect: localized brightness dips that recover over ~0.5 s
 """

 import math
 import threading
 import time
-from typing import Optional
+from typing import List, Optional

 import numpy as np
@@ -38,6 +43,18 @@ def _noise1d(x: np.ndarray) -> np.ndarray:
     return a + u * (b - a)


+# ── Candle type preset multipliers ──────────────────────────────────
+# (flicker_amplitude_mul, speed_mul, sigma_mul, warm_bonus)
+_CANDLE_PRESETS: dict = {
+    "default": (1.0, 1.0, 1.0, 0.0),
+    "taper": (0.5, 1.3, 0.8, 0.0),      # tall, steady
+    "votive": (1.5, 1.0, 0.7, 0.0),     # small, flickery
+    "bonfire": (2.0, 1.0, 1.5, 0.12),   # chaotic, warmer shift
+}
+_VALID_CANDLE_TYPES = frozenset(_CANDLE_PRESETS)
+
+
 class CandlelightColorStripStream(ColorStripStream):
     """Color strip stream simulating realistic candle flickering.
@@ -59,7 +76,12 @@ class CandlelightColorStripStream(ColorStripStream):
         self._s_bright: Optional[np.ndarray] = None
         self._s_noise: Optional[np.ndarray] = None
         self._s_x: Optional[np.ndarray] = None
+        self._s_drip: Optional[np.ndarray] = None
         self._pool_n = 0
+        # Wax drip events: [pos, brightness, phase(0=dim,1=recover)]
+        self._drip_events: List[List[float]] = []
+        self._drip_rng = np.random.RandomState(seed=42)
+        self._last_drip_t = 0.0
         self._update_from_source(source)

     def _update_from_source(self, source) -> None:
@@ -68,6 +90,9 @@ class CandlelightColorStripStream(ColorStripStream):
         self._intensity = float(getattr(source, "intensity", 1.0))
         self._num_candles = max(1, int(getattr(source, "num_candles", 3)))
         self._speed = float(getattr(source, "speed", 1.0))
+        self._wind_strength = float(getattr(source, "wind_strength", 0.0))
+        raw_type = getattr(source, "candle_type", "default")
+        self._candle_type = raw_type if raw_type in _VALID_CANDLE_TYPES else "default"
         _lc = getattr(source, "led_count", 0)
         self._auto_size = not _lc
         self._led_count = _lc if _lc and _lc > 0 else 1
@@ -161,11 +186,13 @@ class CandlelightColorStripStream(ColorStripStream):
                 self._s_bright = np.empty(n, dtype=np.float32)
                 self._s_noise = np.empty(n, dtype=np.float32)
                 self._s_x = np.arange(n, dtype=np.float32)
+                self._s_drip = np.ones(n, dtype=np.float32)

             buf = _buf_a if _use_a else _buf_b
             _use_a = not _use_a

-            self._render_candlelight(buf, n, t, speed)
+            self._update_drip_events(n, wall_start, frame_time)
+            self._render_candlelight(buf, n, t, speed, wall_start)

             with self._colors_lock:
                 self._colors = buf
@@ -179,26 +206,68 @@ class CandlelightColorStripStream(ColorStripStream):
         finally:
             self._running = False

-    def _render_candlelight(self, buf: np.ndarray, n: int, t: float, speed: float) -> None:
-        """Render candle flickering into buf (n, 3) uint8.
-
-        Algorithm:
-        - Place num_candles evenly along the strip
-        - Each candle has independent layered-sine flicker
-        - Spatial falloff: LEDs near a candle are brighter
-        - Per-LED noise adds individual variation
-        - Final brightness modulates the base warm color
-        """
-        # Scale speed so that speed=1 gives a gentle ~1.3 Hz dominant flicker
-        speed = speed * 0.35
+    # ── Drip management ─────────────────────────────────────────────
+
+    def _update_drip_events(self, n: int, wall_t: float, dt: float) -> None:
+        """Spawn new wax drip events and advance existing ones."""
+        intensity = self._intensity
+        spawn_interval = max(0.3, 1.0 / max(intensity, 0.01))
+        if wall_t - self._last_drip_t >= spawn_interval and len(self._drip_events) < 5:
+            self._last_drip_t = wall_t
+            pos = float(self._drip_rng.randint(0, max(n, 1)))
+            self._drip_events.append([pos, 1.0, 0])
+        surviving = []
+        for drip in self._drip_events:
+            pos, bright, phase = drip
+            if phase == 0:
+                bright -= dt / 0.2 * 0.7
+                if bright <= 0.3:
+                    bright = 0.3
+                    drip[2] = 1
+            else:
+                bright += dt / 0.5 * 0.7
+                if bright >= 1.0:
+                    continue  # drip complete
+            drip[1] = bright
+            surviving.append(drip)
+        self._drip_events = surviving
+        # Build per-LED drip factor
+        drip_arr = self._s_drip
+        drip_arr[:n] = 1.0
+        x = self._s_x[:n]
+        for drip in self._drip_events:
+            pos, bright, _phase = drip
+            dist = x - pos
+            falloff = np.exp(-0.5 * dist * dist / 4.0)
+            drip_arr[:n] *= 1.0 - (1.0 - bright) * falloff
+
+    # ── Render ──────────────────────────────────────────────────────
+
+    def _render_candlelight(self, buf: np.ndarray, n: int, t: float, speed: float, wall_t: float) -> None:
+        """Render candle flickering into buf (n, 3) uint8."""
+        amp_mul, spd_mul, sigma_mul, warm_bonus = _CANDLE_PRESETS[self._candle_type]
+        eff_speed = speed * 0.35 * spd_mul
         intensity = self._intensity
         num_candles = self._num_candles
         base_r, base_g, base_b = self._color[0], self._color[1], self._color[2]
-        bright = self._s_bright
-        bright[:] = 0.0
+        # Wind modulation
+        wind_strength = self._wind_strength
+        if wind_strength > 0.0:
+            wind_raw = (
+                0.6 * math.sin(2.0 * math.pi * 0.15 * wall_t)
+                + 0.4 * math.sin(2.0 * math.pi * 0.27 * wall_t + 1.1)
+            )
+            wind_mod = max(0.0, wind_raw)
+        else:
+            wind_mod = 0.0
+        bright = self._s_bright
+        bright[:n] = 0.0
+        # Candle positions: evenly distributed
         if num_candles == 1:
             positions = [n / 2.0]
         else:
@@ -207,42 +276,42 @@ class CandlelightColorStripStream(ColorStripStream):
         x = self._s_x[:n]
         for ci, pos in enumerate(positions):
-            # Independent flicker for this candle: layered sines at different frequencies
-            # Use candle index as phase offset for independence
-            offset = ci * 137.5  # golden-angle offset for non-repeating
+            offset = ci * 137.5
             flicker = (
-                0.40 * math.sin(2 * math.pi * speed * t * 3.7 + offset)
-                + 0.25 * math.sin(2 * math.pi * speed * t * 7.3 + offset * 0.7)
-                + 0.15 * math.sin(2 * math.pi * speed * t * 13.1 + offset * 1.3)
-                + 0.10 * math.sin(2 * math.pi * speed * t * 1.9 + offset * 0.3)
+                0.40 * math.sin(2.0 * math.pi * eff_speed * t * 3.7 + offset)
+                + 0.25 * math.sin(2.0 * math.pi * eff_speed * t * 7.3 + offset * 0.7)
+                + 0.15 * math.sin(2.0 * math.pi * eff_speed * t * 13.1 + offset * 1.3)
+                + 0.10 * math.sin(2.0 * math.pi * eff_speed * t * 1.9 + offset * 0.3)
             )
-            # Normalize flicker to [0.3, 1.0] range (candles never fully go dark)
-            candle_brightness = 0.65 + 0.35 * flicker * intensity
-            # Spatial falloff: Gaussian centered on candle position
-            # sigma proportional to strip length / num_candles
-            sigma = max(n / (num_candles * 2.0), 2.0)
+            candle_brightness = 0.65 + 0.35 * flicker * intensity * amp_mul
+            if wind_strength > 0.0:
+                candle_brightness *= (1.0 - wind_strength * wind_mod * 0.4)
+                candle_brightness = max(0.1, candle_brightness)
+            sigma = max(n / (num_candles * 2.0), 2.0) * sigma_mul
             dist = x - pos
             falloff = np.exp(-0.5 * (dist * dist) / (sigma * sigma))
-            bright += candle_brightness * falloff
+            bright[:n] += candle_brightness * falloff

-        # Per-LED noise for individual variation
-        noise_x = x * 0.3 + t * speed * 5.0
+        # Per-LED noise
+        noise_x = x * 0.3 + t * eff_speed * 5.0
         noise = _noise1d(noise_x)
-        # Modulate brightness with noise (±15%)
-        bright *= (0.85 + 0.30 * noise)
-        # Clamp to [0, 1]
-        np.clip(bright, 0.0, 1.0, out=bright)
-        # Apply base color with brightness modulation
-        # Candles emit warmer (more red, less blue) at lower brightness
-        # Add slight color variation: dimmer = warmer
-        warm_shift = (1.0 - bright) * 0.3
-        r = bright * base_r
-        g = bright * base_g * (1.0 - warm_shift * 0.5)
-        b = bright * base_b * (1.0 - warm_shift)
+        bright[:n] *= (0.85 + 0.30 * noise)
+        # Wax drip factor
+        bright[:n] *= self._s_drip[:n]
+        np.clip(bright[:n], 0.0, 1.0, out=bright[:n])
+        # Colour mapping: dimmer = warmer
+        warm_shift = (1.0 - bright[:n]) * (0.3 + warm_bonus)
+        r = bright[:n] * base_r
+        g = bright[:n] * base_g * (1.0 - warm_shift * 0.5)
+        b = bright[:n] * base_b * (1.0 - warm_shift)
         buf[:, 0] = np.clip(r, 0, 255).astype(np.uint8)
         buf[:, 1] = np.clip(g, 0, 255).astype(np.uint8)
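Editor's note: the flicker signal above is a sum of four sines at incommensurate frequency ratios, so it never visibly repeats. A standalone sketch of just that signal (constants copied from the diff; the function name is ours) shows its amplitude is bounded by the sum of the weights, 0.40 + 0.25 + 0.15 + 0.10 = 0.90, which is what keeps `0.65 + 0.35 * flicker` from going fully dark.

```python
import math

def flicker(t: float, offset: float = 0.0, speed: float = 0.35) -> float:
    """Layered sines at incommensurate frequencies: organic flicker in [-0.9, 0.9]."""
    return (
        0.40 * math.sin(2.0 * math.pi * speed * t * 3.7 + offset)
        + 0.25 * math.sin(2.0 * math.pi * speed * t * 7.3 + offset * 0.7)
        + 0.15 * math.sin(2.0 * math.pi * speed * t * 13.1 + offset * 1.3)
        + 0.10 * math.sin(2.0 * math.pi * speed * t * 1.9 + offset * 0.3)
    )
```

Using a golden-angle phase offset per candle (137.5) decorrelates neighbouring candles without extra state.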


@@ -24,6 +24,27 @@ from wled_controller.core.capture.screen_capture import extract_border_pixels
 from wled_controller.utils import get_logger
 from wled_controller.utils.timer import high_resolution_timer

+
+class _SimpleNoise1D:
+    """Minimal 1-D value noise for gradient perturbation (avoids circular import)."""
+
+    def __init__(self, seed: int = 99):
+        rng = np.random.RandomState(seed)
+        self._table = rng.random(512).astype(np.float32)
+
+    def noise(self, x: np.ndarray) -> np.ndarray:
+        size = len(self._table)
+        xi = np.floor(x).astype(np.int64)
+        frac = x - np.floor(x)
+        t = frac * frac * (3.0 - 2.0 * frac)
+        a = self._table[xi % size]
+        b = self._table[(xi + 1) % size]
+        return a + t * (b - a)
+
+
+# Module-level noise for gradient perturbation
+_gradient_noise = _SimpleNoise1D(seed=99)
+
 logger = get_logger(__name__)
@@ -348,7 +369,7 @@ class PictureColorStripStream(ColorStripStream):
         self._running = False

-def _compute_gradient_colors(stops: list, led_count: int) -> np.ndarray:
+def _compute_gradient_colors(stops: list, led_count: int, easing: str = "linear") -> np.ndarray:
     """Compute an (led_count, 3) uint8 array from gradient color stops.

     Each stop: {"position": float 0-1, "color": [R,G,B], "color_right": [R,G,B] | absent}
@@ -361,7 +382,7 @@ def _compute_gradient_colors(stops: list, led_count: int) -> np.ndarray:
     left_color = A["color_right"] if present, else A["color"]
     right_color = B["color"]
     t = (p - A.pos) / (B.pos - A.pos)
-    color = lerp(left_color, right_color, t)
+    color = lerp(left_color, right_color, eased(t))
     """
     if led_count <= 0:
         led_count = 1
@@ -412,6 +433,15 @@ def _compute_gradient_colors(stops: list, led_count: int) -> np.ndarray:
     span = b_pos - a_pos
     t = np.where(span > 0, (between_pos - a_pos) / span, 0.0)
+    # Apply easing to interpolation parameter
+    if easing == "ease_in_out":
+        t = t * t * (3.0 - 2.0 * t)
+    elif easing == "cubic":
+        t = np.where(t < 0.5, 4.0 * t * t * t, 1.0 - (-2.0 * t + 2.0) ** 3 / 2.0)
+    elif easing == "step":
+        steps = float(max(2, n_stops))
+        t = np.round(t * steps) / steps
     a_colors = right_colors[idx]      # A's right color
     b_colors = left_colors[idx + 1]   # B's left color
     result[mask_between] = a_colors + t[:, np.newaxis] * (b_colors - a_colors)
@@ -821,19 +851,38 @@ class GradientColorStripStream(ColorStripStream):
         self._fps = 30
         self._frame_time = 1.0 / 30
         self._clock = None  # optional SyncClockRuntime
+        self._gradient_store = None  # injected by stream manager
         self._update_from_source(source)

+    def set_gradient_store(self, gradient_store) -> None:
+        """Inject gradient store for resolving gradient_id to stops."""
+        self._gradient_store = gradient_store
+        # Re-resolve stops if gradient_id is set
+        gradient_id = getattr(self, "_gradient_id", None)
+        if gradient_id and self._gradient_store:
+            stops = self._gradient_store.resolve_stops(gradient_id)
+            if stops:
+                self._stops = stops
+                self._rebuild_colors()
+
     def _update_from_source(self, source) -> None:
+        self._gradient_id = getattr(source, "gradient_id", None)
         self._stops = list(source.stops) if source.stops else []
+        # Override inline stops with gradient entity if set
+        if self._gradient_id and self._gradient_store:
+            resolved = self._gradient_store.resolve_stops(self._gradient_id)
+            if resolved:
+                self._stops = resolved
         _lc = getattr(source, "led_count", 0)
         self._auto_size = not _lc
         led_count = _lc if _lc and _lc > 0 else 1
         self._led_count = led_count
         self._animation = source.animation  # dict or None; read atomically by _animate_loop
+        self._easing = getattr(source, "easing", "linear") or "linear"
         self._rebuild_colors()

     def _rebuild_colors(self) -> None:
-        colors = _compute_gradient_colors(self._stops, self._led_count)
+        colors = _compute_gradient_colors(self._stops, self._led_count, self._easing)
         with self._colors_lock:
             self._colors = colors
@@ -919,6 +968,7 @@ class GradientColorStripStream(ColorStripStream):
         _cached_base: Optional[np.ndarray] = None
         _cached_n: int = 0
         _cached_stops: Optional[list] = None
+        _cached_easing: str = ""
         # Double-buffer pool + uint16 scratch for brightness math
         _pool_n = 0
         _buf_a = _buf_b = _scratch_u16 = None
@@ -950,11 +1000,13 @@ class GradientColorStripStream(ColorStripStream):
             stops = self._stops
             colors = None

-            # Recompute base gradient only when stops or led_count change
-            if _cached_base is None or _cached_n != n or _cached_stops is not stops:
-                _cached_base = _compute_gradient_colors(stops, n)
+            # Recompute base gradient only when stops, led_count, or easing change
+            easing = self._easing
+            if _cached_base is None or _cached_n != n or _cached_stops is not stops or _cached_easing != easing:
+                _cached_base = _compute_gradient_colors(stops, n, easing)
                 _cached_n = n
                 _cached_stops = stops
+                _cached_easing = easing
             base = _cached_base

             # Re-allocate pool only when LED count changes
@@ -1099,6 +1151,70 @@ class GradientColorStripStream(ColorStripStream):
             buf[:, 2] = np.clip(bo * 255.0, 0, 255).astype(np.uint8)
             colors = buf

+        elif atype == "noise_perturb":
+            # Perturb gradient stop positions with value noise
+            perturbed = []
+            for si, s in enumerate(stops):
+                noise_val = _gradient_noise.noise(
+                    np.array([si * 10.0 + t * speed], dtype=np.float32)
+                )[0]
+                new_pos = min(1.0, max(0.0,
+                    float(s.get("position", 0)) + (noise_val - 0.5) * 0.2
+                ))
+                perturbed.append(dict(s, position=new_pos))
+            buf[:] = _compute_gradient_colors(perturbed, n, easing)
+            colors = buf
+
+        elif atype == "hue_rotate":
+            # Rotate hue while preserving original S/V
+            h_shift = (speed * t * 0.1) % 1.0
+            rgb_f = base.astype(np.float32) * (1.0 / 255.0)
+            r_f = rgb_f[:, 0]
+            g_f = rgb_f[:, 1]
+            b_f = rgb_f[:, 2]
+            cmax = np.maximum(np.maximum(r_f, g_f), b_f)
+            cmin = np.minimum(np.minimum(r_f, g_f), b_f)
+            delta = cmax - cmin
+            # Hue
+            h_arr = np.zeros(n, dtype=np.float32)
+            mask_r = (delta > 0) & (cmax == r_f)
+            mask_g = (delta > 0) & (cmax == g_f) & ~mask_r
+            mask_b = (delta > 0) & ~mask_r & ~mask_g
+            h_arr[mask_r] = ((g_f[mask_r] - b_f[mask_r]) / delta[mask_r]) % 6.0
+            h_arr[mask_g] = ((b_f[mask_g] - r_f[mask_g]) / delta[mask_g]) + 2.0
+            h_arr[mask_b] = ((r_f[mask_b] - g_f[mask_b]) / delta[mask_b]) + 4.0
+            h_arr *= (1.0 / 6.0)
+            h_arr %= 1.0
+            # S and V — preserve original values (no clamping)
+            s_arr = np.where(cmax > 0, delta / cmax, np.float32(0))
+            v_arr = cmax
+            # Shift hue
+            h_arr += h_shift
+            h_arr %= 1.0
+            # HSV->RGB
+            h6 = h_arr * 6.0
+            hi = h6.astype(np.int32) % 6
+            f_arr = h6 - np.floor(h6)
+            p = v_arr * (1.0 - s_arr)
+            q = v_arr * (1.0 - s_arr * f_arr)
+            tt = v_arr * (1.0 - s_arr * (1.0 - f_arr))
+            ro = np.empty(n, dtype=np.float32)
+            go = np.empty(n, dtype=np.float32)
+            bo = np.empty(n, dtype=np.float32)
+            for sxt, rv, gv, bv in (
+                (0, v_arr, tt, p), (1, q, v_arr, p),
+                (2, p, v_arr, tt), (3, p, q, v_arr),
+                (4, tt, p, v_arr), (5, v_arr, p, q),
+            ):
+                m = hi == sxt
+                ro[m] = rv[m]
+                go[m] = gv[m]
+                bo[m] = bv[m]
+            buf[:, 0] = np.clip(ro * 255.0, 0, 255).astype(np.uint8)
+            buf[:, 1] = np.clip(go * 255.0, 0, 255).astype(np.uint8)
+            buf[:, 2] = np.clip(bo * 255.0, 0, 255).astype(np.uint8)
+            colors = buf
+
         if colors is not None:
             with self._colors_lock:
                 self._colors = colors
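Editor's note: the three easing modes added to `_compute_gradient_colors` are standard remappings of the 0-1 interpolation parameter. A standalone sketch with the formulas copied from the diff (the wrapper name `ease` is ours):

```python
import numpy as np

def ease(t: np.ndarray, mode: str = "linear", n_stops: int = 2) -> np.ndarray:
    """Remap the 0-1 interpolation parameter per the diff's easing modes."""
    if mode == "ease_in_out":  # smoothstep
        return t * t * (3.0 - 2.0 * t)
    if mode == "cubic":        # easeInOutCubic
        return np.where(t < 0.5, 4.0 * t ** 3, 1.0 - (-2.0 * t + 2.0) ** 3 / 2.0)
    if mode == "step":         # quantize into n_stops plateaus
        steps = float(max(2, n_stops))
        return np.round(t * steps) / steps
    return t
```

All three fix the endpoints (0 maps to 0, 1 maps to 1), so easing only reshapes the transition between stops without moving the stop colours themselves.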


@@ -69,7 +69,7 @@ class ColorStripStreamManager:
     keyed by ``{css_id}:{consumer_id}``.
     """

-    def __init__(self, color_strip_store, live_stream_manager, audio_capture_manager=None, audio_source_store=None, audio_template_store=None, sync_clock_manager=None, value_stream_manager=None, cspt_store=None):
+    def __init__(self, color_strip_store, live_stream_manager, audio_capture_manager=None, audio_source_store=None, audio_template_store=None, sync_clock_manager=None, value_stream_manager=None, cspt_store=None, gradient_store=None, weather_manager=None):
         """
         Args:
             color_strip_store: ColorStripStore for resolving source configs
@@ -79,6 +79,7 @@ class ColorStripStreamManager:
             sync_clock_manager: SyncClockManager for acquiring clock runtimes
             value_stream_manager: ValueStreamManager for per-layer brightness sources
             cspt_store: ColorStripProcessingTemplateStore for per-layer filter chains
+            gradient_store: GradientStore for resolving gradient entity references
         """
         self._color_strip_store = color_strip_store
         self._live_stream_manager = live_stream_manager
@@ -88,6 +89,8 @@ class ColorStripStreamManager:
         self._sync_clock_manager = sync_clock_manager
         self._value_stream_manager = value_stream_manager
         self._cspt_store = cspt_store
+        self._gradient_store = gradient_store
+        self._weather_manager = weather_manager
         self._streams: Dict[str, _ColorStripEntry] = {}

     def _inject_clock(self, css_stream, source) -> Optional[str]:
@@ -170,6 +173,9 @@ class ColorStripStreamManager:
             css_stream = MappedColorStripStream(source, self)
         elif source.source_type == "processed":
             css_stream = ProcessedColorStripStream(source, self, self._cspt_store)
+        elif source.source_type == "weather":
+            from wled_controller.core.processing.weather_stream import WeatherColorStripStream
+            css_stream = WeatherColorStripStream(source, self._weather_manager)
         else:
             stream_cls = _SIMPLE_STREAM_MAP.get(source.source_type)
             if not stream_cls:
@@ -177,6 +183,9 @@ class ColorStripStreamManager:
                 f"Unsupported color strip source type '{source.source_type}' for {css_id}"
             )
             css_stream = stream_cls(source)
+        # Inject gradient store for palette resolution
+        if self._gradient_store and hasattr(css_stream, "set_gradient_store"):
+            css_stream.set_gradient_store(self._gradient_store)
         # Inject sync clock runtime if source references a clock
         acquired_clock_id = self._inject_clock(css_stream, source)
         css_stream.start()


@@ -17,6 +17,11 @@ _BLEND_ADD = "add"
_BLEND_MULTIPLY = "multiply"
_BLEND_SCREEN = "screen"
_BLEND_OVERRIDE = "override"
_BLEND_OVERLAY = "overlay"
_BLEND_SOFT_LIGHT = "soft_light"
_BLEND_HARD_LIGHT = "hard_light"
_BLEND_DIFFERENCE = "difference"
_BLEND_EXCLUSION = "exclusion"
class CompositeColorStripStream(ColorStripStream):
@@ -155,9 +160,13 @@ class CompositeColorStripStream(ColorStripStream):
def update_source(self, source) -> None:
"""Hot-update: rebuild sub-streams if layer config changed."""
new_layers = list(source.layers)
old_layer_ids = [(layer.get("source_id"), layer.get("blend_mode"), layer.get("opacity"),
layer.get("enabled"), layer.get("brightness_source_id"),
layer.get("start", 0), layer.get("end", 0), layer.get("reverse", False))
for layer in self._layers]
new_layer_ids = [(layer.get("source_id"), layer.get("blend_mode"), layer.get("opacity"),
layer.get("enabled"), layer.get("brightness_source_id"),
layer.get("start", 0), layer.get("end", 0), layer.get("reverse", False))
for layer in new_layers]
self._layers = new_layers
@@ -187,7 +196,15 @@ class CompositeColorStripStream(ColorStripStream):
try:
stream = self._css_manager.acquire(src_id, consumer_id)
if hasattr(stream, "configure") and self._led_count > 0:
# Configure with zone length if layer has a range, else full strip
layer_start = layer.get("start", 0)
layer_end = layer.get("end", 0)
if layer_start > 0 or layer_end > 0:
eff_end = layer_end if layer_end > 0 else self._led_count
zone_len = max(0, eff_end - layer_start)
stream.configure(zone_len if zone_len > 0 else self._led_count)
else:
stream.configure(self._led_count)
self._sub_streams[i] = (src_id, consumer_id, stream)
except Exception as e:
logger.warning(
@@ -336,12 +353,125 @@ class CompositeColorStripStream(ColorStripStream):
u16a >>= 8
np.copyto(out, u16a, casting="unsafe")
def _blend_overlay(self, bottom: np.ndarray, top: np.ndarray, alpha: int,
out: np.ndarray) -> None:
"""Overlay blend: multiply darks, screen lights, then lerp with alpha.
if bottom < 128: blended = 2*bottom*top >> 8
else: blended = 255 - 2*(255-bottom)*(255-top) >> 8
"""
u16a, u16b = self._u16_a, self._u16_b
np.copyto(u16a, bottom, casting="unsafe")
np.copyto(u16b, top, casting="unsafe")
# Multiply path: 2*b*t >> 8
mul = (u16a * u16b) >> 7 # * 2 >> 8 == >> 7
# Screen path: 255 - 2*(255-b)*(255-t) >> 8
scr = 255 - (((255 - u16a) * (255 - u16b)) >> 7)
# Select based on bottom < 128
mask = u16a < 128
blended = np.where(mask, mul, scr)
np.clip(blended, 0, 255, out=blended)
# Lerp: result = (bottom * (256-a) + blended * a) >> 8
np.copyto(u16a, bottom, casting="unsafe")
u16a *= (256 - alpha)
blended *= alpha
u16a += blended
u16a >>= 8
np.copyto(out, u16a, casting="unsafe")
def _blend_soft_light(self, bottom: np.ndarray, top: np.ndarray, alpha: int,
out: np.ndarray) -> None:
"""Soft light blend (Pegtop formula), then lerp with alpha.
blended = (1 - 2*t/255) * b*b/255 + 2*t*b/255
"""
u16a, u16b = self._u16_a, self._u16_b
np.copyto(u16a, bottom, casting="unsafe")
np.copyto(u16b, top, casting="unsafe")
# term1 = (255 - 2*t) * b * b / (255*255)
# term2 = 2 * t * b / 255
# Use intermediate 32-bit to avoid overflow
b32 = u16a.astype(np.uint32)
t32 = u16b.astype(np.uint32)
blended = ((255 - 2 * t32) * b32 * b32 + 2 * t32 * b32 * 255) // (255 * 255)
np.clip(blended, 0, 255, out=blended)
# Lerp
np.copyto(u16a, bottom, casting="unsafe")
u16a *= (256 - alpha)
blended_u16 = blended.astype(np.uint16)
blended_u16 *= alpha
u16a += blended_u16
u16a >>= 8
np.copyto(out, u16a, casting="unsafe")
def _blend_hard_light(self, bottom: np.ndarray, top: np.ndarray, alpha: int,
out: np.ndarray) -> None:
"""Hard light blend: overlay with top/bottom roles swapped.
if top < 128: blended = 2*bottom*top >> 8
else: blended = 255 - 2*(255-bottom)*(255-top) >> 8
"""
u16a, u16b = self._u16_a, self._u16_b
np.copyto(u16a, bottom, casting="unsafe")
np.copyto(u16b, top, casting="unsafe")
mul = (u16a * u16b) >> 7
scr = 255 - (((255 - u16a) * (255 - u16b)) >> 7)
# Select based on top < 128 (differs from overlay)
mask = u16b < 128
blended = np.where(mask, mul, scr)
np.clip(blended, 0, 255, out=blended)
# Lerp
np.copyto(u16a, bottom, casting="unsafe")
u16a *= (256 - alpha)
blended *= alpha
u16a += blended
u16a >>= 8
np.copyto(out, u16a, casting="unsafe")
def _blend_difference(self, bottom: np.ndarray, top: np.ndarray, alpha: int,
out: np.ndarray) -> None:
"""Difference blend: |bottom - top|, then lerp with alpha."""
u16a, u16b = self._u16_a, self._u16_b
np.copyto(u16a, bottom, casting="unsafe")
np.copyto(u16b, top, casting="unsafe")
# abs diff using signed subtraction
blended = np.abs(u16a.astype(np.int16) - u16b.astype(np.int16)).astype(np.uint16)
# Lerp
np.copyto(u16a, bottom, casting="unsafe")
u16a *= (256 - alpha)
blended *= alpha
u16a += blended
u16a >>= 8
np.copyto(out, u16a, casting="unsafe")
def _blend_exclusion(self, bottom: np.ndarray, top: np.ndarray, alpha: int,
out: np.ndarray) -> None:
"""Exclusion blend: bottom + top - 2*bottom*top/255, then lerp with alpha."""
u16a, u16b = self._u16_a, self._u16_b
np.copyto(u16a, bottom, casting="unsafe")
np.copyto(u16b, top, casting="unsafe")
# blended = b + t - 2*b*t/255
blended = u16a + u16b - ((u16a * u16b) >> 7)
np.clip(blended, 0, 255, out=blended)
# Lerp
np.copyto(u16a, bottom, casting="unsafe")
u16a *= (256 - alpha)
blended *= alpha
u16a += blended
u16a >>= 8
np.copyto(out, u16a, casting="unsafe")
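Every blend method above ends with the same integer lerp, `(bottom*(256-alpha) + blended*alpha) >> 8` with alpha in 0..256, and overlay/hard-light pick between a multiply path and a screen path per pixel. A standalone sketch of both (not part of the diff; it uses throwaway int32 intermediates instead of the stream's reusable uint16 scratch buffers):

```python
import numpy as np

def lerp_u8(bottom: np.ndarray, blended: np.ndarray, alpha: int) -> np.ndarray:
    """Integer lerp used after each blend: (b*(256-a) + x*a) >> 8, alpha in 0..256."""
    b = bottom.astype(np.uint16)
    x = blended.astype(np.uint16)
    return ((b * (256 - alpha) + x * alpha) >> 8).astype(np.uint8)

def overlay_u8(bottom: np.ndarray, top: np.ndarray) -> np.ndarray:
    """Overlay: multiply where bottom < 128, screen otherwise (8-bit arithmetic)."""
    b = bottom.astype(np.int32)
    t = top.astype(np.int32)
    mul = (b * t) >> 7                          # 2*b*t/256
    scr = 255 - (((255 - b) * (255 - t)) >> 7)  # 255 - 2*(255-b)*(255-t)/256
    return np.where(b < 128, mul, scr).clip(0, 255).astype(np.uint8)
```

`alpha == 256` returns the blended layer exactly and `alpha == 0` returns the bottom layer, which is why the render loop clamps opacity-derived alpha with `max(0, min(256, alpha))`.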
_BLEND_DISPATCH = {
_BLEND_NORMAL: "_blend_normal",
_BLEND_ADD: "_blend_add",
_BLEND_MULTIPLY: "_blend_multiply",
_BLEND_SCREEN: "_blend_screen",
_BLEND_OVERRIDE: "_blend_override",
_BLEND_OVERLAY: "_blend_overlay",
_BLEND_SOFT_LIGHT: "_blend_soft_light",
_BLEND_HARD_LIGHT: "_blend_hard_light",
_BLEND_DIFFERENCE: "_blend_difference",
_BLEND_EXCLUSION: "_blend_exclusion",
}
# ── Processing loop ─────────────────────────────────────────
@@ -416,9 +546,34 @@ class CompositeColorStripStream(ColorStripStream):
if _result is not None:
colors = _result
# Determine layer range
layer_start = layer.get("start", 0)
layer_end = layer.get("end", 0)
has_range = layer_start > 0 or layer_end > 0
if has_range:
# Clamp range to strip bounds
eff_start = max(0, min(layer_start, target_n))
eff_end = max(eff_start, min(layer_end if layer_end > 0 else target_n, target_n))
zone_len = eff_end - eff_start
if zone_len <= 0:
continue
# Resize to zone length
if len(colors) != zone_len:
src_x = np.linspace(0, 1, len(colors))
dst_x = np.linspace(0, 1, zone_len)
resized = np.empty((zone_len, 3), dtype=np.uint8)
for ch in range(3):
np.copyto(resized[:, ch], np.interp(dst_x, src_x, colors[:, ch]), casting="unsafe")
colors = resized
else:
# Full-strip layer: resize to target LED count
if len(colors) != target_n:
colors = self._resize_to_target(colors, target_n)
# Reverse if requested
if layer.get("reverse", False):
colors = colors[::-1].copy()
# Apply per-layer brightness from value source
if i in self._brightness_streams:
@@ -437,14 +592,20 @@ class CompositeColorStripStream(ColorStripStream):
alpha = max(0, min(256, alpha))
if not has_result:
result_buf[:] = 0
has_result = True
if has_range:
# Blend only into the target range — use scratch sub-slices
rng = result_buf[eff_start:eff_end]
u16a_rng = self._u16_a[:zone_len]
u16b_rng = self._u16_b[:zone_len]
blend_fn = self._blend_methods.get(blend_mode, self._default_blend_method)
# Temporarily swap scratch buffers for the range size
orig_u16a, orig_u16b = self._u16_a, self._u16_b
self._u16_a, self._u16_b = u16a_rng, u16b_rng
blend_fn(rng, colors, alpha, rng)
self._u16_a, self._u16_b = orig_u16a, orig_u16b
else:
blend_fn = self._blend_methods.get(blend_mode, self._default_blend_method)
blend_fn(result_buf, colors, alpha, result_buf)
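The zone logic in this hunk hinges on two clamp lines: `end == 0` means "to the end of the strip", and an out-of-bounds range collapses to an empty zone that the render loop skips. A minimal standalone mirror of that clamping:

```python
def clamp_zone(start: int, end: int, n: int) -> tuple:
    """Clamp a layer's (start, end) range to strip bounds.
    end == 0 means 'to the strip end'; mirrors the eff_start/eff_end lines above."""
    eff_start = max(0, min(start, n))
    eff_end = max(eff_start, min(end if end > 0 else n, n))
    return eff_start, eff_end
```

A zero-length result (`eff_end == eff_start`) corresponds to the `continue` in the render loop.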


@@ -3,8 +3,14 @@
Implements DaylightColorStripStream which produces a uniform LED color array
that transitions through dawn, daylight, sunset, and night over a continuous
24-hour cycle. Can use real wall-clock time or a configurable simulation speed.
When latitude and longitude are provided, sunrise/sunset times are computed
via simplified NOAA solar equations so the daylight curve automatically adapts
to the user's location and the current season.
"""
import datetime
import math
import threading
import time
from typing import Optional
@@ -19,9 +25,9 @@ logger = get_logger(__name__)
# ── Daylight color table ────────────────────────────────────────────────
#
# Canonical hour control points (0–24) → RGB. Designed for a default
# sunrise of 6 h and sunset of 19 h. At render time the curve is remapped
# to the actual solar times for the location.
#
# Format: (hour, R, G, B)
_DAYLIGHT_CURVE = [
@@ -44,66 +50,171 @@ _DAYLIGHT_CURVE = [
(24.0, 10, 10, 30),  # midnight (wrap)
]
# Reference solar times the canonical curve was designed around
_DEFAULT_SUNRISE = 6.0
_DEFAULT_SUNSET = 19.0
# Global cache of the default static LUT (lazy-built once)
_daylight_lut: Optional[np.ndarray] = None
# ── Solar position helpers ──────────────────────────────────────────────
def _compute_solar_times(
latitude: float, longitude: float, day_of_year: int
) -> tuple:
"""Return (sunrise_hour, sunset_hour) in local solar time.
Uses simplified NOAA solar equations:
- declination: decl = 23.45 * sin(2π * (284 + doy) / 365)
- hour angle: cos(ha) = -tan(lat) * tan(decl)
- sunrise/sunset: 12 ∓ ha/15, shifted by longitude
Polar day and polar night are clamped to visible ranges.
"""
deg2rad = math.pi / 180.0
decl_deg = 23.45 * math.sin(2.0 * math.pi * (284 + day_of_year) / 365.0)
decl_rad = decl_deg * deg2rad
lat_rad = latitude * deg2rad
cos_ha = -math.tan(lat_rad) * math.tan(decl_rad)
if cos_ha <= -1.0:
# Polar day — sun never sets
sunrise = 3.0
sunset = 21.0
elif cos_ha >= 1.0:
# Polar night — sun never rises
sunrise = 12.0
sunset = 12.0
else:
ha_hours = math.acos(cos_ha) / (deg2rad * 15.0)
lon_offset = longitude / 15.0
solar_noon = 12.0 - lon_offset
sunrise = solar_noon - ha_hours
sunset = solar_noon + ha_hours
# Clamp to sane ranges
sunrise = max(3.0, min(10.0, sunrise))
sunset = max(14.0, min(21.0, sunset))
return sunrise, sunset
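`_compute_solar_times` can be sanity-checked in isolation. The sketch below repeats the same declination, hour-angle, and clamping steps; results are approximate since these are the simplified NOAA equations:

```python
import math

def compute_solar_times(latitude: float, longitude: float, day_of_year: int) -> tuple:
    """Simplified NOAA sunrise/sunset in local hours, with the same clamps as above."""
    deg2rad = math.pi / 180.0
    decl = 23.45 * math.sin(2.0 * math.pi * (284 + day_of_year) / 365.0) * deg2rad
    cos_ha = -math.tan(latitude * deg2rad) * math.tan(decl)
    if cos_ha <= -1.0:        # polar day: sun never sets
        sunrise, sunset = 3.0, 21.0
    elif cos_ha >= 1.0:       # polar night: sun never rises
        sunrise = sunset = 12.0
    else:
        ha = math.acos(cos_ha) / (deg2rad * 15.0)  # half the day length, in hours
        noon = 12.0 - longitude / 15.0
        sunrise, sunset = noon - ha, noon + ha
    return max(3.0, min(10.0, sunrise)), max(14.0, min(21.0, sunset))
```

At 50°N the day comes out near 12 h at the equinox, roughly 16 h at the June solstice, and roughly 8 h in December, which is the seasonal adaptation the commit describes.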
def _build_lut_for_solar_times(sunrise: float, sunset: float) -> np.ndarray:
"""Build a 1440-entry uint8 RGB LUT scaled to the given sunrise/sunset hours.
The canonical _DAYLIGHT_CURVE is remapped so that:
- Night before dawn: 0 h → sunrise maps to 0 h → _DEFAULT_SUNRISE
- Daylight window: sunrise → sunset maps to _DEFAULT_SUNRISE → _DEFAULT_SUNSET
- Night after dusk: sunset → 24 h maps to _DEFAULT_SUNSET → 24 h
"""
default_night_before = _DEFAULT_SUNRISE
default_day_len = _DEFAULT_SUNSET - _DEFAULT_SUNRISE
default_night_after = 24.0 - _DEFAULT_SUNSET
actual_night_before = max(sunrise, 0.01)
actual_day_len = max(sunset - sunrise, 0.25)
actual_night_after = max(24.0 - sunset, 0.01)
lut = np.zeros((1440, 3), dtype=np.uint8)
for minute in range(1440):
hour = minute / 60.0
if hour < sunrise:
frac = hour / actual_night_before
canon_hour = frac * default_night_before
elif hour < sunset:
frac = (hour - sunrise) / actual_day_len
canon_hour = _DEFAULT_SUNRISE + frac * default_day_len
else:
frac = (hour - sunset) / actual_night_after
canon_hour = _DEFAULT_SUNSET + frac * default_night_after
canon_hour = max(0.0, min(23.99, canon_hour))
# Locate surrounding curve control points
prev = _DAYLIGHT_CURVE[0]
nxt = _DAYLIGHT_CURVE[-1]
for i in range(len(_DAYLIGHT_CURVE) - 1):
if _DAYLIGHT_CURVE[i][0] <= canon_hour <= _DAYLIGHT_CURVE[i + 1][0]:
prev = _DAYLIGHT_CURVE[i]
nxt = _DAYLIGHT_CURVE[i + 1]
break
span = nxt[0] - prev[0]
t = (canon_hour - prev[0]) / span if span > 0 else 0.0
t = t * t * (3.0 - 2.0 * t) # smoothstep
for ch in range(3):
lut[minute, ch] = int(
prev[ch + 1] + (nxt[ch + 1] - prev[ch + 1]) * t + 0.5
)
return lut
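The remapping in `_build_lut_for_solar_times` is piecewise linear over three segments (night before dawn, the daylight window, night after dusk). A standalone sketch of that hour mapping, with the same guard denominators:

```python
def remap_hour(hour: float, sunrise: float, sunset: float,
               d_sr: float = 6.0, d_ss: float = 19.0) -> float:
    """Map an actual hour-of-day onto the canonical curve's hour axis."""
    if hour < sunrise:
        # 0..sunrise stretches over the canonical 0..6 night segment
        return (hour / max(sunrise, 0.01)) * d_sr
    if hour < sunset:
        # sunrise..sunset stretches over the canonical 6..19 daylight window
        return d_sr + (hour - sunrise) / max(sunset - sunrise, 0.25) * (d_ss - d_sr)
    # sunset..24 stretches over the canonical 19..24 night segment
    return d_ss + (hour - sunset) / max(24.0 - sunset, 0.01) * (24.0 - d_ss)
```

An early sunrise therefore stretches the dawn colors over more real hours rather than shifting the whole curve sideways.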
def _get_daylight_lut() -> np.ndarray:
"""Return the static default LUT (built once, cached globally)."""
global _daylight_lut
if _daylight_lut is None:
_daylight_lut = _build_lut_for_solar_times(_DEFAULT_SUNRISE, _DEFAULT_SUNSET)
return _daylight_lut
# ── Stream class ────────────────────────────────────────────────────────
class DaylightColorStripStream(ColorStripStream):
"""Color strip stream simulating a 24-hour daylight cycle.
All LEDs display the same color at any moment. The color smoothly
transitions through a pre-defined daylight curve whose sunrise/sunset
times are computed from latitude, longitude, and day of year.
"""
def __init__(self, source):
self._colors_lock = threading.Lock()
self._running = False
self._thread: Optional[threading.Thread] = None
self._fps = 10
self._frame_time = 1.0 / 10
self._clock = None
self._led_count = 1
self._auto_size = True
# Per-instance LUT cache: {(sr_min, ss_min): np.ndarray}
self._lut_cache: dict = {}
self._update_from_source(source)
def _update_from_source(self, source) -> None:
self._speed = float(getattr(source, "speed", 1.0))
self._use_real_time = bool(getattr(source, "use_real_time", False))
self._latitude = float(getattr(source, "latitude", 50.0))
self._longitude = float(getattr(source, "longitude", 0.0))
_lc = getattr(source, "led_count", 0)
self._auto_size = not _lc
self._led_count = _lc if _lc and _lc > 0 else 1
self._lut_cache = {}
with self._colors_lock:
self._colors: Optional[np.ndarray] = None
def _get_lut_for_day(self, day_of_year: int) -> np.ndarray:
"""Return a solar-time-aware LUT for the given day (cached)."""
sunrise, sunset = _compute_solar_times(
self._latitude, self._longitude, day_of_year
)
sr_key = int(round(sunrise * 60))
ss_key = int(round(sunset * 60))
cache_key = (sr_key, ss_key)
lut = self._lut_cache.get(cache_key)
if lut is None:
lut = _build_lut_for_solar_times(sunrise, sunset)
if len(self._lut_cache) > 8:
self._lut_cache.clear()
self._lut_cache[cache_key] = lut
return lut
def configure(self, device_led_count: int) -> None:
if self._auto_size and device_led_count > 0:
new_count = max(self._led_count, device_led_count)
@@ -193,18 +304,20 @@ class DaylightColorStripStream(ColorStripStream):
_use_a = not _use_a
if self._use_real_time:
now = datetime.datetime.now()
day_of_year = now.timetuple().tm_yday
minute_of_day = now.hour * 60 + now.minute + now.second / 60.0
else:
# Simulated: speed=1.0 → full 24h in 240s.
# Use summer solstice (day 172) for maximum day length.
day_of_year = 172
cycle_seconds = 240.0 / max(speed, 0.01)
phase = (t % cycle_seconds) / cycle_seconds
minute_of_day = phase * 1440.0
lut = self._get_lut_for_day(day_of_year)
idx = int(minute_of_day) % 1440
color = lut[idx]
buf[:] = color
with self._colors_lock:


@@ -42,11 +42,21 @@ _PALETTE_DEFS: Dict[str, list] = {
_palette_cache: Dict[str, np.ndarray] = {}
def _build_palette_lut(name: str, custom_stops: list = None) -> np.ndarray:
"""Build a (256, 3) uint8 lookup table for the named palette.
When name == "custom" and custom_stops is provided, builds from those
stops without caching (each source gets its own LUT).
"""
if custom_stops and name == "custom":
# Convert [[pos,R,G,B], ...] to [(pos,R,G,B), ...]
points = [(s[0], s[1], s[2], s[3]) for s in custom_stops if len(s) >= 4]
if not points:
points = _PALETTE_DEFS["fire"]
else:
if name in _palette_cache:
return _palette_cache[name]
points = _PALETTE_DEFS.get(name, _PALETTE_DEFS["fire"])
lut = np.zeros((256, 3), dtype=np.uint8)
for i in range(256):
t = i / 255.0
@@ -67,7 +77,8 @@ def _build_palette_lut(name: str) -> np.ndarray:
int(ag + (bg - ag) * frac),
int(ab + (bb - ab) * frac),
)
if name != "custom":
_palette_cache[name] = lut
return lut
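The stop-interpolation loop is the same for named palettes and custom gradient stops: locate the two surrounding control points, interpolate linearly, round. A compact standalone version (normalizing by `size - 1`, which equals the diff's `i / 255.0` when size is 256):

```python
import numpy as np

def build_lut(points: list, size: int = 256) -> np.ndarray:
    """Linearly interpolate (pos, R, G, B) stops into a (size, 3) uint8 LUT."""
    points = sorted(points)
    lut = np.zeros((size, 3), dtype=np.uint8)
    for i in range(size):
        t = i / (size - 1)
        prev, nxt = points[0], points[-1]
        for a, b in zip(points, points[1:]):   # find surrounding stops
            if a[0] <= t <= b[0]:
                prev, nxt = a, b
                break
        span = nxt[0] - prev[0]
        f = (t - prev[0]) / span if span > 0 else 0.0
        for ch in range(3):
            lut[i, ch] = int(prev[ch + 1] + (nxt[ch + 1] - prev[ch + 1]) * f + 0.5)
    return lut
```

This assumes the stops cover the full 0..1 range, which the gradient entities above appear to guarantee.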
@@ -164,6 +175,13 @@ _EFFECT_DEFAULT_PALETTE = {
"plasma": "rainbow",
"noise": "rainbow",
"aurora": "aurora",
"rain": "ocean",
"comet": "fire",
"bouncing_ball": "rainbow",
"fireworks": "rainbow",
"sparkle_rain": "ice",
"lava_lamp": "lava",
"wave_interference": "rainbow",
}
@@ -200,15 +218,47 @@ class EffectColorStripStream(ColorStripStream):
self._s_layer2: Optional[np.ndarray] = None
self._plasma_key = (0, 0.0)
self._plasma_x: Optional[np.ndarray] = None
# Bouncing ball state
self._ball_positions: Optional[np.ndarray] = None
self._ball_velocities: Optional[np.ndarray] = None
self._ball_last_t = 0.0
# Fireworks state
self._fw_particles: list = [] # active particles
self._fw_rockets: list = [] # active rockets
self._fw_last_launch = 0.0
# Sparkle rain state
self._sparkle_state: Optional[np.ndarray] = None # per-LED brightness 0..1
self._gradient_store = None # injected by stream manager
self._update_from_source(source)
def set_gradient_store(self, gradient_store) -> None:
"""Inject gradient store for palette resolution. Called by stream manager."""
self._gradient_store = gradient_store
# Re-resolve palette now that store is available
self._resolve_palette_lut()
def _resolve_palette_lut(self) -> None:
"""Build palette LUT from gradient_id or legacy palette name."""
gradient_id = self._gradient_id
if gradient_id and self._gradient_store:
stops = self._gradient_store.resolve_stops(gradient_id)
if stops:
# Convert gradient entity stops to palette LUT stops
custom = [[s["position"], *s["color"]] for s in stops]
self._palette_lut = _build_palette_lut("custom", custom)
return
# Fallback: legacy palette name or custom_palette
self._palette_lut = _build_palette_lut(self._palette_name, self._custom_palette)
def _update_from_source(self, source) -> None:
self._effect_type = getattr(source, "effect_type", "fire")
_lc = getattr(source, "led_count", 0)
self._auto_size = not _lc
self._led_count = _lc if _lc and _lc > 0 else 1
self._gradient_id = getattr(source, "gradient_id", None)
self._palette_name = getattr(source, "palette", None) or _EFFECT_DEFAULT_PALETTE.get(self._effect_type, "fire")
self._custom_palette = getattr(source, "custom_palette", None)
self._resolve_palette_lut()
color = getattr(source, "color", None)
self._color = color if isinstance(color, list) and len(color) == 3 else [255, 80, 0]
self._intensity = float(getattr(source, "intensity", 1.0))
@@ -290,6 +340,13 @@ class EffectColorStripStream(ColorStripStream):
"plasma": self._render_plasma,
"noise": self._render_noise,
"aurora": self._render_aurora,
"rain": self._render_rain,
"comet": self._render_comet,
"bouncing_ball": self._render_bouncing_ball,
"fireworks": self._render_fireworks,
"sparkle_rain": self._render_sparkle_rain,
"lava_lamp": self._render_lava_lamp,
"wave_interference": self._render_wave_interference,
}
try:
@@ -555,3 +612,329 @@ class EffectColorStripStream(ColorStripStream):
self._s_f32_rgb *= bright[:, np.newaxis]
np.clip(self._s_f32_rgb, 0, 255, out=self._s_f32_rgb)
np.copyto(buf, self._s_f32_rgb, casting='unsafe')
# ── Rain ──────────────────────────────────────────────────────────
def _render_rain(self, buf: np.ndarray, n: int, t: float) -> None:
"""Raindrops falling down the strip with trailing tails."""
speed = self._effective_speed
intensity = self._intensity
scale = self._scale
lut = self._palette_lut
# Multiple rain "lanes" at different speeds for depth
bright = self._s_f32_a
bright[:] = 0.0
indices = self._s_arange
num_drops = max(3, int(8 * intensity))
for d in range(num_drops):
drop_speed = speed * (6.0 + d * 2.3) * scale
phase_offset = d * 31.7 # prime-ish offset for independence
# Drop position wraps around the strip
pos = (t * drop_speed + phase_offset) % n
# Tail: exponential falloff behind drop (drop falls downward = decreasing index)
dist = (pos - indices) % n
tail_len = max(3.0, n * 0.08)
trail = np.exp(-dist / tail_len)
# Head brightness boost
head_mask = dist < 2.0
trail[head_mask] = 1.0
bright += trail * (0.3 / max(num_drops * 0.3, 1.0))
np.clip(bright, 0.0, 1.0, out=bright)
np.multiply(bright, 255, out=self._s_f32_b)
np.copyto(self._s_i32, self._s_f32_b, casting='unsafe')
np.clip(self._s_i32, 0, 255, out=self._s_i32)
buf[:] = lut[self._s_i32]
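Each raindrop in `_render_rain` contributes an exponential tail over the wrap-around distance behind its head. A standalone sketch of one drop's brightness profile (a hypothetical helper, same math as the loop body above):

```python
import numpy as np

def drop_trail(pos: float, n: int, tail_frac: float = 0.08) -> np.ndarray:
    """Per-LED brightness for one drop: bright 2-LED head, exp falloff behind it."""
    idx = np.arange(n, dtype=np.float64)
    dist = (pos - idx) % n                        # distance behind the head, wrapped
    trail = np.exp(-dist / max(3.0, n * tail_frac))
    trail[dist < 2.0] = 1.0                       # head brightness boost
    return trail
```

Because the distance is computed modulo `n`, LEDs just ahead of the drop sit at the far end of the tail and are effectively dark, which gives the one-directional "falling" look.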
# ── Comet ─────────────────────────────────────────────────────────
def _render_comet(self, buf: np.ndarray, n: int, t: float) -> None:
"""Multiple comets with curved, pulsing tails."""
speed = self._effective_speed
intensity = self._intensity
color = self._color
mirror = self._mirror
indices = self._s_arange
buf[:] = 0
num_comets = 3
for c in range(num_comets):
travel_speed = speed * (5.0 + c * 3.0)
phase = c * 137.5
if mirror:
cycle = 2 * (n - 1) if n > 1 else 1
raw_pos = (t * travel_speed + phase) % cycle
pos = raw_pos if raw_pos < n else cycle - raw_pos
else:
pos = (t * travel_speed + phase) % n
# Tail with pulsing brightness
dist = self._s_f32_a
np.subtract(pos, indices, out=dist)
dist %= n
decay = 0.04 + 0.20 * (1.0 - min(1.0, intensity))
np.multiply(dist, -decay, out=self._s_f32_b)
np.exp(self._s_f32_b, out=self._s_f32_b)
# Pulse modulation on tail
pulse = 0.7 + 0.3 * math.sin(t * speed * 4.0 + c * 2.1)
self._s_f32_b *= pulse
r, g, b = color
for ch_idx, ch_val in enumerate((r, g, b)):
np.multiply(self._s_f32_b, ch_val, out=self._s_f32_c)
np.clip(self._s_f32_c, 0, 255, out=self._s_f32_c)
# Additive blend
self._s_f32_a[:] = buf[:, ch_idx]
self._s_f32_a += self._s_f32_c
np.clip(self._s_f32_a, 0, 255, out=self._s_f32_a)
np.copyto(buf[:, ch_idx], self._s_f32_a, casting='unsafe')
# ── Bouncing Ball ─────────────────────────────────────────────────
def _render_bouncing_ball(self, buf: np.ndarray, n: int, t: float) -> None:
"""Physics-simulated bouncing balls with gravity."""
speed = self._effective_speed
intensity = self._intensity
color = self._color
num_balls = 3
# Initialize ball state on first call or LED count change
if self._ball_positions is None or len(self._ball_positions) != num_balls:
self._ball_positions = np.array([n * 0.3, n * 0.5, n * 0.8], dtype=np.float64)
self._ball_velocities = np.array([0.0, 0.0, 0.0], dtype=np.float64)
self._ball_last_t = t
dt = min(t - self._ball_last_t, 0.1) # cap delta to avoid explosion
self._ball_last_t = t
if dt <= 0:
dt = 1.0 / 30
gravity = 50.0 * intensity * speed
damping = 0.85
for i in range(num_balls):
self._ball_velocities[i] += gravity * dt
self._ball_positions[i] += self._ball_velocities[i] * dt
# Bounce off bottom (index n-1)
if self._ball_positions[i] >= n - 1:
self._ball_positions[i] = n - 1
self._ball_velocities[i] = -abs(self._ball_velocities[i]) * damping
# Re-launch if nearly stopped
if abs(self._ball_velocities[i]) < 2.0:
self._ball_velocities[i] = -30.0 * speed * (0.8 + 0.4 * (i / num_balls))
# Bounce off top
if self._ball_positions[i] < 0:
self._ball_positions[i] = 0
self._ball_velocities[i] = abs(self._ball_velocities[i]) * damping
# Render balls with glow radius
buf[:] = 0
indices = self._s_arange
r, g, b = color
for i in range(num_balls):
pos = self._ball_positions[i]
dist = self._s_f32_a
np.subtract(indices, pos, out=dist)
np.abs(dist, out=dist)
# Gaussian glow, radius ~3 LEDs
np.multiply(dist, dist, out=self._s_f32_b)
self._s_f32_b *= -0.5
np.exp(self._s_f32_b, out=self._s_f32_b)
# Hue offset per ball
hue_shift = i / num_balls
br = r * (1 - hue_shift * 0.5)
bg = g * (0.5 + hue_shift * 0.5)
bb = b * (0.3 + hue_shift * 0.7)
for ch_idx, ch_val in enumerate((br, bg, bb)):
np.multiply(self._s_f32_b, ch_val, out=self._s_f32_c)
self._s_f32_a[:] = buf[:, ch_idx]
self._s_f32_a += self._s_f32_c
np.clip(self._s_f32_a, 0, 255, out=self._s_f32_a)
np.copyto(buf[:, ch_idx], self._s_f32_a, casting='unsafe')
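The ball physics is plain explicit-Euler integration with a damped reflection at each end of the strip. A simplified single-ball sketch (the per-ball re-launch kick from the diff is omitted):

```python
def step_ball(pos: float, vel: float, dt: float, n: int,
              gravity: float = 50.0, damping: float = 0.85) -> tuple:
    """One Euler step: gravity pulls toward index n-1, bounces lose 15% energy."""
    vel += gravity * dt
    pos += vel * dt
    if pos >= n - 1:                 # floor at the strip's far end
        pos, vel = float(n - 1), -abs(vel) * damping
    if pos < 0:                      # ceiling at index 0
        pos, vel = 0.0, abs(vel) * damping
    return pos, vel
```

Capping `dt` (the diff uses `min(t - last_t, 0.1)`) matters here: a large frame gap would otherwise integrate one huge step and fling the ball through the bounds checks.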
# ── Fireworks ─────────────────────────────────────────────────────
def _render_fireworks(self, buf: np.ndarray, n: int, t: float) -> None:
"""Rockets launch and explode into colorful particle bursts."""
speed = self._effective_speed
intensity = self._intensity
lut = self._palette_lut
dt = 1.0 / max(self._fps, 1)
# Launch rockets periodically
launch_interval = max(0.3, 1.5 / max(intensity, 0.1))
if t - self._fw_last_launch > launch_interval:
self._fw_last_launch = t
# Rocket: [position, velocity, palette_idx]
target = 0.2 + np.random.random() * 0.5 # explode at 20-70% height
rocket_speed = n * (0.8 + np.random.random() * 0.4) * speed
self._fw_rockets.append([float(n - 1), -rocket_speed, target * n])
# Update rockets
new_rockets = []
for rocket in self._fw_rockets:
rocket[0] += rocket[1] * dt
if rocket[0] <= rocket[2]:
# Explode: create particles
num_particles = int(8 + 8 * intensity)
palette_idx = np.random.randint(0, 256)
for _ in range(num_particles):
vel = (np.random.random() - 0.5) * n * 0.5 * speed
# [position, velocity, brightness, palette_idx]
self._fw_particles.append([rocket[0], vel, 1.0, palette_idx])
else:
new_rockets.append(rocket)
self._fw_rockets = new_rockets
# Update particles
new_particles = []
for p in self._fw_particles:
p[0] += p[1] * dt
p[1] *= 0.97 # drag
p[2] -= dt * 1.5 # fade
if p[2] > 0.02 and 0 <= p[0] < n:
new_particles.append(p)
self._fw_particles = new_particles
# Cap active particles
if len(self._fw_particles) > 200:
self._fw_particles = self._fw_particles[-200:]
# Render
buf[:] = 0
for p in self._fw_particles:
pos, _vel, bright, pal_idx = p
idx = int(pos)
if 0 <= idx < n:
color = lut[int(pal_idx) % 256]
for ch in range(3):
val = int(buf[idx, ch] + color[ch] * bright)
buf[idx, ch] = min(255, val)
# Spread to neighbors
for offset in (-1, 1):
ni = idx + offset
if 0 <= ni < n:
for ch in range(3):
val = int(buf[ni, ch] + color[ch] * bright * 0.4)
buf[ni, ch] = min(255, val)
# Render rockets as bright white dots
for rocket in self._fw_rockets:
idx = int(rocket[0])
if 0 <= idx < n:
buf[idx] = (255, 255, 255)
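Fireworks particles carry `[position, velocity, brightness, palette_idx]` and are advanced with per-step drag and a linear fade, then culled. A standalone sketch of that lifecycle (palette index and the `0 <= pos < n` bounds cull dropped for brevity):

```python
def step_particle(p: list, dt: float) -> bool:
    """Advance one particle [pos, vel, brightness]; False once it should be culled."""
    p[0] += p[1] * dt       # drift
    p[1] *= 0.97            # drag
    p[2] -= dt * 1.5        # linear fade
    return p[2] > 0.02      # same brightness cull threshold as the diff
```

With a fade rate of 1.5/s a burst lives about two thirds of a second, so the 200-particle cap above is only hit when launches overlap heavily.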
# ── Sparkle Rain ──────────────────────────────────────────────────
def _render_sparkle_rain(self, buf: np.ndarray, n: int, t: float) -> None:
"""Twinkling star field with smooth fade-in/fade-out."""
speed = self._effective_speed
intensity = self._intensity
lut = self._palette_lut
# Initialize/resize sparkle state
if self._sparkle_state is None or len(self._sparkle_state) != n:
self._sparkle_state = np.zeros(n, dtype=np.float32)
dt = 1.0 / max(self._fps, 1)
state = self._sparkle_state
# Fade existing sparkles
fade_rate = 1.5 * speed
state -= fade_rate * dt
np.clip(state, 0.0, 1.0, out=state)
# Spawn new sparkles
spawn_prob = 0.05 * intensity * speed
rng = np.random.random(n)
new_mask = (rng < spawn_prob) & (state < 0.1)
state[new_mask] = 1.0
# Map sparkle brightness to palette
np.multiply(state, 255, out=self._s_f32_a)
np.copyto(self._s_i32, self._s_f32_a, casting='unsafe')
np.clip(self._s_i32, 0, 255, out=self._s_i32)
self._s_f32_rgb[:] = lut[self._s_i32]
# Apply brightness
self._s_f32_rgb *= state[:, np.newaxis]
np.clip(self._s_f32_rgb, 0, 255, out=self._s_f32_rgb)
np.copyto(buf, self._s_f32_rgb, casting='unsafe')
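The sparkle logic above fades every LED each frame and only re-ignites LEDs that have dropped below 0.1, so a lit LED cannot re-trigger mid-fade. A minimal vectorized sketch of one frame (the `sparkle_step` helper is hypothetical; the RNG draw is passed in so the result is deterministic):

```python
import numpy as np

def sparkle_step(state, rng_vals, dt, speed=1.0, intensity=1.0):
    """One frame of the twinkle state machine: fade, clamp, then spawn."""
    state = state - 1.5 * speed * dt               # fade existing sparkles
    np.clip(state, 0.0, 1.0, out=state)
    spawn_prob = 0.05 * intensity * speed
    new_mask = (rng_vals < spawn_prob) & (state < 0.1)
    state[new_mask] = 1.0                          # ignite at full brightness
    return state

state = np.array([1.0, 0.05, 0.05], dtype=np.float32)
# rng draws below the spawn threshold for LEDs 1 and 2; LED 0 is still
# bright, so only the faded LEDs are eligible to re-ignite.
out = sparkle_step(state, np.array([0.9, 0.01, 0.01]), dt=1 / 30)
```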
# ── Lava Lamp ─────────────────────────────────────────────────────
def _render_lava_lamp(self, buf: np.ndarray, n: int, t: float) -> None:
"""Slow-moving colored blobs that merge and separate."""
speed = self._effective_speed
scale = self._scale
lut = self._palette_lut
# Use noise at very low frequency for blob movement
np.multiply(self._s_arange, scale * 0.03, out=self._s_f32_a)
# Two blob layers at different speeds for organic movement
self._s_f32_a += t * speed * 0.1
layer1 = self._noise.fbm(self._s_f32_a, octaves=3).copy()
np.multiply(self._s_arange, scale * 0.05, out=self._s_f32_a)
self._s_f32_a += t * speed * 0.07 + 100.0
layer2 = self._noise.fbm(self._s_f32_a, octaves=2).copy()
# Combine: create blob-like shapes with soft edges
combined = self._s_f32_a
np.multiply(layer1, 0.6, out=combined)
combined += layer2 * 0.4
# Sharpen to create distinct blobs (sigmoid-like)
combined -= 0.45
combined *= 6.0
# Soft clamp via tanh approximation: x / (1 + |x|)
np.abs(combined, out=self._s_f32_b)
self._s_f32_b += 1.0
np.divide(combined, self._s_f32_b, out=combined)
# Map from [-1,1] to [0,1]
combined += 1.0
combined *= 0.5
np.clip(combined, 0.0, 1.0, out=combined)
# Map to palette
np.multiply(combined, 255, out=self._s_f32_b)
np.copyto(self._s_i32, self._s_f32_b, casting='unsafe')
np.clip(self._s_i32, 0, 255, out=self._s_i32)
buf[:] = lut[self._s_i32]
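The `x / (1 + |x|)` soft clamp used above is a cheap tanh stand-in: it squashes arbitrarily large values into (-1, 1) without the cost of `np.tanh`. A small sketch of the squash-and-remap step (the `soft_clamp` name is hypothetical):

```python
import numpy as np

def soft_clamp(x):
    """Cheap tanh-like squash: x / (1 + |x|) maps all of R into (-1, 1)."""
    return x / (1.0 + np.abs(x))

# After (combined - 0.45) * 6.0 the values can be large; soft_clamp
# squashes them into (-1, 1), and the shift/scale maps that to [0, 1].
vals = soft_clamp(np.array([-10.0, 0.0, 10.0]))
unit = (vals + 1.0) * 0.5
```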
# ── Wave Interference ─────────────────────────────────────────────
def _render_wave_interference(self, buf: np.ndarray, n: int, t: float) -> None:
"""Two counter-propagating sine waves creating interference patterns."""
speed = self._effective_speed
scale = self._scale
lut = self._palette_lut
# Wave parameters
k = 2.0 * math.pi * scale / max(n, 1) # wavenumber
omega = speed * 2.0 # angular frequency
# Wave 1: right-propagating
indices = self._s_arange
np.multiply(indices, k, out=self._s_f32_a)
self._s_f32_a -= omega * t
np.sin(self._s_f32_a, out=self._s_f32_a)
# Wave 2: left-propagating at slightly different frequency
np.multiply(indices, k * 1.1, out=self._s_f32_b)
self._s_f32_b += omega * t * 0.9
np.sin(self._s_f32_b, out=self._s_f32_b)
# Interference: sum and normalize to [0, 255]
self._s_f32_a += self._s_f32_b
# Range is [-2, 2], map to [0, 255]
self._s_f32_a += 2.0
self._s_f32_a *= (255.0 / 4.0)
np.clip(self._s_f32_a, 0, 255, out=self._s_f32_a)
np.copyto(self._s_i32, self._s_f32_a, casting='unsafe')
buf[:] = lut[self._s_i32]
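The renderer above sums two counter-propagating sines, so the result is bounded by [-2, 2] before the scale to [0, 255]; the slightly mismatched wavenumbers (k vs 1.1·k) make the pattern drift instead of standing still. A compact sketch of one frame (the `interference_frame` helper is hypothetical):

```python
import numpy as np

def interference_frame(n, t, scale=1.0, speed=1.0):
    """Sum of a right- and left-propagating sine, scaled to 0..255."""
    k = 2.0 * np.pi * scale / max(n, 1)  # wavenumber
    omega = speed * 2.0                  # angular frequency
    i = np.arange(n, dtype=np.float32)
    wave = np.sin(i * k - omega * t) + np.sin(i * 1.1 * k + omega * t * 0.9)
    # wave is in [-2, 2]; shift and scale into palette index range
    return np.clip((wave + 2.0) * (255.0 / 4.0), 0, 255)

frame = interference_frame(n=60, t=0.5)
```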


@@ -9,8 +9,8 @@ import time
 from datetime import datetime, timezone
 from typing import Dict, List, Optional, Tuple
-import cv2
 import numpy as np
+from PIL import Image
 from wled_controller.core.processing.live_stream import LiveStream
 from wled_controller.core.capture.screen_capture import (
@@ -46,7 +46,8 @@ def _process_kc_frame(capture, rect_names, rect_bounds, calc_fn, prev_colors_arr
     t0 = time.perf_counter()
     # Downsample to working resolution — 144x fewer pixels at 1080p
-    small = cv2.resize(capture.image, KC_WORK_SIZE, interpolation=cv2.INTER_AREA)
+    pil_img = Image.fromarray(capture.image)
+    small = np.array(pil_img.resize(KC_WORK_SIZE, Image.LANCZOS))
     # Extract colors for each rectangle from the small image
     n = len(rect_names)


@@ -22,7 +22,11 @@ from wled_controller.core.processing.live_stream import (
     ScreenCaptureLiveStream,
     StaticImageLiveStream,
 )
-from wled_controller.core.processing.video_stream import VideoCaptureLiveStream
+try:
+    from wled_controller.core.processing.video_stream import VideoCaptureLiveStream
+    _has_video = True
+except ImportError:
+    _has_video = False
 from wled_controller.utils import get_logger
 logger = get_logger(__name__)
@@ -264,8 +268,13 @@ class LiveStreamManager:
             logger.warning(f"Skipping unknown filter '{fi.filter_id}': {e}")
         return resolved
-    def _create_video_live_stream(self, config) -> VideoCaptureLiveStream:
+    def _create_video_live_stream(self, config):
         """Create a VideoCaptureLiveStream from a VideoCaptureSource config."""
+        if not _has_video:
+            raise ImportError(
+                "OpenCV is required for video stream support. "
+                "Install it with: pip install opencv-python-headless"
+            )
         stream = VideoCaptureLiveStream(
             url=config.url,
             loop=config.loop,


@@ -106,7 +106,9 @@ class NotificationColorStripStream(ColorStripStream):
         color = _hex_to_rgb(self._default_color)
         # Push event to queue (thread-safe deque.append)
-        self._event_queue.append({"color": color, "start": time.monotonic()})
+        # Priority: 0 = normal, 1 = high (high interrupts current effect)
+        priority = 1 if color_override else 0
+        self._event_queue.append({"color": color, "start": time.monotonic(), "priority": priority})
         return True
     def configure(self, device_led_count: int) -> None:
@@ -190,11 +192,12 @@ class NotificationColorStripStream(ColorStripStream):
         frame_time = self._frame_time
         try:
-            # Check for new events
+            # Check for new events — high priority interrupts current
            while self._event_queue:
                 try:
                     event = self._event_queue.popleft()
-                    self._active_effect = event
+                    if self._active_effect is None or event.get("priority", 0) >= self._active_effect.get("priority", 0):
+                        self._active_effect = event
                 except IndexError:
                     break
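The preemption rule in this hunk can be exercised on its own: an incoming event replaces the active one only when its priority is at least as high, so a high-priority event interrupts and a later normal-priority event does not. A sketch under that assumption (the `drain_events` helper is hypothetical):

```python
from collections import deque

def drain_events(queue, active=None):
    """Drain pending events into the active slot.

    An event becomes active only if its priority is >= the current
    active event's priority (high interrupts, equal replaces).
    """
    while queue:
        try:
            event = queue.popleft()
        except IndexError:
            break
        if active is None or event.get("priority", 0) >= active.get("priority", 0):
            active = event
    return active

q = deque([
    {"name": "high", "priority": 1},
    {"name": "late-normal", "priority": 0},  # arrives after, but lower priority
])
winner = drain_events(q, active={"name": "current", "priority": 0})
```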
@@ -247,6 +250,10 @@ class NotificationColorStripStream(ColorStripStream):
             self._render_pulse(buf, n, color, progress)
         elif effect == "sweep":
             self._render_sweep(buf, n, color, progress)
+        elif effect == "chase":
+            self._render_chase(buf, n, color, progress)
+        elif effect == "gradient_flash":
+            self._render_gradient_flash(buf, n, color, progress)
         else:
             # Default: flash
             self._render_flash(buf, n, color, progress)
@@ -296,3 +303,62 @@ class NotificationColorStripStream(ColorStripStream):
         buf[:, 0] = r
         buf[:, 1] = g
         buf[:, 2] = b
def _render_chase(self, buf: np.ndarray, n: int, color: tuple, progress: float) -> None:
"""Chase effect: light travels across strip and bounces back.
First half: light moves left-to-right with a glowing tail.
Second half: light moves right-to-left back to start.
Overall brightness fades as progress approaches 1.0.
"""
buf[:] = 0
if n <= 0:
return
# Position: bounce (0→n→0)
if progress < 0.5:
pos = progress * 2.0 * (n - 1)
else:
pos = (1.0 - (progress - 0.5) * 2.0) * (n - 1)
# Overall fade
fade = max(0.0, 1.0 - progress * 0.6)
# Glow radius: ~5% of strip, minimum 2 LEDs
radius = max(2.0, n * 0.05)
for i in range(n):
dist = abs(i - pos)
if dist < radius * 3:
glow = math.exp(-0.5 * (dist / radius) ** 2) * fade
buf[i, 0] = min(255, int(color[0] * glow))
buf[i, 1] = min(255, int(color[1] * glow))
buf[i, 2] = min(255, int(color[2] * glow))
def _render_gradient_flash(self, buf: np.ndarray, n: int, color: tuple, progress: float) -> None:
"""Gradient flash: bright center fades to edges, then all fades out.
Creates a gradient from the notification color at center to darker
edges, with overall brightness fading over the duration.
"""
if n <= 0:
return
# Overall brightness envelope: quick attack, exponential decay
if progress < 0.1:
brightness = progress / 0.1
else:
brightness = math.exp(-3.0 * (progress - 0.1))
# Center-to-edge gradient
center = n / 2.0
max_dist = center if center > 0 else 1.0
for i in range(n):
dist = abs(i - center) / max_dist
# Smooth falloff from center
edge_factor = 1.0 - dist * 0.6
b_final = brightness * edge_factor
buf[i, 0] = min(255, int(color[0] * b_final))
buf[i, 1] = min(255, int(color[1] * b_final))
buf[i, 2] = min(255, int(color[2] * b_final))
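The brightness envelope in `_render_gradient_flash` is a linear attack followed by exponential decay; isolated as a small sketch (the `flash_envelope` name is hypothetical):

```python
import math

def flash_envelope(progress, attack=0.1, decay=3.0):
    """Brightness envelope: linear attack, then exponential decay.

    Rises 0 -> 1 over the first `attack` fraction of the effect,
    then falls off as exp(-decay * (progress - attack)).
    """
    if progress < attack:
        return progress / attack
    return math.exp(-decay * (progress - attack))

peak = flash_envelope(0.1)   # exactly at the end of the attack phase
tail = flash_envelope(1.0)   # nearly dark at the end of the effect
```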


@@ -5,7 +5,8 @@ instances when new notifications appear. Sources with os_listener=True are
 monitored.
 Supported platforms:
-- **Windows**: polls toast notifications via winsdk UserNotificationListener
+- **Windows**: polls toast notifications via winrt UserNotificationListener
+  (falls back to winsdk if winrt packages are not installed)
 - **Linux**: monitors org.freedesktop.Notifications via D-Bus (dbus-next)
 """
@@ -33,8 +34,34 @@ def get_os_notification_listener() -> Optional["OsNotificationListener"]:
 # ── Platform backends ──────────────────────────────────────────────────
def _import_winrt_notifications():
"""Try to import WinRT notification APIs: winrt first, then winsdk fallback.
Returns (UserNotificationListener, UserNotificationListenerAccessStatus,
NotificationKinds, backend_name) or raises ImportError.
"""
# Preferred: lightweight winrt packages (~1MB total)
try:
from winrt.windows.ui.notifications.management import (
UserNotificationListener,
UserNotificationListenerAccessStatus,
)
from winrt.windows.ui.notifications import NotificationKinds
return UserNotificationListener, UserNotificationListenerAccessStatus, NotificationKinds, "winrt"
except ImportError:
pass
# Fallback: winsdk (~35MB, may already be installed)
from winsdk.windows.ui.notifications.management import (
UserNotificationListener,
UserNotificationListenerAccessStatus,
)
from winsdk.windows.ui.notifications import NotificationKinds
return UserNotificationListener, UserNotificationListenerAccessStatus, NotificationKinds, "winsdk"
 class _WindowsBackend:
-    """Polls Windows toast notifications via winsdk."""
+    """Polls Windows toast notifications via winrt (preferred) or winsdk."""
     def __init__(self, on_notification):
         self._on_notification = on_notification
@@ -48,21 +75,22 @@ class _WindowsBackend:
         if platform.system() != "Windows":
             return False
         try:
-            from winsdk.windows.ui.notifications.management import (
-                UserNotificationListener,
-                UserNotificationListenerAccessStatus,
-            )
-            listener = UserNotificationListener.current
+            UNL, AccessStatus, _NK, backend = _import_winrt_notifications()
+            listener = UNL.current
             status = listener.get_access_status()
-            if status != UserNotificationListenerAccessStatus.ALLOWED:
+            if status != AccessStatus.ALLOWED:
                 logger.warning(
                     f"OS notification listener: access denied (status={status}). "
                     "Enable notification access in Windows Settings > Privacy > Notifications."
                 )
                 return False
+            logger.info(f"OS notification listener: using {backend} backend")
             return True
         except ImportError:
-            logger.info("OS notification listener: winsdk not installed, skipping")
+            logger.info(
+                "OS notification listener: neither winrt nor winsdk installed, skipping. "
+                "Install with: pip install winrt-Windows.UI.Notifications winrt-Windows.UI.Notifications.Management"
+            )
             return False
         except Exception as e:
             logger.warning(f"OS notification listener: Windows init error: {e}")
@@ -84,10 +112,8 @@ class _WindowsBackend:
         self._thread = None
     def _poll_loop(self) -> None:
-        from winsdk.windows.ui.notifications.management import UserNotificationListener
-        from winsdk.windows.ui.notifications import NotificationKinds
-        listener = UserNotificationListener.current
+        UNL, _AccessStatus, NotificationKinds, _backend = _import_winrt_notifications()
+        listener = UNL.current
         loop = asyncio.new_event_loop()
         async def _get_notifications():


@@ -55,6 +55,8 @@ class ProcessorDependencies:
     value_source_store: object = None
     sync_clock_manager: object = None
     cspt_store: object = None
+    gradient_store: object = None
+    weather_manager: object = None
 @dataclass
@@ -129,6 +131,8 @@ class ProcessorManager(AutoRestartMixin, DeviceHealthMixin, DeviceTestModeMixin)
             audio_template_store=deps.audio_template_store,
             sync_clock_manager=deps.sync_clock_manager,
             cspt_store=deps.cspt_store,
+            gradient_store=deps.gradient_store,
+            weather_manager=deps.weather_manager,
         )
         self._value_stream_manager = ValueStreamManager(
             value_source_store=deps.value_source_store,


@@ -9,9 +9,14 @@ import threading
 import time
 from typing import Optional
-import cv2
 import numpy as np
+try:
+    import cv2
+    _has_cv2 = True
+except ImportError:
+    _has_cv2 = False
 from wled_controller.core.capture_engines.base import ScreenCapture
 from wled_controller.core.processing.live_stream import LiveStream
 from wled_controller.utils import get_logger
@@ -67,12 +72,22 @@ def resolve_youtube_url(url: str, resolution_limit: Optional[int] = None) -> str
     return stream_url
+def _require_cv2():
+    """Raise a clear error if OpenCV is not installed."""
+    if not _has_cv2:
+        raise ImportError(
+            "OpenCV is required for camera and video support. "
+            "Install it with: pip install opencv-python-headless"
+        )
 def extract_thumbnail(url: str, resolution_limit: Optional[int] = None) -> Optional[np.ndarray]:
     """Extract the first frame of a video as a thumbnail (RGB numpy array).
     For YouTube URLs, resolves via yt-dlp first.
     Returns None on failure.
     """
+    _require_cv2()
     try:
         actual_url = url
         if is_youtube_url(url):
@@ -127,6 +142,7 @@ class VideoCaptureLiveStream(LiveStream):
         resolution_limit: Optional[int] = None,
         target_fps: int = 30,
     ):
+        _require_cv2()
         self._original_url = url
         self._resolved_url: Optional[str] = None
         self._loop = loop
@@ -136,7 +152,7 @@ class VideoCaptureLiveStream(LiveStream):
         self._resolution_limit = resolution_limit
         self._target_fps = target_fps
-        self._cap: Optional[cv2.VideoCapture] = None
+        self._cap = None  # Optional[cv2.VideoCapture]
         self._video_fps: float = 30.0
         self._total_frames: int = 0
         self._video_duration: float = 0.0


@@ -0,0 +1,282 @@
"""Weather-reactive color strip stream — maps weather conditions to ambient LED colors."""
import random
import threading
import time
from typing import Optional
import numpy as np
from wled_controller.core.processing.color_strip_stream import ColorStripStream
from wled_controller.core.weather.weather_manager import WeatherManager
from wled_controller.core.weather.weather_provider import DEFAULT_WEATHER, WeatherData
from wled_controller.utils import get_logger
logger = get_logger(__name__)
# ── WMO code → palette mapping ──────────────────────────────────────────
# Each entry: (start_rgb, end_rgb) as (R, G, B) tuples.
# Codes are matched by range.
_PALETTES = {
# Clear sky / mainly clear
(0, 1): ((255, 220, 100), (255, 180, 80)),
# Partly cloudy / overcast
(2, 3): ((150, 180, 220), (240, 235, 220)),
# Fog
(45, 48): ((180, 190, 200), (210, 215, 220)),
# Drizzle (light, moderate, dense, freezing)
(51, 53, 55, 56, 57): ((100, 160, 220), (80, 180, 190)),
# Rain (slight, moderate, heavy, freezing)
(61, 63, 65, 66, 67): ((40, 80, 180), (60, 140, 170)),
# Snow (slight, moderate, heavy, grains)
(71, 73, 75, 77): ((200, 210, 240), (180, 200, 255)),
# Rain showers
(80, 81, 82): ((30, 60, 160), (80, 70, 170)),
# Snow showers
(85, 86): ((220, 225, 240), (190, 185, 220)),
# Thunderstorm
(95, 96, 99): ((60, 20, 120), (40, 60, 200)),
}
# Default palette (partly cloudy)
_DEFAULT_PALETTE = ((150, 180, 220), (240, 235, 220))
def _resolve_palette(code: int) -> tuple:
"""Map a WMO weather code to a (start_rgb, end_rgb) palette."""
for codes, palette in _PALETTES.items():
if code in codes:
return palette
return _DEFAULT_PALETTE
def _apply_temperature_shift(color: np.ndarray, temperature: float, influence: float) -> np.ndarray:
"""Shift color array warm/cool based on temperature.
> 25°C: shift toward warm (add red, reduce blue)
< 5°C: shift toward cool (add blue, reduce red)
Between 5-25°C: linear interpolation (no shift at 15°C midpoint)
"""
if influence <= 0.0:
return color
# Normalize temperature to -1..+1 range (cold..hot)
t = (temperature - 15.0) / 10.0 # -1 at 5°C, 0 at 15°C, +1 at 25°C
t = max(-1.0, min(1.0, t))
shift = t * influence * 30.0 # max ±30 RGB units
result = color.astype(np.int16)
result[:, 0] += int(shift) # red
result[:, 2] -= int(shift) # blue
np.clip(result, 0, 255, out=result)
return result.astype(np.uint8)
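A standalone sketch of the same warm/cool shift is useful for sanity-checking the ±30-unit cap (the `temp_shift` name is hypothetical; the logic mirrors `_apply_temperature_shift`):

```python
import numpy as np

def temp_shift(color, temperature, influence):
    """Warm/cool shift: +red/-blue when hot, +blue/-red when cold.

    Temperature normalizes so 5 C -> -1, 15 C -> 0, 25 C -> +1, and the
    shift is capped at +/-30 RGB units at full influence.
    """
    t = max(-1.0, min(1.0, (temperature - 15.0) / 10.0))
    shift = int(t * influence * 30.0)
    out = color.astype(np.int16)  # widen so the shift cannot wrap uint8
    out[:, 0] += shift            # red channel
    out[:, 2] -= shift            # blue channel
    return np.clip(out, 0, 255).astype(np.uint8)

grey = np.full((4, 3), 128, dtype=np.uint8)
hot = temp_shift(grey, 30.0, 1.0)  # 30 C clamps to +1: red up 30, blue down 30
```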
def _smoothstep(x: float) -> float:
"""Hermite smoothstep: smooth cubic interpolation."""
x = max(0.0, min(1.0, x))
return x * x * (3.0 - 2.0 * x)
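The smoothstep above has the usual Hermite properties, which a quick check makes concrete: fixed endpoints, an exact 0.5 midpoint, and flatter-than-linear behavior near the edges (zero slope at both ends):

```python
def smoothstep(x: float) -> float:
    """Hermite smoothstep: clamp to [0, 1], then ease in and out."""
    x = max(0.0, min(1.0, x))
    return x * x * (3.0 - 2.0 * x)

mid = smoothstep(0.5)   # exactly 0.5 by symmetry
edge = smoothstep(0.1)  # below the linear value 0.1 near the edge
```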
class WeatherColorStripStream(ColorStripStream):
"""Generates ambient LED colors based on real-time weather data.
Fetches weather data from a WeatherManager (which polls the API),
maps the WMO condition code to a color palette, applies temperature
influence, and animates a slow gradient drift across the strip.
"""
def __init__(self, source, weather_manager: WeatherManager):
self._source_id = source.id
self._weather_source_id: str = source.weather_source_id
self._speed: float = source.speed
self._temperature_influence: float = source.temperature_influence
self._clock_id: Optional[str] = source.clock_id
self._weather_manager = weather_manager
self._led_count: int = 0 # auto-size from device
self._fps: int = 30
self._frame_time: float = 1.0 / 30
self._running = False
self._thread: Optional[threading.Thread] = None
self._latest_colors: Optional[np.ndarray] = None
self._colors_lock = threading.Lock()
# Pre-allocated buffers
self._buf_a: Optional[np.ndarray] = None
self._buf_b: Optional[np.ndarray] = None
self._use_a = True
self._pool_n = 0
# Thunderstorm flash state
self._flash_remaining = 0
# ── ColorStripStream interface ──────────────────────────────────
@property
def target_fps(self) -> int:
return self._fps
def set_capture_fps(self, fps: int) -> None:
self._fps = max(1, min(60, fps))
self._frame_time = 1.0 / self._fps
@property
def led_count(self) -> int:
return self._led_count
@property
def is_animated(self) -> bool:
return True
def start(self) -> None:
# Acquire weather runtime (increments ref count)
if self._weather_source_id:
try:
self._weather_manager.acquire(self._weather_source_id)
except Exception as e:
logger.warning(f"Weather stream {self._source_id}: failed to acquire weather source: {e}")
self._running = True
self._thread = threading.Thread(
target=self._animate_loop, daemon=True,
name=f"WeatherCSS-{self._source_id[:12]}",
)
self._thread.start()
logger.info(f"WeatherColorStripStream started: {self._source_id}")
def stop(self) -> None:
self._running = False
if self._thread is not None:
self._thread.join(timeout=5.0)
self._thread = None
# Release weather runtime
if self._weather_source_id:
try:
self._weather_manager.release(self._weather_source_id)
except Exception as e:
logger.warning(f"Weather stream {self._source_id}: failed to release weather source: {e}")
logger.info(f"WeatherColorStripStream stopped: {self._source_id}")
def get_latest_colors(self) -> Optional[np.ndarray]:
with self._colors_lock:
return self._latest_colors
def configure(self, device_led_count: int) -> None:
if device_led_count > 0 and device_led_count != self._led_count:
self._led_count = device_led_count
def update_source(self, source) -> None:
self._speed = source.speed
self._temperature_influence = source.temperature_influence
self._clock_id = source.clock_id
# If weather source changed, release old + acquire new
new_ws_id = source.weather_source_id
if new_ws_id != self._weather_source_id:
if self._weather_source_id:
try:
self._weather_manager.release(self._weather_source_id)
except Exception:
pass
self._weather_source_id = new_ws_id
if new_ws_id:
try:
self._weather_manager.acquire(new_ws_id)
except Exception as e:
logger.warning(f"Weather stream: failed to acquire new source {new_ws_id}: {e}")
# ── Animation loop ──────────────────────────────────────────────
def _ensure_pool(self, n: int) -> None:
if n == self._pool_n or n <= 0:
return
self._pool_n = n
self._buf_a = np.zeros((n, 3), dtype=np.uint8)
self._buf_b = np.zeros((n, 3), dtype=np.uint8)
def _get_clock_time(self) -> Optional[float]:
"""Get time from sync clock if configured."""
if not self._clock_id:
return None
try:
# Access via weather manager's store isn't ideal, but clocks
# are looked up at the ProcessorManager level when the stream
# is created. For now, return None and use wall time.
return None
except Exception:
return None
def _animate_loop(self) -> None:
start_time = time.perf_counter()
try:
while self._running:
loop_start = time.perf_counter()
try:
n = self._led_count
if n <= 0:
time.sleep(self._frame_time)
continue
self._ensure_pool(n)
buf = self._buf_a if self._use_a else self._buf_b
self._use_a = not self._use_a
# Get weather data
weather = self._get_weather()
palette_start, palette_end = _resolve_palette(weather.code)
# Convert to arrays
c0 = np.array(palette_start, dtype=np.float32)
c1 = np.array(palette_end, dtype=np.float32)
# Compute animation phase
t = time.perf_counter() - start_time
phase = (t * self._speed * 0.1) % 1.0
# Generate gradient with drift
for i in range(n):
frac = ((i / max(n - 1, 1)) + phase) % 1.0
s = _smoothstep(frac)
buf[i] = (c0 * (1.0 - s) + c1 * s).astype(np.uint8)
# Apply temperature shift
if self._temperature_influence > 0.0:
buf[:] = _apply_temperature_shift(buf, weather.temperature, self._temperature_influence)
# Thunderstorm flash effect
is_thunderstorm = weather.code in (95, 96, 99)
if is_thunderstorm:
if self._flash_remaining > 0:
buf[:] = 255
self._flash_remaining -= 1
elif random.random() < 0.015: # ~1.5% chance per frame
self._flash_remaining = random.randint(2, 5)
with self._colors_lock:
self._latest_colors = buf
except Exception as e:
logger.error(f"WeatherColorStripStream error: {e}", exc_info=True)
elapsed = time.perf_counter() - loop_start
time.sleep(max(self._frame_time - elapsed, 0.001))
except Exception as e:
logger.error(f"Fatal WeatherColorStripStream error: {e}", exc_info=True)
finally:
self._running = False
def _get_weather(self) -> WeatherData:
"""Get current weather data from the manager."""
if not self._weather_source_id:
return DEFAULT_WEATHER
try:
return self._weather_manager.get_data(self._weather_source_id)
except Exception:
return DEFAULT_WEATHER


@@ -0,0 +1 @@
"""Auto-update — periodic release checking and notification."""


@@ -0,0 +1,54 @@
"""Gitea release provider — fetches releases from a Gitea instance."""
import httpx
from wled_controller.core.update.release_provider import AssetInfo, ReleaseInfo, ReleaseProvider
from wled_controller.utils import get_logger
logger = get_logger(__name__)
class GiteaReleaseProvider(ReleaseProvider):
"""Fetch releases from a Gitea repository via its REST API."""
def __init__(self, base_url: str, repo: str, token: str = "") -> None:
self._base_url = base_url.rstrip("/")
self._repo = repo
self._token = token
async def get_releases(self, limit: int = 10) -> list[ReleaseInfo]:
url = f"{self._base_url}/api/v1/repos/{self._repo}/releases"
headers: dict[str, str] = {}
if self._token:
headers["Authorization"] = f"token {self._token}"
async with httpx.AsyncClient(timeout=15) as client:
resp = await client.get(url, params={"limit": limit}, headers=headers)
resp.raise_for_status()
data = resp.json()
releases: list[ReleaseInfo] = []
for item in data:
tag = item.get("tag_name", "")
version = tag.lstrip("v")
assets = tuple(
AssetInfo(
name=a["name"],
size=a.get("size", 0),
download_url=a["browser_download_url"],
)
for a in item.get("assets", [])
)
releases.append(ReleaseInfo(
tag=tag,
version=version,
name=item.get("name", tag),
body=item.get("body", ""),
prerelease=item.get("prerelease", False),
published_at=item.get("published_at", ""),
assets=assets,
))
return releases
def get_releases_page_url(self) -> str:
return f"{self._base_url}/{self._repo}/releases"


@@ -0,0 +1,73 @@
"""Detect how the application was installed.
The install type determines which update strategy is available:
- installer: NSIS `.exe` installed to AppData — can run new installer silently
- portable: Extracted ZIP with embedded Python — can replace app/ + python/ dirs
- docker: Running inside a Docker container — no auto-update, show instructions
- dev: Running from source (pip install -e) — no auto-update, link to releases
"""
import sys
from enum import Enum
from pathlib import Path
from wled_controller.utils import get_logger
logger = get_logger(__name__)
class InstallType(str, Enum):
INSTALLER = "installer"
PORTABLE = "portable"
DOCKER = "docker"
DEV = "dev"
def detect_install_type() -> InstallType:
"""Detect the current install type once at startup."""
# Docker: /.dockerenv file or cgroup hints
if Path("/.dockerenv").exists():
logger.info("Install type: docker")
return InstallType.DOCKER
# Windows installed/portable: look for embedded Python dir
app_root = Path.cwd()
has_uninstaller = (app_root / "uninstall.exe").exists()
has_embedded_python = (app_root / "python" / "python.exe").exists()
if has_uninstaller:
logger.info("Install type: installer (uninstall.exe found at %s)", app_root)
return InstallType.INSTALLER
if has_embedded_python:
logger.info("Install type: portable (embedded python/ found at %s)", app_root)
return InstallType.PORTABLE
# Linux portable: look for venv/ + run.sh
if (app_root / "venv").is_dir() and (app_root / "run.sh").exists():
logger.info("Install type: portable (Linux venv layout at %s)", app_root)
return InstallType.PORTABLE
logger.info("Install type: dev (no distribution markers found)")
return InstallType.DEV
def get_platform_asset_pattern(install_type: InstallType) -> str | None:
"""Return a substring that the matching release asset name must contain.
Returns None if auto-update is not supported for this install type.
"""
if install_type == InstallType.DOCKER:
return None
if install_type == InstallType.DEV:
return None
if sys.platform == "win32":
if install_type == InstallType.INSTALLER:
return "-setup.exe"
return "-win-x64.zip"
if sys.platform == "linux":
return "-linux-x64.tar.gz"
return None
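Asset selection is then a substring match of the pattern against each release asset name, with `None` meaning auto-update is unsupported; a sketch under that assumption (the asset names below are made up for illustration):

```python
def match_asset(asset_names, pattern):
    """Return the first asset whose name contains the pattern, else None.

    A None pattern means docker/dev installs: auto-update unsupported.
    """
    if pattern is None:
        return None
    for name in asset_names:
        if pattern in name:
            return name
    return None

assets = [
    "app-1.2.0-setup.exe",        # hypothetical NSIS installer asset
    "app-1.2.0-win-x64.zip",      # hypothetical Windows portable asset
    "app-1.2.0-linux-x64.tar.gz", # hypothetical Linux portable asset
]
chosen = match_asset(assets, "-linux-x64.tar.gz")
```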


@@ -0,0 +1,41 @@
"""Abstract release provider and data models."""
from abc import ABC, abstractmethod
from dataclasses import dataclass
@dataclass(frozen=True)
class AssetInfo:
"""A single downloadable asset attached to a release."""
name: str
size: int
download_url: str
@dataclass(frozen=True)
class ReleaseInfo:
"""A single release from the hosting platform."""
tag: str
version: str
name: str
body: str
prerelease: bool
published_at: str
assets: tuple[AssetInfo, ...]
class ReleaseProvider(ABC):
"""Platform-agnostic interface for querying releases.
Implement this for Gitea, GitHub, GitLab, etc.
"""
@abstractmethod
async def get_releases(self, limit: int = 10) -> list[ReleaseInfo]:
"""Fetch recent releases, newest first."""
@abstractmethod
def get_releases_page_url(self) -> str:
"""Return the user-facing URL of the releases page."""


@@ -0,0 +1,518 @@
"""Background service that periodically checks for new releases."""
import asyncio
import os
import shutil
import subprocess
import sys
import time
from pathlib import Path
from typing import Any
import httpx
from wled_controller import __version__
from wled_controller.core.update.install_type import InstallType, detect_install_type, get_platform_asset_pattern
from wled_controller.core.update.release_provider import AssetInfo, ReleaseInfo, ReleaseProvider
from wled_controller.core.update.version_check import is_newer, normalize_version
from wled_controller.storage.database import Database
from wled_controller.utils import get_logger
logger = get_logger(__name__)
DEFAULT_SETTINGS: dict[str, Any] = {
"enabled": True,
"check_interval_hours": 24.0,
"include_prerelease": False,
}
_STARTUP_DELAY_S = 30
_MANUAL_CHECK_DEBOUNCE_S = 60
class UpdateService:
"""Periodically polls a ReleaseProvider and fires WebSocket events."""
def __init__(
self,
provider: ReleaseProvider,
db: Database,
fire_event: Any = None,
update_dir: Path | None = None,
) -> None:
self._provider = provider
self._db = db
self._fire_event = fire_event
self._settings = self._load_settings()
self._task: asyncio.Task | None = None
# Install type (detected once)
self._install_type = detect_install_type()
self._asset_pattern = get_platform_asset_pattern(self._install_type)
# Download directory
self._update_dir = update_dir or Path("data/updates")
self._update_dir.mkdir(parents=True, exist_ok=True)
# Runtime state
self._available_release: ReleaseInfo | None = None
self._last_check: float = 0.0
self._checking = False
self._last_error: str | None = None
# Download/apply state
self._download_progress: float = 0.0 # 0..1
self._downloading = False
self._downloaded_file: Path | None = None
self._applying = False
# Load persisted state
persisted = self._db.get_setting("update_state") or {}
self._dismissed_version: str = persisted.get("dismissed_version", "")
# ── Settings persistence ───────────────────────────────────
def _load_settings(self) -> dict:
data = self._db.get_setting("auto_update")
if data:
return {**DEFAULT_SETTINGS, **data}
return dict(DEFAULT_SETTINGS)
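The `{**DEFAULT_SETTINGS, **data}` merge in `_load_settings` means stored keys override defaults while any key missing from storage keeps its default value:

```python
DEFAULT_SETTINGS = {
    "enabled": True,
    "check_interval_hours": 24.0,
    "include_prerelease": False,
}

# Stored settings may be partial (e.g. user only changed the interval);
# dict unpacking fills the gaps from the defaults, later keys win.
stored = {"check_interval_hours": 6.0}
merged = {**DEFAULT_SETTINGS, **stored}
```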
def _save_settings(self) -> None:
self._db.set_setting("auto_update", {
"enabled": self._settings["enabled"],
"check_interval_hours": self._settings["check_interval_hours"],
"include_prerelease": self._settings["include_prerelease"],
})
def _save_state(self) -> None:
self._db.set_setting("update_state", {
"dismissed_version": self._dismissed_version,
})
# ── Lifecycle ──────────────────────────────────────────────
async def start(self) -> None:
if self._settings["enabled"]:
self._start_loop()
logger.info(
"Update checker started (every %.1fh, prerelease=%s, install=%s)",
self._settings["check_interval_hours"],
self._settings["include_prerelease"],
self._install_type.value,
)
else:
logger.info("Update checker initialized (disabled, install=%s)", self._install_type.value)
async def stop(self) -> None:
self._cancel_loop()
logger.info("Update checker stopped")
def _start_loop(self) -> None:
self._cancel_loop()
self._task = asyncio.create_task(self._check_loop())
def _cancel_loop(self) -> None:
if self._task is not None:
self._task.cancel()
self._task = None
async def _check_loop(self) -> None:
try:
await asyncio.sleep(_STARTUP_DELAY_S)
await self._perform_check()
interval_s = self._settings["check_interval_hours"] * 3600
while True:
await asyncio.sleep(interval_s)
try:
await self._perform_check()
except Exception as exc:
logger.error("Update check failed: %s", exc, exc_info=True)
except asyncio.CancelledError:
pass
# ── Core check logic ───────────────────────────────────────
async def _perform_check(self) -> None:
self._checking = True
self._last_error = None
try:
releases = await self._provider.get_releases(limit=10)
best = self._find_best_release(releases)
self._available_release = best
self._last_check = time.time()
if best and self._fire_event:
self._fire_event({
"type": "update_available",
"version": best.version,
"tag": best.tag,
"name": best.name,
"prerelease": best.prerelease,
"dismissed": best.version == self._dismissed_version,
"can_auto_update": self._can_auto_update(best),
})
logger.info(
"Update check complete — %s",
f"v{best.version} available" if best else "up to date",
)
except Exception as exc:
self._last_error = str(exc)
raise
finally:
self._checking = False
def _find_best_release(self, releases: list[ReleaseInfo]) -> ReleaseInfo | None:
"""Find the newest release that is newer than the current version."""
include_pre = self._settings["include_prerelease"]
for release in releases:
if release.prerelease and not include_pre:
continue
try:
normalize_version(release.version)
except Exception:
continue
if is_newer(release.version, __version__):
return release
return None
def _find_asset(self, release: ReleaseInfo) -> AssetInfo | None:
"""Find the matching asset for this platform + install type."""
if not self._asset_pattern:
return None
for asset in release.assets:
if self._asset_pattern in asset.name:
return asset
return None
def _can_auto_update(self, release: ReleaseInfo) -> bool:
"""Check if auto-update is possible for the given release."""
return self._find_asset(release) is not None
# ── Download ───────────────────────────────────────────────
async def download_update(self) -> Path:
"""Download the update asset. Returns path to downloaded file."""
release = self._available_release
if not release:
raise RuntimeError("No update available")
asset = self._find_asset(release)
if not asset:
raise RuntimeError(
f"No matching asset for {self._install_type.value} "
f"on {sys.platform}"
)
dest = self._update_dir / asset.name
# Skip re-download if file exists and size matches
if dest.exists() and dest.stat().st_size == asset.size:
logger.info("Update already downloaded: %s", dest.name)
self._downloaded_file = dest
self._download_progress = 1.0
return dest
self._downloading = True
self._download_progress = 0.0
try:
await self._stream_download(asset.download_url, dest, asset.size)
self._downloaded_file = dest
logger.info("Downloaded update: %s (%d bytes)", dest.name, dest.stat().st_size)
return dest
except Exception:
# Clean up partial download
if dest.exists():
dest.unlink()
self._downloaded_file = None
raise
finally:
self._downloading = False
async def _stream_download(self, url: str, dest: Path, total_size: int) -> None:
"""Stream-download a file, updating progress as we go."""
tmp = dest.with_suffix(dest.suffix + ".tmp")
received = 0
async with httpx.AsyncClient(timeout=300, follow_redirects=True) as client:
async with client.stream("GET", url) as resp:
resp.raise_for_status()
with open(tmp, "wb") as f:
async for chunk in resp.aiter_bytes(chunk_size=65536):
f.write(chunk)
received += len(chunk)
if total_size > 0:
self._download_progress = received / total_size
if self._fire_event:
self._fire_event({
"type": "update_download_progress",
"progress": round(self._download_progress, 3),
})
# Atomic rename
tmp.replace(dest)
self._download_progress = 1.0
# ── Apply ──────────────────────────────────────────────────
async def apply_update(self) -> None:
"""Download (if needed) and apply the update, then shut down."""
if self._applying:
raise RuntimeError("Update already in progress")
self._applying = True
try:
if not self._downloaded_file or not self._downloaded_file.exists():
await self.download_update()
assert self._downloaded_file is not None
file_path = self._downloaded_file
if self._install_type == InstallType.INSTALLER:
await self._apply_installer(file_path)
elif self._install_type == InstallType.PORTABLE:
if file_path.suffix == ".zip":
await self._apply_portable_zip(file_path)
elif file_path.name.endswith(".tar.gz"):
await self._apply_portable_tarball(file_path)
else:
raise RuntimeError(f"Unknown portable format: {file_path.name}")
else:
raise RuntimeError(f"Auto-update not supported for install type: {self._install_type.value}")
finally:
self._applying = False
async def _apply_installer(self, exe_path: Path) -> None:
"""Launch the NSIS installer silently and shut down."""
install_dir = str(Path.cwd())
logger.info("Launching silent installer: %s /S /D=%s", exe_path, install_dir)
# Fire event so frontend shows restart overlay
if self._fire_event:
self._fire_event({"type": "server_restarting"})
# Launch installer detached — it will wait for python.exe to exit,
# then install and the VBS launcher / service will restart the app.
subprocess.Popen(
[str(exe_path), "/S", f"/D={install_dir}"],
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
if sys.platform == "win32" else 0,
)
# Give the installer a moment to start, then shut down
await asyncio.sleep(1)
from wled_controller.server_ref import request_shutdown
request_shutdown()
async def _apply_portable_zip(self, zip_path: Path) -> None:
"""Extract ZIP over the current installation, then shut down."""
import zipfile
app_root = Path.cwd()
staging = self._update_dir / "_staging"
logger.info("Extracting portable update: %s", zip_path.name)
# Extract to staging dir in a thread (I/O bound)
def _extract():
if staging.exists():
shutil.rmtree(staging)
staging.mkdir(parents=True)
with zipfile.ZipFile(zip_path, "r") as zf:
zf.extractall(staging)
await asyncio.to_thread(_extract)
# The ZIP contains a top-level LedGrab/ dir — find it
inner = _find_single_child_dir(staging)
# Write a post-update script that swaps the dirs after shutdown.
# On Windows, python.exe locks files, so we need a bat script
# that waits for the process to exit, then does the swap.
script = self._update_dir / "_apply_update.bat"
script.write_text(
_build_swap_script(inner, app_root, ["app", "python", "scripts"]),
encoding="utf-8",
)
logger.info("Launching post-update script and shutting down")
if self._fire_event:
self._fire_event({"type": "server_restarting"})
subprocess.Popen(
["cmd.exe", "/c", str(script)],
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
if sys.platform == "win32" else 0,
)
await asyncio.sleep(1)
from wled_controller.server_ref import request_shutdown
request_shutdown()
async def _apply_portable_tarball(self, tar_path: Path) -> None:
"""Extract tarball over the current installation, then shut down."""
import tarfile
app_root = Path.cwd()
staging = self._update_dir / "_staging"
logger.info("Extracting portable update: %s", tar_path.name)
def _extract():
if staging.exists():
shutil.rmtree(staging)
staging.mkdir(parents=True)
with tarfile.open(tar_path, "r:gz") as tf:
tf.extractall(staging, filter="data")
await asyncio.to_thread(_extract)
inner = _find_single_child_dir(staging)
# On Linux, write a shell script that replaces dirs after shutdown
script = self._update_dir / "_apply_update.sh"
dirs_to_swap = ["app", "venv"]
lines = [
"#!/bin/bash",
"# Auto-generated update script — replaces app dirs and restarts",
f'APP_ROOT="{app_root}"',
f'STAGING="{inner}"',
"sleep 3 # wait for server to exit",
]
for d in dirs_to_swap:
lines.append(f'[ -d "$STAGING/{d}" ] && rm -rf "$APP_ROOT/{d}" && mv "$STAGING/{d}" "$APP_ROOT/{d}"')
# Copy scripts/ and run.sh if present
lines.append('[ -f "$STAGING/run.sh" ] && cp "$STAGING/run.sh" "$APP_ROOT/run.sh"')
lines.append(f'rm -rf "{staging}"')
lines.append(f'rm -f "{script}"')
lines.append('echo "Update applied. Restarting..."')
lines.append('cd "$APP_ROOT" && exec ./run.sh')
script.write_text("\n".join(lines) + "\n", encoding="utf-8")
os.chmod(script, 0o755)
logger.info("Launching post-update script and shutting down")
if self._fire_event:
self._fire_event({"type": "server_restarting"})
subprocess.Popen(
["/bin/bash", str(script)],
start_new_session=True,
)
await asyncio.sleep(1)
from wled_controller.server_ref import request_shutdown
request_shutdown()
# ── Public API (called from routes) ────────────────────────
async def check_now(self) -> dict:
"""Trigger an immediate check (with debounce)."""
elapsed = time.time() - self._last_check
if elapsed < _MANUAL_CHECK_DEBOUNCE_S and self._available_release is not None:
return self.get_status()
await self._perform_check()
return self.get_status()
def dismiss(self, version: str) -> None:
"""Dismiss the notification for *version*."""
self._dismissed_version = version
self._save_state()
def get_status(self) -> dict:
rel = self._available_release
can_auto = rel is not None and self._can_auto_update(rel)
return {
"current_version": __version__,
"has_update": rel is not None,
"checking": self._checking,
"last_check": self._last_check if self._last_check else None,
"last_error": self._last_error,
"releases_url": self._provider.get_releases_page_url(),
"install_type": self._install_type.value,
"can_auto_update": can_auto,
"downloading": self._downloading,
"download_progress": round(self._download_progress, 3),
"applying": self._applying,
"release": {
"version": rel.version,
"tag": rel.tag,
"name": rel.name,
"body": rel.body,
"prerelease": rel.prerelease,
"published_at": rel.published_at,
} if rel else None,
"dismissed_version": self._dismissed_version,
}
def get_settings(self) -> dict:
return {
"enabled": self._settings["enabled"],
"check_interval_hours": self._settings["check_interval_hours"],
"include_prerelease": self._settings["include_prerelease"],
}
async def update_settings(
self,
enabled: bool,
check_interval_hours: float,
include_prerelease: bool,
) -> dict:
self._settings["enabled"] = enabled
self._settings["check_interval_hours"] = check_interval_hours
self._settings["include_prerelease"] = include_prerelease
self._save_settings()
if enabled:
self._start_loop()
logger.info(
"Update checker enabled (every %.1fh, prerelease=%s)",
check_interval_hours,
include_prerelease,
)
else:
self._cancel_loop()
logger.info("Update checker disabled")
return self.get_settings()
# ── Helpers ──────────────────────────────────────────────────
def _find_single_child_dir(parent: Path) -> Path:
    """Return the single subdirectory inside *parent* (e.g. LedGrab/).

    Falls back to *parent* itself when there is not exactly one child directory.
    """
    children = [c for c in parent.iterdir() if c.is_dir()]
    if len(children) == 1:
        return children[0]
    return parent
def _build_swap_script(staging: Path, app_root: Path, dirs: list[str]) -> str:
"""Build a Windows batch script that replaces dirs after the server exits."""
lines = [
"@echo off",
"REM Auto-generated update script — replaces app dirs and restarts",
"echo Waiting for server to exit...",
"timeout /t 5 /nobreak >nul",
]
for d in dirs:
src = staging / d
dst = app_root / d
lines.append(f'if exist "{src}" (')
lines.append(f' rmdir /s /q "{dst}" 2>nul')
lines.append(f' move /y "{src}" "{dst}"')
lines.append(")")
# Copy LedGrab.bat if present
bat = staging / "LedGrab.bat"
lines.append(f'if exist "{bat}" copy /y "{bat}" "{app_root / "LedGrab.bat"}"')
# Cleanup
lines.append(f'rmdir /s /q "{staging.parent}" 2>nul')
lines.append('del /f /q "%~f0" 2>nul')
lines.append('echo Update complete. Restarting...')
# Restart via VBS launcher or bat
vbs = app_root / "scripts" / "start-hidden.vbs"
bat_launcher = app_root / "LedGrab.bat"
lines.append(f'if exist "{vbs}" (')
lines.append(f' start "" wscript.exe "{vbs}"')
lines.append(") else (")
lines.append(f' start "" "{bat_launcher}"')
lines.append(")")
return "\r\n".join(lines) + "\r\n"
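The `.tmp`-then-`replace()` pattern in `_stream_download` is worth isolating: writing to a temp sibling and atomically renaming means a crashed download never leaves a corrupt destination file behind. A minimal stand-alone sketch (the `atomic_write` name is illustrative, not from the codebase):

```python
import tempfile
from pathlib import Path

def atomic_write(dest: Path, chunks) -> None:
    """Write chunks to dest via a .tmp sibling so readers never see a partial file."""
    tmp = dest.with_suffix(dest.suffix + ".tmp")
    try:
        with open(tmp, "wb") as f:
            for chunk in chunks:
                f.write(chunk)
        tmp.replace(dest)  # atomic rename on the same filesystem
    except Exception:
        tmp.unlink(missing_ok=True)  # drop the partial download
        raise

with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "asset.bin"
    atomic_write(target, [b"abc", b"def"])
    print(target.read_bytes())  # b'abcdef'
```

If the write fails midway, only `asset.bin.tmp` is affected and it is unlinked, which is exactly why `download_update` can safely re-check `dest.exists()` on retry.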

View File

@@ -0,0 +1,45 @@
"""Version comparison utilities.
Normalizes Gitea-style tags (v0.3.0-alpha.1) to PEP 440 (0.3.0a1)
so that ``packaging.version.Version`` can compare them correctly.
"""
import re
from packaging.version import InvalidVersion, Version
_PRE_MAP = {
"alpha": "a",
"beta": "b",
"rc": "rc",
}
_PRE_PATTERN = re.compile(
r"^(\d+\.\d+\.\d+)[-.]?(alpha|beta|rc)[.-]?(\d+)$", re.IGNORECASE
)
def normalize_version(raw: str) -> Version:
"""Convert a tag like ``v0.3.0-alpha.1`` to a PEP 440 ``Version``.
Raises ``InvalidVersion`` if the string cannot be parsed.
"""
cleaned = raw.lstrip("v").strip()
m = _PRE_PATTERN.match(cleaned)
if m:
base, pre_label, pre_num = m.group(1), m.group(2).lower(), m.group(3)
pep_label = _PRE_MAP.get(pre_label, pre_label)
cleaned = f"{base}{pep_label}{pre_num}"
return Version(cleaned)
def is_newer(candidate: str, current: str) -> bool:
"""Return True if *candidate* is strictly newer than *current*.
Returns False if either version string is unparseable.
"""
try:
return normalize_version(candidate) > normalize_version(current)
except InvalidVersion:
return False
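Since `packaging` is a third-party dependency, the normalization step itself can be shown dependency-free. A sketch covering only the `alpha|beta|rc` forms handled above (the `normalize` helper here is illustrative, not part of the module):

```python
import re

_PRE_MAP = {"alpha": "a", "beta": "b", "rc": "rc"}
_PRE_PATTERN = re.compile(
    r"^(\d+\.\d+\.\d+)[-.]?(alpha|beta|rc)[.-]?(\d+)$", re.IGNORECASE
)

def normalize(raw: str) -> str:
    """v0.3.0-alpha.1 -> 0.3.0a1 (PEP 440 spelling, as a plain string)."""
    cleaned = raw.lstrip("v").strip()
    m = _PRE_PATTERN.match(cleaned)
    if m:
        base, label, num = m.group(1), m.group(2).lower(), m.group(3)
        cleaned = f"{base}{_PRE_MAP[label]}{num}"
    return cleaned

print(normalize("v0.3.0-alpha.1"))  # 0.3.0a1
print(normalize("v1.2.0"))          # 1.2.0
```

The real module hands the normalized string to `packaging.version.Version`, which is what gives correct ordering such as `0.3.0a1 < 0.3.0`.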

View File

@@ -0,0 +1,183 @@
"""Weather source runtime manager — polls APIs and caches WeatherData.
Ref-counted pool: multiple CSS streams sharing the same weather source
share one polling loop. Lazy-creates runtimes on first acquire().
"""
import threading
import time
from typing import Dict, Optional
from wled_controller.core.weather.weather_provider import (
DEFAULT_WEATHER,
WeatherData,
WeatherProvider,
create_provider,
)
from wled_controller.storage.weather_source import WeatherSource
from wled_controller.storage.weather_source_store import WeatherSourceStore
from wled_controller.utils import get_logger
logger = get_logger(__name__)
class _WeatherRuntime:
"""Polls a weather provider on a timer and caches the latest result."""
def __init__(self, source: WeatherSource, provider: WeatherProvider) -> None:
self._source_id = source.id
self._provider = provider
self._latitude = source.latitude
self._longitude = source.longitude
self._interval = max(60, source.update_interval)
self._data: WeatherData = DEFAULT_WEATHER
self._lock = threading.Lock()
self._running = False
self._thread: Optional[threading.Thread] = None
@property
def data(self) -> WeatherData:
with self._lock:
return self._data
def start(self) -> None:
if self._running:
return
self._running = True
self._thread = threading.Thread(
target=self._poll_loop, daemon=True,
name=f"Weather-{self._source_id[:12]}",
)
self._thread.start()
logger.info(f"Weather runtime started: {self._source_id}")
def stop(self) -> None:
self._running = False
if self._thread is not None:
self._thread.join(timeout=10.0)
self._thread = None
logger.info(f"Weather runtime stopped: {self._source_id}")
def update_config(self, source: WeatherSource) -> None:
self._latitude = source.latitude
self._longitude = source.longitude
self._interval = max(60, source.update_interval)
def fetch_now(self) -> WeatherData:
"""Force an immediate fetch (used by test endpoint)."""
result = self._provider.fetch(self._latitude, self._longitude)
with self._lock:
self._data = result
return result
def _poll_loop(self) -> None:
# Fetch immediately on start
self._do_fetch()
last_fetch = time.monotonic()
while self._running:
time.sleep(1.0)
if time.monotonic() - last_fetch >= self._interval:
self._do_fetch()
last_fetch = time.monotonic()
def _do_fetch(self) -> None:
result = self._provider.fetch(self._latitude, self._longitude)
with self._lock:
self._data = result
logger.debug(
f"Weather {self._source_id}: code={result.code} "
f"temp={result.temperature:.1f}C wind={result.wind_speed:.0f}km/h"
)
class WeatherManager:
"""Ref-counted pool of weather runtimes."""
def __init__(self, store: WeatherSourceStore) -> None:
self._store = store
# source_id -> (runtime, ref_count)
self._runtimes: Dict[str, tuple] = {}
self._lock = threading.Lock()
def acquire(self, source_id: str) -> _WeatherRuntime:
"""Get or create a runtime for the given weather source. Increments ref count."""
with self._lock:
if source_id in self._runtimes:
runtime, count = self._runtimes[source_id]
self._runtimes[source_id] = (runtime, count + 1)
return runtime
source = self._store.get(source_id)
provider = create_provider(source.provider, source.provider_config)
runtime = _WeatherRuntime(source, provider)
runtime.start()
self._runtimes[source_id] = (runtime, 1)
return runtime
def release(self, source_id: str) -> None:
"""Decrement ref count; stop runtime when it reaches zero."""
with self._lock:
if source_id not in self._runtimes:
return
runtime, count = self._runtimes[source_id]
if count <= 1:
runtime.stop()
del self._runtimes[source_id]
else:
self._runtimes[source_id] = (runtime, count - 1)
def get_data(self, source_id: str) -> WeatherData:
    """Get cached weather data for a source (creates runtime if needed)."""
    with self._lock:
        if source_id in self._runtimes:
            runtime, _count = self._runtimes[source_id]
            return runtime.data
    # No active runtime — create one outside the lock: _ensure_runtime()
    # acquires self._lock itself, and threading.Lock is not reentrant,
    # so calling it while holding the lock would deadlock.
    runtime = self._ensure_runtime(source_id)
    return runtime.data
def fetch_now(self, source_id: str) -> WeatherData:
"""Force an immediate fetch for the test endpoint."""
runtime = self._ensure_runtime(source_id)
return runtime.fetch_now()
def update_source(self, source_id: str) -> None:
    """Hot-update runtime config when the source is edited."""
    with self._lock:
        if source_id not in self._runtimes:
            return
        runtime, _count = self._runtimes[source_id]
        try:
            source = self._store.get(source_id)
            runtime.update_config(source)
        except Exception as e:
            logger.warning(f"Failed to update weather runtime {source_id}: {e}")
def _ensure_runtime(self, source_id: str) -> _WeatherRuntime:
"""Get or create a runtime (for API control of idle sources)."""
with self._lock:
if source_id in self._runtimes:
runtime, count = self._runtimes[source_id]
return runtime
source = self._store.get(source_id)
provider = create_provider(source.provider, source.provider_config)
runtime = _WeatherRuntime(source, provider)
runtime.start()
with self._lock:
if source_id not in self._runtimes:
self._runtimes[source_id] = (runtime, 0)
else:
runtime.stop()
runtime, _count = self._runtimes[source_id]
return runtime
def shutdown(self) -> None:
"""Stop all runtimes."""
with self._lock:
for source_id, (runtime, _count) in list(self._runtimes.items()):
runtime.stop()
self._runtimes.clear()
logger.info("Weather manager shut down")
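The acquire/release bookkeeping above is a generic ref-counted pool; the same shape works for any shared resource with a start/stop lifecycle. A minimal stand-alone sketch (the `RefCountedPool` class and its factory are illustrative, not from the codebase):

```python
import threading

class RefCountedPool:
    """Share one resource per key; drop it when the last user releases it."""
    def __init__(self, factory):
        self._factory = factory   # key -> resource
        self._entries = {}        # key -> (resource, ref_count)
        self._lock = threading.Lock()

    def acquire(self, key):
        with self._lock:
            if key in self._entries:
                res, count = self._entries[key]
                self._entries[key] = (res, count + 1)
                return res
            res = self._factory(key)  # lazy-create on first acquire
            self._entries[key] = (res, 1)
            return res

    def release(self, key):
        with self._lock:
            if key not in self._entries:
                return
            res, count = self._entries[key]
            if count <= 1:
                del self._entries[key]  # last user gone — drop the resource
            else:
                self._entries[key] = (res, count - 1)

pool = RefCountedPool(lambda key: f"runtime:{key}")
a = pool.acquire("berlin")
b = pool.acquire("berlin")
print(a is b)  # True — two CSS streams share one polling runtime
```

In the real `WeatherManager` the factory also calls `runtime.start()`, and `release` calls `runtime.stop()` before dropping the entry.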

View File

@@ -0,0 +1,123 @@
"""Weather data providers with pluggable backend support.
Each provider fetches current weather data and returns a standardized
WeatherData result. Only Open-Meteo is supported in v1 (free, no API key).
"""
import time
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, Type
import httpx
from wled_controller.utils import get_logger
logger = get_logger(__name__)
_HTTP_TIMEOUT = 5.0
@dataclass(frozen=True)
class WeatherData:
"""Immutable weather observation."""
code: int # WMO weather code (0-99)
temperature: float # Celsius
wind_speed: float # km/h
cloud_cover: int # 0-100 %
fetched_at: float # time.monotonic() timestamp
# Default fallback when no data has been fetched yet
DEFAULT_WEATHER = WeatherData(code=2, temperature=20.0, wind_speed=5.0, cloud_cover=50, fetched_at=0.0)
class WeatherProvider(ABC):
"""Abstract weather data provider."""
@abstractmethod
def fetch(self, latitude: float, longitude: float) -> WeatherData:
"""Fetch current weather for the given location.
Must not raise — returns DEFAULT_WEATHER on failure.
"""
class OpenMeteoProvider(WeatherProvider):
"""Open-Meteo API provider (free, no API key required)."""
_URL = "https://api.open-meteo.com/v1/forecast"
def __init__(self) -> None:
self._client = httpx.Client(timeout=_HTTP_TIMEOUT)
def fetch(self, latitude: float, longitude: float) -> WeatherData:
try:
resp = self._client.get(
self._URL,
params={
"latitude": latitude,
"longitude": longitude,
"current": "temperature_2m,weather_code,wind_speed_10m,cloud_cover",
},
)
resp.raise_for_status()
data = resp.json()
current = data["current"]
return WeatherData(
code=int(current["weather_code"]),
temperature=float(current["temperature_2m"]),
wind_speed=float(current.get("wind_speed_10m", 0.0)),
cloud_cover=int(current.get("cloud_cover", 50)),
fetched_at=time.monotonic(),
)
except Exception as e:
logger.warning(f"Open-Meteo fetch failed: {e}")
return DEFAULT_WEATHER
PROVIDER_REGISTRY: Dict[str, Type[WeatherProvider]] = {
"open_meteo": OpenMeteoProvider,
}
def create_provider(provider_name: str, provider_config: dict) -> WeatherProvider:
"""Create a provider instance from registry."""
cls = PROVIDER_REGISTRY.get(provider_name)
if cls is None:
raise ValueError(f"Unknown weather provider: {provider_name}")
return cls()
# WMO Weather interpretation codes (WMO 4677)
WMO_CONDITION_NAMES: Dict[int, str] = {
0: "Clear sky",
1: "Mainly clear",
2: "Partly cloudy",
3: "Overcast",
45: "Fog",
48: "Depositing rime fog",
51: "Light drizzle",
53: "Moderate drizzle",
55: "Dense drizzle",
56: "Light freezing drizzle",
57: "Dense freezing drizzle",
61: "Slight rain",
63: "Moderate rain",
65: "Heavy rain",
66: "Light freezing rain",
67: "Heavy freezing rain",
71: "Slight snowfall",
73: "Moderate snowfall",
75: "Heavy snowfall",
77: "Snow grains",
80: "Slight rain showers",
81: "Moderate rain showers",
82: "Violent rain showers",
85: "Slight snow showers",
86: "Heavy snow showers",
95: "Thunderstorm",
96: "Thunderstorm with slight hail",
99: "Thunderstorm with heavy hail",
}
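The `PROVIDER_REGISTRY` name-to-class mapping is what makes backends pluggable: adding a provider is one class plus one registry entry, with no changes to callers. A reduced sketch with a hypothetical `FakeProvider` (useful as a test double; names here are illustrative):

```python
from abc import ABC, abstractmethod
from typing import Dict, Type

class Provider(ABC):
    @abstractmethod
    def fetch(self) -> dict:
        """Return a weather observation as a plain dict."""

class FakeProvider(Provider):
    """Hypothetical stand-in returning canned data (no network)."""
    def fetch(self) -> dict:
        return {"code": 0, "temperature": 21.0}

REGISTRY: Dict[str, Type[Provider]] = {"fake": FakeProvider}

def create(name: str) -> Provider:
    cls = REGISTRY.get(name)
    if cls is None:
        raise ValueError(f"Unknown provider: {name}")
    return cls()

print(create("fake").fetch()["code"])  # 0
```

The module above follows the same shape with `open_meteo` as the only registered backend; `provider_config` is accepted but unused until a provider needs an API key.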

View File

@@ -32,13 +32,18 @@ from wled_controller.storage.automation_store import AutomationStore
from wled_controller.storage.scene_preset_store import ScenePresetStore from wled_controller.storage.scene_preset_store import ScenePresetStore
from wled_controller.storage.sync_clock_store import SyncClockStore from wled_controller.storage.sync_clock_store import SyncClockStore
from wled_controller.storage.color_strip_processing_template_store import ColorStripProcessingTemplateStore from wled_controller.storage.color_strip_processing_template_store import ColorStripProcessingTemplateStore
from wled_controller.storage.gradient_store import GradientStore
from wled_controller.storage.weather_source_store import WeatherSourceStore
from wled_controller.core.processing.sync_clock_manager import SyncClockManager from wled_controller.core.processing.sync_clock_manager import SyncClockManager
from wled_controller.core.weather.weather_manager import WeatherManager
from wled_controller.core.automations.automation_engine import AutomationEngine from wled_controller.core.automations.automation_engine import AutomationEngine
from wled_controller.core.mqtt.mqtt_service import MQTTService from wled_controller.core.mqtt.mqtt_service import MQTTService
from wled_controller.core.devices.mqtt_client import set_mqtt_service from wled_controller.core.devices.mqtt_client import set_mqtt_service
from wled_controller.core.backup.auto_backup import AutoBackupEngine from wled_controller.core.backup.auto_backup import AutoBackupEngine
from wled_controller.core.processing.os_notification_listener import OsNotificationListener from wled_controller.core.processing.os_notification_listener import OsNotificationListener
from wled_controller.api.routes.system import STORE_MAP from wled_controller.core.update.update_service import UpdateService
from wled_controller.core.update.gitea_provider import GiteaReleaseProvider
from wled_controller.storage.database import Database
from wled_controller.utils import setup_logging, get_logger, install_broadcast_handler from wled_controller.utils import setup_logging, get_logger, install_broadcast_handler
# Initialize logging # Initialize logging
@@ -49,27 +54,34 @@ logger = get_logger(__name__)
# Get configuration # Get configuration
config = get_config() config = get_config()
# Seed demo data before stores are loaded (first-run only) # Initialize SQLite database
db = Database(config.storage.database_file)
# Seed demo data after DB is ready (first-run only)
if config.demo: if config.demo:
from wled_controller.core.demo_seed import seed_demo_data from wled_controller.core.demo_seed import seed_demo_data
seed_demo_data(config.storage) seed_demo_data(db)
# Initialize storage and processing # Initialize storage and processing
device_store = DeviceStore(config.storage.devices_file) device_store = DeviceStore(db)
template_store = TemplateStore(config.storage.templates_file) template_store = TemplateStore(db)
pp_template_store = PostprocessingTemplateStore(config.storage.postprocessing_templates_file) pp_template_store = PostprocessingTemplateStore(db)
picture_source_store = PictureSourceStore(config.storage.picture_sources_file) picture_source_store = PictureSourceStore(db)
output_target_store = OutputTargetStore(config.storage.output_targets_file) output_target_store = OutputTargetStore(db)
pattern_template_store = PatternTemplateStore(config.storage.pattern_templates_file) pattern_template_store = PatternTemplateStore(db)
color_strip_store = ColorStripStore(config.storage.color_strip_sources_file) color_strip_store = ColorStripStore(db)
audio_source_store = AudioSourceStore(config.storage.audio_sources_file) audio_source_store = AudioSourceStore(db)
audio_template_store = AudioTemplateStore(config.storage.audio_templates_file) audio_template_store = AudioTemplateStore(db)
value_source_store = ValueSourceStore(config.storage.value_sources_file) value_source_store = ValueSourceStore(db)
automation_store = AutomationStore(config.storage.automations_file) automation_store = AutomationStore(db)
scene_preset_store = ScenePresetStore(config.storage.scene_presets_file) scene_preset_store = ScenePresetStore(db)
sync_clock_store = SyncClockStore(config.storage.sync_clocks_file) sync_clock_store = SyncClockStore(db)
cspt_store = ColorStripProcessingTemplateStore(config.storage.color_strip_processing_templates_file) cspt_store = ColorStripProcessingTemplateStore(db)
gradient_store = GradientStore(db)
gradient_store.migrate_palette_references(color_strip_store)
weather_source_store = WeatherSourceStore(db)
sync_clock_manager = SyncClockManager(sync_clock_store) sync_clock_manager = SyncClockManager(sync_clock_store)
weather_manager = WeatherManager(weather_source_store)
processor_manager = ProcessorManager( processor_manager = ProcessorManager(
ProcessorDependencies( ProcessorDependencies(
@@ -84,10 +96,21 @@ processor_manager = ProcessorManager(
audio_template_store=audio_template_store, audio_template_store=audio_template_store,
sync_clock_manager=sync_clock_manager, sync_clock_manager=sync_clock_manager,
cspt_store=cspt_store, cspt_store=cspt_store,
gradient_store=gradient_store,
weather_manager=weather_manager,
) )
) )
def _save_all_stores() -> None:
"""Shutdown hook — SQLite stores use write-through caching, so this is a no-op.
Every create/update/delete already goes to the database immediately.
Kept for backward compatibility with server_ref.py which calls this.
"""
logger.info("Shutdown: all stores already persisted (write-through cache)")
@asynccontextmanager @asynccontextmanager
async def lifespan(app: FastAPI): async def lifespan(app: FastAPI):
"""Application lifespan manager. """Application lifespan manager.
@@ -98,24 +121,18 @@ async def lifespan(app: FastAPI):
logger.info(f"Starting LED Grab v{__version__}") logger.info(f"Starting LED Grab v{__version__}")
logger.info(f"Python version: {sys.version}") logger.info(f"Python version: {sys.version}")
logger.info(f"Server listening on {config.server.host}:{config.server.port}") logger.info(f"Server listening on {config.server.host}:{config.server.port}")
print("\n =============================================")
print(f" LED Grab v{__version__}")
print(f" Open http://localhost:{config.server.port} in your browser")
print(" =============================================\n")
# Validate authentication configuration # Log authentication mode
if not config.auth.api_keys: if not config.auth.api_keys:
logger.error("=" * 70) logger.info("Authentication disabled (no API keys configured)")
logger.error("CRITICAL: No API keys configured!") else:
logger.error("Authentication is REQUIRED for all API requests.") logger.info(f"Authentication enabled ({len(config.auth.api_keys)} API key(s) configured)")
logger.error("Please add API keys to your configuration:") client_labels = ", ".join(config.auth.api_keys.keys())
logger.error(" 1. Generate keys: openssl rand -hex 32") logger.info(f"Authorized clients: {client_labels}")
logger.error(" 2. Add to config/default_config.yaml under auth.api_keys")
logger.error(" 3. Format: label: \"your-generated-key\"")
logger.error("=" * 70)
raise RuntimeError("No API keys configured - server cannot start without authentication")
# Log authentication status
logger.info(f"API Authentication: ENFORCED ({len(config.auth.api_keys)} clients configured)")
client_labels = ", ".join(config.auth.api_keys.keys())
logger.info(f"Authorized clients: {client_labels}")
logger.info("All API requests require valid Bearer token authentication")
# Create MQTT service (shared broker connection) # Create MQTT service (shared broker connection)
mqtt_service = MQTTService(config.mqtt) mqtt_service = MQTTService(config.mqtt)
@@ -130,19 +147,30 @@ async def lifespan(app: FastAPI):
device_store=device_store, device_store=device_store,
) )
# Create auto-backup engine — derive paths from storage config so that # Create auto-backup engine — derive paths from database location so that
# demo mode auto-backups go to data/demo/ instead of data/. # demo mode auto-backups go to data/demo/ instead of data/.
_data_dir = Path(config.storage.devices_file).parent _data_dir = Path(config.storage.database_file).parent
auto_backup_engine = AutoBackupEngine( auto_backup_engine = AutoBackupEngine(
settings_path=_data_dir / "auto_backup_settings.json",
backup_dir=_data_dir / "backups", backup_dir=_data_dir / "backups",
store_map=STORE_MAP, db=db,
storage_config=config.storage, )
# Create update service (checks for new releases)
_release_provider = GiteaReleaseProvider(
base_url="https://git.dolgolyov-family.by",
repo="alexei.dolgolyov/wled-screen-controller-mixed",
)
update_service = UpdateService(
provider=_release_provider,
db=db,
fire_event=processor_manager.fire_event,
update_dir=_data_dir / "updates",
) )
# Initialize API dependencies # Initialize API dependencies
init_dependencies( init_dependencies(
device_store, template_store, processor_manager, device_store, template_store, processor_manager,
database=db,
pp_template_store=pp_template_store, pp_template_store=pp_template_store,
pattern_template_store=pattern_template_store, pattern_template_store=pattern_template_store,
picture_source_store=picture_source_store, picture_source_store=picture_source_store,
@@ -158,6 +186,10 @@ async def lifespan(app: FastAPI):
sync_clock_store=sync_clock_store, sync_clock_store=sync_clock_store,
sync_clock_manager=sync_clock_manager, sync_clock_manager=sync_clock_manager,
cspt_store=cspt_store, cspt_store=cspt_store,
gradient_store=gradient_store,
weather_source_store=weather_source_store,
weather_manager=weather_manager,
update_service=update_service,
) )
# Register devices in processor manager for health monitoring # Register devices in processor manager for health monitoring
@@ -204,6 +236,9 @@ async def lifespan(app: FastAPI):
# Start auto-backup engine (periodic configuration backups)
await auto_backup_engine.start()
# Start update checker (periodic release polling)
await update_service.start()
# Start OS notification listener (Windows toast → notification CSS streams)
os_notif_listener = OsNotificationListener(
color_strip_store=color_strip_store,
@@ -216,6 +251,23 @@ async def lifespan(app: FastAPI):
# Shutdown
logger.info("Shutting down LED Grab")
# Persist all stores to disk before stopping anything.
# This ensures in-memory data survives force-kills and restarts
# where no CRUD happened during the session.
_save_all_stores()
# Stop weather manager
try:
weather_manager.shutdown()
except Exception as e:
logger.error(f"Error stopping weather manager: {e}")
# Stop update checker
try:
await update_service.stop()
except Exception as e:
logger.error(f"Error stopping update checker: {e}")
# Stop auto-backup engine
try:
await auto_backup_engine.stop()
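Each service above is stopped in its own try/except so that one failing service never aborts the rest of the teardown. As a hedged illustration, that pattern can be factored into a generic helper — the helper name and signature are ours, not from the codebase:

```python
import logging
from typing import Callable, Iterable, List, Tuple

logger = logging.getLogger(__name__)


def stop_all(steps: Iterable[Tuple[str, Callable[[], None]]]) -> List[str]:
    """Run each stop callback in order, logging failures instead of aborting.

    Returns the names of the steps that raised, so callers can report them.
    """
    failed: List[str] = []
    for name, stop in steps:
        try:
            stop()
        except Exception as e:  # best-effort: teardown must keep going
            logger.error(f"Error stopping {name}: {e}")
            failed.append(name)
    return failed
```

With such a helper, the shutdown block reduces to one `stop_all([...])` call over `(name, callback)` pairs.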


@@ -0,0 +1,59 @@
"""Module-level holder for the uvicorn Server and TrayManager references.
Allows the shutdown API endpoint to trigger graceful shutdown via
``server.should_exit = True`` + ``tray.stop()``, which is the same
mechanism the system tray "Shutdown" menu item uses.
"""
from typing import Any, Optional
_server: Optional[Any] = None # uvicorn.Server
_tray: Optional[Any] = None # TrayManager
def set_server(server: Any) -> None:
"""Store the uvicorn Server instance (called from __main__)."""
global _server
_server = server
def set_tray(tray: Any) -> None:
"""Store the TrayManager instance (called from __main__)."""
global _tray
_tray = tray
def request_shutdown() -> None:
"""Signal uvicorn + tray to perform a graceful shutdown.
Broadcasts a ``server_restarting`` event so the frontend can show
a restart indicator, persists all stores to disk, then sets
``should_exit = True`` on the uvicorn Server and stops the tray.
"""
# Notify connected clients that a restart is in progress
_broadcast_restarting()
# Persist stores before signaling shutdown.
# The lifespan shutdown handler also saves, but it may not run
# reliably when uvicorn is in a daemon thread.
try:
from wled_controller.main import _save_all_stores
_save_all_stores()
except Exception:
pass # best-effort; lifespan handler is the backup
if _server is not None:
_server.should_exit = True
if _tray is not None:
_tray.stop()
def _broadcast_restarting() -> None:
"""Push a server_restarting event to all connected WebSocket clients."""
try:
from wled_controller.api.dependencies import _deps
pm = _deps.get("processor_manager")
if pm is not None:
pm.fire_event({"type": "server_restarting"})
except Exception:
pass
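The `should_exit` handshake this module relies on can be reproduced in a self-contained sketch; `DummyServer` and `DummyTray` below are stand-ins for uvicorn's `Server` and the real `TrayManager`, not the actual classes:

```python
from typing import Any, Optional


class DummyServer:
    # Stand-in for uvicorn.Server: its serve loop polls should_exit
    def __init__(self) -> None:
        self.should_exit = False


class DummyTray:
    # Stand-in for TrayManager with a stop() method
    def __init__(self) -> None:
        self.stopped = False

    def stop(self) -> None:
        self.stopped = True


_server: Optional[Any] = None
_tray: Optional[Any] = None


def set_server(server: Any) -> None:
    global _server
    _server = server


def set_tray(tray: Any) -> None:
    global _tray
    _tray = tray


def request_shutdown() -> None:
    # Flip the flag uvicorn's serve loop watches, then stop the tray icon
    if _server is not None:
        _server.should_exit = True
    if _tray is not None:
        _tray.stop()
```

Setting `should_exit = True` lets uvicorn finish in-flight requests and run lifespan shutdown, rather than killing the process outright.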


@@ -14,4 +14,5 @@
@import './tree-nav.css';
@import './tutorials.css';
@import './graph-editor.css';
@import './appearance.css';
@import './mobile.css';


@@ -0,0 +1,355 @@
/* ── Appearance tab: preset cards & background effects ── */
/* Use --font-body / --font-heading CSS variables for preset font switching */
body {
font-family: var(--font-body, 'DM Sans', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif);
}
h1 {
font-family: var(--font-heading, 'Orbitron', sans-serif);
}
/* ─── Preset grid ─── */
.ap-hint {
display: block;
font-size: 0.8rem;
color: var(--text-muted);
margin-bottom: 0.75rem;
}
.ap-grid {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 10px;
}
/* ─── Preset card (shared) ─── */
.ap-card {
position: relative;
display: flex;
flex-direction: column;
align-items: center;
gap: 6px;
padding: 6px;
border: 2px solid var(--border-color);
border-radius: var(--radius-md);
background: var(--card-bg);
cursor: pointer;
transition: border-color var(--duration-normal) var(--ease-out),
box-shadow var(--duration-normal) var(--ease-out),
transform var(--duration-fast) var(--ease-out);
}
.ap-card:hover {
border-color: var(--text-muted);
transform: translateY(-1px);
}
.ap-card.active {
border-color: var(--primary-color);
box-shadow: 0 0 0 1px var(--primary-color),
0 0 12px -2px color-mix(in srgb, var(--primary-color) 40%, transparent);
}
.ap-card.active::after {
content: '\2713';
position: absolute;
top: 4px;
right: 6px;
font-size: 0.65rem;
font-weight: 700;
color: var(--primary-color);
}
.ap-card-label {
font-size: 0.72rem;
font-weight: 600;
color: var(--text-secondary);
text-align: center;
line-height: 1.2;
}
.ap-card.active .ap-card-label {
color: var(--primary-color);
}
/* ─── Style preset preview ─── */
.ap-card-preview {
width: 100%;
aspect-ratio: 4 / 3;
border-radius: var(--radius-sm);
border: 1px solid;
padding: 8px 7px 6px;
display: flex;
flex-direction: column;
gap: 4px;
overflow: hidden;
}
.ap-card-accent {
width: 24px;
height: 4px;
border-radius: 2px;
margin-bottom: 2px;
}
.ap-card-lines {
display: flex;
flex-direction: column;
gap: 3px;
}
.ap-card-lines span {
display: block;
height: 2px;
border-radius: 1px;
width: 100%;
}
/* ─── Background effect preview ─── */
.ap-bg-preview {
width: 100%;
aspect-ratio: 4 / 3;
border-radius: var(--radius-sm);
overflow: hidden;
position: relative;
background: var(--bg-color);
border: 1px solid var(--border-color);
}
.ap-bg-preview-inner {
position: absolute;
inset: 0;
}
/* Mini previews for each effect type */
[data-effect="none"] .ap-bg-preview-inner {
background: var(--bg-color);
}
[data-effect="noise"] .ap-bg-preview-inner {
background: radial-gradient(ellipse at 30% 50%,
color-mix(in srgb, var(--primary-color) 20%, var(--bg-color)) 0%,
var(--bg-color) 70%);
animation: ap-noise-shimmer 4s ease-in-out infinite alternate;
}
@keyframes ap-noise-shimmer {
from { opacity: 0.7; }
to { opacity: 1; }
}
/* ── Aurora preview: horizontal gradient bands ── */
[data-effect="aurora"] .ap-bg-preview-inner {
background:
linear-gradient(180deg,
transparent 0%,
color-mix(in srgb, var(--primary-color) 18%, transparent) 25%,
transparent 50%,
color-mix(in srgb, var(--primary-color) 12%, transparent) 70%,
transparent 100%);
animation: ap-aurora-sway 6s ease-in-out infinite alternate;
}
@keyframes ap-aurora-sway {
from { transform: translateX(-5%) scaleY(1); opacity: 0.8; }
to { transform: translateX(5%) scaleY(1.15); opacity: 1; }
}
/* ── Plasma preview: color blobs ── */
[data-effect="plasma"] .ap-bg-preview-inner {
background:
radial-gradient(circle at 30% 40%,
color-mix(in srgb, var(--primary-color) 20%, transparent) 0%, transparent 50%),
radial-gradient(circle at 70% 60%,
color-mix(in srgb, var(--primary-color) 15%, transparent) 0%, transparent 50%),
radial-gradient(circle at 50% 20%,
color-mix(in srgb, var(--primary-color) 12%, transparent) 0%, transparent 40%);
animation: ap-plasma-cycle 5s ease-in-out infinite alternate;
}
@keyframes ap-plasma-cycle {
from { transform: scale(1) rotate(0deg); }
to { transform: scale(1.1) rotate(3deg); }
}
/* ── Digital Rain preview: vertical lines ── */
[data-effect="rain"] .ap-bg-preview-inner {
background:
linear-gradient(180deg, var(--primary-color) 0%, transparent 60%) 10% 0 / 1px 70% no-repeat,
linear-gradient(180deg, var(--primary-color) 0%, transparent 50%) 25% 20% / 1px 50% no-repeat,
linear-gradient(180deg, var(--primary-color) 0%, transparent 60%) 40% 10% / 1px 60% no-repeat,
linear-gradient(180deg, var(--primary-color) 0%, transparent 40%) 55% 30% / 1px 40% no-repeat,
linear-gradient(180deg, var(--primary-color) 0%, transparent 55%) 70% 5% / 1px 55% no-repeat,
linear-gradient(180deg, var(--primary-color) 0%, transparent 50%) 85% 15% / 1px 50% no-repeat;
opacity: 0.4;
animation: ap-rain-fall 3s linear infinite;
}
@keyframes ap-rain-fall {
from { transform: translateY(-20%); }
to { transform: translateY(20%); }
}
/* ── Stars preview: scattered dots ── */
[data-effect="stars"] .ap-bg-preview-inner {
background:
radial-gradient(circle 1px at 15% 20%, #fff 0%, transparent 100%),
radial-gradient(circle 1.5px at 40% 60%, var(--primary-color) 0%, transparent 100%),
radial-gradient(circle 1px at 65% 30%, #fff 0%, transparent 100%),
radial-gradient(circle 0.8px at 80% 70%, #fff 0%, transparent 100%),
radial-gradient(circle 1.2px at 30% 85%, var(--primary-color) 0%, transparent 100%),
radial-gradient(circle 0.7px at 55% 15%, #fff 0%, transparent 100%),
radial-gradient(circle 1px at 90% 45%, #fff 0%, transparent 100%),
radial-gradient(circle 0.9px at 10% 55%, #fff 0%, transparent 100%);
opacity: 0.7;
animation: ap-stars-twinkle 3s ease-in-out infinite alternate;
}
@keyframes ap-stars-twinkle {
from { opacity: 0.5; }
to { opacity: 0.8; }
}
/* ── Warp preview: radial tunnel ── */
[data-effect="warp"] .ap-bg-preview-inner {
background: radial-gradient(circle at 50% 50%,
color-mix(in srgb, var(--primary-color) 20%, transparent) 0%,
transparent 30%,
color-mix(in srgb, var(--primary-color) 10%, transparent) 50%,
transparent 70%,
color-mix(in srgb, var(--primary-color) 6%, transparent) 90%);
animation: ap-warp-pulse 4s ease-in-out infinite;
}
@keyframes ap-warp-pulse {
0%, 100% { transform: scale(1); opacity: 0.7; }
50% { transform: scale(1.2); opacity: 1; }
}
[data-effect="grid"] .ap-bg-preview-inner {
background-image:
radial-gradient(circle, var(--text-muted) 0.5px, transparent 0.5px);
background-size: 8px 8px;
opacity: 0.5;
}
[data-effect="mesh"] .ap-bg-preview-inner {
background:
radial-gradient(ellipse at 20% 30%,
color-mix(in srgb, var(--primary-color) 18%, transparent) 0%, transparent 60%),
radial-gradient(ellipse at 80% 70%,
color-mix(in srgb, var(--primary-color) 12%, transparent) 0%, transparent 60%);
}
[data-effect="scanlines"] .ap-bg-preview-inner {
background: repeating-linear-gradient(
0deg,
transparent 0px,
transparent 2px,
color-mix(in srgb, var(--text-muted) 10%, transparent) 2px,
color-mix(in srgb, var(--text-muted) 10%, transparent) 3px
);
}
/* ═══ Full-page background effects ═══
Uses a dedicated <div id="bg-effect-layer"> (same pattern as the WebGL canvas).
The active effect class (e.g. .bg-effect-grid) is set directly on the div.
Shader effects use <canvas id="bg-effect-canvas"> instead. */
/* When a CSS/shader bg effect is active, make body transparent so the layer shows
(mirrors [data-bg-anim="on"] body { background: transparent } in base.css) */
[data-bg-effect] body {
background: transparent;
}
[data-bg-effect] header {
background: transparent;
}
[data-bg-effect] header::after {
content: '';
position: absolute;
inset: 0;
z-index: -1;
backdrop-filter: blur(12px);
-webkit-backdrop-filter: blur(12px);
background: color-mix(in srgb, var(--bg-color) 60%, transparent);
}
/* Card translucency for bg effects (match existing bg-anim behaviour) */
[data-bg-effect][data-theme="dark"] .card,
[data-bg-effect][data-theme="dark"] .template-card,
[data-bg-effect][data-theme="dark"] .add-device-card,
[data-bg-effect][data-theme="dark"] .dashboard-target {
background: rgba(45, 45, 45, 0.92);
}
[data-bg-effect][data-theme="light"] .card,
[data-bg-effect][data-theme="light"] .template-card,
[data-bg-effect][data-theme="light"] .add-device-card,
[data-bg-effect][data-theme="light"] .dashboard-target {
background: rgba(255, 255, 255, 0.88);
}
#bg-effect-layer {
display: none;
position: fixed;
inset: 0;
z-index: -1;
pointer-events: none;
}
#bg-effect-layer.bg-effect-grid,
#bg-effect-layer.bg-effect-mesh,
#bg-effect-layer.bg-effect-scanlines {
display: block;
}
/* ── Grid: dot matrix ── */
#bg-effect-layer.bg-effect-grid {
background-image:
radial-gradient(circle 1.5px, var(--text-color) 0%, transparent 100%);
background-size: 24px 24px;
opacity: 0.18;
}
/* ── Gradient mesh: ambient blobs ── */
#bg-effect-layer.bg-effect-mesh {
background:
radial-gradient(ellipse 600px 400px at 15% 20%,
color-mix(in srgb, var(--primary-color) 25%, transparent) 0%, transparent 100%),
radial-gradient(ellipse 500px 500px at 85% 80%,
color-mix(in srgb, var(--primary-color) 18%, transparent) 0%, transparent 100%),
radial-gradient(ellipse 400px 300px at 60% 40%,
color-mix(in srgb, var(--primary-color) 14%, transparent) 0%, transparent 100%);
animation: bg-mesh-drift 20s ease-in-out infinite alternate;
}
@keyframes bg-mesh-drift {
0% { transform: translate(0, 0) scale(1); }
50% { transform: translate(-20px, 15px) scale(1.05); }
100% { transform: translate(10px, -10px) scale(0.98); }
}
/* ── Scanlines: retro CRT ── */
#bg-effect-layer.bg-effect-scanlines {
background: repeating-linear-gradient(
0deg,
transparent 0px,
transparent 3px,
color-mix(in srgb, var(--text-muted) 8%, transparent) 3px,
color-mix(in srgb, var(--text-muted) 8%, transparent) 4px
);
}
/* ─── Mobile: 2-column grid ─── */
@media (max-width: 480px) {
.ap-grid {
grid-template-columns: repeat(2, 1fr);
}
}


@@ -115,6 +115,7 @@ body {
html {
background: var(--bg-color);
overflow-x: hidden;
overflow-y: scroll;
scroll-behavior: smooth;
scrollbar-gutter: stable;
@@ -132,7 +133,8 @@ html.modal-open {
}
/* ── Ambient animated background ── */
#bg-anim-canvas,
#bg-effect-canvas {
display: none;
position: fixed;
inset: 0;


@@ -480,13 +480,11 @@ body.cs-drag-active .card-drag-handle {
.card-title {
font-size: 1.05rem;
font-weight: 600;
overflow: hidden;
white-space: nowrap;
text-overflow: ellipsis;
min-width: 0;
display: flex;
align-items: center;
gap: 8px;
overflow: hidden;
}
.card-title-text {
@@ -500,6 +498,7 @@ body.cs-drag-active .card-drag-handle {
color: var(--primary-text-color);
vertical-align: middle;
margin-right: 6px;
flex-shrink: 0;
}
.device-url-badge {


@@ -1024,3 +1024,68 @@ textarea:focus-visible {
width: 18px; width: 18px;
height: 18px; height: 18px;
} }
/* ── Schedule time picker (value sources) ── */
.schedule-row {
display: flex;
align-items: center;
gap: 8px;
margin-bottom: 6px;
}
.schedule-time-wrap {
display: flex;
align-items: center;
gap: 2px;
background: var(--bg-color);
border: 1px solid var(--border-color);
border-radius: var(--radius-sm);
padding: 2px 6px;
transition: border-color var(--duration-fast) ease;
}
.schedule-time-wrap:focus-within {
border-color: var(--primary-color);
box-shadow: 0 0 0 2px rgba(76, 175, 80, 0.15);
}
.schedule-time-wrap input[type="number"] {
width: 2.4ch;
text-align: center;
font-size: 1.1rem;
font-weight: 600;
font-variant-numeric: tabular-nums;
font-family: inherit;
background: var(--bg-secondary);
border: 1px solid transparent;
border-radius: var(--radius-sm);
color: var(--text-color);
padding: 4px 2px;
-moz-appearance: textfield;
transition: border-color var(--duration-fast) ease,
background var(--duration-fast) ease;
}
.schedule-time-wrap input[type="number"]::-webkit-inner-spin-button,
.schedule-time-wrap input[type="number"]::-webkit-outer-spin-button {
-webkit-appearance: none;
margin: 0;
}
.schedule-time-wrap input[type="number"]:focus {
outline: none;
border-color: var(--primary-color);
background: color-mix(in srgb, var(--primary-color) 8%, var(--bg-secondary));
}
.schedule-time-colon {
font-size: 1.1rem;
font-weight: 700;
color: var(--text-muted);
line-height: 1;
padding: 0 1px;
user-select: none;
}
.schedule-value {
flex: 1;
min-width: 60px;
}
.schedule-value-display {
min-width: 2.5ch;
text-align: right;
font-variant-numeric: tabular-nums;
}


@@ -119,6 +119,11 @@
color: var(--primary-text-color);
}
.dashboard-target-info > div {
min-width: 0;
overflow: hidden;
}
.dashboard-target-name {
font-size: 0.85rem;
font-weight: 600;
@@ -130,6 +135,11 @@
gap: 4px;
}
.dashboard-target-name-text {
overflow: hidden;
text-overflow: ellipsis;
}
.dashboard-card-link {
cursor: pointer;
}


@@ -109,6 +109,58 @@ h2 {
padding: 2px 8px;
border-radius: 10px;
letter-spacing: 0.03em;
transition: background 0.3s, color 0.3s, box-shadow 0.3s;
}
#server-version.has-update {
background: var(--warning-color);
color: #fff;
cursor: pointer;
animation: updatePulse 2s ease-in-out infinite;
}
@keyframes updatePulse {
0%, 100% { box-shadow: 0 0 0 0 rgba(255, 152, 0, 0.4); }
50% { box-shadow: 0 0 0 4px rgba(255, 152, 0, 0); }
}
/* ── Update banner ── */
.update-banner {
display: flex;
align-items: center;
justify-content: center;
gap: 10px;
padding: 6px 16px;
background: var(--bg-secondary);
border-bottom: 2px solid var(--primary-color);
color: var(--text-color);
font-size: 0.85rem;
font-weight: 600;
animation: bannerSlideDown 0.3s var(--ease-out);
}
.update-banner-text {
color: var(--primary-color);
}
.update-banner-action {
padding: 4px;
background: transparent;
border: none;
color: var(--text-secondary);
cursor: pointer;
border-radius: var(--radius-sm);
transition: color 0.15s, background 0.15s;
}
.update-banner-action:hover {
color: var(--primary-color);
background: var(--border-color);
}
@keyframes bannerSlideDown {
from { transform: translateY(-100%); opacity: 0; }
to { transform: translateY(0); opacity: 1; }
} }
.status-badge { .status-badge {


@@ -580,6 +580,10 @@
margin: 0;
font-size: 1.5rem;
color: var(--text-color);
overflow: hidden;
white-space: nowrap;
text-overflow: ellipsis;
min-width: 0;
}
.modal-header-actions {
@@ -1214,7 +1218,8 @@
user-select: none;
}
#gradient-canvas,
#ge-gradient-canvas {
width: 100%;
height: 44px;
display: block;
@@ -1531,6 +1536,78 @@
background: var(--card-bg);
}
.composite-layer-header {
display: flex;
align-items: center;
gap: 6px;
cursor: pointer;
user-select: none;
}
.composite-layer-expand-btn {
font-size: 0.6rem;
color: var(--text-secondary);
transition: transform 0.15s ease;
flex-shrink: 0;
width: 12px;
text-align: center;
}
.composite-layer-expanded .composite-layer-expand-btn {
transform: rotate(90deg);
}
.composite-layer-summary {
flex: 1;
min-width: 0;
display: flex;
align-items: center;
gap: 8px;
overflow: hidden;
}
.composite-layer-summary-name {
font-size: 0.85rem;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.composite-layer-summary-blend {
font-size: 0.75rem;
color: var(--text-secondary);
background: var(--bg-secondary);
padding: 1px 6px;
border-radius: 3px;
white-space: nowrap;
flex-shrink: 0;
}
.composite-layer-body-wrapper {
display: grid;
grid-template-rows: 0fr;
transition: grid-template-rows 0.2s ease;
}
.composite-layer-expanded .composite-layer-body-wrapper {
grid-template-rows: 1fr;
}
.composite-layer-body {
display: flex;
flex-direction: column;
gap: 4px;
padding-top: 0;
overflow: hidden;
min-height: 0;
transition: padding-top 0.2s ease;
font-size: 0.85rem;
}
.composite-layer-expanded .composite-layer-body {
padding-top: 4px;
}
.composite-layer-row {
display: flex;
align-items: center;
@@ -1545,7 +1622,6 @@
.composite-layer-brightness-label {
flex-shrink: 0;
width: 90px;
font-size: 0.8rem;
color: var(--text-secondary);
}
@@ -1561,7 +1637,6 @@
}
.composite-layer-opacity-label {
font-size: 0.8rem;
white-space: nowrap;
flex-shrink: 0;
}
@@ -1576,11 +1651,66 @@
}
.composite-layer-remove-btn {
background: none;
border: none;
color: var(--text-muted);
font-size: 0.85rem;
width: 26px;
height: 26px;
flex: 0 0 26px;
display: flex;
align-items: center;
justify-content: center;
cursor: pointer;
border-radius: 4px;
transition: color 0.2s, background 0.2s;
}
.composite-layer-remove-btn:hover {
color: var(--danger-color);
background: color-mix(in srgb, var(--danger-color) 10%, transparent);
}
.composite-layer-range-toggle-label {
display: flex;
align-items: center;
gap: 6px;
flex-shrink: 0;
color: var(--text-secondary);
cursor: pointer;
white-space: nowrap;
}
.composite-layer-range-fields {
display: flex;
align-items: center;
gap: 6px;
flex: 1;
min-width: 0;
}
.composite-layer-range-fields label {
color: var(--text-secondary);
flex-shrink: 0;
}
.composite-layer-range-fields input[type="number"] {
width: 60px;
flex-shrink: 0;
}
.composite-layer-range-disabled {
opacity: 0.35;
pointer-events: none;
}
.composite-layer-reverse-label {
display: flex;
align-items: center;
gap: 4px;
margin-left: auto;
white-space: nowrap;
color: var(--text-secondary);
}
/* ── Composite layer drag-to-reorder ── */
@@ -1608,6 +1738,31 @@
background: var(--border-color);
}
/* ── Weather source location row ── */
.weather-location-row {
display: flex;
align-items: center;
gap: 8px;
flex-wrap: wrap;
}
.weather-location-field {
display: flex;
align-items: center;
gap: 4px;
}
.weather-location-field label {
font-size: 0.85rem;
color: var(--text-secondary);
flex-shrink: 0;
}
.weather-location-field input[type="number"] {
width: 80px;
}
.composite-layer-drag-handle:active {
cursor: grabbing;
}


@@ -67,6 +67,8 @@
justify-content: space-between;
align-items: center;
margin-bottom: 12px;
overflow: hidden;
min-width: 0;
} }
.template-card .template-card-header {
@@ -81,6 +83,7 @@
white-space: nowrap;
text-overflow: ellipsis;
min-width: 0;
max-width: 100%;
}
.template-name > .icon {


@@ -3,7 +3,7 @@
*/
// Layer 0: state
import { apiKey, setApiKey, authRequired, refreshInterval } from './core/state.ts';
import { Modal } from './core/modal.ts';
import { queryEl } from './core/dom-utils.ts';
@@ -14,6 +14,7 @@ import { t, initLocale, changeLocale } from './core/i18n.ts';
// Layer 1.5: visual effects
import { initCardGlare } from './core/card-glare.ts';
import { initBgAnim, updateBgAnimAccent, updateBgAnimTheme } from './core/bg-anim.ts';
import { initBgShaders } from './core/bg-shaders.ts';
import { initTabIndicator, updateTabIndicator } from './core/tab-indicator.ts';
// Layer 2: ui
@@ -117,16 +118,20 @@ import {
// Layer 5: color-strip sources
import {
showCSSEditor, closeCSSEditorModal, forceCSSEditorClose, saveCSSEditor, deleteColorStrip,
onCSSTypeChange, onEffectTypeChange, onEffectPaletteChange, onAnimationTypeChange, onCSSClockChange, onDaylightRealTimeChange,
colorCycleAddColor, colorCycleRemoveColor,
compositeAddLayer, compositeRemoveLayer,
mappedAddZone, mappedRemoveZone,
onAudioVizChange,
applyGradientPreset,
onGradientPresetChange,
promptAndSaveGradientPreset,
applyCustomGradientPreset,
deleteAndRefreshGradientPreset,
showGradientModal,
closeGradientEditor,
saveGradientEntity,
cloneGradient,
editGradient,
deleteGradient,
cloneColorStrip,
toggleCSSOverlay,
previewCSSFromEditor,
@@ -180,18 +185,25 @@ import {
import { switchTab, initTabs, startAutoRefresh, handlePopState } from './features/tabs.ts';
import { navigateToCard } from './core/navigation.ts';
import { openCommandPalette, closeCommandPalette, initCommandPalette } from './core/command-palette.ts';
import {
applyStylePreset, applyBgEffect, renderAppearanceTab, initAppearance,
} from './features/appearance.ts';
import {
openSettingsModal, closeSettingsModal, switchSettingsTab,
downloadBackup, handleRestoreFileSelected,
saveAutoBackupSettings, triggerBackupNow, restoreSavedBackup, downloadSavedBackup, deleteSavedBackup,
restartServer, saveMqttSettings,
loadApiKeysList,
downloadPartialExport, handlePartialImportFileSelected,
connectLogViewer, disconnectLogViewer, clearLogViewer, applyLogFilter,
openLogOverlay, closeLogOverlay,
loadLogLevel, setLogLevel,
saveExternalUrl, getBaseOrigin, loadExternalUrl,
} from './features/settings.ts';
import {
loadUpdateStatus, initUpdateListener, checkForUpdates,
loadUpdateSettings, saveUpdateSettings, dismissUpdate,
initUpdateSettingsPanel, applyUpdate,
} from './features/update.ts';
// ─── Register all HTML onclick / onchange / onfocus globals ───
@@ -426,6 +438,7 @@ Object.assign(window, {
deleteColorStrip,
onCSSTypeChange,
onEffectTypeChange,
onEffectPaletteChange,
onCSSClockChange,
onAnimationTypeChange,
onDaylightRealTimeChange,
@@ -436,11 +449,15 @@ Object.assign(window, {
mappedAddZone,
mappedRemoveZone,
onAudioVizChange,
applyGradientPreset,
onGradientPresetChange,
promptAndSaveGradientPreset,
applyCustomGradientPreset,
deleteAndRefreshGradientPreset,
showGradientModal,
closeGradientEditor,
saveGradientEntity,
cloneGradient,
editGradient,
deleteGradient,
cloneColorStrip,
toggleCSSOverlay,
previewCSSFromEditor,
@@ -523,21 +540,20 @@ Object.assign(window, {
openCommandPalette,
closeCommandPalette,
// settings (tabs / backup / restore / auto-backup / MQTT / api keys / log level)
openSettingsModal,
closeSettingsModal,
switchSettingsTab,
downloadBackup,
handleRestoreFileSelected,
saveAutoBackupSettings,
triggerBackupNow,
restoreSavedBackup,
downloadSavedBackup,
deleteSavedBackup,
restartServer,
saveMqttSettings,
loadApiKeysList,
downloadPartialExport,
handlePartialImportFileSelected,
connectLogViewer,
disconnectLogViewer,
clearLogViewer,
@@ -548,6 +564,19 @@ Object.assign(window, {
setLogLevel,
saveExternalUrl,
getBaseOrigin,
// update
checkForUpdates,
loadUpdateSettings,
saveUpdateSettings,
dismissUpdate,
initUpdateSettingsPanel,
applyUpdate,
// appearance
applyStylePreset,
applyBgEffect,
renderAppearanceTab,
});
// ─── Global keyboard shortcuts ───
@@ -626,6 +655,8 @@ document.addEventListener('DOMContentLoaded', async () => {
// Initialize visual effects
initCardGlare();
initBgAnim();
initBgShaders();
initAppearance();
initTabIndicator();
updateBgAnimTheme(document.documentElement.getAttribute('data-theme') !== 'light');
const accent = localStorage.getItem('accentColor') || '#4CAF50';
@@ -659,11 +690,15 @@ document.addEventListener('DOMContentLoaded', async () => {
if (addDeviceForm) addDeviceForm.addEventListener('submit', handleAddDevice);
// Always monitor server connection (even before login)
await loadServerInfo();
startConnectionMonitor();
// Expose auth state for inline scripts (after loadServerInfo sets it)
(window as any)._authRequired = authRequired;
if (typeof window.updateAuthUI === 'function') window.updateAuthUI();
// Show login modal only when auth is enabled and no API key is stored
if (authRequired && !apiKey) {
setTimeout(() => {
if (typeof window.showApiKeyModal === 'function') {
window.showApiKeyModal(null, true);
@@ -681,6 +716,10 @@ document.addEventListener('DOMContentLoaded', async () => {
startEntityEventListeners();
startAutoRefresh();
// Initialize update checker (banner + WS listener)
initUpdateListener();
loadUpdateStatus();
// Show getting-started tutorial on first visit
if (!localStorage.getItem('tour_completed')) {
setTimeout(() => startGettingStartedTutorial(), 600);


@@ -2,10 +2,11 @@
* API utilities — base URL, auth headers, fetch wrapper, helpers.
*/
import { apiKey, setApiKey, authRequired, setAuthRequired, refreshInterval, setRefreshInterval, displaysCache } from './state.ts';
import { t } from './i18n.ts';
import { showToast } from './ui.ts';
import { getEl, queryEl } from './dom-utils.ts';
import { serverRestarting, clearRestartingFlag } from './events-ws.ts';
export const API_BASE = '/api/v1';
@@ -137,6 +138,7 @@ export function isGameSenseDevice(type: string) {
}
export function handle401Error() {
if (!authRequired) return; // Auth disabled — ignore 401s
if (!apiKey) return; // Already handled or no session
localStorage.removeItem('wled_api_key');
setApiKey(null);
@@ -167,6 +169,30 @@ export function handle401Error() {
let _connCheckTimer: ReturnType<typeof setInterval> | null = null;
let _serverOnline: boolean | null = null; // null = unknown, true/false
/** Toggle which message block is visible inside the connection overlay. */
function _setOverlayMode(restarting: boolean) {
const msgOffline = document.getElementById('conn-msg-offline');
const msgRestarting = document.getElementById('conn-msg-restarting');
if (msgOffline) msgOffline.style.display = restarting ? 'none' : '';
if (msgRestarting) msgRestarting.style.display = restarting ? '' : 'none';
}
/**
* Show the restart overlay immediately (called when server_restarting
* event arrives via WebSocket, before the connection actually drops).
*/
export function showRestartingOverlay() {
_serverOnline = false;
const banner = document.getElementById('connection-overlay');
const badge = document.getElementById('server-status');
if (banner) {
(banner as HTMLElement).style.display = 'flex';
banner.setAttribute('aria-hidden', 'false');
}
_setOverlayMode(true);
if (badge) badge.className = 'status-badge offline';
}
function _setConnectionState(online: boolean) {
const changed = _serverOnline !== online;
_serverOnline = online;
@@ -175,8 +201,14 @@ function _setConnectionState(online: boolean) {
if (online) {
if (banner) { (banner as HTMLElement).style.display = 'none'; banner.setAttribute('aria-hidden', 'true'); }
if (badge) badge.className = 'status-badge online';
// Clear the restarting flag once the server is back
if (serverRestarting) clearRestartingFlag();
} else {
if (banner) {
(banner as HTMLElement).style.display = 'flex';
banner.setAttribute('aria-hidden', 'false');
}
_setOverlayMode(serverRestarting);
if (badge) badge.className = 'status-badge offline';
}
return changed;
@@ -200,6 +232,11 @@ export async function loadServerInfo() {
window.dispatchEvent(new CustomEvent('server:reconnected'));
}
// Auth mode detection
const authNeeded = data.auth_required !== false;
setAuthRequired(authNeeded);
(window as any)._authRequired = authNeeded;
// Demo mode detection
if (data.demo_mode && !demoMode) {
demoMode = true;
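The interesting change in this file is the auth-aware 401 handling: a 401 response only clears the stored key when auth is actually enabled and a key exists. A minimal sketch of that decision as a pure function — `on401` and `AuthState` are hypothetical names introduced here for illustration, not the app's real API:

```typescript
type AuthState = { authRequired: boolean; apiKey: string | null };

// Returns the next state; a null apiKey means "logged out".
function on401(state: AuthState): AuthState {
  if (!state.authRequired) return state; // auth disabled: ignore 401s
  if (!state.apiKey) return state;       // already handled / no session
  return { ...state, apiKey: null };     // drop the stale key
}
```

Modeling it this way makes the two early-return guards from the diff (`if (!authRequired) return;` and `if (!apiKey) return;`) directly testable in isolation.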


@@ -0,0 +1,491 @@
/**
* Shader-based background effects engine.
*
* Renders various full-screen WebGL shader effects on a shared canvas.
* Each effect is a fragment shader that receives common uniforms
* (time, resolution, accent color, theme brightness).
*
* The engine reuses the same <canvas id="bg-effect-canvas"> element
* and rebuilds the GL program when switching shaders.
*/
// ─── Shader library ──────────────────────────────────────────
const VERT = `
attribute vec2 a_pos;
void main() { gl_Position = vec4(a_pos, 0.0, 1.0); }
`;
/** Aurora — flowing northern-lights bands */
const FRAG_AURORA = `
precision mediump float;
uniform float u_time;
uniform vec2 u_res;
uniform vec3 u_accent;
uniform vec3 u_bg;
uniform float u_light;
vec3 mod289(vec3 x){return x-floor(x*(1./289.))*289.;}
vec2 mod289(vec2 x){return x-floor(x*(1./289.))*289.;}
vec3 permute(vec3 x){return mod289(((x*34.)+1.)*x);}
float snoise(vec2 v){
const vec4 C=vec4(.211324865,.366025404,-.577350269,.024390244);
vec2 i=floor(v+dot(v,C.yy));
vec2 x0=v-i+dot(i,C.xx);
vec2 i1=(x0.x>x0.y)?vec2(1,0):vec2(0,1);
vec4 x12=x0.xyxy+C.xxzz; x12.xy-=i1;
i=mod289(i);
vec3 p=permute(permute(i.y+vec3(0,i1.y,1))+i.x+vec3(0,i1.x,1));
vec3 m=max(.5-vec3(dot(x0,x0),dot(x12.xy,x12.xy),dot(x12.zw,x12.zw)),0.);
m=m*m; m=m*m;
vec3 x=2.*fract(p*C.www)-1.;
vec3 h=abs(x)-.5;
vec3 ox=floor(x+.5);
vec3 a0=x-ox;
m*=1.79284291-.85373472*(a0*a0+h*h);
vec3 g;
g.x=a0.x*x0.x+h.x*x0.y;
g.yz=a0.yz*x12.xz+h.yz*x12.yw;
return 130.*dot(m,g);
}
void main(){
vec2 uv=gl_FragCoord.xy/u_res;
float t=u_time*.06;
// Vertical bands that sway horizontally
float wave1=sin(uv.x*3.+t*1.2+snoise(vec2(uv.x*2.,t*.3))*.8)*.5+.5;
float wave2=sin(uv.x*5.-t*.9+snoise(vec2(uv.x*1.5,t*.2+2.))*.6)*.5+.5;
float wave3=sin(uv.x*2.+t*.7+snoise(vec2(uv.x*3.,t*.25+5.))*.7)*.5+.5;
// Fade toward the top of the screen
float yFade=smoothstep(0.,.7,uv.y)*smoothstep(1.,.4,uv.y);
// Combine bands
float band1=smoothstep(.35,.65,wave1)*yFade;
float band2=smoothstep(.4,.7,wave2)*yFade*.7;
float band3=smoothstep(.3,.6,wave3)*yFade*.5;
vec3 col1=u_accent;
vec3 col2=u_accent.gbr;
vec3 col3=mix(u_accent,vec3(1),0.3);
float intensity=band1*0.12+band2*0.08+band3*0.06;
vec3 color=band1*col1+band2*col2+band3*col3;
color=color/(band1+band2+band3+.001);
float boost=1.+u_light*.4;
vec3 result=mix(u_bg,u_bg+color*.6,intensity*boost);
gl_FragColor=vec4(result,1.);
}
`;
/** Plasma — classic color-cycling plasma */
const FRAG_PLASMA = `
precision mediump float;
uniform float u_time;
uniform vec2 u_res;
uniform vec3 u_accent;
uniform vec3 u_bg;
uniform float u_light;
void main(){
vec2 uv=gl_FragCoord.xy/u_res;
float aspect=u_res.x/u_res.y;
vec2 p=vec2(uv.x*aspect,uv.y);
float t=u_time*.15;
float v1=sin(p.x*4.+t);
float v2=sin(p.y*4.+t*.7);
float v3=sin((p.x+p.y)*3.+t*1.1);
float v4=sin(length(p-vec2(aspect*.5,.5))*6.-t*.8);
float v=(v1+v2+v3+v4)*.25; // -1..1
vec3 col1=u_accent;
vec3 col2=u_accent.brg;
vec3 col3=u_accent.gbr;
float m1=sin(v*3.14159)*.5+.5;
float m2=sin(v*3.14159+2.094)*.5+.5;
vec3 color=col1*(1.-m1)+col2*m1;
color=color*(1.-m2*.4)+col3*m2*.4;
float boost=1.+u_light*.3;
float intensity=smoothstep(-.2,.8,v)*.22*boost;
vec3 result=mix(u_bg,u_bg+color*.7,intensity);
gl_FragColor=vec4(result,1.);
}
`;
/** Digital Rain — matrix-style falling columns */
const FRAG_RAIN = `
precision mediump float;
uniform float u_time;
uniform vec2 u_res;
uniform vec3 u_accent;
uniform vec3 u_bg;
uniform float u_light;
float hash(vec2 p){
return fract(sin(dot(p,vec2(127.1,311.7)))*43758.5453);
}
void main(){
vec2 uv=gl_FragCoord.xy/u_res;
float aspect=u_res.x/u_res.y;
float t=u_time*.12;
// Many thin columns
float cols=80.*aspect;
float col_x=floor(uv.x*cols);
// Multiple drops per column at staggered phases
float totalGlow=0.;
for(int d=0;d<3;d++){
float seed=float(d)*37.;
float speed=.3+hash(vec2(col_x,seed))*1.0;
float phase=hash(vec2(col_x,seed+1.))*6.28;
float drop_y=1.-fract(t*speed+phase);
// Trail: fades downward from the drop head
float dist=drop_y-uv.y;
if(dist<0.) dist+=1.;
float trailLen=.15+hash(vec2(col_x,seed+2.))*.25;
float trail=smoothstep(trailLen,0.,dist);
trail*=trail;
// Cell flicker within the trail
float rows=80.;
float row_y=floor(uv.y*rows);
float flicker=step(.35,hash(vec2(col_x+seed,row_y+floor(t*4.))));
totalGlow+=trail*flicker*.12;
// Bright head
float headDist=abs(uv.y-drop_y);
if(headDist>.5) headDist=1.-headDist;
totalGlow+=smoothstep(.006,0.,headDist)*.15;
}
totalGlow=min(totalGlow,.35);
float boost=1.+u_light*.25;
vec3 result=mix(u_bg,u_bg+u_accent,totalGlow*boost);
gl_FragColor=vec4(result,1.);
}
`;
/** Starfield — parallax depth star field */
const FRAG_STARS = `
precision mediump float;
uniform float u_time;
uniform vec2 u_res;
uniform vec3 u_accent;
uniform vec3 u_bg;
uniform float u_light;
float hash(vec2 p){
return fract(sin(dot(p,vec2(127.1,311.7)))*43758.5453);
}
void main(){
vec2 uv=(gl_FragCoord.xy-.5*u_res)/u_res.y;
float t=u_time*.02;
vec3 result=u_bg;
// 4 layers of stars at different depths
for(int layer=0;layer<4;layer++){
float fl=float(layer);
float depth=1.+fl*.6;
float speed=.05*depth;
vec2 st=uv*20.*depth;
st.y+=t*speed*15.;
vec2 cell=floor(st);
vec2 f=fract(st);
// Check this cell and its 8 neighbours for stars
for(int dy=-1;dy<=1;dy++){
for(int dx=-1;dx<=1;dx++){
vec2 neighbor=vec2(float(dx),float(dy));
vec2 c=cell+neighbor;
float h=hash(c+fl*137.);
// Only ~40% of cells have a star
if(h>.6) continue;
vec2 starPos=vec2(hash(c*1.17+fl*53.),hash(c*2.31+fl*79.));
float d=length(f-neighbor-starPos);
float starSize=.04+h*.06;
float brightness=smoothstep(starSize,starSize*.1,d);
// Soft glow halo
float glow=smoothstep(starSize*4.,0.,d)*.3;
// Twinkle
float twinkle=sin(t*8.+h*43.)*.25+.75;
float total=(brightness+glow)*twinkle;
// Color: white or accent-tinted
vec3 starColor=mix(vec3(1.),u_accent,step(.3,h));
float boost=1.+u_light*.2;
result+=starColor*total*.12*boost/depth;
}
}
}
gl_FragColor=vec4(result,1.);
}
`;
/** Warp — smooth tunnel/vortex speed effect */
const FRAG_WARP = `
precision mediump float;
uniform float u_time;
uniform vec2 u_res;
uniform vec3 u_accent;
uniform vec3 u_bg;
uniform float u_light;
void main(){
vec2 uv=(gl_FragCoord.xy-.5*u_res)/u_res.y;
float t=u_time*.1;
float r=length(uv);
float a=atan(uv.y,uv.x);
// Logarithmic tunnel mapping for smooth depth
float tunnel=log(r+.001)*-2.;
// Twist increases with depth (further from center = more twist)
float twist=a/3.14159+tunnel*.15+t;
// Overlapping band patterns at different scales
float band1=sin(tunnel*4.-t*2.)*.5+.5;
float band2=sin(tunnel*7.+t*1.5)*.5+.5;
float rays=sin(twist*6.)*.5+.5;
float pattern=mix(band1,band2,.5)*rays;
// Smooth radial fade — no harsh edges
float fade=smoothstep(0.,.03,r)*smoothstep(.9,.15,r);
// Gentle center glow
float centerGlow=smoothstep(.3,0.,r)*.08;
vec3 col1=u_accent;
vec3 col2=u_accent.gbr;
vec3 color=mix(col1,col2,rays);
float intensity=pattern*fade*.15+centerGlow;
float boost=1.+u_light*.3;
vec3 result=mix(u_bg,u_bg+color*.6,intensity*boost);
gl_FragColor=vec4(result,1.);
}
`;
// ─── Shader registry ─────────────────────────────────────────
export type ShaderEffectId = 'aurora' | 'plasma' | 'rain' | 'stars' | 'warp';
const SHADER_MAP: Record<ShaderEffectId, string> = {
aurora: FRAG_AURORA,
plasma: FRAG_PLASMA,
rain: FRAG_RAIN,
stars: FRAG_STARS,
warp: FRAG_WARP,
};
// ─── Engine state ────────────────────────────────────────────
let _canvas: HTMLCanvasElement | null = null;
let _gl: WebGLRenderingContext | null = null;
let _prog: WebGLProgram | null = null;
let _raf: number | null = null;
let _startTime = 0;
let _activeShader: ShaderEffectId | null = null;
// Uniforms
let _uTime: WebGLUniformLocation | null = null;
let _uRes: WebGLUniformLocation | null = null;
let _uAccent: WebGLUniformLocation | null = null;
let _uBg: WebGLUniformLocation | null = null;
let _uLight: WebGLUniformLocation | null = null;
let _accent = [76 / 255, 175 / 255, 80 / 255];
let _bgColor = [26 / 255, 26 / 255, 26 / 255];
let _isLight = 0.0;
// ─── GL helpers ──────────────────────────────────────────────
function _compile(gl: WebGLRenderingContext, type: number, src: string): WebGLShader | null {
const s = gl.createShader(type);
if (!s) return null;
gl.shaderSource(s, src);
gl.compileShader(s);
if (!gl.getShaderParameter(s, gl.COMPILE_STATUS)) {
console.error('bg-shaders compile:', gl.getShaderInfoLog(s));
return null;
}
return s;
}
function _buildProgram(gl: WebGLRenderingContext, fragSrc: string): WebGLProgram | null {
const vs = _compile(gl, gl.VERTEX_SHADER, VERT);
const fs = _compile(gl, gl.FRAGMENT_SHADER, fragSrc);
if (!vs || !fs) return null;
const prog = gl.createProgram()!;
gl.attachShader(prog, vs);
gl.attachShader(prog, fs);
gl.linkProgram(prog);
if (!gl.getProgramParameter(prog, gl.LINK_STATUS)) {
console.error('bg-shaders link:', gl.getProgramInfoLog(prog));
return null;
}
// Clean up individual shaders
gl.deleteShader(vs);
gl.deleteShader(fs);
return prog;
}
function _ensureCanvas(): boolean {
if (_canvas) return true;
_canvas = document.getElementById('bg-effect-canvas') as HTMLCanvasElement | null;
return !!_canvas;
}
function _ensureGL(): boolean {
if (_gl) return true;
if (!_ensureCanvas()) return false;
_gl = _canvas!.getContext('webgl', { alpha: false, antialias: false, depth: false });
if (!_gl) return false;
// Full-screen quad (shared across all shaders)
const buf = _gl.createBuffer();
_gl.bindBuffer(_gl.ARRAY_BUFFER, buf);
_gl.bufferData(_gl.ARRAY_BUFFER, new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]), _gl.STATIC_DRAW);
return true;
}
function _switchProgram(shaderId: ShaderEffectId): boolean {
if (!_ensureGL()) return false;
const gl = _gl!;
// Delete old program
if (_prog) {
gl.deleteProgram(_prog);
_prog = null;
}
const fragSrc = SHADER_MAP[shaderId];
if (!fragSrc) return false;
_prog = _buildProgram(gl, fragSrc);
if (!_prog) return false;
gl.useProgram(_prog);
// Re-bind vertex attrib
const aPos = gl.getAttribLocation(_prog, 'a_pos');
gl.enableVertexAttribArray(aPos);
gl.vertexAttribPointer(aPos, 2, gl.FLOAT, false, 0, 0);
// Cache uniform locations
_uTime = gl.getUniformLocation(_prog, 'u_time');
_uRes = gl.getUniformLocation(_prog, 'u_res');
_uAccent = gl.getUniformLocation(_prog, 'u_accent');
_uBg = gl.getUniformLocation(_prog, 'u_bg');
_uLight = gl.getUniformLocation(_prog, 'u_light');
return true;
}
function _resize(): void {
if (!_canvas || !_gl) return;
const w = Math.round(window.innerWidth * 0.5);
const h = Math.round(window.innerHeight * 0.5);
_canvas.width = w;
_canvas.height = h;
_gl.viewport(0, 0, w, h);
}
function _draw(time: number): void {
_raf = requestAnimationFrame(_draw);
if (!_gl || !_prog) return;
const t = (time - _startTime) * 0.001;
_gl.uniform1f(_uTime, t);
_gl.uniform2f(_uRes, _canvas!.width, _canvas!.height);
_gl.uniform3f(_uAccent, _accent[0], _accent[1], _accent[2]);
_gl.uniform3f(_uBg, _bgColor[0], _bgColor[1], _bgColor[2]);
_gl.uniform1f(_uLight, _isLight);
_gl.drawArrays(_gl.TRIANGLE_STRIP, 0, 4);
}
// ─── Public API ──────────────────────────────────────────────
/** Start a shader effect. Stops any currently running shader. */
export function startShaderEffect(id: ShaderEffectId): void {
stopShaderEffect();
if (!_switchProgram(id)) return;
_activeShader = id;
_resize();
_startTime = performance.now();
_raf = requestAnimationFrame(_draw);
// Show canvas
if (_canvas) _canvas.style.display = 'block';
}
/** Stop the currently running shader effect. */
export function stopShaderEffect(): void {
if (_raf) {
cancelAnimationFrame(_raf);
_raf = null;
}
_activeShader = null;
if (_canvas) _canvas.style.display = 'none';
}
/** Update accent color (called when user changes accent). */
export function updateShaderAccent(hex: string): void {
_accent = [
parseInt(hex.slice(1, 3), 16) / 255,
parseInt(hex.slice(3, 5), 16) / 255,
parseInt(hex.slice(5, 7), 16) / 255,
];
}
/** Update theme brightness (called on theme toggle). */
export function updateShaderTheme(isDark: boolean): void {
_bgColor = isDark ? [26 / 255, 26 / 255, 26 / 255] : [245 / 255, 245 / 255, 245 / 255];
_isLight = isDark ? 0.0 : 1.0;
}
/** Get the currently active shader ID (or null). */
export function getActiveShader(): ShaderEffectId | null {
return _activeShader;
}
/** Check if a given ID is a valid shader effect. */
export function isShaderEffect(id: string): id is ShaderEffectId {
return id in SHADER_MAP;
}
/** Initialize resize listener. */
export function initBgShaders(): void {
window.addEventListener('resize', () => { if (_raf) _resize(); });
}
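For reference, the accent-color plumbing in `updateShaderAccent` reduces to a small pure function: parse a `'#RRGGBB'` string into normalized unit-RGB components for the `u_accent` uniform. A standalone sketch of that conversion (the function name `hexToUnitRgb` is introduced here for illustration):

```typescript
// '#RRGGBB' -> [r, g, b] with each channel normalized to 0..1,
// matching the values updateShaderAccent stores for the u_accent uniform.
function hexToUnitRgb(hex: string): [number, number, number] {
  return [
    parseInt(hex.slice(1, 3), 16) / 255,
    parseInt(hex.slice(3, 5), 16) / 255,
    parseInt(hex.slice(5, 7), 16) / 255,
  ];
}
```

Note that, like the original, this assumes a well-formed 7-character `#RRGGBB` input; shorthand `#RGB` or invalid strings would need validation before parsing.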
