Compare commits


2 Commits

f8656b72a6 Add configuration backup/restore with settings modal
Backend: GET /api/v1/system/backup bundles all 11 store JSON files into a
single downloadable backup with metadata envelope. POST /api/v1/system/restore
validates and writes stores atomically, then schedules a delayed server restart
via detached restart.ps1 subprocess.

Frontend: Settings modal (gear button in header) with Download Backup and
Restore from Backup buttons. Restore shows confirm dialog, uploads via
multipart FormData, then displays fullscreen restart overlay that polls
/health until the server comes back and reloads the page.

Locales: en, ru, zh translations for all settings.* keys.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 18:23:18 +03:00
9cfe628cc5 Codebase review fixes: stability, performance, quality improvements
Stability: Add outer try/except/finally with _running=False cleanup to all 6
processing loop methods (live, color_strip, effect, audio, composite, mapped).
Add exponential backoff on consecutive capture errors in live_stream. Move
audio stream.stop() outside lock scope.

Performance: Replace per-pixel Python loop with np.array().tobytes() in
ddp_client. Vectorize pixelate filter with cv2.resize down+up. Vectorize
gradient rendering with np.searchsorted.

Frontend: Add lockBody/unlockBody re-entrancy counter. Add {once:true} to
fetchWithAuth abort listener. Null ws.onclose before ws.close() in LED preview.

Backend: Remove auth token prefix from log messages. Add atomic_write_json
helper (tempfile + os.replace) and update all 10 stores. Add name uniqueness
checks to all update methods. Fix DELETE status codes to 204 in audio_sources
and value_sources. Fix get_source() silent bug in color_strip_sources.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 18:23:04 +03:00
39 changed files with 1313 additions and 786 deletions

CODEBASE_REVIEW.md (new file)

@@ -0,0 +1,54 @@
# Codebase Review — 2026-02-26
Findings from full codebase review. Items ordered by priority within each category.
## Stability (Critical)
- [x] **Fatal loop exception leaks resources** — Added outer `try/except/finally` with `self._running = False` to all 10 processing loop methods across `live_stream.py`, `color_strip_stream.py`, `effect_stream.py`, `audio_stream.py`, `composite_stream.py`, `mapped_stream.py`. Also added per-iteration `try/except` where missing.
- [x] **`_is_running` flag cleanup** — Fixed via `finally: self._running = False` in all loop methods. *(Race condition via `threading.Event` deferred — current pattern sufficient with the finally block.)*
- [x] **`ColorStripStreamManager` thread safety** — **FALSE POSITIVE**: All access is from the async event loop; methods are synchronous with no `await` points, so no preemption is possible.
- [x] **Audio `stream.stop()` called under lock** — Moved `stream.stop()` outside lock scope in both `release()` and `release_all()` in `audio_capture.py`.
- [x] **WS accept-before-validate** — **FALSE POSITIVE**: All WS endpoints validate auth and resolve configs BEFORE calling `websocket.accept()`.
- [x] **Capture error no-backoff** — Added consecutive error counter with exponential backoff (`min(1.0, 0.1 * (errors - 5))`) in `ScreenCaptureLiveStream._capture_loop()`.
- [ ] **WGC session close not detected** — Deferred (Windows-specific edge case, low priority).
- [x] **`LiveStreamManager.acquire()` not thread-safe** — **FALSE POSITIVE**: Same as ColorStripStreamManager — all access from async event loop, no await in methods.
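The backoff formula quoted above is easy to isolate; a minimal sketch, assuming hypothetical helper names (the real counter lives inside `ScreenCaptureLiveStream._capture_loop()`):

```python
import time

def backoff_delay(consecutive_errors: int, threshold: int = 5, cap: float = 1.0) -> float:
    """Delay before the next capture attempt; 0 until the error threshold is hit.

    Mirrors the review formula: min(1.0, 0.1 * (errors - 5)).
    """
    if consecutive_errors <= threshold:
        return 0.0
    return min(cap, 0.1 * (consecutive_errors - threshold))

def capture_loop_step(capture_once, errors: int):
    """One iteration of the retry pattern: returns (frame_or_None, new_error_count)."""
    try:
        return capture_once(), 0           # success resets the counter
    except Exception:
        errors += 1
        time.sleep(backoff_delay(errors))  # grows 0.1 s per extra error, capped at 1 s
        return None, errors
```

The cap keeps a persistently failing source from spinning the CPU while still recovering quickly from one-off glitches.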
## Performance (High Impact)
- [x] **Per-pixel Python loop in `send_pixels()`** — Replaced per-pixel Python loop with `np.array().tobytes()` in `ddp_client.py`. Hot path already uses `send_pixels_numpy()`.
- [ ] **WGC 6MB frame allocation per callback** — Deferred (Windows-specific, requires WGC API changes).
- [x] **Gradient rendering O(LEDs×Stops) Python loop** — Vectorized with NumPy: `np.searchsorted` for stop lookup + vectorized interpolation in `_compute_gradient_colors()`.
- [x] **`PixelateFilter` nested Python loop** — Replaced with `cv2.resize` down (INTER_AREA) + up (INTER_NEAREST) — pure C++ backend.
- [x] **`DownscalerFilter` double allocation** — **FALSE POSITIVE**: Already uses single `cv2.resize()` call (vectorized C++).
- [x] **`SaturationFilter` ~25MB temp arrays** — **FALSE POSITIVE**: Already uses pre-allocated scratch buffer and vectorized in-place numpy.
- [x] **`FrameInterpolationFilter` copies full image** — **FALSE POSITIVE**: Already uses vectorized numpy integer blending with image pool.
- [x] **`datetime.utcnow()` per frame** — **LOW IMPACT**: ~1-2μs per call, negligible at 60fps. Deprecation tracked under Backend Quality.
- [x] **Unbounded diagnostic lists** — **FALSE POSITIVE**: Lists are cleared every 5 seconds (~300 entries max at 60fps). Trivial memory.
## Frontend Quality
- [x] **`lockBody()`/`unlockBody()` not re-entrant** — Added `_lockCount` reference counter and `_savedScrollY` in `ui.js`. First lock saves scroll, last unlock restores.
- [x] **XSS via unescaped engine config keys** — **FALSE POSITIVE**: Both capture template and audio template card renderers already use `escapeHtml()` on keys and values.
- [x] **LED preview WS `onclose` not nulled** — Added `ws.onclose = null` before `ws.close()` in `disconnectLedPreviewWS()` in `targets.js`.
- [x] **`fetchWithAuth` retry adds duplicate listeners** — Added `{ once: true }` to abort signal listener in `api.js`.
- [x] **Audio `requestAnimationFrame` loop continues after WS close** — **FALSE POSITIVE**: Loop already checks `testAudioModal.isOpen` before scheduling next frame, and `_cleanupTest()` cancels the animation frame.
## Backend Quality
- [ ] **No thread-safety in `JsonStore`** — Deferred (low risk — all stores are accessed from async event loop).
- [x] **Auth token prefix logged** — Removed token prefix from log message in `auth.py`. Now logs only "Invalid API key attempt".
- [ ] **Duplicate capture/test code** — Deferred (code duplication, not a bug — refactoring would reduce LOC but doesn't fix a defect).
- [x] **Update methods allow duplicate names** — Added name uniqueness checks to `update_template` in `template_store.py`, `postprocessing_template_store.py`, `audio_template_store.py`, `pattern_template_store.py`, and `update_profile` in `profile_store.py`. Also added missing check to `create_profile`.
- [ ] **Routes access `manager._private` attrs** — Deferred (stylistic, not a bug — would require adding public accessor methods).
- [x] **Non-atomic file writes** — Created `utils/file_ops.py` with `atomic_write_json()` helper (tempfile + `os.replace`). Updated all 10 store files.
- [ ] **444 f-string logger calls** — Deferred (negligible performance impact — f-string evaluation is cheap; lazy `%s` formatting only pays off at very high call rates).
- [x] **`get_source()` silent bug** — Fixed: `color_strip_sources.py:_resolve_display_index()` called `picture_source_store.get_source()` which doesn't exist (should be `get_stream()`). Was silently returning `0` for display index.
- [ ] **`get_config()` race** — Deferred (low risk — config changes are infrequent user-initiated operations).
- [ ] **`datetime.utcnow()` deprecated** — Deferred (functional, deprecation warning only appears in Python 3.12+).
- [x] **Inconsistent DELETE status codes** — Changed `audio_sources.py` and `value_sources.py` DELETE endpoints from 200 to 204 (matching all other DELETE endpoints).
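The `atomic_write_json` helper referenced above is not shown in this diff; a minimal sketch consistent with the description (temp file in the target directory, then `os.replace`):

```python
import json
import os
import tempfile
from pathlib import Path

def atomic_write_json(path: Path, data) -> None:
    """Write JSON so readers never observe a partially written file.

    os.replace() is atomic when source and target are on the same
    filesystem, hence the temp file is created next to `path`.
    """
    path = Path(path)
    fd, tmp_name = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2, ensure_ascii=False)
            f.flush()
            os.fsync(f.fileno())       # ensure bytes hit disk before the rename
        os.replace(tmp_name, path)     # atomic swap into place
    except BaseException:
        try:
            os.unlink(tmp_name)        # don't leave stray temp files behind
        except OSError:
            pass
        raise
```

A crash mid-write leaves the old file intact; only a completed temp file is ever renamed over it.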
## Architecture (Observations, no action needed)
**Strengths**: Clean layered design, plugin registries, reference-counted stream sharing, consistent API patterns.
**Weaknesses**: No backpressure (slow consumers buffer frames), thread count grows linearly, config global singleton, reference counting races.

auth.py

@@ -59,7 +59,7 @@ def verify_api_key(
             break
     if not authenticated_as:
-        logger.warning(f"Invalid API key attempt: {token[:8]}...")
+        logger.warning("Invalid API key attempt")
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED,
             detail="Invalid API key",


@@ -129,7 +129,7 @@ async def stream_capture_test(
         done_event.set()
     # Start capture in background thread
-    loop = asyncio.get_event_loop()
+    loop = asyncio.get_running_loop()
     capture_future = loop.run_in_executor(None, _capture_loop)
     start_time = time.perf_counter()
@@ -142,6 +142,8 @@ async def stream_capture_test(
     # Check for init error
     if init_error:
+        stop_event.set()
+        await capture_future
         await websocket.send_json({"type": "error", "detail": init_error})
         return

audio_sources.py

@@ -125,7 +125,7 @@ async def update_audio_source(
         raise HTTPException(status_code=400, detail=str(e))

-@router.delete("/api/v1/audio-sources/{source_id}", tags=["Audio Sources"])
+@router.delete("/api/v1/audio-sources/{source_id}", status_code=204, tags=["Audio Sources"])
 async def delete_audio_source(
     source_id: str,
     _auth: AuthRequired,
@@ -143,7 +143,6 @@ async def delete_audio_source(
             )
         store.delete_source(source_id)
-        return {"status": "deleted", "id": source_id}
     except ValueError as e:
         raise HTTPException(status_code=400, detail=str(e))

color_strip_sources.py

@@ -103,7 +103,7 @@ def _resolve_display_index(picture_source_id: str, picture_source_store: Picture
     if not picture_source_id or depth > 5:
         return 0
     try:
-        ps = picture_source_store.get_source(picture_source_id)
+        ps = picture_source_store.get_stream(picture_source_id)
     except Exception:
         return 0
     if isinstance(ps, ScreenCapturePictureSource):


@@ -1,12 +1,17 @@
-"""System routes: health, version, displays, performance, ADB."""
+"""System routes: health, version, displays, performance, backup/restore, ADB."""
+import io
+import json
 import subprocess
 import sys
+import threading
+from datetime import datetime
 from pathlib import Path
 from typing import Optional
 import psutil
-from fastapi import APIRouter, Depends, HTTPException, Query
+from fastapi import APIRouter, Depends, File, HTTPException, Query, UploadFile
+from fastapi.responses import StreamingResponse
 from pydantic import BaseModel
 from wled_controller import __version__
@@ -19,10 +24,12 @@ from wled_controller.api.schemas.system import (
     HealthResponse,
     PerformanceResponse,
     ProcessListResponse,
+    RestoreResponse,
     VersionResponse,
 )
 from wled_controller.config import get_config
 from wled_controller.core.capture.screen_capture import get_available_displays
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
 logger = get_logger(__name__)
@@ -206,6 +213,141 @@ async def get_metrics_history(
return manager.metrics_history.get_history()
# ---------------------------------------------------------------------------
# Configuration backup / restore
# ---------------------------------------------------------------------------
# Mapping: logical store name → StorageConfig attribute name
STORE_MAP = {
"devices": "devices_file",
"capture_templates": "templates_file",
"postprocessing_templates": "postprocessing_templates_file",
"picture_sources": "picture_sources_file",
"picture_targets": "picture_targets_file",
"pattern_templates": "pattern_templates_file",
"color_strip_sources": "color_strip_sources_file",
"audio_sources": "audio_sources_file",
"audio_templates": "audio_templates_file",
"value_sources": "value_sources_file",
"profiles": "profiles_file",
}
_RESTART_SCRIPT = Path(__file__).resolve().parents[4] / "restart.ps1"
def _schedule_restart() -> None:
"""Spawn restart.ps1 after a short delay so the HTTP response completes."""
def _restart():
import time
time.sleep(1)
subprocess.Popen(
["powershell", "-ExecutionPolicy", "Bypass", "-File", str(_RESTART_SCRIPT)],
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP,
)
threading.Thread(target=_restart, daemon=True).start()
@router.get("/api/v1/system/backup", tags=["System"])
def backup_config(_: AuthRequired):
"""Download all configuration as a single JSON backup file."""
config = get_config()
stores = {}
for store_key, config_attr in STORE_MAP.items():
file_path = Path(getattr(config.storage, config_attr))
if file_path.exists():
with open(file_path, "r", encoding="utf-8") as f:
stores[store_key] = json.load(f)
else:
stores[store_key] = {}
backup = {
"meta": {
"format": "ledgrab-backup",
"format_version": 1,
"app_version": __version__,
"created_at": datetime.utcnow().isoformat() + "Z",
"store_count": len(stores),
},
"stores": stores,
}
content = json.dumps(backup, indent=2, ensure_ascii=False)
timestamp = datetime.utcnow().strftime("%Y-%m-%dT%H%M%S")
filename = f"ledgrab-backup-{timestamp}.json"
return StreamingResponse(
io.BytesIO(content.encode("utf-8")),
media_type="application/json",
headers={"Content-Disposition": f'attachment; filename="{filename}"'},
)
@router.post("/api/v1/system/restore", response_model=RestoreResponse, tags=["System"])
async def restore_config(
_: AuthRequired,
file: UploadFile = File(...),
):
"""Upload a backup file to restore all configuration. Triggers server restart."""
# Read and parse
try:
raw = await file.read()
if len(raw) > 10 * 1024 * 1024: # 10 MB limit
raise HTTPException(status_code=400, detail="Backup file too large (max 10 MB)")
backup = json.loads(raw)
except json.JSONDecodeError as e:
raise HTTPException(status_code=400, detail=f"Invalid JSON file: {e}")
# Validate envelope
meta = backup.get("meta")
if not isinstance(meta, dict) or meta.get("format") != "ledgrab-backup":
raise HTTPException(status_code=400, detail="Not a valid LED Grab backup file")
fmt_version = meta.get("format_version", 0)
if fmt_version > 1:
raise HTTPException(
status_code=400,
detail=f"Backup format version {fmt_version} is not supported by this server version",
)
stores = backup.get("stores")
if not isinstance(stores, dict):
raise HTTPException(status_code=400, detail="Backup file missing 'stores' section")
known_keys = set(STORE_MAP.keys())
present_keys = known_keys & set(stores.keys())
if not present_keys:
raise HTTPException(status_code=400, detail="Backup contains no recognized store data")
for key in present_keys:
if not isinstance(stores[key], dict):
raise HTTPException(status_code=400, detail=f"Store '{key}' in backup is not a valid JSON object")
# Write store files atomically
config = get_config()
written = 0
for store_key, config_attr in STORE_MAP.items():
if store_key in stores:
file_path = Path(getattr(config.storage, config_attr))
atomic_write_json(file_path, stores[store_key])
written += 1
logger.info(f"Restored store: {store_key} -> {file_path}")
logger.info(f"Restore complete: {written}/{len(STORE_MAP)} stores written. Scheduling restart...")
_schedule_restart()
missing = known_keys - present_keys
return RestoreResponse(
status="restored",
stores_written=written,
stores_total=len(STORE_MAP),
missing_stores=sorted(missing) if missing else [],
restart_scheduled=True,
message=f"Restored {written} stores. Server restarting...",
)
# ---------------------------------------------------------------------------
# ADB helpers (for Android / scrcpy engine)
# ---------------------------------------------------------------------------
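The validation chain in `restore_config` above could be factored into a standalone, testable helper. A sketch under that assumption (not part of the diff; returns an error string instead of raising `HTTPException`):

```python
import json

def validate_backup_envelope(raw: bytes, known_keys: set[str]):
    """Mirror the restore endpoint's checks; returns (stores, error_message)."""
    if len(raw) > 10 * 1024 * 1024:                 # 10 MB limit
        return None, "Backup file too large (max 10 MB)"
    try:
        backup = json.loads(raw)
    except json.JSONDecodeError as e:
        return None, f"Invalid JSON file: {e}"
    meta = backup.get("meta")
    if not isinstance(meta, dict) or meta.get("format") != "ledgrab-backup":
        return None, "Not a valid LED Grab backup file"
    if meta.get("format_version", 0) > 1:
        return None, "Unsupported backup format version"
    stores = backup.get("stores")
    if not isinstance(stores, dict):
        return None, "Backup file missing 'stores' section"
    present = known_keys & set(stores)
    if not present:
        return None, "Backup contains no recognized store data"
    for key in present:                             # only known stores are validated
        if not isinstance(stores[key], dict):
            return None, f"Store '{key}' is not a JSON object"
    return {k: stores[k] for k in present}, None
```

Separating validation from I/O lets the envelope rules be unit-tested without spinning up the FastAPI app.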

value_sources.py

@@ -152,7 +152,7 @@ async def update_value_source(
         raise HTTPException(status_code=400, detail=str(e))

-@router.delete("/api/v1/value-sources/{source_id}", tags=["Value Sources"])
+@router.delete("/api/v1/value-sources/{source_id}", status_code=204, tags=["Value Sources"])
 async def delete_value_source(
     source_id: str,
     _auth: AuthRequired,
@@ -171,7 +171,6 @@ async def delete_value_source(
             )
         store.delete_source(source_id)
-        return {"status": "deleted", "id": source_id}
     except ValueError as e:
         raise HTTPException(status_code=400, detail=str(e))

api/schemas/system.py

@@ -68,3 +68,14 @@ class PerformanceResponse(BaseModel):
     ram_percent: float = Field(description="RAM usage percent")
     gpu: GpuInfo | None = Field(default=None, description="GPU info (null if unavailable)")
     timestamp: datetime = Field(description="Measurement timestamp")
+
+
+class RestoreResponse(BaseModel):
+    """Response after restoring configuration backup."""
+
+    status: str = Field(description="Status of restore operation")
+    stores_written: int = Field(description="Number of stores successfully written")
+    stores_total: int = Field(description="Total number of known stores")
+    missing_stores: List[str] = Field(default_factory=list, description="Store keys not found in backup")
+    restart_scheduled: bool = Field(description="Whether server restart was scheduled")
+    message: str = Field(description="Human-readable status message")

audio_capture.py

@@ -222,6 +222,7 @@ class AudioCaptureManager:
             return
         key = (engine_type, device_index, is_loopback)
+        stream_to_stop = None
         with self._lock:
             if key not in self._streams:
                 logger.warning(f"Attempted to release unknown audio capture: {key}")
@@ -230,23 +231,28 @@ class AudioCaptureManager:
             stream, ref_count = self._streams[key]
             ref_count -= 1
             if ref_count <= 0:
-                stream.stop()
+                stream_to_stop = stream
                 del self._streams[key]
                 logger.info(f"Removed audio capture {key}")
             else:
                 self._streams[key] = (stream, ref_count)
                 logger.debug(f"Released audio capture {key} (ref_count={ref_count})")
+        # Stop outside the lock — stream.stop() joins a thread (up to 5s)
+        if stream_to_stop is not None:
+            stream_to_stop.stop()

     def release_all(self) -> None:
         """Stop and remove all capture streams. Called on shutdown."""
         with self._lock:
-            for key, (stream, _) in list(self._streams.items()):
-                try:
-                    stream.stop()
-                except Exception as e:
-                    logger.error(f"Error stopping audio capture {key}: {e}")
+            streams_to_stop = list(self._streams.items())
             self._streams.clear()
-        logger.info("Released all audio capture streams")
+        # Stop outside the lock — each stop() joins a thread
+        for key, (stream, _) in streams_to_stop:
+            try:
+                stream.stop()
+            except Exception as e:
+                logger.error(f"Error stopping audio capture {key}: {e}")
+        logger.info("Released all audio capture streams")

     @staticmethod
     def enumerate_devices() -> List[dict]:

ddp_client.py

@@ -193,12 +193,13 @@ class DDPClient:
         try:
             # Send plain RGB — WLED handles per-bus color order conversion
             # internally when outputting to hardware.
+            # Convert to numpy to avoid per-pixel Python loop
             bpp = 4 if self.rgbw else 3  # bytes per pixel
-            pixel_bytes = bytearray()
-            for r, g, b in pixels:
-                pixel_bytes.extend((int(r), int(g), int(b)))
-                if self.rgbw:
-                    pixel_bytes.append(0)  # White channel = 0
+            pixel_array = np.array(pixels, dtype=np.uint8)
+            if self.rgbw:
+                white = np.zeros((pixel_array.shape[0], 1), dtype=np.uint8)
+                pixel_array = np.hstack((pixel_array, white))
+            pixel_bytes = pixel_array.tobytes()
             total_bytes = len(pixel_bytes)
             # Align payload to full pixels (multiple of bpp) to avoid splitting


@@ -2,6 +2,7 @@
 from typing import Any, Dict, List, Optional
+import cv2
 import numpy as np
 from wled_controller.core.filters.base import FilterOptionDef, PostprocessingFilter
@@ -37,12 +38,12 @@ class PixelateFilter(PostprocessingFilter):
         h, w = image.shape[:2]
-        for y in range(0, h, block_size):
-            for x in range(0, w, block_size):
-                y_end = min(y + block_size, h)
-                x_end = min(x + block_size, w)
-                block = image[y:y_end, x:x_end]
-                mean_color = block.mean(axis=(0, 1)).astype(np.uint8)
-                image[y:y_end, x:x_end] = mean_color
+        # Resize down (area averaging) then up (nearest neighbor) —
+        # vectorized C++ instead of per-block Python loop
+        small_w = max(1, w // block_size)
+        small_h = max(1, h // block_size)
+        small = cv2.resize(image, (small_w, small_h), interpolation=cv2.INTER_AREA)
+        pixelated = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
+        np.copyto(image, pixelated)
         return None

color_strip_stream.py

@@ -229,63 +229,71 @@ class AudioColorStripStream(ColorStripStream):
             "vu_meter": self._render_vu_meter,
         }
-        with high_resolution_timer():
-            while self._running:
-                loop_start = time.perf_counter()
-                frame_time = 1.0 / self._fps
-                n = self._led_count
+        try:
+            with high_resolution_timer():
+                while self._running:
+                    loop_start = time.perf_counter()
+                    frame_time = 1.0 / self._fps
+                    try:
+                        n = self._led_count

-                # Rebuild scratch buffers and pre-computed arrays when LED count changes
-                if n != _pool_n:
-                    _pool_n = n
-                    _buf_a = np.zeros((n, 3), dtype=np.uint8)
-                    _buf_b = np.zeros((n, 3), dtype=np.uint8)
-                    _band_x = np.arange(NUM_BANDS, dtype=np.float32)
-                    half = (n + 1) // 2
-                    _led_x_mirror = np.linspace(0, NUM_BANDS - 1, half)
-                    _led_x = np.linspace(0, NUM_BANDS - 1, n)
-                    _full_amp = np.empty(n, dtype=np.float32)
-                    _vu_gradient = np.linspace(0, 1, n, dtype=np.float32)
-                    _indices_buf = np.empty(n, dtype=np.int32)
-                    self._prev_spectrum = None  # reset smoothing on resize
+                        # Rebuild scratch buffers and pre-computed arrays when LED count changes
+                        if n != _pool_n:
+                            _pool_n = n
+                            _buf_a = np.zeros((n, 3), dtype=np.uint8)
+                            _buf_b = np.zeros((n, 3), dtype=np.uint8)
+                            _band_x = np.arange(NUM_BANDS, dtype=np.float32)
+                            half = (n + 1) // 2
+                            _led_x_mirror = np.linspace(0, NUM_BANDS - 1, half)
+                            _led_x = np.linspace(0, NUM_BANDS - 1, n)
+                            _full_amp = np.empty(n, dtype=np.float32)
+                            _vu_gradient = np.linspace(0, 1, n, dtype=np.float32)
+                            _indices_buf = np.empty(n, dtype=np.int32)
+                            self._prev_spectrum = None  # reset smoothing on resize

-                # Make pre-computed arrays available to render methods
-                self._band_x = _band_x
-                self._led_x = _led_x
-                self._led_x_mirror = _led_x_mirror
-                self._full_amp = _full_amp
-                self._vu_gradient = _vu_gradient
-                self._indices_buf = _indices_buf
+                        # Make pre-computed arrays available to render methods
+                        self._band_x = _band_x
+                        self._led_x = _led_x
+                        self._led_x_mirror = _led_x_mirror
+                        self._full_amp = _full_amp
+                        self._vu_gradient = _vu_gradient
+                        self._indices_buf = _indices_buf

-                buf = _buf_a if _use_a else _buf_b
-                _use_a = not _use_a
+                        buf = _buf_a if _use_a else _buf_b
+                        _use_a = not _use_a

-                # Get latest audio analysis
-                analysis = None
-                if self._audio_stream is not None:
-                    analysis = self._audio_stream.get_latest_analysis()
+                        # Get latest audio analysis
+                        analysis = None
+                        if self._audio_stream is not None:
+                            analysis = self._audio_stream.get_latest_analysis()

-                render_fn = renderers.get(self._visualization_mode, self._render_spectrum)
-                t_render = time.perf_counter()
-                render_fn(buf, n, analysis)
-                render_ms = (time.perf_counter() - t_render) * 1000
+                        render_fn = renderers.get(self._visualization_mode, self._render_spectrum)
+                        t_render = time.perf_counter()
+                        render_fn(buf, n, analysis)
+                        render_ms = (time.perf_counter() - t_render) * 1000

-                with self._colors_lock:
-                    self._colors = buf
+                        with self._colors_lock:
+                            self._colors = buf

-                # Pull capture-side timing and combine with render timing
-                capture_timing = self._audio_stream.get_last_timing() if self._audio_stream else {}
-                read_ms = capture_timing.get("read_ms", 0)
-                fft_ms = capture_timing.get("fft_ms", 0)
-                self._last_timing = {
-                    "audio_read_ms": read_ms,
-                    "audio_fft_ms": fft_ms,
-                    "audio_render_ms": render_ms,
-                    "total_ms": read_ms + fft_ms + render_ms,
-                }
+                        # Pull capture-side timing and combine with render timing
+                        capture_timing = self._audio_stream.get_last_timing() if self._audio_stream else {}
+                        read_ms = capture_timing.get("read_ms", 0)
+                        fft_ms = capture_timing.get("fft_ms", 0)
+                        self._last_timing = {
+                            "audio_read_ms": read_ms,
+                            "audio_fft_ms": fft_ms,
+                            "audio_render_ms": render_ms,
+                            "total_ms": read_ms + fft_ms + render_ms,
+                        }
+                    except Exception as e:
+                        logger.error(f"AudioColorStripStream render error: {e}")

-                elapsed = time.perf_counter() - loop_start
-                time.sleep(max(frame_time - elapsed, 0.001))
+                    elapsed = time.perf_counter() - loop_start
+                    time.sleep(max(frame_time - elapsed, 0.001))
+        except Exception as e:
+            logger.error(f"Fatal AudioColorStripStream loop error: {e}", exc_info=True)
+        finally:
+            self._running = False
# ── Channel selection ─────────────────────────────────────────


@@ -334,145 +334,150 @@ class PictureColorStripStream(ColorStripStream):
led_colors = frame_buf
return led_colors
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
fps = self._fps
frame_time = 1.0 / fps if fps > 0 else 1.0
try:
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
fps = self._fps
frame_time = 1.0 / fps if fps > 0 else 1.0
try:
frame = self._live_stream.get_latest_frame()
try:
frame = self._live_stream.get_latest_frame()
if frame is None or frame is cached_frame:
if frame is None or frame is cached_frame:
if (
frame is not None
and self._frame_interpolation
and self._interp_from is not None
and self._interp_to is not None
and _u16_a is not None
):
# Interpolate between previous and current capture
t = min(1.0, (loop_start - self._interp_start) / self._interp_duration)
frame_buf = _frame_a if _use_a else _frame_b
_use_a = not _use_a
_blend_u16(self._interp_from, self._interp_to, int(t * 256), frame_buf)
led_colors = _apply_corrections(frame_buf, frame_buf)
with self._colors_lock:
self._latest_colors = led_colors
elapsed = time.perf_counter() - loop_start
time.sleep(max(frame_time - elapsed, 0.001))
continue
interval = (
loop_start - self._last_capture_time
if self._last_capture_time > 0
else frame_time
)
self._last_capture_time = loop_start
cached_frame = frame
t0 = time.perf_counter()
calibration = self._calibration
border_pixels = extract_border_pixels(frame, calibration.border_width)
t1 = time.perf_counter()
led_colors = self._pixel_mapper.map_border_to_leds(border_pixels)
t2 = time.perf_counter()
# Ensure scratch pool is sized for this frame
target_count = self._led_count
_n = target_count if target_count > 0 else len(led_colors)
if _n > 0 and _n != _pool_n:
_pool_n = _n
_frame_a = np.empty((_n, 3), dtype=np.uint8)
_frame_b = np.empty((_n, 3), dtype=np.uint8)
_u16_a = np.empty((_n, 3), dtype=np.uint16)
_u16_b = np.empty((_n, 3), dtype=np.uint16)
_i32 = np.empty((_n, 3), dtype=np.int32)
_i32_gray = np.empty((_n, 1), dtype=np.int32)
self._previous_colors = None
# Copy/pad into double-buffered frame (avoids per-frame allocations)
frame_buf = _frame_a if _use_a else _frame_b
_use_a = not _use_a
n_leds = len(led_colors)
if _pool_n > 0:
if n_leds < _pool_n:
frame_buf[:n_leds] = led_colors
frame_buf[n_leds:] = 0
elif n_leds > _pool_n:
frame_buf[:] = led_colors[:_pool_n]
else:
frame_buf[:] = led_colors
led_colors = frame_buf
# Temporal smoothing (pre-allocated uint16 scratch)
smoothing = self._smoothing
if (
frame is not None
and self._frame_interpolation
and self._interp_from is not None
and self._interp_to is not None
self._previous_colors is not None
and smoothing > 0
and len(self._previous_colors) == len(led_colors)
and _u16_a is not None
):
# Interpolate between previous and current capture
t = min(1.0, (loop_start - self._interp_start) / self._interp_duration)
frame_buf = _frame_a if _use_a else _frame_b
_use_a = not _use_a
_blend_u16(self._interp_from, self._interp_to, int(t * 256), frame_buf)
led_colors = _apply_corrections(frame_buf, frame_buf)
with self._colors_lock:
self._latest_colors = led_colors
elapsed = time.perf_counter() - loop_start
time.sleep(max(frame_time - elapsed, 0.001))
continue
_blend_u16(led_colors, self._previous_colors,
int(smoothing * 256), led_colors)
t3 = time.perf_counter()
interval = (
loop_start - self._last_capture_time
if self._last_capture_time > 0
else frame_time
)
self._last_capture_time = loop_start
cached_frame = frame
# Update interpolation buffers (smoothed colors, before corrections)
# Must be AFTER smoothing so idle-tick interpolation produces
# output consistent with new-frame ticks (both smoothed).
if self._frame_interpolation:
self._interp_from = self._interp_to
self._interp_to = led_colors.copy()
self._interp_start = loop_start
self._interp_duration = max(interval, 0.001)
t0 = time.perf_counter()
# Saturation (pre-allocated int32 scratch)
saturation = self._saturation
if saturation != 1.0:
_apply_saturation(led_colors, saturation, _i32, _i32_gray, led_colors)
t4 = time.perf_counter()
calibration = self._calibration
border_pixels = extract_border_pixels(frame, calibration.border_width)
t1 = time.perf_counter()
# Gamma (LUT lookup — O(1) per pixel)
if self._gamma != 1.0:
led_colors = self._gamma_lut[led_colors]
t5 = time.perf_counter()
led_colors = self._pixel_mapper.map_border_to_leds(border_pixels)
t2 = time.perf_counter()
# Brightness (integer math with pre-allocated int32 scratch)
brightness = self._brightness
if brightness != 1.0:
bright_int = int(brightness * 256)
np.copyto(_i32, led_colors, casting='unsafe')
_i32 *= bright_int
_i32 >>= 8
np.clip(_i32, 0, 255, out=_i32)
np.copyto(frame_buf, _i32, casting='unsafe')
led_colors = frame_buf
t6 = time.perf_counter()
# Ensure scratch pool is sized for this frame
target_count = self._led_count
_n = target_count if target_count > 0 else len(led_colors)
if _n > 0 and _n != _pool_n:
_pool_n = _n
_frame_a = np.empty((_n, 3), dtype=np.uint8)
_frame_b = np.empty((_n, 3), dtype=np.uint8)
_u16_a = np.empty((_n, 3), dtype=np.uint16)
_u16_b = np.empty((_n, 3), dtype=np.uint16)
_i32 = np.empty((_n, 3), dtype=np.int32)
_i32_gray = np.empty((_n, 1), dtype=np.int32)
self._previous_colors = None
self._previous_colors = led_colors
# Copy/pad into double-buffered frame (avoids per-frame allocations)
frame_buf = _frame_a if _use_a else _frame_b
_use_a = not _use_a
n_leds = len(led_colors)
if _pool_n > 0:
if n_leds < _pool_n:
frame_buf[:n_leds] = led_colors
frame_buf[n_leds:] = 0
elif n_leds > _pool_n:
frame_buf[:] = led_colors[:_pool_n]
else:
frame_buf[:] = led_colors
led_colors = frame_buf
with self._colors_lock:
self._latest_colors = led_colors
# Temporal smoothing (pre-allocated uint16 scratch)
smoothing = self._smoothing
if (
self._previous_colors is not None
and smoothing > 0
and len(self._previous_colors) == len(led_colors)
and _u16_a is not None
):
_blend_u16(led_colors, self._previous_colors,
int(smoothing * 256), led_colors)
t3 = time.perf_counter()
self._last_timing = {
"extract_ms": (t1 - t0) * 1000,
"map_leds_ms": (t2 - t1) * 1000,
"smooth_ms": (t3 - t2) * 1000,
"saturation_ms": (t4 - t3) * 1000,
"gamma_ms": (t5 - t4) * 1000,
"brightness_ms": (t6 - t5) * 1000,
"total_ms": (t6 - t0) * 1000,
}
# Update interpolation buffers (smoothed colors, before corrections)
# Must be AFTER smoothing so idle-tick interpolation produces
# output consistent with new-frame ticks (both smoothed).
if self._frame_interpolation:
self._interp_from = self._interp_to
self._interp_to = led_colors.copy()
self._interp_start = loop_start
self._interp_duration = max(interval, 0.001)
except Exception as e:
logger.error(f"PictureColorStripStream processing error: {e}", exc_info=True)
# Saturation (pre-allocated int32 scratch)
saturation = self._saturation
if saturation != 1.0:
_apply_saturation(led_colors, saturation, _i32, _i32_gray, led_colors)
t4 = time.perf_counter()
# Gamma (LUT lookup — O(1) per pixel)
if self._gamma != 1.0:
led_colors = self._gamma_lut[led_colors]
t5 = time.perf_counter()
# Brightness (integer math with pre-allocated int32 scratch)
brightness = self._brightness
if brightness != 1.0:
bright_int = int(brightness * 256)
np.copyto(_i32, led_colors, casting='unsafe')
_i32 *= bright_int
_i32 >>= 8
np.clip(_i32, 0, 255, out=_i32)
np.copyto(frame_buf, _i32, casting='unsafe')
led_colors = frame_buf
t6 = time.perf_counter()
self._previous_colors = led_colors
with self._colors_lock:
self._latest_colors = led_colors
self._last_timing = {
"extract_ms": (t1 - t0) * 1000,
"map_leds_ms": (t2 - t1) * 1000,
"smooth_ms": (t3 - t2) * 1000,
"saturation_ms": (t4 - t3) * 1000,
"gamma_ms": (t5 - t4) * 1000,
"brightness_ms": (t6 - t5) * 1000,
"total_ms": (t6 - t0) * 1000,
}
except Exception as e:
logger.error(f"PictureColorStripStream processing error: {e}", exc_info=True)
elapsed = time.perf_counter() - loop_start
remaining = frame_time - elapsed
if remaining > 0:
time.sleep(remaining)
elapsed = time.perf_counter() - loop_start
remaining = frame_time - elapsed
if remaining > 0:
time.sleep(remaining)
except Exception as e:
logger.error(f"Fatal PictureColorStripStream loop error: {e}", exc_info=True)
finally:
self._running = False
def _compute_gradient_colors(stops: list, led_count: int) -> np.ndarray:
@@ -506,30 +511,42 @@ def _compute_gradient_colors(stops: list, led_count: int) -> np.ndarray:
c = stop.get("color", [255, 255, 255])
return np.array(c if isinstance(c, list) and len(c) == 3 else [255, 255, 255], dtype=np.float32)
# Vectorized: compute all LED positions at once
positions = np.linspace(0, 1, led_count) if led_count > 1 else np.array([0.0])
result = np.zeros((led_count, 3), dtype=np.float32)
for i in range(led_count):
p = i / (led_count - 1) if led_count > 1 else 0.0
# Extract stop positions and colors into arrays
n_stops = len(sorted_stops)
stop_positions = np.array([float(s.get("position", 0)) for s in sorted_stops], dtype=np.float32)
if p <= float(sorted_stops[0].get("position", 0)):
result[i] = _color(sorted_stops[0], "left")
continue
# Pre-compute left/right colors for each stop
left_colors = np.array([_color(s, "left") for s in sorted_stops], dtype=np.float32)
right_colors = np.array([_color(s, "right") for s in sorted_stops], dtype=np.float32)
last = sorted_stops[-1]
if p >= float(last.get("position", 1)):
result[i] = _color(last, "right")
continue
# LEDs before first stop
mask_before = positions <= stop_positions[0]
result[mask_before] = left_colors[0]
for j in range(len(sorted_stops) - 1):
a = sorted_stops[j]
b = sorted_stops[j + 1]
a_pos = float(a.get("position", 0))
b_pos = float(b.get("position", 1))
if a_pos <= p <= b_pos:
span = b_pos - a_pos
t = (p - a_pos) / span if span > 0 else 0.0
result[i] = _color(a, "right") + t * (_color(b, "left") - _color(a, "right"))
break
# LEDs after last stop
mask_after = positions >= stop_positions[-1]
result[mask_after] = right_colors[-1]
# LEDs between stops — vectorized per segment
mask_between = ~mask_before & ~mask_after
if np.any(mask_between):
between_pos = positions[mask_between]
# np.searchsorted finds the right stop index for each LED
idx = np.searchsorted(stop_positions, between_pos, side="right") - 1
idx = np.clip(idx, 0, n_stops - 2)
a_pos = stop_positions[idx]
b_pos = stop_positions[idx + 1]
span = b_pos - a_pos
t = np.where(span > 0, (between_pos - a_pos) / span, 0.0)
a_colors = right_colors[idx] # A's right color
b_colors = left_colors[idx + 1] # B's left color
result[mask_between] = a_colors + t[:, np.newaxis] * (b_colors - a_colors)
return np.clip(result, 0, 255).astype(np.uint8)
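The vectorized gradient path above replaces the per-LED Python loop with `np.searchsorted`: one call finds each LED's enclosing stop pair, and the interpolation is then pure array arithmetic. A minimal standalone sketch of the same index-then-interpolate idea (assuming stops that span the full [0, 1] range and a single color per stop, rather than the left/right color pairs the real code supports):

```python
import numpy as np

def interp_stops(stop_pos: np.ndarray, stop_col: np.ndarray, led_count: int) -> np.ndarray:
    """Piecewise-linear color interpolation across sorted gradient stops."""
    positions = np.linspace(0.0, 1.0, led_count)
    # Index of the stop at or before each LED position
    idx = np.searchsorted(stop_pos, positions, side="right") - 1
    idx = np.clip(idx, 0, len(stop_pos) - 2)
    a, b = stop_pos[idx], stop_pos[idx + 1]
    span = b - a
    t = np.where(span > 0, (positions - a) / span, 0.0)
    out = stop_col[idx] + t[:, None] * (stop_col[idx + 1] - stop_col[idx])
    return np.clip(out, 0, 255).astype(np.uint8)
```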
@@ -646,90 +663,98 @@ class StaticColorStripStream(ColorStripStream):
_buf_a = _buf_b = None
_use_a = True
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
anim = self._animation
if anim and anim.get("enabled"):
speed = float(anim.get("speed", 1.0))
atype = anim.get("type", "breathing")
t = loop_start
n = self._led_count
try:
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
try:
anim = self._animation
if anim and anim.get("enabled"):
speed = float(anim.get("speed", 1.0))
atype = anim.get("type", "breathing")
t = loop_start
n = self._led_count
if n != _pool_n:
_pool_n = n
_buf_a = np.empty((n, 3), dtype=np.uint8)
_buf_b = np.empty((n, 3), dtype=np.uint8)
if n != _pool_n:
_pool_n = n
_buf_a = np.empty((n, 3), dtype=np.uint8)
_buf_b = np.empty((n, 3), dtype=np.uint8)
buf = _buf_a if _use_a else _buf_b
_use_a = not _use_a
colors = None
buf = _buf_a if _use_a else _buf_b
_use_a = not _use_a
colors = None
if atype == "breathing":
factor = 0.5 * (1 + math.sin(2 * math.pi * speed * t * 0.5))
r, g, b = self._source_color
buf[:] = (min(255, int(r * factor)), min(255, int(g * factor)), min(255, int(b * factor)))
colors = buf
if atype == "breathing":
factor = 0.5 * (1 + math.sin(2 * math.pi * speed * t * 0.5))
r, g, b = self._source_color
buf[:] = (min(255, int(r * factor)), min(255, int(g * factor)), min(255, int(b * factor)))
colors = buf
elif atype == "strobe":
# Square wave: on for half the period, off for the other half.
# speed=1.0 → 2 flashes/sec (one full on/off cycle per 0.5s)
if math.sin(2 * math.pi * speed * t * 2.0) >= 0:
buf[:] = self._source_color
else:
buf[:] = 0
colors = buf
elif atype == "strobe":
# Square wave: on for half the period, off for the other half.
# speed=1.0 → 2 flashes/sec (one full on/off cycle per 0.5s)
if math.sin(2 * math.pi * speed * t * 2.0) >= 0:
buf[:] = self._source_color
else:
buf[:] = 0
colors = buf
elif atype == "sparkle":
# Random LEDs flash white while the rest stay the base color
buf[:] = self._source_color
density = min(0.5, 0.1 * speed)
mask = np.random.random(n) < density
buf[mask] = (255, 255, 255)
colors = buf
elif atype == "sparkle":
# Random LEDs flash white while the rest stay the base color
buf[:] = self._source_color
density = min(0.5, 0.1 * speed)
mask = np.random.random(n) < density
buf[mask] = (255, 255, 255)
colors = buf
elif atype == "pulse":
# Sharp attack, slow exponential decay — heartbeat-like
# speed=1.0 → ~1 pulse per second
phase = (speed * t * 1.0) % 1.0
if phase < 0.1:
factor = phase / 0.1
else:
factor = math.exp(-5.0 * (phase - 0.1))
r, g, b = self._source_color
buf[:] = (min(255, int(r * factor)), min(255, int(g * factor)), min(255, int(b * factor)))
colors = buf
elif atype == "pulse":
# Sharp attack, slow exponential decay — heartbeat-like
# speed=1.0 → ~1 pulse per second
phase = (speed * t * 1.0) % 1.0
if phase < 0.1:
factor = phase / 0.1
else:
factor = math.exp(-5.0 * (phase - 0.1))
r, g, b = self._source_color
buf[:] = (min(255, int(r * factor)), min(255, int(g * factor)), min(255, int(b * factor)))
colors = buf
elif atype == "candle":
# Random brightness fluctuations simulating a candle flame
base_factor = 0.75
flicker = 0.25 * math.sin(2 * math.pi * speed * t * 3.7)
flicker += 0.15 * math.sin(2 * math.pi * speed * t * 7.3)
flicker += 0.10 * (np.random.random() - 0.5)
factor = max(0.2, min(1.0, base_factor + flicker))
r, g, b = self._source_color
buf[:] = (min(255, int(r * factor)), min(255, int(g * factor)), min(255, int(b * factor)))
colors = buf
elif atype == "candle":
# Random brightness fluctuations simulating a candle flame
base_factor = 0.75
flicker = 0.25 * math.sin(2 * math.pi * speed * t * 3.7)
flicker += 0.15 * math.sin(2 * math.pi * speed * t * 7.3)
flicker += 0.10 * (np.random.random() - 0.5)
factor = max(0.2, min(1.0, base_factor + flicker))
r, g, b = self._source_color
buf[:] = (min(255, int(r * factor)), min(255, int(g * factor)), min(255, int(b * factor)))
colors = buf
elif atype == "rainbow_fade":
# Shift hue continuously from the base color
r, g, b = self._source_color
h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
# speed=1.0 → one full hue rotation every ~10s
h_shift = (speed * t * 0.1) % 1.0
new_h = (h + h_shift) % 1.0
nr, ng, nb = colorsys.hsv_to_rgb(new_h, max(s, 0.5), max(v, 0.3))
buf[:] = (int(nr * 255), int(ng * 255), int(nb * 255))
colors = buf
elif atype == "rainbow_fade":
# Shift hue continuously from the base color
r, g, b = self._source_color
h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
# speed=1.0 → one full hue rotation every ~10s
h_shift = (speed * t * 0.1) % 1.0
new_h = (h + h_shift) % 1.0
nr, ng, nb = colorsys.hsv_to_rgb(new_h, max(s, 0.5), max(v, 0.3))
buf[:] = (int(nr * 255), int(ng * 255), int(nb * 255))
colors = buf
if colors is not None:
with self._colors_lock:
self._colors = colors
if colors is not None:
with self._colors_lock:
self._colors = colors
except Exception as e:
logger.error(f"StaticColorStripStream animation error: {e}")
elapsed = time.perf_counter() - loop_start
sleep_target = frame_time if anim and anim.get("enabled") else 0.25
time.sleep(max(sleep_target - elapsed, 0.001))
elapsed = time.perf_counter() - loop_start
sleep_target = frame_time if anim and anim.get("enabled") else 0.25
time.sleep(max(sleep_target - elapsed, 0.001))
except Exception as e:
logger.error(f"Fatal StaticColorStripStream loop error: {e}", exc_info=True)
finally:
self._running = False
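The breathing animation above derives its brightness multiplier from a sine wave. Isolated for clarity (same formula as the hunk: output in [0, 1], one full breath every 2/speed seconds):

```python
import math

def breathing_factor(t: float, speed: float = 1.0) -> float:
    """Brightness multiplier in [0, 1]; one full breath every 2/speed seconds."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * speed * t * 0.5))
```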
class ColorCycleColorStripStream(ColorStripStream):
@@ -834,39 +859,47 @@ class ColorCycleColorStripStream(ColorStripStream):
_buf_a = _buf_b = None
_use_a = True
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
color_list = self._color_list
speed = self._cycle_speed
n = self._led_count
num = len(color_list)
if num >= 2:
if n != _pool_n:
_pool_n = n
_buf_a = np.empty((n, 3), dtype=np.uint8)
_buf_b = np.empty((n, 3), dtype=np.uint8)
try:
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
try:
color_list = self._color_list
speed = self._cycle_speed
n = self._led_count
num = len(color_list)
if num >= 2:
if n != _pool_n:
_pool_n = n
_buf_a = np.empty((n, 3), dtype=np.uint8)
_buf_b = np.empty((n, 3), dtype=np.uint8)
buf = _buf_a if _use_a else _buf_b
_use_a = not _use_a
buf = _buf_a if _use_a else _buf_b
_use_a = not _use_a
# 0.05 factor → one full cycle every 20s at speed=1.0
cycle_pos = (speed * loop_start * 0.05) % 1.0
seg = cycle_pos * num
idx = int(seg) % num
t_i = seg - int(seg)
c1 = color_list[idx]
c2 = color_list[(idx + 1) % num]
buf[:] = (
min(255, int(c1[0] + (c2[0] - c1[0]) * t_i)),
min(255, int(c1[1] + (c2[1] - c1[1]) * t_i)),
min(255, int(c1[2] + (c2[2] - c1[2]) * t_i)),
)
with self._colors_lock:
self._colors = buf
elapsed = time.perf_counter() - loop_start
time.sleep(max(frame_time - elapsed, 0.001))
# 0.05 factor → one full cycle every 20s at speed=1.0
cycle_pos = (speed * loop_start * 0.05) % 1.0
seg = cycle_pos * num
idx = int(seg) % num
t_i = seg - int(seg)
c1 = color_list[idx]
c2 = color_list[(idx + 1) % num]
buf[:] = (
min(255, int(c1[0] + (c2[0] - c1[0]) * t_i)),
min(255, int(c1[1] + (c2[1] - c1[1]) * t_i)),
min(255, int(c1[2] + (c2[2] - c1[2]) * t_i)),
)
with self._colors_lock:
self._colors = buf
except Exception as e:
logger.error(f"ColorCycleColorStripStream animation error: {e}")
elapsed = time.perf_counter() - loop_start
time.sleep(max(frame_time - elapsed, 0.001))
except Exception as e:
logger.error(f"Fatal ColorCycleColorStripStream loop error: {e}", exc_info=True)
finally:
self._running = False
class GradientColorStripStream(ColorStripStream):
@@ -986,130 +1019,138 @@ class GradientColorStripStream(ColorStripStream):
_wave_factors = None # float32 scratch for wave sin result
_wave_u16 = None # uint16 scratch for wave int factors
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
anim = self._animation
if anim and anim.get("enabled"):
speed = float(anim.get("speed", 1.0))
atype = anim.get("type", "breathing")
t = loop_start
n = self._led_count
stops = self._stops
colors = None
try:
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
try:
anim = self._animation
if anim and anim.get("enabled"):
speed = float(anim.get("speed", 1.0))
atype = anim.get("type", "breathing")
t = loop_start
n = self._led_count
stops = self._stops
colors = None
# Recompute base gradient only when stops or led_count change
if _cached_base is None or _cached_n != n or _cached_stops is not stops:
_cached_base = _compute_gradient_colors(stops, n)
_cached_n = n
_cached_stops = stops
base = _cached_base
# Recompute base gradient only when stops or led_count change
if _cached_base is None or _cached_n != n or _cached_stops is not stops:
_cached_base = _compute_gradient_colors(stops, n)
_cached_n = n
_cached_stops = stops
base = _cached_base
# Re-allocate pool only when LED count changes
if n != _pool_n:
_pool_n = n
_buf_a = np.empty((n, 3), dtype=np.uint8)
_buf_b = np.empty((n, 3), dtype=np.uint8)
_scratch_u16 = np.empty((n, 3), dtype=np.uint16)
_wave_i = np.arange(n, dtype=np.float32)
_wave_factors = np.empty(n, dtype=np.float32)
_wave_u16 = np.empty(n, dtype=np.uint16)
# Re-allocate pool only when LED count changes
if n != _pool_n:
_pool_n = n
_buf_a = np.empty((n, 3), dtype=np.uint8)
_buf_b = np.empty((n, 3), dtype=np.uint8)
_scratch_u16 = np.empty((n, 3), dtype=np.uint16)
_wave_i = np.arange(n, dtype=np.float32)
_wave_factors = np.empty(n, dtype=np.float32)
_wave_u16 = np.empty(n, dtype=np.uint16)
buf = _buf_a if _use_a else _buf_b
_use_a = not _use_a
buf = _buf_a if _use_a else _buf_b
_use_a = not _use_a
if atype == "breathing":
int_f = max(0, min(256, int(0.5 * (1 + math.sin(2 * math.pi * speed * t * 0.5)) * 256)))
np.copyto(_scratch_u16, base)
_scratch_u16 *= int_f
_scratch_u16 >>= 8
np.copyto(buf, _scratch_u16, casting='unsafe')
colors = buf
if atype == "breathing":
int_f = max(0, min(256, int(0.5 * (1 + math.sin(2 * math.pi * speed * t * 0.5)) * 256)))
np.copyto(_scratch_u16, base)
_scratch_u16 *= int_f
_scratch_u16 >>= 8
np.copyto(buf, _scratch_u16, casting='unsafe')
colors = buf
elif atype == "gradient_shift":
shift = int(speed * t * 10) % max(n, 1)
if shift > 0:
buf[:n - shift] = base[shift:]
buf[n - shift:] = base[:shift]
else:
np.copyto(buf, base)
colors = buf
elif atype == "gradient_shift":
shift = int(speed * t * 10) % max(n, 1)
if shift > 0:
buf[:n - shift] = base[shift:]
buf[n - shift:] = base[:shift]
else:
np.copyto(buf, base)
colors = buf
elif atype == "wave":
if n > 1:
np.sin(
2 * math.pi * _wave_i / n - 2 * math.pi * speed * t * 0.25,
out=_wave_factors,
)
_wave_factors *= 0.5
_wave_factors += 0.5
np.multiply(_wave_factors, 256, out=_wave_factors)
np.clip(_wave_factors, 0, 256, out=_wave_factors)
np.copyto(_wave_u16, _wave_factors, casting='unsafe')
np.copyto(_scratch_u16, base)
_scratch_u16 *= _wave_u16[:, None]
_scratch_u16 >>= 8
np.copyto(buf, _scratch_u16, casting='unsafe')
colors = buf
else:
np.copyto(buf, base)
colors = buf
elif atype == "wave":
if n > 1:
np.sin(
2 * math.pi * _wave_i / n - 2 * math.pi * speed * t * 0.25,
out=_wave_factors,
)
_wave_factors *= 0.5
_wave_factors += 0.5
np.multiply(_wave_factors, 256, out=_wave_factors)
np.clip(_wave_factors, 0, 256, out=_wave_factors)
np.copyto(_wave_u16, _wave_factors, casting='unsafe')
np.copyto(_scratch_u16, base)
_scratch_u16 *= _wave_u16[:, None]
_scratch_u16 >>= 8
np.copyto(buf, _scratch_u16, casting='unsafe')
colors = buf
else:
np.copyto(buf, base)
colors = buf
elif atype == "strobe":
if math.sin(2 * math.pi * speed * t * 2.0) >= 0:
np.copyto(buf, base)
else:
buf[:] = 0
colors = buf
elif atype == "strobe":
if math.sin(2 * math.pi * speed * t * 2.0) >= 0:
np.copyto(buf, base)
else:
buf[:] = 0
colors = buf
elif atype == "sparkle":
np.copyto(buf, base)
density = min(0.5, 0.1 * speed)
mask = np.random.random(n) < density
buf[mask] = (255, 255, 255)
colors = buf
elif atype == "sparkle":
np.copyto(buf, base)
density = min(0.5, 0.1 * speed)
mask = np.random.random(n) < density
buf[mask] = (255, 255, 255)
colors = buf
elif atype == "pulse":
phase = (speed * t * 1.0) % 1.0
if phase < 0.1:
factor = phase / 0.1
else:
factor = math.exp(-5.0 * (phase - 0.1))
int_f = max(0, min(256, int(factor * 256)))
np.copyto(_scratch_u16, base)
_scratch_u16 *= int_f
_scratch_u16 >>= 8
np.copyto(buf, _scratch_u16, casting='unsafe')
colors = buf
elif atype == "pulse":
phase = (speed * t * 1.0) % 1.0
if phase < 0.1:
factor = phase / 0.1
else:
factor = math.exp(-5.0 * (phase - 0.1))
int_f = max(0, min(256, int(factor * 256)))
np.copyto(_scratch_u16, base)
_scratch_u16 *= int_f
_scratch_u16 >>= 8
np.copyto(buf, _scratch_u16, casting='unsafe')
colors = buf
elif atype == "candle":
base_factor = 0.75
flicker = 0.25 * math.sin(2 * math.pi * speed * t * 3.7)
flicker += 0.15 * math.sin(2 * math.pi * speed * t * 7.3)
flicker += 0.10 * (np.random.random() - 0.5)
factor = max(0.2, min(1.0, base_factor + flicker))
int_f = int(factor * 256)
np.copyto(_scratch_u16, base)
_scratch_u16 *= int_f
_scratch_u16 >>= 8
np.copyto(buf, _scratch_u16, casting='unsafe')
colors = buf
elif atype == "candle":
base_factor = 0.75
flicker = 0.25 * math.sin(2 * math.pi * speed * t * 3.7)
flicker += 0.15 * math.sin(2 * math.pi * speed * t * 7.3)
flicker += 0.10 * (np.random.random() - 0.5)
factor = max(0.2, min(1.0, base_factor + flicker))
int_f = int(factor * 256)
np.copyto(_scratch_u16, base)
_scratch_u16 *= int_f
_scratch_u16 >>= 8
np.copyto(buf, _scratch_u16, casting='unsafe')
colors = buf
elif atype == "rainbow_fade":
h_shift = (speed * t * 0.1) % 1.0
for i in range(n):
r, g, b = base[i]
h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
new_h = (h + h_shift) % 1.0
nr, ng, nb = colorsys.hsv_to_rgb(new_h, max(s, 0.5), max(v, 0.3))
buf[i] = (int(nr * 255), int(ng * 255), int(nb * 255))
colors = buf
elif atype == "rainbow_fade":
h_shift = (speed * t * 0.1) % 1.0
for i in range(n):
r, g, b = base[i]
h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
new_h = (h + h_shift) % 1.0
nr, ng, nb = colorsys.hsv_to_rgb(new_h, max(s, 0.5), max(v, 0.3))
buf[i] = (int(nr * 255), int(ng * 255), int(nb * 255))
colors = buf
if colors is not None:
with self._colors_lock:
self._colors = colors
if colors is not None:
with self._colors_lock:
self._colors = colors
except Exception as e:
logger.error(f"GradientColorStripStream animation error: {e}")
elapsed = time.perf_counter() - loop_start
sleep_target = frame_time if anim and anim.get("enabled") else 0.25
time.sleep(max(sleep_target - elapsed, 0.001))
elapsed = time.perf_counter() - loop_start
sleep_target = frame_time if anim and anim.get("enabled") else 0.25
time.sleep(max(sleep_target - elapsed, 0.001))
except Exception as e:
logger.error(f"Fatal GradientColorStripStream loop error: {e}", exc_info=True)
finally:
self._running = False
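The pulse envelope used by both the static and gradient streams (a sharp linear attack over the first 10% of the phase, then exponential decay, giving a heartbeat-like shape) can be isolated as:

```python
import math

def pulse_factor(t: float, speed: float = 1.0) -> float:
    """Heartbeat envelope: linear attack over the first 10% of the phase,
    then exponential decay. One pulse per 1/speed seconds."""
    phase = (speed * t) % 1.0
    if phase < 0.1:
        return phase / 0.1
    return math.exp(-5.0 * (phase - 0.1))
```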


@@ -253,61 +253,66 @@ class CompositeColorStripStream(ColorStripStream):
# ── Processing loop ─────────────────────────────────────────
def _processing_loop(self) -> None:
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
try:
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
try:
target_n = self._led_count
if target_n <= 0:
time.sleep(frame_time)
continue
self._ensure_pool(target_n)
result_buf = self._result_a if self._use_a else self._result_b
self._use_a = not self._use_a
has_result = False
for i, layer in enumerate(self._layers):
if not layer.get("enabled", True):
continue
if i not in self._sub_streams:
try:
target_n = self._led_count
if target_n <= 0:
time.sleep(frame_time)
continue
_src_id, _consumer_id, stream = self._sub_streams[i]
colors = stream.get_latest_colors()
if colors is None:
continue
self._ensure_pool(target_n)
# Resize to target LED count if needed
if len(colors) != target_n:
colors = self._resize_to_target(colors, target_n)
result_buf = self._result_a if self._use_a else self._result_b
self._use_a = not self._use_a
has_result = False
opacity = layer.get("opacity", 1.0)
blend_mode = layer.get("blend_mode", _BLEND_NORMAL)
alpha = int(opacity * 256)
alpha = max(0, min(256, alpha))
for i, layer in enumerate(self._layers):
if not layer.get("enabled", True):
continue
if i not in self._sub_streams:
continue
if not has_result:
# First layer: copy directly (or blend with black if opacity < 1)
if alpha >= 256 and blend_mode == _BLEND_NORMAL:
result_buf[:] = colors
_src_id, _consumer_id, stream = self._sub_streams[i]
colors = stream.get_latest_colors()
if colors is None:
continue
# Resize to target LED count if needed
if len(colors) != target_n:
colors = self._resize_to_target(colors, target_n)
opacity = layer.get("opacity", 1.0)
blend_mode = layer.get("blend_mode", _BLEND_NORMAL)
alpha = int(opacity * 256)
alpha = max(0, min(256, alpha))
if not has_result:
# First layer: copy directly (or blend with black if opacity < 1)
if alpha >= 256 and blend_mode == _BLEND_NORMAL:
result_buf[:] = colors
else:
result_buf[:] = 0
blend_fn = getattr(self, self._BLEND_DISPATCH.get(blend_mode, "_blend_normal"))
blend_fn(result_buf, colors, alpha, result_buf)
has_result = True
else:
result_buf[:] = 0
blend_fn = getattr(self, self._BLEND_DISPATCH.get(blend_mode, "_blend_normal"))
blend_fn(result_buf, colors, alpha, result_buf)
has_result = True
else:
blend_fn = getattr(self, self._BLEND_DISPATCH.get(blend_mode, "_blend_normal"))
blend_fn(result_buf, colors, alpha, result_buf)
if has_result:
with self._colors_lock:
self._latest_colors = result_buf
if has_result:
with self._colors_lock:
self._latest_colors = result_buf
except Exception as e:
logger.error(f"CompositeColorStripStream processing error: {e}", exc_info=True)
except Exception as e:
logger.error(f"CompositeColorStripStream processing error: {e}", exc_info=True)
elapsed = time.perf_counter() - loop_start
time.sleep(max(frame_time - elapsed, 0.001))
elapsed = time.perf_counter() - loop_start
time.sleep(max(frame_time - elapsed, 0.001))
except Exception as e:
logger.error(f"Fatal CompositeColorStripStream loop error: {e}", exc_info=True)
finally:
self._running = False
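The `_blend_normal` and related blend functions dispatched above are outside this hunk. As an assumption, an integer-alpha normal blend matching the `alpha = int(opacity * 256)` convention (where 256 means fully the source layer) would look like:

```python
import numpy as np

def blend_normal(dst: np.ndarray, src: np.ndarray, alpha: int) -> np.ndarray:
    """Integer normal blend of uint8 color arrays; alpha in [0, 256]."""
    a = np.int32(alpha)
    out = (dst.astype(np.int32) * (256 - a) + src.astype(np.int32) * a) >> 8
    return out.astype(np.uint8)
```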


@@ -284,38 +284,45 @@ class EffectColorStripStream(ColorStripStream):
"aurora": self._render_aurora,
}
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
try:
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
try:
n = self._led_count
if n != _pool_n:
_pool_n = n
_buf_a = np.empty((n, 3), dtype=np.uint8)
_buf_b = np.empty((n, 3), dtype=np.uint8)
# Scratch arrays for render methods
self._s_f32_a = np.empty(n, dtype=np.float32)
self._s_f32_b = np.empty(n, dtype=np.float32)
self._s_f32_c = np.empty(n, dtype=np.float32)
self._s_i32 = np.empty(n, dtype=np.int32)
self._s_f32_rgb = np.empty((n, 3), dtype=np.float32)
self._s_arange = np.arange(n, dtype=np.float32)
self._s_layer1 = np.empty(n, dtype=np.float32)
self._s_layer2 = np.empty(n, dtype=np.float32)
self._plasma_key = (0, 0.0)
n = self._led_count
if n != _pool_n:
_pool_n = n
_buf_a = np.empty((n, 3), dtype=np.uint8)
_buf_b = np.empty((n, 3), dtype=np.uint8)
# Scratch arrays for render methods
self._s_f32_a = np.empty(n, dtype=np.float32)
self._s_f32_b = np.empty(n, dtype=np.float32)
self._s_f32_c = np.empty(n, dtype=np.float32)
self._s_i32 = np.empty(n, dtype=np.int32)
self._s_f32_rgb = np.empty((n, 3), dtype=np.float32)
self._s_arange = np.arange(n, dtype=np.float32)
self._s_layer1 = np.empty(n, dtype=np.float32)
self._s_layer2 = np.empty(n, dtype=np.float32)
self._plasma_key = (0, 0.0)
buf = _buf_a if _use_a else _buf_b
_use_a = not _use_a
buf = _buf_a if _use_a else _buf_b
_use_a = not _use_a
render_fn = renderers.get(self._effect_type, self._render_fire)
render_fn(buf, n, loop_start)
render_fn = renderers.get(self._effect_type, self._render_fire)
render_fn(buf, n, loop_start)
with self._colors_lock:
self._colors = buf
except Exception as e:
logger.error(f"EffectColorStripStream render error: {e}")
with self._colors_lock:
self._colors = buf
elapsed = time.perf_counter() - loop_start
time.sleep(max(frame_time - elapsed, 0.001))
elapsed = time.perf_counter() - loop_start
time.sleep(max(frame_time - elapsed, 0.001))
except Exception as e:
logger.error(f"Fatal EffectColorStripStream loop error: {e}", exc_info=True)
finally:
self._running = False
# ── Fire ─────────────────────────────────────────────────────────


@@ -129,25 +129,41 @@ class ScreenCaptureLiveStream(LiveStream):
def _capture_loop(self) -> None:
frame_time = 1.0 / self._fps if self._fps > 0 else 1.0
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
try:
frame = self._capture_stream.capture_frame()
if frame is not None:
with self._frame_lock:
self._latest_frame = frame
else:
# Small sleep when no frame available to avoid CPU spinning
time.sleep(0.001)
except Exception as e:
logger.error(f"Capture error (display={self._capture_stream.display_index}): {e}")
consecutive_errors = 0
try:
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
try:
frame = self._capture_stream.capture_frame()
if frame is not None:
with self._frame_lock:
self._latest_frame = frame
consecutive_errors = 0
else:
# Small sleep when no frame available to avoid CPU spinning
time.sleep(0.001)
except Exception as e:
consecutive_errors += 1
logger.error(f"Capture error (display={self._capture_stream.display_index}): {e}")
# Backoff on repeated errors to avoid CPU spinning
if consecutive_errors > 5:
backoff = min(1.0, 0.1 * (consecutive_errors - 5))
time.sleep(backoff)
continue
# Throttle to target FPS
elapsed = time.perf_counter() - loop_start
remaining = frame_time - elapsed
if remaining > 0:
time.sleep(remaining)
# Throttle to target FPS
elapsed = time.perf_counter() - loop_start
remaining = frame_time - elapsed
if remaining > 0:
time.sleep(remaining)
except Exception as e:
logger.error(
f"Fatal capture loop error (display={self._capture_stream.display_index}): {e}",
exc_info=True,
)
finally:
self._running = False
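The backoff added above reduces CPU spin on repeated capture failures. Note that the commit message describes it as exponential, but the growth here is actually linear up to a 1-second cap. The same policy, isolated:

```python
def capture_backoff(consecutive_errors: int) -> float:
    """Sleep duration after repeated capture failures: no delay for the
    first 5 errors, then linear growth capped at 1 second."""
    if consecutive_errors <= 5:
        return 0.0
    return min(1.0, 0.1 * (consecutive_errors - 5))
```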
class ProcessedLiveStream(LiveStream):
@@ -226,79 +242,84 @@ class ProcessedLiveStream(LiveStream):
fps = self.target_fps
frame_time = 1.0 / fps if fps > 0 else 1.0
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
try:
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
try:
source_frame = self._source.get_latest_frame()
if source_frame is None or source_frame is cached_source_frame:
# Idle tick — run filter chain when any filter requests idle processing
if self._has_idle_filters and cached_source_frame is not None:
src = cached_source_frame.image
h, w, c = src.shape
if _idle_src_buf is None or _idle_src_buf.shape != (h, w, c):
_idle_src_buf = np.empty((h, w, c), dtype=np.uint8)
np.copyto(_idle_src_buf, src)
idle_image = _idle_src_buf
source_frame = self._source.get_latest_frame()
if source_frame is None or source_frame is cached_source_frame:
# Idle tick — run filter chain when any filter requests idle processing
if self._has_idle_filters and cached_source_frame is not None:
src = cached_source_frame.image
for f in self._filters:
result = f.process_image(idle_image, self._image_pool)
if result is not None:
if idle_image is not _idle_src_buf:
self._image_pool.release(idle_image)
idle_image = result
# Only publish a new frame when the filter chain produced actual
# interpolated output (idle_image advanced past the input buffer).
if idle_image is not _idle_src_buf:
processed = ScreenCapture(
image=idle_image,
width=idle_image.shape[1],
height=idle_image.shape[0],
display_index=cached_source_frame.display_index,
)
with self._frame_lock:
self._latest_frame = processed
elapsed = time.perf_counter() - loop_start
remaining = frame_time - elapsed
time.sleep(max(remaining, 0.001))
continue
cached_source_frame = source_frame
# Reuse ring buffer slot instead of allocating a new copy each frame
src = source_frame.image
h, w, c = src.shape
if _idle_src_buf is None or _idle_src_buf.shape != (h, w, c):
_idle_src_buf = np.empty((h, w, c), dtype=np.uint8)
np.copyto(_idle_src_buf, src)
idle_image = _idle_src_buf
buf = _ring[_ring_idx]
if buf is None or buf.shape != (h, w, c):
buf = np.empty((h, w, c), dtype=np.uint8)
_ring[_ring_idx] = buf
_ring_idx = (_ring_idx + 1) % 3
np.copyto(buf, src)
image = buf
for f in self._filters:
result = f.process_image(idle_image, self._image_pool)
result = f.process_image(image, self._image_pool)
if result is not None:
if idle_image is not _idle_src_buf:
self._image_pool.release(idle_image)
idle_image = result
# Release intermediate filter output back to pool
# (don't release the ring buffer itself)
if image is not buf:
self._image_pool.release(image)
image = result
# Only publish a new frame when the filter chain produced actual
# interpolated output (idle_image advanced past the input buffer).
# If every filter passed through, idle_image is still _idle_src_buf —
# leave _latest_frame unchanged so consumers that rely on object
# identity for deduplication correctly detect no new content.
if idle_image is not _idle_src_buf:
processed = ScreenCapture(
image=idle_image,
width=idle_image.shape[1],
height=idle_image.shape[0],
display_index=cached_source_frame.display_index,
)
with self._frame_lock:
self._latest_frame = processed
elapsed = time.perf_counter() - loop_start
remaining = frame_time - elapsed
time.sleep(max(remaining, 0.001))
continue
cached_source_frame = source_frame
# Reuse ring buffer slot instead of allocating a new copy each frame
src = source_frame.image
h, w, c = src.shape
buf = _ring[_ring_idx]
if buf is None or buf.shape != (h, w, c):
buf = np.empty((h, w, c), dtype=np.uint8)
_ring[_ring_idx] = buf
_ring_idx = (_ring_idx + 1) % 3
np.copyto(buf, src)
image = buf
for f in self._filters:
result = f.process_image(image, self._image_pool)
if result is not None:
# Release intermediate filter output back to pool
# (don't release the ring buffer itself)
if image is not buf:
self._image_pool.release(image)
image = result
processed = ScreenCapture(
image=image,
width=image.shape[1],
height=image.shape[0],
display_index=source_frame.display_index,
)
with self._frame_lock:
self._latest_frame = processed
processed = ScreenCapture(
image=image,
width=image.shape[1],
height=image.shape[0],
display_index=source_frame.display_index,
)
with self._frame_lock:
self._latest_frame = processed
except Exception as e:
logger.error(f"Filter processing error: {e}")
time.sleep(0.01)
except Exception as e:
logger.error(f"Fatal processing loop error: {e}", exc_info=True)
finally:
self._running = False
class StaticImageLiveStream(LiveStream):


@@ -152,61 +152,66 @@ class MappedColorStripStream(ColorStripStream):
# ── Processing loop ─────────────────────────────────────────
def _processing_loop(self) -> None:
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
try:
while self._running:
loop_start = time.perf_counter()
frame_time = 1.0 / self._fps
try:
target_n = self._led_count
if target_n <= 0:
time.sleep(frame_time)
continue
result = np.zeros((target_n, 3), dtype=np.uint8)
for i, zone in enumerate(self._zones):
if i not in self._sub_streams:
try:
target_n = self._led_count
if target_n <= 0:
time.sleep(frame_time)
continue
_src_id, _consumer_id, stream = self._sub_streams[i]
colors = stream.get_latest_colors()
if colors is None:
continue
result = np.zeros((target_n, 3), dtype=np.uint8)
start = zone.get("start", 0)
end = zone.get("end", 0)
if end <= 0:
end = target_n
start = max(0, min(start, target_n))
end = max(start, min(end, target_n))
zone_len = end - start
for i, zone in enumerate(self._zones):
if i not in self._sub_streams:
continue
if zone_len <= 0:
continue
_src_id, _consumer_id, stream = self._sub_streams[i]
colors = stream.get_latest_colors()
if colors is None:
continue
# Resize sub-stream output to zone length if needed
if len(colors) != zone_len:
src_x = np.linspace(0, 1, len(colors))
dst_x = np.linspace(0, 1, zone_len)
resized = np.empty((zone_len, 3), dtype=np.uint8)
for ch in range(3):
np.copyto(
resized[:, ch],
np.interp(dst_x, src_x, colors[:, ch]),
casting="unsafe",
)
colors = resized
start = zone.get("start", 0)
end = zone.get("end", 0)
if end <= 0:
end = target_n
start = max(0, min(start, target_n))
end = max(start, min(end, target_n))
zone_len = end - start
if zone.get("reverse", False):
colors = colors[::-1]
if zone_len <= 0:
continue
result[start:end] = colors
# Resize sub-stream output to zone length if needed
if len(colors) != zone_len:
src_x = np.linspace(0, 1, len(colors))
dst_x = np.linspace(0, 1, zone_len)
resized = np.empty((zone_len, 3), dtype=np.uint8)
for ch in range(3):
np.copyto(
resized[:, ch],
np.interp(dst_x, src_x, colors[:, ch]),
casting="unsafe",
)
colors = resized
with self._colors_lock:
self._latest_colors = result
if zone.get("reverse", False):
colors = colors[::-1]
except Exception as e:
logger.error(f"MappedColorStripStream processing error: {e}", exc_info=True)
result[start:end] = colors
elapsed = time.perf_counter() - loop_start
time.sleep(max(frame_time - elapsed, 0.001))
with self._colors_lock:
self._latest_colors = result
except Exception as e:
logger.error(f"MappedColorStripStream processing error: {e}", exc_info=True)
elapsed = time.perf_counter() - loop_start
time.sleep(max(frame_time - elapsed, 0.001))
except Exception as e:
logger.error(f"Fatal MappedColorStripStream loop error: {e}", exc_info=True)
finally:
self._running = False
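The zone resize above resamples an (N, 3) color array channel by channel, because `np.interp` only handles 1-D data. A self-contained version of the same helper:

```python
import numpy as np

def resize_colors(colors: np.ndarray, target_n: int) -> np.ndarray:
    """Linearly resample an (N, 3) uint8 color array to (target_n, 3)."""
    if len(colors) == target_n:
        return colors
    src_x = np.linspace(0.0, 1.0, len(colors))
    dst_x = np.linspace(0.0, 1.0, target_n)
    out = np.empty((target_n, 3), dtype=np.uint8)
    for ch in range(3):  # np.interp is 1-D, so resample each channel separately
        np.copyto(out[:, ch], np.interp(dst_x, src_x, colors[:, ch]), casting="unsafe")
    return out
```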


@@ -131,10 +131,13 @@ import {
showCSSCalibration, toggleCalibrationOverlay,
} from './features/calibration.js';
// Layer 6: tabs, navigation, command palette
// Layer 6: tabs, navigation, command palette, settings
import { switchTab, initTabs, startAutoRefresh, handlePopState } from './features/tabs.js';
import { navigateToCard } from './core/navigation.js';
import { openCommandPalette, closeCommandPalette, initCommandPalette } from './core/command-palette.js';
+import {
+openSettingsModal, closeSettingsModal, downloadBackup, handleRestoreFileSelected,
+} from './features/settings.js';
// ─── Register all HTML onclick / onchange / onfocus globals ───
@@ -384,6 +387,12 @@ Object.assign(window, {
navigateToCard,
openCommandPalette,
closeCommandPalette,
+// settings (backup / restore)
+openSettingsModal,
+closeSettingsModal,
+downloadBackup,
+handleRestoreFileSelected,
});
// ─── Global keyboard shortcuts ───


@@ -36,7 +36,7 @@ export async function fetchWithAuth(url, options = {}) {
for (let attempt = 0; attempt < maxAttempts; attempt++) {
const controller = new AbortController();
if (fetchOpts.signal) {
-fetchOpts.signal.addEventListener('abort', () => controller.abort());
+fetchOpts.signal.addEventListener('abort', () => controller.abort(), { once: true });
}
const timer = setTimeout(() => controller.abort(), timeout);
try {


@@ -56,17 +56,26 @@ export function setupBackdropClose(modal, closeFn) {
modal._backdropCloseSetup = true;
}
+let _lockCount = 0;
+let _savedScrollY = 0;
export function lockBody() {
-const scrollY = window.scrollY;
-document.body.style.top = `-${scrollY}px`;
-document.body.classList.add('modal-open');
+if (_lockCount === 0) {
+_savedScrollY = window.scrollY;
+document.body.style.top = `-${_savedScrollY}px`;
+document.body.classList.add('modal-open');
+}
+_lockCount++;
}
export function unlockBody() {
-const scrollY = parseInt(document.body.style.top || '0', 10) * -1;
-document.body.classList.remove('modal-open');
-document.body.style.top = '';
-window.scrollTo(0, scrollY);
+if (_lockCount <= 0) return;
+_lockCount--;
+if (_lockCount === 0) {
+document.body.classList.remove('modal-open');
+document.body.style.top = '';
+window.scrollTo(0, _savedScrollY);
+}
}
export function openLightbox(imageSrc, statsHtml) {

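The `lockBody`/`unlockBody` change above swaps a naive toggle for a re-entrancy counter: only the 0→1 transition saves the scroll position and applies the lock, and only the 1→0 transition restores it, so nested modals no longer clobber each other's state. The same pattern in a minimal, language-agnostic Python sketch (the class and names are illustrative):

```python
class ScrollLock:
    """Counted lock: only the outermost acquire/release pair touches state."""
    def __init__(self):
        self.count = 0
        self.locked = False
        self.saved_scroll = 0

    def acquire(self, scroll_y):
        if self.count == 0:      # 0 -> 1: save position and lock
            self.saved_scroll = scroll_y
            self.locked = True
        self.count += 1

    def release(self):
        if self.count <= 0:      # unbalanced release: ignore
            return None
        self.count -= 1
        if self.count == 0:      # 1 -> 0: unlock and report restore position
            self.locked = False
            return self.saved_scroll
        return None

lock = ScrollLock()
lock.acquire(120)                 # first modal opens
lock.acquire(120)                 # nested modal opens; state untouched
assert lock.locked and lock.release() is None   # inner close keeps the lock
assert lock.release() == 120 and not lock.locked
```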

@@ -0,0 +1,137 @@
/**
* Settings — backup / restore configuration.
*/
import { apiKey } from '../core/state.js';
import { API_BASE, fetchWithAuth } from '../core/api.js';
import { Modal } from '../core/modal.js';
import { showToast, showConfirm } from '../core/ui.js';
import { t } from '../core/i18n.js';
// Simple modal (no form / no dirty check needed)
const settingsModal = new Modal('settings-modal');
export function openSettingsModal() {
document.getElementById('settings-error').style.display = 'none';
settingsModal.open();
}
export function closeSettingsModal() {
settingsModal.forceClose();
}
// ─── Backup ────────────────────────────────────────────────
export async function downloadBackup() {
try {
const resp = await fetchWithAuth('/system/backup', { timeout: 30000 });
if (!resp.ok) {
const err = await resp.json().catch(() => ({}));
throw new Error(err.detail || `HTTP ${resp.status}`);
}
const blob = await resp.blob();
const disposition = resp.headers.get('Content-Disposition') || '';
const match = disposition.match(/filename="(.+?)"/);
const filename = match ? match[1] : 'ledgrab-backup.json';
const a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = filename;
document.body.appendChild(a);
a.click();
a.remove();
URL.revokeObjectURL(a.href);
showToast(t('settings.backup.success'), 'success');
} catch (err) {
console.error('Backup download failed:', err);
showToast(t('settings.backup.error') + ': ' + err.message, 'error');
}
}
// ─── Restore ───────────────────────────────────────────────
export async function handleRestoreFileSelected(input) {
const file = input.files[0];
input.value = '';
if (!file) return;
const confirmed = await showConfirm(t('settings.restore.confirm'));
if (!confirmed) return;
try {
const formData = new FormData();
formData.append('file', file);
const resp = await fetch(`${API_BASE}/system/restore`, {
method: 'POST',
headers: { 'Authorization': `Bearer ${apiKey}` },
body: formData,
});
if (!resp.ok) {
const err = await resp.json().catch(() => ({}));
throw new Error(err.detail || `HTTP ${resp.status}`);
}
const data = await resp.json();
showToast(data.message || t('settings.restore.success'), 'success');
settingsModal.forceClose();
if (data.restart_scheduled) {
showRestartOverlay();
}
} catch (err) {
console.error('Restore failed:', err);
showToast(t('settings.restore.error') + ': ' + err.message, 'error');
}
}
// ─── Restart overlay ───────────────────────────────────────
function showRestartOverlay() {
const overlay = document.createElement('div');
overlay.id = 'restart-overlay';
overlay.style.cssText =
'position:fixed;inset:0;z-index:100000;display:flex;flex-direction:column;' +
'align-items:center;justify-content:center;background:rgba(0,0,0,0.85);color:#fff;font-size:1.2rem;';
overlay.innerHTML =
'<div class="spinner" style="width:48px;height:48px;border:4px solid rgba(255,255,255,0.3);' +
'border-top-color:#fff;border-radius:50%;animation:spin 0.8s linear infinite;margin-bottom:1rem;"></div>' +
`<div id="restart-msg">${t('settings.restore.restarting')}</div>`;
// Add spinner animation if not present
if (!document.getElementById('restart-spinner-style')) {
const style = document.createElement('style');
style.id = 'restart-spinner-style';
style.textContent = '@keyframes spin{to{transform:rotate(360deg)}}';
document.head.appendChild(style);
}
document.body.appendChild(overlay);
pollHealth();
}
function pollHealth() {
const start = Date.now();
const maxWait = 30000;
const interval = 1500;
const check = async () => {
if (Date.now() - start > maxWait) {
const msg = document.getElementById('restart-msg');
if (msg) msg.textContent = t('settings.restore.restart_timeout');
return;
}
try {
const resp = await fetch('/health', { signal: AbortSignal.timeout(3000) });
if (resp.ok) {
window.location.reload();
return;
}
} catch { /* server still down */ }
setTimeout(check, interval);
};
// Wait a moment before first check to let the server shut down
setTimeout(check, 2000);
}


@@ -1102,6 +1102,7 @@ function connectLedPreviewWS(targetId) {
function disconnectLedPreviewWS(targetId) {
const ws = ledPreviewWebSockets[targetId];
if (ws) {
+ws.onclose = null;
ws.close();
delete ledPreviewWebSockets[targetId];
}


@@ -927,5 +927,21 @@
"search.group.pp_templates": "Post-Processing Templates",
"search.group.pattern_templates": "Pattern Templates",
"search.group.audio": "Audio Sources",
-"search.group.value": "Value Sources"
+"search.group.value": "Value Sources",
+"settings.title": "Settings",
+"settings.backup.label": "Backup Configuration",
+"settings.backup.hint": "Download all configuration (devices, targets, streams, templates, profiles) as a single JSON file.",
+"settings.backup.button": "Download Backup",
+"settings.backup.success": "Backup downloaded successfully",
+"settings.backup.error": "Backup download failed",
+"settings.restore.label": "Restore Configuration",
+"settings.restore.hint": "Upload a previously downloaded backup file to replace all configuration. The server will restart automatically.",
+"settings.restore.button": "Restore from Backup",
+"settings.restore.confirm": "This will replace ALL configuration and restart the server. Are you sure?",
+"settings.restore.success": "Configuration restored",
+"settings.restore.error": "Restore failed",
+"settings.restore.restarting": "Server is restarting...",
+"settings.restore.restart_timeout": "Server did not respond. Please refresh the page manually.",
+"settings.button.close": "Close"
}


@@ -927,5 +927,21 @@
"search.group.pp_templates": "Шаблоны постобработки",
"search.group.pattern_templates": "Шаблоны паттернов",
"search.group.audio": "Аудиоисточники",
-"search.group.value": "Источники значений"
+"search.group.value": "Источники значений",
+"settings.title": "Настройки",
+"settings.backup.label": "Резервное копирование",
+"settings.backup.hint": "Скачать всю конфигурацию (устройства, цели, потоки, шаблоны, профили) в виде одного JSON-файла.",
+"settings.backup.button": "Скачать резервную копию",
+"settings.backup.success": "Резервная копия скачана",
+"settings.backup.error": "Ошибка скачивания резервной копии",
+"settings.restore.label": "Восстановление конфигурации",
+"settings.restore.hint": "Загрузите ранее сохранённый файл резервной копии для замены всей конфигурации. Сервер перезапустится автоматически.",
+"settings.restore.button": "Восстановить из копии",
+"settings.restore.confirm": "Это заменит ВСЮ конфигурацию и перезапустит сервер. Вы уверены?",
+"settings.restore.success": "Конфигурация восстановлена",
+"settings.restore.error": "Ошибка восстановления",
+"settings.restore.restarting": "Сервер перезапускается...",
+"settings.restore.restart_timeout": "Сервер не отвечает. Обновите страницу вручную.",
+"settings.button.close": "Закрыть"
}


@@ -927,5 +927,21 @@
"search.group.pp_templates": "后处理模板",
"search.group.pattern_templates": "图案模板",
"search.group.audio": "音频源",
-"search.group.value": "值源"
+"search.group.value": "值源",
+"settings.title": "设置",
+"settings.backup.label": "备份配置",
+"settings.backup.hint": "将所有配置(设备、目标、流、模板、配置文件)下载为单个 JSON 文件。",
+"settings.backup.button": "下载备份",
+"settings.backup.success": "备份下载成功",
+"settings.backup.error": "备份下载失败",
+"settings.restore.label": "恢复配置",
+"settings.restore.hint": "上传之前下载的备份文件以替换所有配置。服务器将自动重启。",
+"settings.restore.button": "从备份恢复",
+"settings.restore.confirm": "这将替换所有配置并重启服务器。确定继续吗?",
+"settings.restore.success": "配置已恢复",
+"settings.restore.error": "恢复失败",
+"settings.restore.restarting": "服务器正在重启...",
+"settings.restore.restart_timeout": "服务器未响应。请手动刷新页面。",
+"settings.button.close": "关闭"
}


@@ -11,7 +11,7 @@ from wled_controller.storage.audio_source import (
MonoAudioSource,
MultichannelAudioSource,
)
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
logger = get_logger(__name__)
@@ -57,21 +57,14 @@ class AudioSourceStore:
def _save(self) -> None:
try:
self.file_path.parent.mkdir(parents=True, exist_ok=True)
-sources_dict = {
-sid: source.to_dict()
-for sid, source in self._sources.items()
-}
data = {
"version": "1.0.0",
-"audio_sources": sources_dict,
+"audio_sources": {
+sid: source.to_dict()
+for sid, source in self._sources.items()
+},
}
-with open(self.file_path, "w", encoding="utf-8") as f:
-json.dump(data, f, indent=2, ensure_ascii=False)
+atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save audio sources to {self.file_path}: {e}")
raise


@@ -8,7 +8,7 @@ from typing import Dict, List, Optional
from wled_controller.core.audio.factory import AudioEngineRegistry
from wled_controller.storage.audio_template import AudioCaptureTemplate
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
logger = get_logger(__name__)
@@ -93,21 +93,14 @@ class AudioTemplateStore:
def _save(self) -> None:
"""Save all templates to file."""
try:
self.file_path.parent.mkdir(parents=True, exist_ok=True)
-templates_dict = {
-template_id: template.to_dict()
-for template_id, template in self._templates.items()
-}
data = {
"version": "1.0.0",
-"templates": templates_dict,
+"templates": {
+template_id: template.to_dict()
+for template_id, template in self._templates.items()
+},
}
-with open(self.file_path, "w", encoding="utf-8") as f:
-json.dump(data, f, indent=2, ensure_ascii=False)
+atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save audio templates to {self.file_path}: {e}")
raise
@@ -168,6 +161,9 @@ class AudioTemplateStore:
template = self._templates[template_id]
if name is not None:
+for tid, t in self._templates.items():
+if tid != template_id and t.name == name:
+raise ValueError(f"Audio template with name '{name}' already exists")
template.name = name
if engine_type is not None:
template.engine_type = engine_type


@@ -19,7 +19,7 @@ from wled_controller.storage.color_strip_source import (
PictureColorStripSource,
StaticColorStripSource,
)
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
logger = get_logger(__name__)
@@ -62,21 +62,14 @@ class ColorStripStore:
def _save(self) -> None:
try:
self.file_path.parent.mkdir(parents=True, exist_ok=True)
-sources_dict = {
-sid: source.to_dict()
-for sid, source in self._sources.items()
-}
data = {
"version": "1.0.0",
-"color_strip_sources": sources_dict,
+"color_strip_sources": {
+sid: source.to_dict()
+for sid, source in self._sources.items()
+},
}
-with open(self.file_path, "w", encoding="utf-8") as f:
-json.dump(data, f, indent=2, ensure_ascii=False)
+atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save color strip sources to {self.file_path}: {e}")
raise


@@ -8,7 +8,7 @@ from typing import Dict, List, Optional
from wled_controller.storage.key_colors_picture_target import KeyColorRectangle
from wled_controller.storage.pattern_template import PatternTemplate
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
logger = get_logger(__name__)
@@ -88,21 +88,14 @@ class PatternTemplateStore:
def _save(self) -> None:
"""Save all templates to file."""
try:
self.file_path.parent.mkdir(parents=True, exist_ok=True)
-templates_dict = {
-template_id: template.to_dict()
-for template_id, template in self._templates.items()
-}
data = {
"version": "1.0.0",
-"pattern_templates": templates_dict,
+"pattern_templates": {
+template_id: template.to_dict()
+for template_id, template in self._templates.items()
+},
}
-with open(self.file_path, "w", encoding="utf-8") as f:
-json.dump(data, f, indent=2, ensure_ascii=False)
+atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save pattern templates to {self.file_path}: {e}")
raise
@@ -180,6 +173,9 @@ class PatternTemplateStore:
template = self._templates[template_id]
if name is not None:
+for tid, t in self._templates.items():
+if tid != template_id and t.name == name:
+raise ValueError(f"Pattern template with name '{name}' already exists")
template.name = name
if rectangles is not None:
template.rectangles = rectangles


@@ -12,7 +12,7 @@ from wled_controller.storage.picture_source import (
ProcessedPictureSource,
StaticImagePictureSource,
)
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
logger = get_logger(__name__)
@@ -68,21 +68,14 @@ class PictureSourceStore:
def _save(self) -> None:
"""Save all streams to file."""
try:
self.file_path.parent.mkdir(parents=True, exist_ok=True)
-streams_dict = {
-stream_id: stream.to_dict()
-for stream_id, stream in self._streams.items()
-}
data = {
"version": "1.0.0",
-"picture_sources": streams_dict,
+"picture_sources": {
+stream_id: stream.to_dict()
+for stream_id, stream in self._streams.items()
+},
}
-with open(self.file_path, "w", encoding="utf-8") as f:
-json.dump(data, f, indent=2, ensure_ascii=False)
+atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save picture sources to {self.file_path}: {e}")
raise


@@ -12,7 +12,7 @@ from wled_controller.storage.key_colors_picture_target import (
KeyColorsSettings,
KeyColorsPictureTarget,
)
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
logger = get_logger(__name__)
@@ -63,21 +63,14 @@ class PictureTargetStore:
def _save(self) -> None:
"""Save all targets to file."""
try:
self.file_path.parent.mkdir(parents=True, exist_ok=True)
-targets_dict = {
-target_id: target.to_dict()
-for target_id, target in self._targets.items()
-}
data = {
"version": "1.0.0",
-"picture_targets": targets_dict,
+"picture_targets": {
+target_id: target.to_dict()
+for target_id, target in self._targets.items()
+},
}
-with open(self.file_path, "w", encoding="utf-8") as f:
-json.dump(data, f, indent=2, ensure_ascii=False)
+atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save picture targets to {self.file_path}: {e}")
raise


@@ -10,7 +10,7 @@ from wled_controller.core.filters.filter_instance import FilterInstance
from wled_controller.core.filters.registry import FilterRegistry
from wled_controller.storage.picture_source import ProcessedPictureSource
from wled_controller.storage.postprocessing_template import PostprocessingTemplate
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
logger = get_logger(__name__)
@@ -92,21 +92,14 @@ class PostprocessingTemplateStore:
def _save(self) -> None:
"""Save all templates to file."""
try:
self.file_path.parent.mkdir(parents=True, exist_ok=True)
-templates_dict = {
-template_id: template.to_dict()
-for template_id, template in self._templates.items()
-}
data = {
"version": "2.0.0",
-"postprocessing_templates": templates_dict,
+"postprocessing_templates": {
+template_id: template.to_dict()
+for template_id, template in self._templates.items()
+},
}
-with open(self.file_path, "w", encoding="utf-8") as f:
-json.dump(data, f, indent=2, ensure_ascii=False)
+atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save postprocessing templates to {self.file_path}: {e}")
raise
@@ -189,6 +182,9 @@ class PostprocessingTemplateStore:
template = self._templates[template_id]
if name is not None:
+for tid, t in self._templates.items():
+if tid != template_id and t.name == name:
+raise ValueError(f"Postprocessing template with name '{name}' already exists")
template.name = name
if filters is not None:
# Validate filter IDs


@@ -7,7 +7,7 @@ from pathlib import Path
from typing import Dict, List, Optional
from wled_controller.storage.profile import Condition, Profile
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
logger = get_logger(__name__)
@@ -49,18 +49,13 @@ class ProfileStore:
def _save(self) -> None:
try:
self.file_path.parent.mkdir(parents=True, exist_ok=True)
data = {
"version": "1.0.0",
"profiles": {
pid: p.to_dict() for pid, p in self._profiles.items()
},
}
-with open(self.file_path, "w", encoding="utf-8") as f:
-json.dump(data, f, indent=2, ensure_ascii=False)
+atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save profiles to {self.file_path}: {e}")
raise
@@ -81,6 +76,10 @@ class ProfileStore:
conditions: Optional[List[Condition]] = None,
target_ids: Optional[List[str]] = None,
) -> Profile:
+for p in self._profiles.values():
+if p.name == name:
+raise ValueError(f"Profile with name '{name}' already exists")
profile_id = f"prof_{uuid.uuid4().hex[:8]}"
now = datetime.utcnow()
@@ -116,6 +115,9 @@ class ProfileStore:
profile = self._profiles[profile_id]
if name is not None:
+for pid, p in self._profiles.items():
+if pid != profile_id and p.name == name:
+raise ValueError(f"Profile with name '{name}' already exists")
profile.name = name
if enabled is not None:
profile.enabled = enabled

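The uniqueness guards added across the stores all follow one shape: on update, every record except the one being renamed is checked, so saving a profile under its own current name still succeeds. A minimal standalone sketch of that check (the helper name and store shape are illustrative):

```python
def ensure_unique_name(names_by_id, name, exclude_id=None):
    """Raise if another record already uses `name`; the record being updated is skipped."""
    for item_id, existing in names_by_id.items():
        if item_id != exclude_id and existing == name:
            raise ValueError(f"name '{name}' already exists")

profiles = {"prof_1": "Night", "prof_2": "Day"}
ensure_unique_name(profiles, "Evening")                     # create: free name, no error
ensure_unique_name(profiles, "Night", exclude_id="prof_1")  # rename to own name: allowed
try:
    ensure_unique_name(profiles, "Night", exclude_id="prof_2")
    collided = False
except ValueError:
    collided = True
assert collided  # taking another record's name is rejected
```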

@@ -8,7 +8,7 @@ from typing import Dict, List, Optional
from wled_controller.core.capture_engines.factory import EngineRegistry
from wled_controller.storage.template import CaptureTemplate
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
logger = get_logger(__name__)
@@ -95,23 +95,14 @@ class TemplateStore:
def _save(self) -> None:
"""Save all templates to file."""
try:
# Ensure directory exists
self.file_path.parent.mkdir(parents=True, exist_ok=True)
-templates_dict = {
-template_id: template.to_dict()
-for template_id, template in self._templates.items()
-}
data = {
"version": "1.0.0",
-"templates": templates_dict,
+"templates": {
+template_id: template.to_dict()
+for template_id, template in self._templates.items()
+},
}
-# Write to file
-with open(self.file_path, "w", encoding="utf-8") as f:
-json.dump(data, f, indent=2, ensure_ascii=False)
+atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save templates to {self.file_path}: {e}")
raise
@@ -218,6 +209,9 @@ class TemplateStore:
# Update fields
if name is not None:
+for tid, t in self._templates.items():
+if tid != template_id and t.name == name:
+raise ValueError(f"Template with name '{name}' already exists")
template.name = name
if engine_type is not None:
template.engine_type = engine_type


@@ -13,7 +13,7 @@ from wled_controller.storage.value_source import (
StaticValueSource,
ValueSource,
)
-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger
logger = get_logger(__name__)
@@ -59,21 +59,14 @@ class ValueSourceStore:
def _save(self) -> None:
try:
self.file_path.parent.mkdir(parents=True, exist_ok=True)
-sources_dict = {
-sid: source.to_dict()
-for sid, source in self._sources.items()
-}
data = {
"version": "1.0.0",
-"value_sources": sources_dict,
+"value_sources": {
+sid: source.to_dict()
+for sid, source in self._sources.items()
+},
}
-with open(self.file_path, "w", encoding="utf-8") as f:
-json.dump(data, f, indent=2, ensure_ascii=False)
+atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save value sources to {self.file_path}: {e}")
raise


@@ -34,6 +34,9 @@
<button class="theme-toggle" onclick="toggleTheme()" data-i18n-title="theme.toggle" title="Toggle theme">
<span id="theme-icon">🌙</span>
</button>
+<button class="search-toggle" onclick="openSettingsModal()" data-i18n-title="settings.title" title="Settings">
+&#x2699;&#xFE0F;
+</button>
<select id="locale-select" onchange="changeLocale()" data-i18n-title="locale.change" title="Change language" style="padding: 4px 8px; border: 1px solid var(--border-color); border-radius: 4px; background: var(--bg-color); color: var(--text-color); font-size: 0.8rem; cursor: pointer;">
<option value="en">English</option>
<option value="ru">Русский</option>
@@ -128,6 +131,7 @@
{% include 'modals/test-audio-template.html' %}
{% include 'modals/value-source-editor.html' %}
{% include 'modals/test-value-source.html' %}
+{% include 'modals/settings.html' %}
{% include 'partials/tutorial-overlay.html' %}
{% include 'partials/image-lightbox.html' %}


@@ -0,0 +1,33 @@
<!-- Settings Modal -->
<div id="settings-modal" class="modal" role="dialog" aria-modal="true" aria-labelledby="settings-modal-title">
<div class="modal-content" style="max-width: 450px;">
<div class="modal-header">
<h2 id="settings-modal-title" data-i18n="settings.title">Settings</h2>
<button class="modal-close-btn" onclick="closeSettingsModal()" title="Close" data-i18n-aria-label="aria.close">&#x2715;</button>
</div>
<div class="modal-body">
<!-- Backup section -->
<div class="form-group">
<div class="label-row">
<label data-i18n="settings.backup.label">Backup Configuration</label>
<button type="button" class="hint-toggle" onclick="toggleHint(this)" title="?">?</button>
</div>
<small class="input-hint" style="display:none" data-i18n="settings.backup.hint">Download all configuration (devices, targets, streams, templates, profiles) as a single JSON file.</small>
<button class="btn btn-primary" onclick="downloadBackup()" style="width:100%" data-i18n="settings.backup.button">Download Backup</button>
</div>
<!-- Restore section -->
<div class="form-group">
<div class="label-row">
<label data-i18n="settings.restore.label">Restore Configuration</label>
<button type="button" class="hint-toggle" onclick="toggleHint(this)" title="?">?</button>
</div>
<small class="input-hint" style="display:none" data-i18n="settings.restore.hint">Upload a previously downloaded backup file to replace all configuration. The server will restart automatically.</small>
<input type="file" id="settings-restore-input" accept=".json" style="display:none" onchange="handleRestoreFileSelected(this)">
<button class="btn btn-danger" onclick="document.getElementById('settings-restore-input').click()" style="width:100%" data-i18n="settings.restore.button">Restore from Backup</button>
</div>
<div id="settings-error" class="error-message" style="display:none;"></div>
</div>
</div>
</div>


@@ -1,7 +1,8 @@
"""Utility functions and helpers."""
+from .file_ops import atomic_write_json
from .logger import setup_logging, get_logger
from .monitor_names import get_monitor_names, get_monitor_name, get_monitor_refresh_rates
from .timer import high_resolution_timer
-__all__ = ["setup_logging", "get_logger", "get_monitor_names", "get_monitor_name", "get_monitor_refresh_rates", "high_resolution_timer"]
+__all__ = ["atomic_write_json", "setup_logging", "get_logger", "get_monitor_names", "get_monitor_name", "get_monitor_refresh_rates", "high_resolution_timer"]


@@ -0,0 +1,34 @@
"""Atomic file write utilities."""
import json
import os
import tempfile
from pathlib import Path
def atomic_write_json(file_path: Path, data: dict, indent: int = 2) -> None:
"""Write JSON data to file atomically via temp file + rename.
Prevents data corruption if the process crashes or loses power
mid-write. The rename operation is atomic on most filesystems.
"""
file_path = Path(file_path)
file_path.parent.mkdir(parents=True, exist_ok=True)
# Write to a temp file in the same directory (same filesystem for atomic rename)
fd, tmp_path = tempfile.mkstemp(
dir=file_path.parent,
prefix=f".{file_path.stem}_",
suffix=".tmp",
)
try:
with os.fdopen(fd, "w", encoding="utf-8") as f:
json.dump(data, f, indent=indent, ensure_ascii=False)
os.replace(tmp_path, file_path)
except BaseException:
# Clean up temp file on any error
try:
os.unlink(tmp_path)
except OSError:
pass
raise
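To see what the helper above buys over a plain `open`/`json.dump`, the sketch below (a condensed copy of the function, so it runs standalone) first writes a store, then attempts a write that fails mid-serialization: the original file survives untouched and no temp file is left behind.

```python
import json
import os
import tempfile
from pathlib import Path

def atomic_write_json(file_path, data, indent=2):
    # Condensed copy of the helper above: temp file in the target dir + os.replace
    file_path = Path(file_path)
    file_path.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=file_path.parent, prefix=f".{file_path.stem}_", suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=indent, ensure_ascii=False)
        os.replace(tmp, file_path)
    except BaseException:
        try:
            os.unlink(tmp)
        except OSError:
            pass
        raise

with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "store.json"
    atomic_write_json(path, {"version": "1.0.0", "profiles": {"p1": "ok"}})
    try:
        atomic_write_json(path, {"bad": object()})  # not JSON-serializable: dump raises
    except TypeError:
        pass
    # The failed write never touched store.json, and its temp file was removed
    assert json.loads(path.read_text(encoding="utf-8"))["profiles"] == {"p1": "ok"}
    assert list(Path(d).iterdir()) == [path]
```

A direct `open(path, "w")` would have truncated the file before `json.dump` raised, leaving the store empty or half-written; the temp-file-plus-`os.replace` dance is what makes the store files crash-safe.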