Add tags to all entity types with chip-based input and autocomplete

- Add `tags: List[str]` field to all 13 entity types (devices, output targets,
  CSS sources, picture sources, audio sources, value sources, sync clocks,
  automations, scene presets, capture/audio/PP/pattern templates)
- Update all stores, schemas, and route handlers for tag CRUD
- Add GET /api/v1/tags endpoint aggregating unique tags across all stores
- Create TagInput component with chip display, autocomplete dropdown,
  keyboard navigation, and API-backed suggestions
- Display tag chips on all entity cards (searchable via existing text filter)
- Add tag input to all 14 editor modals with dirty check support
- Add CSS styles and i18n keys (en/ru/zh) for tag UI
- Also includes code review fixes: thread safety, perf, store dedup

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 22:20:19 +03:00
parent 2712c6682e
commit 30fa107ef7
120 changed files with 2471 additions and 1949 deletions

REVIEW.md (new file)

@@ -0,0 +1,138 @@
# Codebase Review Report
_Generated 2026-03-09_
---
## 1. Bugs (Critical)
### Thread Safety / Race Conditions
| Issue | Location | Description |
|-------|----------|-------------|
| **Dict mutation during iteration** | `composite_stream.py:121`, `mapped_stream.py:102` | `update_source()` calls `_sub_streams.clear()` from the API thread while `_processing_loop` iterates the dict on a background thread. **Will crash with `RuntimeError: dictionary changed size during iteration`.** |
| **Clock ref-count corruption** | `color_strip_stream_manager.py:286-304` | On clock hot-swap, `_release_clock` reads the *new* clock_id from the store (already updated), so it releases the newly acquired clock instead of the old one. Leaks the old runtime, destroys the new one. |
| **SyncClockRuntime race** | `sync_clock_runtime.py:42-49` | `get_time()` reads `_running`, `_offset`, `_epoch` without `_lock`, while `pause()`/`resume()`/`reset()` modify them under `_lock`. Compound read can double-count elapsed time. |
| **SyncClockManager unprotected dicts** | `sync_clock_manager.py:26-54` | `_runtimes` and `_ref_counts` are plain dicts mutated from both the async event loop and background threads with no lock. |
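The two recurring fixes for the table above are (a) snapshot iteration so a concurrent `clear()` cannot invalidate the iterator, and (b) taking the lock for the whole compound read in `get_time()`. A minimal sketch, with names mirroring the report but simplified relative to the real classes:

```python
import threading
import time

class SyncClockRuntime:
    """Sketch of the locking fix: the compound read in get_time() happens
    under the same lock that pause()/resume() use, so elapsed time cannot
    be double-counted mid-update. The real class carries more state."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._running = True
        self._epoch = time.monotonic()
        self._offset = 0.0

    def pause(self) -> None:
        with self._lock:
            if self._running:
                self._offset += time.monotonic() - self._epoch
                self._running = False

    def resume(self) -> None:
        with self._lock:
            if not self._running:
                self._epoch = time.monotonic()
                self._running = True

    def get_time(self) -> float:
        # Fix: hold the lock for the entire compound read.
        with self._lock:
            if self._running:
                return self._offset + (time.monotonic() - self._epoch)
            return self._offset

def process_frame(sub_streams: dict) -> None:
    """Fix for the dict-mutation crash: iterate a snapshot, so an API-thread
    _sub_streams.clear() cannot raise RuntimeError in the processing loop."""
    for stream_id, stream in list(sub_streams.items()):  # snapshot copy
        pass  # ... render the layer ...
```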
### Silent Failures
| Issue | Location | Description |
|-------|----------|-------------|
| **Crashed streams go undetected** | `mapped_stream.py:214`, `composite_stream.py` | When the processing loop dies, `get_latest_colors()` permanently returns stale data. The target keeps sending frozen colors to LEDs with no indicator anything is wrong. |
| **Crash doesn't fire state_change event** | `wled_target_processor.py:900` | Fatal exception path sets `_is_running = False` without firing `state_change` event (only `stop()` fires it). Dashboard doesn't learn about crashes via WebSocket. |
| **WebSocket broadcast client mismatch** | `kc_target_processor.py:481-485` | `zip(self._ws_clients, results)` pairs results with the live list, but clients can be removed between scheduling `gather` and collecting results, causing wrong clients to be dropped. |
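Both stream-death issues share one remedy: put the state transition in a `finally` block so it fires on crashes, not just on clean `stop()` calls, and expose the result as a health flag. A minimal sketch; `_processing_loop`, the `state_change` callback, and `is_healthy` are illustrative names, not the project's exact API:

```python
import threading

class StreamBase:
    """Sketch: a stream whose crash is visible to the outside world."""

    def __init__(self, on_state_change) -> None:
        self._on_state_change = on_state_change
        self._is_running = False
        self._thread = None

    def start(self) -> None:
        self._is_running = True
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self) -> None:
        try:
            self._processing_loop()
        except Exception:
            pass  # real code would log the fatal error
        finally:
            # Fires even on a fatal exception, so the dashboard learns
            # about crashes via WebSocket, not just clean stop() calls.
            self._is_running = False
            self._on_state_change({"running": False})

    def _processing_loop(self) -> None:
        raise RuntimeError("simulated crash")

    @property
    def is_healthy(self) -> bool:
        # Lets a target processor detect that its color source died
        # instead of forwarding stale colors forever.
        return self._is_running
```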
### Security
| Issue | Location | Description |
|-------|----------|-------------|
| **Incomplete path traversal guard** | `auto_backup.py` | Filename validation uses string checks (`".." in filename`) instead of `Path.resolve().is_relative_to()`. |
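The recommended guard can be sketched as follows; `BACKUP_DIR` and the function name are illustrative, not the actual `auto_backup.py` code:

```python
from pathlib import Path

BACKUP_DIR = Path("/var/lib/app/backups")  # illustrative location

def resolve_backup_file(filename: str) -> Path:
    """Resolve a user-supplied filename strictly inside BACKUP_DIR.

    String checks like '".." in filename' miss absolute paths and
    symlink tricks; resolving and comparing real paths does not.
    """
    candidate = (BACKUP_DIR / filename).resolve()
    if not candidate.is_relative_to(BACKUP_DIR.resolve()):
        raise ValueError(f"Illegal backup filename: {filename!r}")
    return candidate
```

Note that `Path.is_relative_to()` requires Python 3.9+.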
---
## 2. Performance
### High Impact (Hot Path)
| Issue | Location | Impact |
|-------|----------|--------|
| **Per-frame `np.array()` from list** | `ddp_client.py:195` | Allocates a new numpy array from a Python list every frame. Should use pre-allocated buffer. |
| **Triple FFT for mono audio** | `analysis.py:168-174` | When audio is mono (common for system loopback), runs 3 identical FFTs. 2x wasted CPU. |
| **`frame_time = 1.0/fps` in every loop iteration** | 8 stream files | Recomputed every frame despite `_fps` only changing on consumer subscribe. Should be cached. |
| **4x deque traversals per frame for metrics** | `kc_target_processor.py:413-416` | Full traversal of metrics deques every frame to compute avg/min/max. |
| **3x spectrum `.copy()` per audio chunk** | `analysis.py:195-201` | ~258 array allocations/sec for read-only consumers. Could use non-writable views. |
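The first and last rows share a pattern: allocate once, reuse per frame, and hand read-only consumers a non-writable view instead of a copy. A sketch under assumed names (the real `ddp_client.py` and `analysis.py` code differs):

```python
import numpy as np

class FrameBuffer:
    """Pre-allocated buffer fix for the per-frame np.array(list) allocation."""

    def __init__(self, led_count: int) -> None:
        # One allocation at construction instead of one per frame.
        self._buf = np.empty((led_count, 3), dtype=np.uint8)

    def fill(self, colors: list) -> np.ndarray:
        # Copies into the existing buffer; no new array per frame.
        self._buf[: len(colors)] = colors
        return self._buf

def readonly_view(spectrum: np.ndarray) -> np.ndarray:
    """Non-writable view instead of .copy() for read-only consumers."""
    view = spectrum.view()
    view.flags.writeable = False
    return view
```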
### Medium Impact
| Issue | Location |
|-------|----------|
| `getattr` + dict lookup per composite layer per frame | `composite_stream.py:299-304` |
| Unconditional `self.*=` attribute writes every frame in audio stream | `audio_stream.py:255-261` |
| `JSON.parse(localStorage)` on every collapsed-section call | `dashboard.js` `_getCollapsedSections` |
| Effect/composite/mapped streams hardcoded to 30 FPS | `effect_stream.py`, `composite_stream.py:37`, `mapped_stream.py:33` |
| Double `querySelectorAll` on card reconcile | `card-sections.js:229-232` |
| Module import inside per-second sampling function | `metrics_history.py:21,35` |
| `datetime.utcnow()` twice per frame | `kc_target_processor.py:420,464` |
| Redundant `bytes()` copy of bytes slice | `ddp_client.py:222` |
| Unnecessary `.copy()` of temp interp result | `audio_stream.py:331,342` |
| Multiple intermediate numpy allocs for luminance | `value_stream.py:486-494` |
---
## 3. Code Quality
### Architecture
| Issue | Description |
|-------|-------------|
| **12 store classes with duplicated boilerplate** | All JSON stores repeat the same load/save/CRUD pattern with no base class. A `BaseJsonStore[T]` would eliminate ~60% of each store file. |
| **`DeviceStore.save()` uses unsafe temp file** | Fixed-path temp file instead of `atomic_write_json` used by all other stores. |
| **`scene_activator.py` accesses `ProcessorManager._processors` directly** | Lines 33, 68, 90, 110 — bypasses public API, breaks encapsulation. |
| **Route code directly mutates `ProcessorManager` internals** | `devices.py` accesses `manager._devices` and `manager._color_strip_stream_manager` in 13+ places. |
| **`color-strips.js` is 1900+ lines** | Handles 11 CSS source types, gradient editor, composite layers, mapped zones, card rendering, overlay control — should be split. |
| **No `DataCache` for color strip sources** | Every other entity uses `DataCache`. CSS sources are fetched with raw `fetchWithAuth` in 5+ places with no deduplication. |
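The proposed `BaseJsonStore[T]` could look roughly like this: one class owning load/save/CRUD with atomic writes and per-item error isolation, so each concrete store only supplies (de)serializers. Method names are illustrative, not the project's actual store API:

```python
import json
import os
import tempfile
import threading
from pathlib import Path
from typing import Callable, Dict, Generic, Optional, TypeVar

T = TypeVar("T")

class BaseJsonStore(Generic[T]):
    """Sketch of a shared JSON store base class."""

    def __init__(self, path: Path, to_dict: Callable[[T], dict],
                 from_dict: Callable[[dict], T]) -> None:
        self._path = path
        self._to_dict = to_dict
        self._from_dict = from_dict
        self._lock = threading.Lock()
        self._items: Dict[str, T] = {}
        self._load()

    def _load(self) -> None:
        if not self._path.exists():
            return
        raw = json.loads(self._path.read_text())
        for item_id, data in raw.items():
            try:  # per-item error isolation (missing from DeviceStore)
                self._items[item_id] = self._from_dict(data)
            except Exception:
                pass  # real code would record the deserialization error

    def save(self) -> None:
        with self._lock:
            payload = {k: self._to_dict(v) for k, v in self._items.items()}
            # Atomic write: unique temp file in the same dir, then rename,
            # instead of DeviceStore's fixed-path temp file.
            fd, tmp = tempfile.mkstemp(dir=self._path.parent)
            with os.fdopen(fd, "w") as f:
                json.dump(payload, f, indent=2)
            os.replace(tmp, self._path)

    def get(self, item_id: str) -> Optional[T]:
        return self._items.get(item_id)

    def put(self, item_id: str, item: T) -> None:
        self._items[item_id] = item
        self.save()
```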
### Consistency / Hygiene
| Issue | Location |
|-------|----------|
| `Dict[str, any]` (lowercase `any`) — invalid type annotation | `template_store.py:138,187`, `audio_template_store.py:126,155` |
| `datetime.utcnow()` deprecated — 88 call sites in 42 files | Project-wide |
| `_icon` SVG helper duplicated verbatim in 3 JS files | `color-strips.js:293`, `automations.js:41`, `kc-targets.js:49` |
| `hexToRgbArray` private to one file, pattern inlined elsewhere | `color-strips.js:471` vs line 1403 |
| Hardcoded English fallback in `showToast` | `color-strips.js:1593` |
| `ColorStripStore.create_source` silently creates wrong type for unknown `source_type` | `color_strip_store.py:92-332` |
| `update_source` clock_id clearing uses undocumented empty-string sentinel | `color_strip_store.py:394-395` |
| `DeviceStore._load` lacks per-item error isolation (unlike all other stores) | `device_store.py:122-138` |
| No unit tests | Zero test files. Highest-risk: `CalibrationConfig`/`PixelMapper` geometry, DDP packets, automation conditions. |
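The two mechanical fixes from this table, shown side by side for clarity:

```python
from datetime import datetime, timezone
from typing import Any, Dict

# `Dict[str, any]` names the builtin any() function, not a type;
# the valid annotation is `Dict[str, Any]`:
def to_payload(config: Dict[str, Any]) -> Dict[str, Any]:
    return dict(config)

# datetime.utcnow() is deprecated (and returns a naive datetime);
# the replacement is timezone-aware:
def now_utc() -> datetime:
    return datetime.now(timezone.utc)
```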
---
## 4. Features & Suggestions
### High Impact / Low Effort
| Suggestion | Details |
|------------|---------|
| **Auto-restart crashed processing loops** | Add backoff-based restart when `_processing_loop` dies. Currently crashes are permanent until manual intervention. |
| **Fire `state_change` on crash** | Add `finally` block in `_processing_loop` to notify the dashboard immediately. |
| **`POST /system/auto-backup/trigger`** | ~5 lines of Python. Manual backup trigger before risky config changes. |
| **`is_healthy` property on streams** | Let target processors detect when their color source has died. |
| **Rotate webhook token endpoint** | `POST /automations/{id}/rotate-webhook-token` — regenerate without recreating automation. |
| **"Start All" targets button** | "Stop All" exists but "Start All" (the more common operation after restart) is missing. |
| **Include auto-backup settings in backup** | Currently lost on restore. |
| **Distinguish "crashed" vs "stopped" in dashboard** | `metrics.last_error` is already populated — just surface it. |
### High Impact / Moderate Effort
| Suggestion | Details |
|------------|---------|
| **Home Assistant MQTT discovery** | Publish auto-discovery payloads so devices appear in HA automatically. MQTT infra already exists. |
| **Device health WebSocket events** | Eliminates 5-30s poll latency for online/offline detection. |
| **`GET /system/store-errors`** | Surface startup deserialization failures to the user. Currently only in logs. |
| **Scene snapshot should capture device brightness** | `software_brightness` is not saved/restored by scenes. |
| **Exponential backoff on events WebSocket reconnect** | Currently fixed 3s retry, generates constant logs during outages. |
| **CSS source import/export** | Share individual sources without full config backup. |
| **Per-target error ring buffer via API** | `GET /targets/{id}/logs` for remote debugging. |
| **DDP socket reconnection** | UDP socket invalidated on network changes; no reconnect path exists. |
| **Adalight serial reconnection** | COM port disconnect crashes the target permanently. |
| **MQTT-controlled brightness and scene activation** | Direct command handler without requiring API key management. |
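For the Home Assistant discovery item, a sketch of the payload to publish per device. The topic layout follows HA's `<prefix>/<component>/<object_id>/config` convention; the command/state topics here are illustrative, not the project's actual MQTT topics:

```python
import json

def discovery_payload(device_id: str, name: str) -> tuple:
    """Build (topic, payload) for a Home Assistant MQTT discovery message
    announcing a device as a light entity."""
    topic = f"homeassistant/light/{device_id}/config"
    payload = {
        "name": name,
        "unique_id": device_id,
        "command_topic": f"leds/{device_id}/set",   # assumed topic scheme
        "state_topic": f"leds/{device_id}/state",   # assumed topic scheme
        "brightness": True,
        "schema": "json",
    }
    return topic, json.dumps(payload)
```

Publishing this as a retained message on the existing MQTT connection is enough for HA to auto-create the entity.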
### Nice-to-Have
| Suggestion | Details |
|------------|---------|
| Configurable metrics history window (currently hardcoded 120 samples / 2 min) | |
| Replace `window.prompt()` API key entry with proper modal | |
| Pattern template live preview (SVG/Canvas) | |
| Keyboard shortcuts for start/stop targets and scene activation | |
| FPS chart auto-scaling y-axis (`Math.max(target*1.15, maxSeen*1.1)`) | |
| WLED native preset target type (send `{"ps": id}` instead of pixels) | |
| Configurable DDP max packet size per device | |
| `GET /system/active-streams` unified runtime snapshot | |
| OpenMetrics / Prometheus endpoint for Grafana integration | |
| Configurable health check intervals (currently hardcoded 10s/60s) | |
| Configurable backup directory path | |
| `GET /system/logs?tail=100&level=ERROR` for in-app log viewing | |
| Device card "currently streaming" badge | |

TODO.md

@@ -51,11 +51,51 @@ Priority: `P1` quick win · `P2` moderate · `P3` large effort
 - Impact: medium — enables phone screen mirroring to ambient lighting; appeals to mobile gaming use case
 - [x] `P3` **Camera / webcam** — Border-sampling from camera feed for video calls or room-reactive lighting
+## Code Health (from review 2026-03-09)
+### Bugs
+- [x] `P1` **Thread safety: dict mutation during iteration** — composite_stream.py / mapped_stream.py `_sub_streams.clear()` crashes processing loop
+- [x] `P1` **Thread safety: SyncClockRuntime.get_time() race** — compound read without lock causes time double-counting
+- [x] `P1` **Thread safety: SyncClockManager unprotected dicts** — `_runtimes`/`_ref_counts` mutated from multiple threads without lock
+- [x] `P1` **Clock ref-count corruption on hot-swap** — `_release_clock` reads new clock_id from store instead of old one
+- [x] `P1` **Path traversal guard** — `auto_backup.py` uses string checks instead of `Path.resolve().is_relative_to()`
+- [x] `P2` **Crash doesn't fire state_change event** — fatal exception path in `wled_target_processor.py` doesn't notify dashboard
+- [x] `P2` **WS broadcast client mismatch** — `kc_target_processor.py` `zip(clients, results)` can pair wrong clients after concurrent removal
+### Performance
+- [x] `P1` **Triple FFT for mono audio** — `analysis.py` runs 3 identical FFTs when audio is mono (2x wasted CPU)
+- [x] `P2` **Per-frame np.array() from list** — `ddp_client.py:195` allocates new numpy array every frame
+- [x] `P2` **frame_time recomputed every loop iteration** — `1.0/fps` in 8 stream files, should be cached
+- [x] `P2` **Effect/composite/mapped streams hardcoded to 30 FPS** — ignores target FPS, bottlenecks 60 FPS targets
+- [x] `P3` **Spectrum .copy() per audio chunk** — `analysis.py` ~258 array allocations/sec for read-only consumers
+### Code Quality
+- [x] `P2` **12 store classes with duplicated boilerplate** — no base class; `BaseJsonStore[T]` would eliminate ~60%
+- [x] `P2` **DeviceStore.save() uses unsafe temp file** — fixed-path `.tmp` instead of `atomic_write_json`
+- [x] `P2` **Route code directly mutates ProcessorManager internals** — `devices.py` accesses `manager._devices` in 13+ places
+- [x] `P2` **scene_activator.py accesses ProcessorManager._processors directly** — bypasses public API
+- [x] `P3` **datetime.utcnow() deprecated** — 88 call sites in 42 files, should use `datetime.now(timezone.utc)`
+- [x] `P3` **color-strips.js 1900+ lines** — should be split into separate modules
+- [x] `P3` **No DataCache for color strip sources** — fetched with raw fetchWithAuth in 5+ places
+### Features
+- [ ] `P1` **Auto-restart crashed processing loops** — add backoff-based restart when `_processing_loop` dies
+- [ ] `P1` **"Start All" targets button** — "Stop All" exists but "Start All" is missing
+- [ ] `P2` **Manual backup trigger endpoint** — `POST /system/auto-backup/trigger` (~5 lines)
+- [ ] `P2` **Scene snapshot should capture device brightness** — `software_brightness` not saved/restored
+- [ ] `P2` **Device health WebSocket events** — eliminate 5-30s poll latency for online/offline detection
+- [ ] `P2` **Distinguish "crashed" vs "stopped" in dashboard** — `metrics.last_error` is already populated
+- [ ] `P3` **Home Assistant MQTT discovery** — publish auto-discovery payloads; MQTT infra already exists
+- [ ] `P3` **CSS source import/export** — share individual sources without full config backup
+- [ ] `P3` **Exponential backoff on events WS reconnect** — currently fixed 3s retry
 ## UX
-- [ ] `P2` **Tags / groups for cards** — Assign tags to devices, targets, and sources; filter and group cards by tag
+- [x] `P2` **Tags / groups for cards** — Assign tags to devices, targets, and sources; filter and group cards by tag
+  - Complexity: medium — new `tags: List[str]` field on all card entities; tag CRUD API; filter bar UI per section; tag badge rendering on cards; persistence migration
+  - Impact: medium-high — essential for setups with many devices/targets; enables quick filtering (e.g. "bedroom", "desk", "gaming")
 - [x] `P3` **PWA / mobile layout** — Mobile-first layout + "Add to Home Screen" manifest
 - [ ] `P1` **Collapse dashboard running target stats** — Show only FPS chart by default; uptime, errors, and pipeline timings in an expandable section collapsed by default
 - [x] `P1` **Review protocol badge on LED target cards** — Review and improve the protocol badge display on LED target cards


@@ -43,6 +43,7 @@ def _to_response(source: AudioSource) -> AudioSourceResponse:
 audio_source_id=getattr(source, "audio_source_id", None),
 channel=getattr(source, "channel", None),
 description=source.description,
+tags=getattr(source, 'tags', []),
 created_at=source.created_at,
 updated_at=source.updated_at,
 )
@@ -81,6 +82,7 @@ async def create_audio_source(
 channel=data.channel,
 description=data.description,
 audio_template_id=data.audio_template_id,
+tags=data.tags,
 )
 return _to_response(source)
 except ValueError as e:
@@ -119,6 +121,7 @@ async def update_audio_source(
 channel=data.channel,
 description=data.description,
 audio_template_id=data.audio_template_id,
+tags=data.tags,
 )
 return _to_response(source)
 except ValueError as e:


@@ -41,7 +41,8 @@ async def list_audio_templates(
 responses = [
 AudioTemplateResponse(
 id=t.id, name=t.name, engine_type=t.engine_type,
-engine_config=t.engine_config, created_at=t.created_at,
+engine_config=t.engine_config, tags=getattr(t, 'tags', []),
+created_at=t.created_at,
 updated_at=t.updated_at, description=t.description,
 )
 for t in templates
@@ -63,10 +64,12 @@ async def create_audio_template(
 template = store.create_template(
 name=data.name, engine_type=data.engine_type,
 engine_config=data.engine_config, description=data.description,
+tags=data.tags,
 )
 return AudioTemplateResponse(
 id=template.id, name=template.name, engine_type=template.engine_type,
-engine_config=template.engine_config, created_at=template.created_at,
+engine_config=template.engine_config, tags=getattr(template, 'tags', []),
+created_at=template.created_at,
 updated_at=template.updated_at, description=template.description,
 )
 except ValueError as e:
@@ -89,7 +92,8 @@ async def get_audio_template(
 raise HTTPException(status_code=404, detail=f"Audio template {template_id} not found")
 return AudioTemplateResponse(
 id=t.id, name=t.name, engine_type=t.engine_type,
-engine_config=t.engine_config, created_at=t.created_at,
+engine_config=t.engine_config, tags=getattr(t, 'tags', []),
+created_at=t.created_at,
 updated_at=t.updated_at, description=t.description,
 )
@@ -106,11 +110,12 @@ async def update_audio_template(
 t = store.update_template(
 template_id=template_id, name=data.name,
 engine_type=data.engine_type, engine_config=data.engine_config,
-description=data.description,
+description=data.description, tags=data.tags,
 )
 return AudioTemplateResponse(
 id=t.id, name=t.name, engine_type=t.engine_type,
-engine_config=t.engine_config, created_at=t.created_at,
+engine_config=t.engine_config, tags=getattr(t, 'tags', []),
+created_at=t.created_at,
 updated_at=t.updated_at, description=t.description,
 )
 except ValueError as e:


@@ -107,6 +107,7 @@ def _automation_to_response(automation, engine: AutomationEngine, request: Reque
 is_active=state["is_active"],
 last_activated_at=state.get("last_activated_at"),
 last_deactivated_at=state.get("last_deactivated_at"),
+tags=getattr(automation, 'tags', []),
 created_at=automation.created_at,
 updated_at=automation.updated_at,
 )
@@ -167,6 +168,7 @@ async def create_automation(
 scene_preset_id=data.scene_preset_id,
 deactivation_mode=data.deactivation_mode,
 deactivation_scene_preset_id=data.deactivation_scene_preset_id,
+tags=data.tags,
 )
 if automation.enabled:
@@ -256,6 +258,7 @@ async def update_automation(
 condition_logic=data.condition_logic,
 conditions=conditions,
 deactivation_mode=data.deactivation_mode,
+tags=data.tags,
 )
 if data.scene_preset_id is not None:
 update_kwargs["scene_preset_id"] = data.scene_preset_id


@@ -100,6 +100,7 @@ def _css_to_response(source, overlay_active: bool = False) -> ColorStripSourceRe
 app_filter_list=getattr(source, "app_filter_list", None),
 os_listener=getattr(source, "os_listener", None),
 overlay_active=overlay_active,
+tags=getattr(source, 'tags', []),
 created_at=source.created_at,
 updated_at=source.updated_at,
 )
@@ -190,6 +191,7 @@ async def create_color_strip_source(
 app_filter_mode=data.app_filter_mode,
 app_filter_list=data.app_filter_list,
 os_listener=data.os_listener,
+tags=data.tags,
 )
 return _css_to_response(source)
@@ -273,11 +275,12 @@ async def update_color_strip_source(
 app_filter_mode=data.app_filter_mode,
 app_filter_list=data.app_filter_list,
 os_listener=data.os_listener,
+tags=data.tags,
 )
 # Hot-reload running stream (no restart needed for in-place param changes)
 try:
-manager._color_strip_stream_manager.update_source(source_id, source)
+manager.color_strip_stream_manager.update_source(source_id, source)
 except Exception as e:
 logger.warning(f"Could not hot-reload CSS stream {source_id}: {e}")
@@ -354,7 +357,7 @@ async def test_css_calibration(
 """
 try:
 # Validate device exists in manager
-if body.device_id not in manager._devices:
+if not manager.has_device(body.device_id):
 raise HTTPException(status_code=404, detail=f"Device {body.device_id} not found")
 # Validate edge names and colors
@@ -500,7 +503,7 @@ async def push_colors(
 if colors_array.ndim != 2 or colors_array.shape[1] != 3:
 raise HTTPException(status_code=400, detail="Colors must be an array of [R,G,B] triplets")
-streams = manager._color_strip_stream_manager.get_streams_by_source_id(source_id)
+streams = manager.color_strip_stream_manager.get_streams_by_source_id(source_id)
 for stream in streams:
 if hasattr(stream, "push_colors"):
 stream.push_colors(colors_array)
@@ -537,7 +540,7 @@ async def notify_source(
 app_name = body.app if body else None
 color_override = body.color if body else None
-streams = manager._color_strip_stream_manager.get_streams_by_source_id(source_id)
+streams = manager.color_strip_stream_manager.get_streams_by_source_id(source_id)
 accepted = 0
 for stream in streams:
 if hasattr(stream, "fire"):
@@ -624,7 +627,7 @@ async def css_api_input_ws(
 continue
 # Push to all running streams
-streams = manager._color_strip_stream_manager.get_streams_by_source_id(source_id)
+streams = manager.color_strip_stream_manager.get_streams_by_source_id(source_id)
 for stream in streams:
 if hasattr(stream, "push_colors"):
 stream.push_colors(colors_array)


@@ -52,6 +52,7 @@ def _device_to_response(device) -> DeviceResponse:
 rgbw=device.rgbw,
 zone_mode=device.zone_mode,
 capabilities=sorted(get_device_capabilities(device.device_type)),
+tags=getattr(device, 'tags', []),
 created_at=device.created_at,
 updated_at=device.updated_at,
 )
@@ -126,6 +127,7 @@ async def create_device(
 send_latency_ms=device_data.send_latency_ms or 0,
 rgbw=device_data.rgbw or False,
 zone_mode=device_data.zone_mode or "combined",
+tags=device_data.tags,
 )
 # WS devices: auto-set URL to ws://{device_id}
@@ -308,6 +310,7 @@ async def update_device(
 send_latency_ms=update_data.send_latency_ms,
 rgbw=update_data.rgbw,
 zone_mode=update_data.zone_mode,
+tags=update_data.tags,
 )
 # Sync connection info in processor manager
@@ -322,11 +325,12 @@ async def update_device(
 pass
 # Sync auto_shutdown and zone_mode in runtime state
-if device_id in manager._devices:
+ds = manager.find_device_state(device_id)
+if ds:
 if update_data.auto_shutdown is not None:
-manager._devices[device_id].auto_shutdown = update_data.auto_shutdown
+ds.auto_shutdown = update_data.auto_shutdown
 if update_data.zone_mode is not None:
-manager._devices[device_id].zone_mode = update_data.zone_mode
+ds.zone_mode = update_data.zone_mode
 return _device_to_response(device)
@@ -420,7 +424,7 @@ async def get_device_brightness(
 raise HTTPException(status_code=400, detail=f"Brightness control is not supported for {device.device_type} devices")
 # Return cached hardware brightness if available (updated by SET endpoint)
-ds = manager._devices.get(device_id)
+ds = manager.find_device_state(device_id)
 if ds and ds.hardware_brightness is not None:
 return {"brightness": ds.hardware_brightness}
@@ -465,13 +469,15 @@ async def set_device_brightness(
 except NotImplementedError:
 # Provider has no hardware brightness; use software brightness
 device.software_brightness = bri
-device.updated_at = __import__("datetime").datetime.utcnow()
+from datetime import datetime, timezone
+device.updated_at = datetime.now(timezone.utc)
 store.save()
-if device_id in manager._devices:
-manager._devices[device_id].software_brightness = bri
+ds = manager.find_device_state(device_id)
+if ds:
+ds.software_brightness = bri
 # Update cached hardware brightness
-ds = manager._devices.get(device_id)
+ds = manager.find_device_state(device_id)
 if ds:
 ds.hardware_brightness = bri
@@ -499,7 +505,7 @@ async def get_device_power(
 try:
 # Serial devices: use tracked state (no hardware query available)
-ds = manager._devices.get(device_id)
+ds = manager.find_device_state(device_id)
 if device.device_type in ("adalight", "ambiled") and ds:
 return {"on": ds.power_on}
@@ -532,10 +538,10 @@ async def set_device_power(
 try:
 # For serial devices, use the cached idle client to avoid port conflicts
-ds = manager._devices.get(device_id)
+ds = manager.find_device_state(device_id)
 if device.device_type in ("adalight", "ambiled") and ds:
 if not on:
-await manager._send_clear_pixels(device_id)
+await manager.send_clear_pixels(device_id)
 ds.power_on = on
 else:
 provider = get_provider(device.device_type)


@@ -105,6 +105,7 @@ def _target_to_response(target) -> OutputTargetResponse:
 adaptive_fps=target.adaptive_fps,
 protocol=target.protocol,
 description=target.description,
+tags=getattr(target, 'tags', []),
 created_at=target.created_at,
 updated_at=target.updated_at,
@@ -117,6 +118,7 @@ def _target_to_response(target) -> OutputTargetResponse:
 picture_source_id=target.picture_source_id,
 key_colors_settings=_kc_settings_to_schema(target.settings),
 description=target.description,
+tags=getattr(target, 'tags', []),
 created_at=target.created_at,
 updated_at=target.updated_at,
@@ -127,6 +129,7 @@ def _target_to_response(target) -> OutputTargetResponse:
 name=target.name,
 target_type=target.target_type,
 description=target.description,
+tags=getattr(target, 'tags', []),
 created_at=target.created_at,
 updated_at=target.updated_at,
@@ -169,6 +172,7 @@ async def create_target(
 picture_source_id=data.picture_source_id,
 key_colors_settings=kc_settings,
 description=data.description,
+tags=data.tags,
 )
 # Register in processor manager
@@ -287,6 +291,7 @@ async def update_target(
 protocol=data.protocol,
 key_colors_settings=kc_settings,
 description=data.description,
+tags=data.tags,
 )
 # Detect KC brightness VS change (inside key_colors_settings)
@@ -461,11 +466,11 @@ async def get_target_colors(
 r=r, g=g, b=b,
 hex=f"#{r:02x}{g:02x}{b:02x}",
 )
-from datetime import datetime
+from datetime import datetime, timezone
 return KeyColorsResponse(
 target_id=target_id,
 colors=colors,
-timestamp=datetime.utcnow(),
+timestamp=datetime.now(timezone.utc),
 )
 except ValueError as e:
 raise HTTPException(status_code=404, detail=str(e))


@@ -36,6 +36,7 @@ def _pat_template_to_response(t) -> PatternTemplateResponse:
 created_at=t.created_at,
 updated_at=t.updated_at,
 description=t.description,
+tags=getattr(t, 'tags', []),
 )
@@ -70,6 +71,7 @@ async def create_pattern_template(
 name=data.name,
 rectangles=rectangles,
 description=data.description,
+tags=data.tags,
 )
 return _pat_template_to_response(template)
 except ValueError as e:
@@ -113,6 +115,7 @@ async def update_pattern_template(
 name=data.name,
 rectangles=rectangles,
 description=data.description,
+tags=data.tags,
 )
 return _pat_template_to_response(template)
 except ValueError as e:


@@ -60,6 +60,7 @@ def _stream_to_response(s) -> PictureSourceResponse:
 created_at=s.created_at,
 updated_at=s.updated_at,
 description=s.description,
+tags=getattr(s, 'tags', []),
 )
@@ -196,6 +197,7 @@ async def create_picture_source(
 postprocessing_template_id=data.postprocessing_template_id,
 image_source=data.image_source,
 description=data.description,
+tags=data.tags,
 )
 return _stream_to_response(stream)
 except HTTPException:
@@ -240,6 +242,7 @@ async def update_picture_source(
 postprocessing_template_id=data.postprocessing_template_id,
 image_source=data.image_source,
 description=data.description,
+tags=data.tags,
 )
 return _stream_to_response(stream)
 except ValueError as e:


@@ -50,6 +50,7 @@ def _pp_template_to_response(t) -> PostprocessingTemplateResponse:
 created_at=t.created_at,
 updated_at=t.updated_at,
 description=t.description,
+tags=getattr(t, 'tags', []),
 )
@@ -81,6 +82,7 @@ async def create_pp_template(
 name=data.name,
 filters=filters,
 description=data.description,
+tags=data.tags,
 )
 return _pp_template_to_response(template)
 except ValueError as e:
@@ -119,6 +121,7 @@ async def update_pp_template(
 name=data.name,
 filters=filters,
 description=data.description,
+tags=data.tags,
 )
 return _pp_template_to_response(template)
 except ValueError as e:


@@ -1,7 +1,7 @@
 """Scene preset API routes — CRUD, capture, activate, recapture."""
 import uuid
-from datetime import datetime
+from datetime import datetime, timezone
 from fastapi import APIRouter, Depends, HTTPException
@@ -45,6 +45,7 @@ def _preset_to_response(preset: ScenePreset) -> ScenePresetResponse:
 "fps": t.fps,
 } for t in preset.targets],
 order=preset.order,
+tags=getattr(preset, 'tags', []),
 created_at=preset.created_at,
 updated_at=preset.updated_at,
 )
@@ -69,13 +70,14 @@ async def create_scene_preset(
 target_ids = set(data.target_ids) if data.target_ids is not None else None
 targets = capture_current_snapshot(target_store, manager, target_ids)
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
 preset = ScenePreset(
 id=f"scene_{uuid.uuid4().hex[:8]}",
 name=data.name,
 description=data.description,
 targets=targets,
 order=store.count(),
+tags=data.tags if data.tags is not None else [],
 created_at=now,
 updated_at=now,
 )
@@ -169,6 +171,7 @@ async def update_scene_preset(
 description=data.description,
 order=data.order,
 targets=new_targets,
+tags=data.tags,
 )
 except ValueError as e:
 raise HTTPException(status_code=404 if "not found" in str(e).lower() else 400, detail=str(e))


@@ -33,6 +33,7 @@ def _to_response(clock: SyncClock, manager: SyncClockManager) -> SyncClockRespon
 name=clock.name,
 speed=rt.speed if rt else clock.speed,
 description=clock.description,
+tags=getattr(clock, 'tags', []),
 is_running=rt.is_running if rt else True,
 elapsed_time=rt.get_time() if rt else 0.0,
 created_at=clock.created_at,
@@ -67,6 +68,7 @@ async def create_sync_clock(
 name=data.name,
 speed=data.speed,
 description=data.description,
+tags=data.tags,
 )
 return _to_response(clock, manager)
 except ValueError as e:
@@ -103,6 +105,7 @@ async def update_sync_clock(
 name=data.name,
 speed=data.speed,
 description=data.description,
+tags=data.tags,
 )
 # Hot-update runtime speed
 if data.speed is not None:


@@ -7,7 +7,7 @@ import platform
 import subprocess
 import sys
 import threading
-from datetime import datetime
+from datetime import datetime, timezone
 from pathlib import Path
 from typing import Optional
@@ -18,7 +18,23 @@ from pydantic import BaseModel
 from wled_controller import __version__
 from wled_controller.api.auth import AuthRequired
-from wled_controller.api.dependencies import get_auto_backup_engine, get_processor_manager
+from wled_controller.api.dependencies import (
+    get_auto_backup_engine,
+    get_audio_source_store,
+    get_audio_template_store,
+    get_automation_store,
+    get_color_strip_store,
+    get_device_store,
+    get_output_target_store,
+    get_pattern_template_store,
+    get_picture_source_store,
+    get_pp_template_store,
+    get_processor_manager,
+    get_scene_preset_store,
+    get_sync_clock_store,
+    get_template_store,
+    get_value_source_store,
+)
 from wled_controller.api.schemas.system import (
 AutoBackupSettings,
 AutoBackupStatusResponse,
@@ -104,7 +120,7 @@ async def health_check():
 return HealthResponse(
 status="healthy",
-timestamp=datetime.utcnow(),
+timestamp=datetime.now(timezone.utc),
 version=__version__,
 )
@@ -124,6 +140,39 @@ async def get_version():
 )
+@router.get("/api/v1/tags", tags=["Tags"])
+async def list_all_tags(_: AuthRequired):
+    """Get all tags used across all entities."""
+    all_tags: set[str] = set()
+    store_getters = [
+        get_device_store, get_output_target_store, get_color_strip_store,
+        get_picture_source_store, get_audio_source_store, get_value_source_store,
+        get_sync_clock_store, get_automation_store, get_scene_preset_store,
+        get_template_store, get_audio_template_store, get_pp_template_store,
+        get_pattern_template_store,
+    ]
+    for getter in store_getters:
+        try:
+            store = getter()
+        except RuntimeError:
+            continue
+        # Each store has a different "get all" method name
+        items = None
+        for method_name in (
+            "get_all_devices", "get_all_targets", "get_all_sources",
+            "get_all_streams", "get_all_clocks", "get_all_automations",
+            "get_all_presets", "get_all_templates",
+        ):
+            fn = getattr(store, method_name, None)
+            if fn is not None:
+                items = fn()
+                break
+        if items:
+            for item in items:
+                all_tags.update(getattr(item, 'tags', []))
+    return {"tags": sorted(all_tags)}
 @router.get("/api/v1/config/displays", response_model=DisplayListResponse, tags=["Config"])
 async def get_displays(
 _: AuthRequired,
@@ -238,7 +287,7 @@ def get_system_performance(_: AuthRequired):
 ram_total_mb=round(mem.total / 1024 / 1024, 1),
 ram_percent=mem.percent,
 gpu=gpu,
-timestamp=datetime.utcnow(),
+timestamp=datetime.now(timezone.utc),
 )
@@ -318,14 +367,14 @@ def backup_config(_: AuthRequired):
 "format": "ledgrab-backup",
 "format_version": 1,
 "app_version": __version__,
-"created_at": datetime.utcnow().isoformat() + "Z",
+"created_at": datetime.now(timezone.utc).isoformat() + "Z",
 "store_count": len(stores),
 },
 "stores": stores,
 }
 content = json.dumps(backup, indent=2, ensure_ascii=False)
-timestamp = datetime.utcnow().strftime("%Y-%m-%dT%H%M%S")
+timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%S")
 filename = f"ledgrab-backup-{timestamp}.json"
 return StreamingResponse(
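The `GET /api/v1/tags` handler above reduces to a small aggregation routine: walk every store's items, union their tags, and return them sorted. A standalone sketch of that semantics (simplified illustration, not the project's actual code — `SimpleNamespace` stands in for the entity objects):

```python
from types import SimpleNamespace

def aggregate_tags(stores):
    """Collect tags across heterogeneous store listings, dedupe, and sort."""
    all_tags = set()
    for items in stores:
        for item in items:
            # Entities persisted before the tags migration may lack the attribute.
            all_tags.update(getattr(item, "tags", []))
    return {"tags": sorted(all_tags)}

# Example: two stores with overlapping tags, plus one legacy item without `tags`.
stores = [
    [SimpleNamespace(tags=["music", "desk"]), SimpleNamespace()],
    [SimpleNamespace(tags=["desk", "ambient"])],
]
print(aggregate_tags(stores))  # {'tags': ['ambient', 'desk', 'music']}
```

The set-union keeps the endpoint O(total items) and naturally deduplicates tags repeated across entity types.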


@@ -62,7 +62,7 @@ async def list_templates(
 name=t.name,
 engine_type=t.engine_type,
 engine_config=t.engine_config,
+tags=getattr(t, 'tags', []),
 created_at=t.created_at,
 updated_at=t.updated_at,
 description=t.description,
@@ -93,6 +93,7 @@ async def create_template(
 engine_type=template_data.engine_type,
 engine_config=template_data.engine_config,
 description=template_data.description,
+tags=template_data.tags,
 )
 return TemplateResponse(
@@ -100,7 +101,7 @@ async def create_template(
 name=template.name,
 engine_type=template.engine_type,
 engine_config=template.engine_config,
+tags=getattr(template, 'tags', []),
 created_at=template.created_at,
 updated_at=template.updated_at,
 description=template.description,
@@ -130,6 +131,7 @@ async def get_template(
 name=template.name,
 engine_type=template.engine_type,
 engine_config=template.engine_config,
+tags=getattr(template, 'tags', []),
 created_at=template.created_at,
 updated_at=template.updated_at,
 description=template.description,
@@ -151,6 +153,7 @@ async def update_template(
 engine_type=update_data.engine_type,
 engine_config=update_data.engine_config,
 description=update_data.description,
+tags=update_data.tags,
 )
 return TemplateResponse(
@@ -158,7 +161,7 @@ async def update_template(
 name=template.name,
 engine_type=template.engine_type,
 engine_config=template.engine_config,
+tags=getattr(template, 'tags', []),
 created_at=template.created_at,
 updated_at=template.updated_at,
 description=template.description,


@@ -51,6 +51,7 @@ def _to_response(source: ValueSource) -> ValueSourceResponse:
 picture_source_id=d.get("picture_source_id"),
 scene_behavior=d.get("scene_behavior"),
 description=d.get("description"),
+tags=d.get("tags", []),
 created_at=source.created_at,
 updated_at=source.updated_at,
 )
@@ -97,6 +98,7 @@ async def create_value_source(
 picture_source_id=data.picture_source_id,
 scene_behavior=data.scene_behavior,
 auto_gain=data.auto_gain,
+tags=data.tags,
 )
 return _to_response(source)
 except ValueError as e:
@@ -144,6 +146,7 @@ async def update_value_source(
 picture_source_id=data.picture_source_id,
 scene_behavior=data.scene_behavior,
 auto_gain=data.auto_gain,
+tags=data.tags,
 )
 # Hot-reload running value streams
 pm.update_value_source(source_id)


@@ -19,6 +19,7 @@ class AudioSourceCreate(BaseModel):
 audio_source_id: Optional[str] = Field(None, description="Parent multichannel audio source ID")
 channel: Optional[str] = Field(None, description="Channel: mono|left|right")
 description: Optional[str] = Field(None, description="Optional description", max_length=500)
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 class AudioSourceUpdate(BaseModel):
@@ -31,6 +32,7 @@ class AudioSourceUpdate(BaseModel):
 audio_source_id: Optional[str] = Field(None, description="Parent multichannel audio source ID")
 channel: Optional[str] = Field(None, description="Channel: mono|left|right")
 description: Optional[str] = Field(None, description="Optional description", max_length=500)
+tags: Optional[List[str]] = None
 class AudioSourceResponse(BaseModel):
@@ -45,6 +47,7 @@ class AudioSourceResponse(BaseModel):
 audio_source_id: Optional[str] = Field(None, description="Parent multichannel source ID")
 channel: Optional[str] = Field(None, description="Channel: mono|left|right")
 description: Optional[str] = Field(None, description="Description")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 created_at: datetime = Field(description="Creation timestamp")
 updated_at: datetime = Field(description="Last update timestamp")
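The tag fields above follow one pattern across all the schema files in this commit: Create and Response models default to an empty list via `default_factory`, while Update models use `Optional[List[str]] = None` so handlers can tell "field not sent" apart from "tags cleared". A distilled sketch of that behavior (generic model names, assuming Pydantic is available):

```python
from typing import List, Optional
from pydantic import BaseModel, Field

class ThingCreate(BaseModel):
    # default_factory gives each instance its own fresh list (no shared mutable default)
    tags: List[str] = Field(default_factory=list)

class ThingUpdate(BaseModel):
    # None = "not provided"; an explicit [] = "clear all tags"
    tags: Optional[List[str]] = None

assert ThingCreate().tags == []          # omitted on create -> empty list
assert ThingUpdate().tags is None        # omitted on update -> skip the field
assert ThingUpdate(tags=[]).tags == []   # explicit empty list -> clear tags
```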


@@ -13,6 +13,7 @@ class AudioTemplateCreate(BaseModel):
 engine_type: str = Field(description="Audio engine type (e.g., 'wasapi', 'sounddevice')", min_length=1)
 engine_config: Dict = Field(default_factory=dict, description="Engine-specific configuration")
 description: Optional[str] = Field(None, description="Template description", max_length=500)
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 class AudioTemplateUpdate(BaseModel):
@@ -22,6 +23,7 @@ class AudioTemplateUpdate(BaseModel):
 engine_type: Optional[str] = Field(None, description="Audio engine type")
 engine_config: Optional[Dict] = Field(None, description="Engine-specific configuration")
 description: Optional[str] = Field(None, description="Template description", max_length=500)
+tags: Optional[List[str]] = None
 class AudioTemplateResponse(BaseModel):
@@ -31,6 +33,7 @@ class AudioTemplateResponse(BaseModel):
 name: str = Field(description="Template name")
 engine_type: str = Field(description="Engine type identifier")
 engine_config: Dict = Field(description="Engine-specific configuration")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 created_at: datetime = Field(description="Creation timestamp")
 updated_at: datetime = Field(description="Last update timestamp")
 description: Optional[str] = Field(None, description="Template description")


@@ -39,6 +39,7 @@ class AutomationCreate(BaseModel):
 scene_preset_id: Optional[str] = Field(None, description="Scene preset to activate")
 deactivation_mode: str = Field(default="none", description="'none', 'revert', or 'fallback_scene'")
 deactivation_scene_preset_id: Optional[str] = Field(None, description="Scene preset for fallback deactivation")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 class AutomationUpdate(BaseModel):
@@ -51,6 +52,7 @@ class AutomationUpdate(BaseModel):
 scene_preset_id: Optional[str] = Field(None, description="Scene preset to activate")
 deactivation_mode: Optional[str] = Field(None, description="'none', 'revert', or 'fallback_scene'")
 deactivation_scene_preset_id: Optional[str] = Field(None, description="Scene preset for fallback deactivation")
+tags: Optional[List[str]] = None
 class AutomationResponse(BaseModel):
@@ -64,6 +66,7 @@ class AutomationResponse(BaseModel):
 scene_preset_id: Optional[str] = Field(None, description="Scene preset to activate")
 deactivation_mode: str = Field(default="none", description="Deactivation behavior")
 deactivation_scene_preset_id: Optional[str] = Field(None, description="Fallback scene preset")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 webhook_url: Optional[str] = Field(None, description="Webhook URL for the first webhook condition (if any)")
 is_active: bool = Field(default=False, description="Whether the automation is currently active")
 last_activated_at: Optional[datetime] = Field(None, description="Last time this automation was activated")


@@ -97,6 +97,7 @@ class ColorStripSourceCreate(BaseModel):
 os_listener: Optional[bool] = Field(None, description="Whether to listen for OS notifications")
 # sync clock
 clock_id: Optional[str] = Field(None, description="Optional sync clock ID for synchronized animation")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 class ColorStripSourceUpdate(BaseModel):
@@ -150,6 +151,7 @@ class ColorStripSourceUpdate(BaseModel):
 os_listener: Optional[bool] = Field(None, description="Whether to listen for OS notifications")
 # sync clock
 clock_id: Optional[str] = Field(None, description="Optional sync clock ID for synchronized animation")
+tags: Optional[List[str]] = None
 class ColorStripSourceResponse(BaseModel):
@@ -205,6 +207,7 @@ class ColorStripSourceResponse(BaseModel):
 os_listener: Optional[bool] = Field(None, description="Whether to listen for OS notifications")
 # sync clock
 clock_id: Optional[str] = Field(None, description="Optional sync clock ID for synchronized animation")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 overlay_active: bool = Field(False, description="Whether the screen overlay is currently active")
 created_at: datetime = Field(description="Creation timestamp")
 updated_at: datetime = Field(description="Last update timestamp")


@@ -18,6 +18,7 @@ class DeviceCreate(BaseModel):
 send_latency_ms: Optional[int] = Field(None, ge=0, le=5000, description="Simulated send latency in ms (mock devices)")
 rgbw: Optional[bool] = Field(None, description="RGBW mode (mock devices)")
 zone_mode: Optional[str] = Field(None, description="OpenRGB zone mode: combined or separate")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 class DeviceUpdate(BaseModel):
@@ -32,6 +33,7 @@ class DeviceUpdate(BaseModel):
 send_latency_ms: Optional[int] = Field(None, ge=0, le=5000, description="Simulated send latency in ms (mock devices)")
 rgbw: Optional[bool] = Field(None, description="RGBW mode (mock devices)")
 zone_mode: Optional[str] = Field(None, description="OpenRGB zone mode: combined or separate")
+tags: Optional[List[str]] = None
 class CalibrationLineSchema(BaseModel):
@@ -125,6 +127,7 @@ class DeviceResponse(BaseModel):
 rgbw: bool = Field(default=False, description="RGBW mode (mock devices)")
 zone_mode: str = Field(default="combined", description="OpenRGB zone mode: combined or separate")
 capabilities: List[str] = Field(default_factory=list, description="Device type capabilities")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 created_at: datetime = Field(description="Creation timestamp")
 updated_at: datetime = Field(description="Last update timestamp")


@@ -65,6 +65,7 @@ class OutputTargetCreate(BaseModel):
 picture_source_id: str = Field(default="", description="Picture source ID (for key_colors targets)")
 key_colors_settings: Optional[KeyColorsSettingsSchema] = Field(None, description="Key colors settings (for key_colors targets)")
 description: Optional[str] = Field(None, description="Optional description", max_length=500)
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 class OutputTargetUpdate(BaseModel):
@@ -85,6 +86,7 @@ class OutputTargetUpdate(BaseModel):
 picture_source_id: Optional[str] = Field(None, description="Picture source ID (for key_colors targets)")
 key_colors_settings: Optional[KeyColorsSettingsSchema] = Field(None, description="Key colors settings (for key_colors targets)")
 description: Optional[str] = Field(None, description="Optional description", max_length=500)
+tags: Optional[List[str]] = None
 class OutputTargetResponse(BaseModel):
@@ -107,6 +109,7 @@ class OutputTargetResponse(BaseModel):
 picture_source_id: str = Field(default="", description="Picture source ID (key_colors)")
 key_colors_settings: Optional[KeyColorsSettingsSchema] = Field(None, description="Key colors settings")
 description: Optional[str] = Field(None, description="Description")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 created_at: datetime = Field(description="Creation timestamp")
 updated_at: datetime = Field(description="Last update timestamp")


@@ -14,6 +14,7 @@ class PatternTemplateCreate(BaseModel):
 name: str = Field(description="Template name", min_length=1, max_length=100)
 rectangles: List[KeyColorRectangleSchema] = Field(default_factory=list, description="List of named rectangles")
 description: Optional[str] = Field(None, description="Template description", max_length=500)
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 class PatternTemplateUpdate(BaseModel):
@@ -22,6 +23,7 @@ class PatternTemplateUpdate(BaseModel):
 name: Optional[str] = Field(None, description="Template name", min_length=1, max_length=100)
 rectangles: Optional[List[KeyColorRectangleSchema]] = Field(None, description="List of named rectangles")
 description: Optional[str] = Field(None, description="Template description", max_length=500)
+tags: Optional[List[str]] = None
 class PatternTemplateResponse(BaseModel):
@@ -30,6 +32,7 @@ class PatternTemplateResponse(BaseModel):
 id: str = Field(description="Template ID")
 name: str = Field(description="Template name")
 rectangles: List[KeyColorRectangleSchema] = Field(description="List of named rectangles")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 created_at: datetime = Field(description="Creation timestamp")
 updated_at: datetime = Field(description="Last update timestamp")
 description: Optional[str] = Field(None, description="Template description")


@@ -18,6 +18,7 @@ class PictureSourceCreate(BaseModel):
 postprocessing_template_id: Optional[str] = Field(None, description="Postprocessing template ID (processed streams)")
 image_source: Optional[str] = Field(None, description="Image URL or file path (static_image streams)")
 description: Optional[str] = Field(None, description="Stream description", max_length=500)
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 class PictureSourceUpdate(BaseModel):
@@ -31,6 +32,7 @@ class PictureSourceUpdate(BaseModel):
 postprocessing_template_id: Optional[str] = Field(None, description="Postprocessing template ID (processed streams)")
 image_source: Optional[str] = Field(None, description="Image URL or file path (static_image streams)")
 description: Optional[str] = Field(None, description="Stream description", max_length=500)
+tags: Optional[List[str]] = None
 class PictureSourceResponse(BaseModel):
@@ -45,6 +47,7 @@ class PictureSourceResponse(BaseModel):
 source_stream_id: Optional[str] = Field(None, description="Source stream ID")
 postprocessing_template_id: Optional[str] = Field(None, description="Postprocessing template ID")
 image_source: Optional[str] = Field(None, description="Image URL or file path")
+tags: List[str] = Field(default_factory=list, description="User-defined tags")
 created_at: datetime = Field(description="Creation timestamp")
 updated_at: datetime = Field(description="Last update timestamp")
 description: Optional[str] = Field(None, description="Stream description")

View File

@@ -14,6 +14,7 @@ class PostprocessingTemplateCreate(BaseModel):
name: str = Field(description="Template name", min_length=1, max_length=100)
filters: List[FilterInstanceSchema] = Field(default_factory=list, description="Ordered list of filter instances")
description: Optional[str] = Field(None, description="Template description", max_length=500)
tags: List[str] = Field(default_factory=list, description="User-defined tags")
class PostprocessingTemplateUpdate(BaseModel):
@@ -22,6 +23,7 @@ class PostprocessingTemplateUpdate(BaseModel):
name: Optional[str] = Field(None, description="Template name", min_length=1, max_length=100)
filters: Optional[List[FilterInstanceSchema]] = Field(None, description="Ordered list of filter instances")
description: Optional[str] = Field(None, description="Template description", max_length=500)
tags: Optional[List[str]] = None
class PostprocessingTemplateResponse(BaseModel):
@@ -30,6 +32,7 @@ class PostprocessingTemplateResponse(BaseModel):
id: str = Field(description="Template ID")
name: str = Field(description="Template name")
filters: List[FilterInstanceSchema] = Field(description="Ordered list of filter instances")
tags: List[str] = Field(default_factory=list, description="User-defined tags")
created_at: datetime = Field(description="Creation timestamp")
updated_at: datetime = Field(description="Last update timestamp")
description: Optional[str] = Field(None, description="Template description")

View File

@@ -20,6 +20,7 @@ class ScenePresetCreate(BaseModel):
name: str = Field(description="Preset name", min_length=1, max_length=100)
description: str = Field(default="", max_length=500)
target_ids: Optional[List[str]] = Field(None, description="Target IDs to capture (all if omitted)")
tags: List[str] = Field(default_factory=list, description="User-defined tags")
class ScenePresetUpdate(BaseModel):
@@ -29,6 +30,7 @@ class ScenePresetUpdate(BaseModel):
description: Optional[str] = Field(None, max_length=500)
order: Optional[int] = None
target_ids: Optional[List[str]] = Field(None, description="Update target list: keep state for existing, capture fresh for new, drop removed")
tags: Optional[List[str]] = None
class ScenePresetResponse(BaseModel):
@@ -39,6 +41,7 @@ class ScenePresetResponse(BaseModel):
description: str
targets: List[TargetSnapshotSchema]
order: int
tags: List[str] = Field(default_factory=list, description="User-defined tags")
created_at: datetime
updated_at: datetime

View File

@@ -12,6 +12,7 @@ class SyncClockCreate(BaseModel):
name: str = Field(description="Clock name", min_length=1, max_length=100)
speed: float = Field(default=1.0, description="Speed multiplier (0.1-10.0)", ge=0.1, le=10.0)
description: Optional[str] = Field(None, description="Optional description", max_length=500)
tags: List[str] = Field(default_factory=list, description="User-defined tags")
class SyncClockUpdate(BaseModel):
@@ -20,6 +21,7 @@ class SyncClockUpdate(BaseModel):
name: Optional[str] = Field(None, description="Clock name", min_length=1, max_length=100)
speed: Optional[float] = Field(None, description="Speed multiplier (0.1-10.0)", ge=0.1, le=10.0)
description: Optional[str] = Field(None, description="Optional description", max_length=500)
tags: Optional[List[str]] = None
class SyncClockResponse(BaseModel):
@@ -29,6 +31,7 @@ class SyncClockResponse(BaseModel):
name: str = Field(description="Clock name")
speed: float = Field(description="Speed multiplier")
description: Optional[str] = Field(None, description="Description")
tags: List[str] = Field(default_factory=list, description="User-defined tags")
is_running: bool = Field(True, description="Whether clock is currently running")
elapsed_time: float = Field(0.0, description="Current elapsed time in seconds")
created_at: datetime = Field(description="Creation timestamp")

View File

@@ -13,6 +13,7 @@ class TemplateCreate(BaseModel):
engine_type: str = Field(description="Engine type (e.g., 'mss', 'dxcam', 'wgc')", min_length=1)
engine_config: Dict = Field(default_factory=dict, description="Engine-specific configuration")
description: Optional[str] = Field(None, description="Template description", max_length=500)
tags: List[str] = Field(default_factory=list, description="User-defined tags")
class TemplateUpdate(BaseModel):
@@ -22,6 +23,7 @@ class TemplateUpdate(BaseModel):
engine_type: Optional[str] = Field(None, description="Capture engine type (mss, dxcam, wgc)")
engine_config: Optional[Dict] = Field(None, description="Engine-specific configuration")
description: Optional[str] = Field(None, description="Template description", max_length=500)
tags: Optional[List[str]] = None
class TemplateResponse(BaseModel):
@@ -31,6 +33,7 @@ class TemplateResponse(BaseModel):
name: str = Field(description="Template name")
engine_type: str = Field(description="Engine type identifier")
engine_config: Dict = Field(description="Engine-specific configuration")
tags: List[str] = Field(default_factory=list, description="User-defined tags")
created_at: datetime = Field(description="Creation timestamp")
updated_at: datetime = Field(description="Last update timestamp")
description: Optional[str] = Field(None, description="Template description")

View File

@@ -29,6 +29,7 @@ class ValueSourceCreate(BaseModel):
picture_source_id: Optional[str] = Field(None, description="Picture source ID for scene mode")
scene_behavior: Optional[str] = Field(None, description="Scene behavior: complement|match")
description: Optional[str] = Field(None, description="Optional description", max_length=500)
tags: List[str] = Field(default_factory=list, description="User-defined tags")
class ValueSourceUpdate(BaseModel):
@@ -53,6 +54,7 @@ class ValueSourceUpdate(BaseModel):
picture_source_id: Optional[str] = Field(None, description="Picture source ID for scene mode")
scene_behavior: Optional[str] = Field(None, description="Scene behavior: complement|match")
description: Optional[str] = Field(None, description="Optional description", max_length=500)
tags: Optional[List[str]] = None
class ValueSourceResponse(BaseModel):
@@ -75,6 +77,7 @@ class ValueSourceResponse(BaseModel):
picture_source_id: Optional[str] = Field(None, description="Picture source ID")
scene_behavior: Optional[str] = Field(None, description="Scene behavior")
description: Optional[str] = Field(None, description="Description")
tags: List[str] = Field(default_factory=list, description="User-defined tags")
created_at: datetime = Field(description="Creation timestamp")
updated_at: datetime = Field(description="Last update timestamp")

View File

@@ -99,6 +99,17 @@ class AudioAnalyzer:
self._spectrum_buf_right = np.zeros(NUM_BANDS, dtype=np.float32)
self._sq_buf = np.empty(chunk_size, dtype=np.float32)
# Double-buffered output spectra — avoids allocating new arrays each
# analyze() call. Consumers hold a reference to the "old" buffer while
# the analyzer writes into the alternate one.
self._out_spectrum = [np.zeros(NUM_BANDS, dtype=np.float32),
np.zeros(NUM_BANDS, dtype=np.float32)]
self._out_spectrum_left = [np.zeros(NUM_BANDS, dtype=np.float32),
np.zeros(NUM_BANDS, dtype=np.float32)]
self._out_spectrum_right = [np.zeros(NUM_BANDS, dtype=np.float32),
np.zeros(NUM_BANDS, dtype=np.float32)]
self._out_idx = 0 # toggles 0/1 each analyze() call
# Pre-compute band start/end arrays and widths for vectorized binning
self._band_starts = np.array([s for s, _ in self._bands], dtype=np.intp)
self._band_ends = np.array([e for _, e in self._bands], dtype=np.intp)
@@ -168,10 +179,14 @@ class AudioAnalyzer:
# FFT for mono, left, right
self._fft_bands(samples, self._spectrum_buf, self._smooth_spectrum,
                alpha, one_minus_alpha)
-self._fft_bands(left_samples, self._spectrum_buf_left, self._smooth_spectrum_left,
-                alpha, one_minus_alpha)
-self._fft_bands(right_samples, self._spectrum_buf_right, self._smooth_spectrum_right,
-                alpha, one_minus_alpha)
+if channels > 1:
+    self._fft_bands(left_samples, self._spectrum_buf_left, self._smooth_spectrum_left,
+                    alpha, one_minus_alpha)
+    self._fft_bands(right_samples, self._spectrum_buf_right, self._smooth_spectrum_right,
+                    alpha, one_minus_alpha)
+else:
+    np.copyto(self._smooth_spectrum_left, self._smooth_spectrum)
+    np.copyto(self._smooth_spectrum_right, self._smooth_spectrum)
# Beat detection — compare current energy to rolling average (mono)
np.multiply(samples, samples, out=self._sq_buf[:n])
@@ -188,17 +203,27 @@ class AudioAnalyzer:
beat = True
beat_intensity = min(1.0, (ratio - 1.0) / 2.0)
# Snapshot spectra into double-buffered output arrays (no allocation)
idx = self._out_idx
self._out_idx = 1 - idx
out_spec = self._out_spectrum[idx]
out_left = self._out_spectrum_left[idx]
out_right = self._out_spectrum_right[idx]
np.copyto(out_spec, self._smooth_spectrum)
np.copyto(out_left, self._smooth_spectrum_left)
np.copyto(out_right, self._smooth_spectrum_right)
return AudioAnalysis(
timestamp=time.perf_counter(),
rms=rms,
peak=peak,
-spectrum=self._smooth_spectrum.copy(),
+spectrum=out_spec,
beat=beat,
beat_intensity=beat_intensity,
left_rms=left_rms,
-left_spectrum=self._smooth_spectrum_left.copy(),
+left_spectrum=out_left,
right_rms=right_rms,
-right_spectrum=self._smooth_spectrum_right.copy(),
+right_spectrum=out_right,
)
def _fft_bands(self, samps, buf, smooth_buf, alpha, one_minus_alpha):

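The "double-buffered output spectra" comment above describes a standard allocation-free snapshot pattern: two preallocated arrays alternate roles, so a consumer can keep reading the previous snapshot while the producer overwrites the other one. A stripped-down sketch (the class name and band count here are illustrative, not from the source):

```python
import numpy as np

NUM_BANDS = 4  # reduced for illustration; the analyzer uses more bands

class DoubleBufferedOutput:
    """Alternate between two preallocated arrays so each snapshot() call
    reuses a buffer instead of allocating a fresh array."""
    def __init__(self):
        self._bufs = [np.zeros(NUM_BANDS, dtype=np.float32),
                      np.zeros(NUM_BANDS, dtype=np.float32)]
        self._idx = 0

    def snapshot(self, src: np.ndarray) -> np.ndarray:
        out = self._bufs[self._idx]
        self._idx = 1 - self._idx  # flip for the next call
        np.copyto(out, src)        # copy in place, no allocation
        return out
```

Note the protection is exactly one frame deep: a consumer that holds a returned array across two further `snapshot()` calls will see it being overwritten, which matches the "consumers hold a reference to the old buffer" assumption stated in the comment.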
View File

@@ -201,20 +201,25 @@ class AutoBackupEngine:
})
return backups
-def delete_backup(self, filename: str) -> None:
-    # Validate filename to prevent path traversal
-    if os.sep in filename or "/" in filename or ".." in filename:
-        raise ValueError("Invalid filename")
-    target = self._backup_dir / filename
+def _safe_backup_path(self, filename: str) -> Path:
+    """Resolve a backup filename to an absolute path, guarding against path traversal."""
+    if not filename or os.sep in filename or "/" in filename or ".." in filename:
+        raise ValueError("Invalid filename")
+    target = (self._backup_dir / filename).resolve()
+    # Ensure resolved path is still inside the backup directory
+    if not target.is_relative_to(self._backup_dir.resolve()):
+        raise ValueError("Invalid filename")
+    return target
+
+def delete_backup(self, filename: str) -> None:
+    target = self._safe_backup_path(filename)
if not target.exists():
    raise FileNotFoundError(f"Backup not found: (unknown)")
target.unlink()
logger.info(f"Deleted backup: (unknown)")
def get_backup_path(self, filename: str) -> Path:
-    if os.sep in filename or "/" in filename or ".." in filename:
-        raise ValueError("Invalid filename")
-    target = self._backup_dir / filename
+    target = self._safe_backup_path(filename)
if not target.exists():
    raise FileNotFoundError(f"Backup not found: (unknown)")
return target

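The `_safe_backup_path` hunk layers two checks: a string-level reject of separators and `..`, then a containment check on the fully resolved path (which also covers symlink tricks). The same guard in isolation; the function name `safe_child_path` is hypothetical, and `Path.is_relative_to` requires Python 3.9+:

```python
import os
from pathlib import Path

def safe_child_path(base: Path, filename: str) -> Path:
    """Resolve `filename` under `base`, rejecting anything that could escape it."""
    # First line of defense: no empty names, no separators, no parent refs
    if not filename or os.sep in filename or "/" in filename or ".." in filename:
        raise ValueError("Invalid filename")
    target = (base / filename).resolve()
    # Second line of defense: the resolved path must still live inside base
    if not target.is_relative_to(base.resolve()):
        raise ValueError("Invalid filename")
    return target
```

Doing both checks is deliberate: the string check fails fast on obvious traversal, while the resolve-and-compare check is what actually guarantees containment on disk.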
View File

@@ -1,7 +1,7 @@
"""Adalight serial LED client — sends pixel data over serial using the Adalight protocol.""" """Adalight serial LED client — sends pixel data over serial using the Adalight protocol."""
import asyncio import asyncio
from datetime import datetime from datetime import datetime, timezone
from typing import List, Optional, Tuple from typing import List, Optional, Tuple
import numpy as np import numpy as np
@@ -199,7 +199,7 @@ class AdalightClient(LEDClient):
return DeviceHealth( return DeviceHealth(
online=True, online=True,
latency_ms=0.0, latency_ms=0.0,
last_checked=datetime.utcnow(), last_checked=datetime.now(timezone.utc),
device_name=prev_health.device_name if prev_health else None, device_name=prev_health.device_name if prev_health else None,
device_version=None, device_version=None,
device_led_count=prev_health.device_led_count if prev_health else None, device_led_count=prev_health.device_led_count if prev_health else None,
@@ -207,12 +207,12 @@ class AdalightClient(LEDClient):
else: else:
return DeviceHealth( return DeviceHealth(
online=False, online=False,
last_checked=datetime.utcnow(), last_checked=datetime.now(timezone.utc),
error=f"Serial port {port} not found", error=f"Serial port {port} not found",
) )
except Exception as e: except Exception as e:
return DeviceHealth( return DeviceHealth(
online=False, online=False,
last_checked=datetime.utcnow(), last_checked=datetime.now(timezone.utc),
error=str(e), error=str(e),
) )

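All the `datetime.utcnow()` call sites in this commit move to `datetime.now(timezone.utc)`. The difference: `utcnow()` is deprecated since Python 3.12 and returns a naive datetime (`tzinfo is None`), while `now(timezone.utc)` returns an aware one that carries its offset through serialization:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # deprecated; no tzinfo attached
aware = datetime.now(timezone.utc)  # timezone-aware UTC timestamp

assert naive.tzinfo is None
assert aware.tzinfo is timezone.utc
# The aware timestamp serializes with an explicit offset:
assert aware.isoformat().endswith("+00:00")
```

This matters for the `last_checked` fields here: naive timestamps are ambiguous once they reach a JSON API consumer in a different timezone, whereas aware ones are unambiguous.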
View File

@@ -190,9 +190,12 @@ class DDPClient:
try:
# Send plain RGB — WLED handles per-bus color order conversion
# internally when outputting to hardware.
-# Convert to numpy to avoid per-pixel Python loop
+# Accept numpy arrays directly to avoid per-pixel Python loop
bpp = 4 if self.rgbw else 3  # bytes per pixel
-pixel_array = np.array(pixels, dtype=np.uint8)
+if isinstance(pixels, np.ndarray):
+    pixel_array = pixels
+else:
+    pixel_array = np.array(pixels, dtype=np.uint8)
if self.rgbw:
n = pixel_array.shape[0]
if n != self._rgbw_buf_n:
@@ -219,7 +222,7 @@ class DDPClient:
for i in range(num_packets):
start = i * bytes_per_packet
end = min(start + bytes_per_packet, total_bytes)
-chunk = bytes(pixel_bytes[start:end])
+chunk = pixel_bytes[start:end]
is_last = (i == num_packets - 1)
# Increment sequence number

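The hunk above removes two copies from the DDP send path: a numpy array is passed through instead of being rebuilt with `np.array(...)`, and the per-packet `bytes(...)` wrapper is dropped so each chunk is a slice of the existing buffer. A sketch of the chunking idea; the function name and packet size are illustrative, and the protocol header that real DDP packets carry is omitted:

```python
import numpy as np

def iter_packets(pixels, bytes_per_packet=1440):
    """Yield (is_last, chunk) pairs over the flattened pixel payload.
    memoryview slices share the underlying buffer, so no per-chunk copy."""
    if isinstance(pixels, np.ndarray):
        arr = pixels.astype(np.uint8, copy=False)  # pass through if already uint8
    else:
        arr = np.array(pixels, dtype=np.uint8)
    payload = memoryview(arr.reshape(-1).tobytes())
    total = len(payload)
    num_packets = max(1, (total + bytes_per_packet - 1) // bytes_per_packet)
    for i in range(num_packets):
        start = i * bytes_per_packet
        chunk = payload[start:start + bytes_per_packet]  # zero-copy slice
        yield (i == num_packets - 1), chunk
```

For a UDP protocol sending 30+ frames per second, dropping the per-packet `bytes()` copy is a small but real win: the socket layer can consume the memoryview slice directly.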
View File

@@ -2,7 +2,7 @@
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Dict, List, Optional, Tuple, Union
import numpy as np
@@ -139,7 +139,7 @@ class LEDClient(ABC):
http_client: Shared httpx.AsyncClient for HTTP requests
prev_health: Previous health result (for preserving cached metadata)
"""
-return DeviceHealth(online=True, last_checked=datetime.utcnow())
+return DeviceHealth(online=True, last_checked=datetime.now(timezone.utc))
async def __aenter__(self):
await self.connect()

View File

@@ -1,7 +1,7 @@
"""Mock LED client — simulates an LED strip with configurable latency for testing.""" """Mock LED client — simulates an LED strip with configurable latency for testing."""
import asyncio import asyncio
from datetime import datetime from datetime import datetime, timezone
from typing import List, Optional, Tuple, Union from typing import List, Optional, Tuple, Union
import numpy as np import numpy as np
@@ -69,5 +69,5 @@ class MockClient(LEDClient):
return DeviceHealth( return DeviceHealth(
online=True, online=True,
latency_ms=0.0, latency_ms=0.0,
last_checked=datetime.utcnow(), last_checked=datetime.now(timezone.utc),
) )

View File

@@ -1,6 +1,6 @@
"""Mock device provider — virtual LED strip for testing.""" """Mock device provider — virtual LED strip for testing."""
from datetime import datetime from datetime import datetime, timezone
from typing import List from typing import List
from wled_controller.core.devices.led_client import ( from wled_controller.core.devices.led_client import (
@@ -28,7 +28,7 @@ class MockDeviceProvider(LEDDeviceProvider):
return MockClient(url, **kwargs) return MockClient(url, **kwargs)
async def check_health(self, url: str, http_client, prev_health=None) -> DeviceHealth: async def check_health(self, url: str, http_client, prev_health=None) -> DeviceHealth:
return DeviceHealth(online=True, latency_ms=0.0, last_checked=datetime.utcnow()) return DeviceHealth(online=True, latency_ms=0.0, last_checked=datetime.now(timezone.utc))
async def validate_device(self, url: str) -> dict: async def validate_device(self, url: str) -> dict:
return {} return {}

View File

@@ -87,12 +87,12 @@ class MQTTLEDClient(LEDClient):
http_client,
prev_health=None,
) -> DeviceHealth:
-from datetime import datetime
+from datetime import datetime, timezone
svc = _mqtt_service
if svc is None or not svc.is_enabled:
-return DeviceHealth(online=False, error="MQTT disabled", last_checked=datetime.utcnow())
+return DeviceHealth(online=False, error="MQTT disabled", last_checked=datetime.now(timezone.utc))
return DeviceHealth(
online=svc.is_connected,
-last_checked=datetime.utcnow(),
+last_checked=datetime.now(timezone.utc),
error=None if svc.is_connected else "MQTT broker disconnected",
)

View File

@@ -4,7 +4,7 @@ import asyncio
import socket
import struct
import threading
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Any, Dict, List, Optional, Tuple, Union
import numpy as np
@@ -428,13 +428,13 @@ class OpenRGBLEDClient(LEDClient):
return DeviceHealth(
online=True,
latency_ms=latency,
-last_checked=datetime.utcnow(),
+last_checked=datetime.now(timezone.utc),
device_name=device_name,
device_led_count=device_led_count,
)
except Exception as e:
return DeviceHealth(
online=False,
-last_checked=datetime.utcnow(),
+last_checked=datetime.now(timezone.utc),
error=str(e),
)

View File

@@ -3,7 +3,7 @@
import asyncio
import time
from dataclasses import dataclass, field
-from datetime import datetime
+from datetime import datetime, timezone
from typing import List, Tuple, Optional, Dict, Any
from urllib.parse import urlparse
@@ -540,7 +540,7 @@ class WLEDClient(LEDClient):
return DeviceHealth(
online=True,
latency_ms=round(latency, 1),
-last_checked=datetime.utcnow(),
+last_checked=datetime.now(timezone.utc),
device_name=data.get("name"),
device_version=data.get("ver"),
device_led_count=leds_info.get("count"),
@@ -553,7 +553,7 @@ class WLEDClient(LEDClient):
return DeviceHealth(
online=False,
latency_ms=None,
-last_checked=datetime.utcnow(),
+last_checked=datetime.now(timezone.utc),
device_name=prev_health.device_name if prev_health else None,
device_version=prev_health.device_version if prev_health else None,
device_led_count=prev_health.device_led_count if prev_health else None,

View File

@@ -1,7 +1,7 @@
"""WebSocket LED client — broadcasts pixel data to connected WebSocket clients.""" """WebSocket LED client — broadcasts pixel data to connected WebSocket clients."""
import asyncio import asyncio
from datetime import datetime from datetime import datetime, timezone
from typing import Dict, List, Optional, Tuple, Union from typing import Dict, List, Optional, Tuple, Union
import numpy as np import numpy as np
@@ -126,5 +126,5 @@ class WSLEDClient(LEDClient):
return DeviceHealth( return DeviceHealth(
online=True, online=True,
latency_ms=0.0, latency_ms=0.0,
last_checked=datetime.utcnow(), last_checked=datetime.now(timezone.utc),
) )

View File

@@ -1,6 +1,6 @@
"""WebSocket device provider — factory, validation, health checks.""" """WebSocket device provider — factory, validation, health checks."""
from datetime import datetime from datetime import datetime, timezone
from typing import List from typing import List
from wled_controller.core.devices.led_client import ( from wled_controller.core.devices.led_client import (
@@ -33,7 +33,7 @@ class WSDeviceProvider(LEDDeviceProvider):
self, url: str, http_client, prev_health=None, self, url: str, http_client, prev_health=None,
) -> DeviceHealth: ) -> DeviceHealth:
return DeviceHealth( return DeviceHealth(
online=True, latency_ms=0.0, last_checked=datetime.utcnow(), online=True, latency_ms=0.0, last_checked=datetime.now(timezone.utc),
) )
async def validate_device(self, url: str) -> dict: async def validate_device(self, url: str) -> dict:

View File

@@ -46,6 +46,7 @@ class AudioColorStripStream(ColorStripStream):
self._running = False
self._thread: Optional[threading.Thread] = None
self._fps = 30
self._frame_time = 1.0 / 30
# Per-frame timing (read by WledTargetProcessor via get_last_timing())
self._last_timing: dict = {}
@@ -128,6 +129,7 @@ class AudioColorStripStream(ColorStripStream):
def set_capture_fps(self, fps: int) -> None:
self._fps = max(1, min(90, fps))
self._frame_time = 1.0 / self._fps
def start(self) -> None:
if self._running:
@@ -233,7 +235,7 @@ class AudioColorStripStream(ColorStripStream):
with high_resolution_timer():
while self._running:
loop_start = time.perf_counter()
-frame_time = 1.0 / self._fps
+frame_time = self._frame_time
try:
n = self._led_count
View File

@@ -587,6 +587,7 @@ class StaticColorStripStream(ColorStripStream):
self._running = False
self._thread: Optional[threading.Thread] = None
self._fps = 30
self._frame_time = 1.0 / 30
self._clock = None  # optional SyncClockRuntime
self._update_from_source(source)
@@ -636,6 +637,7 @@ class StaticColorStripStream(ColorStripStream):
"""Update animation loop rate. Thread-safe (read atomically by the loop)."""
fps = max(1, min(90, fps))
self._fps = fps
self._frame_time = 1.0 / fps
def start(self) -> None:
if self._running:
@@ -693,7 +695,7 @@ class StaticColorStripStream(ColorStripStream):
with high_resolution_timer():
while self._running:
wall_start = time.perf_counter()
-frame_time = 1.0 / self._fps
+frame_time = self._frame_time
try:
anim = self._animation
if anim and anim.get("enabled"):
@@ -807,6 +809,7 @@ class ColorCycleColorStripStream(ColorStripStream):
self._running = False
self._thread: Optional[threading.Thread] = None
self._fps = 30
self._frame_time = 1.0 / 30
self._clock = None  # optional SyncClockRuntime
self._update_from_source(source)
@@ -849,6 +852,7 @@ class ColorCycleColorStripStream(ColorStripStream):
"""Update animation loop rate. Thread-safe (read atomically by the loop)."""
fps = max(1, min(90, fps))
self._fps = fps
self._frame_time = 1.0 / fps
def start(self) -> None:
if self._running:
@@ -902,7 +906,7 @@ class ColorCycleColorStripStream(ColorStripStream):
with high_resolution_timer():
while self._running:
wall_start = time.perf_counter()
-frame_time = 1.0 / self._fps
+frame_time = self._frame_time
try:
color_list = self._color_list
clock = self._clock
@@ -967,6 +971,7 @@ class GradientColorStripStream(ColorStripStream):
self._running = False self._running = False
self._thread: Optional[threading.Thread] = None self._thread: Optional[threading.Thread] = None
self._fps = 30 self._fps = 30
self._frame_time = 1.0 / 30
self._clock = None # optional SyncClockRuntime self._clock = None # optional SyncClockRuntime
self._update_from_source(source) self._update_from_source(source)
@@ -1015,6 +1020,7 @@ class GradientColorStripStream(ColorStripStream):
"""Update animation loop rate. Thread-safe (read atomically by the loop).""" """Update animation loop rate. Thread-safe (read atomically by the loop)."""
fps = max(1, min(90, fps)) fps = max(1, min(90, fps))
self._fps = fps self._fps = fps
self._frame_time = 1.0 / fps
def start(self) -> None: def start(self) -> None:
if self._running: if self._running:
@@ -1077,7 +1083,7 @@ class GradientColorStripStream(ColorStripStream):
with high_resolution_timer(): with high_resolution_timer():
while self._running: while self._running:
wall_start = time.perf_counter() wall_start = time.perf_counter()
frame_time = 1.0 / self._fps frame_time = self._frame_time
try: try:
anim = self._animation anim = self._animation
if anim and anim.get("enabled"): if anim and anim.get("enabled"):
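The hunks above all apply the same micro-optimization: the per-frame `1.0 / self._fps` division is replaced with a `_frame_time` value precomputed whenever the FPS changes, so the hot loop reads one float that is always updated together with `_fps`. A minimal standalone sketch of the pattern (class and method names here are illustrative, not from the codebase):

```python
import time

class FrameTimer:
    """Caches the frame interval so the hot loop avoids a per-frame division."""

    def __init__(self, fps: int = 30) -> None:
        self._fps = fps
        self._frame_time = 1.0 / fps  # always updated together with _fps

    def set_fps(self, fps: int) -> None:
        fps = max(1, min(90, fps))  # clamp to the supported 1–90 range
        self._fps = fps
        self._frame_time = 1.0 / fps

    def sleep_remainder(self, loop_start: float) -> None:
        # Sleep whatever is left of the frame budget after this iteration.
        remaining = self._frame_time - (time.perf_counter() - loop_start)
        if remaining > 0:
            time.sleep(remaining)

timer = FrameTimer()
timer.set_fps(60)
```

Because both fields change in one call, a loop on another thread sees a consistent interval without taking a lock for every frame.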


@@ -46,6 +46,8 @@ class _ColorStripEntry:
     picture_source_ids: list = None
     # Per-consumer target FPS values (target_id → fps)
     target_fps: Dict[str, int] = None
+    # Clock ID currently acquired for this stream (for correct release)
+    clock_id: Optional[str] = None
 
     def __post_init__(self):
         if self.picture_source_ids is None:
@@ -79,24 +81,36 @@ class ColorStripStreamManager:
         self._sync_clock_manager = sync_clock_manager
         self._streams: Dict[str, _ColorStripEntry] = {}
 
-    def _inject_clock(self, css_stream, source) -> None:
-        """Inject a SyncClockRuntime into the stream if source has clock_id."""
+    def _inject_clock(self, css_stream, source) -> Optional[str]:
+        """Inject a SyncClockRuntime into the stream if source has clock_id.
+
+        Returns the clock_id that was acquired, or None.
+        """
         clock_id = getattr(source, "clock_id", None)
         if clock_id and self._sync_clock_manager and hasattr(css_stream, "set_clock"):
             try:
                 clock_rt = self._sync_clock_manager.acquire(clock_id)
                 css_stream.set_clock(clock_rt)
                 logger.debug(f"Injected clock {clock_id} into stream for {source.id}")
+                return clock_id
             except Exception as e:
                 logger.warning(f"Could not inject clock {clock_id}: {e}")
+        return None
 
-    def _release_clock(self, source_id: str, stream) -> None:
-        """Release the clock runtime acquired for a stream."""
+    def _release_clock(self, source_id: str, stream, clock_id: str = None) -> None:
+        """Release the clock runtime acquired for a stream.
+
+        Args:
+            source_id: CSS source ID (used as fallback to look up clock_id from store)
+            stream: The stream instance (unused, kept for API compat)
+            clock_id: Explicit clock_id to release. If None, looks up from store.
+        """
         if not self._sync_clock_manager:
             return
         try:
-            source = self._color_strip_store.get_source(source_id)
-            clock_id = getattr(source, "clock_id", None)
+            if not clock_id:
+                source = self._color_strip_store.get_source(source_id)
+                clock_id = getattr(source, "clock_id", None)
             if clock_id:
                 self._sync_clock_manager.release(clock_id)
         except Exception:
@@ -153,11 +167,12 @@ class ColorStripStreamManager:
         )
         css_stream = stream_cls(source)
         # Inject sync clock runtime if source references a clock
-        self._inject_clock(css_stream, source)
+        acquired_clock_id = self._inject_clock(css_stream, source)
         css_stream.start()
         key = f"{css_id}:{consumer_id}" if consumer_id else css_id
         self._streams[key] = _ColorStripEntry(
             stream=css_stream, ref_count=1, picture_source_ids=[],
+            clock_id=acquired_clock_id,
         )
         logger.info(f"Created {source.source_type} stream {key}")
         return css_stream
@@ -249,8 +264,9 @@ class ColorStripStreamManager:
             logger.error(f"Error stopping color strip stream {key}: {e}")
         # Release clock runtime if acquired
-        source_id = key.split(":")[0] if ":" in key else key
-        self._release_clock(source_id, entry.stream)
+        if entry.clock_id:
+            source_id = key.split(":")[0] if ":" in key else key
+            self._release_clock(source_id, entry.stream, clock_id=entry.clock_id)
         picture_source_ids = entry.picture_source_ids
         del self._streams[key]
@@ -282,26 +298,28 @@ class ColorStripStreamManager:
         for key in matching_keys:
             entry = self._streams[key]
+            old_clock_id = entry.clock_id
             entry.stream.update_source(new_source)
             # Hot-swap clock if clock_id changed
             if hasattr(entry.stream, "set_clock") and self._sync_clock_manager:
                 new_clock_id = getattr(new_source, "clock_id", None)
-                old_clock = getattr(entry.stream, "_clock", None)
                 if new_clock_id:
-                    try:
-                        clock_rt = self._sync_clock_manager.acquire(new_clock_id)
-                        entry.stream.set_clock(clock_rt)
-                        # Release old clock if different
-                        if old_clock:
-                            # Find the old clock_id (best-effort)
-                            source_id = key.split(":")[0] if ":" in key else key
-                            self._release_clock(source_id, entry.stream)
-                    except Exception as e:
-                        logger.warning(f"Could not hot-swap clock {new_clock_id}: {e}")
-                elif old_clock:
+                    if new_clock_id != old_clock_id:
+                        try:
+                            clock_rt = self._sync_clock_manager.acquire(new_clock_id)
+                            entry.stream.set_clock(clock_rt)
+                            entry.clock_id = new_clock_id
+                            # Release old clock after acquiring new one
+                            if old_clock_id:
+                                source_id = key.split(":")[0] if ":" in key else key
+                                self._release_clock(source_id, entry.stream, clock_id=old_clock_id)
+                        except Exception as e:
+                            logger.warning(f"Could not hot-swap clock {new_clock_id}: {e}")
+                elif old_clock_id:
                     entry.stream.set_clock(None)
+                    entry.clock_id = None
                     source_id = key.split(":")[0] if ":" in key else key
-                    self._release_clock(source_id, entry.stream)
+                    self._release_clock(source_id, entry.stream, clock_id=old_clock_id)
         # Track picture source changes for future reference counting
         from wled_controller.storage.color_strip_source import PictureColorStripSource, AdvancedPictureColorStripSource


@@ -36,6 +36,7 @@ class CompositeColorStripStream(ColorStripStream):
         self._auto_size: bool = source.led_count == 0
         self._css_manager = css_manager
         self._fps: int = 30
+        self._frame_time: float = 1.0 / 30
         self._running = False
         self._thread: Optional[threading.Thread] = None
@@ -44,6 +45,7 @@ class CompositeColorStripStream(ColorStripStream):
         # layer_index -> (source_id, consumer_id, stream)
         self._sub_streams: Dict[int, tuple] = {}
+        self._sub_lock = threading.Lock()  # guards _sub_streams access across threads
 
         # Pre-allocated scratch (rebuilt when LED count changes)
         self._pool_n = 0
@@ -60,6 +62,10 @@ class CompositeColorStripStream(ColorStripStream):
     def target_fps(self) -> int:
         return self._fps
 
+    def set_capture_fps(self, fps: int) -> None:
+        self._fps = max(1, min(90, fps))
+        self._frame_time = 1.0 / self._fps
+
     @property
     def led_count(self) -> int:
         return self._led_count
@@ -69,7 +75,8 @@ class CompositeColorStripStream(ColorStripStream):
         return True
 
     def start(self) -> None:
-        self._acquire_sub_streams()
+        with self._sub_lock:
+            self._acquire_sub_streams()
         self._running = True
         self._thread = threading.Thread(
             target=self._processing_loop, daemon=True,
@@ -86,7 +93,8 @@ class CompositeColorStripStream(ColorStripStream):
         if self._thread is not None:
             self._thread.join(timeout=5.0)
             self._thread = None
-        self._release_sub_streams()
+        with self._sub_lock:
+            self._release_sub_streams()
         logger.info(f"CompositeColorStripStream stopped: {self._source_id}")
 
     def get_latest_colors(self) -> Optional[np.ndarray]:
@@ -97,7 +105,9 @@ class CompositeColorStripStream(ColorStripStream):
         if self._auto_size and device_led_count > 0 and device_led_count != self._led_count:
             self._led_count = device_led_count
             # Re-configure sub-streams that support auto-sizing
-            for _idx, (src_id, consumer_id, stream) in self._sub_streams.items():
+            with self._sub_lock:
+                snapshot = dict(self._sub_streams)
+            for _idx, (src_id, consumer_id, stream) in snapshot.items():
                 if hasattr(stream, "configure"):
                     stream.configure(device_led_count)
             logger.debug(f"CompositeColorStripStream auto-sized to {device_led_count} LEDs")
@@ -118,8 +128,9 @@ class CompositeColorStripStream(ColorStripStream):
         # If layer composition changed, rebuild sub-streams
         if old_layer_ids != new_layer_ids:
-            self._release_sub_streams()
-            self._acquire_sub_streams()
+            with self._sub_lock:
+                self._release_sub_streams()
+                self._acquire_sub_streams()
             logger.info(f"CompositeColorStripStream rebuilt sub-streams: {self._source_id}")
 
     # ── Sub-stream lifecycle ────────────────────────────────────
@@ -256,7 +267,7 @@ class CompositeColorStripStream(ColorStripStream):
         try:
             while self._running:
                 loop_start = time.perf_counter()
-                frame_time = 1.0 / self._fps
+                frame_time = self._frame_time
 
                 try:
                     target_n = self._led_count
@@ -270,13 +281,16 @@ class CompositeColorStripStream(ColorStripStream):
                     self._use_a = not self._use_a
 
                     has_result = False
+                    with self._sub_lock:
+                        sub_snapshot = dict(self._sub_streams)
                     for i, layer in enumerate(self._layers):
                         if not layer.get("enabled", True):
                             continue
-                        if i not in self._sub_streams:
+                        if i not in sub_snapshot:
                             continue
-                        _src_id, _consumer_id, stream = self._sub_streams[i]
+                        _src_id, _consumer_id, stream = sub_snapshot[i]
                         colors = stream.get_latest_colors()
                         if colors is None:
                             continue


@@ -182,6 +182,7 @@ class EffectColorStripStream(ColorStripStream):
         self._running = False
         self._thread: Optional[threading.Thread] = None
         self._fps = 30
+        self._frame_time = 1.0 / 30
         self._clock = None  # optional SyncClockRuntime
         self._effective_speed = 1.0  # resolved speed (from clock or source)
         self._noise = _ValueNoise1D(seed=42)
@@ -233,6 +234,7 @@ class EffectColorStripStream(ColorStripStream):
     def set_capture_fps(self, fps: int) -> None:
         self._fps = max(1, min(90, fps))
+        self._frame_time = 1.0 / self._fps
 
     def start(self) -> None:
         if self._running:
@@ -294,7 +296,7 @@ class EffectColorStripStream(ColorStripStream):
         with high_resolution_timer():
             while self._running:
                 wall_start = time.perf_counter()
-                frame_time = 1.0 / self._fps
+                frame_time = self._frame_time
                 try:
                     # Resolve animation time and speed from clock or local
                     clock = self._clock


@@ -6,7 +6,7 @@ import asyncio
 import collections
 import json
 import time
-from datetime import datetime
+from datetime import datetime, timezone
 from typing import Dict, List, Optional, Tuple
 
 import cv2
@@ -169,7 +169,7 @@ class KCTargetProcessor(TargetProcessor):
         self._value_stream = None
 
         # Reset metrics
-        self._metrics = ProcessingMetrics(start_time=datetime.utcnow())
+        self._metrics = ProcessingMetrics(start_time=datetime.now(timezone.utc))
         self._previous_colors = None
         self._latest_colors = None
@@ -276,7 +276,7 @@ class KCTargetProcessor(TargetProcessor):
         metrics = self._metrics
         uptime = 0.0
         if metrics.start_time and self._is_running:
-            uptime = (datetime.utcnow() - metrics.start_time).total_seconds()
+            uptime = (datetime.now(timezone.utc) - metrics.start_time).total_seconds()
 
         return {
             "target_id": self._target_id,
@@ -417,7 +417,7 @@ class KCTargetProcessor(TargetProcessor):
                     # Update metrics
                     self._metrics.frames_processed += 1
-                    self._metrics.last_update = datetime.utcnow()
+                    self._metrics.last_update = datetime.now(timezone.utc)
 
                     # Calculate actual FPS
                     now = time.perf_counter()
@@ -452,6 +452,7 @@ class KCTargetProcessor(TargetProcessor):
         except Exception as e:
             logger.error(f"Fatal error in KC processing loop for target {self._target_id}: {e}")
             self._is_running = False
+            self._ctx.fire_event({"type": "state_change", "target_id": self._target_id, "processing": False, "crashed": True})
             raise
         finally:
             logger.info(f"KC processing loop ended for target {self._target_id}")
@@ -468,7 +469,7 @@ class KCTargetProcessor(TargetProcessor):
                 name: {"r": c[0], "g": c[1], "b": c[2]}
                 for name, c in colors.items()
             },
-            "timestamp": datetime.utcnow().isoformat(),
+            "timestamp": datetime.now(timezone.utc).isoformat(),
         })
 
         async def _send_safe(ws):
@@ -478,8 +479,9 @@ class KCTargetProcessor(TargetProcessor):
             except Exception:
                 return False
 
-        results = await asyncio.gather(*[_send_safe(ws) for ws in self._ws_clients])
-        disconnected = [ws for ws, ok in zip(self._ws_clients, results) if not ok]
-        for ws in disconnected:
-            self._ws_clients.remove(ws)
+        clients = list(self._ws_clients)
+        results = await asyncio.gather(*[_send_safe(ws) for ws in clients])
+        for ws, ok in zip(clients, results):
+            if not ok and ws in self._ws_clients:
+                self._ws_clients.remove(ws)


@@ -75,6 +75,7 @@ class ScreenCaptureLiveStream(LiveStream):
     def __init__(self, capture_stream: CaptureStream, fps: int):
         self._capture_stream = capture_stream
         self._fps = fps
+        self._frame_time = 1.0 / fps if fps > 0 else 1.0
         self._latest_frame: Optional[ScreenCapture] = None
         self._frame_lock = threading.Lock()
         self._running = False
@@ -128,7 +129,7 @@ class ScreenCaptureLiveStream(LiveStream):
         return self._latest_frame
 
     def _capture_loop(self) -> None:
-        frame_time = 1.0 / self._fps if self._fps > 0 else 1.0
+        frame_time = self._frame_time
         consecutive_errors = 0
         try:
             with high_resolution_timer():


@@ -31,6 +31,7 @@ class MappedColorStripStream(ColorStripStream):
         self._auto_size: bool = source.led_count == 0
         self._css_manager = css_manager
         self._fps: int = 30
+        self._frame_time: float = 1.0 / 30
         self._running = False
         self._thread: Optional[threading.Thread] = None
@@ -39,6 +40,7 @@ class MappedColorStripStream(ColorStripStream):
         # zone_index -> (source_id, consumer_id, stream)
         self._sub_streams: Dict[int, tuple] = {}
+        self._sub_lock = threading.Lock()  # guards _sub_streams access across threads
 
     # ── ColorStripStream interface ──────────────────────────────
@@ -46,6 +48,10 @@ class MappedColorStripStream(ColorStripStream):
     def target_fps(self) -> int:
         return self._fps
 
+    def set_capture_fps(self, fps: int) -> None:
+        self._fps = max(1, min(90, fps))
+        self._frame_time = 1.0 / self._fps
+
     @property
     def led_count(self) -> int:
         return self._led_count
@@ -55,7 +61,8 @@ class MappedColorStripStream(ColorStripStream):
         return True
 
     def start(self) -> None:
-        self._acquire_sub_streams()
+        with self._sub_lock:
+            self._acquire_sub_streams()
         self._running = True
         self._thread = threading.Thread(
             target=self._processing_loop, daemon=True,
@@ -72,7 +79,8 @@ class MappedColorStripStream(ColorStripStream):
         if self._thread is not None:
             self._thread.join(timeout=5.0)
             self._thread = None
-        self._release_sub_streams()
+        with self._sub_lock:
+            self._release_sub_streams()
         logger.info(f"MappedColorStripStream stopped: {self._source_id}")
 
     def get_latest_colors(self) -> Optional[np.ndarray]:
@@ -82,7 +90,8 @@ class MappedColorStripStream(ColorStripStream):
     def configure(self, device_led_count: int) -> None:
         if self._auto_size and device_led_count > 0 and device_led_count != self._led_count:
             self._led_count = device_led_count
-            self._reconfigure_sub_streams()
+            with self._sub_lock:
+                self._reconfigure_sub_streams()
             logger.debug(f"MappedColorStripStream auto-sized to {device_led_count} LEDs")
 
     def update_source(self, source) -> None:
@@ -100,8 +109,9 @@ class MappedColorStripStream(ColorStripStream):
             self._auto_size = False
 
         if old_zone_ids != new_zone_ids:
-            self._release_sub_streams()
-            self._acquire_sub_streams()
+            with self._sub_lock:
+                self._release_sub_streams()
+                self._acquire_sub_streams()
             logger.info(f"MappedColorStripStream rebuilt sub-streams: {self._source_id}")
 
     # ── Sub-stream lifecycle ────────────────────────────────────
@@ -152,7 +162,7 @@ class MappedColorStripStream(ColorStripStream):
     # ── Processing loop ─────────────────────────────────────────
 
     def _processing_loop(self) -> None:
-        frame_time = 1.0 / self._fps
+        frame_time = self._frame_time
         try:
             while self._running:
                 loop_start = time.perf_counter()
@@ -165,11 +175,14 @@ class MappedColorStripStream(ColorStripStream):
                     result = np.zeros((target_n, 3), dtype=np.uint8)
 
+                    with self._sub_lock:
+                        sub_snapshot = dict(self._sub_streams)
                     for i, zone in enumerate(self._zones):
-                        if i not in self._sub_streams:
+                        if i not in sub_snapshot:
                             continue
-                        _src_id, _consumer_id, stream = self._sub_streams[i]
+                        _src_id, _consumer_id, stream = sub_snapshot[i]
                         colors = stream.get_latest_colors()
                         if colors is None:
                             continue


@@ -2,7 +2,7 @@
 import asyncio
 from collections import deque
-from datetime import datetime
+from datetime import datetime, timezone
 from typing import Dict, Optional
 
 from wled_controller.utils import get_logger
@@ -22,7 +22,7 @@ def _collect_system_snapshot() -> dict:
     mem = psutil.virtual_memory()
     snapshot = {
-        "t": datetime.utcnow().isoformat(),
+        "t": datetime.now(timezone.utc).isoformat(),
         "cpu": psutil.cpu_percent(interval=None),
         "ram_pct": mem.percent,
         "ram_used": round(mem.used / 1024 / 1024, 1),
@@ -95,7 +95,7 @@ class MetricsHistory:
         except Exception:
             all_states = {}
 
-        now = datetime.utcnow().isoformat()
+        now = datetime.now(timezone.utc).isoformat()
         active_ids = set()
         for target_id, state in all_states.items():
             active_ids.add(target_id)
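The `datetime.utcnow()` → `datetime.now(timezone.utc)` substitution repeated across these files is more than style: `utcnow()` is deprecated (since Python 3.12) and returns a *naive* datetime, while `now(timezone.utc)` returns an *aware* one whose `isoformat()` carries an explicit offset and which cannot be accidentally mixed with local-time values. A short demonstration:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # deprecated; tzinfo is None
aware = datetime.now(timezone.utc)  # tzinfo is UTC

# Aware timestamps serialize with an explicit offset, e.g. "...+00:00".
stamp = aware.isoformat()

# Uptime arithmetic works the same between two aware datetimes;
# subtracting a naive from an aware datetime would raise TypeError.
uptime = (datetime.now(timezone.utc) - aware).total_seconds()
```

This keeps every timestamp the backend emits unambiguous for clients in other time zones.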


@@ -53,6 +53,7 @@ class NotificationColorStripStream(ColorStripStream):
         self._running = False
         self._thread: Optional[threading.Thread] = None
         self._fps = 30
+        self._frame_time = 1.0 / 30
 
         # Event queue: deque of (color_rgb_tuple, start_time)
         self._event_queue: collections.deque = collections.deque(maxlen=16)
@@ -119,6 +120,10 @@ class NotificationColorStripStream(ColorStripStream):
     def target_fps(self) -> int:
         return self._fps
 
+    def set_capture_fps(self, fps: int) -> None:
+        self._fps = max(1, min(90, fps))
+        self._frame_time = 1.0 / self._fps
+
     @property
     def is_animated(self) -> bool:
         return True
@@ -179,7 +184,7 @@ class NotificationColorStripStream(ColorStripStream):
         try:
             while self._running:
                 wall_start = time.perf_counter()
-                frame_time = 1.0 / self._fps
+                frame_time = self._frame_time
                 try:
                     # Check for new events


@@ -122,6 +122,10 @@ class ProcessorManager:
     def metrics_history(self) -> MetricsHistory:
         return self._metrics_history
 
+    @property
+    def color_strip_stream_manager(self) -> ColorStripStreamManager:
+        return self._color_strip_stream_manager
+
     # ===== SHARED CONTEXT (passed to target processors) =====
 
     def _build_context(self) -> TargetContext:
@@ -821,8 +825,8 @@ class ProcessorManager:
             return
         # Skip periodic health checks for virtual devices (always online)
         if "health_check" not in get_device_capabilities(state.device_type):
-            from datetime import datetime
-            state.health = DeviceHealth(online=True, latency_ms=0.0, last_checked=datetime.utcnow())
+            from datetime import datetime, timezone
+            state.health = DeviceHealth(online=True, latency_ms=0.0, last_checked=datetime.now(timezone.utc))
             return
         if state.health_task and not state.health_task.done():
             return
@@ -897,6 +901,22 @@ class ProcessorManager:
     # ===== HELPERS =====
 
+    def has_device(self, device_id: str) -> bool:
+        """Check if a device is registered."""
+        return device_id in self._devices
+
+    def find_device_state(self, device_id: str) -> Optional[DeviceState]:
+        """Get device state, returning None if not registered."""
+        return self._devices.get(device_id)
+
+    async def send_clear_pixels(self, device_id: str) -> None:
+        """Send all-black pixels to a device (public wrapper)."""
+        await self._send_clear_pixels(device_id)
+
+    def get_processor(self, target_id: str) -> Optional[TargetProcessor]:
+        """Look up a processor by target_id, returning None if not found."""
+        return self._processors.get(target_id)
+
     def _get_processor(self, target_id: str) -> TargetProcessor:
         """Look up a processor by target_id, raising ValueError if not found."""
         proc = self._processors.get(target_id)


@@ -4,6 +4,7 @@ Runtimes are created lazily when a stream first acquires a clock and
 destroyed when the last consumer releases it.
 """
+import threading
 from typing import Dict, Optional
 
 from wled_controller.core.processing.sync_clock_runtime import SyncClockRuntime
@@ -18,6 +19,7 @@ class SyncClockManager:
     def __init__(self, store: SyncClockStore) -> None:
         self._store = store
+        self._lock = threading.Lock()
         self._runtimes: Dict[str, SyncClockRuntime] = {}
         self._ref_counts: Dict[str, int] = {}
@@ -25,56 +27,62 @@ class SyncClockManager:
     def acquire(self, clock_id: str) -> SyncClockRuntime:
         """Get or create a runtime for *clock_id* (ref-counted)."""
-        if clock_id in self._runtimes:
-            self._ref_counts[clock_id] += 1
-            logger.debug(f"SyncClock {clock_id} ref++ → {self._ref_counts[clock_id]}")
-            return self._runtimes[clock_id]
-
-        clock_cfg = self._store.get_clock(clock_id)  # raises ValueError if missing
-        rt = SyncClockRuntime(speed=clock_cfg.speed)
-        self._runtimes[clock_id] = rt
-        self._ref_counts[clock_id] = 1
-        logger.info(f"SyncClock runtime created: {clock_id} (speed={clock_cfg.speed})")
-        return rt
+        with self._lock:
+            if clock_id in self._runtimes:
+                self._ref_counts[clock_id] += 1
+                logger.debug(f"SyncClock {clock_id} ref++ → {self._ref_counts[clock_id]}")
+                return self._runtimes[clock_id]
+
+            clock_cfg = self._store.get_clock(clock_id)  # raises ValueError if missing
+            rt = SyncClockRuntime(speed=clock_cfg.speed)
+            self._runtimes[clock_id] = rt
+            self._ref_counts[clock_id] = 1
+            logger.info(f"SyncClock runtime created: {clock_id} (speed={clock_cfg.speed})")
+            return rt
 
     def release(self, clock_id: str) -> None:
         """Decrement ref count; destroy runtime when it reaches zero."""
-        if clock_id not in self._ref_counts:
-            return
-        self._ref_counts[clock_id] -= 1
-        logger.debug(f"SyncClock {clock_id} ref-- → {self._ref_counts[clock_id]}")
-        if self._ref_counts[clock_id] <= 0:
-            del self._runtimes[clock_id]
-            del self._ref_counts[clock_id]
-            logger.info(f"SyncClock runtime destroyed: {clock_id}")
+        with self._lock:
+            if clock_id not in self._ref_counts:
+                return
+            self._ref_counts[clock_id] -= 1
+            logger.debug(f"SyncClock {clock_id} ref-- → {self._ref_counts[clock_id]}")
+            if self._ref_counts[clock_id] <= 0:
+                del self._runtimes[clock_id]
+                del self._ref_counts[clock_id]
+                logger.info(f"SyncClock runtime destroyed: {clock_id}")
 
     def release_all_for(self, clock_id: str) -> None:
         """Force-release all references to *clock_id* (used on delete)."""
-        self._runtimes.pop(clock_id, None)
-        self._ref_counts.pop(clock_id, None)
+        with self._lock:
+            self._runtimes.pop(clock_id, None)
+            self._ref_counts.pop(clock_id, None)
 
     def release_all(self) -> None:
         """Destroy all runtimes (shutdown)."""
-        self._runtimes.clear()
-        self._ref_counts.clear()
+        with self._lock:
+            self._runtimes.clear()
+            self._ref_counts.clear()
 
     # ── Lookup (no ref counting) ──────────────────────────────────
 
     def get_runtime(self, clock_id: str) -> Optional[SyncClockRuntime]:
         """Return an existing runtime or *None* (does not create one)."""
-        return self._runtimes.get(clock_id)
+        with self._lock:
+            return self._runtimes.get(clock_id)
 
     def _ensure_runtime(self, clock_id: str) -> SyncClockRuntime:
         """Return existing runtime or create a zero-ref one for API control."""
-        rt = self._runtimes.get(clock_id)
-        if rt:
-            return rt
-        clock_cfg = self._store.get_clock(clock_id)
-        rt = SyncClockRuntime(speed=clock_cfg.speed)
-        self._runtimes[clock_id] = rt
-        self._ref_counts[clock_id] = 0
-        logger.info(f"SyncClock runtime created (API): {clock_id} (speed={clock_cfg.speed})")
-        return rt
+        with self._lock:
+            rt = self._runtimes.get(clock_id)
+            if rt:
+                return rt
+            clock_cfg = self._store.get_clock(clock_id)
+            rt = SyncClockRuntime(speed=clock_cfg.speed)
+            self._runtimes[clock_id] = rt
+            self._ref_counts[clock_id] = 0
+            logger.info(f"SyncClock runtime created (API): {clock_id} (speed={clock_cfg.speed})")
            return rt
 
     # ── Delegated control ─────────────────────────────────────────

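The hunk above routes every read-modify-write of the runtime and ref-count dicts through one lock. A minimal standalone sketch of that pattern (class and method names hypothetical, not the project's actual manager):

```python
import threading


class RuntimePool:
    """Sketch of a lock-guarded, ref-counted runtime pool.

    Every access to the two dicts happens under one lock, so an
    acquire/release on the API thread cannot interleave with a
    lookup on a worker thread.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._runtimes = {}    # clock_id -> runtime object
        self._ref_counts = {}  # clock_id -> int

    def acquire(self, clock_id, factory):
        with self._lock:
            rt = self._runtimes.get(clock_id)
            if rt is None:
                rt = factory()
                self._runtimes[clock_id] = rt
                self._ref_counts[clock_id] = 0
            self._ref_counts[clock_id] += 1
            return rt

    def release(self, clock_id):
        with self._lock:
            count = self._ref_counts.get(clock_id, 0) - 1
            if count <= 0:
                # Last reference gone: destroy the runtime.
                self._runtimes.pop(clock_id, None)
                self._ref_counts.pop(clock_id, None)
            else:
                self._ref_counts[clock_id] = count


pool = RuntimePool()
a = pool.acquire("clock-1", factory=object)
b = pool.acquire("clock-1", factory=object)
assert a is b                 # second acquire reuses the runtime
pool.release("clock-1")
pool.release("clock-1")
print(len(pool._runtimes))    # 0 — destroyed once refs hit zero
```

Holding the lock across the whole create-or-reuse path is what prevents the double-create race that a check-then-insert outside the lock would allow.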
View File

@@ -44,9 +44,10 @@ class SyncClockRuntime:
     Returns *real* (wall-clock) elapsed time, not speed-scaled.
     """
-    if not self._running:
-        return self._offset
-    return self._offset + (time.perf_counter() - self._epoch)
+    with self._lock:
+        if not self._running:
+            return self._offset
+        return self._offset + (time.perf_counter() - self._epoch)

     # ── Control ────────────────────────────────────────────────────

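The fix above takes the lock for the compound read of `_running`, `_offset`, and `_epoch`, matching the writers. A sketch of a pausable clock with the same locking discipline (names hypothetical; only the locking rule mirrors the hunk):

```python
import threading
import time


class PausableClock:
    """_running, _offset and _epoch form one compound state, so both
    readers and writers take the same lock; an unlocked reader could
    see _offset already bumped by pause() but _running still True and
    double-count the elapsed interval."""

    def __init__(self):
        self._lock = threading.Lock()
        self._running = False
        self._offset = 0.0   # accumulated seconds while paused
        self._epoch = 0.0    # perf_counter() at last resume

    def resume(self):
        with self._lock:
            if not self._running:
                self._epoch = time.perf_counter()
                self._running = True

    def pause(self):
        with self._lock:
            if self._running:
                self._offset += time.perf_counter() - self._epoch
                self._running = False

    def get_time(self):
        with self._lock:  # snapshot all three fields atomically
            if not self._running:
                return self._offset
            return self._offset + (time.perf_counter() - self._epoch)


clock = PausableClock()
clock.resume()
time.sleep(0.05)
clock.pause()
frozen = clock.get_time()
time.sleep(0.05)
assert clock.get_time() == frozen  # paused clock does not advance
```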
View File

@@ -5,7 +5,7 @@ from __future__ import annotations
 import asyncio
 import collections
 import time
-from datetime import datetime
+from datetime import datetime, timezone
 from typing import Optional

 import httpx
@@ -173,7 +173,7 @@ class WledTargetProcessor(TargetProcessor):
         self._value_stream = None

         # Reset metrics and start loop
-        self._metrics = ProcessingMetrics(start_time=datetime.utcnow())
+        self._metrics = ProcessingMetrics(start_time=datetime.now(timezone.utc))
         self._is_running = True
         self._task = asyncio.create_task(self._processing_loop())
@@ -404,7 +404,7 @@ class WledTargetProcessor(TargetProcessor):
         fps_target = self._target_fps
         uptime_seconds = 0.0
         if metrics.start_time and self._is_running:
-            uptime_seconds = (datetime.utcnow() - metrics.start_time).total_seconds()
+            uptime_seconds = (datetime.now(timezone.utc) - metrics.start_time).total_seconds()

         return {
             "target_id": self._target_id,
@@ -514,11 +514,12 @@ class WledTargetProcessor(TargetProcessor):
             except Exception:
                 return False

-        results = await asyncio.gather(*[_send_safe(ws) for ws in self._preview_clients])
-        disconnected = [ws for ws, ok in zip(self._preview_clients, results) if not ok]
-        for ws in disconnected:
-            self._preview_clients.remove(ws)
+        clients = list(self._preview_clients)
+        results = await asyncio.gather(*[_send_safe(ws) for ws in clients])
+        for ws, ok in zip(clients, results):
+            if not ok and ws in self._preview_clients:
+                self._preview_clients.remove(ws)

     # ----- Private: processing loop -----
@@ -808,7 +809,7 @@ class WledTargetProcessor(TargetProcessor):
             self._metrics.timing_send_ms = send_ms
             self._metrics.frames_processed += 1
-            self._metrics.last_update = datetime.utcnow()
+            self._metrics.last_update = datetime.now(timezone.utc)

             if self._metrics.frames_processed <= 3 or self._metrics.frames_processed % 100 == 0:
                 logger.info(
@@ -898,6 +899,7 @@ class WledTargetProcessor(TargetProcessor):
             self._metrics.last_error = f"FATAL: {e}"
             self._metrics.errors_count += 1
             self._is_running = False
+            self._ctx.fire_event({"type": "state_change", "target_id": self._target_id, "processing": False, "crashed": True})
             raise
         finally:
             # Clean up probe client

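The broadcast fix above snapshots the client list before awaiting, then removes only clients still present. A self-contained sketch of that pattern (socket class and function names hypothetical):

```python
import asyncio


async def broadcast(clients, frame):
    """Snapshot-then-prune broadcast: iterate a copy of the client
    list, since the live list may gain or lose entries while the
    gather is awaiting; remove failed clients with a membership
    check so a concurrent disconnect cannot cause a double-remove."""

    async def _send_safe(ws):
        try:
            await ws.send(frame)
            return True
        except Exception:
            return False

    snapshot = list(clients)  # iterate a copy, not the live list
    results = await asyncio.gather(*[_send_safe(ws) for ws in snapshot])
    for ws, ok in zip(snapshot, results):
        if not ok and ws in clients:
            clients.remove(ws)


class FakeSocket:
    def __init__(self, healthy):
        self.healthy = healthy

    async def send(self, frame):
        if not self.healthy:
            raise ConnectionError("gone")


clients = [FakeSocket(True), FakeSocket(False), FakeSocket(True)]
asyncio.run(broadcast(clients, b"frame"))
print(len(clients))  # 2 — only the dead client was dropped
```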
View File

@@ -30,7 +30,7 @@ def capture_current_snapshot(
     for t in target_store.get_all_targets():
         if target_ids is not None and t.id not in target_ids:
             continue
-        proc = processor_manager._processors.get(t.id)
+        proc = processor_manager.get_processor(t.id)
         running = proc.is_running if proc else False
         targets.append(TargetSnapshot(
             target_id=t.id,
@@ -65,7 +65,7 @@ async def apply_scene_state(
     for ts in preset.targets:
         if not ts.running:
             try:
-                proc = processor_manager._processors.get(ts.target_id)
+                proc = processor_manager.get_processor(ts.target_id)
                 if proc and proc.is_running:
                     await processor_manager.stop_processing(ts.target_id)
             except Exception as e:
@@ -87,7 +87,7 @@ async def apply_scene_state(
             target_store.update_target(ts.target_id, **changed)

         # Sync live processor if running
-        proc = processor_manager._processors.get(ts.target_id)
+        proc = processor_manager.get_processor(ts.target_id)
         if proc and proc.is_running:
             css_changed = "color_strip_source_id" in changed
             bvs_changed = "brightness_value_source_id" in changed
@@ -107,7 +107,7 @@ async def apply_scene_state(
     for ts in preset.targets:
         if ts.running:
             try:
-                proc = processor_manager._processors.get(ts.target_id)
+                proc = processor_manager.get_processor(ts.target_id)
                 if not proc or not proc.is_running:
                     await processor_manager.start_processing(ts.target_id)
             except Exception as e:

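The four hunks above replace direct reads of the private `_processors` dict with the `get_processor()` accessor. A tiny sketch of why the accessor matters (class shape hypothetical, only the `get_processor` name comes from the diff):

```python
class ProcessorManager:
    """Callers go through get_processor() instead of reaching into the
    private _processors dict, so locking, lazy creation, or metrics can
    later be added at this single choke point without touching callers."""

    def __init__(self):
        self._processors = {}

    def register(self, target_id, processor):
        self._processors[target_id] = processor

    def get_processor(self, target_id):
        # Returns None when the target has no live processor.
        return self._processors.get(target_id)


mgr = ProcessorManager()
mgr.register("t1", "proc-1")
print(mgr.get_processor("t1"))       # proc-1
print(mgr.get_processor("missing"))  # None
```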
View File

@@ -401,6 +401,119 @@ input:-webkit-autofill:focus {
     background: var(--info-color);
 }
/* ── Card Tags ──────────────────────────────────────────── */
.card-tags {
display: flex;
flex-wrap: wrap;
gap: 4px;
margin-top: 6px;
margin-bottom: 4px;
}
.card-tag {
display: inline-block;
font-size: 0.68rem;
font-weight: 600;
color: var(--primary-color);
background: color-mix(in srgb, var(--primary-color) 12%, var(--bg-secondary));
border: 1px solid color-mix(in srgb, var(--primary-color) 25%, transparent);
padding: 1px 7px;
border-radius: 8px;
white-space: nowrap;
line-height: 1.4;
}
/* ── Tag Input (chip-based input with autocomplete) ──── */
.tag-input-wrap {
position: relative;
display: flex;
flex-wrap: wrap;
align-items: center;
gap: 4px;
padding: 6px 8px;
border: 1px solid var(--border-color);
border-radius: 4px;
background: var(--bg-color);
cursor: text;
min-height: 38px;
transition: border-color 0.15s;
}
.tag-input-wrap:focus-within {
border-color: var(--primary-color);
box-shadow: 0 0 0 2px rgba(76, 175, 80, 0.15);
}
.tag-chip {
display: inline-flex;
align-items: center;
gap: 2px;
font-size: 0.8rem;
font-weight: 500;
color: var(--primary-color);
background: color-mix(in srgb, var(--primary-color) 12%, var(--bg-secondary));
border: 1px solid color-mix(in srgb, var(--primary-color) 25%, transparent);
padding: 2px 6px;
border-radius: 6px;
white-space: nowrap;
line-height: 1.3;
}
.tag-chip-remove {
background: none;
border: none;
color: inherit;
font-size: 0.9rem;
cursor: pointer;
padding: 0 2px;
line-height: 1;
opacity: 0.6;
transition: opacity 0.15s;
}
.tag-chip-remove:hover {
opacity: 1;
}
.tag-input-field {
flex: 1 1 60px;
min-width: 60px;
border: none !important;
outline: none !important;
background: none !important;
padding: 2px 0 !important;
font-size: 0.85rem;
color: var(--text-color);
box-shadow: none !important;
}
.tag-input-dropdown {
display: none;
position: absolute;
top: 100%;
left: 0;
right: 0;
z-index: 1000;
background: var(--bg-color);
border: 1px solid var(--border-color);
border-radius: 4px;
box-shadow: 0 4px 12px var(--shadow-color);
margin-top: 4px;
max-height: 200px;
overflow-y: auto;
}
.tag-dropdown-item {
padding: 6px 10px;
font-size: 0.85rem;
cursor: pointer;
transition: background 0.1s;
}
.tag-dropdown-item:hover,
.tag-dropdown-item.tag-dropdown-active {
background: var(--bg-secondary);
}
 /* ── Focus-visible indicators for keyboard navigation ── */
 .btn:focus-visible,

View File

@@ -255,6 +255,26 @@ export const automationsCacheObj = new DataCache({
 });
 automationsCacheObj.subscribe(v => { _automationsCache = v; });
export const colorStripSourcesCache = new DataCache({
endpoint: '/color-strip-sources',
extractData: json => json.sources || [],
});
export const devicesCache = new DataCache({
endpoint: '/devices',
extractData: json => json.devices || [],
});
export const outputTargetsCache = new DataCache({
endpoint: '/output-targets',
extractData: json => json.targets || [],
});
export const patternTemplatesCache = new DataCache({
endpoint: '/pattern-templates',
extractData: json => json.templates || [],
});
 export const scenePresetsCache = new DataCache({
   endpoint: '/scene-presets',
   extractData: json => json.presets || [],

View File

@@ -0,0 +1,225 @@
/**
* TagInput — reusable chip-based tag input with autocomplete.
*
* Usage:
* import { TagInput } from '../core/tag-input.js';
*
* const tagInput = new TagInput(document.getElementById('my-container'));
* tagInput.setValue(['bedroom', 'gaming']);
* tagInput.getValue(); // ['bedroom', 'gaming']
* tagInput.destroy();
*
* The component fetches available tags from GET /api/v1/tags for autocomplete.
* Tags are stored lowercase, trimmed, deduplicated.
*/
import { fetchWithAuth } from './api.js';
let _allTagsCache = null;
let _allTagsFetchPromise = null;
/** Fetch all tags from API (cached). Call invalidateTagsCache() after mutations. */
export async function fetchAllTags() {
if (_allTagsCache) return _allTagsCache;
if (_allTagsFetchPromise) return _allTagsFetchPromise;
_allTagsFetchPromise = fetchWithAuth('/tags')
.then(r => r.json())
.then(data => {
_allTagsCache = data.tags || [];
_allTagsFetchPromise = null;
return _allTagsCache;
})
.catch(() => {
_allTagsFetchPromise = null;
return [];
});
return _allTagsFetchPromise;
}
/** Call after create/update to refresh autocomplete suggestions. */
export function invalidateTagsCache() {
_allTagsCache = null;
}
/**
* Render tag chips HTML for display on cards.
* @param {string[]} tags
* @returns {string} HTML string
*/
export function renderTagChips(tags) {
if (!tags || !tags.length) return '';
return `<div class="card-tags">${tags.map(tag =>
`<span class="card-tag">${_escapeHtml(tag)}</span>`
).join('')}</div>`;
}
function _escapeHtml(str) {
return str.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}
export class TagInput {
/**
* @param {HTMLElement} container Element to render the tag input into
* @param {object} [opts]
* @param {string} [opts.placeholder] Placeholder text for input
*/
constructor(container, opts = {}) {
this._container = container;
this._tags = [];
this._placeholder = opts.placeholder || 'Add tag...';
this._dropdownVisible = false;
this._selectedIdx = -1;
this._render();
this._bindEvents();
}
getValue() {
return [...this._tags];
}
setValue(tags) {
this._tags = (tags || []).map(t => t.toLowerCase().trim()).filter(Boolean);
this._tags = [...new Set(this._tags)];
this._renderChips();
}
destroy() {
this._container.innerHTML = '';
this._hideDropdown();
}
// ── private ──
_render() {
this._container.innerHTML = `
<div class="tag-input-wrap">
<div class="tag-input-chips"></div>
<input type="text" class="tag-input-field" placeholder="${_escapeHtml(this._placeholder)}" autocomplete="off" spellcheck="false">
<div class="tag-input-dropdown"></div>
</div>
`;
this._chipsEl = this._container.querySelector('.tag-input-chips');
this._inputEl = this._container.querySelector('.tag-input-field');
this._dropdownEl = this._container.querySelector('.tag-input-dropdown');
}
_renderChips() {
this._chipsEl.innerHTML = this._tags.map((tag, i) =>
`<span class="tag-chip">${_escapeHtml(tag)}<button type="button" class="tag-chip-remove" data-idx="${i}">&times;</button></span>`
).join('');
}
_bindEvents() {
// Chip remove buttons
this._chipsEl.addEventListener('click', (e) => {
const btn = e.target.closest('.tag-chip-remove');
if (!btn) return;
const idx = parseInt(btn.dataset.idx, 10);
this._tags.splice(idx, 1);
this._renderChips();
});
// Input keydown
this._inputEl.addEventListener('keydown', (e) => {
if (e.key === 'Enter' || e.key === ',' || e.key === 'Tab') {
if (this._dropdownVisible && this._selectedIdx >= 0) {
// Select from dropdown
e.preventDefault();
const items = this._dropdownEl.querySelectorAll('.tag-dropdown-item');
if (items[this._selectedIdx]) {
this._addTag(items[this._selectedIdx].dataset.tag);
}
} else if (this._inputEl.value.trim()) {
e.preventDefault();
this._addTag(this._inputEl.value);
}
} else if (e.key === 'Backspace' && !this._inputEl.value && this._tags.length) {
this._tags.pop();
this._renderChips();
} else if (e.key === 'ArrowDown' && this._dropdownVisible) {
e.preventDefault();
this._moveSelection(1);
} else if (e.key === 'ArrowUp' && this._dropdownVisible) {
e.preventDefault();
this._moveSelection(-1);
} else if (e.key === 'Escape') {
this._hideDropdown();
}
});
// Input typing → autocomplete
this._inputEl.addEventListener('input', () => {
this._updateDropdown();
});
// Focus → show dropdown
this._inputEl.addEventListener('focus', () => {
this._updateDropdown();
});
// Blur → hide (with delay so clicks register)
this._inputEl.addEventListener('blur', () => {
setTimeout(() => this._hideDropdown(), 200);
});
// Dropdown click
this._dropdownEl.addEventListener('mousedown', (e) => {
e.preventDefault(); // prevent blur
const item = e.target.closest('.tag-dropdown-item');
if (item) this._addTag(item.dataset.tag);
});
}
_addTag(raw) {
const tag = raw.toLowerCase().trim().replace(/,/g, '');
if (!tag || this._tags.includes(tag)) {
this._inputEl.value = '';
this._hideDropdown();
return;
}
this._tags.push(tag);
this._renderChips();
this._inputEl.value = '';
this._hideDropdown();
invalidateTagsCache();
}
async _updateDropdown() {
const query = this._inputEl.value.toLowerCase().trim();
const allTags = await fetchAllTags();
// Filter: exclude already-selected tags, match query
const suggestions = allTags
.filter(t => !this._tags.includes(t))
.filter(t => !query || t.includes(query))
.slice(0, 8);
if (!suggestions.length) {
this._hideDropdown();
return;
}
this._dropdownEl.innerHTML = suggestions.map((tag, i) =>
`<div class="tag-dropdown-item${i === 0 ? ' tag-dropdown-active' : ''}" data-tag="${_escapeHtml(tag)}">${_escapeHtml(tag)}</div>`
).join('');
this._dropdownEl.style.display = 'block';
this._dropdownVisible = true;
this._selectedIdx = 0;
}
_hideDropdown() {
this._dropdownEl.style.display = 'none';
this._dropdownVisible = false;
this._selectedIdx = -1;
}
_moveSelection(delta) {
const items = this._dropdownEl.querySelectorAll('.tag-dropdown-item');
if (!items.length) return;
items[this._selectedIdx]?.classList.remove('tag-dropdown-active');
this._selectedIdx = Math.max(0, Math.min(items.length - 1, this._selectedIdx + delta));
items[this._selectedIdx]?.classList.add('tag-dropdown-active');
items[this._selectedIdx]?.scrollIntoView({ block: 'nearest' });
}
}

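`TagInput` above normalizes tags to lowercase, trimmed, deduplicated, and fetches suggestions from `GET /api/v1/tags`. A hedged backend sketch of what that endpoint could do — the real handler is not shown in this diff, so store shapes and function names here are hypothetical; only the normalization rule mirrors the component:

```python
def normalize_tags(raw):
    """Mirror of TagInput's client-side rule: lowercase, trim, drop
    empties and duplicates while preserving first-seen order."""
    seen, out = set(), []
    for tag in raw:
        tag = tag.lower().strip()
        if tag and tag not in seen:
            seen.add(tag)
            out.append(tag)
    return out


def aggregate_tags(stores):
    """Sketch of GET /api/v1/tags: the sorted union of tags across all
    entity stores. `stores` is a hypothetical list of iterables of
    entities, each entity a dict with an optional 'tags' list."""
    all_tags = set()
    for store in stores:
        for entity in store:
            all_tags.update(normalize_tags(entity.get("tags", [])))
    return sorted(all_tags)


devices = [{"tags": ["Bedroom", "gaming"]}, {"tags": ["bedroom "]}]
scenes = [{"tags": ["movie-night"]}, {}]
print(aggregate_tags([devices, scenes]))  # ['bedroom', 'gaming', 'movie-night']
```

Normalizing on both ends keeps the aggregate free of case/whitespace near-duplicates regardless of which client wrote the tag.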
View File

@@ -6,6 +6,7 @@
  */
 import { API_BASE, fetchWithAuth } from '../core/api.js';
+import { colorStripSourcesCache } from '../core/state.js';
 import { t } from '../core/i18n.js';
 import { showToast } from '../core/ui.js';
 import { Modal } from '../core/modal.js';
@@ -77,13 +78,12 @@ const _modal = new AdvancedCalibrationModal();
 export async function showAdvancedCalibration(cssId) {
   try {
-    const [cssResp, psResp] = await Promise.all([
-      fetchWithAuth(`/color-strip-sources/${cssId}`),
-      fetchWithAuth('/picture-sources'),
-    ]);
-    if (!cssResp.ok) { showToast(t('calibration.error.css_load_failed'), 'error'); return; }
-    const source = await cssResp.json();
+    const [cssSources, psResp] = await Promise.all([
+      colorStripSourcesCache.fetch(),
+      fetchWithAuth('/picture-sources'),
+    ]);
+    const source = cssSources.find(s => s.id === cssId);
+    if (!source) { showToast(t('calibration.error.css_load_failed'), 'error'); return; }

     const calibration = source.calibration || {};
     const psList = psResp.ok ? ((await psResp.json()).streams || []) : [];
@@ -168,6 +168,7 @@ export async function saveAdvancedCalibration() {
     if (resp.ok) {
       showToast(t('calibration.saved'), 'success');
+      colorStripSourcesCache.invalidate();
       _modal.forceClose();
     } else {
       const err = await resp.json().catch(() => ({}));

View File

@@ -17,11 +17,18 @@ import { showToast, showConfirm, lockBody, unlockBody } from '../core/ui.js';
 import { Modal } from '../core/modal.js';
 import { ICON_MUSIC, getAudioSourceIcon, ICON_AUDIO_TEMPLATE, ICON_AUDIO_INPUT, ICON_AUDIO_LOOPBACK } from '../core/icons.js';
 import { EntitySelect } from '../core/entity-palette.js';
+import { TagInput } from '../core/tag-input.js';
 import { loadPictureSources } from './streams.js';

+let _audioSourceTagsInput = null;
+
 class AudioSourceModal extends Modal {
   constructor() { super('audio-source-modal'); }

+  onForceClose() {
+    if (_audioSourceTagsInput) { _audioSourceTagsInput.destroy(); _audioSourceTagsInput = null; }
+  }
+
   snapshotValues() {
     return {
       name: document.getElementById('audio-source-name').value,
@@ -31,6 +38,7 @@ class AudioSourceModal extends Modal {
       audioTemplate: document.getElementById('audio-source-audio-template').value,
       parent: document.getElementById('audio-source-parent').value,
       channel: document.getElementById('audio-source-channel').value,
+      tags: JSON.stringify(_audioSourceTagsInput ? _audioSourceTagsInput.getValue() : []),
     };
   }
 }
@@ -86,6 +94,11 @@ export async function showAudioSourceModal(sourceType, editData) {
     }
   }

+  // Tags
+  if (_audioSourceTagsInput) { _audioSourceTagsInput.destroy(); _audioSourceTagsInput = null; }
+  _audioSourceTagsInput = new TagInput(document.getElementById('audio-source-tags-container'), { placeholder: t('tags.placeholder') });
+  _audioSourceTagsInput.setValue(isEdit ? (editData.tags || []) : []);
+
   audioSourceModal.open();
   audioSourceModal.snapshot();
 }
@@ -115,7 +128,7 @@ export async function saveAudioSource() {
     return;
   }

-  const payload = { name, source_type: sourceType, description };
+  const payload = { name, source_type: sourceType, description, tags: _audioSourceTagsInput ? _audioSourceTagsInput.getValue() : [] };

   if (sourceType === 'multichannel') {
     const deviceVal = document.getElementById('audio-source-device').value || '-1:1';

View File

@@ -12,14 +12,21 @@ import { updateTabBadge } from './tabs.js';
 import { ICON_SETTINGS, ICON_START, ICON_PAUSE, ICON_CLOCK, ICON_AUTOMATION, ICON_HELP, ICON_OK, ICON_TIMER, ICON_MONITOR, ICON_RADIO, ICON_SCENE, ICON_CLONE } from '../core/icons.js';
 import * as P from '../core/icon-paths.js';
 import { wrapCard } from '../core/card-colors.js';
+import { TagInput, renderTagChips } from '../core/tag-input.js';
 import { IconSelect } from '../core/icon-select.js';
 import { EntitySelect } from '../core/entity-palette.js';
 import { attachProcessPicker } from '../core/process-picker.js';
 import { csScenes, createSceneCard } from './scene-presets.js';

+let _automationTagsInput = null;
+
 class AutomationEditorModal extends Modal {
   constructor() { super('automation-editor-modal'); }

+  onForceClose() {
+    if (_automationTagsInput) { _automationTagsInput.destroy(); _automationTagsInput = null; }
+  }
+
   snapshotValues() {
     return {
       name: document.getElementById('automation-editor-name').value,
@@ -29,6 +36,7 @@ class AutomationEditorModal extends Modal {
       scenePresetId: document.getElementById('automation-scene-id').value,
       deactivationMode: document.getElementById('automation-deactivation-mode').value,
       deactivationScenePresetId: document.getElementById('automation-fallback-scene-id').value,
+      tags: JSON.stringify(_automationTagsInput ? _automationTagsInput.getValue() : []),
     };
   }
 }
@@ -204,7 +212,8 @@ function createAutomationCard(automation, sceneMap = new Map()) {
       ${deactivationLabel ? `<span class="card-meta">${deactivationLabel}</span>` : ''}
       ${lastActivityMeta}
     </div>
-    <div class="stream-card-props">${condPills}</div>`,
+    <div class="stream-card-props">${condPills}</div>
+    ${renderTagChips(automation.tags)}`,
     actions: `
       <button class="btn btn-icon btn-secondary" onclick="cloneAutomation('${automation.id}')" title="${t('common.clone')}">${ICON_CLONE}</button>
       <button class="btn btn-icon btn-secondary" onclick="openAutomationEditor('${automation.id}')" title="${t('automations.edit')}">${ICON_SETTINGS}</button>
@@ -240,6 +249,8 @@ export async function openAutomationEditor(automationId, cloneData) {
   if (_deactivationModeIconSelect) _deactivationModeIconSelect.setValue('none');
   document.getElementById('automation-fallback-scene-group').style.display = 'none';

+  let _editorTags = [];
+
   if (automationId) {
     titleEl.innerHTML = `${ICON_AUTOMATION} ${t('automations.edit')}`;
     try {
@@ -266,6 +277,7 @@ export async function openAutomationEditor(automationId, cloneData) {
       if (_deactivationModeIconSelect) _deactivationModeIconSelect.setValue(deactMode);
       _onDeactivationModeChange();
       _initSceneSelector('automation-fallback-scene-id', automation.deactivation_scene_preset_id);
+      _editorTags = automation.tags || [];
     } catch (e) {
       showToast(e.message, 'error');
       return;
@@ -293,6 +305,7 @@ export async function openAutomationEditor(automationId, cloneData) {
     if (_deactivationModeIconSelect) _deactivationModeIconSelect.setValue(cloneDeactMode);
     _onDeactivationModeChange();
     _initSceneSelector('automation-fallback-scene-id', cloneData.deactivation_scene_preset_id);
+    _editorTags = cloneData.tags || [];
   } else {
     titleEl.innerHTML = `${ICON_AUTOMATION} ${t('automations.add')}`;
     idInput.value = '';
@@ -314,6 +327,12 @@ export async function openAutomationEditor(automationId, cloneData) {
   modal.querySelectorAll('[data-i18n-placeholder]').forEach(el => {
     el.placeholder = t(el.getAttribute('data-i18n-placeholder'));
   });

+  // Tags
+  if (_automationTagsInput) { _automationTagsInput.destroy(); _automationTagsInput = null; }
+  _automationTagsInput = new TagInput(document.getElementById('automation-tags-container'), { placeholder: t('tags.placeholder') });
+  _automationTagsInput.setValue(_editorTags);
+
   automationModal.snapshot();
 }
@@ -671,6 +690,7 @@ export async function saveAutomationEditor() {
     scene_preset_id: document.getElementById('automation-scene-id').value || null,
     deactivation_mode: document.getElementById('automation-deactivation-mode').value,
     deactivation_scene_preset_id: document.getElementById('automation-fallback-scene-id').value || null,
+    tags: _automationTagsInput ? _automationTagsInput.getValue() : [],
   };

   const automationId = idInput.value;

View File

@@ -6,6 +6,7 @@ import {
   calibrationTestState, EDGE_TEST_COLORS, displaysCache,
 } from '../core/state.js';
 import { API_BASE, getHeaders, fetchWithAuth } from '../core/api.js';
+import { colorStripSourcesCache, devicesCache } from '../core/state.js';
 import { t } from '../core/i18n.js';
 import { showToast } from '../core/ui.js';
 import { Modal } from '../core/modal.js';
@@ -231,13 +232,12 @@ export async function closeCalibrationModal() {
 export async function showCSSCalibration(cssId) {
   try {
-    const [cssResp, devicesResp] = await Promise.all([
-      fetchWithAuth(`/color-strip-sources/${cssId}`),
-      fetchWithAuth('/devices'),
-    ]);
-    if (!cssResp.ok) { showToast(t('calibration.error.css_load_failed'), 'error'); return; }
-    const source = await cssResp.json();
+    const [cssSources, devices] = await Promise.all([
+      colorStripSourcesCache.fetch(),
+      devicesCache.fetch().catch(() => []),
+    ]);
+    const source = cssSources.find(s => s.id === cssId);
+    if (!source) { showToast(t('calibration.error.css_load_failed'), 'error'); return; }

     const calibration = source.calibration || {
 }
@@ -246,7 +246,6 @@ export async function showCSSCalibration(cssId) {
   document.getElementById('calibration-css-id').value = cssId;

   // Populate device picker for edge test
-  const devices = devicesResp.ok ? ((await devicesResp.json()).devices || []) : [];
   const testDeviceSelect = document.getElementById('calibration-test-device');
   testDeviceSelect.innerHTML = '';
   devices.forEach(d => {
@@ -940,6 +939,7 @@ export async function saveCalibration() {
   }

   if (response.ok) {
     showToast(t('calibration.saved'), 'success');
+    if (cssMode) colorStripSourcesCache.invalidate();
     calibModal.forceClose();
     if (cssMode) {
       if (window.loadTargetsTab) window.loadTargetsTab();

View File

@@ -3,7 +3,7 @@
  */
 import { fetchWithAuth, escapeHtml } from '../core/api.js';
-import { _cachedSyncClocks, audioSourcesCache, streamsCache } from '../core/state.js';
+import { _cachedSyncClocks, audioSourcesCache, streamsCache, colorStripSourcesCache } from '../core/state.js';
 import { t } from '../core/i18n.js';
 import { showToast, showConfirm } from '../core/ui.js';
 import { Modal } from '../core/modal.js';
@@ -16,15 +16,28 @@ import {
 } from '../core/icons.js';
 import * as P from '../core/icon-paths.js';
 import { wrapCard } from '../core/card-colors.js';
+import { TagInput, renderTagChips } from '../core/tag-input.js';
 import { attachProcessPicker } from '../core/process-picker.js';
 import { IconSelect } from '../core/icon-select.js';
 import { EntitySelect } from '../core/entity-palette.js';
+import {
+  rgbArrayToHex, hexToRgbArray,
+  gradientInit, gradientRenderAll, gradientAddStop, applyGradientPreset,
+  getGradientStops, GRADIENT_PRESETS, gradientPresetStripHTML,
+} from './css-gradient-editor.js';
+
+// Re-export for app.js window global bindings
+export { gradientInit, gradientRenderAll, gradientAddStop, applyGradientPreset };

 class CSSEditorModal extends Modal {
   constructor() {
     super('css-editor-modal');
   }

+  onForceClose() {
+    if (_cssTagsInput) { _cssTagsInput.destroy(); _cssTagsInput = null; }
+  }
+
   snapshotValues() {
     const type = document.getElementById('css-editor-type').value;
     return {
@@ -39,7 +52,7 @@ class CSSEditorModal extends Modal {
       color: document.getElementById('css-editor-color').value,
       frame_interpolation: document.getElementById('css-editor-frame-interpolation').checked,
       led_count: document.getElementById('css-editor-led-count').value,
-      gradient_stops: type === 'gradient' ? JSON.stringify(_gradientStops) : '[]',
+      gradient_stops: type === 'gradient' ? JSON.stringify(getGradientStops()) : '[]',
       animation_type: document.getElementById('css-editor-animation-type').value,
       cycle_colors: JSON.stringify(_colorCycleColors),
       effect_type: document.getElementById('css-editor-effect-type').value,
@@ -67,12 +80,15 @@ class CSSEditorModal extends Modal {
notification_filter_list: document.getElementById('css-editor-notification-filter-list').value,
notification_app_colors: JSON.stringify(_notificationAppColors),
clock_id: document.getElementById('css-editor-clock').value,
+tags: JSON.stringify(_cssTagsInput ? _cssTagsInput.getValue() : []),
};
}
}
const cssEditorModal = new CSSEditorModal();
+let _cssTagsInput = null;
// ── EntitySelect instances for CSS editor ──
let _cssPictureSourceEntitySelect = null;
let _cssAudioSourceEntitySelect = null;
@@ -272,13 +288,7 @@ function _gradientStripHTML(pts, w = 80, h = 16) {
return `<span style="display:inline-block;width:${w}px;height:${h}px;border-radius:3px;background:linear-gradient(to right,${stops});flex-shrink:0"></span>`;
}
-/**
- * Build a gradient preview from _GRADIENT_PRESETS entry (array of {position, color:[r,g,b]}).
- */
-function _gradientPresetStripHTML(stops, w = 80, h = 16) {
-const css = stops.map(s => `rgb(${s.color.join(',')}) ${(s.position * 100).toFixed(0)}%`).join(', ');
-return `<span style="display:inline-block;width:${w}px;height:${h}px;border-radius:3px;background:linear-gradient(to right,${css});flex-shrink:0"></span>`;
-}
+/* gradientPresetStripHTML imported from css-gradient-editor.js */
/* ── Effect / audio palette IconSelect instances ─────────────── */
@@ -355,8 +365,8 @@ function _ensureGradientPresetIconSelect() {
if (!sel) return;
const items = [
{ value: '', icon: _icon(P.palette), label: t('color_strip.gradient.preset.custom') },
-...Object.entries(_GRADIENT_PRESETS).map(([key, stops]) => ({
-value: key, icon: _gradientPresetStripHTML(stops), label: t(`color_strip.gradient.preset.${key}`),
+...Object.entries(GRADIENT_PRESETS).map(([key, stops]) => ({
+value: key, icon: gradientPresetStripHTML(stops), label: t(`color_strip.gradient.preset.${key}`),
})),
];
if (_gradientPresetIconSelect) { _gradientPresetIconSelect.updateItems(items); return; }
@@ -468,16 +478,7 @@ function _loadColorCycleState(css) {
}
/** Convert an [R, G, B] array to a CSS hex color string like "#rrggbb". */
-function rgbArrayToHex(rgb) {
-if (!Array.isArray(rgb) || rgb.length !== 3) return '#ffffff';
-return '#' + rgb.map(v => Math.max(0, Math.min(255, v)).toString(16).padStart(2, '0')).join('');
-}
-/** Convert a CSS hex string like "#rrggbb" to an [R, G, B] array. */
-function hexToRgbArray(hex) {
-const m = /^#?([0-9a-f]{2})([0-9a-f]{2})([0-9a-f]{2})$/i.exec(hex);
-return m ? [parseInt(m[1], 16), parseInt(m[2], 16), parseInt(m[3], 16)] : [255, 255, 255];
-}
+/* rgbArrayToHex / hexToRgbArray imported from css-gradient-editor.js */
/* ── Composite layer helpers ──────────────────────────────────── */
@@ -1090,7 +1091,8 @@ export function createColorStripCard(source, pictureSourceMap, audioSourceMap) {
</div>
<div class="stream-card-props">
${propsHtml}
-</div>`,
+</div>
+${renderTagChips(source.tags)}`,
actions: `
<button class="btn btn-icon btn-secondary" onclick="cloneColorStrip('${source.id}')" title="${t('common.clone')}">${ICON_CLONE}</button>
<button class="btn btn-icon btn-secondary" onclick="showCSSEditor('${source.id}')" title="${t('common.edit')}">${ICON_EDIT}</button>
@@ -1132,8 +1134,7 @@ export async function showCSSEditor(cssId = null, cloneData = null) {
const sources = await streamsCache.fetch();
// Fetch all color strip sources for composite layer dropdowns
-const cssListResp = await fetchWithAuth('/color-strip-sources');
-const allCssSources = cssListResp.ok ? ((await cssListResp.json()).sources || []) : [];
+const allCssSources = await colorStripSourcesCache.fetch().catch(() => []);
_compositeAvailableSources = allCssSources.filter(s =>
s.source_type !== 'composite' && (!cssId || s.id !== cssId)
);
@@ -1251,9 +1252,9 @@ export async function showCSSEditor(cssId = null, cloneData = null) {
document.getElementById('css-editor-type-group').style.display = cssId ? 'none' : '';
if (cssId) {
-const resp = await fetchWithAuth(`/color-strip-sources/${cssId}`);
-if (!resp.ok) throw new Error('Failed to load color strip source');
-const css = await resp.json();
+const cssSources = await colorStripSourcesCache.fetch();
+const css = cssSources.find(s => s.id === cssId);
+if (!css) throw new Error('Failed to load color strip source');
document.getElementById('css-editor-id').value = css.id;
document.getElementById('css-editor-name').value = css.name;
@@ -1328,6 +1329,15 @@ export async function showCSSEditor(cssId = null, cloneData = null) {
document.getElementById('css-editor-notification-effect').onchange = () => _autoGenerateCSSName();
document.getElementById('css-editor-error').style.display = 'none';
+// Tags
+if (_cssTagsInput) { _cssTagsInput.destroy(); _cssTagsInput = null; }
+const _cssTags = cssId
+? ((await colorStripSourcesCache.fetch()).find(s => s.id === cssId)?.tags || [])
+: (cloneData ? (cloneData.tags || []) : []);
+_cssTagsInput = new TagInput(document.getElementById('css-tags-container'), { placeholder: t('tags.placeholder') });
+_cssTagsInput.setValue(_cssTags);
cssEditorModal.snapshot();
cssEditorModal.open();
setTimeout(() => document.getElementById('css-editor-name').focus(), 100);
@@ -1374,13 +1384,14 @@ export async function saveCSSEditor() {
};
if (!cssId) payload.source_type = 'color_cycle';
} else if (sourceType === 'gradient') {
-if (_gradientStops.length < 2) {
+const gStops = getGradientStops();
+if (gStops.length < 2) {
cssEditorModal.showError(t('color_strip.gradient.min_stops'));
return;
}
payload = {
name,
-stops: _gradientStops.map(s => ({
+stops: gStops.map(s => ({
position: s.position,
color: s.color,
...(s.colorRight ? { color_right: s.colorRight } : {}),
@@ -1496,6 +1507,9 @@ export async function saveCSSEditor() {
payload.clock_id = clockVal || null;
}
+// Tags
+payload.tags = _cssTagsInput ? _cssTagsInput.getValue() : [];
try {
let response;
if (cssId) {
@@ -1516,6 +1530,7 @@ export async function saveCSSEditor() {
}
showToast(cssId ? t('color_strip.updated') : t('color_strip.created'), 'success');
+colorStripSourcesCache.invalidate();
cssEditorModal.forceClose();
if (window.loadTargetsTab) await window.loadTargetsTab();
} catch (error) {
@@ -1562,9 +1577,9 @@ export function copyEndpointUrl(btn) {
export async function cloneColorStrip(cssId) {
try {
-const resp = await fetchWithAuth(`/color-strip-sources/${cssId}`);
-if (!resp.ok) throw new Error('Failed to load color strip source');
-const css = await resp.json();
+const sources = await colorStripSourcesCache.fetch();
+const css = sources.find(s => s.id === cssId);
+if (!css) throw new Error('Color strip source not found');
showCSSEditor(null, css);
} catch (error) {
if (error.isAuth) return;
@@ -1585,6 +1600,7 @@ export async function deleteColorStrip(cssId) {
});
if (response.ok) {
showToast(t('color_strip.deleted'), 'success');
+colorStripSourcesCache.invalidate();
if (window.loadTargetsTab) await window.loadTargetsTab();
} else {
const err = await response.json();
@@ -1636,363 +1652,4 @@ export async function stopCSSOverlay(cssId) {
}
}
-/* ══════════════════════════════════════════════════════════════
-GRADIENT EDITOR
-══════════════════════════════════════════════════════════════ */
+/* Gradient editor moved to css-gradient-editor.js */
/**
* Internal state: array of stop objects.
* Each stop: { position: float 01, color: [R,G,B], colorRight: [R,G,B]|null }
*/
let _gradientStops = [];
let _gradientSelectedIdx = -1;
let _gradientDragging = null; // { idx, trackRect } while dragging
/* ── Interpolation (mirrors Python backend exactly) ───────────── */
function _gradientInterpolate(stops, pos) {
if (!stops.length) return [128, 128, 128];
const sorted = [...stops].sort((a, b) => a.position - b.position);
if (pos <= sorted[0].position) return sorted[0].color.slice();
const last = sorted[sorted.length - 1];
if (pos >= last.position) return (last.colorRight || last.color).slice();
for (let i = 0; i < sorted.length - 1; i++) {
const a = sorted[i];
const b = sorted[i + 1];
if (a.position <= pos && pos <= b.position) {
const span = b.position - a.position;
const t2 = span > 0 ? (pos - a.position) / span : 0;
const lc = a.colorRight || a.color;
const rc = b.color;
return lc.map((c, j) => Math.round(c + t2 * (rc[j] - c)));
}
}
return [128, 128, 128];
}
/* ── Init ─────────────────────────────────────────────────────── */
export function gradientInit(stops) {
_gradientStops = stops.map(s => ({
position: parseFloat(s.position ?? 0),
color: (Array.isArray(s.color) && s.color.length === 3) ? [...s.color] : [255, 255, 255],
colorRight: (Array.isArray(s.color_right) && s.color_right.length === 3) ? [...s.color_right] : null,
}));
_gradientSelectedIdx = _gradientStops.length > 0 ? 0 : -1;
_gradientDragging = null;
_gradientSetupTrackClick();
gradientRenderAll();
}
/* ── Presets ──────────────────────────────────────────────────── */
const _GRADIENT_PRESETS = {
rainbow: [
{ position: 0.0, color: [255, 0, 0] },
{ position: 0.17, color: [255, 165, 0] },
{ position: 0.33, color: [255, 255, 0] },
{ position: 0.5, color: [0, 255, 0] },
{ position: 0.67, color: [0, 100, 255] },
{ position: 0.83, color: [75, 0, 130] },
{ position: 1.0, color: [148, 0, 211] },
],
sunset: [
{ position: 0.0, color: [255, 60, 0] },
{ position: 0.3, color: [255, 120, 20] },
{ position: 0.6, color: [200, 40, 80] },
{ position: 0.8, color: [120, 20, 120] },
{ position: 1.0, color: [40, 10, 60] },
],
ocean: [
{ position: 0.0, color: [0, 10, 40] },
{ position: 0.3, color: [0, 60, 120] },
{ position: 0.6, color: [0, 140, 180] },
{ position: 0.8, color: [100, 220, 240] },
{ position: 1.0, color: [200, 240, 255] },
],
forest: [
{ position: 0.0, color: [0, 40, 0] },
{ position: 0.3, color: [0, 100, 20] },
{ position: 0.6, color: [60, 180, 30] },
{ position: 0.8, color: [140, 220, 50] },
{ position: 1.0, color: [220, 255, 80] },
],
fire: [
{ position: 0.0, color: [0, 0, 0] },
{ position: 0.25, color: [80, 0, 0] },
{ position: 0.5, color: [255, 40, 0] },
{ position: 0.75, color: [255, 160, 0] },
{ position: 1.0, color: [255, 255, 60] },
],
lava: [
{ position: 0.0, color: [0, 0, 0] },
{ position: 0.3, color: [120, 0, 0] },
{ position: 0.6, color: [255, 60, 0] },
{ position: 0.8, color: [255, 160, 40] },
{ position: 1.0, color: [255, 255, 120] },
],
aurora: [
{ position: 0.0, color: [0, 20, 40] },
{ position: 0.25, color: [0, 200, 100] },
{ position: 0.5, color: [0, 100, 200] },
{ position: 0.75, color: [120, 0, 200] },
{ position: 1.0, color: [0, 200, 140] },
],
ice: [
{ position: 0.0, color: [255, 255, 255] },
{ position: 0.3, color: [180, 220, 255] },
{ position: 0.6, color: [80, 160, 255] },
{ position: 0.85, color: [20, 60, 180] },
{ position: 1.0, color: [10, 20, 80] },
],
warm: [
{ position: 0.0, color: [255, 255, 80] },
{ position: 0.33, color: [255, 160, 0] },
{ position: 0.67, color: [255, 60, 0] },
{ position: 1.0, color: [160, 0, 0] },
],
cool: [
{ position: 0.0, color: [0, 255, 200] },
{ position: 0.33, color: [0, 120, 255] },
{ position: 0.67, color: [60, 0, 255] },
{ position: 1.0, color: [120, 0, 180] },
],
neon: [
{ position: 0.0, color: [255, 0, 200] },
{ position: 0.25, color: [0, 255, 255] },
{ position: 0.5, color: [0, 255, 50] },
{ position: 0.75, color: [255, 255, 0] },
{ position: 1.0, color: [255, 0, 100] },
],
pastel: [
{ position: 0.0, color: [255, 180, 180] },
{ position: 0.2, color: [255, 220, 160] },
{ position: 0.4, color: [255, 255, 180] },
{ position: 0.6, color: [180, 255, 200] },
{ position: 0.8, color: [180, 200, 255] },
{ position: 1.0, color: [220, 180, 255] },
],
};
export function applyGradientPreset(key) {
if (!key || !_GRADIENT_PRESETS[key]) return;
gradientInit(_GRADIENT_PRESETS[key]);
}
/* ── Render ───────────────────────────────────────────────────── */
export function gradientRenderAll() {
_gradientRenderCanvas();
_gradientRenderMarkers();
_gradientRenderStopList();
}
function _gradientRenderCanvas() {
const canvas = document.getElementById('gradient-canvas');
if (!canvas) return;
// Sync canvas pixel width to its CSS display width
const W = Math.max(1, Math.round(canvas.offsetWidth || 300));
if (canvas.width !== W) canvas.width = W;
const ctx = canvas.getContext('2d');
const H = canvas.height;
const imgData = ctx.createImageData(W, H);
for (let x = 0; x < W; x++) {
const pos = W > 1 ? x / (W - 1) : 0;
const [r, g, b] = _gradientInterpolate(_gradientStops, pos);
for (let y = 0; y < H; y++) {
const idx = (y * W + x) * 4;
imgData.data[idx] = r;
imgData.data[idx + 1] = g;
imgData.data[idx + 2] = b;
imgData.data[idx + 3] = 255;
}
}
ctx.putImageData(imgData, 0, 0);
}
function _gradientRenderMarkers() {
const track = document.getElementById('gradient-markers-track');
if (!track) return;
track.innerHTML = '';
_gradientStops.forEach((stop, idx) => {
const marker = document.createElement('div');
marker.className = 'gradient-marker' + (idx === _gradientSelectedIdx ? ' selected' : '');
marker.style.left = `${stop.position * 100}%`;
marker.style.background = rgbArrayToHex(stop.color);
marker.title = `${(stop.position * 100).toFixed(0)}%`;
marker.addEventListener('mousedown', (e) => {
e.preventDefault();
e.stopPropagation();
_gradientSelectedIdx = idx;
_gradientStartDrag(e, idx);
_gradientRenderMarkers();
_gradientRenderStopList();
});
track.appendChild(marker);
});
}
/**
* Update the selected stop index and reflect it via CSS classes only —
* no DOM rebuild, so in-flight click events on child elements are preserved.
*/
function _gradientSelectStop(idx) {
_gradientSelectedIdx = idx;
document.querySelectorAll('.gradient-stop-row').forEach((r, i) => r.classList.toggle('selected', i === idx));
document.querySelectorAll('.gradient-marker').forEach((m, i) => m.classList.toggle('selected', i === idx));
}
function _gradientRenderStopList() {
const list = document.getElementById('gradient-stops-list');
if (!list) return;
list.innerHTML = '';
_gradientStops.forEach((stop, idx) => {
const row = document.createElement('div');
row.className = 'gradient-stop-row' + (idx === _gradientSelectedIdx ? ' selected' : '');
const hasBidir = !!stop.colorRight;
const rightColor = stop.colorRight || stop.color;
row.innerHTML = `
<input type="number" class="gradient-stop-pos" value="${stop.position.toFixed(2)}"
min="0" max="1" step="0.01" title="${t('color_strip.gradient.position')}">
<input type="color" class="gradient-stop-color" value="${rgbArrayToHex(stop.color)}"
title="Left color">
<button type="button" class="btn btn-sm gradient-stop-bidir-btn${hasBidir ? ' active' : ''}"
title="${t('color_strip.gradient.bidir.hint')}">↔</button>
<input type="color" class="gradient-stop-color-right" value="${rgbArrayToHex(rightColor)}"
style="display:${hasBidir ? 'inline-block' : 'none'}" title="Right color">
<span class="gradient-stop-spacer"></span>
<button type="button" class="btn btn-sm btn-danger gradient-stop-remove-btn"
title="Remove stop"${_gradientStops.length <= 2 ? ' disabled' : ''}>✕</button>
`;
// Select row on mousedown — CSS-only update so child click events are not interrupted
row.addEventListener('mousedown', () => _gradientSelectStop(idx));
// Position
const posInput = row.querySelector('.gradient-stop-pos');
posInput.addEventListener('change', (e) => {
const val = Math.min(1, Math.max(0, parseFloat(e.target.value) || 0));
e.target.value = val.toFixed(2);
_gradientStops[idx].position = val;
gradientRenderAll();
});
posInput.addEventListener('focus', () => _gradientSelectStop(idx));
// Left color
row.querySelector('.gradient-stop-color').addEventListener('input', (e) => {
_gradientStops[idx].color = hexToRgbArray(e.target.value);
const markers = document.querySelectorAll('.gradient-marker');
if (markers[idx]) markers[idx].style.background = e.target.value;
_gradientRenderCanvas();
});
// Bidirectional toggle
row.querySelector('.gradient-stop-bidir-btn').addEventListener('click', (e) => {
e.stopPropagation();
_gradientStops[idx].colorRight = _gradientStops[idx].colorRight
? null
: [..._gradientStops[idx].color];
_gradientRenderStopList();
_gradientRenderCanvas();
});
// Right color
row.querySelector('.gradient-stop-color-right').addEventListener('input', (e) => {
_gradientStops[idx].colorRight = hexToRgbArray(e.target.value);
_gradientRenderCanvas();
});
// Remove
row.querySelector('.btn-danger').addEventListener('click', (e) => {
e.stopPropagation();
if (_gradientStops.length > 2) {
_gradientStops.splice(idx, 1);
if (_gradientSelectedIdx >= _gradientStops.length) {
_gradientSelectedIdx = _gradientStops.length - 1;
}
gradientRenderAll();
}
});
list.appendChild(row);
});
}
/* ── Add Stop ─────────────────────────────────────────────────── */
export function gradientAddStop(position) {
if (position === undefined) {
// Find the largest gap between adjacent stops and place in the middle
const sorted = [..._gradientStops].sort((a, b) => a.position - b.position);
let maxGap = 0, gapMid = 0.5;
for (let i = 0; i < sorted.length - 1; i++) {
const gap = sorted[i + 1].position - sorted[i].position;
if (gap > maxGap) {
maxGap = gap;
gapMid = (sorted[i].position + sorted[i + 1].position) / 2;
}
}
position = sorted.length >= 2 ? Math.round(gapMid * 100) / 100 : 0.5;
}
position = Math.min(1, Math.max(0, position));
const color = _gradientInterpolate(_gradientStops, position);
_gradientStops.push({ position, color, colorRight: null });
_gradientSelectedIdx = _gradientStops.length - 1;
gradientRenderAll();
}
/* ── Drag ─────────────────────────────────────────────────────── */
function _gradientStartDrag(e, idx) {
const track = document.getElementById('gradient-markers-track');
if (!track) return;
_gradientDragging = { idx, trackRect: track.getBoundingClientRect() };
const onMove = (me) => {
if (!_gradientDragging) return;
const { trackRect } = _gradientDragging;
const pos = Math.min(1, Math.max(0, (me.clientX - trackRect.left) / trackRect.width));
_gradientStops[_gradientDragging.idx].position = Math.round(pos * 100) / 100;
gradientRenderAll();
};
const onUp = () => {
_gradientDragging = null;
document.removeEventListener('mousemove', onMove);
document.removeEventListener('mouseup', onUp);
};
document.addEventListener('mousemove', onMove);
document.addEventListener('mouseup', onUp);
}
/* ── Track click → add stop ───────────────────────────────────── */
function _gradientSetupTrackClick() {
const track = document.getElementById('gradient-markers-track');
if (!track || track._gradientClickBound) return;
track._gradientClickBound = true;
track.addEventListener('click', (e) => {
if (_gradientDragging) return;
const rect = track.getBoundingClientRect();
const pos = Math.min(1, Math.max(0, (e.clientX - rect.left) / rect.width));
// Ignore clicks very close to an existing marker
const tooClose = _gradientStops.some(s => Math.abs(s.position - pos) < 0.03);
if (!tooClose) {
gradientAddStop(Math.round(pos * 100) / 100);
}
});
}

css-gradient-editor.js (new file)
@@ -0,0 +1,393 @@
/**
* Gradient stop editor — canvas preview, draggable markers, stop list, presets.
*
* Extracted from color-strips.js. Self-contained module that manages
* gradient stops state and renders into the CSS editor modal DOM.
*/
import { t } from '../core/i18n.js';
/* ── Color conversion utilities ───────────────────────────────── */
/** Convert an [R, G, B] array to a CSS hex color string like "#rrggbb". */
export function rgbArrayToHex(rgb) {
if (!Array.isArray(rgb) || rgb.length !== 3) return '#ffffff';
return '#' + rgb.map(v => Math.max(0, Math.min(255, v)).toString(16).padStart(2, '0')).join('');
}
/** Convert a CSS hex string like "#rrggbb" to an [R, G, B] array. */
export function hexToRgbArray(hex) {
const m = /^#?([0-9a-f]{2})([0-9a-f]{2})([0-9a-f]{2})$/i.exec(hex);
return m ? [parseInt(m[1], 16), parseInt(m[2], 16), parseInt(m[3], 16)] : [255, 255, 255];
}
/* ── State ────────────────────────────────────────────────────── */
/**
* Internal state: array of stop objects.
* Each stop: { position: float 01, color: [R,G,B], colorRight: [R,G,B]|null }
*/
let _gradientStops = [];
let _gradientSelectedIdx = -1;
let _gradientDragging = null; // { idx, trackRect } while dragging
/** Read-only accessor for save/dirty-check from the parent module. */
export function getGradientStops() {
return _gradientStops;
}
/* ── Interpolation (mirrors Python backend exactly) ───────────── */
function _gradientInterpolate(stops, pos) {
if (!stops.length) return [128, 128, 128];
const sorted = [...stops].sort((a, b) => a.position - b.position);
if (pos <= sorted[0].position) return sorted[0].color.slice();
const last = sorted[sorted.length - 1];
if (pos >= last.position) return (last.colorRight || last.color).slice();
for (let i = 0; i < sorted.length - 1; i++) {
const a = sorted[i];
const b = sorted[i + 1];
if (a.position <= pos && pos <= b.position) {
const span = b.position - a.position;
const t2 = span > 0 ? (pos - a.position) / span : 0;
const lc = a.colorRight || a.color;
const rc = b.color;
return lc.map((c, j) => Math.round(c + t2 * (rc[j] - c)));
}
}
return [128, 128, 128];
}
/* ── Init ─────────────────────────────────────────────────────── */
export function gradientInit(stops) {
_gradientStops = stops.map(s => ({
position: parseFloat(s.position ?? 0),
color: (Array.isArray(s.color) && s.color.length === 3) ? [...s.color] : [255, 255, 255],
colorRight: (Array.isArray(s.color_right) && s.color_right.length === 3) ? [...s.color_right] : null,
}));
_gradientSelectedIdx = _gradientStops.length > 0 ? 0 : -1;
_gradientDragging = null;
_gradientSetupTrackClick();
gradientRenderAll();
}
/* ── Presets ──────────────────────────────────────────────────── */
export const GRADIENT_PRESETS = {
rainbow: [
{ position: 0.0, color: [255, 0, 0] },
{ position: 0.17, color: [255, 165, 0] },
{ position: 0.33, color: [255, 255, 0] },
{ position: 0.5, color: [0, 255, 0] },
{ position: 0.67, color: [0, 100, 255] },
{ position: 0.83, color: [75, 0, 130] },
{ position: 1.0, color: [148, 0, 211] },
],
sunset: [
{ position: 0.0, color: [255, 60, 0] },
{ position: 0.3, color: [255, 120, 20] },
{ position: 0.6, color: [200, 40, 80] },
{ position: 0.8, color: [120, 20, 120] },
{ position: 1.0, color: [40, 10, 60] },
],
ocean: [
{ position: 0.0, color: [0, 10, 40] },
{ position: 0.3, color: [0, 60, 120] },
{ position: 0.6, color: [0, 140, 180] },
{ position: 0.8, color: [100, 220, 240] },
{ position: 1.0, color: [200, 240, 255] },
],
forest: [
{ position: 0.0, color: [0, 40, 0] },
{ position: 0.3, color: [0, 100, 20] },
{ position: 0.6, color: [60, 180, 30] },
{ position: 0.8, color: [140, 220, 50] },
{ position: 1.0, color: [220, 255, 80] },
],
fire: [
{ position: 0.0, color: [0, 0, 0] },
{ position: 0.25, color: [80, 0, 0] },
{ position: 0.5, color: [255, 40, 0] },
{ position: 0.75, color: [255, 160, 0] },
{ position: 1.0, color: [255, 255, 60] },
],
lava: [
{ position: 0.0, color: [0, 0, 0] },
{ position: 0.3, color: [120, 0, 0] },
{ position: 0.6, color: [255, 60, 0] },
{ position: 0.8, color: [255, 160, 40] },
{ position: 1.0, color: [255, 255, 120] },
],
aurora: [
{ position: 0.0, color: [0, 20, 40] },
{ position: 0.25, color: [0, 200, 100] },
{ position: 0.5, color: [0, 100, 200] },
{ position: 0.75, color: [120, 0, 200] },
{ position: 1.0, color: [0, 200, 140] },
],
ice: [
{ position: 0.0, color: [255, 255, 255] },
{ position: 0.3, color: [180, 220, 255] },
{ position: 0.6, color: [80, 160, 255] },
{ position: 0.85, color: [20, 60, 180] },
{ position: 1.0, color: [10, 20, 80] },
],
warm: [
{ position: 0.0, color: [255, 255, 80] },
{ position: 0.33, color: [255, 160, 0] },
{ position: 0.67, color: [255, 60, 0] },
{ position: 1.0, color: [160, 0, 0] },
],
cool: [
{ position: 0.0, color: [0, 255, 200] },
{ position: 0.33, color: [0, 120, 255] },
{ position: 0.67, color: [60, 0, 255] },
{ position: 1.0, color: [120, 0, 180] },
],
neon: [
{ position: 0.0, color: [255, 0, 200] },
{ position: 0.25, color: [0, 255, 255] },
{ position: 0.5, color: [0, 255, 50] },
{ position: 0.75, color: [255, 255, 0] },
{ position: 1.0, color: [255, 0, 100] },
],
pastel: [
{ position: 0.0, color: [255, 180, 180] },
{ position: 0.2, color: [255, 220, 160] },
{ position: 0.4, color: [255, 255, 180] },
{ position: 0.6, color: [180, 255, 200] },
{ position: 0.8, color: [180, 200, 255] },
{ position: 1.0, color: [220, 180, 255] },
],
};
/**
* Build a gradient preview from GRADIENT_PRESETS entry (array of {position, color:[r,g,b]}).
*/
export function gradientPresetStripHTML(stops, w = 80, h = 16) {
const css = stops.map(s => `rgb(${s.color.join(',')}) ${(s.position * 100).toFixed(0)}%`).join(', ');
return `<span style="display:inline-block;width:${w}px;height:${h}px;border-radius:3px;background:linear-gradient(to right,${css});flex-shrink:0"></span>`;
}
export function applyGradientPreset(key) {
if (!key || !GRADIENT_PRESETS[key]) return;
gradientInit(GRADIENT_PRESETS[key]);
}
/* ── Render ───────────────────────────────────────────────────── */
export function gradientRenderAll() {
_gradientRenderCanvas();
_gradientRenderMarkers();
_gradientRenderStopList();
}
function _gradientRenderCanvas() {
const canvas = document.getElementById('gradient-canvas');
if (!canvas) return;
// Sync canvas pixel width to its CSS display width
const W = Math.max(1, Math.round(canvas.offsetWidth || 300));
if (canvas.width !== W) canvas.width = W;
const ctx = canvas.getContext('2d');
const H = canvas.height;
const imgData = ctx.createImageData(W, H);
for (let x = 0; x < W; x++) {
const pos = W > 1 ? x / (W - 1) : 0;
const [r, g, b] = _gradientInterpolate(_gradientStops, pos);
for (let y = 0; y < H; y++) {
const idx = (y * W + x) * 4;
imgData.data[idx] = r;
imgData.data[idx + 1] = g;
imgData.data[idx + 2] = b;
imgData.data[idx + 3] = 255;
}
}
ctx.putImageData(imgData, 0, 0);
}
function _gradientRenderMarkers() {
const track = document.getElementById('gradient-markers-track');
if (!track) return;
track.innerHTML = '';
_gradientStops.forEach((stop, idx) => {
const marker = document.createElement('div');
marker.className = 'gradient-marker' + (idx === _gradientSelectedIdx ? ' selected' : '');
marker.style.left = `${stop.position * 100}%`;
marker.style.background = rgbArrayToHex(stop.color);
marker.title = `${(stop.position * 100).toFixed(0)}%`;
marker.addEventListener('mousedown', (e) => {
e.preventDefault();
e.stopPropagation();
_gradientSelectedIdx = idx;
_gradientStartDrag(e, idx);
_gradientRenderMarkers();
_gradientRenderStopList();
});
track.appendChild(marker);
});
}
/**
* Update the selected stop index and reflect it via CSS classes only —
* no DOM rebuild, so in-flight click events on child elements are preserved.
*/
function _gradientSelectStop(idx) {
_gradientSelectedIdx = idx;
document.querySelectorAll('.gradient-stop-row').forEach((r, i) => r.classList.toggle('selected', i === idx));
document.querySelectorAll('.gradient-marker').forEach((m, i) => m.classList.toggle('selected', i === idx));
}
function _gradientRenderStopList() {
const list = document.getElementById('gradient-stops-list');
if (!list) return;
list.innerHTML = '';
_gradientStops.forEach((stop, idx) => {
const row = document.createElement('div');
row.className = 'gradient-stop-row' + (idx === _gradientSelectedIdx ? ' selected' : '');
const hasBidir = !!stop.colorRight;
const rightColor = stop.colorRight || stop.color;
row.innerHTML = `
<input type="number" class="gradient-stop-pos" value="${stop.position.toFixed(2)}"
min="0" max="1" step="0.01" title="${t('color_strip.gradient.position')}">
<input type="color" class="gradient-stop-color" value="${rgbArrayToHex(stop.color)}"
title="Left color">
<button type="button" class="btn btn-sm gradient-stop-bidir-btn${hasBidir ? ' active' : ''}"
title="${t('color_strip.gradient.bidir.hint')}">↔</button>
<input type="color" class="gradient-stop-color-right" value="${rgbArrayToHex(rightColor)}"
style="display:${hasBidir ? 'inline-block' : 'none'}" title="Right color">
<span class="gradient-stop-spacer"></span>
<button type="button" class="btn btn-sm btn-danger gradient-stop-remove-btn"
title="Remove stop"${_gradientStops.length <= 2 ? ' disabled' : ''}>✕</button>
`;
// Select row on mousedown — CSS-only update so child click events are not interrupted
row.addEventListener('mousedown', () => _gradientSelectStop(idx));
// Position
const posInput = row.querySelector('.gradient-stop-pos');
posInput.addEventListener('change', (e) => {
const val = Math.min(1, Math.max(0, parseFloat(e.target.value) || 0));
e.target.value = val.toFixed(2);
_gradientStops[idx].position = val;
gradientRenderAll();
});
posInput.addEventListener('focus', () => _gradientSelectStop(idx));
// Left color
row.querySelector('.gradient-stop-color').addEventListener('input', (e) => {
_gradientStops[idx].color = hexToRgbArray(e.target.value);
const markers = document.querySelectorAll('.gradient-marker');
if (markers[idx]) markers[idx].style.background = e.target.value;
_gradientRenderCanvas();
});
// Bidirectional toggle
row.querySelector('.gradient-stop-bidir-btn').addEventListener('click', (e) => {
e.stopPropagation();
_gradientStops[idx].colorRight = _gradientStops[idx].colorRight
? null
: [..._gradientStops[idx].color];
_gradientRenderStopList();
_gradientRenderCanvas();
});
// Right color
row.querySelector('.gradient-stop-color-right').addEventListener('input', (e) => {
_gradientStops[idx].colorRight = hexToRgbArray(e.target.value);
_gradientRenderCanvas();
});
// Remove
row.querySelector('.btn-danger').addEventListener('click', (e) => {
e.stopPropagation();
if (_gradientStops.length > 2) {
_gradientStops.splice(idx, 1);
if (_gradientSelectedIdx >= _gradientStops.length) {
_gradientSelectedIdx = _gradientStops.length - 1;
}
gradientRenderAll();
}
});
list.appendChild(row);
});
}
/* ── Add Stop ─────────────────────────────────────────────────── */
export function gradientAddStop(position) {
if (position === undefined) {
// Find the largest gap between adjacent stops and place in the middle
const sorted = [..._gradientStops].sort((a, b) => a.position - b.position);
let maxGap = 0, gapMid = 0.5;
for (let i = 0; i < sorted.length - 1; i++) {
const gap = sorted[i + 1].position - sorted[i].position;
if (gap > maxGap) {
maxGap = gap;
gapMid = (sorted[i].position + sorted[i + 1].position) / 2;
}
}
position = sorted.length >= 2 ? Math.round(gapMid * 100) / 100 : 0.5;
}
position = Math.min(1, Math.max(0, position));
const color = _gradientInterpolate(_gradientStops, position);
_gradientStops.push({ position, color, colorRight: null });
_gradientSelectedIdx = _gradientStops.length - 1;
gradientRenderAll();
}
/* ── Drag ─────────────────────────────────────────────────────── */
function _gradientStartDrag(e, idx) {
const track = document.getElementById('gradient-markers-track');
if (!track) return;
_gradientDragging = { idx, trackRect: track.getBoundingClientRect() };
const onMove = (me) => {
if (!_gradientDragging) return;
const { trackRect } = _gradientDragging;
const pos = Math.min(1, Math.max(0, (me.clientX - trackRect.left) / trackRect.width));
_gradientStops[_gradientDragging.idx].position = Math.round(pos * 100) / 100;
gradientRenderAll();
};
const onUp = () => {
_gradientDragging = null;
document.removeEventListener('mousemove', onMove);
document.removeEventListener('mouseup', onUp);
};
document.addEventListener('mousemove', onMove);
document.addEventListener('mouseup', onUp);
}
/* ── Track click → add stop ───────────────────────────────────── */
function _gradientSetupTrackClick() {
const track = document.getElementById('gradient-markers-track');
if (!track || track._gradientClickBound) return;
track._gradientClickBound = true;
track.addEventListener('click', (e) => {
if (_gradientDragging) return;
const rect = track.getBoundingClientRect();
const pos = Math.min(1, Math.max(0, (e.clientX - rect.left) / rect.width));
// Ignore clicks very close to an existing marker
const tooClose = _gradientStops.some(s => Math.abs(s.position - pos) < 0.03);
if (!tooClose) {
gradientAddStop(Math.round(pos * 100) / 100);
}
});
}
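The largest-gap placement rule used by `gradientAddStop` above can be exercised in isolation. This is a minimal sketch: `findGapMidpoint` is a hypothetical standalone extraction for illustration, not a function exported by the module.

```javascript
// Return the midpoint of the widest gap between adjacent stops,
// rounded to two decimals — the placement gradientAddStop uses when
// no explicit position is supplied. Falls back to 0.5 for < 2 stops.
function findGapMidpoint(stops) {
  const sorted = [...stops].sort((a, b) => a.position - b.position);
  if (sorted.length < 2) return 0.5;
  let maxGap = 0, gapMid = 0.5;
  for (let i = 0; i < sorted.length - 1; i++) {
    const gap = sorted[i + 1].position - sorted[i].position;
    if (gap > maxGap) {
      maxGap = gap;
      gapMid = (sorted[i].position + sorted[i + 1].position) / 2;
    }
  }
  return Math.round(gapMid * 100) / 100;
}

console.log(findGapMidpoint([{ position: 0 }, { position: 1 }]));                    // 0.5
console.log(findGapMidpoint([{ position: 0 }, { position: 0.2 }, { position: 1 }])); // 0.6
```

Keeping this pure (no DOM access) is what makes the placement rule easy to unit-test separately from the render path.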

View File

@@ -2,7 +2,7 @@
  * Dashboard — real-time target status overview.
  */
-import { apiKey, _dashboardLoading, set_dashboardLoading, dashboardPollInterval, setDashboardPollInterval } from '../core/state.js';
+import { apiKey, _dashboardLoading, set_dashboardLoading, dashboardPollInterval, setDashboardPollInterval, colorStripSourcesCache, devicesCache, outputTargetsCache } from '../core/state.js';
 import { API_BASE, getHeaders, fetchWithAuth, escapeHtml } from '../core/api.js';
 import { t } from '../core/i18n.js';
 import { showToast, formatUptime, setTabRefreshing } from '../core/ui.js';
@@ -418,27 +418,23 @@ export async function loadDashboard(forceFullRender = false) {
     try {
         // Fire all requests in a single batch to avoid sequential RTTs
-        const [targetsResp, automationsResp, devicesResp, cssResp, batchStatesResp, batchMetricsResp, scenePresets, syncClocksResp] = await Promise.all([
-            fetchWithAuth('/output-targets'),
+        const [targets, automationsResp, devicesArr, cssArr, batchStatesResp, batchMetricsResp, scenePresets, syncClocksResp] = await Promise.all([
+            outputTargetsCache.fetch().catch(() => []),
             fetchWithAuth('/automations').catch(() => null),
-            fetchWithAuth('/devices').catch(() => null),
-            fetchWithAuth('/color-strip-sources').catch(() => null),
+            devicesCache.fetch().catch(() => []),
+            colorStripSourcesCache.fetch().catch(() => []),
             fetchWithAuth('/output-targets/batch/states').catch(() => null),
             fetchWithAuth('/output-targets/batch/metrics').catch(() => null),
             loadScenePresets(),
             fetchWithAuth('/sync-clocks').catch(() => null),
         ]);
-        const targetsData = await targetsResp.json();
-        const targets = targetsData.targets || [];
         const automationsData = automationsResp && automationsResp.ok ? await automationsResp.json() : { automations: [] };
         const automations = automationsData.automations || [];
-        const devicesData = devicesResp && devicesResp.ok ? await devicesResp.json() : { devices: [] };
         const devicesMap = {};
-        for (const d of (devicesData.devices || [])) { devicesMap[d.id] = d; }
-        const cssData = cssResp && cssResp.ok ? await cssResp.json() : { sources: [] };
+        for (const d of devicesArr) { devicesMap[d.id] = d; }
         const cssSourceMap = {};
-        for (const s of (cssData.sources || [])) { cssSourceMap[s.id] = s; }
+        for (const s of (cssArr || [])) { cssSourceMap[s.id] = s; }
         const syncClocksData = syncClocksResp && syncClocksResp.ok ? await syncClocksResp.json() : { clocks: [] };
         const syncClocks = syncClocksData.clocks || [];
@@ -782,14 +778,13 @@ export async function dashboardStopTarget(targetId) {
 export async function dashboardStopAll() {
     try {
-        const [targetsResp, statesResp] = await Promise.all([
-            fetchWithAuth('/output-targets'),
+        const [allTargets, statesResp] = await Promise.all([
+            outputTargetsCache.fetch().catch(() => []),
             fetchWithAuth('/output-targets/batch/states'),
         ]);
-        const data = await targetsResp.json();
         const statesData = statesResp.ok ? await statesResp.json() : { states: {} };
         const states = statesData.states || {};
-        const running = (data.targets || []).filter(t => states[t.id]?.processing);
+        const running = allTargets.filter(t => states[t.id]?.processing);
         await Promise.all(running.map(t =>
             fetchWithAuth(`/output-targets/${t.id}/stop`, { method: 'POST' }).catch(() => {})
         ));

View File

@@ -7,6 +7,7 @@ import {
     _discoveryCache, set_discoveryCache,
 } from '../core/state.js';
 import { API_BASE, fetchWithAuth, isSerialDevice, isMockDevice, isMqttDevice, isWsDevice, isOpenrgbDevice, escapeHtml } from '../core/api.js';
+import { devicesCache } from '../core/state.js';
 import { t } from '../core/i18n.js';
 import { showToast } from '../core/ui.js';
 import { Modal } from '../core/modal.js';
@@ -463,6 +464,7 @@ export async function handleAddDevice(event) {
         const result = await response.json();
         console.log('Device added successfully:', result);
         showToast(t('device_discovery.added'), 'success');
+        devicesCache.invalidate();
         addDeviceModal.forceClose();
         if (typeof window.loadDevices === 'function') await window.loadDevices();
         if (!localStorage.getItem('deviceTutorialSeen')) {

View File

@@ -6,12 +6,16 @@ import {
     _deviceBrightnessCache, updateDeviceBrightness,
 } from '../core/state.js';
 import { API_BASE, getHeaders, fetchWithAuth, escapeHtml, isSerialDevice, isMockDevice, isMqttDevice, isWsDevice, isOpenrgbDevice } from '../core/api.js';
+import { devicesCache } from '../core/state.js';
 import { _fetchOpenrgbZones, _getCheckedZones, _splitOpenrgbZone, _getZoneMode } from './device-discovery.js';
 import { t } from '../core/i18n.js';
 import { showToast, showConfirm } from '../core/ui.js';
 import { Modal } from '../core/modal.js';
 import { ICON_SETTINGS, ICON_STOP_PLAIN, ICON_LED, ICON_WEB, ICON_PLUG } from '../core/icons.js';
 import { wrapCard } from '../core/card-colors.js';
+import { TagInput, renderTagChips } from '../core/tag-input.js';
+
+let _deviceTagsInput = null;
 class DeviceSettingsModal extends Modal {
     constructor() { super('device-settings-modal'); }
@@ -30,6 +34,7 @@ class DeviceSettingsModal extends Modal {
             send_latency: document.getElementById('settings-send-latency')?.value || '0',
             zones: JSON.stringify(_getCheckedZones('settings-zone-list')),
             zoneMode: _getZoneMode('settings-zone-mode'),
+            tags: JSON.stringify(_deviceTagsInput ? _deviceTagsInput.getValue() : []),
         };
     }
@@ -125,7 +130,8 @@ export function createDeviceCard(device) {
                 onchange="saveCardBrightness('${device.id}', this.value)"
                 title="${_deviceBrightnessCache[device.id] != null ? Math.round(_deviceBrightnessCache[device.id] / 255 * 100) + '%' : '...'}"
                 ${_deviceBrightnessCache[device.id] == null ? 'disabled' : ''}>
-        </div>` : ''}`,
+        </div>` : ''}
+        ${renderTagChips(device.tags)}`,
         actions: `
             <button class="btn btn-icon btn-secondary" onclick="showSettings('${device.id}')" title="${t('device.button.settings')}">
                 ${ICON_SETTINGS}
@@ -165,6 +171,7 @@ export async function removeDevice(deviceId) {
         });
         if (response.ok) {
             showToast(t('device.removed'), 'success');
+            devicesCache.invalidate();
             window.loadDevices();
         } else {
             const error = await response.json();
@@ -323,6 +330,13 @@ export async function showSettings(deviceId) {
         }
     }
+
+    // Tags
+    if (_deviceTagsInput) _deviceTagsInput.destroy();
+    _deviceTagsInput = new TagInput(document.getElementById('device-tags-container'), {
+        placeholder: window.t ? t('tags.placeholder') : 'Add tag...'
+    });
+    _deviceTagsInput.setValue(device.tags || []);
     settingsModal.snapshot();
     settingsModal.open();
@@ -338,7 +352,7 @@ export async function showSettings(deviceId) {
 }
 export function isSettingsDirty() { return settingsModal.isDirty(); }
-export function forceCloseDeviceSettingsModal() { settingsModal.forceClose(); }
+export function forceCloseDeviceSettingsModal() { if (_deviceTagsInput) { _deviceTagsInput.destroy(); _deviceTagsInput = null; } settingsModal.forceClose(); }
 export function closeDeviceSettingsModal() { settingsModal.close(); }
 export async function saveDeviceSettings() {
@@ -356,6 +370,7 @@ export async function saveDeviceSettings() {
             name, url,
             auto_shutdown: document.getElementById('settings-auto-shutdown').checked,
             state_check_interval: parseInt(document.getElementById('settings-health-interval').value, 10) || 30,
+            tags: _deviceTagsInput ? _deviceTagsInput.getValue() : [],
         };
         const ledCountInput = document.getElementById('settings-led-count');
         if (settingsModal.capabilities.includes('manual_led_count') && ledCountInput.value) {
@@ -386,6 +401,7 @@ export async function saveDeviceSettings() {
         }
         showToast(t('settings.saved'), 'success');
+        devicesCache.invalidate();
         settingsModal.forceClose();
         window.loadDevices();
     } catch (err) {

View File

@@ -9,6 +9,7 @@ import {
     kcWebSockets,
     PATTERN_RECT_BORDERS,
     _cachedValueSources, valueSourcesCache, streamsCache,
+    outputTargetsCache, patternTemplatesCache,
 } from '../core/state.js';
 import { API_BASE, getHeaders, fetchWithAuth, escapeHtml } from '../core/api.js';
 import { t } from '../core/i18n.js';
@@ -21,9 +22,12 @@ import {
 } from '../core/icons.js';
 import * as P from '../core/icon-paths.js';
 import { wrapCard } from '../core/card-colors.js';
+import { TagInput, renderTagChips } from '../core/tag-input.js';
 import { IconSelect } from '../core/icon-select.js';
 import { EntitySelect } from '../core/entity-palette.js';
+
+let _kcTagsInput = null;
 class KCEditorModal extends Modal {
     constructor() {
         super('kc-editor-modal');
@@ -38,6 +42,7 @@ class KCEditorModal extends Modal {
             smoothing: document.getElementById('kc-editor-smoothing').value,
             patternTemplateId: document.getElementById('kc-editor-pattern-template').value,
             brightness_vs: document.getElementById('kc-editor-brightness-vs').value,
+            tags: JSON.stringify(_kcTagsInput ? _kcTagsInput.getValue() : []),
         };
     }
 }
@@ -228,6 +233,7 @@ export function createKCTargetCard(target, sourceMap, patternTemplateMap, valueS
             <span class="stream-card-prop" title="${t('kc.fps')}">${ICON_FPS} ${kcSettings.fps ?? 10}</span>
             ${bvs ? `<span class="stream-card-prop stream-card-prop-full stream-card-link" title="${t('targets.brightness_vs')}" onclick="event.stopPropagation(); navigateToCard('streams','value','value-sources','data-id','${bvsId}')">${getValueSourceIcon(bvs.source_type)} ${escapeHtml(bvs.name)}</span>` : ''}
         </div>
+        ${renderTagChips(target.tags)}
         <div class="brightness-control" data-kc-brightness-wrap="${target.id}">
             <input type="range" class="brightness-slider" min="0" max="255"
                 value="${brightnessInt}" data-kc-brightness="${target.id}"
@@ -503,12 +509,11 @@ function _populateKCBrightnessVsDropdown(selectedId = '') {
 export async function showKCEditor(targetId = null, cloneData = null) {
     try {
         // Load sources, pattern templates, and value sources in parallel
-        const [sources, patResp, valueSources] = await Promise.all([
+        const [sources, patTemplates, valueSources] = await Promise.all([
             streamsCache.fetch().catch(() => []),
-            fetchWithAuth('/pattern-templates').catch(() => null),
+            patternTemplatesCache.fetch().catch(() => []),
             valueSourcesCache.fetch(),
         ]);
-        const patTemplates = (patResp && patResp.ok) ? (await patResp.json()).templates || [] : [];
         // Populate source select
         const sourceSelect = document.getElementById('kc-editor-source');
@@ -538,10 +543,12 @@ export async function showKCEditor(targetId = null, cloneData = null) {
     _ensureSourceEntitySelect(sources);
     _ensurePatternEntitySelect(patTemplates);
+    let _editorTags = [];
     if (targetId) {
         const resp = await fetch(`${API_BASE}/output-targets/${targetId}`, { headers: getHeaders() });
         if (!resp.ok) throw new Error('Failed to load target');
         const target = await resp.json();
+        _editorTags = target.tags || [];
         const kcSettings = target.key_colors_settings || {};
         document.getElementById('kc-editor-id').value = target.id;
@@ -557,6 +564,7 @@ export async function showKCEditor(targetId = null, cloneData = null) {
         _populateKCBrightnessVsDropdown(kcSettings.brightness_value_source_id || '');
         document.getElementById('kc-editor-title').innerHTML = `${ICON_PALETTE} ${t('kc.edit')}`;
     } else if (cloneData) {
+        _editorTags = cloneData.tags || [];
         const kcSettings = cloneData.key_colors_settings || {};
         document.getElementById('kc-editor-id').value = '';
         document.getElementById('kc-editor-name').value = (cloneData.name || '') + ' (Copy)';
@@ -593,6 +601,13 @@ export async function showKCEditor(targetId = null, cloneData = null) {
     patSelect.onchange = () => _autoGenerateKCName();
     if (!targetId && !cloneData) _autoGenerateKCName();
+
+    // Tags
+    if (_kcTagsInput) _kcTagsInput.destroy();
+    _kcTagsInput = new TagInput(document.getElementById('kc-tags-container'), {
+        placeholder: window.t ? t('tags.placeholder') : 'Add tag...'
+    });
+    _kcTagsInput.setValue(_editorTags);
     kcEditorModal.snapshot();
     kcEditorModal.open();
@@ -614,6 +629,7 @@ export async function closeKCEditorModal() {
 }
 export function forceCloseKCEditorModal() {
+    if (_kcTagsInput) { _kcTagsInput.destroy(); _kcTagsInput = null; }
     kcEditorModal.forceClose();
     set_kcNameManuallyEdited(false);
 }
@@ -641,6 +657,7 @@ export async function saveKCEditor() {
     const payload = {
         name,
         picture_source_id: sourceId,
+        tags: _kcTagsInput ? _kcTagsInput.getValue() : [],
         key_colors_settings: {
             fps,
             interpolation_mode: interpolation,
@@ -671,6 +688,7 @@ export async function saveKCEditor() {
         }
         showToast(targetId ? t('kc.updated') : t('kc.created'), 'success');
+        outputTargetsCache.invalidate();
         kcEditorModal.forceClose();
         // Use window.* to avoid circular import with targets.js
         if (typeof window.loadTargetsTab === 'function') window.loadTargetsTab();
@@ -683,9 +701,9 @@ export async function saveKCEditor() {
 export async function cloneKCTarget(targetId) {
     try {
-        const resp = await fetchWithAuth(`/output-targets/${targetId}`);
-        if (!resp.ok) throw new Error('Failed to load target');
-        const target = await resp.json();
+        const targets = await outputTargetsCache.fetch();
+        const target = targets.find(t => t.id === targetId);
+        if (!target) throw new Error('Target not found');
         showKCEditor(null, target);
     } catch (error) {
         if (error.isAuth) return;
@@ -704,6 +722,7 @@ export async function deleteKCTarget(targetId) {
         });
         if (response.ok) {
             showToast(t('kc.deleted'), 'success');
+            outputTargetsCache.invalidate();
             // Use window.* to avoid circular import with targets.js
             if (typeof window.loadTargetsTab === 'function') window.loadTargetsTab();
         } else {

View File

@@ -16,14 +16,17 @@ import {
     streamsCache,
 } from '../core/state.js';
 import { API_BASE, getHeaders, fetchWithAuth, escapeHtml } from '../core/api.js';
+import { patternTemplatesCache } from '../core/state.js';
 import { t } from '../core/i18n.js';
 import { showToast, showConfirm } from '../core/ui.js';
 import { Modal } from '../core/modal.js';
 import { getPictureSourceIcon, ICON_PATTERN_TEMPLATE, ICON_CLONE, ICON_EDIT } from '../core/icons.js';
 import { wrapCard } from '../core/card-colors.js';
+import { TagInput, renderTagChips } from '../core/tag-input.js';
 import { EntitySelect } from '../core/entity-palette.js';
 let _patternBgEntitySelect = null;
+let _patternTagsInput = null;
 class PatternTemplateModal extends Modal {
     constructor() {
@@ -35,10 +38,12 @@ class PatternTemplateModal extends Modal {
             name: document.getElementById('pattern-template-name').value,
             description: document.getElementById('pattern-template-description').value,
             rectangles: JSON.stringify(patternEditorRects),
+            tags: JSON.stringify(_patternTagsInput ? _patternTagsInput.getValue() : []),
         };
     }
     onForceClose() {
+        if (_patternTagsInput) { _patternTagsInput.destroy(); _patternTagsInput = null; }
         setPatternEditorRects([]);
         setPatternEditorSelectedIdx(-1);
         setPatternEditorBgImage(null);
@@ -70,7 +75,8 @@ export function createPatternTemplateCard(pt) {
         ${desc}
         <div class="stream-card-props">
             <span class="stream-card-prop">▭ ${rectCount} rect${rectCount !== 1 ? 's' : ''}</span>
-        </div>`,
+        </div>
+        ${renderTagChips(pt.tags)}`,
         actions: `
             <button class="btn btn-icon btn-secondary" onclick="clonePatternTemplate('${pt.id}')" title="${t('common.clone')}">${ICON_CLONE}</button>
             <button class="btn btn-icon btn-secondary" onclick="showPatternTemplateEditor('${pt.id}')" title="${t('common.edit')}">${ICON_EDIT}</button>`,
@@ -109,6 +115,8 @@ export async function showPatternTemplateEditor(templateId = null, cloneData = n
     setPatternEditorSelectedIdx(-1);
     setPatternCanvasDragMode(null);
+
+    let _editorTags = [];
     if (templateId) {
         const resp = await fetch(`${API_BASE}/pattern-templates/${templateId}`, { headers: getHeaders() });
         if (!resp.ok) throw new Error('Failed to load pattern template');
@@ -119,12 +127,14 @@ export async function showPatternTemplateEditor(templateId = null, cloneData = n
         document.getElementById('pattern-template-description').value = tmpl.description || '';
         document.getElementById('pattern-template-modal-title').innerHTML = `${ICON_PATTERN_TEMPLATE} ${t('pattern.edit')}`;
         setPatternEditorRects((tmpl.rectangles || []).map(r => ({ ...r })));
+        _editorTags = tmpl.tags || [];
     } else if (cloneData) {
         document.getElementById('pattern-template-id').value = '';
         document.getElementById('pattern-template-name').value = (cloneData.name || '') + ' (Copy)';
         document.getElementById('pattern-template-description').value = cloneData.description || '';
         document.getElementById('pattern-template-modal-title').innerHTML = `${ICON_PATTERN_TEMPLATE} ${t('pattern.add')}`;
         setPatternEditorRects((cloneData.rectangles || []).map(r => ({ ...r })));
+        _editorTags = cloneData.tags || [];
     } else {
         document.getElementById('pattern-template-id').value = '';
         document.getElementById('pattern-template-name').value = '';
@@ -133,6 +143,11 @@ export async function showPatternTemplateEditor(templateId = null, cloneData = n
         setPatternEditorRects([]);
     }
+
+    // Tags
+    if (_patternTagsInput) { _patternTagsInput.destroy(); _patternTagsInput = null; }
+    _patternTagsInput = new TagInput(document.getElementById('pattern-tags-container'), { placeholder: t('tags.placeholder') });
+    _patternTagsInput.setValue(_editorTags);
     patternModal.snapshot();
     renderPatternRectList();
@@ -177,6 +192,7 @@ export async function savePatternTemplate() {
             name: r.name, x: r.x, y: r.y, width: r.width, height: r.height,
         })),
         description: description || null,
+        tags: _patternTagsInput ? _patternTagsInput.getValue() : [],
     };
     try {
@@ -197,6 +213,7 @@ export async function savePatternTemplate() {
         }
         showToast(templateId ? t('pattern.updated') : t('pattern.created'), 'success');
+        patternTemplatesCache.invalidate();
         patternModal.forceClose();
         // Use window.* to avoid circular import with targets.js
         if (typeof window.loadTargetsTab === 'function') window.loadTargetsTab();
@@ -209,9 +226,9 @@ export async function savePatternTemplate() {
 export async function clonePatternTemplate(templateId) {
     try {
-        const resp = await fetchWithAuth(`/pattern-templates/${templateId}`);
-        if (!resp.ok) throw new Error('Failed to load pattern template');
-        const tmpl = await resp.json();
+        const templates = await patternTemplatesCache.fetch();
+        const tmpl = templates.find(t => t.id === templateId);
+        if (!tmpl) throw new Error('Pattern template not found');
         showPatternTemplateEditor(null, tmpl);
     } catch (error) {
         if (error.isAuth) return;
@@ -229,6 +246,7 @@ export async function deletePatternTemplate(templateId) {
         });
         if (response.ok) {
             showToast(t('pattern.deleted'), 'success');
+            patternTemplatesCache.invalidate();
             if (typeof window.loadTargetsTab === 'function') window.loadTargetsTab();
         } else {
             const error = await response.json();

View File

@@ -11,15 +11,20 @@ import { CardSection } from '../core/card-sections.js';
 import {
   ICON_SCENE, ICON_CAPTURE, ICON_START, ICON_EDIT, ICON_REFRESH, ICON_TARGET, ICON_CLONE,
 } from '../core/icons.js';
-import { scenePresetsCache } from '../core/state.js';
+import { scenePresetsCache, outputTargetsCache } from '../core/state.js';
+import { TagInput, renderTagChips } from '../core/tag-input.js';
 import { cardColorStyle, cardColorButton } from '../core/card-colors.js';
 import { EntityPalette } from '../core/entity-palette.js';
 let _editingId = null;
 let _allTargets = []; // fetched on capture open
+let _sceneTagsInput = null;
 class ScenePresetEditorModal extends Modal {
   constructor() { super('scene-preset-editor-modal'); }
+  onForceClose() {
+    if (_sceneTagsInput) { _sceneTagsInput.destroy(); _sceneTagsInput = null; }
+  }
   snapshotValues() {
     const items = [...document.querySelectorAll('#scene-target-list .scene-target-item')]
       .map(el => el.dataset.targetId).sort().join(',');
@@ -27,6 +32,7 @@ class ScenePresetEditorModal extends Modal {
       name: document.getElementById('scene-preset-editor-name').value,
       description: document.getElementById('scene-preset-editor-description').value,
       targets: items,
+      tags: JSON.stringify(_sceneTagsInput ? _sceneTagsInput.getValue() : []),
     };
   }
 }
@@ -61,6 +67,7 @@ export function createSceneCard(preset) {
       ${meta.map(m => `<span class="stream-card-prop">${m}</span>`).join('')}
       ${updated ? `<span class="stream-card-prop">${updated}</span>` : ''}
     </div>
+    ${renderTagChips(preset.tags)}
     <div class="card-actions">
       <button class="btn btn-icon btn-secondary" onclick="cloneScenePreset('${preset.id}')" title="${t('common.clone')}">${ICON_CLONE}</button>
       <button class="btn btn-icon btn-secondary" onclick="editScenePreset('${preset.id}')" title="${t('scenes.edit')}">${ICON_EDIT}</button>
@@ -129,15 +136,15 @@ export async function openScenePresetCapture() {
     selectorGroup.style.display = '';
     targetList.innerHTML = '';
     try {
-      const resp = await fetchWithAuth('/output-targets');
-      if (resp.ok) {
-        const data = await resp.json();
-        _allTargets = data.targets || [];
-        _refreshTargetSelect();
-      }
+      _allTargets = await outputTargetsCache.fetch().catch(() => []);
+      _refreshTargetSelect();
     } catch { /* ignore */ }
   }
+  if (_sceneTagsInput) { _sceneTagsInput.destroy(); _sceneTagsInput = null; }
+  _sceneTagsInput = new TagInput(document.getElementById('scene-tags-container'), { placeholder: t('tags.placeholder') });
+  _sceneTagsInput.setValue([]);
   scenePresetModal.open();
   scenePresetModal.snapshot();
 }
@@ -164,27 +171,27 @@ export async function editScenePreset(presetId) {
     selectorGroup.style.display = '';
     targetList.innerHTML = '';
     try {
-      const resp = await fetchWithAuth('/output-targets');
-      if (resp.ok) {
-        const data = await resp.json();
-        _allTargets = data.targets || [];
+      _allTargets = await outputTargetsCache.fetch().catch(() => []);
       // Pre-add targets already in the preset
       const presetTargetIds = (preset.targets || []).map(pt => pt.target_id || pt.id);
       for (const tid of presetTargetIds) {
         const tgt = _allTargets.find(t => t.id === tid);
         if (!tgt) continue;
         const item = document.createElement('div');
         item.className = 'scene-target-item';
         item.dataset.targetId = tid;
         item.innerHTML = `<span>${escapeHtml(tgt.name)}</span><button type="button" class="btn-remove-condition" onclick="removeSceneTarget(this)" title="Remove">&#x2715;</button>`;
         targetList.appendChild(item);
-        }
-        _refreshTargetSelect();
       }
+      _refreshTargetSelect();
     } catch { /* ignore */ }
   }
+  if (_sceneTagsInput) { _sceneTagsInput.destroy(); _sceneTagsInput = null; }
+  _sceneTagsInput = new TagInput(document.getElementById('scene-tags-container'), { placeholder: t('tags.placeholder') });
+  _sceneTagsInput.setValue(preset.tags || []);
   scenePresetModal.open();
   scenePresetModal.snapshot();
 }
@@ -202,6 +209,8 @@ export async function saveScenePreset() {
     return;
   }
+  const tags = _sceneTagsInput ? _sceneTagsInput.getValue() : [];
   try {
     let resp;
     if (_editingId) {
@@ -209,14 +218,14 @@ export async function saveScenePreset() {
         .map(el => el.dataset.targetId);
       resp = await fetchWithAuth(`/scene-presets/${_editingId}`, {
         method: 'PUT',
-        body: JSON.stringify({ name, description, target_ids }),
+        body: JSON.stringify({ name, description, target_ids, tags }),
       });
     } else {
       const target_ids = [...document.querySelectorAll('#scene-target-list .scene-target-item')]
         .map(el => el.dataset.targetId);
       resp = await fetchWithAuth('/scene-presets', {
         method: 'POST',
-        body: JSON.stringify({ name, description, target_ids }),
+        body: JSON.stringify({ name, description, target_ids, tags }),
       });
     }
@@ -367,27 +376,27 @@ export async function cloneScenePreset(presetId) {
     selectorGroup.style.display = '';
     targetList.innerHTML = '';
     try {
-      const resp = await fetchWithAuth('/output-targets');
-      if (resp.ok) {
-        const data = await resp.json();
-        _allTargets = data.targets || [];
+      _allTargets = await outputTargetsCache.fetch().catch(() => []);
       // Pre-add targets from the cloned preset
       const clonedTargetIds = (preset.targets || []).map(pt => pt.target_id || pt.id);
       for (const tid of clonedTargetIds) {
         const tgt = _allTargets.find(t => t.id === tid);
         if (!tgt) continue;
         const item = document.createElement('div');
         item.className = 'scene-target-item';
         item.dataset.targetId = tid;
         item.innerHTML = `<span>${escapeHtml(tgt.name)}</span><button type="button" class="btn-remove-condition" onclick="removeSceneTarget(this)" title="Remove">&#x2715;</button>`;
         targetList.appendChild(item);
-        }
-        _refreshTargetSelect();
       }
+      _refreshTargetSelect();
     } catch { /* ignore */ }
   }
+  if (_sceneTagsInput) { _sceneTagsInput.destroy(); _sceneTagsInput = null; }
+  _sceneTagsInput = new TagInput(document.getElementById('scene-tags-container'), { placeholder: t('tags.placeholder') });
+  _sceneTagsInput.setValue(preset.tags || []);
   scenePresetModal.open();
   scenePresetModal.snapshot();
 }
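Every modal touched by this diff repeats the same three-line dance: destroy any previous `TagInput`, construct a fresh one bound to the modal's container, then `setValue(...)` with the entity's tags. A hedged sketch of that lifecycle, using only the `setValue`/`getValue`/`destroy` surface the diff itself exercises — the `makeSlot` helper below is hypothetical, not part of the codebase:

```javascript
// Illustrative only: enforces "at most one live input per modal" —
// recreate on every open, destroy on close, so listeners never leak.
function makeSlot(factory) {
  let instance = null;
  return {
    open(initial) {
      if (instance) instance.destroy();   // guard against a leaked prior instance
      instance = factory();
      instance.setValue(initial || []);
      return instance;
    },
    close() {
      if (instance) { instance.destroy(); instance = null; }
    },
    get() { return instance; },
  };
}
```

The diff inlines this pattern at each call site instead of abstracting it, which is why the destroy-then-recreate pair also appears in every `onForceClose()` override.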


@@ -48,10 +48,17 @@ import {
   ICON_CAPTURE_TEMPLATE, ICON_PP_TEMPLATE, ICON_HELP,
 } from '../core/icons.js';
 import { wrapCard } from '../core/card-colors.js';
+import { TagInput, renderTagChips } from '../core/tag-input.js';
 import { IconSelect } from '../core/icon-select.js';
 import { EntitySelect } from '../core/entity-palette.js';
 import * as P from '../core/icon-paths.js';
+// ── TagInput instances for modals ──
+let _captureTemplateTagsInput = null;
+let _streamTagsInput = null;
+let _ppTemplateTagsInput = null;
+let _audioTemplateTagsInput = null;
 // ── Card section instances ──
 const csRawStreams = new CardSection('raw-streams', { titleKey: 'streams.section.streams', gridClass: 'templates-grid', addCardOnclick: "showAddStreamModal('raw')", keyAttr: 'data-stream-id' });
 const csRawTemplates = new CardSection('raw-templates', { titleKey: 'templates.title', gridClass: 'templates-grid', addCardOnclick: "showAddTemplateModal()", keyAttr: 'data-template-id' });
@@ -77,6 +84,7 @@ class CaptureTemplateModal extends Modal {
       name: document.getElementById('template-name').value,
       description: document.getElementById('template-description').value,
       engine: document.getElementById('template-engine').value,
+      tags: JSON.stringify(_captureTemplateTagsInput ? _captureTemplateTagsInput.getValue() : []),
     };
     document.querySelectorAll('[data-config-key]').forEach(field => {
       vals['cfg_' + field.dataset.configKey] = field.value;
@@ -85,6 +93,7 @@ class CaptureTemplateModal extends Modal {
   }
   onForceClose() {
+    if (_captureTemplateTagsInput) { _captureTemplateTagsInput.destroy(); _captureTemplateTagsInput = null; }
     setCurrentEditingTemplateId(null);
     set_templateNameManuallyEdited(false);
   }
@@ -104,10 +113,12 @@ class StreamEditorModal extends Modal {
       source: document.getElementById('stream-source').value,
       ppTemplate: document.getElementById('stream-pp-template').value,
       imageSource: document.getElementById('stream-image-source').value,
+      tags: JSON.stringify(_streamTagsInput ? _streamTagsInput.getValue() : []),
     };
   }
   onForceClose() {
+    if (_streamTagsInput) { _streamTagsInput.destroy(); _streamTagsInput = null; }
     document.getElementById('stream-type').disabled = false;
     set_streamNameManuallyEdited(false);
   }
@@ -121,10 +132,12 @@ class PPTemplateEditorModal extends Modal {
       name: document.getElementById('pp-template-name').value,
       description: document.getElementById('pp-template-description').value,
       filters: JSON.stringify(_modalFilters.map(fi => ({ filter_id: fi.filter_id, options: fi.options }))),
+      tags: JSON.stringify(_ppTemplateTagsInput ? _ppTemplateTagsInput.getValue() : []),
     };
   }
   onForceClose() {
+    if (_ppTemplateTagsInput) { _ppTemplateTagsInput.destroy(); _ppTemplateTagsInput = null; }
     set_modalFilters([]);
     set_ppTemplateNameManuallyEdited(false);
   }
@@ -138,6 +151,7 @@ class AudioTemplateModal extends Modal {
       name: document.getElementById('audio-template-name').value,
       description: document.getElementById('audio-template-description').value,
       engine: document.getElementById('audio-template-engine').value,
+      tags: JSON.stringify(_audioTemplateTagsInput ? _audioTemplateTagsInput.getValue() : []),
     };
     document.querySelectorAll('#audio-engine-config-fields [data-config-key]').forEach(field => {
       vals['cfg_' + field.dataset.configKey] = field.value;
@@ -146,6 +160,7 @@ class AudioTemplateModal extends Modal {
   }
   onForceClose() {
+    if (_audioTemplateTagsInput) { _audioTemplateTagsInput.destroy(); _audioTemplateTagsInput = null; }
     setCurrentEditingAudioTemplateId(null);
     set_audioTemplateNameManuallyEdited(false);
   }
@@ -194,6 +209,11 @@ export async function showAddTemplateModal(cloneData = null) {
     populateEngineConfig(cloneData.engine_config);
   }
+  // Tags
+  if (_captureTemplateTagsInput) { _captureTemplateTagsInput.destroy(); _captureTemplateTagsInput = null; }
+  _captureTemplateTagsInput = new TagInput(document.getElementById('capture-template-tags-container'), { placeholder: t('tags.placeholder') });
+  _captureTemplateTagsInput.setValue(cloneData ? (cloneData.tags || []) : []);
   templateModal.open();
   templateModal.snapshot();
 }
@@ -221,6 +241,11 @@ export async function editTemplate(templateId) {
     if (testResults) testResults.style.display = 'none';
     document.getElementById('template-error').style.display = 'none';
+    // Tags
+    if (_captureTemplateTagsInput) { _captureTemplateTagsInput.destroy(); _captureTemplateTagsInput = null; }
+    _captureTemplateTagsInput = new TagInput(document.getElementById('capture-template-tags-container'), { placeholder: t('tags.placeholder') });
+    _captureTemplateTagsInput.setValue(template.tags || []);
     templateModal.open();
     templateModal.snapshot();
   } catch (error) {
@@ -611,7 +636,7 @@ export async function saveTemplate() {
   const description = document.getElementById('template-description').value.trim();
   const engineConfig = collectEngineConfig();
-  const payload = { name, engine_type: engineType, engine_config: engineConfig, description: description || null };
+  const payload = { name, engine_type: engineType, engine_config: engineConfig, description: description || null, tags: _captureTemplateTagsInput ? _captureTemplateTagsInput.getValue() : [] };
   try {
     let response;
@@ -813,6 +838,11 @@ export async function showAddAudioTemplateModal(cloneData = null) {
     populateAudioEngineConfig(cloneData.engine_config);
   }
+  // Tags
+  if (_audioTemplateTagsInput) { _audioTemplateTagsInput.destroy(); _audioTemplateTagsInput = null; }
+  _audioTemplateTagsInput = new TagInput(document.getElementById('audio-template-tags-container'), { placeholder: t('tags.placeholder') });
+  _audioTemplateTagsInput.setValue(cloneData ? (cloneData.tags || []) : []);
   audioTemplateModal.open();
   audioTemplateModal.snapshot();
 }
@@ -836,6 +866,11 @@ export async function editAudioTemplate(templateId) {
     document.getElementById('audio-template-error').style.display = 'none';
+    // Tags
+    if (_audioTemplateTagsInput) { _audioTemplateTagsInput.destroy(); _audioTemplateTagsInput = null; }
+    _audioTemplateTagsInput = new TagInput(document.getElementById('audio-template-tags-container'), { placeholder: t('tags.placeholder') });
+    _audioTemplateTagsInput.setValue(template.tags || []);
     audioTemplateModal.open();
     audioTemplateModal.snapshot();
   } catch (error) {
@@ -861,7 +896,7 @@ export async function saveAudioTemplate() {
   const description = document.getElementById('audio-template-description').value.trim();
   const engineConfig = collectAudioEngineConfig();
-  const payload = { name, engine_type: engineType, engine_config: engineConfig, description: description || null };
+  const payload = { name, engine_type: engineType, engine_config: engineConfig, description: description || null, tags: _audioTemplateTagsInput ? _audioTemplateTagsInput.getValue() : [] };
   try {
     let response;
@@ -1235,6 +1270,7 @@ function renderPictureSourcesList(streams) {
       <div class="template-name">${typeIcon} ${escapeHtml(stream.name)}</div>
     </div>
     ${detailsHtml}
+    ${renderTagChips(stream.tags)}
     ${stream.description ? `<div class="template-config" style="opacity:0.7;">${escapeHtml(stream.description)}</div>` : ''}`,
   actions: `
     <button class="btn btn-icon btn-secondary" onclick="showTestStreamModal('${stream.id}')" title="${t('streams.test.title')}">${ICON_TEST}</button>
@@ -1261,6 +1297,7 @@ function renderPictureSourcesList(streams) {
       <span class="stream-card-prop" title="${t('templates.engine')}">${getEngineIcon(template.engine_type)} ${template.engine_type.toUpperCase()}</span>
       ${configEntries.length > 0 ? `<span class="stream-card-prop" title="${t('templates.config.show')}">${ICON_WRENCH} ${configEntries.length}</span>` : ''}
     </div>
+    ${renderTagChips(template.tags)}
     ${configEntries.length > 0 ? `
     <div class="template-config-collapse">
       <button type="button" class="template-config-toggle" onclick="this.parentElement.classList.toggle('open')">${t('templates.config.show')}</button>
@@ -1302,7 +1339,8 @@ function renderPictureSourcesList(streams) {
       <div class="template-name">${ICON_TEMPLATE} ${escapeHtml(tmpl.name)}</div>
     </div>
     ${tmpl.description ? `<div class="template-config" style="opacity:0.7;">${escapeHtml(tmpl.description)}</div>` : ''}
-    ${filterChainHtml}`,
+    ${filterChainHtml}
+    ${renderTagChips(tmpl.tags)}`,
   actions: `
     <button class="btn btn-icon btn-secondary" onclick="showTestPPTemplateModal('${tmpl.id}')" title="${t('postprocessing.test.title')}">${ICON_TEST}</button>
     <button class="btn btn-icon btn-secondary" onclick="clonePPTemplate('${tmpl.id}')" title="${t('common.clone')}">${ICON_CLONE}</button>
@@ -1367,6 +1405,7 @@ function renderPictureSourcesList(streams) {
       <div class="template-name">${icon} ${escapeHtml(src.name)}</div>
     </div>
     <div class="stream-card-props">${propsHtml}</div>
+    ${renderTagChips(src.tags)}
     ${src.description ? `<div class="template-config" style="opacity:0.7;">${escapeHtml(src.description)}</div>` : ''}`,
   actions: `
     <button class="btn btn-icon btn-secondary" onclick="testAudioSource('${src.id}')" title="${t('audio_source.test')}">${ICON_TEST}</button>
@@ -1392,6 +1431,7 @@ function renderPictureSourcesList(streams) {
       <span class="stream-card-prop" title="${t('audio_template.engine')}">${ICON_AUDIO_TEMPLATE} ${template.engine_type.toUpperCase()}</span>
       ${configEntries.length > 0 ? `<span class="stream-card-prop" title="${t('audio_template.config.show')}">${ICON_WRENCH} ${configEntries.length}</span>` : ''}
     </div>
+    ${renderTagChips(template.tags)}
     ${configEntries.length > 0 ? `
     <div class="template-config-collapse">
       <button type="button" class="template-config-toggle" onclick="this.parentElement.classList.toggle('open')">${t('audio_template.config.show')}</button>
@@ -1563,6 +1603,12 @@ export async function showAddStreamModal(presetType, cloneData = null) {
   }
   _showStreamModalLoading(false);
+  // Tags
+  if (_streamTagsInput) { _streamTagsInput.destroy(); _streamTagsInput = null; }
+  _streamTagsInput = new TagInput(document.getElementById('stream-tags-container'), { placeholder: t('tags.placeholder') });
+  _streamTagsInput.setValue(cloneData ? (cloneData.tags || []) : []);
   streamModal.snapshot();
 }
@@ -1616,6 +1662,12 @@ export async function editStream(streamId) {
   }
   _showStreamModalLoading(false);
+  // Tags
+  if (_streamTagsInput) { _streamTagsInput.destroy(); _streamTagsInput = null; }
+  _streamTagsInput = new TagInput(document.getElementById('stream-tags-container'), { placeholder: t('tags.placeholder') });
+  _streamTagsInput.setValue(stream.tags || []);
   streamModal.snapshot();
 } catch (error) {
   console.error('Error loading stream:', error);
@@ -1772,7 +1824,7 @@ export async function saveStream() {
   if (!name) { showToast(t('streams.error.required'), 'error'); return; }
-  const payload = { name, description: description || null };
+  const payload = { name, description: description || null, tags: _streamTagsInput ? _streamTagsInput.getValue() : [] };
   if (!streamId) payload.stream_type = streamType;
   if (streamType === 'raw') {
@@ -2429,6 +2481,11 @@ export async function showAddPPTemplateModal(cloneData = null) {
     document.getElementById('pp-template-description').value = cloneData.description || '';
   }
+  // Tags
+  if (_ppTemplateTagsInput) { _ppTemplateTagsInput.destroy(); _ppTemplateTagsInput = null; }
+  _ppTemplateTagsInput = new TagInput(document.getElementById('pp-template-tags-container'), { placeholder: t('tags.placeholder') });
+  _ppTemplateTagsInput.setValue(cloneData ? (cloneData.tags || []) : []);
   ppTemplateModal.open();
   ppTemplateModal.snapshot();
 }
@@ -2455,6 +2512,11 @@ export async function editPPTemplate(templateId) {
   _populateFilterSelect();
   renderModalFilterList();
+  // Tags
+  if (_ppTemplateTagsInput) { _ppTemplateTagsInput.destroy(); _ppTemplateTagsInput = null; }
+  _ppTemplateTagsInput = new TagInput(document.getElementById('pp-template-tags-container'), { placeholder: t('tags.placeholder') });
+  _ppTemplateTagsInput.setValue(tmpl.tags || []);
   ppTemplateModal.open();
   ppTemplateModal.snapshot();
 } catch (error) {
@@ -2471,7 +2533,7 @@ export async function savePPTemplate() {
   if (!name) { showToast(t('postprocessing.error.required'), 'error'); return; }
-  const payload = { name, filters: collectFilters(), description: description || null };
+  const payload = { name, filters: collectFilters(), description: description || null, tags: _ppTemplateTagsInput ? _ppTemplateTagsInput.getValue() : [] };
   try {
     let response;


@@ -9,18 +9,26 @@ import { Modal } from '../core/modal.js';
 import { showToast, showConfirm } from '../core/ui.js';
 import { ICON_CLOCK, ICON_CLONE, ICON_EDIT, ICON_START, ICON_PAUSE } from '../core/icons.js';
 import { wrapCard } from '../core/card-colors.js';
+import { TagInput, renderTagChips } from '../core/tag-input.js';
 import { loadPictureSources } from './streams.js';
 // ── Modal ──
+let _syncClockTagsInput = null;
 class SyncClockModal extends Modal {
   constructor() { super('sync-clock-modal'); }
+  onForceClose() {
+    if (_syncClockTagsInput) { _syncClockTagsInput.destroy(); _syncClockTagsInput = null; }
+  }
   snapshotValues() {
     return {
       name: document.getElementById('sync-clock-name').value,
       speed: document.getElementById('sync-clock-speed').value,
       description: document.getElementById('sync-clock-description').value,
+      tags: JSON.stringify(_syncClockTagsInput ? _syncClockTagsInput.getValue() : []),
     };
   }
 }
@@ -48,6 +56,11 @@ export async function showSyncClockModal(editData) {
     document.getElementById('sync-clock-description').value = '';
   }
+  // Tags
+  if (_syncClockTagsInput) { _syncClockTagsInput.destroy(); _syncClockTagsInput = null; }
+  _syncClockTagsInput = new TagInput(document.getElementById('sync-clock-tags-container'), { placeholder: t('tags.placeholder') });
+  _syncClockTagsInput.setValue(isEdit ? (editData.tags || []) : []);
   syncClockModal.open();
   syncClockModal.snapshot();
 }
@@ -69,7 +82,7 @@ export async function saveSyncClock() {
     return;
   }
-  const payload = { name, speed, description };
+  const payload = { name, speed, description, tags: _syncClockTagsInput ? _syncClockTagsInput.getValue() : [] };
   try {
     const method = id ? 'PUT' : 'POST';
@@ -199,6 +212,7 @@ export function createSyncClockCard(clock) {
       <span class="stream-card-prop">${statusIcon} ${statusLabel}</span>
       <span class="stream-card-prop">${ICON_CLOCK} ${clock.speed}x</span>
     </div>
+    ${renderTagChips(clock.tags)}
     ${clock.description ? `<div class="template-config" style="opacity:0.7;">${escapeHtml(clock.description)}</div>` : ''}`,
   actions: `
     <button class="btn btn-icon btn-secondary" onclick="event.stopPropagation(); ${toggleAction}" title="${toggleTitle}">${clock.is_running ? ICON_PAUSE : ICON_START}</button>
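Note that every `snapshotValues()` in this diff stores the tags as `JSON.stringify(...)` rather than the raw array. A snapshot diff that compares fields with `!==` only works for primitives, so serializing the array is what lets tag edits participate in the modal's dirty check. A small sketch of that comparison; the `isDirty` helper is illustrative, as the real `Modal` base class is not shown in this diff:

```javascript
// Illustrative sketch: compare two flat snapshot objects field by field.
// Arrays must already be serialized to strings (as snapshotValues() does),
// otherwise !== would report two equal arrays as different.
function isDirty(before, after) {
  const keys = new Set([...Object.keys(before), ...Object.keys(after)]);
  for (const k of keys) {
    if (before[k] !== after[k]) return true;
  }
  return false;
}
```

This is also why the tags field appears in each `snapshotValues()` body: without it, adding or removing a tag would close the modal silently with no unsaved-changes warning.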


@@ -10,6 +10,7 @@ import {
   ledPreviewWebSockets,
   _cachedValueSources, valueSourcesCache,
   streamsCache, audioSourcesCache, syncClocksCache,
+  colorStripSourcesCache, devicesCache, outputTargetsCache, patternTemplatesCache,
 } from '../core/state.js';
 import { API_BASE, getHeaders, fetchWithAuth, escapeHtml, isOpenrgbDevice } from '../core/api.js';
 import { t } from '../core/i18n.js';
@@ -28,6 +29,7 @@ import {
 } from '../core/icons.js';
 import { EntitySelect } from '../core/entity-palette.js';
 import { wrapCard } from '../core/card-colors.js';
+import { TagInput, renderTagChips } from '../core/tag-input.js';
 import { CardSection } from '../core/card-sections.js';
 import { updateSubTabHash, updateTabBadge } from './tabs.js';
@@ -140,6 +142,7 @@ function _updateSubTabCounts(subTabs) {
 // --- Editor state ---
 let _editorCssSources = []; // populated when editor opens
+let _targetTagsInput = null;
 class TargetEditorModal extends Modal {
   constructor() {
@@ -157,6 +160,7 @@ class TargetEditorModal extends Modal {
       fps: document.getElementById('target-editor-fps').value,
       keepalive_interval: document.getElementById('target-editor-keepalive-interval').value,
       adaptive_fps: document.getElementById('target-editor-adaptive-fps').checked,
+      tags: JSON.stringify(_targetTagsInput ? _targetTagsInput.getValue() : []),
     };
   }
 }
@@ -311,14 +315,12 @@ function _ensureTargetEntitySelects() {
export async function showTargetEditor(targetId = null, cloneData = null) { export async function showTargetEditor(targetId = null, cloneData = null) {
try { try {
// Load devices, CSS sources, and value sources for dropdowns // Load devices, CSS sources, and value sources for dropdowns
const [devicesResp, cssResp] = await Promise.all([ const [devices, cssSources] = await Promise.all([
fetch(`${API_BASE}/devices`, { headers: getHeaders() }), devicesCache.fetch().catch(() => []),
fetchWithAuth('/color-strip-sources'), colorStripSourcesCache.fetch().catch(() => []),
valueSourcesCache.fetch(), valueSourcesCache.fetch(),
]); ]);
const devices = devicesResp.ok ? (await devicesResp.json()).devices || [] : [];
const cssSources = cssResp.ok ? (await cssResp.json()).sources || [] : [];
set_targetEditorDevices(devices); set_targetEditorDevices(devices);
_editorCssSources = cssSources; _editorCssSources = cssSources;
@@ -335,11 +337,13 @@ export async function showTargetEditor(targetId = null, cloneData = null) {
     deviceSelect.appendChild(opt);
   });
+  let _editorTags = [];
   if (targetId) {
     // Editing existing target
     const resp = await fetch(`${API_BASE}/output-targets/${targetId}`, { headers: getHeaders() });
     if (!resp.ok) throw new Error('Failed to load target');
     const target = await resp.json();
+    _editorTags = target.tags || [];
     document.getElementById('target-editor-id').value = target.id;
     document.getElementById('target-editor-name').value = target.name;
@@ -362,6 +366,7 @@ export async function showTargetEditor(targetId = null, cloneData = null) {
     _populateBrightnessVsDropdown(target.brightness_value_source_id || '');
   } else if (cloneData) {
     // Cloning — create mode but pre-filled from clone data
+    _editorTags = cloneData.tags || [];
     document.getElementById('target-editor-id').value = '';
     document.getElementById('target-editor-name').value = (cloneData.name || '') + ' (Copy)';
     deviceSelect.value = cloneData.device_id || '';
@@ -420,6 +425,13 @@ export async function showTargetEditor(targetId = null, cloneData = null) {
   _updateFpsRecommendation();
   _updateBrightnessThresholdVisibility();
+  // Tags
+  if (_targetTagsInput) _targetTagsInput.destroy();
+  _targetTagsInput = new TagInput(document.getElementById('target-tags-container'), {
+    placeholder: window.t ? t('tags.placeholder') : 'Add tag...'
+  });
+  _targetTagsInput.setValue(_editorTags);
   targetEditorModal.snapshot();
   targetEditorModal.open();
@@ -440,6 +452,7 @@ export async function closeTargetEditorModal() {
 }
 export function forceCloseTargetEditorModal() {
+  if (_targetTagsInput) { _targetTagsInput.destroy(); _targetTagsInput = null; }
   targetEditorModal.forceClose();
 }
@@ -473,6 +486,7 @@ export async function saveTargetEditor() {
     keepalive_interval: standbyInterval,
     adaptive_fps: adaptiveFps,
     protocol,
+    tags: _targetTagsInput ? _targetTagsInput.getValue() : [],
   };
   try {
@@ -496,6 +510,7 @@ export async function saveTargetEditor() {
   }
   showToast(targetId ? t('targets.updated') : t('targets.created'), 'success');
+  outputTargetsCache.invalidate();
   targetEditorModal.forceClose();
   await loadTargetsTab();
 } catch (error) {
@@ -546,41 +561,26 @@ export async function loadTargetsTab() {
   if (!csDevices.isMounted()) setTabRefreshing('targets-panel-content', true);
   try {
-    // Fetch devices, targets, CSS sources, pattern templates in parallel;
-    // use DataCache for picture sources, audio sources, value sources, sync clocks
-    const [devicesResp, targetsResp, cssResp, patResp, psArr, valueSrcArr, asSrcArr] = await Promise.all([
-      fetchWithAuth('/devices'),
-      fetchWithAuth('/output-targets'),
-      fetchWithAuth('/color-strip-sources').catch(() => null),
-      fetchWithAuth('/pattern-templates').catch(() => null),
+    // Fetch all entities via DataCache
+    const [devices, targets, cssArr, patternTemplates, psArr, valueSrcArr, asSrcArr] = await Promise.all([
+      devicesCache.fetch().catch(() => []),
+      outputTargetsCache.fetch().catch(() => []),
+      colorStripSourcesCache.fetch().catch(() => []),
+      patternTemplatesCache.fetch().catch(() => []),
       streamsCache.fetch().catch(() => []),
      valueSourcesCache.fetch().catch(() => []),
       audioSourcesCache.fetch().catch(() => []),
       syncClocksCache.fetch().catch(() => []),
     ]);
-    const devicesData = await devicesResp.json();
-    const devices = devicesData.devices || [];
-    const targetsData = await targetsResp.json();
-    const targets = targetsData.targets || [];
     let colorStripSourceMap = {};
-    if (cssResp && cssResp.ok) {
-      const cssData = await cssResp.json();
-      (cssData.sources || []).forEach(s => { colorStripSourceMap[s.id] = s; });
-    }
+    cssArr.forEach(s => { colorStripSourceMap[s.id] = s; });
     let pictureSourceMap = {};
     psArr.forEach(s => { pictureSourceMap[s.id] = s; });
-    let patternTemplates = [];
     let patternTemplateMap = {};
-    if (patResp && patResp.ok) {
-      const patData = await patResp.json();
-      patternTemplates = patData.templates || [];
-      patternTemplates.forEach(pt => { patternTemplateMap[pt.id] = pt; });
-    }
+    patternTemplates.forEach(pt => { patternTemplateMap[pt.id] = pt; });
     let valueSourceMap = {};
     valueSrcArr.forEach(s => { valueSourceMap[s.id] = s; });
@@ -959,6 +959,7 @@ export function createTargetCard(target, deviceMap, colorStripSourceMap, valueSo
   ${bvs ? `<span class="stream-card-prop stream-card-prop-full stream-card-link" title="${t('targets.brightness_vs')}" onclick="event.stopPropagation(); navigateToCard('streams','value','value-sources','data-id','${bvsId}')">${getValueSourceIcon(bvs.source_type)} ${escapeHtml(bvs.name)}</span>` : ''}
   ${target.min_brightness_threshold > 0 ? `<span class="stream-card-prop" title="${t('targets.min_brightness_threshold')}">${ICON_SUN_DIM} &lt;${target.min_brightness_threshold} → off</span>` : ''}
 </div>
+${renderTagChips(target.tags)}
 <div class="card-content">
   ${isProcessing ? `
     <div class="metrics-grid">
@@ -1082,15 +1083,14 @@ export async function stopAllKCTargets() {
 async function _stopAllByType(targetType) {
   try {
-    const [targetsResp, statesResp] = await Promise.all([
-      fetchWithAuth('/output-targets'),
+    const [allTargets, statesResp] = await Promise.all([
+      outputTargetsCache.fetch().catch(() => []),
       fetchWithAuth('/output-targets/batch/states'),
     ]);
-    const data = await targetsResp.json();
     const statesData = statesResp.ok ? await statesResp.json() : { states: {} };
     const states = statesData.states || {};
     const typeMatch = targetType === 'led' ? t => t.target_type === 'led' || t.target_type === 'wled' : t => t.target_type === targetType;
-    const running = (data.targets || []).filter(t => typeMatch(t) && states[t.id]?.processing);
+    const running = allTargets.filter(t => typeMatch(t) && states[t.id]?.processing);
     if (!running.length) {
       showToast(t('targets.stop_all.none_running'), 'info');
       return;
@@ -1156,6 +1156,7 @@ export async function deleteTarget(targetId) {
   });
   if (response.ok) {
     showToast(t('targets.deleted'), 'success');
+    outputTargetsCache.invalidate();
   } else {
     const error = await response.json();
     showToast(error.detail || t('target.error.delete_failed'), 'error');


@@ -22,6 +22,7 @@ import {
   ICON_MUSIC, ICON_TRENDING_UP, ICON_MAP_PIN, ICON_MONITOR, ICON_REFRESH,
 } from '../core/icons.js';
 import { wrapCard } from '../core/card-colors.js';
+import { TagInput, renderTagChips } from '../core/tag-input.js';
 import { IconSelect } from '../core/icon-select.js';
 import { EntitySelect } from '../core/entity-palette.js';
 import { loadPictureSources } from './streams.js';
@@ -31,10 +32,15 @@ export { getValueSourceIcon };
 // ── EntitySelect instances for value source editor ──
 let _vsAudioSourceEntitySelect = null;
 let _vsPictureSourceEntitySelect = null;
+let _vsTagsInput = null;
 class ValueSourceModal extends Modal {
   constructor() { super('value-source-modal'); }
+  onForceClose() {
+    if (_vsTagsInput) { _vsTagsInput.destroy(); _vsTagsInput = null; }
+  }
   snapshotValues() {
     const type = document.getElementById('value-source-type').value;
     return {
@@ -58,6 +64,7 @@ class ValueSourceModal {
       sceneSensitivity: document.getElementById('value-source-scene-sensitivity').value,
       sceneSmoothing: document.getElementById('value-source-scene-smoothing').value,
       schedule: JSON.stringify(_getScheduleFromUI()),
+      tags: JSON.stringify(_vsTagsInput ? _vsTagsInput.getValue() : []),
     };
   }
 }
@@ -241,6 +248,11 @@ export async function showValueSourceModal(editData) {
   document.getElementById('value-source-mode').onchange = () => _autoGenerateVSName();
   document.getElementById('value-source-picture-source').onchange = () => _autoGenerateVSName();
+  // Tags
+  if (_vsTagsInput) { _vsTagsInput.destroy(); _vsTagsInput = null; }
+  _vsTagsInput = new TagInput(document.getElementById('value-source-tags-container'), { placeholder: t('tags.placeholder') });
+  _vsTagsInput.setValue(editData ? (editData.tags || []) : []);
   valueSourceModal.open();
   valueSourceModal.snapshot();
 }
@@ -293,7 +305,7 @@ export async function saveValueSource() {
     return;
   }
-  const payload = { name, source_type: sourceType, description };
+  const payload = { name, source_type: sourceType, description, tags: _vsTagsInput ? _vsTagsInput.getValue() : [] };
   if (sourceType === 'static') {
     payload.value = parseFloat(document.getElementById('value-source-value').value);
@@ -648,6 +660,7 @@ export function createValueSourceCard(src) {
   <div class="template-name">${icon} ${escapeHtml(src.name)}</div>
 </div>
 <div class="stream-card-props">${propsHtml}</div>
+${renderTagChips(src.tags)}
 ${src.description ? `<div class="template-config" style="opacity:0.7;">${escapeHtml(src.description)}</div>` : ''}`,
 actions: `
 <button class="btn btn-icon btn-secondary" onclick="testValueSource('${src.id}')" title="${t('value_source.test')}">${ICON_TEST}</button>
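The `TagInput` instances wired up above pull autocomplete suggestions from the new tags endpoint, which the commit message describes as aggregating unique tags across all stores. That route handler is outside this hunk; a minimal sketch of the aggregation step, assuming each store exposes `get_all()` and each entity carries a `tags` list (both visible elsewhere in this diff), might look like:

```python
from typing import Iterable, List


def aggregate_tags(stores: Iterable[object]) -> List[str]:
    """Collect the unique tags across all entity stores, sorted for stable autocomplete."""
    seen = set()
    for store in stores:
        for entity in store.get_all():
            # Tolerate entities persisted before the tags field existed.
            seen.update(getattr(entity, "tags", None) or [])
    return sorted(seen)
```

The chip input would then filter this flat list client-side as the user types.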


@@ -332,6 +332,9 @@
 "palette.search": "Search…",
 "section.filter.placeholder": "Filter...",
 "section.filter.reset": "Clear filter",
+"tags.label": "Tags",
+"tags.hint": "Assign tags for grouping and filtering cards",
+"tags.placeholder": "Add tag...",
 "section.expand_all": "Expand all sections",
 "section.collapse_all": "Collapse all sections",
 "streams.title": "Sources",


@@ -332,6 +332,9 @@
 "palette.search": "Поиск…",
 "section.filter.placeholder": "Фильтр...",
 "section.filter.reset": "Очистить фильтр",
+"tags.label": "Теги",
+"tags.hint": "Назначьте теги для группировки и фильтрации карточек",
+"tags.placeholder": "Добавить тег...",
 "section.expand_all": "Развернуть все секции",
 "section.collapse_all": "Свернуть все секции",
 "streams.title": "Источники",


@@ -332,6 +332,9 @@
 "palette.search": "搜索…",
 "section.filter.placeholder": "筛选...",
 "section.filter.reset": "清除筛选",
+"tags.label": "标签",
+"tags.hint": "为卡片分配标签以进行分组和筛选",
+"tags.placeholder": "添加标签...",
 "section.expand_all": "全部展开",
 "section.collapse_all": "全部折叠",
 "streams.title": "源",


@@ -5,9 +5,9 @@ An AudioSource represents a reusable audio input configuration:
 MonoAudioSource — extracts a single channel from a multichannel source
 """
-from dataclasses import dataclass
-from datetime import datetime
-from typing import Optional
+from dataclasses import dataclass, field
+from datetime import datetime, timezone
+from typing import List, Optional
 @dataclass
@@ -20,6 +20,7 @@ class AudioSource:
     created_at: datetime
     updated_at: datetime
     description: Optional[str] = None
+    tags: List[str] = field(default_factory=list)
     def to_dict(self) -> dict:
         """Convert source to dictionary. Subclasses extend this."""
@@ -30,6 +31,7 @@ class AudioSource:
         "created_at": self.created_at.isoformat(),
         "updated_at": self.updated_at.isoformat(),
         "description": self.description,
+        "tags": self.tags,
         # Subclass fields default to None for forward compat
         "device_index": None,
         "is_loopback": None,
@@ -45,26 +47,27 @@ class AudioSource:
     sid: str = data["id"]
     name: str = data["name"]
     description: str | None = data.get("description")
+    tags: list = data.get("tags", [])
     raw_created = data.get("created_at")
     created_at: datetime = (
         datetime.fromisoformat(raw_created)
         if isinstance(raw_created, str)
         else raw_created if isinstance(raw_created, datetime)
-        else datetime.utcnow()
+        else datetime.now(timezone.utc)
     )
     raw_updated = data.get("updated_at")
     updated_at: datetime = (
         datetime.fromisoformat(raw_updated)
         if isinstance(raw_updated, str)
         else raw_updated if isinstance(raw_updated, datetime)
-        else datetime.utcnow()
+        else datetime.now(timezone.utc)
     )
     if source_type == "mono":
         return MonoAudioSource(
             id=sid, name=name, source_type="mono",
-            created_at=created_at, updated_at=updated_at, description=description,
+            created_at=created_at, updated_at=updated_at, description=description, tags=tags,
             audio_source_id=data.get("audio_source_id") or "",
             channel=data.get("channel") or "mono",
         )
@@ -72,7 +75,7 @@ class AudioSource:
     # Default: multichannel
     return MultichannelAudioSource(
         id=sid, name=name, source_type="multichannel",
-        created_at=created_at, updated_at=updated_at, description=description,
+        created_at=created_at, updated_at=updated_at, description=description, tags=tags,
         device_index=int(data.get("device_index", -1)),
         is_loopback=bool(data.get("is_loopback", True)),
         audio_template_id=data.get("audio_template_id"),


@@ -1,87 +1,36 @@
 """Audio source storage using JSON files."""
-import json
 import uuid
-from datetime import datetime
-from pathlib import Path
-from typing import Dict, List, Optional, Tuple
+from datetime import datetime, timezone
+from typing import List, Optional, Tuple
 from wled_controller.storage.audio_source import (
     AudioSource,
     MonoAudioSource,
     MultichannelAudioSource,
 )
-from wled_controller.utils import atomic_write_json, get_logger
+from wled_controller.storage.base_store import BaseJsonStore
+from wled_controller.utils import get_logger
 logger = get_logger(__name__)
-class AudioSourceStore:
+class AudioSourceStore(BaseJsonStore[AudioSource]):
     """Persistent storage for audio sources."""
+    _json_key = "audio_sources"
+    _entity_name = "Audio source"
     def __init__(self, file_path: str):
-        self.file_path = Path(file_path)
-        self._sources: Dict[str, AudioSource] = {}
-        self._load()
+        super().__init__(file_path, AudioSource.from_dict)
-    def _load(self) -> None:
-        if not self.file_path.exists():
-            logger.info("Audio source store file not found — starting empty")
-            return
-        try:
-            with open(self.file_path, "r", encoding="utf-8") as f:
-                data = json.load(f)
-            sources_data = data.get("audio_sources", {})
-            loaded = 0
-            for source_id, source_dict in sources_data.items():
-                try:
-                    source = AudioSource.from_dict(source_dict)
-                    self._sources[source_id] = source
-                    loaded += 1
-                except Exception as e:
-                    logger.error(
-                        f"Failed to load audio source {source_id}: {e}",
-                        exc_info=True,
-                    )
-            if loaded > 0:
-                logger.info(f"Loaded {loaded} audio sources from storage")
-        except Exception as e:
-            logger.error(f"Failed to load audio sources from {self.file_path}: {e}")
-            raise
-        logger.info(f"Audio source store initialized with {len(self._sources)} sources")
-    def _save(self) -> None:
-        try:
-            data = {
-                "version": "1.0.0",
-                "audio_sources": {
-                    sid: source.to_dict()
-                    for sid, source in self._sources.items()
-                },
-            }
-            atomic_write_json(self.file_path, data)
-        except Exception as e:
-            logger.error(f"Failed to save audio sources to {self.file_path}: {e}")
-            raise
-    # ── CRUD ─────────────────────────────────────────────────────────
-    def get_all_sources(self) -> List[AudioSource]:
-        return list(self._sources.values())
+    # Backward-compatible aliases
+    get_all_sources = BaseJsonStore.get_all
+    get_source = BaseJsonStore.get
     def get_mono_sources(self) -> List[MonoAudioSource]:
         """Return only mono audio sources (for CSS dropdown)."""
-        return [s for s in self._sources.values() if isinstance(s, MonoAudioSource)]
+        return [s for s in self._items.values() if isinstance(s, MonoAudioSource)]
-    def get_source(self, source_id: str) -> AudioSource:
-        if source_id not in self._sources:
-            raise ValueError(f"Audio source not found: {source_id}")
-        return self._sources[source_id]
     def create_source(
         self,
@@ -93,25 +42,21 @@ class AudioSourceStore:
         channel: Optional[str] = None,
         description: Optional[str] = None,
         audio_template_id: Optional[str] = None,
+        tags: Optional[List[str]] = None,
     ) -> AudioSource:
-        if not name or not name.strip():
-            raise ValueError("Name is required")
+        self._check_name_unique(name)
         if source_type not in ("multichannel", "mono"):
             raise ValueError(f"Invalid source type: {source_type}")
-        for source in self._sources.values():
-            if source.name == name:
-                raise ValueError(f"Audio source with name '{name}' already exists")
         sid = f"as_{uuid.uuid4().hex[:8]}"
-        now = datetime.utcnow()
+        now = datetime.now(timezone.utc)
         if source_type == "mono":
             if not audio_source_id:
                 raise ValueError("Mono sources require audio_source_id")
             # Validate parent exists and is multichannel
-            parent = self._sources.get(audio_source_id)
+            parent = self._items.get(audio_source_id)
             if not parent:
                 raise ValueError(f"Parent audio source not found: {audio_source_id}")
             if not isinstance(parent, MultichannelAudioSource):
@@ -119,20 +64,20 @@ class AudioSourceStore:
             source = MonoAudioSource(
                 id=sid, name=name, source_type="mono",
-                created_at=now, updated_at=now, description=description,
+                created_at=now, updated_at=now, description=description, tags=tags or [],
                 audio_source_id=audio_source_id,
                 channel=channel or "mono",
             )
         else:
             source = MultichannelAudioSource(
                 id=sid, name=name, source_type="multichannel",
-                created_at=now, updated_at=now, description=description,
+                created_at=now, updated_at=now, description=description, tags=tags or [],
                 device_index=device_index if device_index is not None else -1,
                 is_loopback=bool(is_loopback) if is_loopback is not None else True,
                 audio_template_id=audio_template_id,
             )
-        self._sources[sid] = source
+        self._items[sid] = source
         self._save()
         logger.info(f"Created audio source: {name} ({sid}, type={source_type})")
@@ -148,20 +93,18 @@ class AudioSourceStore:
         channel: Optional[str] = None,
         description: Optional[str] = None,
         audio_template_id: Optional[str] = None,
+        tags: Optional[List[str]] = None,
     ) -> AudioSource:
-        if source_id not in self._sources:
-            raise ValueError(f"Audio source not found: {source_id}")
-        source = self._sources[source_id]
+        source = self.get(source_id)
         if name is not None:
-            for other in self._sources.values():
-                if other.id != source_id and other.name == name:
-                    raise ValueError(f"Audio source with name '{name}' already exists")
+            self._check_name_unique(name, exclude_id=source_id)
             source.name = name
         if description is not None:
             source.description = description
+        if tags is not None:
+            source.tags = tags
         if isinstance(source, MultichannelAudioSource):
             if device_index is not None:
@@ -172,7 +115,7 @@ class AudioSourceStore:
                 source.audio_template_id = audio_template_id
         elif isinstance(source, MonoAudioSource):
             if audio_source_id is not None:
-                parent = self._sources.get(audio_source_id)
+                parent = self._items.get(audio_source_id)
                 if not parent:
                     raise ValueError(f"Parent audio source not found: {audio_source_id}")
                 if not isinstance(parent, MultichannelAudioSource):
@@ -181,27 +124,27 @@ class AudioSourceStore:
         if channel is not None:
             source.channel = channel
-        source.updated_at = datetime.utcnow()
+        source.updated_at = datetime.now(timezone.utc)
         self._save()
         logger.info(f"Updated audio source: {source_id}")
         return source
     def delete_source(self, source_id: str) -> None:
-        if source_id not in self._sources:
-            raise ValueError(f"Audio source not found: {source_id}")
-        source = self._sources[source_id]
+        if source_id not in self._items:
+            raise ValueError(f"{self._entity_name} not found: {source_id}")
+        source = self._items[source_id]
         # Prevent deleting multichannel sources referenced by mono sources
         if isinstance(source, MultichannelAudioSource):
-            for other in self._sources.values():
+            for other in self._items.values():
                 if isinstance(other, MonoAudioSource) and other.audio_source_id == source_id:
                     raise ValueError(
                         f"Cannot delete '{source.name}': referenced by mono source '{other.name}'"
                     )
-        del self._sources[source_id]
+        del self._items[source_id]
         self._save()
         logger.info(f"Deleted audio source: {source_id}")
@@ -231,4 +174,3 @@ class AudioSourceStore:
         return parent.device_index, parent.is_loopback, source.channel, parent.audio_template_id
     raise ValueError(f"Audio source {source_id} is not a valid audio source")
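Both stores in this commit now inherit from `BaseJsonStore`, whose definition falls outside this diff. Inferring its interface from the call sites above (`_json_key`, `_entity_name`, `_items`, `get_all`, `get`, `_check_name_unique`, and a `from_dict` factory passed to `__init__`), a minimal sketch could look like the following; the actual implementation may differ, e.g. in its atomic-write helper and error logging:

```python
import json
from pathlib import Path
from typing import Callable, Dict, Generic, List, Optional, TypeVar

T = TypeVar("T")


class BaseJsonStore(Generic[T]):
    """Shared JSON-file persistence for entity stores (illustrative sketch)."""

    _json_key = "items"       # top-level key in the JSON file
    _entity_name = "Item"     # used in error messages

    def __init__(self, file_path: str, from_dict: Callable[[dict], T]):
        self.file_path = Path(file_path)
        self._from_dict = from_dict
        self._items: Dict[str, T] = {}
        self._load()

    def _load(self) -> None:
        if not self.file_path.exists():
            return
        data = json.loads(self.file_path.read_text(encoding="utf-8"))
        for item_id, item_dict in data.get(self._json_key, {}).items():
            self._items[item_id] = self._from_dict(item_dict)

    def _save(self) -> None:
        # Write a sibling temp file, then replace, so a crash never truncates the store.
        data = {self._json_key: {iid: item.to_dict() for iid, item in self._items.items()}}
        tmp = self.file_path.with_suffix(".tmp")
        tmp.write_text(json.dumps(data, indent=2), encoding="utf-8")
        tmp.replace(self.file_path)

    def get_all(self) -> List[T]:
        return list(self._items.values())

    def get(self, item_id: str) -> T:
        if item_id not in self._items:
            raise ValueError(f"{self._entity_name} not found: {item_id}")
        return self._items[item_id]

    def _check_name_unique(self, name: str, exclude_id: Optional[str] = None) -> None:
        if not name or not name.strip():
            raise ValueError("Name is required")
        for item in self._items.values():
            if getattr(item, "id", None) != exclude_id and getattr(item, "name", None) == name:
                raise ValueError(f"{self._entity_name} with name '{name}' already exists")
```

Under this shape, the alias assignments in the diff (`get_all_sources = BaseJsonStore.get_all`) keep older call sites working without wrapper methods.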


@@ -1,8 +1,8 @@
 """Audio capture template data model."""
-from dataclasses import dataclass
-from datetime import datetime
-from typing import Any, Dict, Optional
+from dataclasses import dataclass, field
+from datetime import datetime, timezone
+from typing import Any, Dict, List, Optional
 @dataclass
@@ -16,6 +16,7 @@ class AudioCaptureTemplate:
     created_at: datetime
     updated_at: datetime
     description: Optional[str] = None
+    tags: List[str] = field(default_factory=list)
     def to_dict(self) -> dict:
         """Convert template to dictionary."""
@@ -27,6 +28,7 @@ class AudioCaptureTemplate:
         "created_at": self.created_at.isoformat(),
         "updated_at": self.updated_at.isoformat(),
         "description": self.description,
+        "tags": self.tags,
     }
     @classmethod
@@ -39,9 +41,10 @@ class AudioCaptureTemplate:
         engine_config=data.get("engine_config", {}),
         created_at=datetime.fromisoformat(data["created_at"])
         if isinstance(data.get("created_at"), str)
-        else data.get("created_at", datetime.utcnow()),
+        else data.get("created_at", datetime.now(timezone.utc)),
         updated_at=datetime.fromisoformat(data["updated_at"])
         if isinstance(data.get("updated_at"), str)
-        else data.get("updated_at", datetime.utcnow()),
+        else data.get("updated_at", datetime.now(timezone.utc)),
         description=data.get("description"),
+        tags=data.get("tags", []),
     )


@@ -1,19 +1,18 @@
 """Audio template storage using JSON files."""
-import json
 import uuid
-from datetime import datetime
+from datetime import datetime, timezone
-from pathlib import Path
 from typing import Dict, List, Optional
 from wled_controller.core.audio.factory import AudioEngineRegistry
 from wled_controller.storage.audio_template import AudioCaptureTemplate
-from wled_controller.utils import atomic_write_json, get_logger
+from wled_controller.storage.base_store import BaseJsonStore
+from wled_controller.utils import get_logger
 logger = get_logger(__name__)
-class AudioTemplateStore:
+class AudioTemplateStore(BaseJsonStore[AudioCaptureTemplate]):
     """Storage for audio capture templates.
     All templates are persisted to the JSON file.
@@ -21,15 +20,20 @@ class AudioTemplateStore:
     highest-priority available engine.
     """
+    _json_key = "templates"
+    _entity_name = "Audio capture template"
     def __init__(self, file_path: str):
-        self.file_path = Path(file_path)
-        self._templates: Dict[str, AudioCaptureTemplate] = {}
-        self._load()
+        super().__init__(file_path, AudioCaptureTemplate.from_dict)
         self._ensure_initial_template()
+    # Backward-compatible aliases
+    get_all_templates = BaseJsonStore.get_all
+    get_template = BaseJsonStore.get
     def _ensure_initial_template(self) -> None:
         """Auto-create a template if none exist, using the best available engine."""
-        if self._templates:
+        if self._items:
             return
         best_engine = AudioEngineRegistry.get_best_available_engine()
@@ -39,7 +43,7 @@ class AudioTemplateStore:
         engine_class = AudioEngineRegistry.get_engine(best_engine)
         default_config = engine_class.get_default_config()
-        now = datetime.utcnow()
+        now = datetime.now(timezone.utc)
         template_id = f"atpl_{uuid.uuid4().hex[:8]}"
         template = AudioCaptureTemplate(
@@ -52,71 +56,17 @@ class AudioTemplateStore:
             description=f"Default audio template using {best_engine.upper()} engine",
         )
-        self._templates[template_id] = template
+        self._items[template_id] = template
         self._save()
         logger.info(
             f"Auto-created initial audio template: {template.name} "
             f"({template_id}, engine={best_engine})"
         )
-    def _load(self) -> None:
-        """Load templates from file."""
-        if not self.file_path.exists():
-            return
-        try:
-            with open(self.file_path, "r", encoding="utf-8") as f:
-                data = json.load(f)
-            templates_data = data.get("templates", {})
-            loaded = 0
-            for template_id, template_dict in templates_data.items():
-                try:
-                    template = AudioCaptureTemplate.from_dict(template_dict)
-                    self._templates[template_id] = template
-                    loaded += 1
-                except Exception as e:
-                    logger.error(
-                        f"Failed to load audio template {template_id}: {e}",
-                        exc_info=True,
-                    )
-            if loaded > 0:
-                logger.info(f"Loaded {loaded} audio templates from storage")
-        except Exception as e:
-            logger.error(f"Failed to load audio templates from {self.file_path}: {e}")
-            raise
-        logger.info(f"Audio template store initialized with {len(self._templates)} templates")
-    def _save(self) -> None:
-        """Save all templates to file."""
-        try:
-            data = {
-                "version": "1.0.0",
-                "templates": {
-                    template_id: template.to_dict()
-                    for template_id, template in self._templates.items()
-                },
-            }
-            atomic_write_json(self.file_path, data)
-        except Exception as e:
-            logger.error(f"Failed to save audio templates to {self.file_path}: {e}")
-            raise
-    def get_all_templates(self) -> List[AudioCaptureTemplate]:
-        return list(self._templates.values())
-    def get_template(self, template_id: str) -> AudioCaptureTemplate:
-        if template_id not in self._templates:
-            raise ValueError(f"Audio template not found: {template_id}")
-        return self._templates[template_id]
     def get_default_template_id(self) -> Optional[str]:
         """Return the ID of the first template, or None if none exist."""
-        if self._templates:
-            return next(iter(self._templates))
+        if self._items:
+            return next(iter(self._items))
         return None
     def create_template(
@@ -125,13 +75,12 @@ class AudioTemplateStore:
         engine_type: str,
         engine_config: Dict[str, any],
         description: Optional[str] = None,
+        tags: Optional[List[str]] = None,
     ) -> AudioCaptureTemplate:
-        for template in self._templates.values():
-            if template.name == name:
-                raise ValueError(f"Audio template with name '{name}' already exists")
+        self._check_name_unique(name)
         template_id = f"atpl_{uuid.uuid4().hex[:8]}"
-        now = datetime.utcnow()
+        now = datetime.now(timezone.utc)
         template = AudioCaptureTemplate(
             id=template_id,
             name=name,
@@ -140,9 +89,10 @@ class AudioTemplateStore:
             created_at=now,
             updated_at=now,
             description=description,
+            tags=tags or [],
         )
-        self._templates[template_id] = template
+        self._items[template_id] = template
         self._save()
         logger.info(f"Created audio template: {name} ({template_id})")
         return template
@@ -154,16 +104,12 @@ class AudioTemplateStore:
         engine_type: Optional[str] = None,
         engine_config: Optional[Dict[str, any]] = None,
         description: Optional[str] = None,
+        tags: Optional[List[str]] = None,
     ) -> AudioCaptureTemplate:
-        if template_id not in self._templates:
-            raise ValueError(f"Audio template not found: {template_id}")
-        template = self._templates[template_id]
+        template = self.get(template_id)
         if name is not None:
-            for tid, t in self._templates.items():
-                if tid != template_id and t.name == name:
-                    raise ValueError(f"Audio template with name '{name}' already exists")
+            self._check_name_unique(name, exclude_id=template_id)
             template.name = name
         if engine_type is not None:
             template.engine_type = engine_type
@@ -171,8 +117,10 @@ class AudioTemplateStore:
             template.engine_config = engine_config
         if description is not None:
             template.description = description
+        if tags is not None:
+            template.tags = tags
-        template.updated_at = datetime.utcnow()
+        template.updated_at = datetime.now(timezone.utc)
         self._save()
         logger.info(f"Updated audio template: {template_id}")
         return template
@@ -187,8 +135,8 @@ class AudioTemplateStore:
        Raises:
            ValueError: If template not found or still referenced
        """
-        if template_id not in self._templates:
-            raise ValueError(f"Audio template not found: {template_id}")
+        if template_id not in self._items:
+            raise ValueError(f"{self._entity_name} not found: {template_id}")
        # Check if any multichannel audio source references this template
        if audio_source_store is not None:
@@ -203,6 +151,6 @@ class AudioTemplateStore:
                f"referenced by audio source '{source.name}' ({source.id})"
            )
-        del self._templates[template_id]
+        del self._items[template_id]
        self._save()
        logger.info(f"Deleted audio template: {template_id}")


@@ -1,7 +1,7 @@
 """Automation and Condition data models."""
 from dataclasses import dataclass, field
-from datetime import datetime
+from datetime import datetime, timezone
 from typing import List, Optional
@@ -204,6 +204,7 @@ class Automation:
     deactivation_scene_preset_id: Optional[str]  # scene for fallback_scene mode
     created_at: datetime
     updated_at: datetime
+    tags: List[str] = field(default_factory=list)
     def to_dict(self) -> dict:
         return {
@@ -215,6 +216,7 @@ class Automation:
             "scene_preset_id": self.scene_preset_id,
             "deactivation_mode": self.deactivation_mode,
             "deactivation_scene_preset_id": self.deactivation_scene_preset_id,
+            "tags": self.tags,
             "created_at": self.created_at.isoformat(),
             "updated_at": self.updated_at.isoformat(),
         }
@@ -237,6 +239,7 @@ class Automation:
             scene_preset_id=data.get("scene_preset_id"),
             deactivation_mode=data.get("deactivation_mode", "none"),
             deactivation_scene_preset_id=data.get("deactivation_scene_preset_id"),
+            tags=data.get("tags", []),
-            created_at=datetime.fromisoformat(data.get("created_at", datetime.utcnow().isoformat())),
-            updated_at=datetime.fromisoformat(data.get("updated_at", datetime.utcnow().isoformat())),
+            created_at=datetime.fromisoformat(data.get("created_at", datetime.now(timezone.utc).isoformat())),
+            updated_at=datetime.fromisoformat(data.get("updated_at", datetime.now(timezone.utc).isoformat())),
         )


@@ -1,72 +1,27 @@
 """Automation storage using JSON files."""
-import json
 import uuid
-from datetime import datetime
+from datetime import datetime, timezone
-from pathlib import Path
-from typing import Dict, List, Optional
+from typing import List, Optional
 from wled_controller.storage.automation import Automation, Condition
-from wled_controller.utils import atomic_write_json, get_logger
+from wled_controller.storage.base_store import BaseJsonStore
+from wled_controller.utils import get_logger
 logger = get_logger(__name__)
-class AutomationStore:
-    """Persistent storage for automations."""
+class AutomationStore(BaseJsonStore[Automation]):
+    _json_key = "automations"
+    _entity_name = "Automation"
     def __init__(self, file_path: str):
-        self.file_path = Path(file_path)
-        self._automations: Dict[str, Automation] = {}
-        self._load()
+        super().__init__(file_path, Automation.from_dict)
+    # Backward-compatible aliases
+    get_all_automations = BaseJsonStore.get_all
+    get_automation = BaseJsonStore.get
+    delete_automation = BaseJsonStore.delete
-    def _load(self) -> None:
-        if not self.file_path.exists():
-            return
-        try:
-            with open(self.file_path, "r", encoding="utf-8") as f:
-                data = json.load(f)
-            automations_data = data.get("automations", {})
-            loaded = 0
-            for auto_id, auto_dict in automations_data.items():
-                try:
-                    automation = Automation.from_dict(auto_dict)
-                    self._automations[auto_id] = automation
-                    loaded += 1
-                except Exception as e:
-                    logger.error(f"Failed to load automation {auto_id}: {e}", exc_info=True)
-            if loaded > 0:
-                logger.info(f"Loaded {loaded} automations from storage")
-        except Exception as e:
-            logger.error(f"Failed to load automations from {self.file_path}: {e}")
-            raise
-        logger.info(f"Automation store initialized with {len(self._automations)} automations")
-    def _save(self) -> None:
-        try:
-            data = {
-                "version": "1.0.0",
-                "automations": {
-                    aid: a.to_dict() for aid, a in self._automations.items()
-                },
-            }
-            atomic_write_json(self.file_path, data)
-        except Exception as e:
-            logger.error(f"Failed to save automations to {self.file_path}: {e}")
-            raise
-    def get_all_automations(self) -> List[Automation]:
-        return list(self._automations.values())
-    def get_automation(self, automation_id: str) -> Automation:
-        if automation_id not in self._automations:
-            raise ValueError(f"Automation not found: {automation_id}")
-        return self._automations[automation_id]
     def create_automation(
         self,
@@ -77,13 +32,14 @@ class AutomationStore:
         scene_preset_id: Optional[str] = None,
         deactivation_mode: str = "none",
         deactivation_scene_preset_id: Optional[str] = None,
+        tags: Optional[List[str]] = None,
     ) -> Automation:
-        for a in self._automations.values():
+        for a in self._items.values():
             if a.name == name:
                 raise ValueError(f"Automation with name '{name}' already exists")
         automation_id = f"auto_{uuid.uuid4().hex[:8]}"
-        now = datetime.utcnow()
+        now = datetime.now(timezone.utc)
         automation = Automation(
             id=automation_id,
@@ -96,11 +52,11 @@ class AutomationStore:
             deactivation_scene_preset_id=deactivation_scene_preset_id,
             created_at=now,
             updated_at=now,
+            tags=tags or [],
         )
-        self._automations[automation_id] = automation
+        self._items[automation_id] = automation
         self._save()
         logger.info(f"Created automation: {name} ({automation_id})")
         return automation
@@ -114,16 +70,12 @@ class AutomationStore:
         scene_preset_id: str = "__unset__",
         deactivation_mode: Optional[str] = None,
         deactivation_scene_preset_id: str = "__unset__",
+        tags: Optional[List[str]] = None,
     ) -> Automation:
-        if automation_id not in self._automations:
-            raise ValueError(f"Automation not found: {automation_id}")
-        automation = self._automations[automation_id]
+        automation = self.get(automation_id)
         if name is not None:
-            for aid, a in self._automations.items():
-                if aid != automation_id and a.name == name:
-                    raise ValueError(f"Automation with name '{name}' already exists")
+            self._check_name_unique(name, exclude_id=automation_id)
             automation.name = name
         if enabled is not None:
             automation.enabled = enabled
@@ -137,21 +89,10 @@ class AutomationStore:
             automation.deactivation_mode = deactivation_mode
         if deactivation_scene_preset_id != "__unset__":
             automation.deactivation_scene_preset_id = deactivation_scene_preset_id
+        if tags is not None:
+            automation.tags = tags
-        automation.updated_at = datetime.utcnow()
+        automation.updated_at = datetime.now(timezone.utc)
         self._save()
         logger.info(f"Updated automation: {automation_id}")
         return automation
-    def delete_automation(self, automation_id: str) -> None:
-        if automation_id not in self._automations:
-            raise ValueError(f"Automation not found: {automation_id}")
-        del self._automations[automation_id]
-        self._save()
-        logger.info(f"Deleted automation: {automation_id}")
-    def count(self) -> int:
-        return len(self._automations)


@@ -0,0 +1,115 @@
+"""Base class for JSON entity stores — eliminates boilerplate across 12+ stores."""
+import json
+from pathlib import Path
+from typing import Callable, Dict, Generic, List, TypeVar
+from wled_controller.utils import atomic_write_json, get_logger
+T = TypeVar("T")
+logger = get_logger(__name__)
+class BaseJsonStore(Generic[T]):
+    """JSON-file-backed entity store with common CRUD helpers.
+    Provides:
+    - ``_load()`` / ``_save()``: atomic JSON file I/O
+    - ``get_all()`` / ``get(id)`` / ``delete(id)`` / ``count()``: read/delete
+    - ``_check_name_unique(name, exclude_id)``: duplicate-name guard
+    Subclasses must set class attributes:
+    - ``_json_key``: root key in JSON file (e.g. ``"sync_clocks"``)
+    - ``_entity_name``: human label for errors (e.g. ``"Sync clock"``)
+    - ``_version``: schema version string (default ``"1.0.0"``)
+    """
+    _json_key: str
+    _entity_name: str
+    _version: str = "1.0.0"
+    def __init__(self, file_path: str, deserializer: Callable[[dict], T]):
+        self.file_path = Path(file_path)
+        self._items: Dict[str, T] = {}
+        self._deserializer = deserializer
+        self._load()
+    # ── I/O ────────────────────────────────────────────────────────
+    def _load(self) -> None:
+        if not self.file_path.exists():
+            logger.info(f"{self._entity_name} store file not found — starting empty")
+            return
+        try:
+            with open(self.file_path, "r", encoding="utf-8") as f:
+                data = json.load(f)
+            items_data = data.get(self._json_key, {})
+            loaded = 0
+            for item_id, item_dict in items_data.items():
+                try:
+                    self._items[item_id] = self._deserializer(item_dict)
+                    loaded += 1
+                except Exception as e:
+                    logger.error(
+                        f"Failed to load {self._entity_name} {item_id}: {e}",
+                        exc_info=True,
+                    )
+            if loaded > 0:
+                logger.info(f"Loaded {loaded} {self._json_key} from storage")
+        except Exception as e:
+            logger.error(f"Failed to load {self._json_key} from {self.file_path}: {e}")
+            raise
+        logger.info(
+            f"{self._entity_name} store initialized with {len(self._items)} items"
+        )
+    def _save(self) -> None:
+        try:
+            data = {
+                "version": self._version,
+                self._json_key: {
+                    item_id: item.to_dict()
+                    for item_id, item in self._items.items()
+                },
+            }
+            atomic_write_json(self.file_path, data)
+        except Exception as e:
+            logger.error(f"Failed to save {self._json_key} to {self.file_path}: {e}")
+            raise
+    # ── Common CRUD ────────────────────────────────────────────────
+    def get_all(self) -> List[T]:
+        return list(self._items.values())
+    def get(self, item_id: str) -> T:
+        if item_id not in self._items:
+            raise ValueError(f"{self._entity_name} not found: {item_id}")
+        return self._items[item_id]
+    def delete(self, item_id: str) -> None:
+        if item_id not in self._items:
+            raise ValueError(f"{self._entity_name} not found: {item_id}")
+        del self._items[item_id]
+        self._save()
+        logger.info(f"Deleted {self._entity_name}: {item_id}")
+    def count(self) -> int:
+        return len(self._items)
+    # ── Helpers ────────────────────────────────────────────────────
+    def _check_name_unique(self, name: str, exclude_id: str = None) -> None:
+        """Raise ValueError if *name* is empty or already taken."""
+        if not name or not name.strip():
+            raise ValueError("Name is required")
+        for item_id, item in self._items.items():
+            if item_id != exclude_id and getattr(item, "name", None) == name:
+                raise ValueError(
+                    f"{self._entity_name} with name '{name}' already exists"
+                )
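To illustrate the subclassing contract this base class expects, here is a self-contained sketch: a trimmed `BaseJsonStore` (logging and atomic writes omitted) plus a hypothetical `Note` entity and `NoteStore` — the real stores in this commit follow the same shape, passing their dataclass's `from_dict` as the deserializer.

```python
import json
import tempfile
from dataclasses import dataclass, field
from pathlib import Path
from typing import Callable, Dict, Generic, List, Optional, TypeVar

T = TypeVar("T")

class BaseJsonStore(Generic[T]):
    """Trimmed sketch of the base store (no logging, plain write instead of atomic)."""
    _json_key: str
    _entity_name: str
    _version: str = "1.0.0"

    def __init__(self, file_path: str, deserializer: Callable[[dict], T]):
        self.file_path = Path(file_path)
        self._items: Dict[str, T] = {}
        self._deserializer = deserializer
        if self.file_path.exists():
            data = json.loads(self.file_path.read_text(encoding="utf-8"))
            for item_id, item_dict in data.get(self._json_key, {}).items():
                self._items[item_id] = self._deserializer(item_dict)

    def _save(self) -> None:
        data = {
            "version": self._version,
            self._json_key: {i: item.to_dict() for i, item in self._items.items()},
        }
        self.file_path.write_text(json.dumps(data), encoding="utf-8")

    def get(self, item_id: str) -> T:
        if item_id not in self._items:
            raise ValueError(f"{self._entity_name} not found: {item_id}")
        return self._items[item_id]

    def _check_name_unique(self, name: str, exclude_id: Optional[str] = None) -> None:
        if not name or not name.strip():
            raise ValueError("Name is required")
        for item_id, item in self._items.items():
            if item_id != exclude_id and getattr(item, "name", None) == name:
                raise ValueError(f"{self._entity_name} with name '{name}' already exists")

@dataclass
class Note:
    id: str
    name: str
    tags: List[str] = field(default_factory=list)

    def to_dict(self) -> dict:
        return {"id": self.id, "name": self.name, "tags": self.tags}

    @classmethod
    def from_dict(cls, data: dict) -> "Note":
        return cls(id=data["id"], name=data["name"], tags=data.get("tags", []))

class NoteStore(BaseJsonStore[Note]):
    _json_key = "notes"        # root key in the JSON file
    _entity_name = "Note"      # label used in error messages

    def __init__(self, file_path: str):
        super().__init__(file_path, Note.from_dict)

    def create(self, name: str, tags: Optional[List[str]] = None) -> Note:
        self._check_name_unique(name)
        note = Note(id=f"note_{len(self._items)}", name=name, tags=tags or [])
        self._items[note.id] = note
        self._save()
        return note

path = Path(tempfile.mkdtemp()) / "notes.json"
store = NoteStore(str(path))
n = store.create("hello", tags=["demo"])
print(NoteStore(str(path)).get(n.id).tags)  # reloaded from disk: ['demo']
```

The subclass supplies only what is store-specific: the JSON root key, the error label, the deserializer, and any entity-specific create/update logic.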


@@ -16,8 +16,8 @@ Current types:
 """
 from dataclasses import dataclass, field
-from datetime import datetime
+from datetime import datetime, timezone
-from typing import Optional
+from typing import List, Optional
 from wled_controller.core.capture.calibration import (
     CalibrationConfig,
@@ -37,6 +37,7 @@ class ColorStripSource:
     updated_at: datetime
     description: Optional[str] = None
     clock_id: Optional[str] = None  # optional SyncClock reference
+    tags: List[str] = field(default_factory=list)
     @property
     def sharable(self) -> bool:
@@ -57,6 +58,7 @@ class ColorStripSource:
             "updated_at": self.updated_at.isoformat(),
             "description": self.description,
             "clock_id": self.clock_id,
+            "tags": self.tags,
             # Subclass fields default to None for forward compat
             "picture_source_id": None,
             "fps": None,
@@ -102,20 +104,21 @@ class ColorStripSource:
         description: str | None = data.get("description")
         clock_id: str | None = data.get("clock_id")
+        tags: list = data.get("tags", [])
         raw_created = data.get("created_at")
         created_at: datetime = (
             datetime.fromisoformat(raw_created)
             if isinstance(raw_created, str)
             else raw_created if isinstance(raw_created, datetime)
-            else datetime.utcnow()
+            else datetime.now(timezone.utc)
         )
         raw_updated = data.get("updated_at")
         updated_at: datetime = (
             datetime.fromisoformat(raw_updated)
             if isinstance(raw_updated, str)
             else raw_updated if isinstance(raw_updated, datetime)
-            else datetime.utcnow()
+            else datetime.now(timezone.utc)
         )
         calibration_data = data.get("calibration")
@@ -134,7 +137,7 @@ class ColorStripSource:
            return StaticColorStripSource(
                id=sid, name=name, source_type="static",
                created_at=created_at, updated_at=updated_at, description=description,
-                clock_id=clock_id, color=color,
+                clock_id=clock_id, tags=tags, color=color,
                animation=data.get("animation"),
            )
@@ -144,7 +147,7 @@ class ColorStripSource:
            return GradientColorStripSource(
                id=sid, name=name, source_type="gradient",
                created_at=created_at, updated_at=updated_at, description=description,
-                clock_id=clock_id, stops=stops,
+                clock_id=clock_id, tags=tags, stops=stops,
                animation=data.get("animation"),
            )
@@ -154,14 +157,14 @@ class ColorStripSource:
            return ColorCycleColorStripSource(
                id=sid, name=name, source_type="color_cycle",
                created_at=created_at, updated_at=updated_at, description=description,
-                clock_id=clock_id, colors=colors,
+                clock_id=clock_id, tags=tags, colors=colors,
            )
        if source_type == "composite":
            return CompositeColorStripSource(
                id=sid, name=name, source_type="composite",
                created_at=created_at, updated_at=updated_at, description=description,
-                clock_id=clock_id, layers=data.get("layers") or [],
+                clock_id=clock_id, tags=tags, layers=data.get("layers") or [],
                led_count=data.get("led_count") or 0,
            )
@@ -169,7 +172,7 @@ class ColorStripSource:
            return MappedColorStripSource(
                id=sid, name=name, source_type="mapped",
                created_at=created_at, updated_at=updated_at, description=description,
-                clock_id=clock_id, zones=data.get("zones") or [],
+                clock_id=clock_id, tags=tags, zones=data.get("zones") or [],
                led_count=data.get("led_count") or 0,
            )
@@ -181,7 +184,7 @@ class ColorStripSource:
            return AudioColorStripSource(
                id=sid, name=name, source_type="audio",
                created_at=created_at, updated_at=updated_at, description=description,
-                clock_id=clock_id, visualization_mode=data.get("visualization_mode") or "spectrum",
+                clock_id=clock_id, tags=tags, visualization_mode=data.get("visualization_mode") or "spectrum",
                audio_source_id=data.get("audio_source_id") or "",
                sensitivity=float(data.get("sensitivity") or 1.0),
                smoothing=float(data.get("smoothing") or 0.3),
@@ -201,7 +204,7 @@ class ColorStripSource:
            return EffectColorStripSource(
                id=sid, name=name, source_type="effect",
                created_at=created_at, updated_at=updated_at, description=description,
-                clock_id=clock_id, effect_type=data.get("effect_type") or "fire",
+                clock_id=clock_id, tags=tags, effect_type=data.get("effect_type") or "fire",
                palette=data.get("palette") or "fire",
                color=color,
                intensity=float(data.get("intensity") or 1.0),
@@ -218,7 +221,7 @@ class ColorStripSource:
            return ApiInputColorStripSource(
                id=sid, name=name, source_type="api_input",
                created_at=created_at, updated_at=updated_at, description=description,
-                clock_id=clock_id, led_count=data.get("led_count") or 0,
+                clock_id=clock_id, tags=tags, led_count=data.get("led_count") or 0,
                fallback_color=fallback_color,
                timeout=float(data.get("timeout") or 5.0),
            )
@@ -231,7 +234,7 @@ class ColorStripSource:
            return NotificationColorStripSource(
                id=sid, name=name, source_type="notification",
                created_at=created_at, updated_at=updated_at, description=description,
-                clock_id=clock_id,
+                clock_id=clock_id, tags=tags,
                notification_effect=data.get("notification_effect") or "flash",
                duration_ms=int(data.get("duration_ms") or 1500),
                default_color=data.get("default_color") or "#FFFFFF",
@@ -243,6 +246,7 @@ class ColorStripSource:
        # Shared picture-type field extraction
        _picture_kwargs = dict(
+            tags=tags,
            fps=data.get("fps") or 30,
            brightness=data["brightness"] if data.get("brightness") is not None else 1.0,
            saturation=data["saturation"] if data.get("saturation") is not None else 1.0,
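The `data.get("tags", [])` default above keeps deserialization backward compatible: entities persisted before this commit simply come back with an empty tag list. A quick sketch of that behavior (minimal hypothetical payloads, not the full source schema):

```python
# Legacy payload written before tags existed — no "tags" key at all.
legacy = {"id": "css_old", "name": "Old source"}
# Payload written after this commit.
current = {"id": "css_new", "name": "New source", "tags": ["stage", "left-wall"]}

def read_tags(data: dict) -> list:
    # Mirrors the from_dict() pattern: a missing key degrades to an empty list
    # instead of raising KeyError on old files.
    return data.get("tags", [])

print(read_tags(legacy))   # []
print(read_tags(current))  # ['stage', 'left-wall']
```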


@@ -1,12 +1,11 @@
"""Color strip source storage using JSON files.""" """Color strip source storage using JSON files."""
import json
import uuid import uuid
from datetime import datetime from datetime import datetime, timezone
from pathlib import Path from typing import List, Optional
from typing import Dict, List, Optional
from wled_controller.core.capture.calibration import CalibrationConfig, calibration_to_dict from wled_controller.core.capture.calibration import CalibrationConfig, calibration_to_dict
from wled_controller.storage.base_store import BaseJsonStore
from wled_controller.storage.color_strip_source import ( from wled_controller.storage.color_strip_source import (
AdvancedPictureColorStripSource, AdvancedPictureColorStripSource,
ApiInputColorStripSource, ApiInputColorStripSource,
@@ -21,73 +20,27 @@ from wled_controller.storage.color_strip_source import (
PictureColorStripSource, PictureColorStripSource,
StaticColorStripSource, StaticColorStripSource,
) )
from wled_controller.utils import atomic_write_json, get_logger from wled_controller.utils import get_logger
logger = get_logger(__name__) logger = get_logger(__name__)
class ColorStripStore: class ColorStripStore(BaseJsonStore[ColorStripSource]):
"""Persistent storage for color strip sources.""" """Persistent storage for color strip sources."""
_json_key = "color_strip_sources"
_entity_name = "Color strip source"
def __init__(self, file_path: str): def __init__(self, file_path: str):
self.file_path = Path(file_path) super().__init__(file_path, ColorStripSource.from_dict)
self._sources: Dict[str, ColorStripSource] = {}
self._load()
def _load(self) -> None: # Backward-compatible aliases
if not self.file_path.exists(): get_all_sources = BaseJsonStore.get_all
logger.info("Color strip store file not found — starting empty") delete_source = BaseJsonStore.delete
return
try:
with open(self.file_path, "r", encoding="utf-8") as f:
data = json.load(f)
sources_data = data.get("color_strip_sources", {})
loaded = 0
for source_id, source_dict in sources_data.items():
try:
source = ColorStripSource.from_dict(source_dict)
self._sources[source_id] = source
loaded += 1
except Exception as e:
logger.error(f"Failed to load color strip source {source_id}: {e}", exc_info=True)
if loaded > 0:
logger.info(f"Loaded {loaded} color strip sources from storage")
except Exception as e:
logger.error(f"Failed to load color strip sources from {self.file_path}: {e}")
raise
logger.info(f"Color strip store initialized with {len(self._sources)} sources")
def _save(self) -> None:
try:
data = {
"version": "1.0.0",
"color_strip_sources": {
sid: source.to_dict()
for sid, source in self._sources.items()
},
}
atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save color strip sources to {self.file_path}: {e}")
raise
def get_all_sources(self) -> List[ColorStripSource]:
return list(self._sources.values())
    def get_source(self, source_id: str) -> ColorStripSource:
-        """Get a color strip source by ID.
-
-        Raises:
-            ValueError: If source not found
-        """
-        if source_id not in self._sources:
-            raise ValueError(f"Color strip source not found: {source_id}")
-        return self._sources[source_id]
+        """Get a color strip source by ID (alias for get())."""
+        return self.get(source_id)

    def create_source(
        self,
@@ -129,6 +82,7 @@ class ColorStripStore:
        app_filter_mode: Optional[str] = None,
        app_filter_list: Optional[list] = None,
        os_listener: Optional[bool] = None,
+        tags: Optional[List[str]] = None,
    ) -> ColorStripSource:
        """Create a new color strip source.
@@ -138,12 +92,12 @@ class ColorStripStore:
        if not name or not name.strip():
            raise ValueError("Name is required")
-        for source in self._sources.values():
+        for source in self._items.values():
            if source.name == name:
                raise ValueError(f"Color strip source with name '{name}' already exists")
        source_id = f"css_{uuid.uuid4().hex[:8]}"
-        now = datetime.utcnow()
+        now = datetime.now(timezone.utc)
        if source_type == "static":
            rgb = color if isinstance(color, list) and len(color) == 3 else [255, 255, 255]
@@ -325,7 +279,8 @@ class ColorStripStore:
            frame_interpolation=frame_interpolation,
        )
-        self._sources[source_id] = source
+        source.tags = tags or []
+        self._items[source_id] = source
        self._save()
        logger.info(f"Created color strip source: {name} ({source_id}, type={source_type})")
@@ -371,19 +326,20 @@ class ColorStripStore:
        app_filter_mode: Optional[str] = None,
        app_filter_list: Optional[list] = None,
        os_listener: Optional[bool] = None,
+        tags: Optional[List[str]] = None,
    ) -> ColorStripSource:
        """Update an existing color strip source.

        Raises:
            ValueError: If source not found
        """
-        if source_id not in self._sources:
+        if source_id not in self._items:
            raise ValueError(f"Color strip source not found: {source_id}")
-        source = self._sources[source_id]
+        source = self._items[source_id]
        if name is not None:
-            for other in self._sources.values():
+            for other in self._items.values():
                if other.id != source_id and other.name == name:
                    raise ValueError(f"Color strip source with name '{name}' already exists")
            source.name = name
@@ -394,6 +350,9 @@ class ColorStripStore:
        if clock_id is not None:
            source.clock_id = clock_id if clock_id else None
+        if tags is not None:
+            source.tags = tags
        if isinstance(source, (PictureColorStripSource, AdvancedPictureColorStripSource)):
            if picture_source_id is not None and isinstance(source, PictureColorStripSource):
                source.picture_source_id = picture_source_id
@@ -494,30 +453,16 @@ class ColorStripStore:
        if os_listener is not None:
            source.os_listener = bool(os_listener)
-        source.updated_at = datetime.utcnow()
+        source.updated_at = datetime.now(timezone.utc)
        self._save()
        logger.info(f"Updated color strip source: {source_id}")
        return source

-    def delete_source(self, source_id: str) -> None:
-        """Delete a color strip source.
-
-        Raises:
-            ValueError: If source not found
-        """
-        if source_id not in self._sources:
-            raise ValueError(f"Color strip source not found: {source_id}")
-        del self._sources[source_id]
-        self._save()
-        logger.info(f"Deleted color strip source: {source_id}")

    def get_composites_referencing(self, source_id: str) -> List[str]:
        """Return names of composite sources that reference a given source as a layer."""
        names = []
-        for source in self._sources.values():
+        for source in self._items.values():
            if isinstance(source, CompositeColorStripSource):
                for layer in source.layers:
                    if layer.get("source_id") == source_id:
@@ -528,7 +473,7 @@ class ColorStripStore:
    def get_mapped_referencing(self, source_id: str) -> List[str]:
        """Return names of mapped sources that reference a given source as a zone."""
        names = []
-        for source in self._sources.values():
+        for source in self._items.values():
            if isinstance(source, MappedColorStripSource):
                for zone in source.zones:
                    if zone.get("source_id") == source_id:

View File

@@ -2,11 +2,11 @@
import json
import uuid
-from datetime import datetime
+from datetime import datetime, timezone
from pathlib import Path
from typing import Dict, List, Optional

-from wled_controller.utils import get_logger
+from wled_controller.utils import atomic_write_json, get_logger

logger = get_logger(__name__)
@@ -33,6 +33,7 @@ class Device:
        send_latency_ms: int = 0,
        rgbw: bool = False,
        zone_mode: str = "combined",
+        tags: List[str] = None,
        created_at: Optional[datetime] = None,
        updated_at: Optional[datetime] = None,
    ):
@@ -48,8 +49,9 @@ class Device:
        self.send_latency_ms = send_latency_ms
        self.rgbw = rgbw
        self.zone_mode = zone_mode
-        self.created_at = created_at or datetime.utcnow()
-        self.updated_at = updated_at or datetime.utcnow()
+        self.tags = tags or []
+        self.created_at = created_at or datetime.now(timezone.utc)
+        self.updated_at = updated_at or datetime.now(timezone.utc)

    def to_dict(self) -> dict:
        """Convert device to dictionary."""
@@ -75,6 +77,8 @@ class Device:
            d["rgbw"] = True
        if self.zone_mode != "combined":
            d["zone_mode"] = self.zone_mode
+        if self.tags:
+            d["tags"] = self.tags
        return d

    @classmethod
@@ -93,8 +97,9 @@ class Device:
            send_latency_ms=data.get("send_latency_ms", 0),
            rgbw=data.get("rgbw", False),
            zone_mode=data.get("zone_mode", "combined"),
-            created_at=datetime.fromisoformat(data.get("created_at", datetime.utcnow().isoformat())),
-            updated_at=datetime.fromisoformat(data.get("updated_at", datetime.utcnow().isoformat())),
+            tags=data.get("tags", []),
+            created_at=datetime.fromisoformat(data.get("created_at", datetime.now(timezone.utc).isoformat())),
+            updated_at=datetime.fromisoformat(data.get("updated_at", datetime.now(timezone.utc).isoformat())),
        )
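The `datetime.utcnow()` to `datetime.now(timezone.utc)` migration repeated throughout this commit swaps naive timestamps for timezone-aware ones. A quick sketch of why that matters, in plain Python and independent of this codebase:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # deprecated since Python 3.12; tzinfo is None
aware = datetime.now(timezone.utc)  # tzinfo is timezone.utc

assert naive.tzinfo is None
assert aware.tzinfo is timezone.utc

# Aware timestamps round-trip through isoformat() with an explicit offset,
# so from_dict() can restore them unambiguously.
assert aware.isoformat().endswith("+00:00")
assert datetime.fromisoformat(aware.isoformat()) == aware

# Comparing naive and aware values raises at runtime, which is one reason to
# migrate every call site in the same commit rather than piecemeal.
try:
    naive < aware
except TypeError:
    pass
```

One caveat: timestamps persisted before this change were serialized without an offset and will deserialize as naive datetimes, so comparisons across the old/new boundary can still raise.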
@@ -158,11 +163,7 @@ class DeviceStore:
            }
        }

-        temp_file = self.storage_file.with_suffix(".tmp")
-        with open(temp_file, "w") as f:
-            json.dump(data, f, indent=2)
-        temp_file.replace(self.storage_file)
+        atomic_write_json(self.storage_file, data)

        logger.debug(f"Saved {len(self._devices)} devices to storage")
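The hand-rolled temp-file dance removed above is replaced by the shared `atomic_write_json` helper, whose implementation is not shown in this diff. A minimal sketch consistent with the removed code (write to a sibling `.tmp` file, then rename over the original) might look like the following; the `fsync` call is extra hardening beyond what the old inline code did, and the real helper may differ:

```python
import json
import os
from pathlib import Path


def atomic_write_json(path, data: dict) -> None:
    """Write JSON atomically: readers observe either the old file or the
    complete new one, never a partially written file."""
    path = Path(path)
    tmp = path.with_suffix(".tmp")
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2)
        f.flush()
        os.fsync(f.fileno())  # push bytes to disk before the rename
    os.replace(tmp, path)     # atomic rename on both POSIX and Windows
```

Centralizing this also fixes the subtle bug class where one store writes atomically and another does a plain `open(..., "w")`.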
@@ -181,6 +182,7 @@ class DeviceStore:
        send_latency_ms: int = 0,
        rgbw: bool = False,
        zone_mode: str = "combined",
+        tags: Optional[List[str]] = None,
    ) -> Device:
        """Create a new device."""
        device_id = f"device_{uuid.uuid4().hex[:8]}"
@@ -200,6 +202,7 @@ class DeviceStore:
            send_latency_ms=send_latency_ms,
            rgbw=rgbw,
            zone_mode=zone_mode,
+            tags=tags or [],
        )

        self._devices[device_id] = device
@@ -228,6 +231,7 @@ class DeviceStore:
        send_latency_ms: Optional[int] = None,
        rgbw: Optional[bool] = None,
        zone_mode: Optional[str] = None,
+        tags: Optional[List[str]] = None,
    ) -> Device:
        """Update device."""
        device = self._devices.get(device_id)
@@ -252,8 +256,10 @@ class DeviceStore:
            device.rgbw = rgbw
        if zone_mode is not None:
            device.zone_mode = zone_mode
+        if tags is not None:
+            device.tags = tags
-        device.updated_at = datetime.utcnow()
+        device.updated_at = datetime.now(timezone.utc)

        self.save()
        logger.info(f"Updated device {device_id}")

View File

@@ -1,7 +1,7 @@
"""Key colors output target — extracts key colors from image rectangles."""

from dataclasses import dataclass, field
-from datetime import datetime
+from datetime import datetime, timezone
from typing import List, Optional

from wled_controller.storage.output_target import OutputTarget
@@ -100,9 +100,10 @@ class KeyColorsOutputTarget(OutputTarget):
    def update_fields(self, *, name=None, device_id=None, picture_source_id=None,
                      settings=None, key_colors_settings=None, description=None,
+                      tags=None,
                      **_kwargs) -> None:
        """Apply mutable field updates for KC targets."""
-        super().update_fields(name=name, description=description)
+        super().update_fields(name=name, description=description, tags=tags)
        if picture_source_id is not None:
            self.picture_source_id = picture_source_id
        if key_colors_settings is not None:
@@ -130,6 +131,7 @@ class KeyColorsOutputTarget(OutputTarget):
            picture_source_id=data.get("picture_source_id", ""),
            settings=settings,
            description=data.get("description"),
-            created_at=datetime.fromisoformat(data.get("created_at", datetime.utcnow().isoformat())),
-            updated_at=datetime.fromisoformat(data.get("updated_at", datetime.utcnow().isoformat())),
+            tags=data.get("tags", []),
+            created_at=datetime.fromisoformat(data.get("created_at", datetime.now(timezone.utc).isoformat())),
+            updated_at=datetime.fromisoformat(data.get("updated_at", datetime.now(timezone.utc).isoformat())),
        )

View File

@@ -1,8 +1,8 @@
"""Output target base data model."""

-from dataclasses import dataclass
+from dataclasses import dataclass, field
from datetime import datetime
-from typing import Optional
+from typing import List, Optional

@dataclass
@@ -15,6 +15,7 @@ class OutputTarget:
    created_at: datetime
    updated_at: datetime
    description: Optional[str] = None
+    tags: List[str] = field(default_factory=list)

    def register_with_manager(self, manager) -> None:
        """Register this target with the processor manager. Subclasses override."""
@@ -26,12 +27,15 @@ class OutputTarget:
    def update_fields(self, *, name=None, device_id=None, picture_source_id=None,
                      settings=None, key_colors_settings=None, description=None,
+                      tags: Optional[List[str]] = None,
                      **_kwargs) -> None:
        """Apply mutable field updates. Base handles common fields; subclasses handle type-specific ones."""
        if name is not None:
            self.name = name
        if description is not None:
            self.description = description
+        if tags is not None:
+            self.tags = tags

    @property
    def has_picture_source(self) -> bool:
@@ -45,6 +49,7 @@ class OutputTarget:
            "name": self.name,
            "target_type": self.target_type,
            "description": self.description,
+            "tags": self.tags,
            "created_at": self.created_at.isoformat(),
            "updated_at": self.updated_at.isoformat(),
        }
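The `tags is not None` guard added to `update_fields` is what lets PATCH-style route handlers distinguish "tags omitted from the request, leave alone" from "tags set to an empty list, clear them". A standalone illustration of the sentinel pattern (hypothetical `Entity` class, not from this codebase):

```python
from typing import List, Optional


class Entity:
    def __init__(self) -> None:
        self.tags: List[str] = ["alpha"]

    def update_fields(self, *, tags: Optional[List[str]] = None) -> None:
        # None means "field absent from the request" — leave it unchanged.
        # An empty list is a real value and clears the tags.
        if tags is not None:
            self.tags = tags


e = Entity()
e.update_fields()            # tags omitted: unchanged
assert e.tags == ["alpha"]
e.update_fields(tags=[])     # explicit empty list: cleared
assert e.tags == []
e.update_fields(tags=["x"])  # replaced
assert e.tags == ["x"]
```

The same convention recurs in every store's `update_*` method in this commit.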

View File

@@ -2,93 +2,61 @@
import json
import uuid
-from datetime import datetime
+from datetime import datetime, timezone
-from pathlib import Path
-from typing import Dict, List, Optional
+from typing import List, Optional

+from wled_controller.storage.base_store import BaseJsonStore
from wled_controller.storage.output_target import OutputTarget
from wled_controller.storage.wled_output_target import WledOutputTarget
from wled_controller.storage.key_colors_output_target import (
    KeyColorsSettings,
    KeyColorsOutputTarget,
)
-from wled_controller.utils import atomic_write_json, get_logger
+from wled_controller.utils import get_logger

logger = get_logger(__name__)

DEFAULT_STATE_CHECK_INTERVAL = 30  # seconds


-class OutputTargetStore:
+class OutputTargetStore(BaseJsonStore[OutputTarget]):
    """Persistent storage for output targets."""

-    def __init__(self, file_path: str):
-        """Initialize output target store.
-
-        Args:
-            file_path: Path to targets JSON file
-        """
-        self.file_path = Path(file_path)
-        self._targets: Dict[str, OutputTarget] = {}
-        self._load()
+    _json_key = "output_targets"
+    _entity_name = "Output target"
+
+    def __init__(self, file_path: str):
+        super().__init__(file_path, OutputTarget.from_dict)

    def _load(self) -> None:
-        """Load targets from file."""
+        """Override to support legacy 'picture_targets' JSON key."""
+        import json as _json
+        from pathlib import Path
        if not self.file_path.exists():
+            logger.info(f"{self._entity_name} store file not found — starting empty")
            return
        try:
            with open(self.file_path, "r", encoding="utf-8") as f:
-                data = json.load(f)
+                data = _json.load(f)
+            # Support both new "output_targets" and legacy "picture_targets" keys
            targets_data = data.get("output_targets") or data.get("picture_targets", {})
            loaded = 0
            for target_id, target_dict in targets_data.items():
                try:
-                    target = OutputTarget.from_dict(target_dict)
-                    self._targets[target_id] = target
+                    self._items[target_id] = self._deserializer(target_dict)
                    loaded += 1
                except Exception as e:
-                    logger.error(f"Failed to load output target {target_id}: {e}", exc_info=True)
+                    logger.error(f"Failed to load {self._entity_name} {target_id}: {e}", exc_info=True)
            if loaded > 0:
-                logger.info(f"Loaded {loaded} output targets from storage")
+                logger.info(f"Loaded {loaded} {self._json_key} from storage")
        except Exception as e:
-            logger.error(f"Failed to load output targets from {self.file_path}: {e}")
+            logger.error(f"Failed to load {self._json_key} from {self.file_path}: {e}")
            raise
+        logger.info(f"{self._entity_name} store initialized with {len(self._items)} items")

-        logger.info(f"Output target store initialized with {len(self._targets)} targets")
-
-    def _save(self) -> None:
-        """Save all targets to file."""
+    # Backward-compatible aliases
+    get_all_targets = BaseJsonStore.get_all
+    get_target = BaseJsonStore.get
+    delete_target = BaseJsonStore.delete
try:
data = {
"version": "1.0.0",
"output_targets": {
target_id: target.to_dict()
for target_id, target in self._targets.items()
},
}
atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save output targets to {self.file_path}: {e}")
raise
def get_all_targets(self) -> List[OutputTarget]:
"""Get all output targets."""
return list(self._targets.values())
def get_target(self, target_id: str) -> OutputTarget:
"""Get target by ID.
Raises:
ValueError: If target not found
"""
if target_id not in self._targets:
raise ValueError(f"Output target not found: {target_id}")
return self._targets[target_id]
def create_target( def create_target(
self, self,
@@ -106,6 +74,7 @@ class OutputTargetStore:
        key_colors_settings: Optional[KeyColorsSettings] = None,
        description: Optional[str] = None,
        picture_source_id: str = "",
+        tags: Optional[List[str]] = None,
    ) -> OutputTarget:
        """Create a new output target.
@@ -116,12 +85,12 @@ class OutputTargetStore:
            raise ValueError(f"Invalid target type: {target_type}")

        # Check for duplicate name
-        for target in self._targets.values():
+        for target in self._items.values():
            if target.name == name:
                raise ValueError(f"Output target with name '{name}' already exists")

        target_id = f"pt_{uuid.uuid4().hex[:8]}"
-        now = datetime.utcnow()
+        now = datetime.now(timezone.utc)

        if target_type == "led":
            target: OutputTarget = WledOutputTarget(
@@ -155,7 +124,8 @@ class OutputTargetStore:
        else:
            raise ValueError(f"Unknown target type: {target_type}")

-        self._targets[target_id] = target
+        target.tags = tags or []
+        self._items[target_id] = target
        self._save()
        logger.info(f"Created output target: {name} ({target_id}, type={target_type})")
@@ -176,20 +146,21 @@ class OutputTargetStore:
        protocol: Optional[str] = None,
        key_colors_settings: Optional[KeyColorsSettings] = None,
        description: Optional[str] = None,
+        tags: Optional[List[str]] = None,
    ) -> OutputTarget:
        """Update an output target.

        Raises:
            ValueError: If target not found or validation fails
        """
-        if target_id not in self._targets:
+        if target_id not in self._items:
            raise ValueError(f"Output target not found: {target_id}")

-        target = self._targets[target_id]
+        target = self._items[target_id]

        if name is not None:
            # Check for duplicate name (exclude self)
-            for other in self._targets.values():
+            for other in self._items.values():
                if other.id != target_id and other.name == name:
                    raise ValueError(f"Output target with name '{name}' already exists")
@@ -206,50 +177,37 @@ class OutputTargetStore:
            protocol=protocol,
            key_colors_settings=key_colors_settings,
            description=description,
+            tags=tags,
        )

-        target.updated_at = datetime.utcnow()
+        target.updated_at = datetime.now(timezone.utc)
        self._save()
        logger.info(f"Updated output target: {target_id}")
        return target
def delete_target(self, target_id: str) -> None:
"""Delete an output target.
Raises:
ValueError: If target not found
"""
if target_id not in self._targets:
raise ValueError(f"Output target not found: {target_id}")
del self._targets[target_id]
self._save()
logger.info(f"Deleted output target: {target_id}")
    def get_targets_for_device(self, device_id: str) -> List[OutputTarget]:
        """Get all targets that reference a specific device."""
        return [
-            t for t in self._targets.values()
+            t for t in self._items.values()
            if isinstance(t, WledOutputTarget) and t.device_id == device_id
        ]

    def get_targets_referencing_source(self, source_id: str) -> List[str]:
        """Return names of KC targets that reference a picture source."""
        return [
-            target.name for target in self._targets.values()
+            target.name for target in self._items.values()
            if isinstance(target, KeyColorsOutputTarget) and target.picture_source_id == source_id
        ]

    def get_targets_referencing_css(self, css_id: str) -> List[str]:
        """Return names of LED targets that reference a color strip source."""
        return [
-            target.name for target in self._targets.values()
+            target.name for target in self._items.values()
            if isinstance(target, WledOutputTarget)
            and target.color_strip_source_id == css_id
        ]

    def count(self) -> int:
        """Get number of targets."""
-        return len(self._targets)
+        return len(self._items)
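Both refactored stores now inherit from `BaseJsonStore`, whose definition is not part of this diff. Judging from the members used here (`_json_key`, `_entity_name`, `_items`, `_deserializer`, `_save`, `get`/`get_all`/`delete`, `_check_name_unique`), a plausible reconstruction is the sketch below; the actual `base_store.py` may well differ in details:

```python
import json
from pathlib import Path
from typing import Callable, Dict, Generic, List, Optional, TypeVar

T = TypeVar("T")


class BaseJsonStore(Generic[T]):
    """Hypothetical base class, inferred from its usage in this commit."""

    _json_key = "items"       # top-level key in the JSON file
    _entity_name = "Item"     # used in error and log messages

    def __init__(self, file_path, deserializer: Callable[[dict], T]) -> None:
        self.file_path = Path(file_path)
        self._deserializer = deserializer
        self._items: Dict[str, T] = {}
        self._load()

    def _load(self) -> None:
        if not self.file_path.exists():
            return
        data = json.loads(self.file_path.read_text(encoding="utf-8"))
        for item_id, item_dict in data.get(self._json_key, {}).items():
            self._items[item_id] = self._deserializer(item_dict)

    def _save(self) -> None:
        data = {
            "version": "1.0.0",
            self._json_key: {iid: i.to_dict() for iid, i in self._items.items()},
        }
        # The real implementation presumably calls atomic_write_json here.
        self.file_path.write_text(json.dumps(data, indent=2), encoding="utf-8")

    def get_all(self) -> List[T]:
        return list(self._items.values())

    def get(self, item_id: str) -> T:
        if item_id not in self._items:
            raise ValueError(f"{self._entity_name} not found: {item_id}")
        return self._items[item_id]

    def delete(self, item_id: str) -> None:
        if item_id not in self._items:
            raise ValueError(f"{self._entity_name} not found: {item_id}")
        del self._items[item_id]
        self._save()

    def _check_name_unique(self, name: str, exclude_id: Optional[str] = None) -> None:
        for iid, item in self._items.items():
            if iid != exclude_id and getattr(item, "name", None) == name:
                raise ValueError(f"{self._entity_name} with name '{name}' already exists")
```

A base class like this is why the per-store `_load`/`_save`/`get_*`/`delete_*` boilerplate could be deleted and replaced with class-attribute aliases such as `get_target = BaseJsonStore.get`.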

View File

@@ -1,7 +1,7 @@
"""Pattern template data model for key color rectangle layouts."""

from dataclasses import dataclass, field
-from datetime import datetime
+from datetime import datetime, timezone
from typing import List, Optional

from wled_controller.storage.key_colors_output_target import KeyColorRectangle
@@ -17,6 +17,7 @@ class PatternTemplate:
    created_at: datetime
    updated_at: datetime
    description: Optional[str] = None
+    tags: List[str] = field(default_factory=list)

    def to_dict(self) -> dict:
        """Convert to dictionary."""
@@ -27,6 +28,7 @@ class PatternTemplate:
            "created_at": self.created_at.isoformat(),
            "updated_at": self.updated_at.isoformat(),
            "description": self.description,
+            "tags": self.tags,
        }

    @classmethod
@@ -39,9 +41,10 @@ class PatternTemplate:
            rectangles=rectangles,
            created_at=datetime.fromisoformat(data["created_at"])
            if isinstance(data.get("created_at"), str)
-            else data.get("created_at", datetime.utcnow()),
+            else data.get("created_at", datetime.now(timezone.utc)),
            updated_at=datetime.fromisoformat(data["updated_at"])
            if isinstance(data.get("updated_at"), str)
-            else data.get("updated_at", datetime.utcnow()),
+            else data.get("updated_at", datetime.now(timezone.utc)),
            description=data.get("description"),
+            tags=data.get("tags", []),
        )
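`PatternTemplate.from_dict` accepts both ISO-8601 strings and already-parsed `datetime` objects for its timestamps. The same tolerance can be factored into a helper; the function below is a hypothetical illustration of the pattern, not code from this diff:

```python
from datetime import datetime, timezone


def parse_ts(value) -> datetime:
    """Accept an ISO-8601 string, a datetime, or None (fall back to now)."""
    if isinstance(value, str):
        return datetime.fromisoformat(value)
    if isinstance(value, datetime):
        return value
    return datetime.now(timezone.utc)


assert parse_ts("2026-03-09T22:20:19+00:00") == datetime(
    2026, 3, 9, 22, 20, 19, tzinfo=timezone.utc
)
aware = datetime.now(timezone.utc)
assert parse_ts(aware) is aware
assert parse_ts(None).tzinfo is timezone.utc
```

Accepting both types keeps `from_dict` usable for data loaded from JSON as well as objects constructed in memory during tests.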

View File

@@ -1,42 +1,42 @@
"""Pattern template storage using JSON files."""

-import json
import uuid
-from datetime import datetime
+from datetime import datetime, timezone
-from pathlib import Path
-from typing import Dict, List, Optional
+from typing import List, Optional

+from wled_controller.storage.base_store import BaseJsonStore
from wled_controller.storage.key_colors_output_target import KeyColorRectangle
from wled_controller.storage.pattern_template import PatternTemplate
-from wled_controller.utils import atomic_write_json, get_logger
+from wled_controller.utils import get_logger

logger = get_logger(__name__)


-class PatternTemplateStore:
+class PatternTemplateStore(BaseJsonStore[PatternTemplate]):
    """Storage for pattern templates (rectangle layouts for key color extraction).

    All templates are persisted to the JSON file.
    On startup, if no templates exist, a default one is auto-created.
    """

-    def __init__(self, file_path: str):
-        """Initialize pattern template store.
-
-        Args:
-            file_path: Path to templates JSON file
-        """
-        self.file_path = Path(file_path)
-        self._templates: Dict[str, PatternTemplate] = {}
-        self._load()
+    _json_key = "pattern_templates"
+    _entity_name = "Pattern template"
+
+    def __init__(self, file_path: str):
+        super().__init__(file_path, PatternTemplate.from_dict)
        self._ensure_initial_template()

+    # Backward-compatible aliases
+    get_all_templates = BaseJsonStore.get_all
+    get_template = BaseJsonStore.get
+    delete_template = BaseJsonStore.delete

    def _ensure_initial_template(self) -> None:
        """Auto-create a default pattern template if none exist."""
-        if self._templates:
+        if self._items:
            return

-        now = datetime.utcnow()
+        now = datetime.now(timezone.utc)
        template_id = f"pat_{uuid.uuid4().hex[:8]}"
        template = PatternTemplate(
@@ -50,95 +50,24 @@ class PatternTemplateStore:
            description="Default pattern template with full-frame rectangle",
        )
-        self._templates[template_id] = template
+        self._items[template_id] = template
        self._save()
        logger.info(f"Auto-created initial pattern template: {template.name} ({template_id})")
def _load(self) -> None:
"""Load templates from file."""
if not self.file_path.exists():
return
try:
with open(self.file_path, "r", encoding="utf-8") as f:
data = json.load(f)
templates_data = data.get("pattern_templates", {})
loaded = 0
for template_id, template_dict in templates_data.items():
try:
template = PatternTemplate.from_dict(template_dict)
self._templates[template_id] = template
loaded += 1
except Exception as e:
logger.error(
f"Failed to load pattern template {template_id}: {e}",
exc_info=True,
)
if loaded > 0:
logger.info(f"Loaded {loaded} pattern templates from storage")
except Exception as e:
logger.error(f"Failed to load pattern templates from {self.file_path}: {e}")
raise
logger.info(f"Pattern template store initialized with {len(self._templates)} templates")
def _save(self) -> None:
"""Save all templates to file."""
try:
data = {
"version": "1.0.0",
"pattern_templates": {
template_id: template.to_dict()
for template_id, template in self._templates.items()
},
}
atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save pattern templates to {self.file_path}: {e}")
raise
def get_all_templates(self) -> List[PatternTemplate]:
"""Get all pattern templates."""
return list(self._templates.values())
def get_template(self, template_id: str) -> PatternTemplate:
"""Get template by ID.
Raises:
ValueError: If template not found
"""
if template_id not in self._templates:
raise ValueError(f"Pattern template not found: {template_id}")
return self._templates[template_id]
    def create_template(
        self,
        name: str,
        rectangles: Optional[List[KeyColorRectangle]] = None,
        description: Optional[str] = None,
+        tags: Optional[List[str]] = None,
    ) -> PatternTemplate:
-        """Create a new pattern template.
-
-        Args:
-            name: Template name (must be unique)
-            rectangles: List of named rectangles
-            description: Optional description
-
-        Raises:
-            ValueError: If template with same name exists
-        """
-        for template in self._templates.values():
-            if template.name == name:
-                raise ValueError(f"Pattern template with name '{name}' already exists")
+        self._check_name_unique(name)
        if rectangles is None:
            rectangles = []

        template_id = f"pat_{uuid.uuid4().hex[:8]}"
-        now = datetime.utcnow()
+        now = datetime.now(timezone.utc)

        template = PatternTemplate(
            id=template_id,
@@ -147,9 +76,10 @@ class PatternTemplateStore:
            created_at=now,
            updated_at=now,
            description=description,
+            tags=tags or [],
        )

-        self._templates[template_id] = template
+        self._items[template_id] = template
        self._save()
        logger.info(f"Created pattern template: {name} ({template_id})")
@@ -161,48 +91,26 @@ class PatternTemplateStore:
        name: Optional[str] = None,
        rectangles: Optional[List[KeyColorRectangle]] = None,
        description: Optional[str] = None,
+        tags: Optional[List[str]] = None,
    ) -> PatternTemplate:
-        """Update an existing pattern template.
-
-        Raises:
-            ValueError: If template not found
-        """
-        if template_id not in self._templates:
-            raise ValueError(f"Pattern template not found: {template_id}")
-
-        template = self._templates[template_id]
+        template = self.get(template_id)

        if name is not None:
-            for tid, t in self._templates.items():
-                if tid != template_id and t.name == name:
-                    raise ValueError(f"Pattern template with name '{name}' already exists")
+            self._check_name_unique(name, exclude_id=template_id)
            template.name = name
        if rectangles is not None:
            template.rectangles = rectangles
        if description is not None:
            template.description = description
+        if tags is not None:
+            template.tags = tags

-        template.updated_at = datetime.utcnow()
+        template.updated_at = datetime.now(timezone.utc)
        self._save()
        logger.info(f"Updated pattern template: {template_id}")
        return template
def delete_template(self, template_id: str) -> None:
"""Delete a pattern template.
Raises:
ValueError: If template not found
"""
if template_id not in self._templates:
raise ValueError(f"Pattern template not found: {template_id}")
del self._templates[template_id]
self._save()
logger.info(f"Deleted pattern template: {template_id}")
def get_targets_referencing(self, template_id: str, output_target_store) -> List[str]: def get_targets_referencing(self, template_id: str, output_target_store) -> List[str]:
"""Return names of KC targets that reference this template.""" """Return names of KC targets that reference this template."""
from wled_controller.storage.key_colors_output_target import KeyColorsOutputTarget from wled_controller.storage.key_colors_output_target import KeyColorsOutputTarget
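The diff above replaces each store's hand-rolled "not found" and duplicate-name loops with shared helpers inherited from `BaseJsonStore`. A minimal sketch of what those helpers might look like, inferred from the call sites (`_items`, `get`, `_check_name_unique`, `_entity_name` appear in the diff; the exact error messages and the omission of persistence are assumptions):

```python
from typing import Dict, Generic, Optional, TypeVar

T = TypeVar("T")  # entity type; assumed to expose a .name attribute

class BaseJsonStore(Generic[T]):
    """Shared in-memory CRUD helpers; _load/_save persistence omitted."""
    _entity_name = "Entity"

    def __init__(self) -> None:
        self._items: Dict[str, T] = {}

    def get(self, item_id: str) -> T:
        # Uniform lookup: every store raises the same shape of error.
        if item_id not in self._items:
            raise ValueError(f"{self._entity_name} not found: {item_id}")
        return self._items[item_id]

    def _check_name_unique(self, name: str, exclude_id: Optional[str] = None) -> None:
        # exclude_id lets update paths skip the entity being renamed.
        for iid, item in self._items.items():
            if iid != exclude_id and item.name == name:
                raise ValueError(
                    f"{self._entity_name} with name '{name}' already exists"
                )
```

With this base, an update method shrinks to `template = self.get(template_id)` plus `self._check_name_unique(name, exclude_id=template_id)`, exactly as the hunks above show.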


@@ -1,8 +1,8 @@
"""Picture source data model with inheritance-based stream types.""" """Picture source data model with inheritance-based stream types."""
from dataclasses import dataclass from dataclasses import dataclass, field
from datetime import datetime from datetime import datetime, timezone
from typing import Optional from typing import List, Optional
@dataclass @dataclass
@@ -21,6 +21,7 @@ class PictureSource:
created_at: datetime created_at: datetime
updated_at: datetime updated_at: datetime
description: Optional[str] = None description: Optional[str] = None
tags: List[str] = field(default_factory=list)
def to_dict(self) -> dict: def to_dict(self) -> dict:
"""Convert stream to dictionary. Subclasses extend this.""" """Convert stream to dictionary. Subclasses extend this."""
@@ -31,6 +32,7 @@ class PictureSource:
"created_at": self.created_at.isoformat(), "created_at": self.created_at.isoformat(),
"updated_at": self.updated_at.isoformat(), "updated_at": self.updated_at.isoformat(),
"description": self.description, "description": self.description,
"tags": self.tags,
# Subclass fields default to None for backward compat # Subclass fields default to None for backward compat
"display_index": None, "display_index": None,
"capture_template_id": None, "capture_template_id": None,
@@ -47,39 +49,40 @@ class PictureSource:
sid: str = data["id"] sid: str = data["id"]
name: str = data["name"] name: str = data["name"]
description: str | None = data.get("description") description: str | None = data.get("description")
tags: list = data.get("tags", [])
raw_created = data.get("created_at") raw_created = data.get("created_at")
created_at: datetime = ( created_at: datetime = (
datetime.fromisoformat(raw_created) datetime.fromisoformat(raw_created)
if isinstance(raw_created, str) if isinstance(raw_created, str)
else raw_created if isinstance(raw_created, datetime) else raw_created if isinstance(raw_created, datetime)
else datetime.utcnow() else datetime.now(timezone.utc)
) )
raw_updated = data.get("updated_at") raw_updated = data.get("updated_at")
updated_at: datetime = ( updated_at: datetime = (
datetime.fromisoformat(raw_updated) datetime.fromisoformat(raw_updated)
if isinstance(raw_updated, str) if isinstance(raw_updated, str)
else raw_updated if isinstance(raw_updated, datetime) else raw_updated if isinstance(raw_updated, datetime)
else datetime.utcnow() else datetime.now(timezone.utc)
) )
if stream_type == "processed": if stream_type == "processed":
return ProcessedPictureSource( return ProcessedPictureSource(
id=sid, name=name, stream_type=stream_type, id=sid, name=name, stream_type=stream_type,
created_at=created_at, updated_at=updated_at, description=description, created_at=created_at, updated_at=updated_at, description=description, tags=tags,
source_stream_id=data.get("source_stream_id") or "", source_stream_id=data.get("source_stream_id") or "",
postprocessing_template_id=data.get("postprocessing_template_id") or "", postprocessing_template_id=data.get("postprocessing_template_id") or "",
) )
elif stream_type == "static_image": elif stream_type == "static_image":
return StaticImagePictureSource( return StaticImagePictureSource(
id=sid, name=name, stream_type=stream_type, id=sid, name=name, stream_type=stream_type,
created_at=created_at, updated_at=updated_at, description=description, created_at=created_at, updated_at=updated_at, description=description, tags=tags,
image_source=data.get("image_source") or "", image_source=data.get("image_source") or "",
) )
else: else:
return ScreenCapturePictureSource( return ScreenCapturePictureSource(
id=sid, name=name, stream_type=stream_type, id=sid, name=name, stream_type=stream_type,
created_at=created_at, updated_at=updated_at, description=description, created_at=created_at, updated_at=updated_at, description=description, tags=tags,
display_index=data.get("display_index") or 0, display_index=data.get("display_index") or 0,
capture_template_id=data.get("capture_template_id") or "", capture_template_id=data.get("capture_template_id") or "",
target_fps=data.get("target_fps") or 30, target_fps=data.get("target_fps") or 30,


@@ -1,84 +1,43 @@
"""Picture source storage using JSON files.""" """Picture source storage using JSON files."""
import json
import uuid import uuid
from datetime import datetime from datetime import datetime, timezone
from pathlib import Path from typing import List, Optional, Set
from typing import Dict, List, Optional, Set
from wled_controller.storage.base_store import BaseJsonStore
from wled_controller.storage.picture_source import ( from wled_controller.storage.picture_source import (
PictureSource, PictureSource,
ScreenCapturePictureSource,
ProcessedPictureSource, ProcessedPictureSource,
ScreenCapturePictureSource,
StaticImagePictureSource, StaticImagePictureSource,
) )
from wled_controller.utils import atomic_write_json, get_logger from wled_controller.utils import get_logger
logger = get_logger(__name__) logger = get_logger(__name__)
class PictureSourceStore: class PictureSourceStore(BaseJsonStore[PictureSource]):
"""Storage for picture sources. """Storage for picture sources.
Supports raw and processed stream types with cycle detection Supports raw and processed stream types with cycle detection
for processed streams that reference other streams. for processed streams that reference other streams.
""" """
_json_key = "picture_sources"
_entity_name = "Picture source"
def __init__(self, file_path: str): def __init__(self, file_path: str):
"""Initialize picture source store. super().__init__(file_path, PictureSource.from_dict)
Args: # Backward-compatible aliases
file_path: Path to streams JSON file get_all_sources = BaseJsonStore.get_all
""" get_source = BaseJsonStore.get
self.file_path = Path(file_path)
self._streams: Dict[str, PictureSource] = {}
self._load()
def _load(self) -> None: # Legacy aliases (old code used "stream" naming)
"""Load streams from file.""" get_all_streams = BaseJsonStore.get_all
if not self.file_path.exists(): get_stream = BaseJsonStore.get
return
try: # ── Helpers ───────────────────────────────────────────────────────
with open(self.file_path, "r", encoding="utf-8") as f:
data = json.load(f)
streams_data = data.get("picture_sources", {})
loaded = 0
for stream_id, stream_dict in streams_data.items():
try:
stream = PictureSource.from_dict(stream_dict)
self._streams[stream_id] = stream
loaded += 1
except Exception as e:
logger.error(
f"Failed to load picture source {stream_id}: {e}",
exc_info=True,
)
if loaded > 0:
logger.info(f"Loaded {loaded} picture sources from storage")
except Exception as e:
logger.error(f"Failed to load picture sources from {self.file_path}: {e}")
raise
logger.info(f"Picture source store initialized with {len(self._streams)} streams")
def _save(self) -> None:
"""Save all streams to file."""
try:
data = {
"version": "1.0.0",
"picture_sources": {
stream_id: stream.to_dict()
for stream_id, stream in self._streams.items()
},
}
atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save picture sources to {self.file_path}: {e}")
raise
def _detect_cycle(self, source_stream_id: str, exclude_stream_id: Optional[str] = None) -> bool: def _detect_cycle(self, source_stream_id: str, exclude_stream_id: Optional[str] = None) -> bool:
"""Detect if following the source chain from source_stream_id would create a cycle. """Detect if following the source chain from source_stream_id would create a cycle.
@@ -100,7 +59,7 @@ class PictureSourceStore:
return True return True
visited.add(current_id) visited.add(current_id)
current_stream = self._streams.get(current_id) current_stream = self._items.get(current_id)
if not current_stream: if not current_stream:
break break
if not isinstance(current_stream, ProcessedPictureSource): if not isinstance(current_stream, ProcessedPictureSource):
@@ -109,19 +68,7 @@ class PictureSourceStore:
return False return False
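The `_detect_cycle` walk above follows `source_stream_id` links through processed streams, using a visited set and an `exclude_stream_id` for the stream currently being edited. A self-contained sketch of the same idea, with a plain dict standing in for the store (the `parents` mapping and function name are illustrative, not from the codebase):

```python
from typing import Dict, Optional, Set

def would_create_cycle(
    parents: Dict[str, Optional[str]],  # stream_id -> its source_stream_id (None if raw)
    source_stream_id: str,
    exclude_stream_id: Optional[str] = None,
) -> bool:
    """Return True if following the chain from source_stream_id revisits a node.

    exclude_stream_id models the stream being edited: if the walk reaches it,
    the proposed source would point the chain back at itself.
    """
    visited: Set[str] = set()
    current: Optional[str] = source_stream_id
    while current is not None:
        if current == exclude_stream_id or current in visited:
            return True
        visited.add(current)
        current = parents.get(current)  # raw streams / missing ids end the walk
    return False
```

The walk is O(chain length) per check, which is fine here because chains are short and edits are rare compared to frame processing.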
def get_all_streams(self) -> List[PictureSource]: # ── CRUD ──────────────────────────────────────────────────────────
"""Get all picture sources."""
return list(self._streams.values())
def get_stream(self, stream_id: str) -> PictureSource:
"""Get stream by ID.
Raises:
ValueError: If stream not found
"""
if stream_id not in self._streams:
raise ValueError(f"Picture source not found: {stream_id}")
return self._streams[stream_id]
def create_stream( def create_stream(
self, self,
@@ -134,6 +81,7 @@ class PictureSourceStore:
postprocessing_template_id: Optional[str] = None, postprocessing_template_id: Optional[str] = None,
image_source: Optional[str] = None, image_source: Optional[str] = None,
description: Optional[str] = None, description: Optional[str] = None,
tags: Optional[List[str]] = None,
) -> PictureSource: ) -> PictureSource:
"""Create a new picture source. """Create a new picture source.
@@ -167,7 +115,7 @@ class PictureSourceStore:
if not postprocessing_template_id: if not postprocessing_template_id:
raise ValueError("Processed streams require postprocessing_template_id") raise ValueError("Processed streams require postprocessing_template_id")
# Validate source stream exists # Validate source stream exists
if source_stream_id not in self._streams: if source_stream_id not in self._items:
raise ValueError(f"Source stream not found: {source_stream_id}") raise ValueError(f"Source stream not found: {source_stream_id}")
# Check for cycles # Check for cycles
if self._detect_cycle(source_stream_id): if self._detect_cycle(source_stream_id):
@@ -177,16 +125,15 @@ class PictureSourceStore:
raise ValueError("Static image streams require image_source") raise ValueError("Static image streams require image_source")
# Check for duplicate name # Check for duplicate name
for stream in self._streams.values(): self._check_name_unique(name)
if stream.name == name:
raise ValueError(f"Picture source with name '{name}' already exists")
stream_id = f"ps_{uuid.uuid4().hex[:8]}" stream_id = f"ps_{uuid.uuid4().hex[:8]}"
now = datetime.utcnow() now = datetime.now(timezone.utc)
common = dict( common = dict(
id=stream_id, name=name, stream_type=stream_type, id=stream_id, name=name, stream_type=stream_type,
created_at=now, updated_at=now, description=description, created_at=now, updated_at=now, description=description,
tags=tags or [],
) )
stream: PictureSource stream: PictureSource
@@ -209,7 +156,7 @@ class PictureSourceStore:
image_source=image_source, # type: ignore[arg-type] image_source=image_source, # type: ignore[arg-type]
) )
self._streams[stream_id] = stream self._items[stream_id] = stream
self._save() self._save()
logger.info(f"Created picture source: {name} ({stream_id}, type={stream_type})") logger.info(f"Created picture source: {name} ({stream_id}, type={stream_type})")
@@ -226,28 +173,29 @@ class PictureSourceStore:
postprocessing_template_id: Optional[str] = None, postprocessing_template_id: Optional[str] = None,
image_source: Optional[str] = None, image_source: Optional[str] = None,
description: Optional[str] = None, description: Optional[str] = None,
tags: Optional[List[str]] = None,
) -> PictureSource: ) -> PictureSource:
"""Update an existing picture source. """Update an existing picture source.
Raises: Raises:
ValueError: If stream not found, validation fails, or cycle detected ValueError: If stream not found, validation fails, or cycle detected
""" """
if stream_id not in self._streams: stream = self.get(stream_id)
raise ValueError(f"Picture source not found: {stream_id}")
stream = self._streams[stream_id]
# If changing source_stream_id on a processed stream, check for cycles # If changing source_stream_id on a processed stream, check for cycles
if source_stream_id is not None and isinstance(stream, ProcessedPictureSource): if source_stream_id is not None and isinstance(stream, ProcessedPictureSource):
if source_stream_id not in self._streams: if source_stream_id not in self._items:
raise ValueError(f"Source stream not found: {source_stream_id}") raise ValueError(f"Source stream not found: {source_stream_id}")
if self._detect_cycle(source_stream_id, exclude_stream_id=stream_id): if self._detect_cycle(source_stream_id, exclude_stream_id=stream_id):
raise ValueError("Cycle detected in stream chain") raise ValueError("Cycle detected in stream chain")
if name is not None: if name is not None:
self._check_name_unique(name, exclude_id=stream_id)
stream.name = name stream.name = name
if description is not None: if description is not None:
stream.description = description stream.description = description
if tags is not None:
stream.tags = tags
if isinstance(stream, ScreenCapturePictureSource): if isinstance(stream, ScreenCapturePictureSource):
if display_index is not None: if display_index is not None:
@@ -265,7 +213,7 @@ class PictureSourceStore:
if image_source is not None: if image_source is not None:
stream.image_source = image_source stream.image_source = image_source
stream.updated_at = datetime.utcnow() stream.updated_at = datetime.now(timezone.utc)
self._save() self._save()
@@ -278,22 +226,29 @@ class PictureSourceStore:
Raises: Raises:
ValueError: If stream not found or is referenced by another stream ValueError: If stream not found or is referenced by another stream
""" """
if stream_id not in self._streams: if stream_id not in self._items:
raise ValueError(f"Picture source not found: {stream_id}") raise ValueError(f"Picture source not found: {stream_id}")
# Check if any other stream references this one as source # Check if any other stream references this one as source
for other_stream in self._streams.values(): for other_stream in self._items.values():
if isinstance(other_stream, ProcessedPictureSource) and other_stream.source_stream_id == stream_id: if isinstance(other_stream, ProcessedPictureSource) and other_stream.source_stream_id == stream_id:
raise ValueError( raise ValueError(
f"Cannot delete stream '{self._streams[stream_id].name}': " f"Cannot delete stream '{self._items[stream_id].name}': "
f"it is referenced by stream '{other_stream.name}'" f"it is referenced by stream '{other_stream.name}'"
) )
del self._streams[stream_id] del self._items[stream_id]
self._save() self._save()
logger.info(f"Deleted picture source: {stream_id}") logger.info(f"Deleted picture source: {stream_id}")
# Also expose as delete_source for consistency
def delete_source(self, source_id: str) -> None:
"""Alias for delete_stream with reference checking."""
self.delete_stream(source_id)
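`delete_stream` above refuses to remove a source that another processed stream still references, via a linear scan over the store. A self-contained sketch of that referential-integrity check (the `Stream` stub and `delete_checked` name are illustrative; persistence and logging are omitted):

```python
from typing import Dict, Optional

class Stream:
    def __init__(self, sid: str, name: str, source_stream_id: Optional[str] = None):
        self.id, self.name, self.source_stream_id = sid, name, source_stream_id

def delete_checked(items: Dict[str, Stream], stream_id: str) -> None:
    if stream_id not in items:
        raise ValueError(f"Picture source not found: {stream_id}")
    # Refuse if any other stream references this one as its source.
    for other in items.values():
        if other.source_stream_id == stream_id:
            raise ValueError(
                f"Cannot delete stream '{items[stream_id].name}': "
                f"it is referenced by stream '{other.name}'"
            )
    del items[stream_id]
```

Callers therefore have to delete downstream (processed) streams before their upstream sources, which matches the error message the diff keeps.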
# ── Query helpers ─────────────────────────────────────────────────
def get_targets_referencing(self, stream_id: str, target_store) -> List[str]: def get_targets_referencing(self, stream_id: str, target_store) -> List[str]:
"""Return names of targets that reference this stream.""" """Return names of targets that reference this stream."""
return target_store.get_targets_referencing_source(stream_id) return target_store.get_targets_referencing_source(stream_id)
@@ -324,7 +279,7 @@ class PictureSourceStore:
raise ValueError(f"Cycle detected in stream chain at {current_id}") raise ValueError(f"Cycle detected in stream chain at {current_id}")
visited.add(current_id) visited.add(current_id)
stream = self.get_stream(current_id) stream = self.get(current_id)
if not isinstance(stream, ProcessedPictureSource): if not isinstance(stream, ProcessedPictureSource):
return { return {


@@ -1,7 +1,7 @@
"""Postprocessing template data model.""" """Postprocessing template data model."""
from dataclasses import dataclass, field from dataclasses import dataclass, field
from datetime import datetime from datetime import datetime, timezone
from typing import List, Optional from typing import List, Optional
from wled_controller.core.filters.filter_instance import FilterInstance from wled_controller.core.filters.filter_instance import FilterInstance
@@ -17,6 +17,7 @@ class PostprocessingTemplate:
created_at: datetime created_at: datetime
updated_at: datetime updated_at: datetime
description: Optional[str] = None description: Optional[str] = None
tags: List[str] = field(default_factory=list)
def to_dict(self) -> dict: def to_dict(self) -> dict:
"""Convert template to dictionary.""" """Convert template to dictionary."""
@@ -27,6 +28,7 @@ class PostprocessingTemplate:
"created_at": self.created_at.isoformat(), "created_at": self.created_at.isoformat(),
"updated_at": self.updated_at.isoformat(), "updated_at": self.updated_at.isoformat(),
"description": self.description, "description": self.description,
"tags": self.tags,
} }
@classmethod @classmethod
@@ -40,9 +42,10 @@ class PostprocessingTemplate:
filters=filters, filters=filters,
created_at=datetime.fromisoformat(data["created_at"]) created_at=datetime.fromisoformat(data["created_at"])
if isinstance(data.get("created_at"), str) if isinstance(data.get("created_at"), str)
else data.get("created_at", datetime.utcnow()), else data.get("created_at", datetime.now(timezone.utc)),
updated_at=datetime.fromisoformat(data["updated_at"]) updated_at=datetime.fromisoformat(data["updated_at"])
if isinstance(data.get("updated_at"), str) if isinstance(data.get("updated_at"), str)
else data.get("updated_at", datetime.utcnow()), else data.get("updated_at", datetime.now(timezone.utc)),
description=data.get("description"), description=data.get("description"),
tags=data.get("tags", []),
) )
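`from_dict` above tolerates three shapes for each timestamp: an ISO-8601 string, an already-parsed `datetime`, or a missing value that falls back to timezone-aware "now". The diff inlines this logic; factored into a helper for illustration (the helper name is an assumption):

```python
from datetime import datetime, timezone

def parse_timestamp(raw) -> datetime:
    """Accept an ISO-8601 string, a datetime, or None/anything else."""
    if isinstance(raw, str):
        return datetime.fromisoformat(raw)
    if isinstance(raw, datetime):
        return raw
    return datetime.now(timezone.utc)
```

Unlike the deprecated `datetime.utcnow()`, `datetime.now(timezone.utc)` yields an aware object, so `isoformat()` round-trips with its `+00:00` offset instead of producing a naive timestamp.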


@@ -1,44 +1,45 @@
"""Postprocessing template storage using JSON files.""" """Postprocessing template storage using JSON files."""
import json
import uuid import uuid
from datetime import datetime from datetime import datetime, timezone
from pathlib import Path from typing import List, Optional
from typing import Dict, List, Optional
from wled_controller.core.filters.filter_instance import FilterInstance from wled_controller.core.filters.filter_instance import FilterInstance
from wled_controller.core.filters.registry import FilterRegistry from wled_controller.core.filters.registry import FilterRegistry
from wled_controller.storage.base_store import BaseJsonStore
from wled_controller.storage.picture_source import ProcessedPictureSource from wled_controller.storage.picture_source import ProcessedPictureSource
from wled_controller.storage.postprocessing_template import PostprocessingTemplate from wled_controller.storage.postprocessing_template import PostprocessingTemplate
from wled_controller.utils import atomic_write_json, get_logger from wled_controller.utils import get_logger
logger = get_logger(__name__) logger = get_logger(__name__)
class PostprocessingTemplateStore: class PostprocessingTemplateStore(BaseJsonStore[PostprocessingTemplate]):
"""Storage for postprocessing templates. """Storage for postprocessing templates.
All templates are persisted to the JSON file. All templates are persisted to the JSON file.
On startup, if no templates exist, a default one is auto-created. On startup, if no templates exist, a default one is auto-created.
""" """
def __init__(self, file_path: str): _json_key = "postprocessing_templates"
"""Initialize postprocessing template store. _entity_name = "Postprocessing template"
_version = "2.0.0"
Args: def __init__(self, file_path: str):
file_path: Path to templates JSON file super().__init__(file_path, PostprocessingTemplate.from_dict)
"""
self.file_path = Path(file_path)
self._templates: Dict[str, PostprocessingTemplate] = {}
self._load()
self._ensure_initial_template() self._ensure_initial_template()
# Backward-compatible aliases
get_all_templates = BaseJsonStore.get_all
get_template = BaseJsonStore.get
delete_template = BaseJsonStore.delete
def _ensure_initial_template(self) -> None: def _ensure_initial_template(self) -> None:
"""Auto-create a default postprocessing template if none exist.""" """Auto-create a default postprocessing template if none exist."""
if self._templates: if self._items:
return return
now = datetime.utcnow() now = datetime.now(timezone.utc)
template_id = f"pp_{uuid.uuid4().hex[:8]}" template_id = f"pp_{uuid.uuid4().hex[:8]}"
template = PostprocessingTemplate( template = PostprocessingTemplate(
@@ -54,89 +55,18 @@ class PostprocessingTemplateStore:
description="Default postprocessing template", description="Default postprocessing template",
) )
self._templates[template_id] = template self._items[template_id] = template
self._save() self._save()
logger.info(f"Auto-created initial postprocessing template: {template.name} ({template_id})") logger.info(f"Auto-created initial postprocessing template: {template.name} ({template_id})")
def _load(self) -> None:
"""Load templates from file."""
if not self.file_path.exists():
return
try:
with open(self.file_path, "r", encoding="utf-8") as f:
data = json.load(f)
templates_data = data.get("postprocessing_templates", {})
loaded = 0
for template_id, template_dict in templates_data.items():
try:
template = PostprocessingTemplate.from_dict(template_dict)
self._templates[template_id] = template
loaded += 1
except Exception as e:
logger.error(
f"Failed to load postprocessing template {template_id}: {e}",
exc_info=True,
)
if loaded > 0:
logger.info(f"Loaded {loaded} postprocessing templates from storage")
except Exception as e:
logger.error(f"Failed to load postprocessing templates from {self.file_path}: {e}")
raise
logger.info(f"Postprocessing template store initialized with {len(self._templates)} templates")
def _save(self) -> None:
"""Save all templates to file."""
try:
data = {
"version": "2.0.0",
"postprocessing_templates": {
template_id: template.to_dict()
for template_id, template in self._templates.items()
},
}
atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save postprocessing templates to {self.file_path}: {e}")
raise
def get_all_templates(self) -> List[PostprocessingTemplate]:
"""Get all postprocessing templates."""
return list(self._templates.values())
def get_template(self, template_id: str) -> PostprocessingTemplate:
"""Get template by ID.
Raises:
ValueError: If template not found
"""
if template_id not in self._templates:
raise ValueError(f"Postprocessing template not found: {template_id}")
return self._templates[template_id]
def create_template( def create_template(
self, self,
name: str, name: str,
filters: Optional[List[FilterInstance]] = None, filters: Optional[List[FilterInstance]] = None,
description: Optional[str] = None, description: Optional[str] = None,
tags: Optional[List[str]] = None,
) -> PostprocessingTemplate: ) -> PostprocessingTemplate:
"""Create a new postprocessing template. self._check_name_unique(name)
Args:
name: Template name (must be unique)
filters: Ordered list of filter instances
description: Optional description
Raises:
ValueError: If template with same name exists or invalid filter_id
"""
for template in self._templates.values():
if template.name == name:
raise ValueError(f"Postprocessing template with name '{name}' already exists")
if filters is None: if filters is None:
filters = [] filters = []
@@ -147,7 +77,7 @@ class PostprocessingTemplateStore:
raise ValueError(f"Unknown filter type: '{fi.filter_id}'") raise ValueError(f"Unknown filter type: '{fi.filter_id}'")
template_id = f"pp_{uuid.uuid4().hex[:8]}" template_id = f"pp_{uuid.uuid4().hex[:8]}"
now = datetime.utcnow() now = datetime.now(timezone.utc)
template = PostprocessingTemplate( template = PostprocessingTemplate(
id=template_id, id=template_id,
@@ -156,9 +86,10 @@ class PostprocessingTemplateStore:
created_at=now, created_at=now,
updated_at=now, updated_at=now,
description=description, description=description,
tags=tags or [],
) )
self._templates[template_id] = template self._items[template_id] = template
self._save() self._save()
logger.info(f"Created postprocessing template: {name} ({template_id})") logger.info(f"Created postprocessing template: {name} ({template_id})")
@@ -170,21 +101,12 @@ class PostprocessingTemplateStore:
name: Optional[str] = None, name: Optional[str] = None,
filters: Optional[List[FilterInstance]] = None, filters: Optional[List[FilterInstance]] = None,
description: Optional[str] = None, description: Optional[str] = None,
tags: Optional[List[str]] = None,
) -> PostprocessingTemplate: ) -> PostprocessingTemplate:
"""Update an existing postprocessing template. template = self.get(template_id)
Raises:
ValueError: If template not found or invalid filter_id
"""
if template_id not in self._templates:
raise ValueError(f"Postprocessing template not found: {template_id}")
template = self._templates[template_id]
if name is not None: if name is not None:
for tid, t in self._templates.items(): self._check_name_unique(name, exclude_id=template_id)
if tid != template_id and t.name == name:
raise ValueError(f"Postprocessing template with name '{name}' already exists")
template.name = name template.name = name
if filters is not None: if filters is not None:
# Validate filter IDs # Validate filter IDs
@@ -194,28 +116,15 @@ class PostprocessingTemplateStore:
template.filters = filters template.filters = filters
if description is not None: if description is not None:
template.description = description template.description = description
if tags is not None:
template.tags = tags
template.updated_at = datetime.utcnow() template.updated_at = datetime.now(timezone.utc)
self._save() self._save()
logger.info(f"Updated postprocessing template: {template_id}") logger.info(f"Updated postprocessing template: {template_id}")
return template return template
def delete_template(self, template_id: str) -> None:
"""Delete a postprocessing template.
Raises:
ValueError: If template not found or is referenced by a picture source
"""
if template_id not in self._templates:
raise ValueError(f"Postprocessing template not found: {template_id}")
del self._templates[template_id]
self._save()
logger.info(f"Deleted postprocessing template: {template_id}")
def resolve_filter_instances(self, filter_instances, _visited=None): def resolve_filter_instances(self, filter_instances, _visited=None):
"""Recursively resolve filter instances, expanding filter_template references. """Recursively resolve filter instances, expanding filter_template references.


@@ -1,7 +1,7 @@
"""Scene preset data models — snapshot of target state.""" """Scene preset data models — snapshot of target state."""
from dataclasses import dataclass, field from dataclasses import dataclass, field
from datetime import datetime from datetime import datetime, timezone
from typing import List from typing import List
@@ -42,16 +42,18 @@ class ScenePreset:
id: str id: str
name: str name: str
description: str = "" description: str = ""
tags: List[str] = field(default_factory=list)
targets: List[TargetSnapshot] = field(default_factory=list) targets: List[TargetSnapshot] = field(default_factory=list)
order: int = 0 order: int = 0
created_at: datetime = field(default_factory=datetime.utcnow) created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = field(default_factory=datetime.utcnow) updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
def to_dict(self) -> dict: def to_dict(self) -> dict:
return { return {
"id": self.id, "id": self.id,
"name": self.name, "name": self.name,
"description": self.description, "description": self.description,
"tags": self.tags,
"targets": [t.to_dict() for t in self.targets], "targets": [t.to_dict() for t in self.targets],
"order": self.order, "order": self.order,
"created_at": self.created_at.isoformat(), "created_at": self.created_at.isoformat(),
@@ -64,8 +66,9 @@ class ScenePreset:
id=data["id"], id=data["id"],
name=data["name"], name=data["name"],
description=data.get("description", ""), description=data.get("description", ""),
tags=data.get("tags", []),
targets=[TargetSnapshot.from_dict(t) for t in data.get("targets", [])], targets=[TargetSnapshot.from_dict(t) for t in data.get("targets", [])],
order=data.get("order", 0), order=data.get("order", 0),
created_at=datetime.fromisoformat(data.get("created_at", datetime.utcnow().isoformat())), created_at=datetime.fromisoformat(data.get("created_at", datetime.now(timezone.utc).isoformat())),
updated_at=datetime.fromisoformat(data.get("updated_at", datetime.utcnow().isoformat())), updated_at=datetime.fromisoformat(data.get("updated_at", datetime.now(timezone.utc).isoformat())),
) )
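Because the model reads `data.get("tags", [])`, presets saved before this change deserialize with an empty tag list instead of failing. A minimal sketch of that backward-compat round trip, trimmed to the fields involved (the real `ScenePreset` carries targets, order, and timestamps as well):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PresetStub:
    id: str
    name: str
    tags: List[str] = field(default_factory=list)

    def to_dict(self) -> dict:
        return {"id": self.id, "name": self.name, "tags": self.tags}

    @classmethod
    def from_dict(cls, data: dict) -> "PresetStub":
        # Legacy records have no "tags" key; default to an empty list.
        return cls(id=data["id"], name=data["name"], tags=data.get("tags", []))
```

`field(default_factory=list)` matters here: a bare `tags: List[str] = []` default would be shared across instances, so appending a tag to one preset would leak into every other.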


@@ -1,79 +1,40 @@
"""Scene preset storage using JSON files.""" """Scene preset storage using JSON files."""
import json from datetime import datetime, timezone
import uuid from typing import List, Optional
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional
from wled_controller.storage.base_store import BaseJsonStore
from wled_controller.storage.scene_preset import ScenePreset, TargetSnapshot from wled_controller.storage.scene_preset import ScenePreset, TargetSnapshot
from wled_controller.utils import atomic_write_json, get_logger from wled_controller.utils import get_logger
logger = get_logger(__name__) logger = get_logger(__name__)
class ScenePresetStore: class ScenePresetStore(BaseJsonStore[ScenePreset]):
"""Persistent storage for scene presets.""" """Persistent storage for scene presets."""
_json_key = "scene_presets"
_entity_name = "Scene preset"
def __init__(self, file_path: str): def __init__(self, file_path: str):
self.file_path = Path(file_path) super().__init__(file_path, ScenePreset.from_dict)
self._presets: Dict[str, ScenePreset] = {}
self._load()
def _load(self) -> None: # Backward-compatible aliases
if not self.file_path.exists(): get_preset = BaseJsonStore.get
return delete_preset = BaseJsonStore.delete
try:
with open(self.file_path, "r", encoding="utf-8") as f:
data = json.load(f)
presets_data = data.get("scene_presets", {})
loaded = 0
for preset_id, preset_dict in presets_data.items():
try:
preset = ScenePreset.from_dict(preset_dict)
self._presets[preset_id] = preset
loaded += 1
except Exception as e:
logger.error(f"Failed to load scene preset {preset_id}: {e}", exc_info=True)
if loaded > 0:
logger.info(f"Loaded {loaded} scene presets from storage")
except Exception as e:
logger.error(f"Failed to load scene presets from {self.file_path}: {e}")
raise
logger.info(f"Scene preset store initialized with {len(self._presets)} presets")
def _save(self) -> None:
try:
data = {
"version": "1.0.0",
"scene_presets": {
pid: p.to_dict() for pid, p in self._presets.items()
},
}
atomic_write_json(self.file_path, data)
except Exception as e:
logger.error(f"Failed to save scene presets to {self.file_path}: {e}")
raise
def get_all_presets(self) -> List[ScenePreset]: def get_all_presets(self) -> List[ScenePreset]:
return sorted(self._presets.values(), key=lambda p: p.order) """Get all presets sorted by order field."""
return sorted(self._items.values(), key=lambda p: p.order)
def get_preset(self, preset_id: str) -> ScenePreset: # Override get_all to also sort by order for consistency
if preset_id not in self._presets: def get_all(self) -> List[ScenePreset]:
raise ValueError(f"Scene preset not found: {preset_id}") return self.get_all_presets()
return self._presets[preset_id]
def create_preset(self, preset: ScenePreset) -> ScenePreset: def create_preset(self, preset: ScenePreset) -> ScenePreset:
for p in self._presets.values(): self._check_name_unique(preset.name)
if p.name == preset.name:
raise ValueError(f"Scene preset with name '{preset.name}' already exists")
self._presets[preset.id] = preset self._items[preset.id] = preset
self._save() self._save()
logger.info(f"Created scene preset: {preset.name} ({preset.id})") logger.info(f"Created scene preset: {preset.name} ({preset.id})")
return preset return preset
@@ -85,16 +46,12 @@ class ScenePresetStore:
         description: Optional[str] = None,
         order: Optional[int] = None,
         targets: Optional[List[TargetSnapshot]] = None,
+        tags: Optional[List[str]] = None,
     ) -> ScenePreset:
-        if preset_id not in self._presets:
-            raise ValueError(f"Scene preset not found: {preset_id}")
-        preset = self._presets[preset_id]
+        preset = self.get(preset_id)
         if name is not None:
-            for pid, p in self._presets.items():
-                if pid != preset_id and p.name == name:
-                    raise ValueError(f"Scene preset with name '{name}' already exists")
+            self._check_name_unique(name, exclude_id=preset_id)
             preset.name = name
         if description is not None:
             preset.description = description
@@ -102,31 +59,20 @@ class ScenePresetStore:
             preset.order = order
         if targets is not None:
             preset.targets = targets
-        preset.updated_at = datetime.utcnow()
+        if tags is not None:
+            preset.tags = tags
+        preset.updated_at = datetime.now(timezone.utc)
         self._save()
         logger.info(f"Updated scene preset: {preset_id}")
         return preset
 
     def recapture_preset(self, preset_id: str, preset: ScenePreset) -> ScenePreset:
         """Replace snapshot data of an existing preset (recapture current state)."""
-        if preset_id not in self._presets:
-            raise ValueError(f"Scene preset not found: {preset_id}")
-        existing = self._presets[preset_id]
+        existing = self.get(preset_id)
         existing.targets = preset.targets
-        existing.updated_at = datetime.utcnow()
+        existing.updated_at = datetime.now(timezone.utc)
         self._save()
         logger.info(f"Recaptured scene preset: {preset_id}")
         return existing
-
-    def delete_preset(self, preset_id: str) -> None:
-        if preset_id not in self._presets:
-            raise ValueError(f"Scene preset not found: {preset_id}")
-        del self._presets[preset_id]
-        self._save()
-        logger.info(f"Deleted scene preset: {preset_id}")
-
-    def count(self) -> int:
-        return len(self._presets)
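The store's timestamps move from `datetime.utcnow()` to `datetime.now(timezone.utc)`. The difference is worth making explicit: `utcnow()` returns a *naive* datetime (no `tzinfo`, deprecated since Python 3.12), while `now(timezone.utc)` returns an aware one whose `isoformat()` output carries an explicit offset, so serialized records are unambiguous on read-back:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # naive: tzinfo is None (deprecated in 3.12+)
aware = datetime.now(timezone.utc)  # aware: carries UTC tzinfo

assert naive.tzinfo is None
assert aware.tzinfo is timezone.utc
# The aware value serializes with its offset, e.g. "...+00:00":
assert aware.isoformat().endswith("+00:00")
```

Mixing the two styles is also hazardous: subtracting a naive from an aware datetime raises `TypeError`, so a store should standardize on one, as this diff does.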

View File

@@ -5,9 +5,9 @@
 color strip sources. Multiple CSS sources referencing the same clock
 animate in sync and share speed / pause / resume / reset controls.
 """
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from datetime import datetime
-from typing import Optional
+from typing import List, Optional
 
 
 @dataclass
@@ -20,6 +20,7 @@ class SyncClock:
     created_at: datetime
     updated_at: datetime
     description: Optional[str] = None
+    tags: List[str] = field(default_factory=list)
 
     def to_dict(self) -> dict:
         return {
@@ -27,6 +28,7 @@ class SyncClock:
             "name": self.name,
             "speed": self.speed,
             "description": self.description,
+            "tags": self.tags,
             "created_at": self.created_at.isoformat(),
             "updated_at": self.updated_at.isoformat(),
         }
@@ -38,6 +40,7 @@ class SyncClock:
             name=data["name"],
             speed=float(data.get("speed", 1.0)),
             description=data.get("description"),
+            tags=data.get("tags", []),
             created_at=datetime.fromisoformat(data["created_at"]),
             updated_at=datetime.fromisoformat(data["updated_at"]),
         )
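The `data.get("tags", [])` default in `from_dict` is what keeps records written before this migration loadable. A trimmed, self-contained replica of the model (only the fields needed to show the tag handling; the real class has more context) demonstrates the round trip:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class SyncClock:
    # Trimmed replica of the model in the diff above.
    id: str
    name: str
    created_at: datetime
    updated_at: datetime
    speed: float = 1.0
    description: Optional[str] = None
    tags: List[str] = field(default_factory=list)

    @classmethod
    def from_dict(cls, data: dict) -> "SyncClock":
        return cls(
            id=data["id"],
            name=data["name"],
            speed=float(data.get("speed", 1.0)),
            description=data.get("description"),
            tags=data.get("tags", []),  # pre-migration records have no "tags" key
            created_at=datetime.fromisoformat(data["created_at"]),
            updated_at=datetime.fromisoformat(data["updated_at"]),
        )


now = datetime.now(timezone.utc).isoformat()
# A record saved before the tags field existed still deserializes cleanly:
old = SyncClock.from_dict({"id": "c1", "name": "main", "created_at": now, "updated_at": now})
assert old.tags == []
```

Note `field(default_factory=list)` rather than `tags: List[str] = []`: a mutable class-level default would be shared across every instance, which `dataclasses` rejects at class-definition time.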

Some files were not shown because too many files have changed in this diff.