
LED Grab

Ambient lighting system that captures screen content and drives LED strips in real time. Supports WLED, Adalight, AmbileD, and DDP devices with audio-reactive effects, pattern generation, and automated profile switching.

What It Does

The server captures pixels from a screen (or Android device via ADB), extracts border colors, applies post-processing filters, and streams the result to LED strips at up to 60 fps. A built-in web dashboard provides device management, calibration, live LED preview, and real-time metrics — no external UI required.
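The border-color extraction step can be sketched as follows. This is an illustrative, simplified version of the idea (not the project's actual code): split one border strip of the frame into cells, one per LED segment, and average the pixels in each cell.

```python
# Illustrative sketch: average the pixels in each border cell of a frame
# to get one color per LED segment. Pure-Python stand-in for the real
# capture/processing pipeline; function and parameter names are hypothetical.

def border_colors(frame, cells, border=2):
    """frame: 2D list of (r, g, b) rows; returns `cells` averaged colors
    sampled from the top border strip of height `border` pixels."""
    width = len(frame[0])
    step = width / cells
    colors = []
    for i in range(cells):
        x0, x1 = int(i * step), int((i + 1) * step)
        pixels = [px for row in frame[:border] for px in row[x0:x1]]
        avg = tuple(sum(px[ch] for px in pixels) // len(pixels) for ch in range(3))
        colors.append(avg)
    return colors

# A 4x4 "frame": left half red, right half blue.
frame = [[(255, 0, 0)] * 2 + [(0, 0, 255)] * 2 for _ in range(4)]
border_colors(frame, cells=2)   # [(255, 0, 0), (0, 0, 255)]
```

A real implementation would work on all four edges and use vectorized array operations for 60 fps throughput, but the per-cell averaging is the core of the technique.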

A Home Assistant integration exposes devices as entities for smart home automation.

Features

Screen Capture

  • Multi-monitor support with per-target display selection
  • 6 capture engine backends — MSS (cross-platform), DXCam, BetterCam, Windows Graphics Capture (Windows), Scrcpy (Android via ADB), Camera/Webcam (OpenCV)
  • Configurable capture regions, FPS, and border width
  • Capture templates for reusable configurations

LED Device Support

  • WLED (HTTP/UDP) with mDNS auto-discovery
  • Adalight (serial) — Arduino-compatible LED controllers
  • AmbileD (serial)
  • DDP (Distributed Display Protocol, UDP)
  • OpenRGB — PC peripherals (keyboard, mouse, RAM, fans, LED strips)
  • Serial port auto-detection and baud rate configuration

Color Processing

  • Post-processing filter pipeline: brightness, gamma, saturation, color correction, auto-crop, frame interpolation, pixelation, flip
  • Reusable post-processing templates
  • Color strip sources: audio-reactive, pattern generator, composite layering, audio-to-color mapping
  • Pattern templates with customizable effects
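A post-processing pipeline like the one described above can be modeled as a chain of functions applied in order. This is a hypothetical sketch, assuming filters take and return a list of (r, g, b) tuples; the names are illustrative, not the project's real filter classes.

```python
# Hypothetical filter-pipeline sketch: each filter maps a list of
# (r, g, b) tuples to a new list, and filters compose in order.

def brightness(factor):
    """Scale every channel by `factor`, clamped to 255."""
    return lambda leds: [tuple(min(255, int(c * factor)) for c in px) for px in leds]

def gamma(g):
    """Apply a gamma curve to every channel."""
    return lambda leds: [tuple(int(255 * (c / 255) ** g) for c in px) for px in leds]

def run_pipeline(leds, filters):
    for f in filters:          # filters run sequentially, like a chain
        leds = f(leds)
    return leds

leds = [(128, 128, 128)]
out = run_pipeline(leds, [brightness(0.5), gamma(2.2)])   # out == [(12, 12, 12)]
```

Because each filter has the same signature, reusable templates reduce to stored lists of filter configurations.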

Audio Integration

  • Multichannel audio capture from any system device (input or loopback)
  • WASAPI engine on Windows, Sounddevice (PortAudio) engine on Linux/macOS
  • Per-channel mono extraction
  • Audio-reactive color strip sources driven by frequency analysis

Automation

  • Profile engine with condition-based switching (time of day, active window, etc.)
  • Dynamic brightness value sources (schedule-based, scene-aware)
  • Key Colors (KC) targets with live WebSocket color streaming
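Condition-based profile switching can be sketched as evaluating a list of predicates against the current context. The shapes below are assumptions for illustration, not the project's real profile schema.

```python
# Illustrative profile-condition check (hypothetical schema): a profile
# activates when all of its conditions hold for the current context.
from datetime import time

def time_between(start, end, now):
    """True if `now` falls in [start, end), handling overnight ranges."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end   # e.g. 22:00 -> 06:00 wraps midnight

def profile_active(conditions, context):
    return all(cond(context) for cond in conditions)

evening = [lambda ctx: time_between(time(18, 0), time(23, 59), ctx["now"])]
profile_active(evening, {"now": time(21, 30)})   # True at 21:30
```

An active-window condition would follow the same pattern, with the foreground process name placed in the context dict.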

Dashboard

  • Web UI at http://localhost:8080 — no installation needed on the client side
  • Progressive Web App (PWA) — installable on phones and tablets with offline caching
  • Responsive mobile layout with bottom tab navigation
  • Device management with auto-discovery wizard
  • Visual calibration editor with overlay preview
  • Live LED strip preview via WebSocket
  • Real-time FPS, latency, and uptime charts
  • Localized in English, Russian, and Chinese

Home Assistant Integration

  • HACS-compatible custom component
  • Light, switch, sensor, and number entities per device
  • Real-time metrics via data coordinator
  • WebSocket-based live LED preview in HA

Requirements

  • Python 3.11+ (or Docker)
  • A supported LED device on the local network or connected via USB
  • Windows, Linux, or macOS — all core features work cross-platform

Platform Notes

| Feature            | Windows                     | Linux / macOS                      |
|--------------------|-----------------------------|------------------------------------|
| Screen capture     | DXCam, BetterCam, WGC, MSS  | MSS                                |
| Webcam capture     | OpenCV (DirectShow)         | OpenCV (V4L2)                      |
| Audio capture      | WASAPI, Sounddevice         | Sounddevice (PulseAudio/PipeWire)  |
| GPU monitoring     | NVIDIA (pynvml)             | NVIDIA (pynvml)                    |
| Android capture    | Scrcpy (ADB)                | Scrcpy (ADB)                       |
| Monitor names      | Friendly names (WMI)        | Generic ("Display 0")              |
| Profile conditions | Process/window detection    | Not yet implemented                |

Quick Start

git clone https://git.dolgolyov-family.by/alexei.dolgolyov/wled-screen-controller-mixed.git
cd wled-screen-controller-mixed/server
docker compose up -d

Manual

Requires Python 3.11+ and Node.js 18+.

git clone https://git.dolgolyov-family.by/alexei.dolgolyov/wled-screen-controller-mixed.git
cd wled-screen-controller-mixed/server

# Build the frontend bundle
npm ci && npm run build

# Create a virtual environment and install
python -m venv venv
source venv/bin/activate        # Linux/Mac
# venv\Scripts\activate         # Windows
pip install .

# Start the server
export PYTHONPATH=$(pwd)/src    # Linux/Mac
# set PYTHONPATH=%CD%\src       # Windows
uvicorn wled_controller.main:app --host 0.0.0.0 --port 8080

Open http://localhost:8080 to access the dashboard.

Important: The default API key is development-key-change-in-production. Change it before exposing the server outside localhost. See INSTALLATION.md for details.

See INSTALLATION.md for the full installation guide, including configuration, Docker manual builds, and Home Assistant setup.

Demo Mode

Demo mode runs the server with virtual devices, sample data, and isolated storage — useful for exploring the UI without real hardware.

Set the WLED_DEMO environment variable to true, 1, or yes:

# Docker
docker compose run -e WLED_DEMO=true server

# Python
WLED_DEMO=true uvicorn wled_controller.main:app --host 0.0.0.0 --port 8081

# Windows (installed app)
set WLED_DEMO=true
LedGrab.bat

Demo mode uses port 8081, config file config/demo_config.yaml, and stores data in data/demo/ (separate from production data). It can run alongside the main server.
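Interpreting the WLED_DEMO flag reduces to a case-insensitive membership check against the accepted truthy values listed above. A minimal sketch (the helper name is illustrative, not the project's actual function):

```python
# Sketch of truthy-flag parsing for WLED_DEMO: the accepted values
# ("true", "1", "yes") come from the text above; anything else is off.
import os

def demo_enabled(env=os.environ):
    return env.get("WLED_DEMO", "").strip().lower() in {"true", "1", "yes"}

demo_enabled({"WLED_DEMO": "true"})   # True
demo_enabled({})                      # False
```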

Architecture

wled-screen-controller/
├── server/                          # Python FastAPI backend
│   ├── src/wled_controller/
│   │   ├── main.py                  # Application entry point
│   │   ├── config.py                # YAML + env var configuration
│   │   ├── api/
│   │   │   ├── routes/              # REST + WebSocket endpoints
│   │   │   └── schemas/             # Pydantic request/response models
│   │   ├── core/
│   │   │   ├── capture/             # Screen capture, calibration, pixel processing
│   │   │   ├── capture_engines/     # MSS, DXCam, BetterCam, WGC, Scrcpy, Camera backends
│   │   │   ├── devices/             # WLED, Adalight, AmbileD, DDP, OpenRGB clients
│   │   │   ├── audio/               # Audio capture engines
│   │   │   ├── filters/             # Post-processing filter pipeline
│   │   │   ├── processing/          # Stream orchestration and target processors
│   │   │   └── profiles/            # Condition-based profile automation
│   │   ├── storage/                 # JSON-based persistence layer
│   │   ├── static/                  # Web dashboard (vanilla JS, CSS, HTML)
│   │   │   ├── js/core/             # API client, state, i18n, modals, events
│   │   │   ├── js/features/         # Feature modules (devices, streams, targets, etc.)
│   │   │   ├── css/                 # Stylesheets
│   │   │   └── locales/             # en.json, ru.json, zh.json
│   │   └── utils/                   # Logging, monitor detection
│   ├── config/                      # default_config.yaml
│   ├── tests/                       # pytest suite
│   ├── Dockerfile
│   └── docker-compose.yml
├── custom_components/               # Home Assistant integration (HACS)
│   └── wled_screen_controller/
├── docs/
│   ├── API.md                       # REST API reference
│   └── CALIBRATION.md               # LED calibration guide
├── INSTALLATION.md
└── LICENSE                          # MIT

Configuration

Edit server/config/default_config.yaml or use environment variables with the WLED_ prefix:

server:
  host: "0.0.0.0"
  port: 8080
  log_level: "INFO"

auth:
  api_keys:
    dev: "development-key-change-in-production"

storage:
  devices_file: "data/devices.json"
  templates_file: "data/capture_templates.json"

logging:
  format: "json"
  file: "logs/wled_controller.log"
  max_size_mb: 100

Environment variable override example: WLED_SERVER__PORT=9090.
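The double-underscore convention maps an environment variable onto a nested config key: WLED_SERVER__PORT reaches config["server"]["port"]. A minimal sketch of how such an override loader might work (the helper name is illustrative; a real loader would also coerce value types):

```python
# Sketch of the WLED_ double-underscore override convention:
# WLED_SERVER__PORT=9090 sets config["server"]["port"]. Values stay
# strings here; type coercion is left out for brevity.

def apply_env_overrides(config, env, prefix="WLED_"):
    for key, value in env.items():
        if not key.startswith(prefix):
            continue
        path = key[len(prefix):].lower().split("__")   # SERVER__PORT -> ["server", "port"]
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return config

cfg = apply_env_overrides({"server": {"port": 8080}}, {"WLED_SERVER__PORT": "9090"})
# cfg["server"]["port"] == "9090"
```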

API

The server exposes a REST API (with Swagger docs at /docs) covering:

  • Devices — CRUD, discovery, validation, state, metrics
  • Capture Templates — Screen capture configurations
  • Picture Sources — Screen capture stream definitions
  • Picture Targets — LED target management, start/stop processing
  • Post-Processing Templates — Filter pipeline configurations
  • Color Strip Sources — Audio, pattern, composite, mapped sources
  • Audio Sources — Multichannel and mono audio device configuration
  • Pattern Templates — Effect pattern definitions
  • Value Sources — Dynamic brightness/value providers
  • Key Colors Targets — KC targets with WebSocket live color stream
  • Profiles — Condition-based automation profiles

All endpoints require API key authentication via X-API-Key header or ?token= query parameter.

See docs/API.md for the full reference.

Calibration

The calibration system maps screen border pixels to physical LED positions. Configure layout direction, start position, and per-edge segments through the web dashboard or API.
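The core of such a mapping can be sketched as distributing LEDs evenly along an edge, with an optional reversal to match the layout direction. This is a simplified illustration, not the project's calibration model:

```python
# Illustrative edge mapping (hypothetical, simplified): place `count` LEDs
# evenly along an edge of `edge_pixels` pixels, sampling each cell's center;
# `reverse` flips the order to match the physical layout direction.

def edge_positions(edge_pixels, count, reverse=False):
    """Return the pixel offset along the edge sampled by each LED."""
    step = edge_pixels / count
    positions = [int((i + 0.5) * step) for i in range(count)]  # cell centers
    return positions[::-1] if reverse else positions

edge_positions(1920, 4)                # [240, 720, 1200, 1680]
edge_positions(1920, 4, reverse=True)  # [1680, 1200, 720, 240]
```

Per-edge segments and start position then reduce to concatenating such per-edge lists in the configured order.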

See docs/CALIBRATION.md for a step-by-step guide.

Home Assistant

Install via HACS (add as a custom repository) or manually copy custom_components/wled_screen_controller/ into your HA config directory. The integration creates light, switch, sensor, and number entities for each configured device.

See INSTALLATION.md for detailed setup instructions.

Development

cd server

# Install with dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Format and lint
black src/ tests/
ruff check src/ tests/

Optional extras:

pip install -e ".[perf]"     # High-performance capture engines (Windows)
pip install -e ".[camera]"   # Webcam capture via OpenCV

License

MIT — see LICENSE.

Acknowledgments

  • WLED — LED control firmware
  • FastAPI — Python web framework
  • MSS — Cross-platform screen capture