21 Commits

71b79cd919 Move quiet hours from hub config to per-call service params
Quiet hours are now specified per send_telegram_notification call via
quiet_hours_start/quiet_hours_end params instead of being a hub-wide
integration option. This allows different automations to use different
quiet hours windows (or none at all).

- Remove quiet_hours_start/end from config options UI and const.py
- Add quiet_hours_start/end as optional HH:MM params on the service
- Remove ignore_quiet_hours param (omit quiet hours params to send immediately)
- Queue stores quiet_hours_end per item; each unique end time gets its
  own async_track_time_change timer for replay
- On startup, items whose quiet hours have passed are sent immediately
- Add async_remove_indices() to NotificationQueue for selective removal
- Timers are cleaned up when no more items need them
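
A service call using the per-call window described above might look like this (the `quiet_hours_start`/`quiet_hours_end` parameter names come from this commit message; the surrounding call shape is assumed from the README examples):

```yaml
service: immich_album_watcher.send_telegram_notification
target:
  entity_id: sensor.album_name_asset_count
data:
  chat_id: "-1001234567890"
  caption: "Motion detected overnight"
  quiet_hours_start: "22:00"
  quiet_hours_end: "08:00"  # queued sends replay at this time
```

Omitting both parameters sends the notification immediately.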

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 12:04:20 +03:00
678e8a6e62 Add quiet hours, fix Telegram bugs, and improve cache performance
- Add quiet hours support to queue notifications during configured time windows
- Fix UnboundLocalError when single-item document chunk exceeds max_asset_data_size
- Fix document-only multi-item chunks being silently dropped (missing skip guard)
- Fix notification queue entity lookup by storing entity_id in queued params
- Fix quiet hours using OS timezone instead of HA-configured timezone (dt_util.now)
- Fix chat_action schema rejecting empty string from "Disabled" selector
- Fix stale thumbhash cache entries not being removed on mismatch
- Fix translation descriptions for send_large_photos_as_documents
- Add batch async_set_many() to TelegramFileCache to reduce disk writes
- Add max-entries eviction (2000) for thumbhash cache to prevent unbounded growth
- Eliminate redundant _is_asset_id/get_asset_thumbhash lookups in media group loop
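
The max-entries eviction mentioned above could be sketched roughly as follows (a minimal illustration of a 2000-entry FIFO cap; the function name and cache shape are assumptions, not the integration's actual code):

```python
from collections import OrderedDict

MAX_THUMBHASH_ENTRIES = 2000  # cap from the commit message


def evict_oldest(cache: OrderedDict, max_entries: int = MAX_THUMBHASH_ENTRIES) -> None:
    """Drop the oldest insertions until the cache fits under the cap."""
    while len(cache) > max_entries:
        cache.popitem(last=False)  # FIFO: remove the entry inserted first
```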

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 09:45:34 +03:00
dd7032b411 Replace TTL with thumbhash-based cache validation and add Telegram video size limits
- Asset cache now validates entries by comparing stored thumbhash with current
  Immich thumbhash instead of using TTL expiration. This makes cache invalidation
  precise (only when content actually changes) and eliminates unnecessary re-uploads.
  URL-based cache retains TTL for non-Immich URLs.
- Add TELEGRAM_MAX_VIDEO_SIZE (50 MB) check to skip oversized videos in both
  single-video and media-group paths, preventing entire groups from failing.
- Split media groups into sub-groups by cumulative upload size to ensure each
  sendMediaGroup request stays under Telegram's 50 MB upload limit.
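
The sub-group splitting described above amounts to a greedy chunking pass over cumulative sizes. A sketch (the function name and item shape are assumptions; per the commit, items over the limit are skipped before this step):

```python
TELEGRAM_MAX_UPLOAD = 50 * 1024 * 1024  # Telegram's 50 MB per-request upload limit


def split_by_upload_size(items: list[dict], max_bytes: int = TELEGRAM_MAX_UPLOAD) -> list[list[dict]]:
    """Greedily split media items into sub-groups whose cumulative
    size stays under max_bytes."""
    groups: list[list[dict]] = []
    current: list[dict] = []
    total = 0
    for item in items:
        size = item["size"]
        if current and total + size > max_bytes:
            # Starting this item would exceed the limit: close the group
            groups.append(current)
            current, total = [], 0
        current.append(item)
        total += size
    if current:
        groups.append(current)
    return groups
```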

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 12:28:33 +03:00
65ca81a3f3 Link to local server repository
2026-02-05 00:17:22 +03:00
3ba33a36cf Minor refactoring to use common const for telegram API url
2026-02-04 17:46:14 +03:00
6ca3cae5df Add document type and content_type support for send_telegram_notification
- Add type: document as default media type (instead of photo)
- Add optional content_type field for explicit MIME type specification
- Documents are sent separately (Telegram API limitation for media groups)
- Default content types: image/jpeg (photo), video/mp4 (video), auto-detect (document)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 01:35:57 +03:00
fde2d0ae31 Bump version to 2.7.1
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 02:51:22 +03:00
31663852f9 Fixed link to automation
2026-02-03 02:50:19 +03:00
5cee3ccc79 Add chat_action parameter to send_telegram_notification service
Shows typing/upload indicator while processing media. Supports:
typing, upload_photo, upload_video, upload_document actions.
Set to empty string to disable. Default: typing.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 02:48:25 +03:00
3b133dc4bb Exclude archived assets from processing status check
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 15:02:25 +03:00
a8ea9ab46a Rename on_this_day to memory_date with exclude-same-year behavior
Renamed the date filter parameter and changed default behavior to match
Google Photos memories - now excludes assets from the same year as the
reference date, returning only photos from previous years on that day.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:24:08 +03:00
e88fd0fa3a Add get_assets filtering: offset, on_this_day, city, state, country
- Add offset parameter for pagination support
- Add on_this_day parameter for memories filtering (match month and day)
- Add city, state, country parameters for geolocation filtering
- Update README with new parameters and examples

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 12:25:35 +03:00
3cf916dc77 Rename last_updated attribute to last_updated_at
Renamed for consistency with created_at attribute naming.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 00:30:39 +03:00
df446390f2 Add album metadata attributes to Album ID sensor
Add asset_count, last_updated, and created_at attributes to the
Album ID sensor for convenient access to album metadata.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 00:20:38 +03:00
1d61f05552 Track pending assets for delayed processing events
- Add _pending_asset_ids to track assets detected but not yet processed
- Fire events when pending assets become processed (thumbhash available)
- Fixes issue where videos added during transcoding never triggered events
- Add debug logging for change detection and pending asset tracking
- Document external domain feature in README

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 22:23:32 +03:00
38a2a6ad7a Add external domain support for URLs
- Fetch externalDomain from Immich server config on startup
- Use external domain for user-facing URLs (share links, asset URLs)
- Keep internal connection URL for API calls
- Add get_internal_download_url() to convert external URLs back to
  internal for faster local network downloads (Telegram notifications)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 21:53:02 +03:00
0bb7e71a1e Fix video asset processing detection
- Use thumbhash for all assets instead of encodedVideoPath for videos
  (encodedVideoPath is not exposed in Immich API response)
- Add isTrashed check to exclude trashed assets from events
- Simplify processing status logic for both photos and videos

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 21:36:21 +03:00
c29fc2fbcf Add Telegram file ID caching and reverse geocoding fields
Implement caching for Telegram file_ids to avoid re-uploading the same media.
Cached IDs are reused for subsequent sends, improving performance significantly.
Added configurable cache TTL option (1-168 hours, default 48).

Also added city, state, and country fields from Immich reverse geocoding
to asset data in events and get_assets service.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 03:12:05 +03:00
011f105823 Add geolocation (latitude/longitude) to asset data
Expose GPS coordinates from EXIF data in asset responses. The latitude
and longitude fields are included in get_assets service responses and
event data when available.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 02:29:56 +03:00
ee45fdc177 Fix the services API
2026-02-01 02:22:52 +03:00
4b0f3b8b12 Enhance get_assets service with flexible filtering and sorting
- Replace filter parameter with independent favorite_only boolean
- Add order_by parameter supporting date, rating, and name sorting
- Rename count to limit for clarity
- Add date range filtering with min_date and max_date parameters
- Add asset_type filtering for photos and videos
- Update README with language support section and fixed sensor list
- Add translations for all new parameters

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-01 01:39:04 +03:00
12 changed files with 2350 additions and 335 deletions


@@ -3,6 +3,7 @@
## Version Management
Update the integration version in `custom_components/immich_album_watcher/manifest.json` only when changes are made to the **integration content** (files inside `custom_components/immich_album_watcher/`).
**IMPORTANT:** ALWAYS ask before bumping the version.
Do NOT bump version for:

README.md

@@ -4,18 +4,21 @@
A Home Assistant custom integration that monitors [Immich](https://immich.app/) photo/video library albums for changes and exposes them as Home Assistant entities with event-firing capabilities.
> **Tip:** For the best experience, use this integration with the [Immich Album Watcher Blueprint](https://github.com/DolgolyovAlexei/haos-blueprints/blob/main/Common/Immich%20Album%20Watcher.yaml) to easily create automations for album change notifications.
> **Tip:** For the best experience, use this integration with the [Immich Album Watcher Blueprint](https://git.dolgolyov-family.by/alexei.dolgolyov/haos-blueprints/src/branch/main/Common/Immich%20Album%20Watcher) to easily create automations for album change notifications.
## Features
- **Album Monitoring** - Watch selected Immich albums for asset additions and removals
- **Rich Sensor Data** - Multiple sensors per album:
- Album ID (with share URL attribute)
- Asset count (with detected people list)
- Photo count
- Video count
- Last updated timestamp
- Creation date
- Album ID (with album name and share URL attributes)
- Asset Count (total assets with detected people list)
- Photo Count (number of photos)
- Video Count (number of videos)
- Last Updated (last modification timestamp)
- Created (album creation date)
- Public URL (public share link)
- Protected URL (password-protected share link)
- Protected Password (password for protected link)
- **Camera Entity** - Album thumbnail displayed as a camera entity for dashboards
- **Binary Sensor** - "New Assets" indicator that turns on when assets are added
- **Face Recognition** - Detects and lists people recognized in album photos
@@ -34,12 +37,15 @@ A Home Assistant custom integration that monitors [Immich](https://immich.app/)
- **Services** - Custom service calls:
- `immich_album_watcher.refresh` - Force immediate data refresh
- `immich_album_watcher.get_assets` - Get assets from an album with filtering and ordering
- `immich_album_watcher.send_telegram_notification` - Send text, photo, video, or media group to Telegram
- `immich_album_watcher.send_telegram_notification` - Send text, photo, video, document, or media group to Telegram
- **Share Link Management** - Button entities to create and delete share links:
- Create/delete public (unprotected) share links
- Create/delete password-protected share links
- Edit protected link passwords via Text entity
- **Configurable Polling** - Adjustable scan interval (10-3600 seconds)
- **Localization** - Available in multiple languages:
- English
- Russian (Русский)
## Installation
@@ -71,12 +77,36 @@ A Home Assistant custom integration that monitors [Immich](https://immich.app/)
| Albums | Albums to monitor | Required |
| Scan Interval | How often to check for changes (seconds) | 60 |
| Telegram Bot Token | Bot token for sending media to Telegram (optional) | - |
| Telegram Cache TTL | How long to cache uploaded file IDs (hours, 1-168) | 48 |
### External Domain Support
The integration supports connecting to a local Immich server while using an external domain for user-facing URLs. This is useful when:
- Your Home Assistant connects to Immich via local network (e.g., `http://192.168.1.100:2283`)
- But you want share links and asset URLs to use your public domain (e.g., `https://photos.example.com`)
**How it works:**
1. Configure "External domain" in Immich: **Administration → Settings → Server → External Domain**
2. The integration automatically fetches this setting on startup
3. All user-facing URLs (share links, asset URLs in events) use the external domain
4. API calls and file downloads still use the local connection URL for faster performance
**Example:**
- Server URL (in integration config): `http://192.168.1.100:2283`
- External Domain (in Immich settings): `https://photos.example.com`
- Share links in events: `https://photos.example.com/share/...`
- Telegram downloads: via `http://192.168.1.100:2283` (fast local network)
If no external domain is configured in Immich, all URLs will use the Server URL from the integration configuration.
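
The external-to-internal URL conversion performed by `get_internal_download_url()` can be sketched as a simple prefix swap (a minimal illustration assuming URLs without trailing slashes; not the integration's actual implementation):

```python
def get_internal_download_url(url: str, external_domain: str, server_url: str) -> str:
    """Rewrite a user-facing external URL back to the internal server URL
    so downloads stay on the local network."""
    if external_domain and url.startswith(external_domain):
        return server_url + url[len(external_domain):]
    return url  # non-Immich or already-internal URLs pass through unchanged
```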
## Entities Created (per album)
| Entity Type | Name | Description |
|-------------|------|-------------|
| Sensor | Album ID | Album identifier with `album_name` and `share_url` attributes |
| Sensor | Album ID | Album identifier with `album_name`, `asset_count`, `share_url`, `last_updated_at`, and `created_at` attributes |
| Sensor | Asset Count | Total number of assets (includes `people` list in attributes) |
| Sensor | Photo Count | Number of photos in the album |
| Sensor | Video Count | Number of videos in the album |
@@ -110,26 +140,47 @@ Get assets from a specific album with optional filtering and ordering (returns r
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
count: 10 # Maximum number of assets (1-100)
filter: "favorite" # Options: "none", "favorite", "rating"
filter_min_rating: 4 # Min rating (1-5), used when filter="rating"
order: "descending" # Options: "ascending", "descending", "random"
limit: 10 # Maximum number of assets (1-100)
offset: 0 # Number of assets to skip (for pagination)
favorite_only: false # true = favorites only, false = all assets
filter_min_rating: 4 # Min rating (1-5)
order_by: "date" # Options: "date", "rating", "name", "random"
order: "descending" # Options: "ascending", "descending"
asset_type: "all" # Options: "all", "photo", "video"
min_date: "2024-01-01" # Optional: assets created on or after this date
max_date: "2024-12-31" # Optional: assets created on or before this date
memory_date: "2024-02-14" # Optional: memories filter (excludes same year)
city: "Paris" # Optional: filter by city name
state: "California" # Optional: filter by state/region
country: "France" # Optional: filter by country
```
**Parameters:**
- `count` (optional, default: 10): Maximum number of assets to return (1-100)
- `filter` (optional, default: "none"): Filter type
- `"none"`: No filtering, return all assets
- `"favorite"`: Return only favorite assets
- `"rating"`: Return assets with rating >= `filter_min_rating`
- `filter_min_rating` (optional, default: 1): Minimum rating (1-5 stars), used when `filter="rating"`
- `order` (optional, default: "descending"): Sort order by creation date
- `"ascending"`: Oldest first
- `"descending"`: Newest first
- `"random"`: Random order
- `limit` (optional, default: 10): Maximum number of assets to return (1-100)
- `offset` (optional, default: 0): Number of assets to skip before returning results. Use with `limit` for pagination (e.g., `offset: 0, limit: 10` for first page, `offset: 10, limit: 10` for second page)
- `favorite_only` (optional, default: false): Filter to show only favorite assets
- `filter_min_rating` (optional, default: 1): Minimum rating for assets (1-5 stars). Applied independently of `favorite_only`
- `order_by` (optional, default: "date"): Field to sort assets by
- `"date"`: Sort by creation date
- `"rating"`: Sort by rating (assets without rating are placed last)
- `"name"`: Sort by filename
- `"random"`: Random order (ignores `order`)
- `order` (optional, default: "descending"): Sort direction
- `"ascending"`: Ascending order
- `"descending"`: Descending order
- `asset_type` (optional, default: "all"): Filter by asset type
- `"all"`: No type filtering, return both photos and videos
- `"photo"`: Return only photos
- `"video"`: Return only videos
- `min_date` (optional): Filter assets created on or after this date. Use ISO 8601 format (e.g., `"2024-01-01"` or `"2024-01-01T10:30:00"`)
- `max_date` (optional): Filter assets created on or before this date. Use ISO 8601 format (e.g., `"2024-12-31"` or `"2024-12-31T23:59:59"`)
- `memory_date` (optional): Filter assets by matching month and day, excluding the same year (memories filter like Google Photos). Provide a date in ISO 8601 format (e.g., `"2024-02-14"`) to get all assets taken on February 14th from previous years
- `city` (optional): Filter assets by city name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data
- `state` (optional): Filter assets by state/region name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data
- `country` (optional): Filter assets by country name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data
**Examples:**
@@ -138,10 +189,11 @@ Get 5 most recent favorite assets:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
count: 5
filter: "favorite"
limit: 5
favorite_only: true
order_by: "date"
order: "descending"
```
@@ -150,24 +202,161 @@ Get 10 random assets rated 3 stars or higher:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
count: 10
filter: "rating"
limit: 10
filter_min_rating: 3
order: "random"
order_by: "random"
```
Get 20 most recent photos only:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 20
asset_type: "photo"
order_by: "date"
order: "descending"
```
Get top 10 highest rated favorite videos:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 10
favorite_only: true
asset_type: "video"
order_by: "rating"
order: "descending"
```
Get photos sorted alphabetically by name:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 20
asset_type: "photo"
order_by: "name"
order: "ascending"
```
Get photos from a specific date range:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 50
asset_type: "photo"
min_date: "2024-06-01"
max_date: "2024-06-30"
order_by: "date"
order: "descending"
```
Get "On This Day" memories (photos from today's date in previous years):
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 20
memory_date: "{{ now().strftime('%Y-%m-%d') }}"
order_by: "date"
order: "ascending"
```
Paginate through all assets (first page):
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 10
offset: 0
order_by: "date"
order: "descending"
```
Paginate through all assets (second page):
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 10
offset: 10
order_by: "date"
order: "descending"
```
Get photos taken in a specific city:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 50
city: "Paris"
asset_type: "photo"
order_by: "date"
order: "descending"
```
Get all assets from a specific country:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 100
country: "Japan"
order_by: "date"
order: "ascending"
```
### Send Telegram Notification
Send notifications to Telegram. Supports multiple formats:
- **Text message** - When `urls` is empty or not provided
- **Single photo** - When `urls` contains one photo
- **Single video** - When `urls` contains one video
- **Media group** - When `urls` contains multiple items
- **Text message** - When `assets` is empty or not provided
- **Single document** - When `assets` contains one document (default type)
- **Single photo** - When `assets` contains one photo (`type: photo`)
- **Single video** - When `assets` contains one video (`type: video`)
- **Media group** - When `assets` contains multiple photos/videos (documents are sent separately)
The service downloads media from Immich and uploads it to Telegram, bypassing any CORS restrictions. Large lists of media are automatically split into multiple media groups based on the `max_group_size` parameter (default: 10 items per group).
The service downloads media from Immich and uploads it to Telegram, bypassing any CORS restrictions. Large lists of photos and videos are automatically split into multiple media groups based on the `max_group_size` parameter (default: 10 items per group). Documents cannot be grouped and are sent individually.
**File ID Caching:** When media is uploaded to Telegram, the service caches the returned `file_id`. Subsequent sends of the same media will use the cached `file_id` instead of re-uploading, significantly improving performance. The cache TTL is configurable in hub options (default: 48 hours, range: 1-168 hours). The cache is persistent across Home Assistant restarts and is shared across all albums in the hub.
**Dual Cache System:** The integration maintains two separate caches for optimal performance:
- **Asset ID Cache** - For Immich assets with extractable asset IDs (UUIDs). The same asset accessed via different URL types (thumbnail, original, video playback, share links) shares the same cache entry.
- **URL Cache** - For non-Immich URLs or URLs without extractable asset IDs. Also used when a custom `cache_key` is provided.
**Smart Cache Keys:** The service automatically extracts asset IDs from Immich URLs. Supported URL patterns:
- `/api/assets/{asset_id}/original`
- `/api/assets/{asset_id}/thumbnail`
- `/api/assets/{asset_id}/video/playback`
- `/share/{key}/photos/{asset_id}`
You can provide a custom `cache_key` per asset to override this behavior (stored in URL cache).
**Examples:**
@@ -176,22 +365,36 @@ Text message:
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
caption: "Check out the new album!"
disable_web_page_preview: true
```
Single document (default):
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
assets:
- url: "https://immich.example.com/api/assets/xxx/original?key=yyy"
content_type: "image/heic" # Optional: explicit MIME type
caption: "Original file"
```
Single photo:
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
urls:
assets:
- url: "https://immich.example.com/api/assets/xxx/thumbnail?key=yyy"
type: photo
caption: "Beautiful sunset!"
@@ -202,10 +405,10 @@ Media group:
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
urls:
assets:
- url: "https://immich.example.com/api/assets/xxx/thumbnail?key=yyy"
type: photo
- url: "https://immich.example.com/api/assets/zzz/video/playback?key=yyy"
@@ -219,7 +422,7 @@ HTML formatting:
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
caption: |
@@ -234,20 +437,35 @@ Non-blocking mode (fire-and-forget):
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
urls:
assets:
- url: "https://immich.example.com/api/assets/xxx/thumbnail?key=yyy"
type: photo
caption: "Quick notification"
wait_for_response: false # Automation continues immediately
```
Using a custom cache_key (useful when the same media has different URLs):
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
assets:
- url: "https://immich.example.com/api/assets/xxx/thumbnail?key=yyy"
type: photo
cache_key: "asset_xxx" # Custom key for caching instead of URL
caption: "Photo with custom cache key"
```
| Field | Description | Required |
|-------|-------------|----------|
| `chat_id` | Telegram chat ID to send to | Yes |
| `urls` | List of media items with `url` and `type` (photo/video). Empty for text message. | No |
| `assets` | List of media items with `url`, optional `type` (document/photo/video, default: document), optional `content_type` (MIME type, e.g., `image/jpeg`), and optional `cache_key` (custom key for caching). Empty for text message. Photos and videos can be grouped; documents are sent separately. | No |
| `bot_token` | Telegram bot token (uses configured token if not provided) | No |
| `caption` | For media: caption applied to first item. For text: the message text. Supports HTML formatting by default. | No |
| `reply_to_message_id` | Message ID to reply to | No |
@@ -258,6 +476,7 @@ data:
| `wait_for_response` | Wait for Telegram to finish processing. Set to `false` for fire-and-forget (automation continues immediately). Default: `true` | No |
| `max_asset_data_size` | Maximum asset size in bytes. Assets exceeding this limit will be skipped. Default: no limit | No |
| `send_large_photos_as_documents` | Handle photos exceeding Telegram limits (10MB or 10000px dimension sum). If `true`, send as documents. If `false`, skip oversized photos. Default: `false` | No |
| `chat_action` | Chat action to display while processing media (`typing`, `upload_photo`, `upload_video`, `upload_document`). Set to empty string to disable. Default: `typing` | No |
The service returns a response with `success` status and `message_id` (single message), `message_ids` (media group), or `groups_sent` (number of groups when split). When `wait_for_response` is `false`, the service returns immediately with `{"success": true, "status": "queued"}` while processing continues in the background.
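To act on that response in an automation, a call like the following should work (uses Home Assistant's standard `response_variable` mechanism for services that return data; the condition step is illustrative):

```yaml
- service: immich_album_watcher.send_telegram_notification
  target:
    entity_id: sensor.album_name_asset_count
  data:
    chat_id: "-1001234567890"
    caption: "Nightly album summary"
  response_variable: tg_result
- condition: template
  value_template: "{{ tg_result.success }}"
```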
@@ -288,7 +507,7 @@ automation:
- service: notify.mobile_app
data:
title: "New Photos"
message: "{{ trigger.event.data.added_count }} new photos in {{ trigger.event.data.album_name }}"
- alias: "Album renamed"
trigger:
@@ -321,8 +540,8 @@ automation:
| `album_url` | Public URL to view the album (only present if album has a shared link) | All events except `album_deleted` |
| `change_type` | Type of change (assets_added, assets_removed, album_renamed, album_sharing_changed, changed) | All events except `album_deleted` |
| `shared` | Current sharing status of the album | All events except `album_deleted` |
| `added_count` | Number of assets added | `album_changed`, `assets_added` |
| `removed_count` | Number of assets removed | `album_changed`, `assets_removed` |
| `added_assets` | List of added assets with details (see below) | `album_changed`, `assets_added` |
| `removed_assets` | List of removed asset IDs | `album_changed`, `assets_removed` |
| `people` | List of all people detected in the album | All events except `album_deleted` |
@@ -346,6 +565,11 @@ Each item in the `added_assets` list contains the following fields:
| `description` | Description/caption of the asset (from EXIF data) |
| `is_favorite` | Whether the asset is marked as favorite (`true` or `false`) |
| `rating` | User rating of the asset (1-5 stars, or `null` if not rated) |
| `latitude` | GPS latitude coordinate (or `null` if no geolocation) |
| `longitude` | GPS longitude coordinate (or `null` if no geolocation) |
| `city` | City name from reverse geocoding (or `null` if unavailable) |
| `state` | State/region name from reverse geocoding (or `null` if unavailable) |
| `country` | Country name from reverse geocoding (or `null` if unavailable) |
| `url` | Public URL to view the asset (only present if album has a shared link) |
| `download_url` | Direct download URL for the original file (if shared link exists) |
| `playback_url` | Video playback URL (for VIDEO assets only, if shared link exists) |
@@ -368,7 +592,7 @@ automation:
title: "New Photos"
message: >
{{ trigger.event.data.added_assets[0].owner }} added
{{ trigger.event.data.added_count }} photos to {{ trigger.event.data.album_name }}
```
## Requirements


@@ -4,9 +4,12 @@ from __future__ import annotations
import logging
from dataclasses import dataclass
from datetime import datetime, time as dt_time
from homeassistant.config_entries import ConfigEntry, ConfigSubentry
from homeassistant.core import HomeAssistant
from homeassistant.helpers.event import async_track_time_change
from homeassistant.util import dt as dt_util
from .const import (
CONF_ALBUM_ID,
@@ -15,12 +18,14 @@ from .const import (
CONF_HUB_NAME,
CONF_IMMICH_URL,
CONF_SCAN_INTERVAL,
CONF_TELEGRAM_CACHE_TTL,
DEFAULT_SCAN_INTERVAL,
DEFAULT_TELEGRAM_CACHE_TTL,
DOMAIN,
PLATFORMS,
)
from .coordinator import ImmichAlbumWatcherCoordinator
from .storage import ImmichAlbumStorage
from .storage import ImmichAlbumStorage, NotificationQueue, TelegramFileCache
_LOGGER = logging.getLogger(__name__)
@@ -33,6 +38,7 @@ class ImmichHubData:
url: str
api_key: str
scan_interval: int
telegram_cache_ttl: int
@dataclass
@@ -55,6 +61,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: ImmichConfigEntry) -> bo
url = entry.data[CONF_IMMICH_URL]
api_key = entry.data[CONF_API_KEY]
scan_interval = entry.options.get(CONF_SCAN_INTERVAL, DEFAULT_SCAN_INTERVAL)
telegram_cache_ttl = entry.options.get(CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL)
# Store hub data
entry.runtime_data = ImmichHubData(
@@ -62,17 +69,38 @@ async def async_setup_entry(hass: HomeAssistant, entry: ImmichConfigEntry) -> bo
url=url,
api_key=api_key,
scan_interval=scan_interval,
telegram_cache_ttl=telegram_cache_ttl,
)
# Create storage for persisting album state across restarts
storage = ImmichAlbumStorage(hass, entry.entry_id)
await storage.async_load()
# Create and load Telegram file caches once per hub (shared across all albums)
# TTL is in hours from config, convert to seconds
cache_ttl_seconds = telegram_cache_ttl * 60 * 60
# URL-based cache for non-Immich URLs or URLs without extractable asset IDs
telegram_cache = TelegramFileCache(hass, entry.entry_id, ttl_seconds=cache_ttl_seconds)
await telegram_cache.async_load()
# Asset ID-based cache for Immich URLs — uses thumbhash validation instead of TTL
telegram_asset_cache = TelegramFileCache(
hass, f"{entry.entry_id}_assets", use_thumbhash=True
)
await telegram_asset_cache.async_load()
# Create notification queue for quiet hours
notification_queue = NotificationQueue(hass, entry.entry_id)
await notification_queue.async_load()
# Store hub reference
hass.data[DOMAIN][entry.entry_id] = {
"hub": entry.runtime_data,
"subentries": {},
"storage": storage,
"telegram_cache": telegram_cache,
"telegram_asset_cache": telegram_asset_cache,
"notification_queue": notification_queue,
"quiet_hours_unsubs": {}, # keyed by "HH:MM" end time
}
# Track loaded subentries to detect changes
@@ -85,6 +113,12 @@ async def async_setup_entry(hass: HomeAssistant, entry: ImmichConfigEntry) -> bo
# Forward platform setup once - platforms will iterate through subentries
await hass.config_entries.async_forward_entry_setups(entry, PLATFORMS)
# Check if there are queued notifications from before restart
if notification_queue.has_pending():
_register_queue_timers(hass, entry)
# Process any items whose quiet hours have already ended
hass.async_create_task(_process_ready_notifications(hass, entry))
# Register update listener for options and subentry changes
entry.async_on_unload(entry.add_update_listener(_async_update_listener))
@@ -104,6 +138,8 @@ async def _async_setup_subentry_coordinator(
album_id = subentry.data[CONF_ALBUM_ID]
album_name = subentry.data.get(CONF_ALBUM_NAME, "Unknown Album")
storage: ImmichAlbumStorage = hass.data[DOMAIN][entry.entry_id]["storage"]
telegram_cache: TelegramFileCache = hass.data[DOMAIN][entry.entry_id]["telegram_cache"]
telegram_asset_cache: TelegramFileCache = hass.data[DOMAIN][entry.entry_id]["telegram_asset_cache"]
_LOGGER.debug("Setting up coordinator for album: %s (%s)", album_name, album_id)
@@ -117,6 +153,8 @@ async def _async_setup_subentry_coordinator(
scan_interval=hub_data.scan_interval,
hub_name=hub_data.name,
storage=storage,
telegram_cache=telegram_cache,
telegram_asset_cache=telegram_asset_cache,
)
# Load persisted state before first refresh to detect changes during downtime
@@ -136,6 +174,195 @@ async def _async_setup_subentry_coordinator(
_LOGGER.info("Coordinator for album '%s' set up successfully", album_name)
def _is_quiet_hours(start_str: str, end_str: str) -> bool:
"""Check if current time is within quiet hours."""
if not start_str or not end_str:
return False
try:
now = dt_util.now().time()
start_time = dt_time.fromisoformat(start_str)
end_time = dt_time.fromisoformat(end_str)
except ValueError:
return False
if start_time <= end_time:
return start_time <= now < end_time
else:
# Crosses midnight (e.g., 22:00 - 08:00)
return now >= start_time or now < end_time
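The midnight-wrapping window check above can be exercised standalone; this is a minimal sketch of the same logic (the helper name and example times are illustrative, not part of the integration):

```python
from datetime import time

def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside the quiet window [start, end)."""
    if start <= end:
        # Same-day window, e.g. 13:00 - 15:00
        return start <= now < end
    # Window crosses midnight, e.g. 22:00 - 08:00
    return now >= start or now < end

print(in_quiet_hours(time(23, 30), time(22, 0), time(8, 0)))  # True
print(in_quiet_hours(time(12, 0), time(22, 0), time(8, 0)))   # False
```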
def _register_queue_timers(hass: HomeAssistant, entry: ImmichConfigEntry) -> None:
"""Register timers for each unique quiet_hours_end in the queue."""
entry_data = hass.data[DOMAIN][entry.entry_id]
queue: NotificationQueue = entry_data["notification_queue"]
unsubs: dict[str, CALLBACK_TYPE] = entry_data["quiet_hours_unsubs"]
# Collect unique end times from queued items
end_times: set[str] = set()
for item in queue.get_all():
end_str = item.get("params", {}).get("quiet_hours_end", "")
if end_str:
end_times.add(end_str)
for end_str in end_times:
if end_str in unsubs:
continue # Timer already registered for this end time
try:
end_time = dt_time.fromisoformat(end_str)
except ValueError:
_LOGGER.warning("Invalid quiet hours end time in queue: %s", end_str)
continue
async def _on_quiet_hours_end(_now: datetime, _end_str: str = end_str) -> None:
"""Handle quiet hours end — process matching queued notifications."""
_LOGGER.info("Quiet hours ended (%s), processing queued notifications", _end_str)
await _process_notifications_for_end_time(hass, entry, _end_str)
unsub = async_track_time_change(
hass, _on_quiet_hours_end, hour=end_time.hour, minute=end_time.minute, second=0
)
unsubs[end_str] = unsub
entry.async_on_unload(unsub)
_LOGGER.debug("Registered quiet hours timer for %s", end_str)
def _unregister_queue_timer(hass: HomeAssistant, entry: ImmichConfigEntry, end_str: str) -> None:
"""Unregister a quiet hours timer if no more items need it."""
entry_data = hass.data[DOMAIN][entry.entry_id]
queue: NotificationQueue = entry_data["notification_queue"]
unsubs: dict[str, CALLBACK_TYPE] = entry_data["quiet_hours_unsubs"]
# Check if any remaining items still use this end time
for item in queue.get_all():
if item.get("params", {}).get("quiet_hours_end", "") == end_str:
return # Still needed
unsub = unsubs.pop(end_str, None)
if unsub:
unsub()
_LOGGER.debug("Unregistered quiet hours timer for %s (no more items)", end_str)
async def _process_ready_notifications(
hass: HomeAssistant, entry: ImmichConfigEntry
) -> None:
"""Process queued notifications whose quiet hours have already ended."""
entry_data = hass.data[DOMAIN].get(entry.entry_id)
if not entry_data:
return
queue: NotificationQueue = entry_data["notification_queue"]
items = queue.get_all()
if not items:
return
# Find items whose quiet hours have ended
ready_indices = []
for i, item in enumerate(items):
params = item.get("params", {})
start_str = params.get("quiet_hours_start", "")
end_str = params.get("quiet_hours_end", "")
if not _is_quiet_hours(start_str, end_str):
ready_indices.append(i)
if not ready_indices:
return
_LOGGER.info("Found %d queued notifications ready to send (quiet hours ended)", len(ready_indices))
await _send_queued_items(hass, entry, ready_indices)
async def _process_notifications_for_end_time(
hass: HomeAssistant, entry: ImmichConfigEntry, end_str: str
) -> None:
"""Process queued notifications matching a specific quiet_hours_end time."""
entry_data = hass.data[DOMAIN].get(entry.entry_id)
if not entry_data:
return
queue: NotificationQueue = entry_data["notification_queue"]
items = queue.get_all()
if not items:
return
# Find items matching this end time that are no longer in quiet hours
matching_indices = []
for i, item in enumerate(items):
params = item.get("params", {})
if params.get("quiet_hours_end", "") == end_str:
start_str = params.get("quiet_hours_start", "")
if not _is_quiet_hours(start_str, end_str):
matching_indices.append(i)
if not matching_indices:
return
_LOGGER.info("Processing %d queued notifications for quiet hours end %s", len(matching_indices), end_str)
await _send_queued_items(hass, entry, matching_indices)
# Clean up timer if no more items need it
_unregister_queue_timer(hass, entry, end_str)
async def _send_queued_items(
hass: HomeAssistant, entry: ImmichConfigEntry, indices: list[int]
) -> None:
"""Send specific queued notifications by index and remove them from the queue."""
import asyncio
from homeassistant.helpers import entity_registry as er
entry_data = hass.data[DOMAIN].get(entry.entry_id)
if not entry_data:
return
queue: NotificationQueue = entry_data["notification_queue"]
# Find a fallback sensor entity
ent_reg = er.async_get(hass)
fallback_entity_id = None
for ent in er.async_entries_for_config_entry(ent_reg, entry.entry_id):
if ent.domain == "sensor":
fallback_entity_id = ent.entity_id
break
if not fallback_entity_id:
_LOGGER.warning("No sensor entity found to process notification queue")
return
items = queue.get_all()
sent_count = 0
for i in indices:
if i >= len(items):
continue
params = dict(items[i].get("params", {}))
try:
target_entity_id = params.pop("entity_id", None) or fallback_entity_id
# Remove quiet hours params so the replay doesn't re-queue
params.pop("quiet_hours_start", None)
params.pop("quiet_hours_end", None)
await hass.services.async_call(
DOMAIN,
"send_telegram_notification",
params,
target={"entity_id": target_entity_id},
blocking=True,
)
sent_count += 1
except Exception:
_LOGGER.exception("Failed to send queued notification %d", i + 1)
# Small delay between notifications to avoid rate limiting
await asyncio.sleep(1)
# Remove sent items from queue (in reverse order to preserve indices)
await queue.async_remove_indices(sorted(indices, reverse=True))
_LOGGER.info("Sent %d/%d queued notifications", sent_count, len(indices))
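The reverse-order removal at the end matters: deleting from the highest index first means earlier positions are never shifted mid-loop. A minimal list-based sketch (`async_remove_indices` itself lives in storage.py and is not shown in this diff):

```python
def remove_indices(items: list, indices: list[int]) -> list:
    """Remove items at the given positions, deleting from highest
    index first so earlier positions are not shifted mid-loop."""
    for i in sorted(indices, reverse=True):
        if 0 <= i < len(items):
            del items[i]
    return items

queue = ["a", "b", "c", "d", "e"]
remove_indices(queue, [1, 3])
print(queue)  # ['a', 'c', 'e']
```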
async def _async_update_listener(
hass: HomeAssistant, entry: ImmichConfigEntry
) -> None:
@@ -154,7 +381,7 @@ async def _async_update_listener(
await hass.config_entries.async_reload(entry.entry_id)
return
# Handle options-only update (scan interval change)
# Handle options-only update
new_interval = entry.options.get(CONF_SCAN_INTERVAL, DEFAULT_SCAN_INTERVAL)
# Update hub data
@@ -165,11 +392,16 @@ async def _async_update_listener(
for subentry_data in subentries_data.values():
subentry_data.coordinator.update_scan_interval(new_interval)
_LOGGER.info("Updated scan interval to %d seconds", new_interval)
_LOGGER.info("Updated hub options (scan_interval=%d)", new_interval)
async def async_unload_entry(hass: HomeAssistant, entry: ImmichConfigEntry) -> bool:
"""Unload a config entry."""
# Cancel all quiet hours timers
entry_data = hass.data[DOMAIN].get(entry.entry_id, {})
for unsub in entry_data.get("quiet_hours_unsubs", {}).values():
unsub()
# Unload all platforms
unload_ok = await hass.config_entries.async_unload_platforms(entry, PLATFORMS)


@@ -27,7 +27,9 @@ from .const import (
CONF_IMMICH_URL,
CONF_SCAN_INTERVAL,
CONF_TELEGRAM_BOT_TOKEN,
CONF_TELEGRAM_CACHE_TTL,
DEFAULT_SCAN_INTERVAL,
DEFAULT_TELEGRAM_CACHE_TTL,
DOMAIN,
SUBENTRY_TYPE_ALBUM,
)
@@ -252,19 +254,30 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
CONF_TELEGRAM_BOT_TOKEN: user_input.get(
CONF_TELEGRAM_BOT_TOKEN, ""
),
CONF_TELEGRAM_CACHE_TTL: user_input.get(
CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL
),
},
)
return self.async_show_form(
step_id="init",
data_schema=self._build_options_schema(),
)
def _build_options_schema(self) -> vol.Schema:
"""Build the options form schema."""
current_interval = self._config_entry.options.get(
CONF_SCAN_INTERVAL, DEFAULT_SCAN_INTERVAL
)
current_bot_token = self._config_entry.options.get(
CONF_TELEGRAM_BOT_TOKEN, ""
)
current_cache_ttl = self._config_entry.options.get(
CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL
)
return self.async_show_form(
step_id="init",
data_schema=vol.Schema(
return vol.Schema(
{
vol.Required(
CONF_SCAN_INTERVAL, default=current_interval
@@ -272,8 +285,10 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
vol.Optional(
CONF_TELEGRAM_BOT_TOKEN, default=current_bot_token
): str,
vol.Optional(
CONF_TELEGRAM_CACHE_TTL, default=current_cache_ttl
): vol.All(vol.Coerce(int), vol.Range(min=1, max=168)),
}
),
)


@@ -14,12 +14,14 @@ CONF_ALBUM_ID: Final = "album_id"
CONF_ALBUM_NAME: Final = "album_name"
CONF_SCAN_INTERVAL: Final = "scan_interval"
CONF_TELEGRAM_BOT_TOKEN: Final = "telegram_bot_token"
CONF_TELEGRAM_CACHE_TTL: Final = "telegram_cache_ttl"
# Subentry type
SUBENTRY_TYPE_ALBUM: Final = "album"
# Defaults
DEFAULT_SCAN_INTERVAL: Final = 60 # seconds
DEFAULT_TELEGRAM_CACHE_TTL: Final = 48 # hours
NEW_ASSETS_RESET_DELAY: Final = 300 # 5 minutes
DEFAULT_SHARE_PASSWORD: Final = "immich123"
@@ -47,7 +49,7 @@ ATTR_REMOVED_COUNT: Final = "removed_count"
ATTR_ADDED_ASSETS: Final = "added_assets"
ATTR_REMOVED_ASSETS: Final = "removed_assets"
ATTR_CHANGE_TYPE: Final = "change_type"
ATTR_LAST_UPDATED: Final = "last_updated"
ATTR_LAST_UPDATED: Final = "last_updated_at"
ATTR_CREATED_AT: Final = "created_at"
ATTR_THUMBNAIL_URL: Final = "thumbnail_url"
ATTR_SHARED: Final = "shared"
@@ -68,6 +70,11 @@ ATTR_ASSET_PLAYBACK_URL: Final = "playback_url"
ATTR_ASSET_DESCRIPTION: Final = "description"
ATTR_ASSET_IS_FAVORITE: Final = "is_favorite"
ATTR_ASSET_RATING: Final = "rating"
ATTR_ASSET_LATITUDE: Final = "latitude"
ATTR_ASSET_LONGITUDE: Final = "longitude"
ATTR_ASSET_CITY: Final = "city"
ATTR_ASSET_STATE: Final = "state"
ATTR_ASSET_COUNTRY: Final = "country"
# Asset types
ASSET_TYPE_IMAGE: Final = "IMAGE"


@@ -8,7 +8,7 @@ from datetime import datetime, timedelta
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from .storage import ImmichAlbumStorage
from .storage import ImmichAlbumStorage, TelegramFileCache
import aiohttp
@@ -29,6 +29,11 @@ from .const import (
ATTR_ASSET_DOWNLOAD_URL,
ATTR_ASSET_FILENAME,
ATTR_ASSET_IS_FAVORITE,
ATTR_ASSET_LATITUDE,
ATTR_ASSET_LONGITUDE,
ATTR_ASSET_CITY,
ATTR_ASSET_STATE,
ATTR_ASSET_COUNTRY,
ATTR_ASSET_OWNER,
ATTR_ASSET_OWNER_ID,
ATTR_ASSET_PLAYBACK_URL,
@@ -120,7 +125,13 @@ class AssetInfo:
people: list[str] = field(default_factory=list)
is_favorite: bool = False
rating: int | None = None
latitude: float | None = None
longitude: float | None = None
city: str | None = None
state: str | None = None
country: str | None = None
is_processed: bool = True # Whether asset is fully processed by Immich
thumbhash: str | None = None # Perceptual hash for cache validation
@classmethod
def from_api_response(
@@ -147,9 +158,19 @@ class AssetInfo:
is_favorite = data.get("isFavorite", False)
rating = exif_info.get("rating") if exif_info else None
# Get geolocation
latitude = exif_info.get("latitude") if exif_info else None
longitude = exif_info.get("longitude") if exif_info else None
# Get reverse geocoded location
city = exif_info.get("city") if exif_info else None
state = exif_info.get("state") if exif_info else None
country = exif_info.get("country") if exif_info else None
# Check if asset is fully processed by Immich
asset_type = data.get("type", ASSET_TYPE_IMAGE)
is_processed = cls._check_processing_status(data, asset_type)
thumbhash = data.get("thumbhash")
return cls(
id=data["id"],
@@ -162,47 +183,70 @@ class AssetInfo:
people=people,
is_favorite=is_favorite,
rating=rating,
latitude=latitude,
longitude=longitude,
city=city,
state=state,
country=country,
is_processed=is_processed,
thumbhash=thumbhash,
)
@staticmethod
def _check_processing_status(data: dict[str, Any], asset_type: str) -> bool:
def _check_processing_status(data: dict[str, Any], _asset_type: str) -> bool:
"""Check if asset has been fully processed by Immich.
For photos: Check if thumbnails/previews have been generated
For videos: Check if video transcoding is complete
For all assets: Check if thumbnails have been generated (thumbhash exists).
Immich generates thumbnails for both photos and videos regardless of
whether video transcoding is needed.
Args:
data: Asset data from API response
asset_type: Asset type (IMAGE or VIDEO)
_asset_type: Asset type (IMAGE or VIDEO) - unused but kept for API stability
Returns:
True if asset is fully processed, False otherwise
True if asset is fully processed and not trashed/offline/archived, False otherwise
"""
if asset_type == ASSET_TYPE_VIDEO:
# For videos, check if transcoding is complete
# Video is processed if it has an encoded video path or if isOffline is False
asset_id = data.get("id", "unknown")
asset_type = data.get("type", "unknown")
is_offline = data.get("isOffline", False)
is_trashed = data.get("isTrashed", False)
is_archived = data.get("isArchived", False)
thumbhash = data.get("thumbhash")
_LOGGER.debug(
"Asset %s (%s): isOffline=%s, isTrashed=%s, isArchived=%s, thumbhash=%s",
asset_id,
asset_type,
is_offline,
is_trashed,
is_archived,
bool(thumbhash),
)
# Exclude offline assets
if is_offline:
_LOGGER.debug("Asset %s excluded: offline", asset_id)
return False
# Check if video has been transcoded (has encoded video path)
# Immich uses "encodedVideoPath" or similar field when transcoding is done
has_encoded_video = bool(data.get("encodedVideoPath"))
return has_encoded_video
else: # ASSET_TYPE_IMAGE
# For photos, check if thumbnails have been generated
# Photos are processed if they have thumbnail/preview paths
is_offline = data.get("isOffline", False)
if is_offline:
# Exclude trashed assets
if is_trashed:
_LOGGER.debug("Asset %s excluded: trashed", asset_id)
return False
# Check if thumbnails exist
has_thumbhash = bool(data.get("thumbhash"))
has_thumbnail = has_thumbhash # If thumbhash exists, thumbnails should exist
# Exclude archived assets
if is_archived:
_LOGGER.debug("Asset %s excluded: archived", asset_id)
return False
return has_thumbnail
# Check if thumbnails have been generated
# This works for both photos and videos - Immich always generates thumbnails
# Note: The API doesn't expose video transcoding status (encodedVideoPath),
# but thumbhash is sufficient since Immich generates thumbnails for all assets
is_processed = bool(thumbhash)
if not is_processed:
_LOGGER.debug("Asset %s excluded: no thumbhash", asset_id)
return is_processed
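The simplified processing check reduces to one rule: an asset counts as processed when a thumbhash exists and no exclusion flag is set. A condensed sketch against plain API-shaped dicts (field names follow the Immich asset response used above):

```python
def is_processed(asset: dict) -> bool:
    """An asset is processed when thumbnails exist (thumbhash set)
    and it is not offline, trashed, or archived."""
    if asset.get("isOffline") or asset.get("isTrashed") or asset.get("isArchived"):
        return False
    return bool(asset.get("thumbhash"))

print(is_processed({"thumbhash": "abc123"}))                     # True
print(is_processed({"thumbhash": "abc123", "isTrashed": True}))  # False
print(is_processed({}))                                          # False
```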
@dataclass
@@ -294,6 +338,8 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
scan_interval: int,
hub_name: str = "Immich",
storage: ImmichAlbumStorage | None = None,
telegram_cache: TelegramFileCache | None = None,
telegram_asset_cache: TelegramFileCache | None = None,
) -> None:
"""Initialize the coordinator."""
super().__init__(
@@ -313,13 +359,46 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
self._users_cache: dict[str, str] = {} # user_id -> name
self._shared_links: list[SharedLinkInfo] = []
self._storage = storage
self._telegram_cache = telegram_cache
self._telegram_asset_cache = telegram_asset_cache
self._persisted_asset_ids: set[str] | None = None
self._external_domain: str | None = None # Fetched from server config
self._pending_asset_ids: set[str] = set() # Assets detected but not yet processed
@property
def immich_url(self) -> str:
"""Return the Immich URL."""
"""Return the Immich URL (for API calls)."""
return self._url
@property
def external_url(self) -> str:
"""Return the external URL for links.
Uses externalDomain from Immich server config if set,
otherwise falls back to the connection URL.
"""
if self._external_domain:
return self._external_domain.rstrip("/")
return self._url
def get_internal_download_url(self, url: str) -> str:
"""Convert an external URL to internal URL for faster downloads.
If the URL starts with the external domain, replace it with the
internal connection URL to download via local network.
Args:
url: The URL to convert (may be external or internal)
Returns:
URL using internal connection for downloads
"""
if self._external_domain:
external = self._external_domain.rstrip("/")
if url.startswith(external):
return url.replace(external, self._url, 1)
return url
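The external-to-internal rewrite is a prefix replacement only (count 1), so the path and query string survive untouched. A standalone sketch with illustrative hostnames:

```python
def to_internal_url(url: str, external_domain: str, internal_url: str) -> str:
    """Rewrite a URL that uses the public domain to the internal
    connection URL, replacing only the leading prefix."""
    external = external_domain.rstrip("/")
    if url.startswith(external):
        return url.replace(external, internal_url, 1)
    return url

print(to_internal_url(
    "https://photos.example.com/api/assets/abc/original",
    "https://photos.example.com/",
    "http://192.168.1.10:2283",
))  # http://192.168.1.10:2283/api/assets/abc/original
```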
@property
def api_key(self) -> str:
"""Return the API key."""
@@ -335,6 +414,22 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
"""Return the album name."""
return self._album_name
@property
def telegram_cache(self) -> TelegramFileCache | None:
"""Return the Telegram file cache (URL-based)."""
return self._telegram_cache
@property
def telegram_asset_cache(self) -> TelegramFileCache | None:
"""Return the Telegram asset cache (asset ID-based)."""
return self._telegram_asset_cache
def get_asset_thumbhash(self, asset_id: str) -> str | None:
"""Get the current thumbhash for an asset from coordinator data."""
if self.data and asset_id in self.data.assets:
return self.data.assets[asset_id].thumbhash
return None
def update_scan_interval(self, scan_interval: int) -> None:
"""Update the scan interval."""
self.update_interval = timedelta(seconds=scan_interval)
@@ -362,18 +457,36 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
async def async_get_assets(
self,
count: int = 10,
filter: str = "none",
limit: int = 10,
offset: int = 0,
favorite_only: bool = False,
filter_min_rating: int = 1,
order_by: str = "date",
order: str = "descending",
asset_type: str = "all",
min_date: str | None = None,
max_date: str | None = None,
memory_date: str | None = None,
city: str | None = None,
state: str | None = None,
country: str | None = None,
) -> list[dict[str, Any]]:
"""Get assets from the album with optional filtering and ordering.
Args:
count: Maximum number of assets to return (1-100)
filter: Filter type - 'none', 'favorite', or 'rating'
filter_min_rating: Minimum rating for assets (1-5), used when filter='rating'
order: Sort order - 'ascending', 'descending', or 'random'
limit: Maximum number of assets to return (1-100)
offset: Number of assets to skip before returning results (for pagination)
favorite_only: Filter to show only favorite assets
filter_min_rating: Minimum rating for assets (1-5)
order_by: Field to sort by - 'date', 'rating', or 'name'
order: Sort direction - 'ascending', 'descending', or 'random'
asset_type: Asset type filter - 'all', 'photo', or 'video'
min_date: Filter assets created on or after this date (ISO 8601 format)
max_date: Filter assets created on or before this date (ISO 8601 format)
memory_date: Filter assets by matching month and day, excluding the same year (memories filter)
city: Filter assets by city (case-insensitive substring match)
state: Filter assets by state/region (case-insensitive substring match)
country: Filter assets by country (case-insensitive substring match)
Returns:
List of asset data dictionaries
@@ -384,23 +497,91 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
# Start with all processed assets only
assets = [a for a in self.data.assets.values() if a.is_processed]
# Apply filtering
if filter == "favorite":
# Apply favorite filter
if favorite_only:
assets = [a for a in assets if a.is_favorite]
elif filter == "rating":
# Apply rating filter
if filter_min_rating > 1:
assets = [a for a in assets if a.rating is not None and a.rating >= filter_min_rating]
# Apply asset type filtering
if asset_type == "photo":
assets = [a for a in assets if a.type == ASSET_TYPE_IMAGE]
elif asset_type == "video":
assets = [a for a in assets if a.type == ASSET_TYPE_VIDEO]
# Apply date filtering
if min_date:
assets = [a for a in assets if a.created_at >= min_date]
if max_date:
assets = [a for a in assets if a.created_at <= max_date]
# Apply memory date filtering (match month and day, exclude same year)
if memory_date:
try:
# Parse the reference date (supports ISO 8601 format)
ref_date = datetime.fromisoformat(memory_date.replace("Z", "+00:00"))
ref_year = ref_date.year
ref_month = ref_date.month
ref_day = ref_date.day
def matches_memory(asset: AssetInfo) -> bool:
"""Check if asset matches memory criteria (same month/day, different year)."""
try:
asset_date = datetime.fromisoformat(
asset.created_at.replace("Z", "+00:00")
)
# Match month and day, but exclude same year (true memories behavior)
return (
asset_date.month == ref_month
and asset_date.day == ref_day
and asset_date.year != ref_year
)
except (ValueError, AttributeError):
return False
assets = [a for a in assets if matches_memory(a)]
except ValueError:
_LOGGER.warning("Invalid memory_date format: %s", memory_date)
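The memories filter matches month and day while excluding the reference year itself. Condensed into a standalone predicate over ISO 8601 strings (the `Z` suffix is normalized the same way as above):

```python
from datetime import datetime

def is_memory(asset_iso: str, ref_iso: str) -> bool:
    """Match the reference date's month and day but exclude the same
    year, mirroring the memories behavior in the filter above."""
    asset = datetime.fromisoformat(asset_iso.replace("Z", "+00:00"))
    ref = datetime.fromisoformat(ref_iso.replace("Z", "+00:00"))
    return (
        asset.month == ref.month
        and asset.day == ref.day
        and asset.year != ref.year
    )

print(is_memory("2022-03-19T10:00:00Z", "2026-03-19T00:00:00Z"))  # True
print(is_memory("2026-03-19T10:00:00Z", "2026-03-19T00:00:00Z"))  # False
```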
# Apply geolocation filtering (case-insensitive substring match)
if city:
city_lower = city.lower()
assets = [a for a in assets if a.city and city_lower in a.city.lower()]
if state:
state_lower = state.lower()
assets = [a for a in assets if a.state and state_lower in a.state.lower()]
if country:
country_lower = country.lower()
assets = [a for a in assets if a.country and country_lower in a.country.lower()]
# Apply ordering
if order == "random":
if order_by == "random":
import random
random.shuffle(assets)
elif order == "ascending":
assets = sorted(assets, key=lambda a: a.created_at, reverse=False)
else: # descending (default)
assets = sorted(assets, key=lambda a: a.created_at, reverse=True)
elif order_by == "rating":
# Sort by rating, keeping unrated (None) assets last in both
# directions; reversing the whole sort would move them first
descending = order == "descending"
assets = sorted(
assets,
key=lambda a: (a.rating is None, (-a.rating if descending else a.rating) if a.rating is not None else 0),
)
elif order_by == "name":
assets = sorted(
assets,
key=lambda a: a.filename.lower(),
reverse=(order == "descending")
)
else: # date (default)
assets = sorted(
assets,
key=lambda a: a.created_at,
reverse=(order == "descending")
)
# Limit count
assets = assets[:count]
# Apply offset and limit for pagination
assets = assets[offset : offset + limit]
# Build result with all available asset data (matching event data)
result = []
@@ -456,6 +637,36 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
return self._users_cache
async def _async_fetch_server_config(self) -> None:
"""Fetch server config from Immich to get external domain."""
if self._session is None:
self._session = async_get_clientsession(self.hass)
headers = {"x-api-key": self._api_key}
try:
async with self._session.get(
f"{self._url}/api/server/config",
headers=headers,
) as response:
if response.status == 200:
data = await response.json()
external_domain = data.get("externalDomain", "") or ""
self._external_domain = external_domain
if external_domain:
_LOGGER.debug(
"Using external domain from Immich: %s", external_domain
)
else:
_LOGGER.debug(
"No external domain configured in Immich, using connection URL"
)
else:
_LOGGER.warning(
"Failed to fetch server config: HTTP %s", response.status
)
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to fetch server config: %s", err)
async def _async_fetch_shared_links(self) -> list[SharedLinkInfo]:
"""Fetch shared links for this album from Immich."""
if self._session is None:
@@ -499,29 +710,29 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
"""Get the public URL if album has an accessible shared link."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/share/{accessible_links[0].key}"
return f"{self.external_url}/share/{accessible_links[0].key}"
return None
def get_any_url(self) -> str | None:
"""Get any non-expired URL (prefers accessible, falls back to protected)."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/share/{accessible_links[0].key}"
return f"{self.external_url}/share/{accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self._url}/share/{non_expired[0].key}"
return f"{self.external_url}/share/{non_expired[0].key}"
return None
def get_protected_url(self) -> str | None:
"""Get a protected URL if any password-protected link exists."""
protected_links = self._get_protected_links()
if protected_links:
return f"{self._url}/share/{protected_links[0].key}"
return f"{self.external_url}/share/{protected_links[0].key}"
return None
def get_protected_urls(self) -> list[str]:
"""Get all password-protected URLs."""
return [f"{self._url}/share/{link.key}" for link in self._get_protected_links()]
return [f"{self.external_url}/share/{link.key}" for link in self._get_protected_links()]
def get_protected_password(self) -> str | None:
"""Get the password for the first protected link."""
@@ -532,13 +743,13 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
def get_public_urls(self) -> list[str]:
"""Get all accessible public URLs."""
return [f"{self._url}/share/{link.key}" for link in self._get_accessible_links()]
return [f"{self.external_url}/share/{link.key}" for link in self._get_accessible_links()]
def get_shared_links_info(self) -> list[dict[str, Any]]:
"""Get detailed info about all shared links."""
return [
{
"url": f"{self._url}/share/{link.key}",
"url": f"{self.external_url}/share/{link.key}",
"has_password": link.has_password,
"is_expired": link.is_expired,
"expires_at": link.expires_at.isoformat() if link.expires_at else None,
@@ -551,40 +762,40 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
"""Get the public viewer URL for an asset (web page)."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/share/{accessible_links[0].key}/photos/{asset_id}"
return f"{self.external_url}/share/{accessible_links[0].key}/photos/{asset_id}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self._url}/share/{non_expired[0].key}/photos/{asset_id}"
return f"{self.external_url}/share/{non_expired[0].key}/photos/{asset_id}"
return None
def _get_asset_download_url(self, asset_id: str) -> str | None:
"""Get the direct download URL for an asset (media file)."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/api/assets/{asset_id}/original?key={accessible_links[0].key}"
return f"{self.external_url}/api/assets/{asset_id}/original?key={accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self._url}/api/assets/{asset_id}/original?key={non_expired[0].key}"
return f"{self.external_url}/api/assets/{asset_id}/original?key={non_expired[0].key}"
return None
def _get_asset_video_url(self, asset_id: str) -> str | None:
"""Get the transcoded video playback URL for a video asset."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/api/assets/{asset_id}/video/playback?key={accessible_links[0].key}"
return f"{self.external_url}/api/assets/{asset_id}/video/playback?key={accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self._url}/api/assets/{asset_id}/video/playback?key={non_expired[0].key}"
return f"{self.external_url}/api/assets/{asset_id}/video/playback?key={non_expired[0].key}"
return None
def _get_asset_photo_url(self, asset_id: str) -> str | None:
"""Get the preview-sized thumbnail URL for a photo asset."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/api/assets/{asset_id}/thumbnail?size=preview&key={accessible_links[0].key}"
return f"{self.external_url}/api/assets/{asset_id}/thumbnail?size=preview&key={accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self._url}/api/assets/{asset_id}/thumbnail?size=preview&key={non_expired[0].key}"
return f"{self.external_url}/api/assets/{asset_id}/thumbnail?size=preview&key={non_expired[0].key}"
return None
def _build_asset_detail(
@@ -611,11 +822,16 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
ATTR_PEOPLE: asset.people,
ATTR_ASSET_IS_FAVORITE: asset.is_favorite,
ATTR_ASSET_RATING: asset.rating,
ATTR_ASSET_LATITUDE: asset.latitude,
ATTR_ASSET_LONGITUDE: asset.longitude,
ATTR_ASSET_CITY: asset.city,
ATTR_ASSET_STATE: asset.state,
ATTR_ASSET_COUNTRY: asset.country,
}
# Add thumbnail URL if requested
if include_thumbnail:
asset_detail[ATTR_THUMBNAIL_URL] = f"{self._url}/api/assets/{asset.id}/thumbnail"
asset_detail[ATTR_THUMBNAIL_URL] = f"{self.external_url}/api/assets/{asset.id}/thumbnail"
# Add public viewer URL (web page)
asset_url = self._get_asset_public_url(asset.id)
@@ -644,6 +860,10 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
if self._session is None:
self._session = async_get_clientsession(self.hass)
# Fetch server config to get external domain (once)
if self._external_domain is None:
await self._async_fetch_server_config()
# Fetch users to resolve owner names
if not self._users_cache:
await self._async_fetch_users()
@@ -697,11 +917,16 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
elif removed_ids and not added_ids:
change_type = "assets_removed"
added_assets = [
album.assets[aid]
for aid in added_ids
if aid in album.assets
]
added_assets = []
for aid in added_ids:
if aid not in album.assets:
continue
asset = album.assets[aid]
if asset.is_processed:
added_assets.append(asset)
else:
# Track unprocessed assets for later
self._pending_asset_ids.add(aid)
change = AlbumChange(
album_id=album.id,
@@ -758,12 +983,54 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
added_ids = new_state.asset_ids - old_state.asset_ids
removed_ids = old_state.asset_ids - new_state.asset_ids
# Only include fully processed assets in added_assets
added_assets = [
new_state.assets[aid]
for aid in added_ids
if aid in new_state.assets and new_state.assets[aid].is_processed
]
_LOGGER.debug(
"Change detection: added_ids=%d, removed_ids=%d, pending=%d",
len(added_ids),
len(removed_ids),
len(self._pending_asset_ids),
)
# Track new unprocessed assets and collect processed ones
added_assets = []
for aid in added_ids:
if aid not in new_state.assets:
_LOGGER.debug("Asset %s: not in assets dict", aid)
continue
asset = new_state.assets[aid]
_LOGGER.debug(
"New asset %s (%s): is_processed=%s, filename=%s",
aid,
asset.type,
asset.is_processed,
asset.filename,
)
if asset.is_processed:
added_assets.append(asset)
else:
# Track unprocessed assets for later
self._pending_asset_ids.add(aid)
_LOGGER.debug("Asset %s added to pending (not yet processed)", aid)
# Check if any pending assets are now processed
newly_processed = []
for aid in list(self._pending_asset_ids):
if aid not in new_state.assets:
# Asset was removed, no longer pending
self._pending_asset_ids.discard(aid)
continue
asset = new_state.assets[aid]
if asset.is_processed:
_LOGGER.debug(
"Pending asset %s (%s) is now processed: filename=%s",
aid,
asset.type,
asset.filename,
)
newly_processed.append(asset)
self._pending_asset_ids.discard(aid)
# Include newly processed pending assets
added_assets.extend(newly_processed)
# Detect metadata changes
name_changed = old_state.name != new_state.name
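The added/pending partition in the hunk above can be exercised in isolation. This is a minimal sketch, not the coordinator itself: `Asset` is a hypothetical stand-in for an Immich asset record, and `split_added` is an illustrative helper that mirrors the loop logic (processed assets are returned immediately, unprocessed ones are parked and promoted on a later poll).

```python
from dataclasses import dataclass


@dataclass
class Asset:
    """Hypothetical stand-in for an Immich asset record."""
    id: str
    is_processed: bool


def split_added(added_ids: set[str], assets: dict[str, Asset], pending: set[str]) -> list[Asset]:
    """Partition newly added assets into ready and pending.

    Mirrors the coordinator logic sketched above: processed assets are
    collected immediately; unprocessed ones go into `pending` and are
    promoted once is_processed flips to True on a later call.
    """
    ready: list[Asset] = []
    for aid in added_ids:
        asset = assets.get(aid)
        if asset is None:
            continue
        if asset.is_processed:
            ready.append(asset)
        else:
            pending.add(aid)
    # Re-check previously pending assets; drop ones that were removed.
    for aid in list(pending):
        asset = assets.get(aid)
        if asset is None:
            pending.discard(aid)
            continue
        if asset.is_processed:
            ready.append(asset)
            pending.discard(aid)
    return ready
```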

View File

@@ -8,5 +8,5 @@
"iot_class": "cloud_polling",
"issue_tracker": "https://github.com/DolgolyovAlexei/haos-hacs-immich-album-watcher/issues",
"requirements": [],
"version": "2.1.0"
"version": "2.8.0"
}

File diff suppressed because it is too large

View File

@@ -14,8 +14,8 @@ get_assets:
integration: immich_album_watcher
domain: sensor
fields:
count:
name: Count
limit:
name: Limit
description: Maximum number of assets to return (1-100).
required: false
default: 10
@@ -24,23 +24,25 @@ get_assets:
min: 1
max: 100
mode: slider
filter:
name: Filter
description: Filter assets by type (none, favorite, or rating-based).
offset:
name: Offset
description: Number of assets to skip before returning results (for pagination). Use with limit to fetch assets in pages.
required: false
default: "none"
default: 0
selector:
select:
options:
- label: "None (no filtering)"
value: "none"
- label: "Favorites only"
value: "favorite"
- label: "By minimum rating"
value: "rating"
number:
min: 0
mode: box
favorite_only:
name: Favorite Only
description: Filter to show only favorite assets.
required: false
default: false
selector:
boolean:
filter_min_rating:
name: Minimum Rating
description: Minimum rating for assets (1-5). Only used when filter is set to 'rating'.
description: Minimum rating for assets (1-5). Provide a value to filter by rating.
required: false
default: 1
selector:
@@ -48,24 +50,88 @@ get_assets:
min: 1
max: 5
mode: slider
order_by:
name: Order By
description: Field to sort assets by.
required: false
default: "date"
selector:
select:
options:
- label: "Date"
value: "date"
- label: "Rating"
value: "rating"
- label: "Name"
value: "name"
- label: "Random"
value: "random"
order:
name: Order
description: Sort order for assets by creation date.
description: Sort direction.
required: false
default: "descending"
selector:
select:
options:
- label: "Ascending (oldest first)"
- label: "Ascending"
value: "ascending"
- label: "Descending (newest first)"
- label: "Descending"
value: "descending"
- label: "Random"
value: "random"
asset_type:
name: Asset Type
description: Filter assets by type (all, photo, or video).
required: false
default: "all"
selector:
select:
options:
- label: "All (no type filtering)"
value: "all"
- label: "Photos only"
value: "photo"
- label: "Videos only"
value: "video"
min_date:
name: Minimum Date
description: Filter assets created on or after this date (ISO 8601 format, e.g., 2024-01-01 or 2024-01-01T10:30:00).
required: false
selector:
text:
max_date:
name: Maximum Date
description: Filter assets created on or before this date (ISO 8601 format, e.g., 2024-12-31 or 2024-12-31T23:59:59).
required: false
selector:
text:
memory_date:
name: Memory Date
description: Filter assets by matching month and day, excluding the same year (memories filter like Google Photos). Provide a date in ISO 8601 format (e.g., 2024-02-14) to get assets from February 14th of previous years.
required: false
selector:
text:
city:
name: City
description: Filter assets by city name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
required: false
selector:
text:
state:
name: State
description: Filter assets by state/region name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
required: false
selector:
text:
country:
name: Country
description: Filter assets by country name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
required: false
selector:
text:
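The `memory_date` field above matches month and day while excluding the same year. A sketch of that predicate (hypothetical `matches_memory` helper; the integration's actual filter code is not shown in this diff):

```python
from datetime import date


def matches_memory(asset_date: date, memory_date: date) -> bool:
    """Google-Photos-style memories match: same month and day, different year."""
    return (
        asset_date.month == memory_date.month
        and asset_date.day == memory_date.day
        and asset_date.year != memory_date.year
    )
```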
send_telegram_notification:
name: Send Telegram Notification
description: Send a notification to Telegram (text, photo, video, or media group).
description: Send a notification to Telegram (text, photo, video, document, or media group).
target:
entity:
integration: immich_album_watcher
@@ -83,9 +149,9 @@ send_telegram_notification:
required: true
selector:
text:
urls:
name: URLs
description: List of media URLs to send. Each item should have 'url' and 'type' (photo/video). If empty, sends a text message. Large lists are automatically split into multiple media groups.
assets:
name: Assets
description: "List of media assets to send. Each item should have 'url', optional 'type' (document/photo/video, default: document), optional 'content_type' (MIME type, e.g., 'image/jpeg'), and optional 'cache_key' (custom key for caching instead of URL). If empty, sends a text message. Photos and videos can be grouped; documents are sent separately."
required: false
selector:
object:
@@ -172,3 +238,33 @@ send_telegram_notification:
default: false
selector:
boolean:
chat_action:
name: Chat Action
description: Chat action to display while processing (typing, upload_photo, upload_video, upload_document). Set to empty to disable.
required: false
default: "typing"
selector:
select:
options:
- label: "Typing"
value: "typing"
- label: "Uploading Photo"
value: "upload_photo"
- label: "Uploading Video"
value: "upload_video"
- label: "Uploading Document"
value: "upload_document"
- label: "Disabled"
value: ""
quiet_hours_start:
name: Quiet Hours Start
description: "Start time for quiet hours (HH:MM format, e.g. 22:00). When set along with quiet_hours_end, notifications during this period are queued and sent when quiet hours end. Omit to send immediately."
required: false
selector:
text:
quiet_hours_end:
name: Quiet Hours End
description: "End time for quiet hours (HH:MM format, e.g. 08:00). Queued notifications will be sent at this time."
required: false
selector:
text:
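A quiet-hours window given as HH:MM start/end may cross midnight (e.g. 22:00 to 08:00). One plausible check is sketched below; `parse_hhmm` and `in_quiet_hours` are hypothetical helpers, and the integration's actual implementation may differ:

```python
from datetime import time


def parse_hhmm(value: str) -> time:
    """Parse an 'HH:MM' service parameter into a time object."""
    hour, minute = value.split(":")
    return time(int(hour), int(minute))


def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """True if `now` falls inside the quiet window.

    A window that crosses midnight (start > end) wraps around:
    it covers [start, 24:00) plus [00:00, end).
    """
    if start <= end:
        return start <= now < end
    return now >= start or now < end
```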

View File

@@ -14,6 +14,9 @@ _LOGGER = logging.getLogger(__name__)
STORAGE_VERSION = 1
STORAGE_KEY_PREFIX = "immich_album_watcher"
# Default TTL for Telegram file_id cache (48 hours in seconds)
DEFAULT_TELEGRAM_CACHE_TTL = 48 * 60 * 60
class ImmichAlbumStorage:
"""Handles persistence of album state across restarts."""
@@ -63,3 +66,262 @@ class ImmichAlbumStorage:
"""Remove all storage data."""
await self._store.async_remove()
self._data = None
class TelegramFileCache:
"""Cache for Telegram file_ids to avoid re-uploading media.
When a file is uploaded to Telegram, it returns a file_id that can be reused
to send the same file without re-uploading. This cache stores these file_ids
keyed by the source URL or asset ID.
Supports two validation modes:
- TTL mode (default): entries expire after a configured time-to-live
- Thumbhash mode: entries are validated by comparing stored thumbhash with
the current asset thumbhash from Immich
"""
def __init__(
self,
hass: HomeAssistant,
entry_id: str,
ttl_seconds: int = DEFAULT_TELEGRAM_CACHE_TTL,
use_thumbhash: bool = False,
) -> None:
"""Initialize the Telegram file cache.
Args:
hass: Home Assistant instance
entry_id: Config entry ID for scoping the cache (per hub)
ttl_seconds: Time-to-live for cache entries in seconds (TTL mode only)
use_thumbhash: Use thumbhash-based validation instead of TTL
"""
self._store: Store[dict[str, Any]] = Store(
hass, STORAGE_VERSION, f"{STORAGE_KEY_PREFIX}.telegram_cache.{entry_id}"
)
self._data: dict[str, Any] | None = None
self._ttl_seconds = ttl_seconds
self._use_thumbhash = use_thumbhash
async def async_load(self) -> None:
"""Load cache data from storage."""
self._data = await self._store.async_load() or {"files": {}}
# Clean up expired entries on load (TTL mode only)
await self._cleanup_expired()
mode = "thumbhash" if self._use_thumbhash else "TTL"
_LOGGER.debug(
"Loaded Telegram file cache with %d entries (mode: %s)",
len(self._data.get("files", {})),
mode,
)
# Maximum number of entries to keep in thumbhash mode to prevent unbounded growth
THUMBHASH_MAX_ENTRIES = 2000
async def _cleanup_expired(self) -> None:
"""Remove expired cache entries (TTL mode) or trim old entries (thumbhash mode)."""
if self._use_thumbhash:
files = self._data.get("files", {}) if self._data else {}
if len(files) > self.THUMBHASH_MAX_ENTRIES:
sorted_keys = sorted(
files, key=lambda k: files[k].get("cached_at", "")
)
keys_to_remove = sorted_keys[: len(files) - self.THUMBHASH_MAX_ENTRIES]
for key in keys_to_remove:
del files[key]
await self._store.async_save(self._data)
_LOGGER.debug(
"Trimmed thumbhash cache from %d to %d entries",
len(keys_to_remove) + self.THUMBHASH_MAX_ENTRIES,
self.THUMBHASH_MAX_ENTRIES,
)
return
if not self._data or "files" not in self._data:
return
now = datetime.now(timezone.utc)
expired_keys = []
for url, entry in self._data["files"].items():
cached_at_str = entry.get("cached_at")
if cached_at_str:
cached_at = datetime.fromisoformat(cached_at_str)
age_seconds = (now - cached_at).total_seconds()
if age_seconds > self._ttl_seconds:
expired_keys.append(url)
if expired_keys:
for key in expired_keys:
del self._data["files"][key]
await self._store.async_save(self._data)
_LOGGER.debug("Cleaned up %d expired Telegram cache entries", len(expired_keys))
def get(self, key: str, thumbhash: str | None = None) -> dict[str, Any] | None:
"""Get cached file_id for a key.
Args:
key: The cache key (URL or asset ID)
thumbhash: Current thumbhash for validation (thumbhash mode only).
If provided, compares with stored thumbhash. Mismatch = cache miss.
Returns:
Dict with 'file_id' and 'type' if cached and valid, None otherwise
"""
if not self._data or "files" not in self._data:
return None
entry = self._data["files"].get(key)
if not entry:
return None
if self._use_thumbhash:
# Thumbhash-based validation
if thumbhash is not None:
stored_thumbhash = entry.get("thumbhash")
if stored_thumbhash and stored_thumbhash != thumbhash:
_LOGGER.debug(
"Cache miss for %s: thumbhash changed, removing stale entry",
key[:36],
)
del self._data["files"][key]
return None
# If no thumbhash provided (asset not in monitored album),
# return cached entry anyway — self-heals on Telegram rejection
else:
# TTL-based validation
cached_at_str = entry.get("cached_at")
if cached_at_str:
cached_at = datetime.fromisoformat(cached_at_str)
age_seconds = (datetime.now(timezone.utc) - cached_at).total_seconds()
if age_seconds > self._ttl_seconds:
return None
return {
"file_id": entry.get("file_id"),
"type": entry.get("type"),
}
async def async_set(
self, key: str, file_id: str, media_type: str, thumbhash: str | None = None
) -> None:
"""Store a file_id for a key.
Args:
key: The cache key (URL or asset ID)
file_id: The Telegram file_id
media_type: The type of media ('photo', 'video', 'document')
thumbhash: Current thumbhash to store alongside file_id (thumbhash mode only)
"""
if self._data is None:
self._data = {"files": {}}
entry_data: dict[str, Any] = {
"file_id": file_id,
"type": media_type,
"cached_at": datetime.now(timezone.utc).isoformat(),
}
if thumbhash is not None:
entry_data["thumbhash"] = thumbhash
self._data["files"][key] = entry_data
await self._store.async_save(self._data)
_LOGGER.debug("Cached Telegram file_id for key (type: %s)", media_type)
async def async_set_many(
self, entries: list[tuple[str, str, str, str | None]]
) -> None:
"""Store multiple file_ids in a single disk write.
Args:
entries: List of (key, file_id, media_type, thumbhash) tuples
"""
if not entries:
return
if self._data is None:
self._data = {"files": {}}
now_iso = datetime.now(timezone.utc).isoformat()
for key, file_id, media_type, thumbhash in entries:
entry_data: dict[str, Any] = {
"file_id": file_id,
"type": media_type,
"cached_at": now_iso,
}
if thumbhash is not None:
entry_data["thumbhash"] = thumbhash
self._data["files"][key] = entry_data
await self._store.async_save(self._data)
_LOGGER.debug("Batch cached %d Telegram file_ids", len(entries))
async def async_remove(self) -> None:
"""Remove all cache data."""
await self._store.async_remove()
self._data = None
class NotificationQueue:
"""Persistent queue for notifications deferred during quiet hours.
Stores full service call parameters so notifications can be replayed
exactly as they were originally called.
"""
def __init__(self, hass: HomeAssistant, entry_id: str) -> None:
"""Initialize the notification queue."""
self._store: Store[dict[str, Any]] = Store(
hass, STORAGE_VERSION, f"{STORAGE_KEY_PREFIX}.notification_queue.{entry_id}"
)
self._data: dict[str, Any] | None = None
async def async_load(self) -> None:
"""Load queue data from storage."""
self._data = await self._store.async_load() or {"queue": []}
_LOGGER.debug(
"Loaded notification queue with %d items",
len(self._data.get("queue", [])),
)
async def async_enqueue(self, notification_params: dict[str, Any]) -> None:
"""Add a notification to the queue."""
if self._data is None:
self._data = {"queue": []}
self._data["queue"].append({
"params": notification_params,
"queued_at": datetime.now(timezone.utc).isoformat(),
})
await self._store.async_save(self._data)
_LOGGER.debug("Queued notification during quiet hours (total: %d)", len(self._data["queue"]))
def get_all(self) -> list[dict[str, Any]]:
"""Get all queued notifications."""
if not self._data:
return []
return list(self._data.get("queue", []))
def has_pending(self) -> bool:
"""Check if there are pending notifications."""
return bool(self._data and self._data.get("queue"))
async def async_remove_indices(self, indices: list[int]) -> None:
"""Remove specific items by index (indices must be in descending order)."""
if not self._data or not indices:
return
for idx in indices:
if 0 <= idx < len(self._data["queue"]):
del self._data["queue"][idx]
await self._store.async_save(self._data)
async def async_clear(self) -> None:
"""Clear all queued notifications."""
if self._data:
self._data["queue"] = []
await self._store.async_save(self._data)
async def async_remove(self) -> None:
"""Remove all queue data."""
await self._store.async_remove()
self._data = None
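`async_remove_indices` requires descending indices because each deletion shifts every later position left. A defensive sketch that sorts internally (unlike the method above, which trusts the caller to pre-sort):

```python
def remove_indices(items: list, indices: list[int]) -> None:
    """Delete items at the given positions in place.

    Indices are processed in descending order so a deletion can never
    shift the position of an index that has not been handled yet.
    """
    for idx in sorted(indices, reverse=True):
        if 0 <= idx < len(items):
            del items[idx]
```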

View File

@@ -116,14 +116,16 @@
"step": {
"init": {
"title": "Immich Album Watcher Options",
"description": "Configure the polling interval for all albums.",
"description": "Configure the polling interval and Telegram settings for all albums.",
"data": {
"scan_interval": "Scan interval (seconds)",
"telegram_bot_token": "Telegram Bot Token"
"telegram_bot_token": "Telegram Bot Token",
"telegram_cache_ttl": "Telegram Cache TTL (hours)"
},
"data_description": {
"scan_interval": "How often to check for album changes (10-3600 seconds)",
"telegram_bot_token": "Bot token for sending notifications to Telegram"
"telegram_bot_token": "Bot token for sending notifications to Telegram",
"telegram_cache_ttl": "How long to cache uploaded file IDs to avoid re-uploading (1-168 hours, default: 48)"
}
}
}
@@ -137,27 +139,63 @@
"name": "Get Assets",
"description": "Get assets from the targeted album with optional filtering and ordering.",
"fields": {
"count": {
"name": "Count",
"limit": {
"name": "Limit",
"description": "Maximum number of assets to return (1-100)."
},
"filter": {
"name": "Filter",
"description": "Filter assets by type (none, favorite, or rating-based)."
"offset": {
"name": "Offset",
"description": "Number of assets to skip (for pagination)."
},
"favorite_only": {
"name": "Favorite Only",
"description": "Filter to show only favorite assets."
},
"filter_min_rating": {
"name": "Minimum Rating",
"description": "Minimum rating for assets (1-5). Only used when filter is set to 'rating'."
"description": "Minimum rating for assets (1-5)."
},
"order_by": {
"name": "Order By",
"description": "Field to sort assets by (date, rating, name, or random)."
},
"order": {
"name": "Order",
"description": "Sort order for assets by creation date."
"description": "Sort direction (ascending or descending)."
},
"asset_type": {
"name": "Asset Type",
"description": "Filter assets by type (all, photo, or video)."
},
"min_date": {
"name": "Minimum Date",
"description": "Filter assets created on or after this date (ISO 8601 format)."
},
"max_date": {
"name": "Maximum Date",
"description": "Filter assets created on or before this date (ISO 8601 format)."
},
"memory_date": {
"name": "Memory Date",
"description": "Filter assets by matching month and day, excluding the same year (memories filter)."
},
"city": {
"name": "City",
"description": "Filter assets by city name (case-insensitive)."
},
"state": {
"name": "State",
"description": "Filter assets by state/region name (case-insensitive)."
},
"country": {
"name": "Country",
"description": "Filter assets by country name (case-insensitive)."
}
}
},
"send_telegram_notification": {
"name": "Send Telegram Notification",
"description": "Send a notification to Telegram (text, photo, video, or media group).",
"description": "Send a notification to Telegram (text, photo, video, document, or media group).",
"fields": {
"bot_token": {
"name": "Bot Token",
@@ -167,9 +205,9 @@
"name": "Chat ID",
"description": "Telegram chat ID to send to."
},
"urls": {
"name": "URLs",
"description": "List of media URLs with type (photo/video). If empty, sends a text message. Large lists are automatically split into multiple media groups."
"assets": {
"name": "Assets",
"description": "List of media assets with 'url', optional 'type' (document/photo/video, default: document), optional 'content_type' (MIME type), and optional 'cache_key' (custom key for caching instead of URL). If empty, sends a text message. Photos and videos can be grouped; documents are sent separately."
},
"caption": {
"name": "Caption",
@@ -205,7 +243,19 @@
},
"send_large_photos_as_documents": {
"name": "Send Large Photos As Documents",
"description": "How to handle photos exceeding Telegram's limits (10MB or 10000px dimension sum). If true, send as documents. If false, downsize to fit limits."
"description": "How to handle photos exceeding Telegram's limits (10MB or 10000px dimension sum). If true, send as documents. If false, skip oversized photos."
},
"chat_action": {
"name": "Chat Action",
"description": "Chat action to display while processing (typing, upload_photo, upload_video, upload_document). Set to empty to disable."
},
"quiet_hours_start": {
"name": "Quiet Hours Start",
"description": "Start time for quiet hours (HH:MM format, e.g. 22:00). Notifications during this period are queued and sent when quiet hours end. Omit to send immediately."
},
"quiet_hours_end": {
"name": "Quiet Hours End",
"description": "End time for quiet hours (HH:MM format, e.g. 08:00). Queued notifications will be sent at this time."
}
}
}

View File

@@ -116,14 +116,16 @@
"step": {
"init": {
"title": "Настройки Immich Album Watcher",
"description": "Настройте интервал опроса для всех альбомов.",
"description": "Настройте интервал опроса и параметры Telegram для всех альбомов.",
"data": {
"scan_interval": "Интервал сканирования (секунды)",
"telegram_bot_token": "Токен Telegram бота"
"telegram_bot_token": "Токен Telegram бота",
"telegram_cache_ttl": "Время жизни кэша Telegram (часы)"
},
"data_description": {
"scan_interval": "Как часто проверять изменения в альбомах (10-3600 секунд)",
"telegram_bot_token": "Токен бота для отправки уведомлений в Telegram"
"telegram_bot_token": "Токен бота для отправки уведомлений в Telegram",
"telegram_cache_ttl": "Сколько хранить ID загруженных файлов для повторной отправки без загрузки (1-168 часов, по умолчанию: 48)"
}
}
}
@@ -137,27 +139,63 @@
"name": "Получить файлы",
"description": "Получить файлы из выбранного альбома с возможностью фильтрации и сортировки.",
"fields": {
"count": {
"name": "Количество",
"limit": {
"name": "Лимит",
"description": "Максимальное количество возвращаемых файлов (1-100)."
},
"filter": {
"name": "Фильтр",
"description": "Фильтровать файлы по типу (none - без фильтра, favorite - только избранные, rating - по рейтингу)."
"offset": {
"name": "Смещение",
"description": "Количество файлов для пропуска (для пагинации)."
},
"favorite_only": {
"name": "Только избранные",
"description": "Фильтр для отображения только избранных файлов."
},
"filter_min_rating": {
"name": "Минимальный рейтинг",
"description": "Минимальный рейтинг для файлов (1-5). Используется только при filter='rating'."
"description": "Минимальный рейтинг для файлов (1-5)."
},
"order_by": {
"name": "Сортировать по",
"description": "Поле для сортировки файлов (date - дата, rating - рейтинг, name - имя, random - случайный)."
},
"order": {
"name": "Порядок",
"description": "Порядок сортировки файлов по дате создания."
"description": "Направление сортировки (ascending - по возрастанию, descending - по убыванию)."
},
"asset_type": {
"name": "Тип файла",
"description": "Фильтровать файлы по типу (all - все, photo - только фото, video - только видео)."
},
"min_date": {
"name": "Минимальная дата",
"description": "Фильтровать файлы, созданные в эту дату или после (формат ISO 8601)."
},
"max_date": {
"name": "Максимальная дата",
"description": "Фильтровать файлы, созданные в эту дату или до (формат ISO 8601)."
},
"memory_date": {
"name": "Дата воспоминания",
"description": "Фильтр по совпадению месяца и дня, исключая тот же год (воспоминания)."
},
"city": {
"name": "Город",
"description": "Фильтр по названию города (без учёта регистра)."
},
"state": {
"name": "Регион",
"description": "Фильтр по названию региона/области (без учёта регистра)."
},
"country": {
"name": "Страна",
"description": "Фильтр по названию страны (без учёта регистра)."
}
}
},
"send_telegram_notification": {
"name": "Отправить уведомление в Telegram",
"description": "Отправить уведомление в Telegram (текст, фото, видео или медиа-группу).",
"description": "Отправить уведомление в Telegram (текст, фото, видео, документ или медиа-группу).",
"fields": {
"bot_token": {
"name": "Токен бота",
@@ -167,9 +205,9 @@
"name": "ID чата",
"description": "ID чата Telegram для отправки."
},
"urls": {
"name": "URL-адреса",
"description": "Список URL медиа-файлов с типом (photo/video). Если пусто, отправляет текстовое сообщение. Большие списки автоматически разделяются на несколько медиа-групп."
"assets": {
"name": "Ресурсы",
"description": "Список медиа-ресурсов с 'url', опциональным 'type' (document/photo/video, по умолчанию document), опциональным 'content_type' (MIME-тип) и опциональным 'cache_key' (свой ключ кэширования вместо URL). Если пусто, отправляет текстовое сообщение. Фото и видео группируются; документы отправляются отдельно."
},
"caption": {
"name": "Подпись",
@@ -205,7 +243,19 @@
},
"send_large_photos_as_documents": {
"name": "Большие фото как документы",
"description": "Как обрабатывать фото, превышающие лимиты Telegram (10МБ или сумма размеров 10000пкс). Если true, отправлять как документы. Если false, уменьшать для соответствия лимитам."
"description": "Как обрабатывать фото, превышающие лимиты Telegram (10МБ или сумма размеров 10000пкс). Если true, отправлять как документы. Если false, пропускать."
},
"chat_action": {
"name": "Действие в чате",
"description": "Действие для отображения во время обработки (typing, upload_photo, upload_video, upload_document). Оставьте пустым для отключения."
},
"quiet_hours_start": {
"name": "Начало тихих часов",
"description": "Время начала тихих часов (формат ЧЧ:ММ, например 22:00). Уведомления в этот период ставятся в очередь и отправляются по окончании. Не указывайте для немедленной отправки."
},
"quiet_hours_end": {
"name": "Конец тихих часов",
"description": "Время окончания тихих часов (формат ЧЧ:ММ, например 08:00). Уведомления из очереди будут отправлены в это время."
}
}
}