16 Commits

Author SHA1 Message Date
a8ea9ab46a Rename on_this_day to memory_date with exclude-same-year behavior
All checks were successful
Validate / Hassfest (push) Successful in 2s
Renamed the date filter parameter and changed default behavior to match
Google Photos memories - now excludes assets from the same year as the
reference date, returning only photos from previous years on that day.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:24:08 +03:00
e88fd0fa3a Add get_assets filtering: offset, on_this_day, city, state, country
- Add offset parameter for pagination support
- Add on_this_day parameter for memories filtering (match month and day)
- Add city, state, country parameters for geolocation filtering
- Update README with new parameters and examples

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 12:25:35 +03:00
3cf916dc77 Rename last_updated attribute to last_updated_at
Renamed for consistency with created_at attribute naming.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 00:30:39 +03:00
df446390f2 Add album metadata attributes to Album ID sensor
Add asset_count, last_updated, and created_at attributes to the
Album ID sensor for convenient access to album metadata.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 00:20:38 +03:00
1d61f05552 Track pending assets for delayed processing events
- Add _pending_asset_ids to track assets detected but not yet processed
- Fire events when pending assets become processed (thumbhash available)
- Fixes issue where videos added during transcoding never triggered events
- Add debug logging for change detection and pending asset tracking
- Document external domain feature in README

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 22:23:32 +03:00
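The pending-asset mechanism described above can be sketched roughly as follows; `_pending_asset_ids` and the thumbhash check are named in the commit, while the class and method names here are illustrative, not the integration's actual code:

```python
# Hedged sketch: track assets seen before Immich finishes processing them,
# and report the ones that just became processed so an event can fire.
class PendingAssetTracker:
    def __init__(self) -> None:
        self._pending_asset_ids: set[str] = set()

    def update(self, assets: list[dict]) -> list[dict]:
        """Return assets that just transitioned from pending to processed."""
        newly_processed = []
        for asset in assets:
            processed = bool(asset.get("thumbhash"))  # thumbhash => processed
            if processed and asset["id"] in self._pending_asset_ids:
                self._pending_asset_ids.discard(asset["id"])
                newly_processed.append(asset)  # fire the event for these
            elif not processed:
                # e.g. a video still transcoding: check again on the next poll
                self._pending_asset_ids.add(asset["id"])
        return newly_processed
```

This is why a video added mid-transcode no longer falls through the cracks: it is remembered as pending and reported once its thumbhash appears.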
38a2a6ad7a Add external domain support for URLs
- Fetch externalDomain from Immich server config on startup
- Use external domain for user-facing URLs (share links, asset URLs)
- Keep internal connection URL for API calls
- Add get_internal_download_url() to convert external URLs back to
  internal for faster local network downloads (Telegram notifications)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 21:53:02 +03:00
0bb7e71a1e Fix video asset processing detection
- Use thumbhash for all assets instead of encodedVideoPath for videos
  (encodedVideoPath is not exposed in Immich API response)
- Add isTrashed check to exclude trashed assets from events
- Simplify processing status logic for both photos and videos

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 21:36:21 +03:00
c29fc2fbcf Add Telegram file ID caching and reverse geocoding fields
Implement caching for Telegram file_ids to avoid re-uploading the same media.
Cached IDs are reused for subsequent sends, improving performance significantly.
Added configurable cache TTL option (1-168 hours, default 48).

Also added city, state, and country fields from Immich reverse geocoding
to asset data in events and get_assets service.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 03:12:05 +03:00
011f105823 Add geolocation (latitude/longitude) to asset data
Expose GPS coordinates from EXIF data in asset responses. The latitude
and longitude fields are included in get_assets service responses and
event data when available.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 02:29:56 +03:00
ee45fdc177 Fix the services API
2026-02-01 02:22:52 +03:00
4b0f3b8b12 Enhance get_assets service with flexible filtering and sorting
- Replace filter parameter with independent favorite_only boolean
- Add order_by parameter supporting date, rating, and name sorting
- Rename count to limit for clarity
- Add date range filtering with min_date and max_date parameters
- Add asset_type filtering for photos and videos
- Update README with language support section and fixed sensor list
- Add translations for all new parameters

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-01 01:39:04 +03:00
e5e45f0fbf Add asset preprocessing filter and enhance asset data
Features:
- Filter unprocessed assets from events and get_assets service
  - Videos require completed transcoding (encodedVideoPath)
  - Photos require generated thumbnails (thumbhash)
- Add photo_url field for images (preview-sized thumbnail)
- Simplify asset attribute names (remove asset_ prefix)
- Prioritize user-added descriptions over EXIF descriptions

Documentation:
- Update README with new asset fields and preprocessing note
- Update services.yaml parameter descriptions

Version: 2.1.0

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-01 01:14:21 +03:00
8714685d5e Improve Telegram error handling and unify asset data structure
- Remove photo downscaling logic in favor of cleaner error handling
- Add intelligent Telegram API error logging with diagnostics and suggestions
- Define Telegram photo limits as global constants (TELEGRAM_MAX_PHOTO_SIZE, TELEGRAM_MAX_DIMENSION_SUM)
- Add photo_url support for image assets (matching video_url for videos)
- Unify asset detail building with shared _build_asset_detail() helper method
- Enhance get_assets service to return complete asset data matching events
- Simplify attribute naming by removing redundant asset_ prefix from values

BREAKING CHANGE: Asset attribute keys changed from "asset_type", "asset_filename"
to simpler "type", "filename" for consistency and cleaner JSON responses

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-31 23:40:19 +03:00
bbcd97e1ac Expose favorite and asset rating to asset data
2026-01-31 18:14:33 +03:00
04dd63825c Add intelligent handling for oversized photos in Telegram service
Implements send_large_photos_as_documents parameter to handle photos
exceeding Telegram's limits (10MB file size or 10000px dimension sum).

Features:
- Automatic detection of oversized photos using file size and PIL-based
  dimension checking
- Two handling modes:
  * send_large_photos_as_documents=false (default): Intelligently
    downsizes photos using Lanczos resampling and progressive JPEG
    quality reduction to fit within Telegram limits
  * send_large_photos_as_documents=true: Sends oversized photos as
    documents to preserve original quality
- For media groups: separates oversized photos and sends them as
  documents after the main group, or downsizes them inline
- Maintains backward compatibility with existing max_asset_data_size
  parameter for hard size limits

This resolves PHOTO_INVALID_DIMENSIONS errors for large images like
26MP photos while giving users control over quality vs. file size.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-31 18:03:50 +03:00
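The limit arithmetic behind this commit can be sketched as follows. The constant names (`TELEGRAM_MAX_PHOTO_SIZE`, `TELEGRAM_MAX_DIMENSION_SUM`) come from the commit messages; `exceeds_telegram_limits` and `fit_dimensions` are hypothetical helpers, and the real code additionally performs the Lanczos resampling and progressive JPEG quality reduction via PIL:

```python
TELEGRAM_MAX_PHOTO_SIZE = 10 * 1024 * 1024   # 10 MB file size limit
TELEGRAM_MAX_DIMENSION_SUM = 10_000          # width + height, in pixels

def exceeds_telegram_limits(size_bytes: int, width: int, height: int) -> bool:
    """True if a photo would trigger PHOTO_INVALID_DIMENSIONS or a size error."""
    return (size_bytes > TELEGRAM_MAX_PHOTO_SIZE
            or width + height > TELEGRAM_MAX_DIMENSION_SUM)

def fit_dimensions(width: int, height: int) -> tuple[int, int]:
    """Scale (width, height) proportionally so their sum fits Telegram's limit."""
    dim_sum = width + height
    if dim_sum <= TELEGRAM_MAX_DIMENSION_SUM:
        return width, height
    scale = TELEGRAM_MAX_DIMENSION_SUM / dim_sum
    return max(1, int(width * scale)), max(1, int(height * scale))
```

For a 26MP photo (e.g. 6000×4500 px after some crops can exceed the 10000 px sum), the scaled dimensions keep the aspect ratio while satisfying both limits.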
71d3714f6a Add max_asset_data_size parameter to Telegram service
Introduces optional max_asset_data_size parameter (in bytes) to filter
out oversized photos and videos from Telegram notifications. Assets
exceeding the limit are skipped with a warning, preventing
PHOTO_INVALID_DIMENSIONS errors for large images (e.g., 26MP photos).

Changes:
- Add max_asset_data_size parameter to service signature
- Implement size checking for single photos/videos
- Filter oversized assets in media groups
- Update services.yaml, translations, and documentation

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-31 17:31:14 +03:00
12 changed files with 1781 additions and 221 deletions


@@ -3,6 +3,7 @@
## Version Management
Update the integration version in `custom_components/immich_album_watcher/manifest.json` only when changes are made to the **integration content** (files inside `custom_components/immich_album_watcher/`).
**IMPORTANT:** ALWAYS ask before bumping the version.
Do NOT bump version for:

README.md

@@ -4,16 +4,21 @@
A Home Assistant custom integration that monitors [Immich](https://immich.app/) photo/video library albums for changes and exposes them as Home Assistant entities with event-firing capabilities.
> **Tip:** For the best experience, use this integration with the [Immich Album Watcher Blueprint](https://github.com/DolgolyovAlexei/haos-blueprints/blob/main/Common/Immich%20Album%20Watcher.yaml) to easily create automations for album change notifications.
## Features
- **Album Monitoring** - Watch selected Immich albums for asset additions and removals
- **Rich Sensor Data** - Multiple sensors per album:
- Album ID (with share URL attribute)
- Asset count (with detected people list)
- Photo count
- Video count
- Last updated timestamp
- Creation date
- Album ID (with album name and share URL attributes)
- Asset Count (total assets with detected people list)
- Photo Count (number of photos)
- Video Count (number of videos)
- Last Updated (last modification timestamp)
- Created (album creation date)
- Public URL (public share link)
- Protected URL (password-protected share link)
- Protected Password (password for protected link)
- **Camera Entity** - Album thumbnail displayed as a camera entity for dashboards
- **Binary Sensor** - "New Assets" indicator that turns on when assets are added
- **Face Recognition** - Detects and lists people recognized in album photos
@@ -31,13 +36,16 @@ A Home Assistant custom integration that monitors [Immich](https://immich.app/)
- Detected people in the asset
- **Services** - Custom service calls:
- `immich_album_watcher.refresh` - Force immediate data refresh
- `immich_album_watcher.get_recent_assets` - Get recent assets from an album
- `immich_album_watcher.get_assets` - Get assets from an album with filtering and ordering
- `immich_album_watcher.send_telegram_notification` - Send text, photo, video, or media group to Telegram
- **Share Link Management** - Button entities to create and delete share links:
- Create/delete public (unprotected) share links
- Create/delete password-protected share links
- Edit protected link passwords via Text entity
- **Configurable Polling** - Adjustable scan interval (10-3600 seconds)
- **Localization** - Available in multiple languages:
- English
- Russian (Русский)
## Installation
@@ -60,8 +68,6 @@ A Home Assistant custom integration that monitors [Immich](https://immich.app/)
3. Restart Home Assistant
4. Add the integration via **Settings** → **Devices & Services** → **Add Integration**
> **Tip:** For the best experience, use this integration with the [Immich Album Watcher Blueprint](https://github.com/DolgolyovAlexei/haos-blueprints/blob/main/Common/Immich%20Album%20Watcher.yaml) to easily create automations for album change notifications.
## Configuration
| Option | Description | Default |
@@ -71,12 +77,36 @@ A Home Assistant custom integration that monitors [Immich](https://immich.app/)
| Albums | Albums to monitor | Required |
| Scan Interval | How often to check for changes (seconds) | 60 |
| Telegram Bot Token | Bot token for sending media to Telegram (optional) | - |
| Telegram Cache TTL | How long to cache uploaded file IDs (hours, 1-168) | 48 |
### External Domain Support
The integration supports connecting to a local Immich server while using an external domain for user-facing URLs. This is useful when:
- Your Home Assistant connects to Immich via local network (e.g., `http://192.168.1.100:2283`)
- But you want share links and asset URLs to use your public domain (e.g., `https://photos.example.com`)
**How it works:**
1. Configure "External domain" in Immich: **Administration → Settings → Server → External Domain**
2. The integration automatically fetches this setting on startup
3. All user-facing URLs (share links, asset URLs in events) use the external domain
4. API calls and file downloads still use the local connection URL for faster performance
**Example:**
- Server URL (in integration config): `http://192.168.1.100:2283`
- External Domain (in Immich settings): `https://photos.example.com`
- Share links in events: `https://photos.example.com/share/...`
- Telegram downloads: via `http://192.168.1.100:2283` (fast local network)
If no external domain is configured in Immich, all URLs will use the Server URL from the integration configuration.
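A minimal sketch of this internal/external split, assuming the example URLs above; only `get_internal_download_url()` is named in the commit log, the rest is illustrative:

```python
# Sketch of the URL split: API traffic stays on the LAN address, while
# user-facing links use the public domain fetched from Immich's config.
INTERNAL_URL = "http://192.168.1.100:2283"   # Server URL (API calls, downloads)
EXTERNAL_URL = "https://photos.example.com"  # externalDomain from Immich settings

def public_url(path: str) -> str:
    """Build a user-facing URL (share links, asset URLs in events)."""
    return f"{EXTERNAL_URL}{path}"

def get_internal_download_url(url: str) -> str:
    """Map an external URL back to the internal one for fast LAN downloads."""
    if url.startswith(EXTERNAL_URL):
        return INTERNAL_URL + url[len(EXTERNAL_URL):]
    return url
```

This is the path Telegram notifications take: share links shown to users carry the public domain, but the bytes are fetched over the local network.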
## Entities Created (per album)
| Entity Type | Name | Description |
|-------------|------|-------------|
| Sensor | Album ID | Album identifier with `album_name` and `share_url` attributes |
| Sensor | Album ID | Album identifier with `album_name`, `asset_count`, `share_url`, `last_updated_at`, and `created_at` attributes |
| Sensor | Asset Count | Total number of assets (includes `people` list in attributes) |
| Sensor | Photo Count | Number of photos in the album |
| Sensor | Video Count | Number of videos in the album |
@@ -103,16 +133,201 @@ Force an immediate refresh of all album data:
service: immich_album_watcher.refresh
```
### Get Recent Assets
### Get Assets
Get the most recent assets from a specific album (returns response data):
Get assets from a specific album with optional filtering and ordering (returns response data). Only returns fully processed assets (videos with completed transcoding, photos with generated thumbnails).
```yaml
service: immich_album_watcher.get_recent_assets
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
count: 10
limit: 10 # Maximum number of assets (1-100)
offset: 0 # Number of assets to skip (for pagination)
favorite_only: false # true = favorites only, false = all assets
filter_min_rating: 4 # Min rating (1-5)
order_by: "date" # Options: "date", "rating", "name", "random"
order: "descending" # Options: "ascending", "descending"
asset_type: "all" # Options: "all", "photo", "video"
min_date: "2024-01-01" # Optional: assets created on or after this date
max_date: "2024-12-31" # Optional: assets created on or before this date
memory_date: "2024-02-14" # Optional: memories filter (excludes same year)
city: "Paris" # Optional: filter by city name
state: "California" # Optional: filter by state/region
country: "France" # Optional: filter by country
```
**Parameters:**
- `limit` (optional, default: 10): Maximum number of assets to return (1-100)
- `offset` (optional, default: 0): Number of assets to skip before returning results. Use with `limit` for pagination (e.g., `offset: 0, limit: 10` for first page, `offset: 10, limit: 10` for second page)
- `favorite_only` (optional, default: false): Filter to show only favorite assets
- `filter_min_rating` (optional, default: 1): Minimum rating for assets (1-5 stars). Applied independently of `favorite_only`
- `order_by` (optional, default: "date"): Field to sort assets by
- `"date"`: Sort by creation date
- `"rating"`: Sort by rating (assets without rating are placed last)
- `"name"`: Sort by filename
- `"random"`: Random order (ignores `order`)
- `order` (optional, default: "descending"): Sort direction
- `"ascending"`: Ascending order
- `"descending"`: Descending order
- `asset_type` (optional, default: "all"): Filter by asset type
- `"all"`: No type filtering, return both photos and videos
- `"photo"`: Return only photos
- `"video"`: Return only videos
- `min_date` (optional): Filter assets created on or after this date. Use ISO 8601 format (e.g., `"2024-01-01"` or `"2024-01-01T10:30:00"`)
- `max_date` (optional): Filter assets created on or before this date. Use ISO 8601 format (e.g., `"2024-12-31"` or `"2024-12-31T23:59:59"`)
- `memory_date` (optional): Filter assets by matching month and day, excluding the same year (memories filter like Google Photos). Provide a date in ISO 8601 format (e.g., `"2024-02-14"`) to get all assets taken on February 14th from previous years
- `city` (optional): Filter assets by city name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data
- `state` (optional): Filter assets by state/region name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data
- `country` (optional): Filter assets by country name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data
**Examples:**
Get 5 most recent favorite assets:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 5
favorite_only: true
order_by: "date"
order: "descending"
```
Get 10 random assets rated 3 stars or higher:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 10
filter_min_rating: 3
order_by: "random"
```
Get 20 most recent photos only:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 20
asset_type: "photo"
order_by: "date"
order: "descending"
```
Get top 10 highest rated favorite videos:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 10
favorite_only: true
asset_type: "video"
order_by: "rating"
order: "descending"
```
Get photos sorted alphabetically by name:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 20
asset_type: "photo"
order_by: "name"
order: "ascending"
```
Get photos from a specific date range:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 50
asset_type: "photo"
min_date: "2024-06-01"
max_date: "2024-06-30"
order_by: "date"
order: "descending"
```
Get "On This Day" memories (photos from today's date in previous years):
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 20
memory_date: "{{ now().strftime('%Y-%m-%d') }}"
order_by: "date"
order: "ascending"
```
Paginate through all assets (first page):
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 10
offset: 0
order_by: "date"
order: "descending"
```
Paginate through all assets (second page):
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 10
offset: 10
order_by: "date"
order: "descending"
```
Get photos taken in a specific city:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 50
city: "Paris"
asset_type: "photo"
order_by: "date"
order: "descending"
```
Get all assets from a specific country:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_count
data:
limit: 100
country: "Japan"
order_by: "date"
order: "ascending"
```
### Send Telegram Notification
@@ -126,6 +341,8 @@ Send notifications to Telegram. Supports multiple formats:
The service downloads media from Immich and uploads it to Telegram, bypassing any CORS restrictions. Large lists of media are automatically split into multiple media groups based on the `max_group_size` parameter (default: 10 items per group).
**File ID Caching:** When media is uploaded to Telegram, the service caches the returned `file_id`. Subsequent sends of the same media will use the cached `file_id` instead of re-uploading, significantly improving performance. The cache TTL is configurable in hub options (default: 48 hours, range: 1-168 hours). The cache is persistent across Home Assistant restarts and is stored per album.
**Examples:**
Text message:
@@ -133,7 +350,7 @@ Text message:
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
caption: "Check out the new album!"
@@ -145,7 +362,7 @@ Single photo:
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
urls:
@@ -159,7 +376,7 @@ Media group:
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
urls:
@@ -176,12 +393,12 @@ HTML formatting:
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
caption: |
<b>Album Updated!</b>
New photos by <i>{{ trigger.event.data.added_assets[0].asset_owner }}</i>
New photos by <i>{{ trigger.event.data.added_assets[0].owner }}</i>
<a href="https://immich.example.com/album">View Album</a>
parse_mode: "HTML" # Default, can be omitted
```
@@ -191,7 +408,7 @@ Non-blocking mode (fire-and-forget):
```yaml
service: immich_album_watcher.send_telegram_notification
target:
entity_id: sensor.album_name_asset_count
data:
chat_id: "-1001234567890"
urls:
@@ -213,6 +430,8 @@ data:
| `max_group_size` | Maximum media items per group (2-10). Large lists split into multiple groups. Default: 10 | No |
| `chunk_delay` | Delay in milliseconds between sending multiple groups (0-60000). Useful for rate limiting. Default: 0 | No |
| `wait_for_response` | Wait for Telegram to finish processing. Set to `false` for fire-and-forget (automation continues immediately). Default: `true` | No |
| `max_asset_data_size` | Maximum asset size in bytes. Assets exceeding this limit will be skipped. Default: no limit | No |
| `send_large_photos_as_documents` | Handle photos exceeding Telegram limits (10MB or 10000px dimension sum). If `true`, send as documents. If `false`, skip oversized photos. Default: `false` | No |
The service returns a response with `success` status and `message_id` (single message), `message_ids` (media group), or `groups_sent` (number of groups when split). When `wait_for_response` is `false`, the service returns immediately with `{"success": true, "status": "queued"}` while processing continues in the background.
@@ -243,7 +462,7 @@ automation:
- service: notify.mobile_app
data:
title: "New Photos"
message: "{{ trigger.event.data.added_count }} new photos in {{ trigger.event.data.album_name }}"
message: "{{ trigger.event.data.added_limit }} new photos in {{ trigger.event.data.album_name }}"
- alias: "Album renamed"
trigger:
@@ -276,8 +495,8 @@ automation:
| `album_url` | Public URL to view the album (only present if album has a shared link) | All events except `album_deleted` |
| `change_type` | Type of change (assets_added, assets_removed, album_renamed, album_sharing_changed, changed) | All events except `album_deleted` |
| `shared` | Current sharing status of the album | All events except `album_deleted` |
| `added_count` | Number of assets added | `album_changed`, `assets_added` |
| `removed_count` | Number of assets removed | `album_changed`, `assets_removed` |
| `added_assets` | List of added assets with details (see below) | `album_changed`, `assets_added` |
| `removed_assets` | List of removed asset IDs | `album_changed`, `assets_removed` |
| `people` | List of all people detected in the album | All events except `album_deleted` |
@@ -293,15 +512,27 @@ Each item in the `added_assets` list contains the following fields:
| Field | Description |
|-------|-------------|
| `id` | Unique asset ID |
| `asset_type` | Type of asset (`IMAGE` or `VIDEO`) |
| `asset_filename` | Original filename of the asset |
| `asset_created` | Date/time when the asset was originally created |
| `asset_owner` | Display name of the user who owns the asset |
| `asset_owner_id` | Unique ID of the user who owns the asset |
| `asset_description` | Description/caption of the asset (from EXIF data) |
| `asset_url` | Public URL to view the asset (only present if album has a shared link) |
| `type` | Type of asset (`IMAGE` or `VIDEO`) |
| `filename` | Original filename of the asset |
| `created_at` | Date/time when the asset was originally created |
| `owner` | Display name of the user who owns the asset |
| `owner_id` | Unique ID of the user who owns the asset |
| `description` | Description/caption of the asset (from EXIF data) |
| `is_favorite` | Whether the asset is marked as favorite (`true` or `false`) |
| `rating` | User rating of the asset (1-5 stars, or `null` if not rated) |
| `latitude` | GPS latitude coordinate (or `null` if no geolocation) |
| `longitude` | GPS longitude coordinate (or `null` if no geolocation) |
| `city` | City name from reverse geocoding (or `null` if unavailable) |
| `state` | State/region name from reverse geocoding (or `null` if unavailable) |
| `country` | Country name from reverse geocoding (or `null` if unavailable) |
| `url` | Public URL to view the asset (only present if album has a shared link) |
| `download_url` | Direct download URL for the original file (if shared link exists) |
| `playback_url` | Video playback URL (for VIDEO assets only, if shared link exists) |
| `photo_url` | Photo preview URL (for IMAGE assets only, if shared link exists) |
| `people` | List of people detected in this specific asset |
> **Note:** Assets are only included in events and service responses when they are fully processed by Immich. For videos, this means transcoding must be complete (with `encodedVideoPath`). For photos, thumbnail generation must be complete (with `thumbhash`). This ensures that all media URLs are valid and accessible. Unprocessed assets are silently ignored until their processing completes.
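The readiness check described in this note reduces to something like the following (illustrative sketch, not the integration's exact code):

```python
def is_processed(asset: dict) -> bool:
    """Whether Immich has finished processing this asset.

    thumbhash is used for both photos and videos, since encodedVideoPath
    is not exposed in the Immich API response (per the commit history);
    trashed assets are excluded as well.
    """
    return bool(asset.get("thumbhash")) and not asset.get("isTrashed", False)
```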
Example accessing asset owner in an automation:
```yaml
@@ -315,8 +546,8 @@ automation:
data:
title: "New Photos"
message: >
{{ trigger.event.data.added_assets[0].asset_owner }} added
{{ trigger.event.data.added_count }} photos to {{ trigger.event.data.album_name }}
{{ trigger.event.data.added_assets[0].owner }} added
{{ trigger.event.data.added_count }} photos to {{ trigger.event.data.album_name }}
```
## Requirements


@@ -15,12 +15,14 @@ from .const import (
CONF_HUB_NAME,
CONF_IMMICH_URL,
CONF_SCAN_INTERVAL,
CONF_TELEGRAM_CACHE_TTL,
DEFAULT_SCAN_INTERVAL,
DEFAULT_TELEGRAM_CACHE_TTL,
DOMAIN,
PLATFORMS,
)
from .coordinator import ImmichAlbumWatcherCoordinator
from .storage import ImmichAlbumStorage
from .storage import ImmichAlbumStorage, TelegramFileCache
_LOGGER = logging.getLogger(__name__)
@@ -33,6 +35,7 @@ class ImmichHubData:
url: str
api_key: str
scan_interval: int
telegram_cache_ttl: int
@dataclass
@@ -55,6 +58,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: ImmichConfigEntry) -> bo
url = entry.data[CONF_IMMICH_URL]
api_key = entry.data[CONF_API_KEY]
scan_interval = entry.options.get(CONF_SCAN_INTERVAL, DEFAULT_SCAN_INTERVAL)
telegram_cache_ttl = entry.options.get(CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL)
# Store hub data
entry.runtime_data = ImmichHubData(
@@ -62,6 +66,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: ImmichConfigEntry) -> bo
url=url,
api_key=api_key,
scan_interval=scan_interval,
telegram_cache_ttl=telegram_cache_ttl,
)
# Create storage for persisting album state across restarts
@@ -107,6 +112,12 @@ async def _async_setup_subentry_coordinator(
_LOGGER.debug("Setting up coordinator for album: %s (%s)", album_name, album_id)
# Create and load Telegram file cache for this album
# TTL is in hours from config, convert to seconds
cache_ttl_seconds = hub_data.telegram_cache_ttl * 60 * 60
telegram_cache = TelegramFileCache(hass, album_id, ttl_seconds=cache_ttl_seconds)
await telegram_cache.async_load()
# Create coordinator for this album
coordinator = ImmichAlbumWatcherCoordinator(
hass,
@@ -117,6 +128,7 @@ async def _async_setup_subentry_coordinator(
scan_interval=hub_data.scan_interval,
hub_name=hub_data.name,
storage=storage,
telegram_cache=telegram_cache,
)
# Load persisted state before first refresh to detect changes during downtime


@@ -27,7 +27,9 @@ from .const import (
CONF_IMMICH_URL,
CONF_SCAN_INTERVAL,
CONF_TELEGRAM_BOT_TOKEN,
CONF_TELEGRAM_CACHE_TTL,
DEFAULT_SCAN_INTERVAL,
DEFAULT_TELEGRAM_CACHE_TTL,
DOMAIN,
SUBENTRY_TYPE_ALBUM,
)
@@ -252,6 +254,9 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
CONF_TELEGRAM_BOT_TOKEN: user_input.get(
CONF_TELEGRAM_BOT_TOKEN, ""
),
CONF_TELEGRAM_CACHE_TTL: user_input.get(
CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL
),
},
)
@@ -261,6 +266,9 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
current_bot_token = self._config_entry.options.get(
CONF_TELEGRAM_BOT_TOKEN, ""
)
current_cache_ttl = self._config_entry.options.get(
CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL
)
return self.async_show_form(
step_id="init",
@@ -272,6 +280,9 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
vol.Optional(
CONF_TELEGRAM_BOT_TOKEN, default=current_bot_token
): str,
vol.Optional(
CONF_TELEGRAM_CACHE_TTL, default=current_cache_ttl
): vol.All(vol.Coerce(int), vol.Range(min=1, max=168)),
}
),
)


@@ -14,12 +14,14 @@ CONF_ALBUM_ID: Final = "album_id"
CONF_ALBUM_NAME: Final = "album_name"
CONF_SCAN_INTERVAL: Final = "scan_interval"
CONF_TELEGRAM_BOT_TOKEN: Final = "telegram_bot_token"
CONF_TELEGRAM_CACHE_TTL: Final = "telegram_cache_ttl"
# Subentry type
SUBENTRY_TYPE_ALBUM: Final = "album"
# Defaults
DEFAULT_SCAN_INTERVAL: Final = 60 # seconds
DEFAULT_TELEGRAM_CACHE_TTL: Final = 48 # hours
NEW_ASSETS_RESET_DELAY: Final = 300 # 5 minutes
DEFAULT_SHARE_PASSWORD: Final = "immich123"
@@ -47,7 +49,7 @@ ATTR_REMOVED_COUNT: Final = "removed_count"
ATTR_ADDED_ASSETS: Final = "added_assets"
ATTR_REMOVED_ASSETS: Final = "removed_assets"
ATTR_CHANGE_TYPE: Final = "change_type"
ATTR_LAST_UPDATED: Final = "last_updated"
ATTR_LAST_UPDATED: Final = "last_updated_at"
ATTR_CREATED_AT: Final = "created_at"
ATTR_THUMBNAIL_URL: Final = "thumbnail_url"
ATTR_SHARED: Final = "shared"
@@ -57,15 +59,22 @@ ATTR_OLD_NAME: Final = "old_name"
ATTR_NEW_NAME: Final = "new_name"
ATTR_OLD_SHARED: Final = "old_shared"
ATTR_NEW_SHARED: Final = "new_shared"
ATTR_ASSET_TYPE: Final = "asset_type"
ATTR_ASSET_FILENAME: Final = "asset_filename"
ATTR_ASSET_CREATED: Final = "asset_created"
ATTR_ASSET_OWNER: Final = "asset_owner"
ATTR_ASSET_OWNER_ID: Final = "asset_owner_id"
ATTR_ASSET_URL: Final = "asset_url"
ATTR_ASSET_DOWNLOAD_URL: Final = "asset_download_url"
ATTR_ASSET_PLAYBACK_URL: Final = "asset_playback_url"
ATTR_ASSET_DESCRIPTION: Final = "asset_description"
ATTR_ASSET_TYPE: Final = "type"
ATTR_ASSET_FILENAME: Final = "filename"
ATTR_ASSET_CREATED: Final = "created_at"
ATTR_ASSET_OWNER: Final = "owner"
ATTR_ASSET_OWNER_ID: Final = "owner_id"
ATTR_ASSET_URL: Final = "url"
ATTR_ASSET_DOWNLOAD_URL: Final = "download_url"
ATTR_ASSET_PLAYBACK_URL: Final = "playback_url"
ATTR_ASSET_DESCRIPTION: Final = "description"
ATTR_ASSET_IS_FAVORITE: Final = "is_favorite"
ATTR_ASSET_RATING: Final = "rating"
ATTR_ASSET_LATITUDE: Final = "latitude"
ATTR_ASSET_LONGITUDE: Final = "longitude"
ATTR_ASSET_CITY: Final = "city"
ATTR_ASSET_STATE: Final = "state"
ATTR_ASSET_COUNTRY: Final = "country"
# Asset types
ASSET_TYPE_IMAGE: Final = "IMAGE"
@@ -76,5 +85,5 @@ PLATFORMS: Final = ["sensor", "binary_sensor", "camera", "text", "button"]
# Services
SERVICE_REFRESH: Final = "refresh"
SERVICE_GET_RECENT_ASSETS: Final = "get_recent_assets"
SERVICE_GET_ASSETS: Final = "get_assets"
SERVICE_SEND_TELEGRAM_NOTIFICATION: Final = "send_telegram_notification"


@@ -8,7 +8,7 @@ from datetime import datetime, timedelta
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from .storage import ImmichAlbumStorage
from .storage import ImmichAlbumStorage, TelegramFileCache
import aiohttp
@@ -28,11 +28,18 @@ from .const import (
ATTR_ASSET_DESCRIPTION,
ATTR_ASSET_DOWNLOAD_URL,
ATTR_ASSET_FILENAME,
ATTR_ASSET_IS_FAVORITE,
ATTR_ASSET_LATITUDE,
ATTR_ASSET_LONGITUDE,
ATTR_ASSET_CITY,
ATTR_ASSET_STATE,
ATTR_ASSET_COUNTRY,
ATTR_ASSET_OWNER,
ATTR_ASSET_OWNER_ID,
ATTR_ASSET_PLAYBACK_URL,
ATTR_ASSET_RATING,
ATTR_ASSET_TYPE,
ATTR_ASSET_URL,
ATTR_ASSET_PLAYBACK_URL,
ATTR_CHANGE_TYPE,
ATTR_HUB_NAME,
ATTR_PEOPLE,
@@ -43,6 +50,7 @@ from .const import (
ATTR_OLD_SHARED,
ATTR_NEW_SHARED,
ATTR_SHARED,
ATTR_THUMBNAIL_URL,
DOMAIN,
EVENT_ALBUM_CHANGED,
EVENT_ASSETS_ADDED,
@@ -115,6 +123,14 @@ class AssetInfo:
owner_name: str = ""
description: str = ""
people: list[str] = field(default_factory=list)
is_favorite: bool = False
rating: int | None = None
latitude: float | None = None
longitude: float | None = None
city: str | None = None
state: str | None = None
country: str | None = None
is_processed: bool = True # Whether asset is fully processed by Immich
@classmethod
def from_api_response(
@@ -130,23 +146,98 @@ class AssetInfo:
if users_cache and owner_id:
owner_name = users_cache.get(owner_id, "")
# Get description from exifInfo if available
description = ""
# Get description - prioritize user-added description over EXIF description
description = data.get("description", "") or ""
exif_info = data.get("exifInfo")
if exif_info:
if not description and exif_info:
# Fall back to EXIF description if no user description
description = exif_info.get("description", "") or ""
# Get favorites and rating
is_favorite = data.get("isFavorite", False)
rating = exif_info.get("rating") if exif_info else None
# Get geolocation
latitude = exif_info.get("latitude") if exif_info else None
longitude = exif_info.get("longitude") if exif_info else None
# Get reverse geocoded location
city = exif_info.get("city") if exif_info else None
state = exif_info.get("state") if exif_info else None
country = exif_info.get("country") if exif_info else None
# Check if asset is fully processed by Immich
asset_type = data.get("type", ASSET_TYPE_IMAGE)
is_processed = cls._check_processing_status(data, asset_type)
return cls(
id=data["id"],
type=data.get("type", ASSET_TYPE_IMAGE),
type=asset_type,
filename=data.get("originalFileName", ""),
created_at=data.get("fileCreatedAt", ""),
owner_id=owner_id,
owner_name=owner_name,
description=description,
people=people,
is_favorite=is_favorite,
rating=rating,
latitude=latitude,
longitude=longitude,
city=city,
state=state,
country=country,
is_processed=is_processed,
)
@staticmethod
def _check_processing_status(data: dict[str, Any], _asset_type: str) -> bool:
"""Check if asset has been fully processed by Immich.
For all assets: Check if thumbnails have been generated (thumbhash exists).
Immich generates thumbnails for both photos and videos regardless of
whether video transcoding is needed.
Args:
data: Asset data from API response
_asset_type: Asset type (IMAGE or VIDEO) - unused but kept for API stability
Returns:
True if asset is fully processed and not trashed/offline, False otherwise
"""
asset_id = data.get("id", "unknown")
asset_type = data.get("type", "unknown")
is_offline = data.get("isOffline", False)
is_trashed = data.get("isTrashed", False)
thumbhash = data.get("thumbhash")
_LOGGER.debug(
"Asset %s (%s): isOffline=%s, isTrashed=%s, thumbhash=%s",
asset_id,
asset_type,
is_offline,
is_trashed,
bool(thumbhash),
)
# Exclude offline assets
if is_offline:
_LOGGER.debug("Asset %s excluded: offline", asset_id)
return False
# Exclude trashed assets
if is_trashed:
_LOGGER.debug("Asset %s excluded: trashed", asset_id)
return False
# Check if thumbnails have been generated
# This works for both photos and videos - Immich always generates thumbnails
# Note: The API doesn't expose video transcoding status (encodedVideoPath),
# but thumbhash is sufficient since Immich generates thumbnails for all assets
is_processed = bool(thumbhash)
if not is_processed:
_LOGGER.debug("Asset %s excluded: no thumbhash", asset_id)
return is_processed
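The processing check above boils down to three payload fields. A standalone sketch, using hypothetical asset payloads in the shape the Immich API returns:

```python
def is_fully_processed(asset: dict) -> bool:
    """Mirror of _check_processing_status: not offline, not trashed, thumbhash present."""
    if asset.get("isOffline", False):
        return False
    if asset.get("isTrashed", False):
        return False
    # thumbhash is set once Immich has generated thumbnails (photos and videos alike)
    return bool(asset.get("thumbhash"))

photo = {"id": "a1", "type": "IMAGE", "thumbhash": "1QcSHQRnh4"}
video = {"id": "a2", "type": "VIDEO", "thumbhash": None}  # still transcoding
print(is_fully_processed(photo))  # True
print(is_fully_processed(video))  # False
```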
@dataclass
class AlbumData:
@@ -237,6 +328,7 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
scan_interval: int,
hub_name: str = "Immich",
storage: ImmichAlbumStorage | None = None,
telegram_cache: TelegramFileCache | None = None,
) -> None:
"""Initialize the coordinator."""
super().__init__(
@@ -256,13 +348,45 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
self._users_cache: dict[str, str] = {} # user_id -> name
self._shared_links: list[SharedLinkInfo] = []
self._storage = storage
self._telegram_cache = telegram_cache
self._persisted_asset_ids: set[str] | None = None
self._external_domain: str | None = None # Fetched from server config
self._pending_asset_ids: set[str] = set() # Assets detected but not yet processed
@property
def immich_url(self) -> str:
"""Return the Immich URL."""
"""Return the Immich URL (for API calls)."""
return self._url
@property
def external_url(self) -> str:
"""Return the external URL for links.
Uses externalDomain from Immich server config if set,
otherwise falls back to the connection URL.
"""
if self._external_domain:
return self._external_domain.rstrip("/")
return self._url
def get_internal_download_url(self, url: str) -> str:
"""Convert an external URL to internal URL for faster downloads.
If the URL starts with the external domain, replace it with the
internal connection URL to download via local network.
Args:
url: The URL to convert (may be external or internal)
Returns:
URL using internal connection for downloads
"""
if self._external_domain:
external = self._external_domain.rstrip("/")
if url.startswith(external):
return url.replace(external, self._url, 1)
return url
@property
def api_key(self) -> str:
"""Return the API key."""
@@ -278,6 +402,11 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
"""Return the album name."""
return self._album_name
@property
def telegram_cache(self) -> TelegramFileCache | None:
"""Return the Telegram file cache."""
return self._telegram_cache
def update_scan_interval(self, scan_interval: int) -> None:
"""Update the scan interval."""
self.update_interval = timedelta(seconds=scan_interval)
@@ -303,33 +432,138 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
self._album_name,
)
async def async_get_recent_assets(self, count: int = 10) -> list[dict[str, Any]]:
"""Get recent assets from the album."""
async def async_get_assets(
self,
limit: int = 10,
offset: int = 0,
favorite_only: bool = False,
filter_min_rating: int = 1,
order_by: str = "date",
order: str = "descending",
asset_type: str = "all",
min_date: str | None = None,
max_date: str | None = None,
memory_date: str | None = None,
city: str | None = None,
state: str | None = None,
country: str | None = None,
) -> list[dict[str, Any]]:
"""Get assets from the album with optional filtering and ordering.
Args:
limit: Maximum number of assets to return (1-100)
offset: Number of assets to skip before returning results (for pagination)
favorite_only: Filter to show only favorite assets
filter_min_rating: Minimum rating for assets (1-5)
order_by: Field to sort by - 'date', 'rating', 'name', or 'random'
order: Sort direction - 'ascending' or 'descending'
asset_type: Asset type filter - 'all', 'photo', or 'video'
min_date: Filter assets created on or after this date (ISO 8601 format)
max_date: Filter assets created on or before this date (ISO 8601 format)
memory_date: Filter assets by matching month and day, excluding the same year (memories filter)
city: Filter assets by city (case-insensitive substring match)
state: Filter assets by state/region (case-insensitive substring match)
country: Filter assets by country (case-insensitive substring match)
Returns:
List of asset data dictionaries
"""
if self.data is None:
return []
# Sort assets by created_at descending
sorted_assets = sorted(
self.data.assets.values(),
key=lambda a: a.created_at,
reverse=True,
)[:count]
# Start with all processed assets only
assets = [a for a in self.data.assets.values() if a.is_processed]
# Apply favorite filter
if favorite_only:
assets = [a for a in assets if a.is_favorite]
# Apply rating filter
if filter_min_rating > 1:
assets = [a for a in assets if a.rating is not None and a.rating >= filter_min_rating]
# Apply asset type filtering
if asset_type == "photo":
assets = [a for a in assets if a.type == ASSET_TYPE_IMAGE]
elif asset_type == "video":
assets = [a for a in assets if a.type == ASSET_TYPE_VIDEO]
# Apply date filtering
if min_date:
assets = [a for a in assets if a.created_at >= min_date]
if max_date:
assets = [a for a in assets if a.created_at <= max_date]
# Apply memory date filtering (match month and day, exclude same year)
if memory_date:
try:
# Parse the reference date (supports ISO 8601 format)
ref_date = datetime.fromisoformat(memory_date.replace("Z", "+00:00"))
ref_year = ref_date.year
ref_month = ref_date.month
ref_day = ref_date.day
def matches_memory(asset: AssetInfo) -> bool:
"""Check if asset matches memory criteria (same month/day, different year)."""
try:
asset_date = datetime.fromisoformat(
asset.created_at.replace("Z", "+00:00")
)
# Match month and day, but exclude same year (true memories behavior)
return (
asset_date.month == ref_month
and asset_date.day == ref_day
and asset_date.year != ref_year
)
except (ValueError, AttributeError):
return False
assets = [a for a in assets if matches_memory(a)]
except ValueError:
_LOGGER.warning("Invalid memory_date format: %s", memory_date)
# Apply geolocation filtering (case-insensitive substring match)
if city:
city_lower = city.lower()
assets = [a for a in assets if a.city and city_lower in a.city.lower()]
if state:
state_lower = state.lower()
assets = [a for a in assets if a.state and state_lower in a.state.lower()]
if country:
country_lower = country.lower()
assets = [a for a in assets if a.country and country_lower in a.country.lower()]
# Apply ordering
if order_by == "random":
import random
random.shuffle(assets)
elif order_by == "rating":
# Sort by rating; unrated assets (rating is None) always sort last
rated = sorted(
(a for a in assets if a.rating is not None),
key=lambda a: a.rating,
reverse=(order == "descending"),
)
unrated = [a for a in assets if a.rating is None]
assets = rated + unrated
elif order_by == "name":
assets = sorted(
assets,
key=lambda a: a.filename.lower(),
reverse=(order == "descending")
)
else: # date (default)
assets = sorted(
assets,
key=lambda a: a.created_at,
reverse=(order == "descending")
)
# Apply offset and limit for pagination
assets = assets[offset : offset + limit]
# Build result with all available asset data (matching event data)
result = []
for asset in sorted_assets:
asset_data = {
"id": asset.id,
"type": asset.type,
"filename": asset.filename,
"created_at": asset.created_at,
"description": asset.description,
"people": asset.people,
"thumbnail_url": f"{self._url}/api/assets/{asset.id}/thumbnail",
}
if asset.type == ASSET_TYPE_VIDEO:
video_url = self._get_asset_video_url(asset.id)
if video_url:
asset_data["video_url"] = video_url
for asset in assets:
asset_data = self._build_asset_detail(asset, include_thumbnail=True)
result.append(asset_data)
return result
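The memory_date comparison at the heart of the filter above can be sketched standalone (the dates here are arbitrary examples):

```python
from datetime import datetime

def matches_memory(created_at: str, memory_date: str) -> bool:
    """Same month and day as memory_date, but a different year."""
    ref = datetime.fromisoformat(memory_date.replace("Z", "+00:00"))
    asset = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    return (
        asset.month == ref.month
        and asset.day == ref.day
        and asset.year != ref.year
    )

print(matches_memory("2023-06-15T09:00:00Z", "2025-06-15"))  # True
print(matches_memory("2025-06-15T09:00:00Z", "2025-06-15"))  # False (same year)
print(matches_memory("2023-06-14T09:00:00Z", "2025-06-15"))  # False (different day)
```

Excluding the reference year is what makes this behave like Google Photos memories rather than a plain same-day filter.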
@@ -380,6 +614,36 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
return self._users_cache
async def _async_fetch_server_config(self) -> None:
"""Fetch server config from Immich to get external domain."""
if self._session is None:
self._session = async_get_clientsession(self.hass)
headers = {"x-api-key": self._api_key}
try:
async with self._session.get(
f"{self._url}/api/server/config",
headers=headers,
) as response:
if response.status == 200:
data = await response.json()
external_domain = data.get("externalDomain", "") or ""
self._external_domain = external_domain
if external_domain:
_LOGGER.debug(
"Using external domain from Immich: %s", external_domain
)
else:
_LOGGER.debug(
"No external domain configured in Immich, using connection URL"
)
else:
_LOGGER.warning(
"Failed to fetch server config: HTTP %s", response.status
)
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to fetch server config: %s", err)
async def _async_fetch_shared_links(self) -> list[SharedLinkInfo]:
"""Fetch shared links for this album from Immich."""
if self._session is None:
@@ -423,29 +687,29 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
"""Get the public URL if album has an accessible shared link."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/share/{accessible_links[0].key}"
return f"{self.external_url}/share/{accessible_links[0].key}"
return None
def get_any_url(self) -> str | None:
"""Get any non-expired URL (prefers accessible, falls back to protected)."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/share/{accessible_links[0].key}"
return f"{self.external_url}/share/{accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self._url}/share/{non_expired[0].key}"
return f"{self.external_url}/share/{non_expired[0].key}"
return None
def get_protected_url(self) -> str | None:
"""Get a protected URL if any password-protected link exists."""
protected_links = self._get_protected_links()
if protected_links:
return f"{self._url}/share/{protected_links[0].key}"
return f"{self.external_url}/share/{protected_links[0].key}"
return None
def get_protected_urls(self) -> list[str]:
"""Get all password-protected URLs."""
return [f"{self._url}/share/{link.key}" for link in self._get_protected_links()]
return [f"{self.external_url}/share/{link.key}" for link in self._get_protected_links()]
def get_protected_password(self) -> str | None:
"""Get the password for the first protected link."""
@@ -456,13 +720,13 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
def get_public_urls(self) -> list[str]:
"""Get all accessible public URLs."""
return [f"{self._url}/share/{link.key}" for link in self._get_accessible_links()]
return [f"{self.external_url}/share/{link.key}" for link in self._get_accessible_links()]
def get_shared_links_info(self) -> list[dict[str, Any]]:
"""Get detailed info about all shared links."""
return [
{
"url": f"{self._url}/share/{link.key}",
"url": f"{self.external_url}/share/{link.key}",
"has_password": link.has_password,
"is_expired": link.is_expired,
"expires_at": link.expires_at.isoformat() if link.expires_at else None,
@@ -475,37 +739,108 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
"""Get the public viewer URL for an asset (web page)."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/share/{accessible_links[0].key}/photos/{asset_id}"
return f"{self.external_url}/share/{accessible_links[0].key}/photos/{asset_id}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self._url}/share/{non_expired[0].key}/photos/{asset_id}"
return f"{self.external_url}/share/{non_expired[0].key}/photos/{asset_id}"
return None
def _get_asset_download_url(self, asset_id: str) -> str | None:
"""Get the direct download URL for an asset (media file)."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/api/assets/{asset_id}/original?key={accessible_links[0].key}"
return f"{self.external_url}/api/assets/{asset_id}/original?key={accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self._url}/api/assets/{asset_id}/original?key={non_expired[0].key}"
return f"{self.external_url}/api/assets/{asset_id}/original?key={non_expired[0].key}"
return None
def _get_asset_video_url(self, asset_id: str) -> str | None:
"""Get the transcoded video playback URL for a video asset."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/api/assets/{asset_id}/video/playback?key={accessible_links[0].key}"
return f"{self.external_url}/api/assets/{asset_id}/video/playback?key={accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self._url}/api/assets/{asset_id}/video/playback?key={non_expired[0].key}"
return f"{self.external_url}/api/assets/{asset_id}/video/playback?key={non_expired[0].key}"
return None
def _get_asset_photo_url(self, asset_id: str) -> str | None:
"""Get the preview-sized thumbnail URL for a photo asset."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self.external_url}/api/assets/{asset_id}/thumbnail?size=preview&key={accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self.external_url}/api/assets/{asset_id}/thumbnail?size=preview&key={non_expired[0].key}"
return None
def _build_asset_detail(
self, asset: AssetInfo, include_thumbnail: bool = True
) -> dict[str, Any]:
"""Build asset detail dictionary with all available data.
Args:
asset: AssetInfo object
include_thumbnail: If True, include thumbnail_url
Returns:
Dictionary with asset details (using ATTR_* constants for consistency)
"""
# Base asset data
asset_detail = {
"id": asset.id,
ATTR_ASSET_TYPE: asset.type,
ATTR_ASSET_FILENAME: asset.filename,
ATTR_ASSET_CREATED: asset.created_at,
ATTR_ASSET_OWNER: asset.owner_name,
ATTR_ASSET_OWNER_ID: asset.owner_id,
ATTR_ASSET_DESCRIPTION: asset.description,
ATTR_PEOPLE: asset.people,
ATTR_ASSET_IS_FAVORITE: asset.is_favorite,
ATTR_ASSET_RATING: asset.rating,
ATTR_ASSET_LATITUDE: asset.latitude,
ATTR_ASSET_LONGITUDE: asset.longitude,
ATTR_ASSET_CITY: asset.city,
ATTR_ASSET_STATE: asset.state,
ATTR_ASSET_COUNTRY: asset.country,
}
# Add thumbnail URL if requested
if include_thumbnail:
asset_detail[ATTR_THUMBNAIL_URL] = f"{self.external_url}/api/assets/{asset.id}/thumbnail"
# Add public viewer URL (web page)
asset_url = self._get_asset_public_url(asset.id)
if asset_url:
asset_detail[ATTR_ASSET_URL] = asset_url
# Add download URL (direct media file)
asset_download_url = self._get_asset_download_url(asset.id)
if asset_download_url:
asset_detail[ATTR_ASSET_DOWNLOAD_URL] = asset_download_url
# Add type-specific URLs
if asset.type == ASSET_TYPE_VIDEO:
asset_video_url = self._get_asset_video_url(asset.id)
if asset_video_url:
asset_detail[ATTR_ASSET_PLAYBACK_URL] = asset_video_url
elif asset.type == ASSET_TYPE_IMAGE:
asset_photo_url = self._get_asset_photo_url(asset.id)
if asset_photo_url:
asset_detail["photo_url"] = asset_photo_url # TODO: Add ATTR_ASSET_PHOTO_URL constant
return asset_detail
async def _async_update_data(self) -> AlbumData | None:
"""Fetch data from Immich API."""
if self._session is None:
self._session = async_get_clientsession(self.hass)
# Fetch server config to get external domain (once)
if self._external_domain is None:
await self._async_fetch_server_config()
# Fetch users to resolve owner names
if not self._users_cache:
await self._async_fetch_users()
@@ -559,11 +894,16 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
elif removed_ids and not added_ids:
change_type = "assets_removed"
added_assets = [
album.assets[aid]
for aid in added_ids
if aid in album.assets
]
added_assets = []
for aid in added_ids:
if aid not in album.assets:
continue
asset = album.assets[aid]
if asset.is_processed:
added_assets.append(asset)
else:
# Track unprocessed assets for later
self._pending_asset_ids.add(aid)
change = AlbumChange(
album_id=album.id,
@@ -620,34 +960,79 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
added_ids = new_state.asset_ids - old_state.asset_ids
removed_ids = old_state.asset_ids - new_state.asset_ids
_LOGGER.debug(
"Change detection: added_ids=%d, removed_ids=%d, pending=%d",
len(added_ids),
len(removed_ids),
len(self._pending_asset_ids),
)
# Track new unprocessed assets and collect processed ones
added_assets = []
for aid in added_ids:
if aid not in new_state.assets:
_LOGGER.debug("Asset %s: not in assets dict", aid)
continue
asset = new_state.assets[aid]
_LOGGER.debug(
"New asset %s (%s): is_processed=%s, filename=%s",
aid,
asset.type,
asset.is_processed,
asset.filename,
)
if asset.is_processed:
added_assets.append(asset)
else:
# Track unprocessed assets for later
self._pending_asset_ids.add(aid)
_LOGGER.debug("Asset %s added to pending (not yet processed)", aid)
# Check if any pending assets are now processed
newly_processed = []
for aid in list(self._pending_asset_ids):
if aid not in new_state.assets:
# Asset was removed, no longer pending
self._pending_asset_ids.discard(aid)
continue
asset = new_state.assets[aid]
if asset.is_processed:
_LOGGER.debug(
"Pending asset %s (%s) is now processed: filename=%s",
aid,
asset.type,
asset.filename,
)
newly_processed.append(asset)
self._pending_asset_ids.discard(aid)
# Include newly processed pending assets
added_assets.extend(newly_processed)
# Detect metadata changes
name_changed = old_state.name != new_state.name
sharing_changed = old_state.shared != new_state.shared
# Return None only if nothing changed at all
if not added_ids and not removed_ids and not name_changed and not sharing_changed:
if not added_assets and not removed_ids and not name_changed and not sharing_changed:
return None
# Determine primary change type
# Determine primary change type (use added_assets not added_ids)
change_type = "changed"
if name_changed and not added_ids and not removed_ids and not sharing_changed:
if name_changed and not added_assets and not removed_ids and not sharing_changed:
change_type = "album_renamed"
elif sharing_changed and not added_ids and not removed_ids and not name_changed:
elif sharing_changed and not added_assets and not removed_ids and not name_changed:
change_type = "album_sharing_changed"
elif added_ids and not removed_ids and not name_changed and not sharing_changed:
elif added_assets and not removed_ids and not name_changed and not sharing_changed:
change_type = "assets_added"
elif removed_ids and not added_ids and not name_changed and not sharing_changed:
elif removed_ids and not added_assets and not name_changed and not sharing_changed:
change_type = "assets_removed"
added_assets = [
new_state.assets[aid] for aid in added_ids if aid in new_state.assets
]
return AlbumChange(
album_id=new_state.id,
album_name=new_state.name,
change_type=change_type,
added_count=len(added_ids),
added_count=len(added_assets), # Count only processed assets
removed_count=len(removed_ids),
added_assets=added_assets,
removed_asset_ids=list(removed_ids),
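The pending-asset bookkeeping above can be reduced to a small state machine. A simplified sketch, with a toy `{"processed": bool}` dict standing in for AssetInfo and an external set standing in for the coordinator's _pending_asset_ids:

```python
def split_ready(added_ids: set[str], assets: dict, pending: set[str]) -> list[str]:
    """Collect processed additions; park unprocessed ones in pending."""
    ready = []
    for aid in added_ids:
        asset = assets.get(aid)
        if asset is None:
            continue
        if asset["processed"]:
            ready.append(aid)
        else:
            pending.add(aid)
    # Promote pending assets that finished processing since the last poll
    for aid in list(pending):
        asset = assets.get(aid)
        if asset is None:
            pending.discard(aid)  # removed from the album while pending
        elif asset["processed"]:
            ready.append(aid)
            pending.discard(aid)
    return ready

pending: set[str] = set()
# Poll 1: video v1 is added but still transcoding -> no event yet
print(split_ready({"v1"}, {"v1": {"processed": False}}, pending))  # []
# Poll 2: no new additions, but v1 has finished -> fires now
print(split_ready(set(), {"v1": {"processed": True}}, pending))  # ['v1']
```

This is why a video added mid-transcode produces its assets_added event on a later poll instead of never firing.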
@@ -661,26 +1046,10 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
"""Fire Home Assistant events for album changes."""
added_assets_detail = []
for asset in change.added_assets:
asset_detail = {
"id": asset.id,
ATTR_ASSET_TYPE: asset.type,
ATTR_ASSET_FILENAME: asset.filename,
ATTR_ASSET_CREATED: asset.created_at,
ATTR_ASSET_OWNER: asset.owner_name,
ATTR_ASSET_OWNER_ID: asset.owner_id,
ATTR_ASSET_DESCRIPTION: asset.description,
ATTR_PEOPLE: asset.people,
}
asset_url = self._get_asset_public_url(asset.id)
if asset_url:
asset_detail[ATTR_ASSET_URL] = asset_url
asset_download_url = self._get_asset_download_url(asset.id)
if asset_download_url:
asset_detail[ATTR_ASSET_DOWNLOAD_URL] = asset_download_url
if asset.type == ASSET_TYPE_VIDEO:
asset_video_url = self._get_asset_video_url(asset.id)
if asset_video_url:
asset_detail[ATTR_ASSET_PLAYBACK_URL] = asset_video_url
# Only include fully processed assets
if not asset.is_processed:
continue
asset_detail = self._build_asset_detail(asset, include_thumbnail=False)
added_assets_detail.append(asset_detail)
event_data = {


@@ -8,5 +8,5 @@
"iot_class": "cloud_polling",
"issue_tracker": "https://github.com/DolgolyovAlexei/haos-hacs-immich-album-watcher/issues",
"requirements": [],
"version": "2.0.0"
"version": "2.7.0"
}


@@ -24,6 +24,7 @@ from homeassistant.util import slugify
from .const import (
ATTR_ALBUM_ID,
ATTR_ALBUM_NAME,
ATTR_ALBUM_PROTECTED_URL,
ATTR_ALBUM_URLS,
ATTR_ASSET_COUNT,
@@ -40,7 +41,7 @@ from .const import (
CONF_HUB_NAME,
CONF_TELEGRAM_BOT_TOKEN,
DOMAIN,
SERVICE_GET_RECENT_ASSETS,
SERVICE_GET_ASSETS,
SERVICE_REFRESH,
SERVICE_SEND_TELEGRAM_NOTIFICATION,
)
@@ -48,6 +49,10 @@ from .coordinator import AlbumData, ImmichAlbumWatcherCoordinator
_LOGGER = logging.getLogger(__name__)
# Telegram photo limits
TELEGRAM_MAX_PHOTO_SIZE = 10 * 1024 * 1024 # 10 MB - Telegram's max photo size
TELEGRAM_MAX_DIMENSION_SUM = 10000 # Maximum sum of width + height in pixels
async def async_setup_entry(
hass: HomeAssistant,
@@ -88,13 +93,33 @@ async def async_setup_entry(
)
platform.async_register_entity_service(
SERVICE_GET_RECENT_ASSETS,
SERVICE_GET_ASSETS,
{
vol.Optional("count", default=10): vol.All(
vol.Optional("limit", default=10): vol.All(
vol.Coerce(int), vol.Range(min=1, max=100)
),
vol.Optional("offset", default=0): vol.All(
vol.Coerce(int), vol.Range(min=0)
),
vol.Optional("favorite_only", default=False): bool,
vol.Optional("filter_min_rating", default=1): vol.All(
vol.Coerce(int), vol.Range(min=1, max=5)
),
vol.Optional("order_by", default="date"): vol.In(
["date", "rating", "name", "random"]
),
vol.Optional("order", default="descending"): vol.In(
["ascending", "descending"]
),
vol.Optional("asset_type", default="all"): vol.In(["all", "photo", "video"]),
vol.Optional("min_date"): str,
vol.Optional("max_date"): str,
vol.Optional("memory_date"): str,
vol.Optional("city"): str,
vol.Optional("state"): str,
vol.Optional("country"): str,
},
"async_get_recent_assets",
"async_get_assets",
supports_response=SupportsResponse.ONLY,
)
@@ -115,6 +140,10 @@ async def async_setup_entry(
vol.Coerce(int), vol.Range(min=0, max=60000)
),
vol.Optional("wait_for_response", default=True): bool,
vol.Optional("max_asset_data_size"): vol.All(
vol.Coerce(int), vol.Range(min=1, max=52428800)
),
vol.Optional("send_large_photos_as_documents", default=False): bool,
},
"async_send_telegram_notification",
supports_response=SupportsResponse.OPTIONAL,
@@ -171,9 +200,38 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
"""Refresh data for this album."""
await self.coordinator.async_refresh_now()
async def async_get_recent_assets(self, count: int = 10) -> ServiceResponse:
"""Get recent assets for this album."""
assets = await self.coordinator.async_get_recent_assets(count)
async def async_get_assets(
self,
limit: int = 10,
offset: int = 0,
favorite_only: bool = False,
filter_min_rating: int = 1,
order_by: str = "date",
order: str = "descending",
asset_type: str = "all",
min_date: str | None = None,
max_date: str | None = None,
memory_date: str | None = None,
city: str | None = None,
state: str | None = None,
country: str | None = None,
) -> ServiceResponse:
"""Get assets for this album with optional filtering and ordering."""
assets = await self.coordinator.async_get_assets(
limit=limit,
offset=offset,
favorite_only=favorite_only,
filter_min_rating=filter_min_rating,
order_by=order_by,
order=order,
asset_type=asset_type,
min_date=min_date,
max_date=max_date,
memory_date=memory_date,
city=city,
state=state,
country=country,
)
return {"assets": assets}
async def async_send_telegram_notification(
@@ -188,6 +246,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
max_group_size: int = 10,
chunk_delay: int = 0,
wait_for_response: bool = True,
max_asset_data_size: int | None = None,
send_large_photos_as_documents: bool = False,
) -> ServiceResponse:
"""Send notification to Telegram.
@@ -216,6 +276,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
parse_mode=parse_mode,
max_group_size=max_group_size,
chunk_delay=chunk_delay,
max_asset_data_size=max_asset_data_size,
send_large_photos_as_documents=send_large_photos_as_documents,
)
)
return {"success": True, "status": "queued", "message": "Notification queued for background processing"}
@@ -231,6 +293,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
parse_mode=parse_mode,
max_group_size=max_group_size,
chunk_delay=chunk_delay,
max_asset_data_size=max_asset_data_size,
send_large_photos_as_documents=send_large_photos_as_documents,
)
async def _execute_telegram_notification(
@@ -244,6 +308,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
parse_mode: str = "HTML",
max_group_size: int = 10,
chunk_delay: int = 0,
max_asset_data_size: int | None = None,
send_large_photos_as_documents: bool = False,
) -> ServiceResponse:
"""Execute the Telegram notification (internal method)."""
import json
@@ -270,18 +336,20 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
# Handle single photo
if len(urls) == 1 and urls[0].get("type", "photo") == "photo":
return await self._send_telegram_photo(
session, token, chat_id, urls[0].get("url"), caption, reply_to_message_id, parse_mode
session, token, chat_id, urls[0].get("url"), caption, reply_to_message_id, parse_mode,
max_asset_data_size, send_large_photos_as_documents
)
# Handle single video
if len(urls) == 1 and urls[0].get("type") == "video":
return await self._send_telegram_video(
session, token, chat_id, urls[0].get("url"), caption, reply_to_message_id, parse_mode
session, token, chat_id, urls[0].get("url"), caption, reply_to_message_id, parse_mode, max_asset_data_size
)
# Handle multiple items - send as media group(s)
return await self._send_telegram_media_group(
session, token, chat_id, urls, caption, reply_to_message_id, max_group_size, chunk_delay, parse_mode
session, token, chat_id, urls, caption, reply_to_message_id, max_group_size, chunk_delay, parse_mode,
max_asset_data_size, send_large_photos_as_documents
)
async def _send_telegram_message(
@@ -332,6 +400,104 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
_LOGGER.error("Telegram message send failed: %s", err)
return {"success": False, "error": str(err)}
def _log_telegram_error(
self,
error_code: int | None,
description: str,
data: bytes | None = None,
media_type: str = "photo",
) -> None:
"""Log detailed Telegram API error with diagnostics.
Args:
error_code: Telegram error code
description: Error description from Telegram
data: Media data bytes (optional, for size diagnostics)
media_type: Type of media (photo/video)
"""
error_msg = f"Telegram API error ({error_code}): {description}"
# Add diagnostic information based on error type
if data:
error_msg += f" | Media size: {len(data)} bytes ({len(data) / (1024 * 1024):.2f} MB)"
# Check dimensions for photos
if media_type == "photo":
try:
from PIL import Image
import io
img = Image.open(io.BytesIO(data))
width, height = img.size
dimension_sum = width + height
error_msg += f" | Dimensions: {width}x{height} (sum={dimension_sum})"
# Highlight limit violations
if len(data) > TELEGRAM_MAX_PHOTO_SIZE:
error_msg += f" | EXCEEDS size limit ({TELEGRAM_MAX_PHOTO_SIZE / (1024 * 1024):.0f} MB)"
if dimension_sum > TELEGRAM_MAX_DIMENSION_SUM:
error_msg += f" | EXCEEDS dimension limit ({TELEGRAM_MAX_DIMENSION_SUM})"
except Exception:
pass
# Provide suggestions based on error description
suggestions = []
if "dimension" in description.lower() or "PHOTO_INVALID_DIMENSIONS" in description:
suggestions.append("Photo dimensions too large - consider setting send_large_photos_as_documents=true")
elif "too large" in description.lower() or error_code == 413:
suggestions.append("File size too large - consider setting send_large_photos_as_documents=true or max_asset_data_size to skip large files")
elif "entity too large" in description.lower():
suggestions.append("Request entity too large - reduce max_group_size or set max_asset_data_size")
if suggestions:
error_msg += f" | Suggestions: {'; '.join(suggestions)}"
_LOGGER.error(error_msg)
def _check_telegram_photo_limits(
self,
data: bytes,
) -> tuple[bool, str | None, int | None, int | None]:
"""Check if photo data exceeds Telegram photo limits.
Telegram limits for photos:
- Max file size: 10 MB
- Max dimension sum: ~10,000 pixels (width + height)
Returns:
Tuple of (exceeds_limits, reason, width, height)
- exceeds_limits: True if photo exceeds limits
- reason: Human-readable reason (None if within limits)
- width: Image width in pixels (None if PIL not available)
- height: Image height in pixels (None if PIL not available)
"""
# Check file size
if len(data) > TELEGRAM_MAX_PHOTO_SIZE:
return True, f"size {len(data)} bytes exceeds {TELEGRAM_MAX_PHOTO_SIZE} bytes limit", None, None
# Try to check dimensions using PIL
try:
from PIL import Image
import io
img = Image.open(io.BytesIO(data))
width, height = img.size
dimension_sum = width + height
if dimension_sum > TELEGRAM_MAX_DIMENSION_SUM:
return True, f"dimensions {width}x{height} (sum={dimension_sum}) exceed {TELEGRAM_MAX_DIMENSION_SUM} limit", width, height
return False, None, width, height
except ImportError:
# PIL not available, can't check dimensions
_LOGGER.debug("PIL not available, skipping dimension check")
return False, None, None, None
except Exception as e:
# Failed to check dimensions
_LOGGER.debug("Failed to check photo dimensions: %s", e)
return False, None, None, None
async def _send_telegram_photo(
self,
session: Any,
@@ -341,6 +507,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
caption: str | None = None,
reply_to_message_id: int | None = None,
parse_mode: str = "HTML",
max_asset_data_size: int | None = None,
send_large_photos_as_documents: bool = False,
) -> ServiceResponse:
"""Send a single photo to Telegram."""
import aiohttp
@@ -349,10 +517,46 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
if not url:
return {"success": False, "error": "Missing 'url' for photo"}
# Check cache for file_id
cache = self.coordinator.telegram_cache
cached = cache.get(url) if cache else None
if cached and cached.get("file_id"):
# Use cached file_id - no download needed
file_id = cached["file_id"]
_LOGGER.debug("Using cached Telegram file_id for photo")
payload = {
"chat_id": chat_id,
"photo": file_id,
"parse_mode": parse_mode,
}
if caption:
payload["caption"] = caption
if reply_to_message_id:
payload["reply_to_message_id"] = reply_to_message_id
telegram_url = f"https://api.telegram.org/bot{token}/sendPhoto"
try:
async with session.post(telegram_url, json=payload) as response:
result = await response.json()
if response.status == 200 and result.get("ok"):
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
"cached": True,
}
else:
# Cache might be stale, fall through to upload
_LOGGER.debug("Cached file_id failed, will re-upload: %s", result.get("description"))
except aiohttp.ClientError as err:
_LOGGER.debug("Cached file_id request failed: %s", err)
try:
# Download the photo using internal URL for faster local network transfer
download_url = self.coordinator.get_internal_download_url(url)
_LOGGER.debug("Downloading photo from %s", download_url[:80])
async with session.get(download_url) as resp:
if resp.status != 200:
return {
"success": False,
@@ -361,6 +565,37 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
data = await resp.read()
_LOGGER.debug("Downloaded photo: %d bytes", len(data))
# Check if photo exceeds max size limit (user-defined limit)
if max_asset_data_size is not None and len(data) > max_asset_data_size:
_LOGGER.warning(
"Photo size (%d bytes) exceeds max_asset_data_size limit (%d bytes), skipping",
len(data), max_asset_data_size
)
return {
"success": False,
"error": f"Photo size ({len(data)} bytes) exceeds max_asset_data_size limit ({max_asset_data_size} bytes)",
"skipped": True,
}
# Check if photo exceeds Telegram's photo limits
exceeds_limits, reason, width, height = self._check_telegram_photo_limits(data)
if exceeds_limits:
if send_large_photos_as_documents:
# Send as document instead
_LOGGER.info("Photo %s, sending as document", reason)
return await self._send_telegram_document(
session, token, chat_id, data, "photo.jpg",
caption, reply_to_message_id, parse_mode, url
)
else:
# Skip oversized photo
_LOGGER.warning("Photo %s, skipping (set send_large_photos_as_documents=true to send as document)", reason)
return {
"success": False,
"error": f"Photo {reason}",
"skipped": True,
}
# Build multipart form
form = FormData()
form.add_field("chat_id", chat_id)
@@ -381,12 +616,26 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
result = await response.json()
_LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
if response.status == 200 and result.get("ok"):
# Extract and cache file_id
photos = result.get("result", {}).get("photo", [])
if photos and cache:
# Use the largest photo's file_id
file_id = photos[-1].get("file_id")
if file_id:
await cache.async_set(url, file_id, "photo")
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
}
else:
_LOGGER.error("Telegram API error: %s", result)
# Log detailed error with diagnostics
self._log_telegram_error(
error_code=result.get("error_code"),
description=result.get("description", "Unknown Telegram error"),
data=data,
media_type="photo",
)
return {
"success": False,
"error": result.get("description", "Unknown Telegram error"),
@@ -405,6 +654,7 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
caption: str | None = None,
reply_to_message_id: int | None = None,
parse_mode: str = "HTML",
max_asset_data_size: int | None = None,
) -> ServiceResponse:
"""Send a single video to Telegram."""
import aiohttp
@@ -413,10 +663,46 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
if not url:
return {"success": False, "error": "Missing 'url' for video"}
# Check cache for file_id
cache = self.coordinator.telegram_cache
cached = cache.get(url) if cache else None
if cached and cached.get("file_id"):
# Use cached file_id - no download needed
file_id = cached["file_id"]
_LOGGER.debug("Using cached Telegram file_id for video")
payload = {
"chat_id": chat_id,
"video": file_id,
"parse_mode": parse_mode,
}
if caption:
payload["caption"] = caption
if reply_to_message_id:
payload["reply_to_message_id"] = reply_to_message_id
telegram_url = f"https://api.telegram.org/bot{token}/sendVideo"
try:
async with session.post(telegram_url, json=payload) as response:
result = await response.json()
if response.status == 200 and result.get("ok"):
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
"cached": True,
}
else:
# Cache might be stale, fall through to upload
_LOGGER.debug("Cached file_id failed, will re-upload: %s", result.get("description"))
except aiohttp.ClientError as err:
_LOGGER.debug("Cached file_id request failed: %s", err)
try:
# Download the video using internal URL for faster local network transfer
download_url = self.coordinator.get_internal_download_url(url)
_LOGGER.debug("Downloading video from %s", download_url[:80])
async with session.get(download_url) as resp:
if resp.status != 200:
return {
"success": False,
@@ -425,6 +711,18 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
data = await resp.read()
_LOGGER.debug("Downloaded video: %d bytes", len(data))
# Check if video exceeds max size limit
if max_asset_data_size is not None and len(data) > max_asset_data_size:
_LOGGER.warning(
"Video size (%d bytes) exceeds max_asset_data_size limit (%d bytes), skipping",
len(data), max_asset_data_size
)
return {
"success": False,
"error": f"Video size ({len(data)} bytes) exceeds max_asset_data_size limit ({max_asset_data_size} bytes)",
"skipped": True,
}
# Build multipart form
form = FormData()
form.add_field("chat_id", chat_id)
@@ -445,12 +743,25 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
result = await response.json()
_LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
if response.status == 200 and result.get("ok"):
# Extract and cache file_id
video = result.get("result", {}).get("video", {})
if video and cache:
file_id = video.get("file_id")
if file_id:
await cache.async_set(url, file_id, "video")
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
}
else:
_LOGGER.error("Telegram API error: %s", result)
# Log detailed error with diagnostics
self._log_telegram_error(
error_code=result.get("error_code"),
description=result.get("description", "Unknown Telegram error"),
data=data,
media_type="video",
)
return {
"success": False,
"error": result.get("description", "Unknown Telegram error"),
@@ -460,6 +771,105 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
_LOGGER.error("Telegram video upload failed: %s", err)
return {"success": False, "error": str(err)}
async def _send_telegram_document(
self,
session: Any,
token: str,
chat_id: str,
data: bytes,
filename: str = "photo.jpg",
caption: str | None = None,
reply_to_message_id: int | None = None,
parse_mode: str = "HTML",
source_url: str | None = None,
) -> ServiceResponse:
"""Send a photo as a document to Telegram (for oversized photos)."""
import aiohttp
from aiohttp import FormData
# Check cache for file_id if source_url is provided
cache = self.coordinator.telegram_cache
if source_url:
cached = cache.get(source_url) if cache else None
if cached and cached.get("file_id") and cached.get("type") == "document":
# Use cached file_id
file_id = cached["file_id"]
_LOGGER.debug("Using cached Telegram file_id for document")
payload = {
"chat_id": chat_id,
"document": file_id,
"parse_mode": parse_mode,
}
if caption:
payload["caption"] = caption
if reply_to_message_id:
payload["reply_to_message_id"] = reply_to_message_id
telegram_url = f"https://api.telegram.org/bot{token}/sendDocument"
try:
async with session.post(telegram_url, json=payload) as response:
result = await response.json()
if response.status == 200 and result.get("ok"):
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
"cached": True,
}
else:
_LOGGER.debug("Cached file_id failed, will re-upload: %s", result.get("description"))
except aiohttp.ClientError as err:
_LOGGER.debug("Cached file_id request failed: %s", err)
try:
# Build multipart form
form = FormData()
form.add_field("chat_id", chat_id)
form.add_field("document", data, filename=filename, content_type="image/jpeg")
form.add_field("parse_mode", parse_mode)
if caption:
form.add_field("caption", caption)
if reply_to_message_id:
form.add_field("reply_to_message_id", str(reply_to_message_id))
# Send to Telegram
telegram_url = f"https://api.telegram.org/bot{token}/sendDocument"
_LOGGER.debug("Uploading oversized photo as document to Telegram (%d bytes)", len(data))
async with session.post(telegram_url, data=form) as response:
result = await response.json()
_LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
if response.status == 200 and result.get("ok"):
# Extract and cache file_id
if source_url and cache:
document = result.get("result", {}).get("document", {})
file_id = document.get("file_id")
if file_id:
await cache.async_set(source_url, file_id, "document")
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
}
else:
# Log detailed error with diagnostics
self._log_telegram_error(
error_code=result.get("error_code"),
description=result.get("description", "Unknown Telegram error"),
data=data,
media_type="document",
)
return {
"success": False,
"error": result.get("description", "Unknown Telegram error"),
"error_code": result.get("error_code"),
}
except aiohttp.ClientError as err:
_LOGGER.error("Telegram document upload failed: %s", err)
return {"success": False, "error": str(err)}
async def _send_telegram_media_group(
self,
session: Any,
@@ -471,6 +881,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
max_group_size: int = 10,
chunk_delay: int = 0,
parse_mode: str = "HTML",
max_asset_data_size: int | None = None,
send_large_photos_as_documents: bool = False,
) -> ServiceResponse:
"""Send media URLs to Telegram as media group(s).
@@ -511,12 +923,13 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
if media_type == "photo":
_LOGGER.debug("Sending chunk %d/%d as single photo", chunk_idx + 1, len(chunks))
result = await self._send_telegram_photo(
session, token, chat_id, url, chunk_caption, chunk_reply_to, parse_mode,
max_asset_data_size, send_large_photos_as_documents
)
else: # video
_LOGGER.debug("Sending chunk %d/%d as single video", chunk_idx + 1, len(chunks))
result = await self._send_telegram_video(
session, token, chat_id, url, chunk_caption, chunk_reply_to, parse_mode, max_asset_data_size
)
if not result.get("success"):
@@ -528,8 +941,16 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
# Multi-item chunk: use sendMediaGroup
_LOGGER.debug("Sending chunk %d/%d as media group (%d items)", chunk_idx + 1, len(chunks), len(chunk))
# Download all media files for this chunk
# Get cache reference
cache = self.coordinator.telegram_cache
# Collect media items - either from cache (file_id) or by downloading
# Each item: (type, media_ref, filename, url, is_cached)
# media_ref is either file_id (str) or data (bytes)
media_items: list[tuple[str, str | bytes, str, str, bool]] = []
oversized_photos: list[tuple[bytes, str | None, str]] = [] # (data, caption, url)
skipped_count = 0
for i, item in enumerate(chunk):
url = item.get("url")
media_type = item.get("type", "photo")
@@ -546,81 +967,230 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
"error": f"Invalid type '{media_type}' in item {chunk_idx * max_group_size + i}. Must be 'photo' or 'video'.",
}
# Check cache first
cached = cache.get(url) if cache else None
if cached and cached.get("file_id"):
# Use cached file_id
ext = "jpg" if media_type == "photo" else "mp4"
filename = f"media_{chunk_idx * max_group_size + i}.{ext}"
media_items.append((media_type, cached["file_id"], filename, url, True))
_LOGGER.debug("Using cached file_id for media %d", chunk_idx * max_group_size + i)
continue
try:
# Download using internal URL for faster local network transfer
download_url = self.coordinator.get_internal_download_url(url)
_LOGGER.debug("Downloading media %d from %s", chunk_idx * max_group_size + i, download_url[:80])
async with session.get(download_url) as resp:
if resp.status != 200:
return {
"success": False,
"error": f"Failed to download media {chunk_idx * max_group_size + i}: HTTP {resp.status}",
}
data = await resp.read()
_LOGGER.debug("Downloaded media %d: %d bytes", chunk_idx * max_group_size + i, len(data))
# Check if media exceeds max_asset_data_size limit (user-defined limit for skipping)
if max_asset_data_size is not None and len(data) > max_asset_data_size:
_LOGGER.warning(
"Media %d size (%d bytes) exceeds max_asset_data_size limit (%d bytes), skipping",
chunk_idx * max_group_size + i, len(data), max_asset_data_size
)
skipped_count += 1
continue
# For photos, check Telegram limits
if media_type == "photo":
exceeds_limits, reason, width, height = self._check_telegram_photo_limits(data)
if exceeds_limits:
if send_large_photos_as_documents:
# Separate this photo to send as document later
# Caption only on first item of first chunk
photo_caption = caption if chunk_idx == 0 and i == 0 and len(media_items) == 0 else None
oversized_photos.append((data, photo_caption, url))
_LOGGER.info("Photo %d %s, will send as document", i, reason)
continue
else:
# Skip oversized photo
_LOGGER.warning("Photo %d %s, skipping (set send_large_photos_as_documents=true to send as document)", i, reason)
skipped_count += 1
continue
ext = "jpg" if media_type == "photo" else "mp4"
filename = f"media_{chunk_idx * max_group_size + i}.{ext}"
media_items.append((media_type, data, filename, url, False))
except aiohttp.ClientError as err:
return {
"success": False,
"error": f"Failed to download media {chunk_idx * max_group_size + i}: {err}",
}
# Skip this chunk if all files were filtered out
if not media_items and not oversized_photos:
_LOGGER.info("Chunk %d/%d: all %d media items skipped",
chunk_idx + 1, len(chunks), len(chunk))
continue
# Send media group if we have normal-sized files
if media_items:
# Check if all items are cached (can use simple JSON payload)
all_cached = all(is_cached for _, _, _, _, is_cached in media_items)
if all_cached:
# All items cached - use simple JSON payload with file_ids
_LOGGER.debug("All %d items cached, using file_ids", len(media_items))
media_json = []
for i, (media_type, file_id, _, _, _) in enumerate(media_items):
media_item_json: dict[str, Any] = {
"type": media_type,
"media": file_id,
}
if chunk_idx == 0 and i == 0 and caption and not oversized_photos:
media_item_json["caption"] = caption
media_item_json["parse_mode"] = parse_mode
media_json.append(media_item_json)
payload = {
"chat_id": chat_id,
"media": media_json,
}
if chunk_idx == 0 and reply_to_message_id:
payload["reply_to_message_id"] = reply_to_message_id
telegram_url = f"https://api.telegram.org/bot{token}/sendMediaGroup"
try:
async with session.post(telegram_url, json=payload) as response:
result = await response.json()
if response.status == 200 and result.get("ok"):
chunk_message_ids = [
msg.get("message_id") for msg in result.get("result", [])
]
all_message_ids.extend(chunk_message_ids)
else:
# Cache might be stale - fall through to upload path
_LOGGER.debug("Cached file_ids failed, will re-upload: %s", result.get("description"))
all_cached = False # Force re-upload
except aiohttp.ClientError as err:
_LOGGER.debug("Cached file_ids request failed: %s", err)
all_cached = False
if not all_cached:
# Build multipart form with mix of cached file_ids and uploaded data
form = FormData()
form.add_field("chat_id", chat_id)
# Only use reply_to_message_id for the first chunk
if chunk_idx == 0 and reply_to_message_id:
form.add_field("reply_to_message_id", str(reply_to_message_id))
# Build media JSON - use file_id for cached, attach:// for uploaded
media_json = []
upload_idx = 0
urls_to_cache: list[tuple[str, int, str]] = [] # (url, result_idx, type)
for i, (media_type, media_ref, filename, url, is_cached) in enumerate(media_items):
if is_cached:
# Use file_id directly
media_item_json: dict[str, Any] = {
"type": media_type,
"media": media_ref, # file_id
}
else:
# Upload this file
attach_name = f"file{upload_idx}"
media_item_json = {
"type": media_type,
"media": f"attach://{attach_name}",
}
content_type = "image/jpeg" if media_type == "photo" else "video/mp4"
form.add_field(attach_name, media_ref, filename=filename, content_type=content_type)
urls_to_cache.append((url, i, media_type))
upload_idx += 1
if chunk_idx == 0 and i == 0 and caption and not oversized_photos:
media_item_json["caption"] = caption
media_item_json["parse_mode"] = parse_mode
media_json.append(media_item_json)
form.add_field("media", json.dumps(media_json))
# Send to Telegram
telegram_url = f"https://api.telegram.org/bot{token}/sendMediaGroup"
try:
_LOGGER.debug("Uploading media group chunk %d/%d (%d files, %d cached) to Telegram",
chunk_idx + 1, len(chunks), len(media_items), len(media_items) - upload_idx)
async with session.post(telegram_url, data=form) as response:
result = await response.json()
_LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
if response.status == 200 and result.get("ok"):
chunk_message_ids = [
msg.get("message_id") for msg in result.get("result", [])
]
all_message_ids.extend(chunk_message_ids)
# Cache the newly uploaded file_ids
if cache and urls_to_cache:
result_messages = result.get("result", [])
for url, result_idx, m_type in urls_to_cache:
if result_idx < len(result_messages):
msg = result_messages[result_idx]
if m_type == "photo":
photos = msg.get("photo", [])
if photos:
await cache.async_set(url, photos[-1].get("file_id"), "photo")
elif m_type == "video":
video = msg.get("video", {})
if video.get("file_id"):
await cache.async_set(url, video["file_id"], "video")
else:
# Log detailed error for media group with total size info
uploaded_data = [m for m in media_items if not m[4]]
total_size = sum(len(d) for _, d, _, _, _ in uploaded_data if isinstance(d, bytes))
_LOGGER.error(
"Telegram API error for chunk %d/%d: %s | Media count: %d | Uploaded size: %d bytes (%.2f MB)",
chunk_idx + 1, len(chunks),
result.get("description", "Unknown Telegram error"),
len(media_items),
total_size,
total_size / (1024 * 1024) if total_size else 0
)
# Log detailed diagnostics for the first photo in the group
for media_type, media_ref, _, _, is_cached in media_items:
if media_type == "photo" and not is_cached and isinstance(media_ref, bytes):
self._log_telegram_error(
error_code=result.get("error_code"),
description=result.get("description", "Unknown Telegram error"),
data=media_ref,
media_type="photo",
)
break # Only log details for first photo
return {
"success": False,
"error": result.get("description", "Unknown Telegram error"),
"error_code": result.get("error_code"),
"failed_at_chunk": chunk_idx + 1,
}
except aiohttp.ClientError as err:
_LOGGER.error("Telegram upload failed for chunk %d: %s", chunk_idx + 1, err)
return {
"success": False,
"error": str(err),
"failed_at_chunk": chunk_idx + 1,
}
# Send oversized photos as documents
for i, (data, photo_caption, photo_url) in enumerate(oversized_photos):
_LOGGER.debug("Sending oversized photo %d/%d as document", i + 1, len(oversized_photos))
result = await self._send_telegram_document(
session, token, chat_id, data, f"photo_{i}.jpg",
photo_caption, None, parse_mode, photo_url
)
if result.get("success"):
all_message_ids.append(result.get("message_id"))
else:
_LOGGER.error("Failed to send oversized photo as document: %s", result.get("error"))
# Continue with other photos even if one fails
return {
"success": True,
@@ -659,7 +1229,10 @@ class ImmichAlbumIdSensor(ImmichAlbumBaseSensor):
return {}
attrs: dict[str, Any] = {
ATTR_ALBUM_NAME: self._album_data.name,
ATTR_ASSET_COUNT: self._album_data.asset_count,
ATTR_LAST_UPDATED: self._album_data.updated_at,
ATTR_CREATED_AT: self._album_data.created_at,
}
# Primary share URL (prefers public, falls back to protected)


@@ -6,17 +6,17 @@ refresh:
integration: immich_album_watcher
domain: sensor
get_assets:
name: Get Assets
description: Get assets from the targeted album with optional filtering and ordering.
target:
entity:
integration: immich_album_watcher
domain: sensor
fields:
limit:
name: Limit
description: Maximum number of assets to return (1-100).
required: false
default: 10
selector:
@@ -24,6 +24,110 @@ get_recent_assets:
min: 1
max: 100
mode: slider
offset:
name: Offset
description: Number of assets to skip before returning results (for pagination). Use with limit to fetch assets in pages.
required: false
default: 0
selector:
number:
min: 0
mode: box
favorite_only:
name: Favorite Only
description: Filter to show only favorite assets.
required: false
default: false
selector:
boolean:
filter_min_rating:
name: Minimum Rating
description: Minimum rating for assets (1-5). Set to filter by rating.
required: false
default: 1
selector:
number:
min: 1
max: 5
mode: slider
order_by:
name: Order By
description: Field to sort assets by.
required: false
default: "date"
selector:
select:
options:
- label: "Date"
value: "date"
- label: "Rating"
value: "rating"
- label: "Name"
value: "name"
- label: "Random"
value: "random"
order:
name: Order
description: Sort direction.
required: false
default: "descending"
selector:
select:
options:
- label: "Ascending"
value: "ascending"
- label: "Descending"
value: "descending"
asset_type:
name: Asset Type
description: Filter assets by type (all, photo, or video).
required: false
default: "all"
selector:
select:
options:
- label: "All (no type filtering)"
value: "all"
- label: "Photos only"
value: "photo"
- label: "Videos only"
value: "video"
min_date:
name: Minimum Date
description: Filter assets created on or after this date (ISO 8601 format, e.g., 2024-01-01 or 2024-01-01T10:30:00).
required: false
selector:
text:
max_date:
name: Maximum Date
description: Filter assets created on or before this date (ISO 8601 format, e.g., 2024-12-31 or 2024-12-31T23:59:59).
required: false
selector:
text:
memory_date:
name: Memory Date
description: Filter assets by matching month and day, excluding the same year (memories filter like Google Photos). Provide a date in ISO 8601 format (e.g., 2024-02-14) to get assets from February 14th of previous years.
required: false
selector:
text:
city:
name: City
description: Filter assets by city name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
required: false
selector:
text:
state:
name: State
description: Filter assets by state/region name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
required: false
selector:
text:
country:
name: Country
description: Filter assets by country name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
required: false
selector:
text:
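The memory_date filter described above matches month and day while excluding the reference year itself. A sketch of that predicate (function name is illustrative):

```python
from datetime import date

def matches_memory_date(asset_date: date, reference: date) -> bool:
    """Memories-style match: same month and day, but not the reference year."""
    return (
        asset_date.month == reference.month
        and asset_date.day == reference.day
        and asset_date.year != reference.year
    )
```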
send_telegram_notification:
name: Send Telegram Notification
@@ -116,3 +220,21 @@ send_telegram_notification:
default: true
selector:
boolean:
max_asset_data_size:
name: Max Asset Data Size
description: Maximum asset size in bytes. Assets exceeding this limit will be skipped. Leave empty for no limit.
required: false
selector:
number:
min: 1
max: 52428800
step: 1048576
unit_of_measurement: "bytes"
mode: box
send_large_photos_as_documents:
name: Send Large Photos As Documents
description: How to handle photos exceeding Telegram's limits (10MB or 10000px dimension sum). If true, send as documents. If false, skip oversized photos.
required: false
default: false
selector:
boolean:
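Together, the limit and offset fields behave like ordinary list slicing over the album's (already ordered) asset list. A sketch of the intended pagination semantics (helper name is illustrative):

```python
def paginate(assets: list, limit: int = 10, offset: int = 0) -> list:
    """Apply offset, then cap at limit - the get_assets pagination contract."""
    return assets[offset:offset + limit]
```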


@@ -14,6 +14,9 @@ _LOGGER = logging.getLogger(__name__)
STORAGE_VERSION = 1
STORAGE_KEY_PREFIX = "immich_album_watcher"
# Default TTL for Telegram file_id cache (48 hours in seconds)
DEFAULT_TELEGRAM_CACHE_TTL = 48 * 60 * 60
class ImmichAlbumStorage:
"""Handles persistence of album state across restarts."""
@@ -63,3 +66,116 @@ class ImmichAlbumStorage:
"""Remove all storage data."""
await self._store.async_remove()
self._data = None
class TelegramFileCache:
"""Cache for Telegram file_ids to avoid re-uploading media.
When a file is uploaded to Telegram, it returns a file_id that can be reused
to send the same file without re-uploading. This cache stores these file_ids
keyed by the source URL.
"""
def __init__(
self,
hass: HomeAssistant,
album_id: str,
ttl_seconds: int = DEFAULT_TELEGRAM_CACHE_TTL,
) -> None:
"""Initialize the Telegram file cache.
Args:
            hass: Home Assistant instance
            album_id: Album ID for scoping the cache
            ttl_seconds: Time-to-live for cache entries in seconds (default: 48 hours)
        """
        self._store: Store[dict[str, Any]] = Store(
            hass, STORAGE_VERSION, f"{STORAGE_KEY_PREFIX}.telegram_cache.{album_id}"
        )
        self._data: dict[str, Any] | None = None
        self._ttl_seconds = ttl_seconds

    async def async_load(self) -> None:
        """Load cache data from storage."""
        self._data = await self._store.async_load() or {"files": {}}
        # Clean up expired entries on load
        await self._cleanup_expired()
        _LOGGER.debug(
            "Loaded Telegram file cache with %d entries",
            len(self._data.get("files", {})),
        )

    async def _cleanup_expired(self) -> None:
        """Remove expired cache entries."""
        if not self._data or "files" not in self._data:
            return
        now = datetime.now(timezone.utc)
        expired_keys = []
        for url, entry in self._data["files"].items():
            cached_at_str = entry.get("cached_at")
            if cached_at_str:
                cached_at = datetime.fromisoformat(cached_at_str)
                age_seconds = (now - cached_at).total_seconds()
                if age_seconds > self._ttl_seconds:
                    expired_keys.append(url)
        if expired_keys:
            for key in expired_keys:
                del self._data["files"][key]
            await self._store.async_save(self._data)
            _LOGGER.debug("Cleaned up %d expired Telegram cache entries", len(expired_keys))

    def get(self, url: str) -> dict[str, Any] | None:
        """Get cached file_id for a URL.

        Args:
            url: The source URL of the media

        Returns:
            Dict with 'file_id' and 'type' if cached and not expired, None otherwise
        """
        if not self._data or "files" not in self._data:
            return None
        entry = self._data["files"].get(url)
        if not entry:
            return None
        # Check if expired
        cached_at_str = entry.get("cached_at")
        if cached_at_str:
            cached_at = datetime.fromisoformat(cached_at_str)
            age_seconds = (datetime.now(timezone.utc) - cached_at).total_seconds()
            if age_seconds > self._ttl_seconds:
                return None
        return {
            "file_id": entry.get("file_id"),
            "type": entry.get("type"),
        }

    async def async_set(self, url: str, file_id: str, media_type: str) -> None:
        """Store a file_id for a URL.

        Args:
            url: The source URL of the media
            file_id: The Telegram file_id
            media_type: The type of media ('photo', 'video', 'document')
        """
        if self._data is None:
            self._data = {"files": {}}
        self._data["files"][url] = {
            "file_id": file_id,
            "type": media_type,
            "cached_at": datetime.now(timezone.utc).isoformat(),
        }
        await self._store.async_save(self._data)
        _LOGGER.debug("Cached Telegram file_id for URL (type: %s)", media_type)

    async def async_remove(self) -> None:
        """Remove all cache data."""
        await self._store.async_remove()
        self._data = None
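
The expiry rule used by both `get()` and `_cleanup_expired()` above — entry age measured from the stored `cached_at` timestamp against the TTL — can be exercised in isolation. A minimal sketch, assuming only the 48-hour default and none of the Home Assistant `Store` machinery (`is_expired` is a hypothetical helper mirroring the in-class check):

```python
from datetime import datetime, timedelta, timezone

# Default TTL matching the class above: 48 hours (the options flow exposes it in hours).
TTL_SECONDS = 48 * 3600

def is_expired(cached_at_iso: str, now: datetime, ttl_seconds: int = TTL_SECONDS) -> bool:
    # Same age test as get()/_cleanup_expired(): parse the stored ISO
    # timestamp and compare the elapsed seconds against the TTL.
    cached_at = datetime.fromisoformat(cached_at_iso)
    return (now - cached_at).total_seconds() > ttl_seconds

now = datetime.now(timezone.utc)
print(is_expired((now - timedelta(hours=47)).isoformat(), now))  # False: still fresh
print(is_expired((now - timedelta(hours=49)).isoformat(), now))  # True: past the 48h TTL
```

Note that `get()` only reports an expired entry as missing; the entry itself is physically deleted on the next `async_load()`, when `_cleanup_expired()` runs.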

View File

@@ -116,14 +116,16 @@
 "step": {
 "init": {
 "title": "Immich Album Watcher Options",
-"description": "Configure the polling interval for all albums.",
+"description": "Configure the polling interval and Telegram settings for all albums.",
 "data": {
 "scan_interval": "Scan interval (seconds)",
-"telegram_bot_token": "Telegram Bot Token"
+"telegram_bot_token": "Telegram Bot Token",
+"telegram_cache_ttl": "Telegram Cache TTL (hours)"
 },
 "data_description": {
 "scan_interval": "How often to check for album changes (10-3600 seconds)",
-"telegram_bot_token": "Bot token for sending notifications to Telegram"
+"telegram_bot_token": "Bot token for sending notifications to Telegram",
+"telegram_cache_ttl": "How long to cache uploaded file IDs to avoid re-uploading (1-168 hours, default: 48)"
 }
 }
 }
@@ -133,13 +135,61 @@
 "name": "Refresh",
 "description": "Force an immediate refresh of album data from Immich."
 },
-"get_recent_assets": {
-"name": "Get Recent Assets",
-"description": "Get the most recent assets from the targeted album.",
+"get_assets": {
+"name": "Get Assets",
+"description": "Get assets from the targeted album with optional filtering and ordering.",
 "fields": {
-"count": {
-"name": "Count",
-"description": "Number of recent assets to return (1-100)."
+"limit": {
+"name": "Limit",
+"description": "Maximum number of assets to return (1-100)."
 },
+"offset": {
+"name": "Offset",
+"description": "Number of assets to skip (for pagination)."
+},
+"favorite_only": {
+"name": "Favorite Only",
+"description": "Filter to show only favorite assets."
+},
+"filter_min_rating": {
+"name": "Minimum Rating",
+"description": "Minimum rating for assets (1-5)."
+},
+"order_by": {
+"name": "Order By",
+"description": "Field to sort assets by (date, rating, name, or random)."
+},
+"order": {
+"name": "Order",
+"description": "Sort direction (ascending or descending)."
+},
+"asset_type": {
+"name": "Asset Type",
+"description": "Filter assets by type (all, photo, or video)."
+},
+"min_date": {
+"name": "Minimum Date",
+"description": "Filter assets created on or after this date (ISO 8601 format)."
+},
+"max_date": {
+"name": "Maximum Date",
+"description": "Filter assets created on or before this date (ISO 8601 format)."
+},
+"memory_date": {
+"name": "Memory Date",
+"description": "Filter assets by matching month and day, excluding the same year (memories filter)."
+},
+"city": {
+"name": "City",
+"description": "Filter assets by city name (case-insensitive)."
+},
+"state": {
+"name": "State",
+"description": "Filter assets by state/region name (case-insensitive)."
+},
+"country": {
+"name": "Country",
+"description": "Filter assets by country name (case-insensitive)."
+}
 }
 },
@@ -186,6 +236,14 @@
 "wait_for_response": {
 "name": "Wait For Response",
 "description": "Wait for Telegram to finish processing before returning. Set to false for fire-and-forget (automation continues immediately)."
+},
+"max_asset_data_size": {
+"name": "Max Asset Data Size",
+"description": "Maximum asset size in bytes. Assets exceeding this limit will be skipped. Leave empty for no limit."
+},
+"send_large_photos_as_documents": {
+"name": "Send Large Photos As Documents",
+"description": "How to handle photos exceeding Telegram's limits (10MB or 10000px dimension sum). If true, send as documents. If false, downsize to fit limits."
 }
 }
 }

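The `memory_date` behavior described in the service strings above (match month and day, exclude assets from the reference year itself) can be sketched as a plain filter. The helper name `memory_filter` and the asset dicts are hypothetical, for illustration only; the integration applies this logic server-side on album assets:

```python
from datetime import date

def memory_filter(assets: list[dict], ref: date) -> list[dict]:
    # Keep assets whose creation date shares month and day with ref
    # but comes from a different (i.e. earlier) year, mirroring the
    # "memories" behavior described for memory_date.
    return [
        a for a in assets
        if a["created"].month == ref.month
        and a["created"].day == ref.day
        and a["created"].year != ref.year
    ]

assets = [
    {"id": "a", "created": date(2024, 2, 2)},  # same day, earlier year -> kept
    {"id": "b", "created": date(2026, 2, 2)},  # same year as ref -> excluded
    {"id": "c", "created": date(2025, 3, 1)},  # different day -> excluded
]
print([a["id"] for a in memory_filter(assets, date(2026, 2, 2))])  # ['a']
```

Combined with `limit` and `offset`, this gives Google-Photos-style "on this day" pages without re-fetching the whole album.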
View File

@@ -116,14 +116,16 @@
 "step": {
 "init": {
 "title": "Настройки Immich Album Watcher",
-"description": "Настройте интервал опроса для всех альбомов.",
+"description": "Настройте интервал опроса и параметры Telegram для всех альбомов.",
 "data": {
 "scan_interval": "Интервал сканирования (секунды)",
-"telegram_bot_token": "Токен Telegram бота"
+"telegram_bot_token": "Токен Telegram бота",
+"telegram_cache_ttl": "Время жизни кэша Telegram (часы)"
 },
 "data_description": {
 "scan_interval": "Как часто проверять изменения в альбомах (10-3600 секунд)",
-"telegram_bot_token": "Токен бота для отправки уведомлений в Telegram"
+"telegram_bot_token": "Токен бота для отправки уведомлений в Telegram",
+"telegram_cache_ttl": "Сколько хранить ID загруженных файлов для повторной отправки без загрузки (1-168 часов, по умолчанию: 48)"
 }
 }
 }
@@ -133,13 +135,61 @@
 "name": "Обновить",
 "description": "Принудительно обновить данные альбома из Immich."
 },
-"get_recent_assets": {
-"name": "Получить последние файлы",
-"description": "Получить последние файлы из выбранного альбома.",
+"get_assets": {
+"name": "Получить файлы",
+"description": "Получить файлы из выбранного альбома с возможностью фильтрации и сортировки.",
 "fields": {
-"count": {
-"name": "Количество",
-"description": "Количество возвращаемых файлов (1-100)."
+"limit": {
+"name": "Лимит",
+"description": "Максимальное количество возвращаемых файлов (1-100)."
 },
+"offset": {
+"name": "Смещение",
+"description": "Количество файлов для пропуска (для пагинации)."
+},
+"favorite_only": {
+"name": "Только избранные",
+"description": "Фильтр для отображения только избранных файлов."
+},
+"filter_min_rating": {
+"name": "Минимальный рейтинг",
+"description": "Минимальный рейтинг для файлов (1-5)."
+},
+"order_by": {
+"name": "Сортировать по",
+"description": "Поле для сортировки файлов (date - дата, rating - рейтинг, name - имя, random - случайный)."
+},
+"order": {
+"name": "Порядок",
+"description": "Направление сортировки (ascending - по возрастанию, descending - по убыванию)."
+},
+"asset_type": {
+"name": "Тип файла",
+"description": "Фильтровать файлы по типу (all - все, photo - только фото, video - только видео)."
+},
+"min_date": {
+"name": "Минимальная дата",
+"description": "Фильтровать файлы, созданные в эту дату или после (формат ISO 8601)."
+},
+"max_date": {
+"name": "Максимальная дата",
+"description": "Фильтровать файлы, созданные в эту дату или до (формат ISO 8601)."
+},
+"memory_date": {
+"name": "Дата воспоминания",
+"description": "Фильтр по совпадению месяца и дня, исключая тот же год (воспоминания)."
+},
+"city": {
+"name": "Город",
+"description": "Фильтр по названию города (без учёта регистра)."
+},
+"state": {
+"name": "Регион",
+"description": "Фильтр по названию региона/области (без учёта регистра)."
+},
+"country": {
+"name": "Страна",
+"description": "Фильтр по названию страны (без учёта регистра)."
+}
 }
 },
@@ -186,6 +236,14 @@
 "wait_for_response": {
 "name": "Ждать ответа",
 "description": "Ждать завершения отправки в Telegram перед возвратом. Установите false для фоновой отправки (автоматизация продолжается немедленно)."
+},
+"max_asset_data_size": {
+"name": "Макс. размер ресурса",
+"description": "Максимальный размер ресурса в байтах. Ресурсы, превышающие этот лимит, будут пропущены. Оставьте пустым для отсутствия ограничения."
+},
+"send_large_photos_as_documents": {
+"name": "Большие фото как документы",
+"description": "Как обрабатывать фото, превышающие лимиты Telegram (10МБ или сумма размеров 10000пкс). Если true, отправлять как документы. Если false, уменьшать для соответствия лимитам."
 }
 }
 }