Compare commits — 20 commits:

- fde2d0ae31
- 31663852f9
- 5cee3ccc79
- 3b133dc4bb
- a8ea9ab46a
- e88fd0fa3a
- 3cf916dc77
- df446390f2
- 1d61f05552
- 38a2a6ad7a
- 0bb7e71a1e
- c29fc2fbcf
- 011f105823
- ee45fdc177
- 4b0f3b8b12
- e5e45f0fbf
- 8714685d5e
- bbcd97e1ac
- 04dd63825c
- 71d3714f6a
@@ -3,6 +3,7 @@

## Version Management

Update the integration version in `custom_components/immich_album_watcher/manifest.json` only when changes are made to the **integration content** (files inside `custom_components/immich_album_watcher/`).

**IMPORTANT:** Always ask before bumping the version.

Do NOT bump the version for:
README.md
@@ -4,16 +4,21 @@

A Home Assistant custom integration that monitors [Immich](https://immich.app/) photo/video library albums for changes and exposes them as Home Assistant entities with event-firing capabilities.

> **Tip:** For the best experience, use this integration with the [Immich Album Watcher Blueprint](https://github.com/DolgolyovAlexei/haos-blueprints/tree/main/Common/Immich%20Album%20Watcher) to easily create automations for album change notifications.

## Features

- **Album Monitoring** - Watch selected Immich albums for asset additions and removals
- **Rich Sensor Data** - Multiple sensors per album:
  - Album ID (with album name and share URL attributes)
  - Asset Count (total assets with detected people list)
  - Photo Count (number of photos)
  - Video Count (number of videos)
  - Last Updated (last modification timestamp)
  - Created (album creation date)
  - Public URL (public share link)
  - Protected URL (password-protected share link)
  - Protected Password (password for protected link)
- **Camera Entity** - Album thumbnail displayed as a camera entity for dashboards
- **Binary Sensor** - "New Assets" indicator that turns on when assets are added
- **Face Recognition** - Detects and lists people recognized in album photos
@@ -31,13 +36,16 @@ A Home Assistant custom integration that monitors [Immich](https://immich.app/)

  - Detected people in the asset
- **Services** - Custom service calls:
  - `immich_album_watcher.refresh` - Force immediate data refresh
  - `immich_album_watcher.get_assets` - Get assets from an album with filtering and ordering
  - `immich_album_watcher.send_telegram_notification` - Send text, photo, video, or media group to Telegram
- **Share Link Management** - Button entities to create and delete share links:
  - Create/delete public (unprotected) share links
  - Create/delete password-protected share links
  - Edit protected link passwords via Text entity
- **Configurable Polling** - Adjustable scan interval (10-3600 seconds)
- **Localization** - Available in multiple languages:
  - English
  - Russian (Русский)

## Installation
@@ -60,8 +68,6 @@ A Home Assistant custom integration that monitors [Immich](https://immich.app/)

3. Restart Home Assistant
4. Add the integration via **Settings** → **Devices & Services** → **Add Integration**

## Configuration

| Option | Description | Default |
@@ -71,12 +77,36 @@ A Home Assistant custom integration that monitors [Immich](https://immich.app/)

| Albums | Albums to monitor | Required |
| Scan Interval | How often to check for changes (seconds) | 60 |
| Telegram Bot Token | Bot token for sending media to Telegram (optional) | - |
| Telegram Cache TTL | How long to cache uploaded file IDs (hours, 1-168) | 48 |

### External Domain Support

The integration supports connecting to a local Immich server while using an external domain for user-facing URLs. This is useful when:

- Your Home Assistant connects to Immich via the local network (e.g., `http://192.168.1.100:2283`)
- But you want share links and asset URLs to use your public domain (e.g., `https://photos.example.com`)

**How it works:**

1. Configure "External domain" in Immich: **Administration → Settings → Server → External Domain**
2. The integration automatically fetches this setting on startup
3. All user-facing URLs (share links, asset URLs in events) use the external domain
4. API calls and file downloads still use the local connection URL for faster performance

**Example:**

- Server URL (in integration config): `http://192.168.1.100:2283`
- External Domain (in Immich settings): `https://photos.example.com`
- Share links in events: `https://photos.example.com/share/...`
- Telegram downloads: via `http://192.168.1.100:2283` (fast local network)

If no external domain is configured in Immich, all URLs will use the Server URL from the integration configuration.

## Entities Created (per album)

| Entity Type | Name | Description |
|-------------|------|-------------|
| Sensor | Album ID | Album identifier with `album_name`, `asset_count`, `share_url`, `last_updated_at`, and `created_at` attributes |
| Sensor | Asset Count | Total number of assets (includes `people` list in attributes) |
| Sensor | Photo Count | Number of photos in the album |
| Sensor | Video Count | Number of videos in the album |
@@ -103,16 +133,201 @@ Force an immediate refresh of all album data:

```yaml
service: immich_album_watcher.refresh
```
### Get Assets

Get assets from a specific album with optional filtering and ordering (returns response data). Only returns fully processed assets (videos with completed transcoding, photos with generated thumbnails).

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 10                  # Maximum number of assets (1-100)
  offset: 0                  # Number of assets to skip (for pagination)
  favorite_only: false       # true = favorites only, false = all assets
  filter_min_rating: 4       # Min rating (1-5)
  order_by: "date"           # Options: "date", "rating", "name", "random"
  order: "descending"        # Options: "ascending", "descending"
  asset_type: "all"          # Options: "all", "photo", "video"
  min_date: "2024-01-01"     # Optional: assets created on or after this date
  max_date: "2024-12-31"     # Optional: assets created on or before this date
  memory_date: "2024-02-14"  # Optional: memories filter (excludes same year)
  city: "Paris"              # Optional: filter by city name
  state: "California"        # Optional: filter by state/region
  country: "France"          # Optional: filter by country
```

**Parameters:**

- `limit` (optional, default: 10): Maximum number of assets to return (1-100)
- `offset` (optional, default: 0): Number of assets to skip before returning results. Use with `limit` for pagination (e.g., `offset: 0, limit: 10` for the first page, `offset: 10, limit: 10` for the second page)
- `favorite_only` (optional, default: false): Filter to show only favorite assets
- `filter_min_rating` (optional, default: 1): Minimum rating for assets (1-5 stars). Applied independently of `favorite_only`
- `order_by` (optional, default: "date"): Field to sort assets by
  - `"date"`: Sort by creation date
  - `"rating"`: Sort by rating (assets without a rating are placed last)
  - `"name"`: Sort by filename
  - `"random"`: Random order (ignores `order`)
- `order` (optional, default: "descending"): Sort direction
  - `"ascending"`: Ascending order
  - `"descending"`: Descending order
- `asset_type` (optional, default: "all"): Filter by asset type
  - `"all"`: No type filtering, return both photos and videos
  - `"photo"`: Return only photos
  - `"video"`: Return only videos
- `min_date` (optional): Filter assets created on or after this date. Use ISO 8601 format (e.g., `"2024-01-01"` or `"2024-01-01T10:30:00"`)
- `max_date` (optional): Filter assets created on or before this date. Use ISO 8601 format (e.g., `"2024-12-31"` or `"2024-12-31T23:59:59"`)
- `memory_date` (optional): Filter assets by matching month and day, excluding the same year (a memories filter like Google Photos). Provide a date in ISO 8601 format (e.g., `"2024-02-14"`) to get all assets taken on February 14th in previous years
- `city` (optional): Filter assets by city name (case-insensitive substring match). Based on reverse-geocoded location from asset GPS data
- `state` (optional): Filter assets by state/region name (case-insensitive substring match). Based on reverse-geocoded location from asset GPS data
- `country` (optional): Filter assets by country name (case-insensitive substring match). Based on reverse-geocoded location from asset GPS data
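The `memory_date` behavior described above (match month and day, exclude the same year) can be sketched as a small standalone filter. This is an illustrative stand-in, not the integration's implementation; the `created_at` field shape is assumed from the event documentation below.

```python
from datetime import date

def memory_filter(assets: list[dict], memory_date: str) -> list[dict]:
    """Keep assets taken on the same month/day as memory_date, in other years."""
    target = date.fromisoformat(memory_date)
    result = []
    for asset in assets:
        # Parse just the date part of an ISO 8601 timestamp
        created = date.fromisoformat(asset["created_at"][:10])
        same_day = (created.month, created.day) == (target.month, target.day)
        if same_day and created.year != target.year:
            result.append(asset)
    return result

assets = [
    {"id": "a", "created_at": "2022-02-14T09:00:00"},  # kept: same day, other year
    {"id": "b", "created_at": "2024-02-14T10:00:00"},  # same year -> excluded
    {"id": "c", "created_at": "2023-03-01T10:00:00"},  # wrong day -> excluded
]
print([a["id"] for a in memory_filter(assets, "2024-02-14")])  # ['a']
```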

**Examples:**

Get the 5 most recent favorite assets:

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 5
  favorite_only: true
  order_by: "date"
  order: "descending"
```

Get 10 random assets rated 3 stars or higher:

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 10
  filter_min_rating: 3
  order_by: "random"
```

Get the 20 most recent photos only:

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 20
  asset_type: "photo"
  order_by: "date"
  order: "descending"
```

Get the top 10 highest-rated favorite videos:

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 10
  favorite_only: true
  asset_type: "video"
  order_by: "rating"
  order: "descending"
```

Get photos sorted alphabetically by name:

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 20
  asset_type: "photo"
  order_by: "name"
  order: "ascending"
```

Get photos from a specific date range:

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 50
  asset_type: "photo"
  min_date: "2024-06-01"
  max_date: "2024-06-30"
  order_by: "date"
  order: "descending"
```

Get "On This Day" memories (photos from today's date in previous years):

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 20
  memory_date: "{{ now().strftime('%Y-%m-%d') }}"
  order_by: "date"
  order: "ascending"
```

Paginate through all assets (first page):

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 10
  offset: 0
  order_by: "date"
  order: "descending"
```

Paginate through all assets (second page):

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 10
  offset: 10
  order_by: "date"
  order: "descending"
```
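The pagination pattern above is plain offset/limit arithmetic; a tiny sketch of mapping page numbers to service parameters (a hypothetical helper, not part of the integration):

```python
def page_params(page: int, page_size: int = 10) -> dict:
    """Translate a 1-based page number into get_assets offset/limit values."""
    if page < 1:
        raise ValueError("page numbers start at 1")
    return {"limit": page_size, "offset": (page - 1) * page_size}

print(page_params(1))                # {'limit': 10, 'offset': 0}
print(page_params(3, page_size=25))  # {'limit': 25, 'offset': 50}
```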

Get photos taken in a specific city:

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 50
  city: "Paris"
  asset_type: "photo"
  order_by: "date"
  order: "descending"
```

Get all assets from a specific country:

```yaml
service: immich_album_watcher.get_assets
target:
  entity_id: sensor.album_name_asset_count
data:
  limit: 100
  country: "Japan"
  order_by: "date"
  order: "ascending"
```

### Send Telegram Notification
@@ -126,6 +341,8 @@ Send notifications to Telegram. Supports multiple formats:

The service downloads media from Immich and uploads it to Telegram, bypassing any CORS restrictions. Large lists of media are automatically split into multiple media groups based on the `max_group_size` parameter (default: 10 items per group).
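The splitting behavior amounts to simple chunking; a minimal sketch (an illustrative function, not the integration's actual code):

```python
def split_into_groups(urls: list[str], max_group_size: int = 10) -> list[list[str]]:
    """Split a media list into Telegram media groups of at most max_group_size items."""
    return [urls[i:i + max_group_size] for i in range(0, len(urls), max_group_size)]

# 23 items with the default group size of 10 yield groups of 10, 10, and 3
groups = split_into_groups([f"asset{i}" for i in range(23)], max_group_size=10)
print([len(g) for g in groups])  # [10, 10, 3]
```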

**File ID Caching:** When media is uploaded to Telegram, the service caches the returned `file_id`. Subsequent sends of the same media will use the cached `file_id` instead of re-uploading, significantly improving performance. The cache TTL is configurable in hub options (default: 48 hours, range: 1-168 hours). The cache is persistent across Home Assistant restarts and is stored per album.

**Examples:**

Text message:
@@ -133,7 +350,7 @@ Text message:

```yaml
service: immich_album_watcher.send_telegram_notification
target:
  entity_id: sensor.album_name_asset_count
data:
  chat_id: "-1001234567890"
  caption: "Check out the new album!"
```
@@ -145,7 +362,7 @@ Single photo:

```yaml
service: immich_album_watcher.send_telegram_notification
target:
  entity_id: sensor.album_name_asset_count
data:
  chat_id: "-1001234567890"
  urls:
```
@@ -159,7 +376,7 @@ Media group:

```yaml
service: immich_album_watcher.send_telegram_notification
target:
  entity_id: sensor.album_name_asset_count
data:
  chat_id: "-1001234567890"
  urls:
```
@@ -176,12 +393,12 @@ HTML formatting:

```yaml
service: immich_album_watcher.send_telegram_notification
target:
  entity_id: sensor.album_name_asset_count
data:
  chat_id: "-1001234567890"
  caption: |
    <b>Album Updated!</b>
    New photos by <i>{{ trigger.event.data.added_assets[0].owner }}</i>
    <a href="https://immich.example.com/album">View Album</a>
  parse_mode: "HTML" # Default, can be omitted
```
@@ -191,7 +408,7 @@ Non-blocking mode (fire-and-forget):

```yaml
service: immich_album_watcher.send_telegram_notification
target:
  entity_id: sensor.album_name_asset_count
data:
  chat_id: "-1001234567890"
  urls:
```
@@ -213,6 +430,9 @@ data:

| `max_group_size` | Maximum media items per group (2-10). Large lists split into multiple groups. Default: 10 | No |
| `chunk_delay` | Delay in milliseconds between sending multiple groups (0-60000). Useful for rate limiting. Default: 0 | No |
| `wait_for_response` | Wait for Telegram to finish processing. Set to `false` for fire-and-forget (automation continues immediately). Default: `true` | No |
| `max_asset_data_size` | Maximum asset size in bytes. Assets exceeding this limit will be skipped. Default: no limit | No |
| `send_large_photos_as_documents` | Handle photos exceeding Telegram limits (10MB or 10000px dimension sum). If `true`, send as documents. If `false`, skip oversized photos. Default: `false` | No |
| `chat_action` | Chat action to display while processing media (`typing`, `upload_photo`, `upload_video`, `upload_document`). Set to an empty string to disable. Default: `typing` | No |

The service returns a response with `success` status and `message_id` (single message), `message_ids` (media group), or `groups_sent` (number of groups when split). When `wait_for_response` is `false`, the service returns immediately with `{"success": true, "status": "queued"}` while processing continues in the background.
@@ -243,7 +463,7 @@ automation:

      - service: notify.mobile_app
        data:
          title: "New Photos"
          message: "{{ trigger.event.data.added_count }} new photos in {{ trigger.event.data.album_name }}"

  - alias: "Album renamed"
    trigger:
@@ -276,8 +496,8 @@ automation:

| `album_url` | Public URL to view the album (only present if the album has a shared link) | All events except `album_deleted` |
| `change_type` | Type of change (`assets_added`, `assets_removed`, `album_renamed`, `album_sharing_changed`, `changed`) | All events except `album_deleted` |
| `shared` | Current sharing status of the album | All events except `album_deleted` |
| `added_count` | Number of assets added | `album_changed`, `assets_added` |
| `removed_count` | Number of assets removed | `album_changed`, `assets_removed` |
| `added_assets` | List of added assets with details (see below) | `album_changed`, `assets_added` |
| `removed_assets` | List of removed asset IDs | `album_changed`, `assets_removed` |
| `people` | List of all people detected in the album | All events except `album_deleted` |
@@ -293,15 +513,27 @@ Each item in the `added_assets` list contains the following fields:

| Field | Description |
|-------|-------------|
| `id` | Unique asset ID |
| `type` | Type of asset (`IMAGE` or `VIDEO`) |
| `filename` | Original filename of the asset |
| `created_at` | Date/time when the asset was originally created |
| `owner` | Display name of the user who owns the asset |
| `owner_id` | Unique ID of the user who owns the asset |
| `description` | Description/caption of the asset (from EXIF data) |
| `is_favorite` | Whether the asset is marked as a favorite (`true` or `false`) |
| `rating` | User rating of the asset (1-5 stars, or `null` if not rated) |
| `latitude` | GPS latitude coordinate (or `null` if no geolocation) |
| `longitude` | GPS longitude coordinate (or `null` if no geolocation) |
| `city` | City name from reverse geocoding (or `null` if unavailable) |
| `state` | State/region name from reverse geocoding (or `null` if unavailable) |
| `country` | Country name from reverse geocoding (or `null` if unavailable) |
| `url` | Public URL to view the asset (only present if the album has a shared link) |
| `download_url` | Direct download URL for the original file (if a shared link exists) |
| `playback_url` | Video playback URL (for VIDEO assets only, if a shared link exists) |
| `photo_url` | Photo preview URL (for IMAGE assets only, if a shared link exists) |
| `people` | List of people detected in this specific asset |
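To make the field table concrete, here is an illustrative event payload and a small consumer that picks out geotagged photos. The payload values are invented for the example; only the field names come from the table above.

```python
# Hypothetical event payload shaped like the `added_assets` fields documented above
event_data = {
    "album_name": "Vacation",
    "added_count": 2,
    "added_assets": [
        {"id": "a1", "type": "IMAGE", "filename": "beach.jpg",
         "latitude": 48.85, "longitude": 2.35, "city": "Paris"},
        {"id": "a2", "type": "VIDEO", "filename": "clip.mp4",
         "latitude": None, "longitude": None, "city": None},
    ],
}

def geotagged_photos(data: dict) -> list[dict]:
    """Return IMAGE assets from an event payload that carry GPS coordinates."""
    return [
        a for a in data["added_assets"]
        if a["type"] == "IMAGE" and a["latitude"] is not None
    ]

print([a["id"] for a in geotagged_photos(event_data)])  # ['a1']
```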

> **Note:** Assets are only included in events and service responses when they are fully processed by Immich. For videos, this means transcoding must be complete (with `encodedVideoPath`). For photos, thumbnail generation must be complete (with `thumbhash`). This ensures that all media URLs are valid and accessible. Unprocessed assets are silently ignored until their processing completes.

Example accessing the asset owner in an automation:

```yaml
automation:
  # ...
        data:
          title: "New Photos"
          message: >
            {{ trigger.event.data.added_assets[0].owner }} added
            {{ trigger.event.data.added_count }} photos to {{ trigger.event.data.album_name }}
```

## Requirements
@@ -15,12 +15,14 @@ from .const import (

    CONF_HUB_NAME,
    CONF_IMMICH_URL,
    CONF_SCAN_INTERVAL,
    CONF_TELEGRAM_CACHE_TTL,
    DEFAULT_SCAN_INTERVAL,
    DEFAULT_TELEGRAM_CACHE_TTL,
    DOMAIN,
    PLATFORMS,
)
from .coordinator import ImmichAlbumWatcherCoordinator
from .storage import ImmichAlbumStorage, TelegramFileCache

_LOGGER = logging.getLogger(__name__)
@@ -33,6 +35,7 @@ class ImmichHubData:

    url: str
    api_key: str
    scan_interval: int
    telegram_cache_ttl: int


@dataclass
@@ -55,6 +58,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: ImmichConfigEntry) -> bo

    url = entry.data[CONF_IMMICH_URL]
    api_key = entry.data[CONF_API_KEY]
    scan_interval = entry.options.get(CONF_SCAN_INTERVAL, DEFAULT_SCAN_INTERVAL)
    telegram_cache_ttl = entry.options.get(CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL)

    # Store hub data
    entry.runtime_data = ImmichHubData(
@@ -62,6 +66,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: ImmichConfigEntry) -> bo

        url=url,
        api_key=api_key,
        scan_interval=scan_interval,
        telegram_cache_ttl=telegram_cache_ttl,
    )

    # Create storage for persisting album state across restarts
@@ -107,6 +112,12 @@ async def _async_setup_subentry_coordinator(

    _LOGGER.debug("Setting up coordinator for album: %s (%s)", album_name, album_id)

    # Create and load Telegram file cache for this album
    # TTL is in hours from config, convert to seconds
    cache_ttl_seconds = hub_data.telegram_cache_ttl * 60 * 60
    telegram_cache = TelegramFileCache(hass, album_id, ttl_seconds=cache_ttl_seconds)
    await telegram_cache.async_load()

    # Create coordinator for this album
    coordinator = ImmichAlbumWatcherCoordinator(
        hass,
@@ -117,6 +128,7 @@ async def _async_setup_subentry_coordinator(

        scan_interval=hub_data.scan_interval,
        hub_name=hub_data.name,
        storage=storage,
        telegram_cache=telegram_cache,
    )

    # Load persisted state before first refresh to detect changes during downtime
@@ -27,7 +27,9 @@ from .const import (

    CONF_IMMICH_URL,
    CONF_SCAN_INTERVAL,
    CONF_TELEGRAM_BOT_TOKEN,
    CONF_TELEGRAM_CACHE_TTL,
    DEFAULT_SCAN_INTERVAL,
    DEFAULT_TELEGRAM_CACHE_TTL,
    DOMAIN,
    SUBENTRY_TYPE_ALBUM,
)
@@ -252,6 +254,9 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
|
|||||||
CONF_TELEGRAM_BOT_TOKEN: user_input.get(
|
CONF_TELEGRAM_BOT_TOKEN: user_input.get(
|
||||||
CONF_TELEGRAM_BOT_TOKEN, ""
|
CONF_TELEGRAM_BOT_TOKEN, ""
|
||||||
),
|
),
|
||||||
|
CONF_TELEGRAM_CACHE_TTL: user_input.get(
|
||||||
|
CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL
|
||||||
|
),
|
||||||
},
|
},
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -261,6 +266,9 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
|
|||||||
current_bot_token = self._config_entry.options.get(
|
current_bot_token = self._config_entry.options.get(
|
||||||
CONF_TELEGRAM_BOT_TOKEN, ""
|
CONF_TELEGRAM_BOT_TOKEN, ""
|
||||||
)
|
)
|
||||||
|
current_cache_ttl = self._config_entry.options.get(
|
||||||
|
CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL
|
||||||
|
)
|
||||||
|
|
||||||
return self.async_show_form(
|
return self.async_show_form(
|
||||||
step_id="init",
|
step_id="init",
|
||||||
@@ -272,6 +280,9 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
|
|||||||
vol.Optional(
|
vol.Optional(
|
||||||
CONF_TELEGRAM_BOT_TOKEN, default=current_bot_token
|
CONF_TELEGRAM_BOT_TOKEN, default=current_bot_token
|
||||||
): str,
|
): str,
|
||||||
|
vol.Optional(
|
||||||
|
CONF_TELEGRAM_CACHE_TTL, default=current_cache_ttl
|
||||||
|
): vol.All(vol.Coerce(int), vol.Range(min=1, max=168)),
|
||||||
}
|
}
|
||||||
),
|
),
|
||||||
)
|
)
|
||||||
|
|||||||
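The options flow above validates the new TTL field with `vol.All(vol.Coerce(int), vol.Range(min=1, max=168))`: input is first coerced to `int`, then bounded to 1–168 hours (one week). A plain-Python sketch of that validation chain, written without voluptuous so it runs standalone; `validate_cache_ttl` is an illustrative name, not part of the integration:

```python
def validate_cache_ttl(value):
    """Mimic vol.All(vol.Coerce(int), vol.Range(min=1, max=168))."""
    try:
        value = int(value)  # vol.Coerce(int): accepts "48" as well as 48
    except (TypeError, ValueError) as err:
        raise ValueError(f"expected int, got {value!r}") from err
    if not 1 <= value <= 168:  # vol.Range(min=1, max=168): 1 hour .. 1 week
        raise ValueError(f"value must be in range [1, 168], got {value}")
    return value


print(validate_cache_ttl("48"))  # -> 48
```

In voluptuous, `vol.All` threads the value through each validator in order, so coercion happens before the range check, exactly as modeled here.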
```diff
@@ -14,12 +14,14 @@ CONF_ALBUM_ID: Final = "album_id"
 CONF_ALBUM_NAME: Final = "album_name"
 CONF_SCAN_INTERVAL: Final = "scan_interval"
 CONF_TELEGRAM_BOT_TOKEN: Final = "telegram_bot_token"
+CONF_TELEGRAM_CACHE_TTL: Final = "telegram_cache_ttl"
 
 # Subentry type
 SUBENTRY_TYPE_ALBUM: Final = "album"
 
 # Defaults
 DEFAULT_SCAN_INTERVAL: Final = 60  # seconds
+DEFAULT_TELEGRAM_CACHE_TTL: Final = 48  # hours
 NEW_ASSETS_RESET_DELAY: Final = 300  # 5 minutes
 DEFAULT_SHARE_PASSWORD: Final = "immich123"
 
@@ -47,7 +49,7 @@ ATTR_REMOVED_COUNT: Final = "removed_count"
 ATTR_ADDED_ASSETS: Final = "added_assets"
 ATTR_REMOVED_ASSETS: Final = "removed_assets"
 ATTR_CHANGE_TYPE: Final = "change_type"
-ATTR_LAST_UPDATED: Final = "last_updated"
+ATTR_LAST_UPDATED: Final = "last_updated_at"
 ATTR_CREATED_AT: Final = "created_at"
 ATTR_THUMBNAIL_URL: Final = "thumbnail_url"
 ATTR_SHARED: Final = "shared"
@@ -57,15 +59,22 @@ ATTR_OLD_NAME: Final = "old_name"
 ATTR_NEW_NAME: Final = "new_name"
 ATTR_OLD_SHARED: Final = "old_shared"
 ATTR_NEW_SHARED: Final = "new_shared"
-ATTR_ASSET_TYPE: Final = "asset_type"
-ATTR_ASSET_FILENAME: Final = "asset_filename"
-ATTR_ASSET_CREATED: Final = "asset_created"
-ATTR_ASSET_OWNER: Final = "asset_owner"
-ATTR_ASSET_OWNER_ID: Final = "asset_owner_id"
-ATTR_ASSET_URL: Final = "asset_url"
-ATTR_ASSET_DOWNLOAD_URL: Final = "asset_download_url"
-ATTR_ASSET_PLAYBACK_URL: Final = "asset_playback_url"
-ATTR_ASSET_DESCRIPTION: Final = "asset_description"
+ATTR_ASSET_TYPE: Final = "type"
+ATTR_ASSET_FILENAME: Final = "filename"
+ATTR_ASSET_CREATED: Final = "created_at"
+ATTR_ASSET_OWNER: Final = "owner"
+ATTR_ASSET_OWNER_ID: Final = "owner_id"
+ATTR_ASSET_URL: Final = "url"
+ATTR_ASSET_DOWNLOAD_URL: Final = "download_url"
+ATTR_ASSET_PLAYBACK_URL: Final = "playback_url"
+ATTR_ASSET_DESCRIPTION: Final = "description"
+ATTR_ASSET_IS_FAVORITE: Final = "is_favorite"
+ATTR_ASSET_RATING: Final = "rating"
+ATTR_ASSET_LATITUDE: Final = "latitude"
+ATTR_ASSET_LONGITUDE: Final = "longitude"
+ATTR_ASSET_CITY: Final = "city"
+ATTR_ASSET_STATE: Final = "state"
+ATTR_ASSET_COUNTRY: Final = "country"
 
 # Asset types
 ASSET_TYPE_IMAGE: Final = "IMAGE"
@@ -76,5 +85,5 @@ PLATFORMS: Final = ["sensor", "binary_sensor", "camera", "text", "button"]
 
 # Services
 SERVICE_REFRESH: Final = "refresh"
-SERVICE_GET_RECENT_ASSETS: Final = "get_recent_assets"
+SERVICE_GET_ASSETS: Final = "get_assets"
 SERVICE_SEND_TELEGRAM_NOTIFICATION: Final = "send_telegram_notification"
```
```diff
@@ -8,7 +8,7 @@ from datetime import datetime, timedelta
 from typing import TYPE_CHECKING, Any
 
 if TYPE_CHECKING:
-    from .storage import ImmichAlbumStorage
+    from .storage import ImmichAlbumStorage, TelegramFileCache
 
 import aiohttp
 
@@ -28,11 +28,18 @@ from .const import (
     ATTR_ASSET_DESCRIPTION,
     ATTR_ASSET_DOWNLOAD_URL,
     ATTR_ASSET_FILENAME,
+    ATTR_ASSET_IS_FAVORITE,
+    ATTR_ASSET_LATITUDE,
+    ATTR_ASSET_LONGITUDE,
+    ATTR_ASSET_CITY,
+    ATTR_ASSET_STATE,
+    ATTR_ASSET_COUNTRY,
     ATTR_ASSET_OWNER,
     ATTR_ASSET_OWNER_ID,
+    ATTR_ASSET_PLAYBACK_URL,
+    ATTR_ASSET_RATING,
     ATTR_ASSET_TYPE,
     ATTR_ASSET_URL,
-    ATTR_ASSET_PLAYBACK_URL,
     ATTR_CHANGE_TYPE,
     ATTR_HUB_NAME,
     ATTR_PEOPLE,
```
```diff
@@ -43,6 +50,7 @@ from .const import (
     ATTR_OLD_SHARED,
     ATTR_NEW_SHARED,
     ATTR_SHARED,
+    ATTR_THUMBNAIL_URL,
     DOMAIN,
     EVENT_ALBUM_CHANGED,
     EVENT_ASSETS_ADDED,
@@ -115,6 +123,14 @@ class AssetInfo:
     owner_name: str = ""
     description: str = ""
     people: list[str] = field(default_factory=list)
+    is_favorite: bool = False
+    rating: int | None = None
+    latitude: float | None = None
+    longitude: float | None = None
+    city: str | None = None
+    state: str | None = None
+    country: str | None = None
+    is_processed: bool = True  # Whether asset is fully processed by Immich
 
     @classmethod
     def from_api_response(
@@ -130,23 +146,105 @@ class AssetInfo:
         if users_cache and owner_id:
             owner_name = users_cache.get(owner_id, "")
 
-        # Get description from exifInfo if available
-        description = ""
+        # Get description - prioritize user-added description over EXIF description
+        description = data.get("description", "") or ""
         exif_info = data.get("exifInfo")
-        if exif_info:
+        if not description and exif_info:
+            # Fall back to EXIF description if no user description
             description = exif_info.get("description", "") or ""
 
+        # Get favorites and rating
+        is_favorite = data.get("isFavorite", False)
+        rating = exif_info.get("rating") if exif_info else None
+
+        # Get geolocation
+        latitude = exif_info.get("latitude") if exif_info else None
+        longitude = exif_info.get("longitude") if exif_info else None
+
+        # Get reverse geocoded location
+        city = exif_info.get("city") if exif_info else None
+        state = exif_info.get("state") if exif_info else None
+        country = exif_info.get("country") if exif_info else None
+
+        # Check if asset is fully processed by Immich
+        asset_type = data.get("type", ASSET_TYPE_IMAGE)
+        is_processed = cls._check_processing_status(data, asset_type)
+
         return cls(
             id=data["id"],
-            type=data.get("type", ASSET_TYPE_IMAGE),
+            type=asset_type,
             filename=data.get("originalFileName", ""),
             created_at=data.get("fileCreatedAt", ""),
             owner_id=owner_id,
             owner_name=owner_name,
             description=description,
             people=people,
+            is_favorite=is_favorite,
+            rating=rating,
+            latitude=latitude,
+            longitude=longitude,
+            city=city,
+            state=state,
+            country=country,
+            is_processed=is_processed,
         )
 
+    @staticmethod
+    def _check_processing_status(data: dict[str, Any], _asset_type: str) -> bool:
+        """Check if asset has been fully processed by Immich.
+
+        For all assets: Check if thumbnails have been generated (thumbhash exists).
+        Immich generates thumbnails for both photos and videos regardless of
+        whether video transcoding is needed.
+
+        Args:
+            data: Asset data from API response
+            _asset_type: Asset type (IMAGE or VIDEO) - unused but kept for API stability
+
+        Returns:
+            True if asset is fully processed and not trashed/offline/archived, False otherwise
+        """
+        asset_id = data.get("id", "unknown")
+        asset_type = data.get("type", "unknown")
+        is_offline = data.get("isOffline", False)
+        is_trashed = data.get("isTrashed", False)
+        is_archived = data.get("isArchived", False)
+        thumbhash = data.get("thumbhash")
+
+        _LOGGER.debug(
+            "Asset %s (%s): isOffline=%s, isTrashed=%s, isArchived=%s, thumbhash=%s",
+            asset_id,
+            asset_type,
+            is_offline,
+            is_trashed,
+            is_archived,
+            bool(thumbhash),
+        )
+
+        # Exclude offline assets
+        if is_offline:
+            _LOGGER.debug("Asset %s excluded: offline", asset_id)
+            return False
+
+        # Exclude trashed assets
+        if is_trashed:
+            _LOGGER.debug("Asset %s excluded: trashed", asset_id)
+            return False
+
+        # Exclude archived assets
+        if is_archived:
+            _LOGGER.debug("Asset %s excluded: archived", asset_id)
+            return False
+
+        # Check if thumbnails have been generated
+        # This works for both photos and videos - Immich always generates thumbnails
+        # Note: The API doesn't expose video transcoding status (encodedVideoPath),
+        # but thumbhash is sufficient since Immich generates thumbnails for all assets
+        is_processed = bool(thumbhash)
+        if not is_processed:
+            _LOGGER.debug("Asset %s excluded: no thumbhash", asset_id)
+        return is_processed
+
 
 @dataclass
 class AlbumData:
```
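The `_check_processing_status` helper added above reduces to one predicate: exclude offline, trashed, and archived assets, then require a `thumbhash` (set once Immich has generated thumbnails). A standalone sketch of that predicate, with the logging stripped out; the field names follow the Immich asset response as used in the diff:

```python
from typing import Any


def is_fully_processed(data: dict[str, Any]) -> bool:
    """True once Immich has generated a thumbnail and the asset is live."""
    if data.get("isOffline", False):
        return False
    if data.get("isTrashed", False):
        return False
    if data.get("isArchived", False):
        return False
    # thumbhash is set once thumbnail generation has finished
    return bool(data.get("thumbhash"))


print(is_fully_processed({"thumbhash": "abc"}))                     # True
print(is_fully_processed({"thumbhash": "abc", "isTrashed": True}))  # False
print(is_fully_processed({}))                                       # False: no thumbnail yet
```

Gating on `thumbhash` rather than transcoding status is the design choice the diff comments call out: the API does not expose `encodedVideoPath`, but thumbnails are generated for every asset type.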
```diff
@@ -237,6 +335,7 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
         scan_interval: int,
         hub_name: str = "Immich",
         storage: ImmichAlbumStorage | None = None,
+        telegram_cache: TelegramFileCache | None = None,
     ) -> None:
         """Initialize the coordinator."""
         super().__init__(
@@ -256,13 +355,45 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
         self._users_cache: dict[str, str] = {}  # user_id -> name
         self._shared_links: list[SharedLinkInfo] = []
         self._storage = storage
+        self._telegram_cache = telegram_cache
         self._persisted_asset_ids: set[str] | None = None
+        self._external_domain: str | None = None  # Fetched from server config
+        self._pending_asset_ids: set[str] = set()  # Assets detected but not yet processed
 
     @property
     def immich_url(self) -> str:
-        """Return the Immich URL."""
+        """Return the Immich URL (for API calls)."""
         return self._url
 
+    @property
+    def external_url(self) -> str:
+        """Return the external URL for links.
+
+        Uses externalDomain from Immich server config if set,
+        otherwise falls back to the connection URL.
+        """
+        if self._external_domain:
+            return self._external_domain.rstrip("/")
+        return self._url
+
+    def get_internal_download_url(self, url: str) -> str:
+        """Convert an external URL to internal URL for faster downloads.
+
+        If the URL starts with the external domain, replace it with the
+        internal connection URL to download via local network.
+
+        Args:
+            url: The URL to convert (may be external or internal)
+
+        Returns:
+            URL using internal connection for downloads
+        """
+        if self._external_domain:
+            external = self._external_domain.rstrip("/")
+            if url.startswith(external):
+                return url.replace(external, self._url, 1)
+        return url
+
     @property
     def api_key(self) -> str:
         """Return the API key."""
@@ -278,6 +409,11 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
         """Return the album name."""
         return self._album_name
 
+    @property
+    def telegram_cache(self) -> TelegramFileCache | None:
+        """Return the Telegram file cache."""
+        return self._telegram_cache
+
     def update_scan_interval(self, scan_interval: int) -> None:
         """Update the scan interval."""
         self.update_interval = timedelta(seconds=scan_interval)
```
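The `get_internal_download_url` method added above rewrites only URLs that start with the external domain, swapping in the internal connection URL so downloads stay on the local network. The same logic as a standalone function; the function name and example hosts are illustrative:

```python
def internal_download_url(url, external_domain, internal_url):
    """Rewrite an external-domain URL to the internal connection URL.

    external_domain may be None (no externalDomain configured), in which
    case the URL is returned unchanged, mirroring the coordinator method.
    """
    if external_domain:
        external = external_domain.rstrip("/")
        if url.startswith(external):
            # Replace only the first occurrence, preserving the path
            return url.replace(external, internal_url, 1)
    return url


print(internal_download_url(
    "https://photos.example.com/api/assets/abc/original",
    "https://photos.example.com/",
    "http://192.168.1.10:2283",
))  # -> http://192.168.1.10:2283/api/assets/abc/original
```

Note the `count=1` on `replace`: if the external domain string happened to recur later in the URL (for example inside a query parameter), only the leading occurrence is rewritten.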
```diff
@@ -303,33 +439,138 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
             self._album_name,
         )
 
-    async def async_get_recent_assets(self, count: int = 10) -> list[dict[str, Any]]:
-        """Get recent assets from the album."""
+    async def async_get_assets(
+        self,
+        limit: int = 10,
+        offset: int = 0,
+        favorite_only: bool = False,
+        filter_min_rating: int = 1,
+        order_by: str = "date",
+        order: str = "descending",
+        asset_type: str = "all",
+        min_date: str | None = None,
+        max_date: str | None = None,
+        memory_date: str | None = None,
+        city: str | None = None,
+        state: str | None = None,
+        country: str | None = None,
+    ) -> list[dict[str, Any]]:
+        """Get assets from the album with optional filtering and ordering.
+
+        Args:
+            limit: Maximum number of assets to return (1-100)
+            offset: Number of assets to skip before returning results (for pagination)
+            favorite_only: Filter to show only favorite assets
+            filter_min_rating: Minimum rating for assets (1-5)
+            order_by: Field to sort by - 'date', 'rating', or 'name'
+            order: Sort direction - 'ascending', 'descending', or 'random'
+            asset_type: Asset type filter - 'all', 'photo', or 'video'
+            min_date: Filter assets created on or after this date (ISO 8601 format)
+            max_date: Filter assets created on or before this date (ISO 8601 format)
+            memory_date: Filter assets by matching month and day, excluding the same year (memories filter)
+            city: Filter assets by city (case-insensitive substring match)
+            state: Filter assets by state/region (case-insensitive substring match)
+            country: Filter assets by country (case-insensitive substring match)
+
+        Returns:
+            List of asset data dictionaries
+        """
         if self.data is None:
             return []
 
-        # Sort assets by created_at descending
-        sorted_assets = sorted(
-            self.data.assets.values(),
-            key=lambda a: a.created_at,
-            reverse=True,
-        )[:count]
+        # Start with all processed assets only
+        assets = [a for a in self.data.assets.values() if a.is_processed]
+
+        # Apply favorite filter
+        if favorite_only:
+            assets = [a for a in assets if a.is_favorite]
+
+        # Apply rating filter
+        if filter_min_rating > 1:
+            assets = [a for a in assets if a.rating is not None and a.rating >= filter_min_rating]
+
+        # Apply asset type filtering
+        if asset_type == "photo":
+            assets = [a for a in assets if a.type == ASSET_TYPE_IMAGE]
+        elif asset_type == "video":
+            assets = [a for a in assets if a.type == ASSET_TYPE_VIDEO]
+
+        # Apply date filtering
+        if min_date:
+            assets = [a for a in assets if a.created_at >= min_date]
+        if max_date:
+            assets = [a for a in assets if a.created_at <= max_date]
+
+        # Apply memory date filtering (match month and day, exclude same year)
+        if memory_date:
+            try:
+                # Parse the reference date (supports ISO 8601 format)
+                ref_date = datetime.fromisoformat(memory_date.replace("Z", "+00:00"))
+                ref_year = ref_date.year
+                ref_month = ref_date.month
+                ref_day = ref_date.day
+
+                def matches_memory(asset: AssetInfo) -> bool:
+                    """Check if asset matches memory criteria (same month/day, different year)."""
+                    try:
+                        asset_date = datetime.fromisoformat(
+                            asset.created_at.replace("Z", "+00:00")
+                        )
+                        # Match month and day, but exclude same year (true memories behavior)
+                        return (
+                            asset_date.month == ref_month
+                            and asset_date.day == ref_day
+                            and asset_date.year != ref_year
+                        )
+                    except (ValueError, AttributeError):
+                        return False
+
+                assets = [a for a in assets if matches_memory(a)]
+            except ValueError:
+                _LOGGER.warning("Invalid memory_date format: %s", memory_date)
+
+        # Apply geolocation filtering (case-insensitive substring match)
+        if city:
+            city_lower = city.lower()
+            assets = [a for a in assets if a.city and city_lower in a.city.lower()]
+        if state:
+            state_lower = state.lower()
+            assets = [a for a in assets if a.state and state_lower in a.state.lower()]
+        if country:
+            country_lower = country.lower()
+            assets = [a for a in assets if a.country and country_lower in a.country.lower()]
+
+        # Apply ordering
+        if order_by == "random":
+            import random
+            random.shuffle(assets)
+        elif order_by == "rating":
+            # Sort by rating, putting None values last
+            assets = sorted(
+                assets,
+                key=lambda a: (a.rating is None, a.rating if a.rating is not None else 0),
+                reverse=(order == "descending")
+            )
+        elif order_by == "name":
+            assets = sorted(
+                assets,
+                key=lambda a: a.filename.lower(),
+                reverse=(order == "descending")
+            )
+        else:  # date (default)
+            assets = sorted(
+                assets,
+                key=lambda a: a.created_at,
+                reverse=(order == "descending")
+            )
+
+        # Apply offset and limit for pagination
+        assets = assets[offset : offset + limit]
+
+        # Build result with all available asset data (matching event data)
         result = []
-        for asset in sorted_assets:
-            asset_data = {
-                "id": asset.id,
-                "type": asset.type,
-                "filename": asset.filename,
-                "created_at": asset.created_at,
-                "description": asset.description,
-                "people": asset.people,
-                "thumbnail_url": f"{self._url}/api/assets/{asset.id}/thumbnail",
-            }
-            if asset.type == ASSET_TYPE_VIDEO:
-                video_url = self._get_asset_video_url(asset.id)
-                if video_url:
-                    asset_data["video_url"] = video_url
+        for asset in assets:
+            asset_data = self._build_asset_detail(asset, include_thumbnail=True)
             result.append(asset_data)
         return result
```
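The memories filter in `async_get_assets` above keeps assets whose month and day match the reference date but whose year differs (so "on this day" excludes photos taken today). The comparison logic, extracted as a standalone function for clarity; `matches_memory` here is a sketch over plain strings, not the coordinator's inner closure:

```python
from datetime import datetime


def matches_memory(created_at: str, memory_date: str) -> bool:
    """Same month and day as the reference date, but a different year."""
    # Both timestamps are ISO 8601; normalize the trailing Z for fromisoformat
    ref = datetime.fromisoformat(memory_date.replace("Z", "+00:00"))
    asset = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    return (asset.month, asset.day) == (ref.month, ref.day) and asset.year != ref.year


print(matches_memory("2020-06-15T10:00:00Z", "2024-06-15T00:00:00Z"))  # True
print(matches_memory("2024-06-15T10:00:00Z", "2024-06-15T00:00:00Z"))  # False: same year
print(matches_memory("2020-06-16T10:00:00Z", "2024-06-15T00:00:00Z"))  # False: wrong day
```

Because the filter runs before the `offset`/`limit` slice, pagination applies to the already-filtered memory set, which is what the method's argument ordering implies.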
```diff
@@ -380,6 +621,36 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
 
         return self._users_cache
 
+    async def _async_fetch_server_config(self) -> None:
+        """Fetch server config from Immich to get external domain."""
+        if self._session is None:
+            self._session = async_get_clientsession(self.hass)
+
+        headers = {"x-api-key": self._api_key}
+        try:
+            async with self._session.get(
+                f"{self._url}/api/server/config",
+                headers=headers,
+            ) as response:
+                if response.status == 200:
+                    data = await response.json()
+                    external_domain = data.get("externalDomain", "") or ""
+                    self._external_domain = external_domain
+                    if external_domain:
+                        _LOGGER.debug(
+                            "Using external domain from Immich: %s", external_domain
+                        )
+                    else:
+                        _LOGGER.debug(
+                            "No external domain configured in Immich, using connection URL"
+                        )
+                else:
+                    _LOGGER.warning(
+                        "Failed to fetch server config: HTTP %s", response.status
+                    )
+        except aiohttp.ClientError as err:
+            _LOGGER.warning("Failed to fetch server config: %s", err)
+
     async def _async_fetch_shared_links(self) -> list[SharedLinkInfo]:
         """Fetch shared links for this album from Immich."""
         if self._session is None:
```
```diff
@@ -423,29 +694,29 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
         """Get the public URL if album has an accessible shared link."""
         accessible_links = self._get_accessible_links()
         if accessible_links:
-            return f"{self._url}/share/{accessible_links[0].key}"
+            return f"{self.external_url}/share/{accessible_links[0].key}"
         return None
 
     def get_any_url(self) -> str | None:
         """Get any non-expired URL (prefers accessible, falls back to protected)."""
         accessible_links = self._get_accessible_links()
         if accessible_links:
-            return f"{self._url}/share/{accessible_links[0].key}"
+            return f"{self.external_url}/share/{accessible_links[0].key}"
         non_expired = [link for link in self._shared_links if not link.is_expired]
         if non_expired:
-            return f"{self._url}/share/{non_expired[0].key}"
+            return f"{self.external_url}/share/{non_expired[0].key}"
         return None
 
     def get_protected_url(self) -> str | None:
         """Get a protected URL if any password-protected link exists."""
         protected_links = self._get_protected_links()
         if protected_links:
-            return f"{self._url}/share/{protected_links[0].key}"
+            return f"{self.external_url}/share/{protected_links[0].key}"
         return None
 
     def get_protected_urls(self) -> list[str]:
         """Get all password-protected URLs."""
-        return [f"{self._url}/share/{link.key}" for link in self._get_protected_links()]
+        return [f"{self.external_url}/share/{link.key}" for link in self._get_protected_links()]
 
     def get_protected_password(self) -> str | None:
         """Get the password for the first protected link."""
@@ -456,13 +727,13 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
 
     def get_public_urls(self) -> list[str]:
         """Get all accessible public URLs."""
-        return [f"{self._url}/share/{link.key}" for link in self._get_accessible_links()]
+        return [f"{self.external_url}/share/{link.key}" for link in self._get_accessible_links()]
 
     def get_shared_links_info(self) -> list[dict[str, Any]]:
         """Get detailed info about all shared links."""
         return [
             {
-                "url": f"{self._url}/share/{link.key}",
+                "url": f"{self.external_url}/share/{link.key}",
                 "has_password": link.has_password,
                 "is_expired": link.is_expired,
                 "expires_at": link.expires_at.isoformat() if link.expires_at else None,
@@ -475,37 +746,108 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
         """Get the public viewer URL for an asset (web page)."""
         accessible_links = self._get_accessible_links()
         if accessible_links:
-            return f"{self._url}/share/{accessible_links[0].key}/photos/{asset_id}"
+            return f"{self.external_url}/share/{accessible_links[0].key}/photos/{asset_id}"
         non_expired = [link for link in self._shared_links if not link.is_expired]
         if non_expired:
-            return f"{self._url}/share/{non_expired[0].key}/photos/{asset_id}"
+            return f"{self.external_url}/share/{non_expired[0].key}/photos/{asset_id}"
         return None
 
     def _get_asset_download_url(self, asset_id: str) -> str | None:
         """Get the direct download URL for an asset (media file)."""
         accessible_links = self._get_accessible_links()
         if accessible_links:
-            return f"{self._url}/api/assets/{asset_id}/original?key={accessible_links[0].key}"
+            return f"{self.external_url}/api/assets/{asset_id}/original?key={accessible_links[0].key}"
         non_expired = [link for link in self._shared_links if not link.is_expired]
         if non_expired:
-            return f"{self._url}/api/assets/{asset_id}/original?key={non_expired[0].key}"
+            return f"{self.external_url}/api/assets/{asset_id}/original?key={non_expired[0].key}"
         return None
 
     def _get_asset_video_url(self, asset_id: str) -> str | None:
         """Get the transcoded video playback URL for a video asset."""
         accessible_links = self._get_accessible_links()
         if accessible_links:
-            return f"{self._url}/api/assets/{asset_id}/video/playback?key={accessible_links[0].key}"
+            return f"{self.external_url}/api/assets/{asset_id}/video/playback?key={accessible_links[0].key}"
         non_expired = [link for link in self._shared_links if not link.is_expired]
         if non_expired:
-            return f"{self._url}/api/assets/{asset_id}/video/playback?key={non_expired[0].key}"
+            return f"{self.external_url}/api/assets/{asset_id}/video/playback?key={non_expired[0].key}"
         return None
 
+    def _get_asset_photo_url(self, asset_id: str) -> str | None:
+        """Get the preview-sized thumbnail URL for a photo asset."""
+        accessible_links = self._get_accessible_links()
+        if accessible_links:
+            return f"{self.external_url}/api/assets/{asset_id}/thumbnail?size=preview&key={accessible_links[0].key}"
+        non_expired = [link for link in self._shared_links if not link.is_expired]
+        if non_expired:
+            return f"{self.external_url}/api/assets/{asset_id}/thumbnail?size=preview&key={non_expired[0].key}"
+        return None
+
+    def _build_asset_detail(
+        self, asset: AssetInfo, include_thumbnail: bool = True
+    ) -> dict[str, Any]:
+        """Build asset detail dictionary with all available data.
+
+        Args:
+            asset: AssetInfo object
+            include_thumbnail: If True, include thumbnail_url
+
+        Returns:
+            Dictionary with asset details (using ATTR_* constants for consistency)
+        """
+        # Base asset data
+        asset_detail = {
+            "id": asset.id,
+            ATTR_ASSET_TYPE: asset.type,
+            ATTR_ASSET_FILENAME: asset.filename,
+            ATTR_ASSET_CREATED: asset.created_at,
+            ATTR_ASSET_OWNER: asset.owner_name,
+            ATTR_ASSET_OWNER_ID: asset.owner_id,
+            ATTR_ASSET_DESCRIPTION: asset.description,
+            ATTR_PEOPLE: asset.people,
+            ATTR_ASSET_IS_FAVORITE: asset.is_favorite,
+            ATTR_ASSET_RATING: asset.rating,
+            ATTR_ASSET_LATITUDE: asset.latitude,
+            ATTR_ASSET_LONGITUDE: asset.longitude,
+            ATTR_ASSET_CITY: asset.city,
+            ATTR_ASSET_STATE: asset.state,
+            ATTR_ASSET_COUNTRY: asset.country,
+        }
+
+        # Add thumbnail URL if requested
+        if include_thumbnail:
+            asset_detail[ATTR_THUMBNAIL_URL] = f"{self.external_url}/api/assets/{asset.id}/thumbnail"
+
+        # Add public viewer URL (web page)
+        asset_url = self._get_asset_public_url(asset.id)
+        if asset_url:
+            asset_detail[ATTR_ASSET_URL] = asset_url
+
+        # Add download URL (direct media file)
+        asset_download_url = self._get_asset_download_url(asset.id)
+        if asset_download_url:
+            asset_detail[ATTR_ASSET_DOWNLOAD_URL] = asset_download_url
+
+        # Add type-specific URLs
```
|
||||||
|
if asset.type == ASSET_TYPE_VIDEO:
|
||||||
|
asset_video_url = self._get_asset_video_url(asset.id)
|
||||||
|
if asset_video_url:
|
||||||
|
asset_detail[ATTR_ASSET_PLAYBACK_URL] = asset_video_url
|
||||||
|
elif asset.type == ASSET_TYPE_IMAGE:
|
||||||
|
asset_photo_url = self._get_asset_photo_url(asset.id)
|
||||||
|
if asset_photo_url:
|
||||||
|
asset_detail["photo_url"] = asset_photo_url # TODO: Add ATTR_ASSET_PHOTO_URL constant
|
||||||
|
|
||||||
|
return asset_detail
|
||||||
|
|
||||||
async def _async_update_data(self) -> AlbumData | None:
|
async def _async_update_data(self) -> AlbumData | None:
|
||||||
"""Fetch data from Immich API."""
|
"""Fetch data from Immich API."""
|
||||||
if self._session is None:
|
if self._session is None:
|
||||||
self._session = async_get_clientsession(self.hass)
|
self._session = async_get_clientsession(self.hass)
|
||||||
|
|
||||||
|
# Fetch server config to get external domain (once)
|
||||||
|
if self._external_domain is None:
|
||||||
|
await self._async_fetch_server_config()
|
||||||
|
|
||||||
# Fetch users to resolve owner names
|
# Fetch users to resolve owner names
|
||||||
if not self._users_cache:
|
if not self._users_cache:
|
||||||
await self._async_fetch_users()
|
await self._async_fetch_users()
|
||||||
@@ -559,11 +901,16 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
|
|||||||
elif removed_ids and not added_ids:
|
elif removed_ids and not added_ids:
|
||||||
change_type = "assets_removed"
|
change_type = "assets_removed"
|
||||||
|
|
||||||
added_assets = [
|
added_assets = []
|
||||||
album.assets[aid]
|
for aid in added_ids:
|
||||||
for aid in added_ids
|
if aid not in album.assets:
|
||||||
if aid in album.assets
|
continue
|
||||||
]
|
asset = album.assets[aid]
|
||||||
|
if asset.is_processed:
|
||||||
|
added_assets.append(asset)
|
||||||
|
else:
|
||||||
|
# Track unprocessed assets for later
|
||||||
|
self._pending_asset_ids.add(aid)
|
||||||
|
|
||||||
change = AlbumChange(
|
change = AlbumChange(
|
||||||
album_id=album.id,
|
album_id=album.id,
|
||||||
@@ -620,34 +967,79 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
|
|||||||
added_ids = new_state.asset_ids - old_state.asset_ids
|
added_ids = new_state.asset_ids - old_state.asset_ids
|
||||||
removed_ids = old_state.asset_ids - new_state.asset_ids
|
removed_ids = old_state.asset_ids - new_state.asset_ids
|
||||||
|
|
||||||
|
_LOGGER.debug(
|
||||||
|
"Change detection: added_ids=%d, removed_ids=%d, pending=%d",
|
||||||
|
len(added_ids),
|
||||||
|
len(removed_ids),
|
||||||
|
len(self._pending_asset_ids),
|
||||||
|
)
|
||||||
|
|
||||||
|
# Track new unprocessed assets and collect processed ones
|
||||||
|
added_assets = []
|
||||||
|
for aid in added_ids:
|
||||||
|
if aid not in new_state.assets:
|
||||||
|
_LOGGER.debug("Asset %s: not in assets dict", aid)
|
||||||
|
continue
|
||||||
|
asset = new_state.assets[aid]
|
||||||
|
_LOGGER.debug(
|
||||||
|
"New asset %s (%s): is_processed=%s, filename=%s",
|
||||||
|
aid,
|
||||||
|
asset.type,
|
||||||
|
asset.is_processed,
|
||||||
|
asset.filename,
|
||||||
|
)
|
||||||
|
if asset.is_processed:
|
||||||
|
added_assets.append(asset)
|
||||||
|
else:
|
||||||
|
# Track unprocessed assets for later
|
||||||
|
self._pending_asset_ids.add(aid)
|
||||||
|
_LOGGER.debug("Asset %s added to pending (not yet processed)", aid)
|
||||||
|
|
||||||
|
# Check if any pending assets are now processed
|
||||||
|
newly_processed = []
|
||||||
|
for aid in list(self._pending_asset_ids):
|
||||||
|
if aid not in new_state.assets:
|
||||||
|
# Asset was removed, no longer pending
|
||||||
|
self._pending_asset_ids.discard(aid)
|
||||||
|
continue
|
||||||
|
asset = new_state.assets[aid]
|
||||||
|
if asset.is_processed:
|
||||||
|
_LOGGER.debug(
|
||||||
|
"Pending asset %s (%s) is now processed: filename=%s",
|
||||||
|
aid,
|
||||||
|
asset.type,
|
||||||
|
asset.filename,
|
||||||
|
)
|
||||||
|
newly_processed.append(asset)
|
||||||
|
self._pending_asset_ids.discard(aid)
|
||||||
|
|
||||||
|
# Include newly processed pending assets
|
||||||
|
added_assets.extend(newly_processed)
|
||||||
|
|
||||||
# Detect metadata changes
|
# Detect metadata changes
|
||||||
name_changed = old_state.name != new_state.name
|
name_changed = old_state.name != new_state.name
|
||||||
sharing_changed = old_state.shared != new_state.shared
|
sharing_changed = old_state.shared != new_state.shared
|
||||||
|
|
||||||
# Return None only if nothing changed at all
|
# Return None only if nothing changed at all
|
||||||
if not added_ids and not removed_ids and not name_changed and not sharing_changed:
|
if not added_assets and not removed_ids and not name_changed and not sharing_changed:
|
||||||
return None
|
return None
|
||||||
|
|
||||||
# Determine primary change type
|
# Determine primary change type (use added_assets not added_ids)
|
||||||
change_type = "changed"
|
change_type = "changed"
|
||||||
if name_changed and not added_ids and not removed_ids and not sharing_changed:
|
if name_changed and not added_assets and not removed_ids and not sharing_changed:
|
||||||
change_type = "album_renamed"
|
change_type = "album_renamed"
|
||||||
elif sharing_changed and not added_ids and not removed_ids and not name_changed:
|
elif sharing_changed and not added_assets and not removed_ids and not name_changed:
|
||||||
change_type = "album_sharing_changed"
|
change_type = "album_sharing_changed"
|
||||||
elif added_ids and not removed_ids and not name_changed and not sharing_changed:
|
elif added_assets and not removed_ids and not name_changed and not sharing_changed:
|
||||||
change_type = "assets_added"
|
change_type = "assets_added"
|
||||||
elif removed_ids and not added_ids and not name_changed and not sharing_changed:
|
elif removed_ids and not added_assets and not name_changed and not sharing_changed:
|
||||||
change_type = "assets_removed"
|
change_type = "assets_removed"
|
||||||
|
|
||||||
added_assets = [
|
|
||||||
new_state.assets[aid] for aid in added_ids if aid in new_state.assets
|
|
||||||
]
|
|
||||||
|
|
||||||
return AlbumChange(
|
return AlbumChange(
|
||||||
album_id=new_state.id,
|
album_id=new_state.id,
|
||||||
album_name=new_state.name,
|
album_name=new_state.name,
|
||||||
change_type=change_type,
|
change_type=change_type,
|
||||||
added_count=len(added_ids),
|
added_count=len(added_assets), # Count only processed assets
|
||||||
removed_count=len(removed_ids),
|
removed_count=len(removed_ids),
|
||||||
added_assets=added_assets,
|
added_assets=added_assets,
|
||||||
removed_asset_ids=list(removed_ids),
|
removed_asset_ids=list(removed_ids),
|
||||||
@@ -661,26 +1053,10 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
|
|||||||
"""Fire Home Assistant events for album changes."""
|
"""Fire Home Assistant events for album changes."""
|
||||||
added_assets_detail = []
|
added_assets_detail = []
|
||||||
for asset in change.added_assets:
|
for asset in change.added_assets:
|
||||||
asset_detail = {
|
# Only include fully processed assets
|
||||||
"id": asset.id,
|
if not asset.is_processed:
|
||||||
ATTR_ASSET_TYPE: asset.type,
|
continue
|
||||||
ATTR_ASSET_FILENAME: asset.filename,
|
asset_detail = self._build_asset_detail(asset, include_thumbnail=False)
|
||||||
ATTR_ASSET_CREATED: asset.created_at,
|
|
||||||
ATTR_ASSET_OWNER: asset.owner_name,
|
|
||||||
ATTR_ASSET_OWNER_ID: asset.owner_id,
|
|
||||||
ATTR_ASSET_DESCRIPTION: asset.description,
|
|
||||||
ATTR_PEOPLE: asset.people,
|
|
||||||
}
|
|
||||||
asset_url = self._get_asset_public_url(asset.id)
|
|
||||||
if asset_url:
|
|
||||||
asset_detail[ATTR_ASSET_URL] = asset_url
|
|
||||||
asset_download_url = self._get_asset_download_url(asset.id)
|
|
||||||
if asset_download_url:
|
|
||||||
asset_detail[ATTR_ASSET_DOWNLOAD_URL] = asset_download_url
|
|
||||||
if asset.type == ASSET_TYPE_VIDEO:
|
|
||||||
asset_video_url = self._get_asset_video_url(asset.id)
|
|
||||||
if asset_video_url:
|
|
||||||
asset_detail[ATTR_ASSET_PLAYBACK_URL] = asset_video_url
|
|
||||||
added_assets_detail.append(asset_detail)
|
added_assets_detail.append(asset_detail)
|
||||||
|
|
||||||
event_data = {
|
event_data = {
|
||||||
|
|||||||
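The coordinator hunks above replace a plain list comprehension with a two-phase scheme: newly added assets that Immich has not finished processing are parked in a pending set and only reported once a later poll sees them processed. A minimal stand-alone sketch of that pattern (the `Asset` dataclass and `collect_added` helper here are simplified stand-ins, not the integration's real `AssetInfo`/coordinator API):

```python
from dataclasses import dataclass


@dataclass
class Asset:
    id: str
    is_processed: bool


def collect_added(added_ids: set[str], assets: dict[str, Asset], pending: set[str]) -> list[Asset]:
    """Return assets to report now; park unprocessed ones in `pending` for later polls."""
    added = []
    for aid in added_ids:
        if aid not in assets:
            continue
        asset = assets[aid]
        if asset.is_processed:
            added.append(asset)
        else:
            pending.add(aid)  # re-checked on the next poll
    # Promote pending assets that finished processing since the last poll
    for aid in list(pending):
        asset = assets.get(aid)
        if asset is None:
            pending.discard(aid)  # removed from the album while pending
        elif asset.is_processed:
            added.append(asset)
            pending.discard(aid)
    return added
```

This is why the diff also switches the change-type conditions from `added_ids` to `added_assets`: an album poll that only found unprocessed assets should not yet fire an `assets_added` event.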
@@ -8,5 +8,5 @@
     "iot_class": "cloud_polling",
     "issue_tracker": "https://github.com/DolgolyovAlexei/haos-hacs-immich-album-watcher/issues",
     "requirements": [],
-    "version": "2.0.0"
+    "version": "2.7.1"
 }
@@ -2,6 +2,7 @@

 from __future__ import annotations

+import asyncio
 import logging
 from datetime import datetime
 from typing import Any
@@ -24,6 +25,7 @@ from homeassistant.util import slugify

 from .const import (
     ATTR_ALBUM_ID,
+    ATTR_ALBUM_NAME,
     ATTR_ALBUM_PROTECTED_URL,
     ATTR_ALBUM_URLS,
     ATTR_ASSET_COUNT,
@@ -40,7 +42,7 @@ from .const import (
     CONF_HUB_NAME,
     CONF_TELEGRAM_BOT_TOKEN,
     DOMAIN,
-    SERVICE_GET_RECENT_ASSETS,
+    SERVICE_GET_ASSETS,
     SERVICE_REFRESH,
     SERVICE_SEND_TELEGRAM_NOTIFICATION,
 )
@@ -48,6 +50,10 @@ from .coordinator import AlbumData, ImmichAlbumWatcherCoordinator

 _LOGGER = logging.getLogger(__name__)

+# Telegram photo limits
+TELEGRAM_MAX_PHOTO_SIZE = 10 * 1024 * 1024  # 10 MB - Telegram's max photo size
+TELEGRAM_MAX_DIMENSION_SUM = 10000  # Maximum sum of width + height in pixels
+

 async def async_setup_entry(
     hass: HomeAssistant,
@@ -88,13 +94,33 @@ async def async_setup_entry(
     )

     platform.async_register_entity_service(
-        SERVICE_GET_RECENT_ASSETS,
+        SERVICE_GET_ASSETS,
         {
-            vol.Optional("count", default=10): vol.All(
+            vol.Optional("limit", default=10): vol.All(
                 vol.Coerce(int), vol.Range(min=1, max=100)
             ),
+            vol.Optional("offset", default=0): vol.All(
+                vol.Coerce(int), vol.Range(min=0)
+            ),
+            vol.Optional("favorite_only", default=False): bool,
+            vol.Optional("filter_min_rating", default=1): vol.All(
+                vol.Coerce(int), vol.Range(min=1, max=5)
+            ),
+            vol.Optional("order_by", default="date"): vol.In(
+                ["date", "rating", "name", "random"]
+            ),
+            vol.Optional("order", default="descending"): vol.In(
+                ["ascending", "descending"]
+            ),
+            vol.Optional("asset_type", default="all"): vol.In(["all", "photo", "video"]),
+            vol.Optional("min_date"): str,
+            vol.Optional("max_date"): str,
+            vol.Optional("memory_date"): str,
+            vol.Optional("city"): str,
+            vol.Optional("state"): str,
+            vol.Optional("country"): str,
         },
-        "async_get_recent_assets",
+        "async_get_assets",
         supports_response=SupportsResponse.ONLY,
     )

@@ -115,6 +141,13 @@ async def async_setup_entry(
                 vol.Coerce(int), vol.Range(min=0, max=60000)
             ),
             vol.Optional("wait_for_response", default=True): bool,
+            vol.Optional("max_asset_data_size"): vol.All(
+                vol.Coerce(int), vol.Range(min=1, max=52428800)
+            ),
+            vol.Optional("send_large_photos_as_documents", default=False): bool,
+            vol.Optional("chat_action", default="typing"): vol.Any(
+                None, vol.In(["typing", "upload_photo", "upload_video", "upload_document"])
+            ),
         },
         "async_send_telegram_notification",
         supports_response=SupportsResponse.OPTIONAL,
@@ -171,9 +204,38 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
         """Refresh data for this album."""
         await self.coordinator.async_refresh_now()

-    async def async_get_recent_assets(self, count: int = 10) -> ServiceResponse:
-        """Get recent assets for this album."""
-        assets = await self.coordinator.async_get_recent_assets(count)
+    async def async_get_assets(
+        self,
+        limit: int = 10,
+        offset: int = 0,
+        favorite_only: bool = False,
+        filter_min_rating: int = 1,
+        order_by: str = "date",
+        order: str = "descending",
+        asset_type: str = "all",
+        min_date: str | None = None,
+        max_date: str | None = None,
+        memory_date: str | None = None,
+        city: str | None = None,
+        state: str | None = None,
+        country: str | None = None,
+    ) -> ServiceResponse:
+        """Get assets for this album with optional filtering and ordering."""
+        assets = await self.coordinator.async_get_assets(
+            limit=limit,
+            offset=offset,
+            favorite_only=favorite_only,
+            filter_min_rating=filter_min_rating,
+            order_by=order_by,
+            order=order,
+            asset_type=asset_type,
+            min_date=min_date,
+            max_date=max_date,
+            memory_date=memory_date,
+            city=city,
+            state=state,
+            country=country,
+        )
         return {"assets": assets}

     async def async_send_telegram_notification(
@@ -188,6 +250,9 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
         max_group_size: int = 10,
         chunk_delay: int = 0,
         wait_for_response: bool = True,
+        max_asset_data_size: int | None = None,
+        send_large_photos_as_documents: bool = False,
+        chat_action: str | None = "typing",
     ) -> ServiceResponse:
         """Send notification to Telegram.

@@ -216,6 +281,9 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                 parse_mode=parse_mode,
                 max_group_size=max_group_size,
                 chunk_delay=chunk_delay,
+                max_asset_data_size=max_asset_data_size,
+                send_large_photos_as_documents=send_large_photos_as_documents,
+                chat_action=chat_action,
             )
         )
         return {"success": True, "status": "queued", "message": "Notification queued for background processing"}
@@ -231,6 +299,9 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
             parse_mode=parse_mode,
             max_group_size=max_group_size,
             chunk_delay=chunk_delay,
+            max_asset_data_size=max_asset_data_size,
+            send_large_photos_as_documents=send_large_photos_as_documents,
+            chat_action=chat_action,
         )

     async def _execute_telegram_notification(
@@ -244,6 +315,9 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
         parse_mode: str = "HTML",
         max_group_size: int = 10,
         chunk_delay: int = 0,
+        max_asset_data_size: int | None = None,
+        send_large_photos_as_documents: bool = False,
+        chat_action: str | None = "typing",
     ) -> ServiceResponse:
         """Execute the Telegram notification (internal method)."""
         import json
@@ -261,28 +335,44 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se

         session = async_get_clientsession(self.hass)

-        # Handle empty URLs - send simple text message
+        # Handle empty URLs - send simple text message (no typing indicator needed)
         if not urls:
             return await self._send_telegram_message(
                 session, token, chat_id, caption or "", reply_to_message_id, disable_web_page_preview, parse_mode
             )

+        # Start chat action indicator for media notifications (before downloading assets)
+        typing_task = None
+        if chat_action:
+            typing_task = self._start_typing_indicator(session, token, chat_id, chat_action)
+
+        try:
             # Handle single photo
             if len(urls) == 1 and urls[0].get("type", "photo") == "photo":
                 return await self._send_telegram_photo(
-                    session, token, chat_id, urls[0].get("url"), caption, reply_to_message_id, parse_mode
+                    session, token, chat_id, urls[0].get("url"), caption, reply_to_message_id, parse_mode,
+                    max_asset_data_size, send_large_photos_as_documents
                 )

             # Handle single video
             if len(urls) == 1 and urls[0].get("type") == "video":
                 return await self._send_telegram_video(
-                    session, token, chat_id, urls[0].get("url"), caption, reply_to_message_id, parse_mode
+                    session, token, chat_id, urls[0].get("url"), caption, reply_to_message_id, parse_mode, max_asset_data_size
                 )

             # Handle multiple items - send as media group(s)
             return await self._send_telegram_media_group(
-                session, token, chat_id, urls, caption, reply_to_message_id, max_group_size, chunk_delay, parse_mode
+                session, token, chat_id, urls, caption, reply_to_message_id, max_group_size, chunk_delay, parse_mode,
+                max_asset_data_size, send_large_photos_as_documents
             )
+        finally:
+            # Stop chat action indicator when done (success or error)
+            if typing_task:
+                typing_task.cancel()
+                try:
+                    await typing_task
+                except asyncio.CancelledError:
+                    pass

     async def _send_telegram_message(
         self,
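The expanded `get_assets` service schema above layers several voluptuous validators (`Range`, `In`, `Coerce`). A plain-Python sketch of the same acceptance rules, useful for reasoning about which service calls will pass validation (the validator function itself is illustrative, not Home Assistant code; names and defaults are taken from the diff):

```python
ORDER_BY = {"date", "rating", "name", "random"}
ORDER = {"ascending", "descending"}
ASSET_TYPE = {"all", "photo", "video"}


def validate_get_assets(call: dict) -> dict:
    """Mirror the service schema's defaults and range checks in plain Python."""
    params = {
        "limit": int(call.get("limit", 10)),
        "offset": int(call.get("offset", 0)),
        "favorite_only": bool(call.get("favorite_only", False)),
        "filter_min_rating": int(call.get("filter_min_rating", 1)),
        "order_by": call.get("order_by", "date"),
        "order": call.get("order", "descending"),
        "asset_type": call.get("asset_type", "all"),
    }
    if not 1 <= params["limit"] <= 100:
        raise ValueError("limit must be between 1 and 100")
    if params["offset"] < 0:
        raise ValueError("offset must be >= 0")
    if not 1 <= params["filter_min_rating"] <= 5:
        raise ValueError("filter_min_rating must be between 1 and 5")
    if params["order_by"] not in ORDER_BY:
        raise ValueError(f"order_by must be one of {sorted(ORDER_BY)}")
    if params["order"] not in ORDER:
        raise ValueError("order must be ascending or descending")
    if params["asset_type"] not in ASSET_TYPE:
        raise ValueError("asset_type must be all, photo or video")
    return params
```

In the real integration the validation is performed by voluptuous before the entity method runs, so `async_get_assets` can assume already-validated arguments.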
@@ -332,6 +422,172 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
             _LOGGER.error("Telegram message send failed: %s", err)
             return {"success": False, "error": str(err)}

+    async def _send_telegram_chat_action(
+        self,
+        session: Any,
+        token: str,
+        chat_id: str,
+        action: str = "typing",
+    ) -> bool:
+        """Send a chat action to Telegram (e.g., typing indicator).
+
+        Args:
+            session: aiohttp client session
+            token: Telegram bot token
+            chat_id: Target chat ID
+            action: Chat action type (typing, upload_photo, upload_video, etc.)
+
+        Returns:
+            True if successful, False otherwise
+        """
+        import aiohttp
+
+        telegram_url = f"https://api.telegram.org/bot{token}/sendChatAction"
+        payload = {"chat_id": chat_id, "action": action}
+
+        try:
+            async with session.post(telegram_url, json=payload) as response:
+                result = await response.json()
+                if response.status == 200 and result.get("ok"):
+                    _LOGGER.debug("Sent chat action '%s' to chat %s", action, chat_id)
+                    return True
+                else:
+                    _LOGGER.debug("Failed to send chat action: %s", result.get("description"))
+                    return False
+        except aiohttp.ClientError as err:
+            _LOGGER.debug("Chat action request failed: %s", err)
+            return False
+
+    def _start_typing_indicator(
+        self,
+        session: Any,
+        token: str,
+        chat_id: str,
+        action: str = "typing",
+    ) -> asyncio.Task:
+        """Start a background task that sends chat action indicator periodically.
+
+        The chat action indicator expires after ~5 seconds, so we refresh it every 4 seconds.
+
+        Args:
+            session: aiohttp client session
+            token: Telegram bot token
+            chat_id: Target chat ID
+            action: Chat action type (typing, upload_photo, upload_video, etc.)
+
+        Returns:
+            The background task (cancel it when done)
+        """
+
+        async def action_loop() -> None:
+            """Keep sending chat action until cancelled."""
+            try:
+                while True:
+                    await self._send_telegram_chat_action(session, token, chat_id, action)
+                    await asyncio.sleep(4)
+            except asyncio.CancelledError:
+                _LOGGER.debug("Chat action indicator stopped for action '%s'", action)
+
+        return asyncio.create_task(action_loop())
+
+    def _log_telegram_error(
+        self,
+        error_code: int | None,
+        description: str,
+        data: bytes | None = None,
+        media_type: str = "photo",
+    ) -> None:
+        """Log detailed Telegram API error with diagnostics.
+
+        Args:
+            error_code: Telegram error code
+            description: Error description from Telegram
+            data: Media data bytes (optional, for size diagnostics)
+            media_type: Type of media (photo/video)
+        """
+        error_msg = f"Telegram API error ({error_code}): {description}"
+
+        # Add diagnostic information based on error type
+        if data:
+            error_msg += f" | Media size: {len(data)} bytes ({len(data) / (1024 * 1024):.2f} MB)"
+
+            # Check dimensions for photos
+            if media_type == "photo":
+                try:
+                    from PIL import Image
+                    import io
+
+                    img = Image.open(io.BytesIO(data))
+                    width, height = img.size
+                    dimension_sum = width + height
+                    error_msg += f" | Dimensions: {width}x{height} (sum={dimension_sum})"
+
+                    # Highlight limit violations
+                    if len(data) > TELEGRAM_MAX_PHOTO_SIZE:
+                        error_msg += f" | EXCEEDS size limit ({TELEGRAM_MAX_PHOTO_SIZE / (1024 * 1024):.0f} MB)"
+                    if dimension_sum > TELEGRAM_MAX_DIMENSION_SUM:
+                        error_msg += f" | EXCEEDS dimension limit ({TELEGRAM_MAX_DIMENSION_SUM})"
+                except Exception:
+                    pass
+
+        # Provide suggestions based on error description
+        suggestions = []
+        if "dimension" in description.lower() or "PHOTO_INVALID_DIMENSIONS" in description:
+            suggestions.append("Photo dimensions too large - consider setting send_large_photos_as_documents=true")
+        elif "too large" in description.lower() or error_code == 413:
+            suggestions.append("File size too large - consider setting send_large_photos_as_documents=true or max_asset_data_size to skip large files")
+        elif "entity too large" in description.lower():
+            suggestions.append("Request entity too large - reduce max_group_size or set max_asset_data_size")
+
+        if suggestions:
+            error_msg += f" | Suggestions: {'; '.join(suggestions)}"
+
+        _LOGGER.error(error_msg)
+
+    def _check_telegram_photo_limits(
+        self,
+        data: bytes,
+    ) -> tuple[bool, str | None, int | None, int | None]:
+        """Check if photo data exceeds Telegram photo limits.
+
+        Telegram limits for photos:
+        - Max file size: 10 MB
+        - Max dimension sum: ~10,000 pixels (width + height)
+
+        Returns:
+            Tuple of (exceeds_limits, reason, width, height)
+            - exceeds_limits: True if photo exceeds limits
+            - reason: Human-readable reason (None if within limits)
+            - width: Image width in pixels (None if PIL not available)
+            - height: Image height in pixels (None if PIL not available)
+        """
+        # Check file size
+        if len(data) > TELEGRAM_MAX_PHOTO_SIZE:
+            return True, f"size {len(data)} bytes exceeds {TELEGRAM_MAX_PHOTO_SIZE} bytes limit", None, None
+
+        # Try to check dimensions using PIL
+        try:
+            from PIL import Image
+            import io
+
+            img = Image.open(io.BytesIO(data))
+            width, height = img.size
+            dimension_sum = width + height
+
+            if dimension_sum > TELEGRAM_MAX_DIMENSION_SUM:
+                return True, f"dimensions {width}x{height} (sum={dimension_sum}) exceed {TELEGRAM_MAX_DIMENSION_SUM} limit", width, height
+
+            return False, None, width, height
+        except ImportError:
+            # PIL not available, can't check dimensions
+            _LOGGER.debug("PIL not available, skipping dimension check")
+            return False, None, None, None
+        except Exception as e:
+            # Failed to check dimensions
+            _LOGGER.debug("Failed to check photo dimensions: %s", e)
+            return False, None, None, None
+
     async def _send_telegram_photo(
         self,
         session: Any,
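The `_start_typing_indicator` method added above follows a common asyncio pattern: spawn a loop that refreshes a short-lived indicator, then cancel it in a `finally` block once the real work completes. A minimal sketch of the pattern with the Telegram call stubbed out (the `send` callable is a stand-in for `_send_telegram_chat_action`; Telegram's indicator expires after roughly 5 seconds, hence the refresh interval):

```python
import asyncio


def start_indicator(send, interval: float = 4.0) -> asyncio.Task:
    """Spawn a task that re-sends the indicator until cancelled."""

    async def loop() -> None:
        try:
            while True:
                await send()                 # refresh the indicator
                await asyncio.sleep(interval)
        except asyncio.CancelledError:
            pass                             # cancelled once the upload finishes

    # Must be called from within a running event loop
    return asyncio.create_task(loop())
```

Cancelling the task (and awaiting it, swallowing `CancelledError`) is exactly what the `finally:` block in `_execute_telegram_notification` does, so the indicator stops on both success and error paths.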
@@ -341,6 +597,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
         caption: str | None = None,
         reply_to_message_id: int | None = None,
         parse_mode: str = "HTML",
+        max_asset_data_size: int | None = None,
+        send_large_photos_as_documents: bool = False,
     ) -> ServiceResponse:
         """Send a single photo to Telegram."""
         import aiohttp
@@ -349,10 +607,46 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
         if not url:
             return {"success": False, "error": "Missing 'url' for photo"}

+        # Check cache for file_id
+        cache = self.coordinator.telegram_cache
+        cached = cache.get(url) if cache else None
+
+        if cached and cached.get("file_id"):
+            # Use cached file_id - no download needed
+            file_id = cached["file_id"]
+            _LOGGER.debug("Using cached Telegram file_id for photo")
+
+            payload = {
+                "chat_id": chat_id,
+                "photo": file_id,
+                "parse_mode": parse_mode,
+            }
+            if caption:
+                payload["caption"] = caption
+            if reply_to_message_id:
+                payload["reply_to_message_id"] = reply_to_message_id
+
+            telegram_url = f"https://api.telegram.org/bot{token}/sendPhoto"
             try:
-            # Download the photo
-            _LOGGER.debug("Downloading photo from %s", url[:80])
-            async with session.get(url) as resp:
+                async with session.post(telegram_url, json=payload) as response:
+                    result = await response.json()
+                    if response.status == 200 and result.get("ok"):
+                        return {
+                            "success": True,
+                            "message_id": result.get("result", {}).get("message_id"),
+                            "cached": True,
+                        }
+                    else:
+                        # Cache might be stale, fall through to upload
+                        _LOGGER.debug("Cached file_id failed, will re-upload: %s", result.get("description"))
+            except aiohttp.ClientError as err:
+                _LOGGER.debug("Cached file_id request failed: %s", err)
+
+        try:
+            # Download the photo using internal URL for faster local network transfer
+            download_url = self.coordinator.get_internal_download_url(url)
+            _LOGGER.debug("Downloading photo from %s", download_url[:80])
+            async with session.get(download_url) as resp:
                 if resp.status != 200:
                     return {
                         "success": False,
@@ -361,6 +655,37 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                 data = await resp.read()
                 _LOGGER.debug("Downloaded photo: %d bytes", len(data))
+
+                # Check if photo exceeds max size limit (user-defined limit)
+                if max_asset_data_size is not None and len(data) > max_asset_data_size:
+                    _LOGGER.warning(
+                        "Photo size (%d bytes) exceeds max_asset_data_size limit (%d bytes), skipping",
+                        len(data), max_asset_data_size
+                    )
+                    return {
+                        "success": False,
+                        "error": f"Photo size ({len(data)} bytes) exceeds max_asset_data_size limit ({max_asset_data_size} bytes)",
+                        "skipped": True,
+                    }
+
+                # Check if photo exceeds Telegram's photo limits
+                exceeds_limits, reason, width, height = self._check_telegram_photo_limits(data)
+                if exceeds_limits:
+                    if send_large_photos_as_documents:
+                        # Send as document instead
+                        _LOGGER.info("Photo %s, sending as document", reason)
+                        return await self._send_telegram_document(
+                            session, token, chat_id, data, "photo.jpg",
+                            caption, reply_to_message_id, parse_mode, url
+                        )
+                    else:
+                        # Skip oversized photo
+                        _LOGGER.warning("Photo %s, skipping (set send_large_photos_as_documents=true to send as document)", reason)
+                        return {
+                            "success": False,
+                            "error": f"Photo {reason}",
+                            "skipped": True,
+                        }
+
             # Build multipart form
             form = FormData()
             form.add_field("chat_id", chat_id)
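This hunk relies on `self._check_telegram_photo_limits(data)`, whose body is outside the hunks shown here. Assuming it enforces the constraints documented for the Bot API's sendPhoto (file at most 10 MB, width plus height at most 10000 px, aspect ratio at most 20:1), a simplified sketch that takes the decoded dimensions as inputs might look like:

```python
# Telegram Bot API sendPhoto limits (per the Bot API docs): at most 10 MB,
# width + height at most 10000 px, aspect ratio at most 20:1.
MAX_PHOTO_BYTES = 10 * 1024 * 1024

def check_telegram_photo_limits(
    size_bytes: int, width: int, height: int
) -> tuple[bool, str]:
    """Return (exceeds_limits, reason) for a candidate sendPhoto upload."""
    if size_bytes > MAX_PHOTO_BYTES:
        return True, f"size {size_bytes} exceeds 10 MB sendPhoto limit"
    if width + height > 10000:
        return True, f"dimensions {width}x{height} exceed 10000 px combined"
    ratio = max(width, height) / max(min(width, height), 1)
    if ratio > 20:
        return True, f"aspect ratio {ratio:.1f} exceeds 20:1"
    return False, ""
```

The real helper presumably extracts width and height from the JPEG bytes itself; that parsing is omitted here to keep the sketch self-contained.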
@@ -381,12 +706,26 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                 result = await response.json()
                 _LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
                 if response.status == 200 and result.get("ok"):
+                    # Extract and cache file_id
+                    photos = result.get("result", {}).get("photo", [])
+                    if photos and cache:
+                        # Use the largest photo's file_id
+                        file_id = photos[-1].get("file_id")
+                        if file_id:
+                            await cache.async_set(url, file_id, "photo")
+
                     return {
                         "success": True,
                         "message_id": result.get("result", {}).get("message_id"),
                     }
                 else:
-                    _LOGGER.error("Telegram API error: %s", result)
+                    # Log detailed error with diagnostics
+                    self._log_telegram_error(
+                        error_code=result.get("error_code"),
+                        description=result.get("description", "Unknown Telegram error"),
+                        data=data,
+                        media_type="photo",
+                    )
                     return {
                         "success": False,
                         "error": result.get("description", "Unknown Telegram error"),
@@ -405,6 +744,7 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
         caption: str | None = None,
         reply_to_message_id: int | None = None,
         parse_mode: str = "HTML",
+        max_asset_data_size: int | None = None,
     ) -> ServiceResponse:
         """Send a single video to Telegram."""
         import aiohttp
@@ -413,10 +753,46 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
         if not url:
             return {"success": False, "error": "Missing 'url' for video"}
+
+        # Check cache for file_id
+        cache = self.coordinator.telegram_cache
+        cached = cache.get(url) if cache else None
+
+        if cached and cached.get("file_id"):
+            # Use cached file_id - no download needed
+            file_id = cached["file_id"]
+            _LOGGER.debug("Using cached Telegram file_id for video")
+
+            payload = {
+                "chat_id": chat_id,
+                "video": file_id,
+                "parse_mode": parse_mode,
+            }
+            if caption:
+                payload["caption"] = caption
+            if reply_to_message_id:
+                payload["reply_to_message_id"] = reply_to_message_id
+
+            telegram_url = f"https://api.telegram.org/bot{token}/sendVideo"
+            try:
+                async with session.post(telegram_url, json=payload) as response:
+                    result = await response.json()
+                    if response.status == 200 and result.get("ok"):
+                        return {
+                            "success": True,
+                            "message_id": result.get("result", {}).get("message_id"),
+                            "cached": True,
+                        }
+                    else:
+                        # Cache might be stale, fall through to upload
+                        _LOGGER.debug("Cached file_id failed, will re-upload: %s", result.get("description"))
+            except aiohttp.ClientError as err:
+                _LOGGER.debug("Cached file_id request failed: %s", err)
+
         try:
-            # Download the video
-            _LOGGER.debug("Downloading video from %s", url[:80])
-            async with session.get(url) as resp:
+            # Download the video using internal URL for faster local network transfer
+            download_url = self.coordinator.get_internal_download_url(url)
+            _LOGGER.debug("Downloading video from %s", download_url[:80])
+            async with session.get(download_url) as resp:
                 if resp.status != 200:
                     return {
                         "success": False,
@@ -425,6 +801,18 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                 data = await resp.read()
                 _LOGGER.debug("Downloaded video: %d bytes", len(data))
+
+                # Check if video exceeds max size limit
+                if max_asset_data_size is not None and len(data) > max_asset_data_size:
+                    _LOGGER.warning(
+                        "Video size (%d bytes) exceeds max_asset_data_size limit (%d bytes), skipping",
+                        len(data), max_asset_data_size
+                    )
+                    return {
+                        "success": False,
+                        "error": f"Video size ({len(data)} bytes) exceeds max_asset_data_size limit ({max_asset_data_size} bytes)",
+                        "skipped": True,
+                    }
+
             # Build multipart form
             form = FormData()
             form.add_field("chat_id", chat_id)
@@ -445,12 +833,25 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                 result = await response.json()
                 _LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
                 if response.status == 200 and result.get("ok"):
+                    # Extract and cache file_id
+                    video = result.get("result", {}).get("video", {})
+                    if video and cache:
+                        file_id = video.get("file_id")
+                        if file_id:
+                            await cache.async_set(url, file_id, "video")
+
                     return {
                         "success": True,
                         "message_id": result.get("result", {}).get("message_id"),
                     }
                 else:
-                    _LOGGER.error("Telegram API error: %s", result)
+                    # Log detailed error with diagnostics
+                    self._log_telegram_error(
+                        error_code=result.get("error_code"),
+                        description=result.get("description", "Unknown Telegram error"),
+                        data=data,
+                        media_type="video",
+                    )
                     return {
                         "success": False,
                         "error": result.get("description", "Unknown Telegram error"),
@@ -460,6 +861,105 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
             _LOGGER.error("Telegram video upload failed: %s", err)
             return {"success": False, "error": str(err)}
 
+    async def _send_telegram_document(
+        self,
+        session: Any,
+        token: str,
+        chat_id: str,
+        data: bytes,
+        filename: str = "photo.jpg",
+        caption: str | None = None,
+        reply_to_message_id: int | None = None,
+        parse_mode: str = "HTML",
+        source_url: str | None = None,
+    ) -> ServiceResponse:
+        """Send a photo as a document to Telegram (for oversized photos)."""
+        import aiohttp
+        from aiohttp import FormData
+
+        # Check cache for file_id if source_url is provided
+        cache = self.coordinator.telegram_cache
+        if source_url:
+            cached = cache.get(source_url) if cache else None
+            if cached and cached.get("file_id") and cached.get("type") == "document":
+                # Use cached file_id
+                file_id = cached["file_id"]
+                _LOGGER.debug("Using cached Telegram file_id for document")
+
+                payload = {
+                    "chat_id": chat_id,
+                    "document": file_id,
+                    "parse_mode": parse_mode,
+                }
+                if caption:
+                    payload["caption"] = caption
+                if reply_to_message_id:
+                    payload["reply_to_message_id"] = reply_to_message_id
+
+                telegram_url = f"https://api.telegram.org/bot{token}/sendDocument"
+                try:
+                    async with session.post(telegram_url, json=payload) as response:
+                        result = await response.json()
+                        if response.status == 200 and result.get("ok"):
+                            return {
+                                "success": True,
+                                "message_id": result.get("result", {}).get("message_id"),
+                                "cached": True,
+                            }
+                        else:
+                            _LOGGER.debug("Cached file_id failed, will re-upload: %s", result.get("description"))
+                except aiohttp.ClientError as err:
+                    _LOGGER.debug("Cached file_id request failed: %s", err)
+
+        try:
+            # Build multipart form
+            form = FormData()
+            form.add_field("chat_id", chat_id)
+            form.add_field("document", data, filename=filename, content_type="image/jpeg")
+            form.add_field("parse_mode", parse_mode)
+
+            if caption:
+                form.add_field("caption", caption)
+
+            if reply_to_message_id:
+                form.add_field("reply_to_message_id", str(reply_to_message_id))
+
+            # Send to Telegram
+            telegram_url = f"https://api.telegram.org/bot{token}/sendDocument"
+
+            _LOGGER.debug("Uploading oversized photo as document to Telegram (%d bytes)", len(data))
+            async with session.post(telegram_url, data=form) as response:
+                result = await response.json()
+                _LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
+                if response.status == 200 and result.get("ok"):
+                    # Extract and cache file_id
+                    if source_url and cache:
+                        document = result.get("result", {}).get("document", {})
+                        file_id = document.get("file_id")
+                        if file_id:
+                            await cache.async_set(source_url, file_id, "document")
+
+                    return {
+                        "success": True,
+                        "message_id": result.get("result", {}).get("message_id"),
+                    }
+                else:
+                    # Log detailed error with diagnostics
+                    self._log_telegram_error(
+                        error_code=result.get("error_code"),
+                        description=result.get("description", "Unknown Telegram error"),
+                        data=data,
+                        media_type="document",
+                    )
+                    return {
+                        "success": False,
+                        "error": result.get("description", "Unknown Telegram error"),
+                        "error_code": result.get("error_code"),
+                    }
+        except aiohttp.ClientError as err:
+            _LOGGER.error("Telegram document upload failed: %s", err)
+            return {"success": False, "error": str(err)}
+
     async def _send_telegram_media_group(
         self,
         session: Any,
@@ -471,6 +971,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
         max_group_size: int = 10,
         chunk_delay: int = 0,
         parse_mode: str = "HTML",
+        max_asset_data_size: int | None = None,
+        send_large_photos_as_documents: bool = False,
     ) -> ServiceResponse:
         """Send media URLs to Telegram as media group(s).
 
@@ -511,12 +1013,13 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                 if media_type == "photo":
                     _LOGGER.debug("Sending chunk %d/%d as single photo", chunk_idx + 1, len(chunks))
                     result = await self._send_telegram_photo(
-                        session, token, chat_id, url, chunk_caption, chunk_reply_to, parse_mode
+                        session, token, chat_id, url, chunk_caption, chunk_reply_to, parse_mode,
+                        max_asset_data_size, send_large_photos_as_documents
                     )
                 else: # video
                     _LOGGER.debug("Sending chunk %d/%d as single video", chunk_idx + 1, len(chunks))
                     result = await self._send_telegram_video(
-                        session, token, chat_id, url, chunk_caption, chunk_reply_to, parse_mode
+                        session, token, chat_id, url, chunk_caption, chunk_reply_to, parse_mode, max_asset_data_size
                     )
 
                 if not result.get("success"):
@@ -528,8 +1031,16 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                 # Multi-item chunk: use sendMediaGroup
                 _LOGGER.debug("Sending chunk %d/%d as media group (%d items)", chunk_idx + 1, len(chunks), len(chunk))
 
-                # Download all media files for this chunk
-                media_files: list[tuple[str, bytes, str]] = []
+                # Get cache reference
+                cache = self.coordinator.telegram_cache
+
+                # Collect media items - either from cache (file_id) or by downloading
+                # Each item: (type, media_ref, filename, url, is_cached)
+                # media_ref is either file_id (str) or data (bytes)
+                media_items: list[tuple[str, str | bytes, str, str, bool]] = []
+                oversized_photos: list[tuple[bytes, str | None, str]] = []  # (data, caption, url)
+                skipped_count = 0
+
                 for i, item in enumerate(chunk):
                     url = item.get("url")
                     media_type = item.get("type", "photo")
@@ -546,26 +1057,115 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                             "error": f"Invalid type '{media_type}' in item {chunk_idx * max_group_size + i}. Must be 'photo' or 'video'.",
                         }
+
+                    # Check cache first
+                    cached = cache.get(url) if cache else None
+                    if cached and cached.get("file_id"):
+                        # Use cached file_id
+                        ext = "jpg" if media_type == "photo" else "mp4"
+                        filename = f"media_{chunk_idx * max_group_size + i}.{ext}"
+                        media_items.append((media_type, cached["file_id"], filename, url, True))
+                        _LOGGER.debug("Using cached file_id for media %d", chunk_idx * max_group_size + i)
+                        continue
+
                     try:
-                        _LOGGER.debug("Downloading media %d from %s", chunk_idx * max_group_size + i, url[:80])
-                        async with session.get(url) as resp:
+                        # Download using internal URL for faster local network transfer
+                        download_url = self.coordinator.get_internal_download_url(url)
+                        _LOGGER.debug("Downloading media %d from %s", chunk_idx * max_group_size + i, download_url[:80])
+                        async with session.get(download_url) as resp:
                             if resp.status != 200:
                                 return {
                                     "success": False,
                                     "error": f"Failed to download media {chunk_idx * max_group_size + i}: HTTP {resp.status}",
                                 }
                             data = await resp.read()
+                            _LOGGER.debug("Downloaded media %d: %d bytes", chunk_idx * max_group_size + i, len(data))
+
+                            # Check if media exceeds max_asset_data_size limit (user-defined limit for skipping)
+                            if max_asset_data_size is not None and len(data) > max_asset_data_size:
+                                _LOGGER.warning(
+                                    "Media %d size (%d bytes) exceeds max_asset_data_size limit (%d bytes), skipping",
+                                    chunk_idx * max_group_size + i, len(data), max_asset_data_size
+                                )
+                                skipped_count += 1
+                                continue
+
+                            # For photos, check Telegram limits
+                            if media_type == "photo":
+                                exceeds_limits, reason, width, height = self._check_telegram_photo_limits(data)
+                                if exceeds_limits:
+                                    if send_large_photos_as_documents:
+                                        # Separate this photo to send as document later
+                                        # Caption only on first item of first chunk
+                                        photo_caption = caption if chunk_idx == 0 and i == 0 and len(media_items) == 0 else None
+                                        oversized_photos.append((data, photo_caption, url))
+                                        _LOGGER.info("Photo %d %s, will send as document", i, reason)
+                                        continue
+                                    else:
+                                        # Skip oversized photo
+                                        _LOGGER.warning("Photo %d %s, skipping (set send_large_photos_as_documents=true to send as document)", i, reason)
+                                        skipped_count += 1
+                                        continue
+
                             ext = "jpg" if media_type == "photo" else "mp4"
                             filename = f"media_{chunk_idx * max_group_size + i}.{ext}"
-                            media_files.append((media_type, data, filename))
-                            _LOGGER.debug("Downloaded media %d: %d bytes", chunk_idx * max_group_size + i, len(data))
+                            media_items.append((media_type, data, filename, url, False))
+
                     except aiohttp.ClientError as err:
                         return {
                             "success": False,
                             "error": f"Failed to download media {chunk_idx * max_group_size + i}: {err}",
                         }
 
-                # Build multipart form
+                # Skip this chunk if all files were filtered out
+                if not media_items and not oversized_photos:
+                    _LOGGER.info("Chunk %d/%d: all %d media items skipped",
+                                 chunk_idx + 1, len(chunks), len(chunk))
+                    continue
+
+                # Send media group if we have normal-sized files
+                if media_items:
+                    # Check if all items are cached (can use simple JSON payload)
+                    all_cached = all(is_cached for _, _, _, _, is_cached in media_items)
+
+                    if all_cached:
+                        # All items cached - use simple JSON payload with file_ids
+                        _LOGGER.debug("All %d items cached, using file_ids", len(media_items))
+                        media_json = []
+                        for i, (media_type, file_id, _, _, _) in enumerate(media_items):
+                            media_item_json: dict[str, Any] = {
+                                "type": media_type,
+                                "media": file_id,
+                            }
+                            if chunk_idx == 0 and i == 0 and caption and not oversized_photos:
+                                media_item_json["caption"] = caption
+                                media_item_json["parse_mode"] = parse_mode
+                            media_json.append(media_item_json)
+
+                        payload = {
+                            "chat_id": chat_id,
+                            "media": media_json,
+                        }
+                        if chunk_idx == 0 and reply_to_message_id:
+                            payload["reply_to_message_id"] = reply_to_message_id
+
+                        telegram_url = f"https://api.telegram.org/bot{token}/sendMediaGroup"
+                        try:
+                            async with session.post(telegram_url, json=payload) as response:
+                                result = await response.json()
+                                if response.status == 200 and result.get("ok"):
+                                    chunk_message_ids = [
+                                        msg.get("message_id") for msg in result.get("result", [])
+                                    ]
+                                    all_message_ids.extend(chunk_message_ids)
+                                else:
+                                    # Cache might be stale - fall through to upload path
+                                    _LOGGER.debug("Cached file_ids failed, will re-upload: %s", result.get("description"))
+                                    all_cached = False  # Force re-upload
+                        except aiohttp.ClientError as err:
+                            _LOGGER.debug("Cached file_ids request failed: %s", err)
+                            all_cached = False
+
+                    if not all_cached:
+                        # Build multipart form with mix of cached file_ids and uploaded data
                         form = FormData()
                         form.add_field("chat_id", chat_id)
 
@@ -573,22 +1173,34 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                         if chunk_idx == 0 and reply_to_message_id:
                             form.add_field("reply_to_message_id", str(reply_to_message_id))
 
-                        # Build media JSON with attach:// references
+                        # Build media JSON - use file_id for cached, attach:// for uploaded
                         media_json = []
-                        for i, (media_type, data, filename) in enumerate(media_files):
-                            attach_name = f"file{i}"
-                            media_item: dict[str, Any] = {
+                        upload_idx = 0
+                        urls_to_cache: list[tuple[str, int, str]] = []  # (url, result_idx, type)
+
+                        for i, (media_type, media_ref, filename, url, is_cached) in enumerate(media_items):
+                            if is_cached:
+                                # Use file_id directly
+                                media_item_json: dict[str, Any] = {
+                                    "type": media_type,
+                                    "media": media_ref,  # file_id
+                                }
+                            else:
+                                # Upload this file
+                                attach_name = f"file{upload_idx}"
+                                media_item_json = {
                                     "type": media_type,
                                     "media": f"attach://{attach_name}",
                                 }
-                            # Only add caption to the first item of the first chunk
-                            if chunk_idx == 0 and i == 0 and caption:
-                                media_item["caption"] = caption
-                                media_item["parse_mode"] = parse_mode
-                            media_json.append(media_item)
+
                                 content_type = "image/jpeg" if media_type == "photo" else "video/mp4"
-                            form.add_field(attach_name, data, filename=filename, content_type=content_type)
+                                form.add_field(attach_name, media_ref, filename=filename, content_type=content_type)
+                                urls_to_cache.append((url, i, media_type))
+                                upload_idx += 1
+
+                            if chunk_idx == 0 and i == 0 and caption and not oversized_photos:
+                                media_item_json["caption"] = caption
+                                media_item_json["parse_mode"] = parse_mode
+                            media_json.append(media_item_json)
 
                         form.add_field("media", json.dumps(media_json))
 
@@ -596,8 +1208,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                         telegram_url = f"https://api.telegram.org/bot{token}/sendMediaGroup"
 
                         try:
-                            _LOGGER.debug("Uploading media group chunk %d/%d (%d files) to Telegram",
-                                          chunk_idx + 1, len(chunks), len(media_files))
+                            _LOGGER.debug("Uploading media group chunk %d/%d (%d files, %d cached) to Telegram",
+                                          chunk_idx + 1, len(chunks), len(media_items), len(media_items) - upload_idx)
                             async with session.post(telegram_url, data=form) as response:
                                 result = await response.json()
                                 _LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
@@ -606,8 +1218,43 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                                         msg.get("message_id") for msg in result.get("result", [])
                                     ]
                                     all_message_ids.extend(chunk_message_ids)
+
+                                    # Cache the newly uploaded file_ids
+                                    if cache and urls_to_cache:
+                                        result_messages = result.get("result", [])
+                                        for url, result_idx, m_type in urls_to_cache:
+                                            if result_idx < len(result_messages):
+                                                msg = result_messages[result_idx]
+                                                if m_type == "photo":
+                                                    photos = msg.get("photo", [])
+                                                    if photos:
+                                                        await cache.async_set(url, photos[-1].get("file_id"), "photo")
+                                                elif m_type == "video":
+                                                    video = msg.get("video", {})
+                                                    if video.get("file_id"):
+                                                        await cache.async_set(url, video["file_id"], "video")
                                 else:
-                                    _LOGGER.error("Telegram API error for chunk %d: %s", chunk_idx + 1, result)
+                                    # Log detailed error for media group with total size info
+                                    uploaded_data = [m for m in media_items if not m[4]]
+                                    total_size = sum(len(d) for _, d, _, _, _ in uploaded_data if isinstance(d, bytes))
+                                    _LOGGER.error(
+                                        "Telegram API error for chunk %d/%d: %s | Media count: %d | Uploaded size: %d bytes (%.2f MB)",
+                                        chunk_idx + 1, len(chunks),
+                                        result.get("description", "Unknown Telegram error"),
+                                        len(media_items),
+                                        total_size,
+                                        total_size / (1024 * 1024) if total_size else 0
+                                    )
+                                    # Log detailed diagnostics for the first photo in the group
+                                    for media_type, media_ref, _, _, is_cached in media_items:
+                                        if media_type == "photo" and not is_cached and isinstance(media_ref, bytes):
+                                            self._log_telegram_error(
+                                                error_code=result.get("error_code"),
+                                                description=result.get("description", "Unknown Telegram error"),
+                                                data=media_ref,
+                                                media_type="photo",
+                                            )
+                                            break  # Only log details for first photo
                                     return {
                                         "success": False,
                                         "error": result.get("description", "Unknown Telegram error"),
@@ -622,6 +1269,19 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
                 "failed_at_chunk": chunk_idx + 1,
             }
 
+        # Send oversized photos as documents
+        for i, (data, photo_caption, photo_url) in enumerate(oversized_photos):
+            _LOGGER.debug("Sending oversized photo %d/%d as document", i + 1, len(oversized_photos))
+            result = await self._send_telegram_document(
+                session, token, chat_id, data, f"photo_{i}.jpg",
+                photo_caption, None, parse_mode, photo_url
+            )
+            if result.get("success"):
+                all_message_ids.append(result.get("message_id"))
+            else:
+                _LOGGER.error("Failed to send oversized photo as document: %s", result.get("error"))
+                # Continue with other photos even if one fails
+
         return {
             "success": True,
             "message_ids": all_message_ids,
@@ -659,7 +1319,10 @@ class ImmichAlbumIdSensor(ImmichAlbumBaseSensor):
             return {}
 
         attrs: dict[str, Any] = {
-            "album_name": self._album_data.name,
+            ATTR_ALBUM_NAME: self._album_data.name,
+            ATTR_ASSET_COUNT: self._album_data.asset_count,
+            ATTR_LAST_UPDATED: self._album_data.updated_at,
+            ATTR_CREATED_AT: self._album_data.created_at,
         }
 
         # Primary share URL (prefers public, falls back to protected)
@@ -6,17 +6,17 @@ refresh:
       integration: immich_album_watcher
       domain: sensor
 
-get_recent_assets:
-  name: Get Recent Assets
-  description: Get the most recent assets from the targeted album.
+get_assets:
+  name: Get Assets
+  description: Get assets from the targeted album with optional filtering and ordering.
   target:
     entity:
       integration: immich_album_watcher
       domain: sensor
   fields:
-    count:
-      name: Count
-      description: Number of recent assets to return (1-100).
+    limit:
+      name: Limit
+      description: Maximum number of assets to return (1-100).
       required: false
       default: 10
       selector:
@@ -24,6 +24,110 @@ get_recent_assets:
           min: 1
           max: 100
           mode: slider
+    offset:
+      name: Offset
+      description: Number of assets to skip before returning results (for pagination). Use with limit to fetch assets in pages.
+      required: false
+      default: 0
+      selector:
+        number:
+          min: 0
+          mode: box
+    favorite_only:
+      name: Favorite Only
+      description: Filter to show only favorite assets.
+      required: false
+      default: false
+      selector:
+        boolean:
+    filter_min_rating:
+      name: Minimum Rating
+      description: Minimum rating for assets (1-5). Set to filter by rating.
+      required: false
+      default: 1
+      selector:
+        number:
+          min: 1
+          max: 5
+          mode: slider
+    order_by:
+      name: Order By
+      description: Field to sort assets by.
+      required: false
+      default: "date"
+      selector:
+        select:
+          options:
+            - label: "Date"
+              value: "date"
+            - label: "Rating"
+              value: "rating"
+            - label: "Name"
+              value: "name"
+            - label: "Random"
+              value: "random"
+    order:
+      name: Order
+      description: Sort direction.
+      required: false
+      default: "descending"
+      selector:
+        select:
+          options:
+            - label: "Ascending"
+              value: "ascending"
+            - label: "Descending"
+              value: "descending"
+    asset_type:
+      name: Asset Type
+      description: Filter assets by type (all, photo, or video).
+      required: false
+      default: "all"
+      selector:
+        select:
+          options:
+            - label: "All (no type filtering)"
+              value: "all"
+            - label: "Photos only"
+              value: "photo"
+            - label: "Videos only"
+              value: "video"
+    min_date:
+      name: Minimum Date
+      description: Filter assets created on or after this date (ISO 8601 format, e.g., 2024-01-01 or 2024-01-01T10:30:00).
+      required: false
+      selector:
+        text:
+    max_date:
+      name: Maximum Date
+      description: Filter assets created on or before this date (ISO 8601 format, e.g., 2024-12-31 or 2024-12-31T23:59:59).
+      required: false
+      selector:
+        text:
+    memory_date:
|
name: Memory Date
|
||||||
|
description: Filter assets by matching month and day, excluding the same year (memories filter like Google Photos). Provide a date in ISO 8601 format (e.g., 2024-02-14) to get assets from February 14th of previous years.
|
||||||
|
required: false
|
||||||
|
selector:
|
||||||
|
text:
|
||||||
|
city:
|
||||||
|
name: City
|
||||||
|
description: Filter assets by city name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
|
||||||
|
required: false
|
||||||
|
selector:
|
||||||
|
text:
|
||||||
|
state:
|
||||||
|
name: State
|
||||||
|
description: Filter assets by state/region name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
|
||||||
|
required: false
|
||||||
|
selector:
|
||||||
|
text:
|
||||||
|
country:
|
||||||
|
name: Country
|
||||||
|
description: Filter assets by country name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
|
||||||
|
required: false
|
||||||
|
selector:
|
||||||
|
text:
|
||||||
|
|
||||||
send_telegram_notification:
|
send_telegram_notification:
|
||||||
name: Send Telegram Notification
|
name: Send Telegram Notification
|
||||||
@@ -116,3 +220,39 @@ send_telegram_notification:
|
|||||||
default: true
|
default: true
|
||||||
selector:
|
selector:
|
||||||
boolean:
|
boolean:
|
||||||
|
max_asset_data_size:
|
||||||
|
name: Max Asset Data Size
|
||||||
|
description: Maximum asset size in bytes. Assets exceeding this limit will be skipped. Leave empty for no limit.
|
||||||
|
required: false
|
||||||
|
selector:
|
||||||
|
number:
|
||||||
|
min: 1
|
||||||
|
max: 52428800
|
||||||
|
step: 1048576
|
||||||
|
unit_of_measurement: "bytes"
|
||||||
|
mode: box
|
||||||
|
send_large_photos_as_documents:
|
||||||
|
name: Send Large Photos As Documents
|
||||||
|
description: How to handle photos exceeding Telegram's limits (10MB or 10000px dimension sum). If true, send as documents. If false, skip oversized photos.
|
||||||
|
required: false
|
||||||
|
default: false
|
||||||
|
selector:
|
||||||
|
boolean:
|
||||||
|
chat_action:
|
||||||
|
name: Chat Action
|
||||||
|
description: Chat action to display while processing (typing, upload_photo, upload_video, upload_document). Set to empty to disable.
|
||||||
|
required: false
|
||||||
|
default: "typing"
|
||||||
|
selector:
|
||||||
|
select:
|
||||||
|
options:
|
||||||
|
- label: "Typing"
|
||||||
|
value: "typing"
|
||||||
|
- label: "Uploading Photo"
|
||||||
|
value: "upload_photo"
|
||||||
|
- label: "Uploading Video"
|
||||||
|
value: "upload_video"
|
||||||
|
- label: "Uploading Document"
|
||||||
|
value: "upload_document"
|
||||||
|
- label: "Disabled"
|
||||||
|
value: ""
|
||||||
|
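The `memory_date` field added above filters assets to those taken on the same month and day as a given date but in a different year. A minimal sketch of that rule in plain Python; the function name and the asset dict shape are assumptions for illustration, not the integration's actual implementation:

```python
from datetime import date


def memory_filter(assets: list[dict], memory_date: str) -> list[dict]:
    """Keep assets whose creation date matches memory_date's month and day
    but falls in a different year (a "memories" filter)."""
    target = date.fromisoformat(memory_date)
    result = []
    for asset in assets:
        # Assumes an ISO 8601 timestamp; the first 10 chars are YYYY-MM-DD.
        created = date.fromisoformat(asset["created_at"][:10])
        same_day = (created.month, created.day) == (target.month, target.day)
        if same_day and created.year != target.year:
            result.append(asset)
    return result
```

Excluding the same year is what makes this a memories view: calling it with today's date returns "on this day" assets from previous years only.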
```diff
@@ -14,6 +14,9 @@ _LOGGER = logging.getLogger(__name__)
 STORAGE_VERSION = 1
 STORAGE_KEY_PREFIX = "immich_album_watcher"

+# Default TTL for Telegram file_id cache (48 hours in seconds)
+DEFAULT_TELEGRAM_CACHE_TTL = 48 * 60 * 60
+

 class ImmichAlbumStorage:
     """Handles persistence of album state across restarts."""
@@ -63,3 +66,116 @@ class ImmichAlbumStorage:
         """Remove all storage data."""
         await self._store.async_remove()
         self._data = None
+
+
+class TelegramFileCache:
+    """Cache for Telegram file_ids to avoid re-uploading media.
+
+    When a file is uploaded to Telegram, it returns a file_id that can be reused
+    to send the same file without re-uploading. This cache stores these file_ids
+    keyed by the source URL.
+    """
+
+    def __init__(
+        self,
+        hass: HomeAssistant,
+        album_id: str,
+        ttl_seconds: int = DEFAULT_TELEGRAM_CACHE_TTL,
+    ) -> None:
+        """Initialize the Telegram file cache.
+
+        Args:
+            hass: Home Assistant instance
+            album_id: Album ID for scoping the cache
+            ttl_seconds: Time-to-live for cache entries in seconds (default: 48 hours)
+        """
+        self._store: Store[dict[str, Any]] = Store(
+            hass, STORAGE_VERSION, f"{STORAGE_KEY_PREFIX}.telegram_cache.{album_id}"
+        )
+        self._data: dict[str, Any] | None = None
+        self._ttl_seconds = ttl_seconds
+
+    async def async_load(self) -> None:
+        """Load cache data from storage."""
+        self._data = await self._store.async_load() or {"files": {}}
+        # Clean up expired entries on load
+        await self._cleanup_expired()
+        _LOGGER.debug(
+            "Loaded Telegram file cache with %d entries",
+            len(self._data.get("files", {})),
+        )
+
+    async def _cleanup_expired(self) -> None:
+        """Remove expired cache entries."""
+        if not self._data or "files" not in self._data:
+            return
+
+        now = datetime.now(timezone.utc)
+        expired_keys = []
+
+        for url, entry in self._data["files"].items():
+            cached_at_str = entry.get("cached_at")
+            if cached_at_str:
+                cached_at = datetime.fromisoformat(cached_at_str)
+                age_seconds = (now - cached_at).total_seconds()
+                if age_seconds > self._ttl_seconds:
+                    expired_keys.append(url)
+
+        if expired_keys:
+            for key in expired_keys:
+                del self._data["files"][key]
+            await self._store.async_save(self._data)
+            _LOGGER.debug("Cleaned up %d expired Telegram cache entries", len(expired_keys))
+
+    def get(self, url: str) -> dict[str, Any] | None:
+        """Get cached file_id for a URL.
+
+        Args:
+            url: The source URL of the media
+
+        Returns:
+            Dict with 'file_id' and 'type' if cached and not expired, None otherwise
+        """
+        if not self._data or "files" not in self._data:
+            return None
+
+        entry = self._data["files"].get(url)
+        if not entry:
+            return None
+
+        # Check if expired
+        cached_at_str = entry.get("cached_at")
+        if cached_at_str:
+            cached_at = datetime.fromisoformat(cached_at_str)
+            age_seconds = (datetime.now(timezone.utc) - cached_at).total_seconds()
+            if age_seconds > self._ttl_seconds:
+                return None
+
+        return {
+            "file_id": entry.get("file_id"),
+            "type": entry.get("type"),
+        }
+
+    async def async_set(self, url: str, file_id: str, media_type: str) -> None:
+        """Store a file_id for a URL.
+
+        Args:
+            url: The source URL of the media
+            file_id: The Telegram file_id
+            media_type: The type of media ('photo', 'video', 'document')
+        """
+        if self._data is None:
+            self._data = {"files": {}}
+
+        self._data["files"][url] = {
+            "file_id": file_id,
+            "type": media_type,
+            "cached_at": datetime.now(timezone.utc).isoformat(),
+        }
+        await self._store.async_save(self._data)
+        _LOGGER.debug("Cached Telegram file_id for URL (type: %s)", media_type)
+
+    async def async_remove(self) -> None:
+        """Remove all cache data."""
+        await self._store.async_remove()
+        self._data = None
```
```diff
@@ -116,14 +116,16 @@
     "step": {
       "init": {
         "title": "Immich Album Watcher Options",
-        "description": "Configure the polling interval for all albums.",
+        "description": "Configure the polling interval and Telegram settings for all albums.",
         "data": {
           "scan_interval": "Scan interval (seconds)",
-          "telegram_bot_token": "Telegram Bot Token"
+          "telegram_bot_token": "Telegram Bot Token",
+          "telegram_cache_ttl": "Telegram Cache TTL (hours)"
         },
         "data_description": {
           "scan_interval": "How often to check for album changes (10-3600 seconds)",
-          "telegram_bot_token": "Bot token for sending notifications to Telegram"
+          "telegram_bot_token": "Bot token for sending notifications to Telegram",
+          "telegram_cache_ttl": "How long to cache uploaded file IDs to avoid re-uploading (1-168 hours, default: 48)"
         }
       }
     }
@@ -133,13 +135,61 @@
       "name": "Refresh",
       "description": "Force an immediate refresh of album data from Immich."
     },
-    "get_recent_assets": {
-      "name": "Get Recent Assets",
-      "description": "Get the most recent assets from the targeted album.",
+    "get_assets": {
+      "name": "Get Assets",
+      "description": "Get assets from the targeted album with optional filtering and ordering.",
       "fields": {
-        "count": {
-          "name": "Count",
-          "description": "Number of recent assets to return (1-100)."
+        "limit": {
+          "name": "Limit",
+          "description": "Maximum number of assets to return (1-100)."
+        },
+        "offset": {
+          "name": "Offset",
+          "description": "Number of assets to skip (for pagination)."
+        },
+        "favorite_only": {
+          "name": "Favorite Only",
+          "description": "Filter to show only favorite assets."
+        },
+        "filter_min_rating": {
+          "name": "Minimum Rating",
+          "description": "Minimum rating for assets (1-5)."
+        },
+        "order_by": {
+          "name": "Order By",
+          "description": "Field to sort assets by (date, rating, name, or random)."
+        },
+        "order": {
+          "name": "Order",
+          "description": "Sort direction (ascending or descending)."
+        },
+        "asset_type": {
+          "name": "Asset Type",
+          "description": "Filter assets by type (all, photo, or video)."
+        },
+        "min_date": {
+          "name": "Minimum Date",
+          "description": "Filter assets created on or after this date (ISO 8601 format)."
+        },
+        "max_date": {
+          "name": "Maximum Date",
+          "description": "Filter assets created on or before this date (ISO 8601 format)."
+        },
+        "memory_date": {
+          "name": "Memory Date",
+          "description": "Filter assets by matching month and day, excluding the same year (memories filter)."
+        },
+        "city": {
+          "name": "City",
+          "description": "Filter assets by city name (case-insensitive)."
+        },
+        "state": {
+          "name": "State",
+          "description": "Filter assets by state/region name (case-insensitive)."
+        },
+        "country": {
+          "name": "Country",
+          "description": "Filter assets by country name (case-insensitive)."
         }
       }
     },
@@ -186,6 +236,18 @@
         "wait_for_response": {
           "name": "Wait For Response",
          "description": "Wait for Telegram to finish processing before returning. Set to false for fire-and-forget (automation continues immediately)."
+        },
+        "max_asset_data_size": {
+          "name": "Max Asset Data Size",
+          "description": "Maximum asset size in bytes. Assets exceeding this limit will be skipped. Leave empty for no limit."
+        },
+        "send_large_photos_as_documents": {
+          "name": "Send Large Photos As Documents",
+          "description": "How to handle photos exceeding Telegram's limits (10MB or 10000px dimension sum). If true, send as documents. If false, downsize to fit limits."
+        },
+        "chat_action": {
+          "name": "Chat Action",
+          "description": "Chat action to display while processing (typing, upload_photo, upload_video, upload_document). Set to empty to disable."
         }
       }
     }
```
```diff
@@ -116,14 +116,16 @@
     "step": {
      "init": {
         "title": "Настройки Immich Album Watcher",
-        "description": "Настройте интервал опроса для всех альбомов.",
+        "description": "Настройте интервал опроса и параметры Telegram для всех альбомов.",
         "data": {
           "scan_interval": "Интервал сканирования (секунды)",
-          "telegram_bot_token": "Токен Telegram бота"
+          "telegram_bot_token": "Токен Telegram бота",
+          "telegram_cache_ttl": "Время жизни кэша Telegram (часы)"
         },
         "data_description": {
           "scan_interval": "Как часто проверять изменения в альбомах (10-3600 секунд)",
-          "telegram_bot_token": "Токен бота для отправки уведомлений в Telegram"
+          "telegram_bot_token": "Токен бота для отправки уведомлений в Telegram",
+          "telegram_cache_ttl": "Сколько хранить ID загруженных файлов для повторной отправки без загрузки (1-168 часов, по умолчанию: 48)"
         }
       }
     }
@@ -133,13 +135,61 @@
       "name": "Обновить",
       "description": "Принудительно обновить данные альбома из Immich."
     },
-    "get_recent_assets": {
-      "name": "Получить последние файлы",
-      "description": "Получить последние файлы из выбранного альбома.",
+    "get_assets": {
+      "name": "Получить файлы",
+      "description": "Получить файлы из выбранного альбома с возможностью фильтрации и сортировки.",
       "fields": {
-        "count": {
-          "name": "Количество",
-          "description": "Количество возвращаемых файлов (1-100)."
+        "limit": {
+          "name": "Лимит",
+          "description": "Максимальное количество возвращаемых файлов (1-100)."
+        },
+        "offset": {
+          "name": "Смещение",
+          "description": "Количество файлов для пропуска (для пагинации)."
+        },
+        "favorite_only": {
+          "name": "Только избранные",
+          "description": "Фильтр для отображения только избранных файлов."
+        },
+        "filter_min_rating": {
+          "name": "Минимальный рейтинг",
+          "description": "Минимальный рейтинг для файлов (1-5)."
+        },
+        "order_by": {
+          "name": "Сортировать по",
+          "description": "Поле для сортировки файлов (date - дата, rating - рейтинг, name - имя, random - случайный)."
+        },
+        "order": {
+          "name": "Порядок",
+          "description": "Направление сортировки (ascending - по возрастанию, descending - по убыванию)."
+        },
+        "asset_type": {
+          "name": "Тип файла",
+          "description": "Фильтровать файлы по типу (all - все, photo - только фото, video - только видео)."
+        },
+        "min_date": {
+          "name": "Минимальная дата",
+          "description": "Фильтровать файлы, созданные в эту дату или после (формат ISO 8601)."
+        },
+        "max_date": {
+          "name": "Максимальная дата",
+          "description": "Фильтровать файлы, созданные в эту дату или до (формат ISO 8601)."
+        },
+        "memory_date": {
+          "name": "Дата воспоминания",
+          "description": "Фильтр по совпадению месяца и дня, исключая тот же год (воспоминания)."
+        },
+        "city": {
+          "name": "Город",
+          "description": "Фильтр по названию города (без учёта регистра)."
+        },
+        "state": {
+          "name": "Регион",
+          "description": "Фильтр по названию региона/области (без учёта регистра)."
+        },
+        "country": {
+          "name": "Страна",
+          "description": "Фильтр по названию страны (без учёта регистра)."
         }
       }
     },
@@ -186,6 +236,18 @@
         "wait_for_response": {
           "name": "Ждать ответа",
           "description": "Ждать завершения отправки в Telegram перед возвратом. Установите false для фоновой отправки (автоматизация продолжается немедленно)."
+        },
+        "max_asset_data_size": {
+          "name": "Макс. размер ресурса",
+          "description": "Максимальный размер ресурса в байтах. Ресурсы, превышающие этот лимит, будут пропущены. Оставьте пустым для отсутствия ограничения."
+        },
+        "send_large_photos_as_documents": {
+          "name": "Большие фото как документы",
+          "description": "Как обрабатывать фото, превышающие лимиты Telegram (10МБ или сумма размеров 10000пкс). Если true, отправлять как документы. Если false, уменьшать для соответствия лимитам."
+        },
+        "chat_action": {
+          "name": "Действие в чате",
+          "description": "Действие для отображения во время обработки (typing, upload_photo, upload_video, upload_document). Оставьте пустым для отключения."
         }
       }
     }
```