14 Commits

Author SHA1 Message Date
fde2d0ae31 Bump version to 2.7.1
All checks were successful
Validate / Hassfest (push) Successful in 4s
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 02:51:22 +03:00
31663852f9 Fixed link to automation
2026-02-03 02:50:19 +03:00
5cee3ccc79 Add chat_action parameter to send_telegram_notification service
Shows typing/upload indicator while processing media. Supports:
typing, upload_photo, upload_video, upload_document actions.
Set to empty string to disable. Default: typing.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 02:48:25 +03:00
3b133dc4bb Exclude archived assets from processing status check
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 15:02:25 +03:00
a8ea9ab46a Rename on_this_day to memory_date with exclude-same-year behavior
Renamed the date filter parameter and changed default behavior to match
Google Photos memories - now excludes assets from the same year as the
reference date, returning only photos from previous years on that day.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 14:24:08 +03:00
e88fd0fa3a Add get_assets filtering: offset, on_this_day, city, state, country
- Add offset parameter for pagination support
- Add on_this_day parameter for memories filtering (match month and day)
- Add city, state, country parameters for geolocation filtering
- Update README with new parameters and examples

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 12:25:35 +03:00
3cf916dc77 Rename last_updated attribute to last_updated_at
Renamed for consistency with created_at attribute naming.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 00:30:39 +03:00
df446390f2 Add album metadata attributes to Album ID sensor
Add asset_count, last_updated, and created_at attributes to the
Album ID sensor for convenient access to album metadata.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 00:20:38 +03:00
1d61f05552 Track pending assets for delayed processing events
- Add _pending_asset_ids to track assets detected but not yet processed
- Fire events when pending assets become processed (thumbhash available)
- Fixes issue where videos added during transcoding never triggered events
- Add debug logging for change detection and pending asset tracking
- Document external domain feature in README

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 22:23:32 +03:00
38a2a6ad7a Add external domain support for URLs
- Fetch externalDomain from Immich server config on startup
- Use external domain for user-facing URLs (share links, asset URLs)
- Keep internal connection URL for API calls
- Add get_internal_download_url() to convert external URLs back to
  internal for faster local network downloads (Telegram notifications)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 21:53:02 +03:00
0bb7e71a1e Fix video asset processing detection
- Use thumbhash for all assets instead of encodedVideoPath for videos
  (encodedVideoPath is not exposed in Immich API response)
- Add isTrashed check to exclude trashed assets from events
- Simplify processing status logic for both photos and videos

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 21:36:21 +03:00
c29fc2fbcf Add Telegram file ID caching and reverse geocoding fields
Implement caching for Telegram file_ids to avoid re-uploading the same media.
Cached IDs are reused for subsequent sends, improving performance significantly.
Added configurable cache TTL option (1-168 hours, default 48).

Also added city, state, and country fields from Immich reverse geocoding
to asset data in events and get_assets service.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 03:12:05 +03:00
011f105823 Add geolocation (latitude/longitude) to asset data
Expose GPS coordinates from EXIF data in asset responses. The latitude
and longitude fields are included in get_assets service responses and
event data when available.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 02:29:56 +03:00
ee45fdc177 Fix the services API
2026-02-01 02:22:52 +03:00
12 changed files with 1115 additions and 206 deletions


@@ -3,6 +3,7 @@
## Version Management
Update the integration version in `custom_components/immich_album_watcher/manifest.json` only when changes are made to the **integration content** (files inside `custom_components/immich_album_watcher/`).
**IMPORTANT:** ALWAYS ask before bumping the version.
Do NOT bump version for:

README.md

@@ -4,7 +4,7 @@
A Home Assistant custom integration that monitors [Immich](https://immich.app/) photo/video library albums for changes and exposes them as Home Assistant entities with event-firing capabilities.
> **Tip:** For the best experience, use this integration with the [Immich Album Watcher Blueprint](https://github.com/DolgolyovAlexei/haos-blueprints/blob/main/Common/Immich%20Album%20Watcher.yaml) to easily create automations for album change notifications.
> **Tip:** For the best experience, use this integration with the [Immich Album Watcher Blueprint](https://github.com/DolgolyovAlexei/haos-blueprints/tree/main/Common/Immich%20Album%20Watcher) to easily create automations for album change notifications.
## Features
@@ -77,12 +77,36 @@ A Home Assistant custom integration that monitors [Immich](https://immich.app/)
| Albums | Albums to monitor | Required |
| Scan Interval | How often to check for changes (seconds) | 60 |
| Telegram Bot Token | Bot token for sending media to Telegram (optional) | - |
| Telegram Cache TTL | How long to cache uploaded file IDs (hours, 1-168) | 48 |
### External Domain Support
The integration supports connecting to a local Immich server while using an external domain for user-facing URLs. This is useful when:
- Your Home Assistant connects to Immich via local network (e.g., `http://192.168.1.100:2283`)
- But you want share links and asset URLs to use your public domain (e.g., `https://photos.example.com`)
**How it works:**
1. Configure "External domain" in Immich: **Administration → Settings → Server → External Domain**
2. The integration automatically fetches this setting on startup
3. All user-facing URLs (share links, asset URLs in events) use the external domain
4. API calls and file downloads still use the local connection URL for faster performance
**Example:**
- Server URL (in integration config): `http://192.168.1.100:2283`
- External Domain (in Immich settings): `https://photos.example.com`
- Share links in events: `https://photos.example.com/share/...`
- Telegram downloads: via `http://192.168.1.100:2283` (fast local network)
If no external domain is configured in Immich, all URLs will use the Server URL from the integration configuration.
## Entities Created (per album)
| Entity Type | Name | Description |
|-------------|------|-------------|
| Sensor | Album ID | Album identifier with `album_name` and `share_url` attributes |
| Sensor | Album ID | Album identifier with `album_name`, `asset_count`, `share_url`, `last_updated_at`, and `created_at` attributes |
| Sensor | Asset Count | Total number of assets (includes `people` list in attributes) |
| Sensor | Photo Count | Number of photos in the album |
| Sensor | Video Count | Number of videos in the album |
@@ -119,34 +143,44 @@ target:
entity_id: sensor.album_name_asset_limit
data:
limit: 10 # Maximum number of assets (1-100)
offset: 0 # Number of assets to skip (for pagination)
favorite_only: false # true = favorites only, false = all assets
filter_min_rating: 4 # Min rating (1-5)
order_by: "date" # Options: "date", "rating", "name"
order: "descending" # Options: "ascending", "descending", "random"
order_by: "date" # Options: "date", "rating", "name", "random"
order: "descending" # Options: "ascending", "descending"
asset_type: "all" # Options: "all", "photo", "video"
min_date: "2024-01-01" # Optional: assets created on or after this date
max_date: "2024-12-31" # Optional: assets created on or before this date
memory_date: "2024-02-14" # Optional: memories filter (excludes same year)
city: "Paris" # Optional: filter by city name
state: "California" # Optional: filter by state/region
country: "France" # Optional: filter by country
```
**Parameters:**
- `limit` (optional, default: 10): Maximum number of assets to return (1-100)
- `offset` (optional, default: 0): Number of assets to skip before returning results. Use with `limit` for pagination (e.g., `offset: 0, limit: 10` for first page, `offset: 10, limit: 10` for second page)
- `favorite_only` (optional, default: false): Filter to show only favorite assets
- `filter_min_rating` (optional, default: 1): Minimum rating for assets (1-5 stars). Applied independently of `favorite_only`
- `order_by` (optional, default: "date"): Field to sort assets by
- `"date"`: Sort by creation date
- `"rating"`: Sort by rating (assets without rating are placed last)
- `"name"`: Sort by filename
- `"random"`: Random order (ignores `order`)
- `order` (optional, default: "descending"): Sort direction
- `"ascending"`: Ascending order
- `"descending"`: Descending order
- `"random"`: Random order (ignores `order_by`)
- `asset_type` (optional, default: "all"): Filter by asset type
- `"all"`: No type filtering, return both photos and videos
- `"photo"`: Return only photos
- `"video"`: Return only videos
- `min_date` (optional): Filter assets created on or after this date. Use ISO 8601 format (e.g., `"2024-01-01"` or `"2024-01-01T10:30:00"`)
- `max_date` (optional): Filter assets created on or before this date. Use ISO 8601 format (e.g., `"2024-12-31"` or `"2024-12-31T23:59:59"`)
- `memory_date` (optional): Filter assets by matching month and day, excluding the same year (memories filter like Google Photos). Provide a date in ISO 8601 format (e.g., `"2024-02-14"`) to get all assets taken on February 14th from previous years
- `city` (optional): Filter assets by city name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data
- `state` (optional): Filter assets by state/region name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data
- `country` (optional): Filter assets by country name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data
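The `memory_date` matching rule (same month and day, different year) can be expressed as a small predicate; this is a simplified sketch of the filter, not the integration's exact code:

```python
from datetime import datetime

def matches_memory(asset_created_at: str, memory_date: str) -> bool:
    """True if the asset shares the month/day of memory_date but comes from a different year."""
    ref = datetime.fromisoformat(memory_date.replace("Z", "+00:00"))
    asset = datetime.fromisoformat(asset_created_at.replace("Z", "+00:00"))
    # Match month and day, exclude the reference year (Google Photos-style memories)
    return (asset.month, asset.day) == (ref.month, ref.day) and asset.year != ref.year
```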
**Examples:**
@@ -172,7 +206,7 @@ target:
data:
limit: 10
filter_min_rating: 3
order: "random"
order_by: "random"
```
Get 20 most recent photos only:
@@ -230,6 +264,72 @@ data:
order: "descending"
```
Get "On This Day" memories (photos from today's date in previous years):
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_limit
data:
limit: 20
memory_date: "{{ now().strftime('%Y-%m-%d') }}"
order_by: "date"
order: "ascending"
```
Paginate through all assets (first page):
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_limit
data:
limit: 10
offset: 0
order_by: "date"
order: "descending"
```
Paginate through all assets (second page):
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_limit
data:
limit: 10
offset: 10
order_by: "date"
order: "descending"
```
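The offset/limit arithmetic used by pagination can be sketched as a tiny helper (illustrative only, assuming you know the album's total asset count from the Asset Count sensor):

```python
def pages(total: int, page_size: int):
    """Yield (offset, limit) pairs that cover `total` assets page by page."""
    for offset in range(0, total, page_size):
        yield offset, min(page_size, total - offset)
```

For an album of 25 assets with `page_size=10`, this yields `(0, 10)`, `(10, 10)`, `(20, 5)` — the same offsets you would pass to successive `get_assets` calls.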
Get photos taken in a specific city:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_limit
data:
limit: 50
city: "Paris"
asset_type: "photo"
order_by: "date"
order: "descending"
```
Get all assets from a specific country:
```yaml
service: immich_album_watcher.get_assets
target:
entity_id: sensor.album_name_asset_limit
data:
limit: 100
country: "Japan"
order_by: "date"
order: "ascending"
```
### Send Telegram Notification
Send notifications to Telegram. Supports multiple formats:
@@ -241,6 +341,8 @@ Send notifications to Telegram. Supports multiple formats:
The service downloads media from Immich and uploads it to Telegram, bypassing any CORS restrictions. Large lists of media are automatically split into multiple media groups based on the `max_group_size` parameter (default: 10 items per group).
**File ID Caching:** When media is uploaded to Telegram, the service caches the returned `file_id`. Subsequent sends of the same media will use the cached `file_id` instead of re-uploading, significantly improving performance. The cache TTL is configurable in hub options (default: 48 hours, range: 1-168 hours). The cache is persistent across Home Assistant restarts and is stored per album.
**Examples:**
Text message:
@@ -330,6 +432,7 @@ data:
| `wait_for_response` | Wait for Telegram to finish processing. Set to `false` for fire-and-forget (automation continues immediately). Default: `true` | No |
| `max_asset_data_size` | Maximum asset size in bytes. Assets exceeding this limit will be skipped. Default: no limit | No |
| `send_large_photos_as_documents` | Handle photos exceeding Telegram limits (10MB or 10000px dimension sum). If `true`, send as documents. If `false`, skip oversized photos. Default: `false` | No |
| `chat_action` | Chat action to display while processing media (`typing`, `upload_photo`, `upload_video`, `upload_document`). Set to empty string to disable. Default: `typing` | No |
The service returns a response with `success` status and `message_id` (single message), `message_ids` (media group), or `groups_sent` (number of groups when split). When `wait_for_response` is `false`, the service returns immediately with `{"success": true, "status": "queued"}` while processing continues in the background.
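As a rough illustration of the `chat_action` option (the service and entity names follow the `get_assets` examples above; the `message` field is a placeholder — consult the service schema for the exact media parameters):

```yaml
service: immich_album_watcher.send_telegram_notification
target:
  entity_id: sensor.album_name_asset_limit
data:
  message: "New photos added!"   # placeholder field; see the service schema
  chat_action: "upload_photo"    # indicator shown while media is processed
  wait_for_response: false       # fire-and-forget; automation continues immediately
```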
@@ -418,6 +521,11 @@ Each item in the `added_assets` list contains the following fields:
| `description` | Description/caption of the asset (from EXIF data) |
| `is_favorite` | Whether the asset is marked as favorite (`true` or `false`) |
| `rating` | User rating of the asset (1-5 stars, or `null` if not rated) |
| `latitude` | GPS latitude coordinate (or `null` if no geolocation) |
| `longitude` | GPS longitude coordinate (or `null` if no geolocation) |
| `city` | City name from reverse geocoding (or `null` if unavailable) |
| `state` | State/region name from reverse geocoding (or `null` if unavailable) |
| `country` | Country name from reverse geocoding (or `null` if unavailable) |
| `url` | Public URL to view the asset (only present if album has a shared link) |
| `download_url` | Direct download URL for the original file (if shared link exists) |
| `playback_url` | Video playback URL (for VIDEO assets only, if shared link exists) |


@@ -15,12 +15,14 @@ from .const import (
CONF_HUB_NAME,
CONF_IMMICH_URL,
CONF_SCAN_INTERVAL,
CONF_TELEGRAM_CACHE_TTL,
DEFAULT_SCAN_INTERVAL,
DEFAULT_TELEGRAM_CACHE_TTL,
DOMAIN,
PLATFORMS,
)
from .coordinator import ImmichAlbumWatcherCoordinator
from .storage import ImmichAlbumStorage
from .storage import ImmichAlbumStorage, TelegramFileCache
_LOGGER = logging.getLogger(__name__)
@@ -33,6 +35,7 @@ class ImmichHubData:
url: str
api_key: str
scan_interval: int
telegram_cache_ttl: int
@dataclass
@@ -55,6 +58,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: ImmichConfigEntry) -> bo
url = entry.data[CONF_IMMICH_URL]
api_key = entry.data[CONF_API_KEY]
scan_interval = entry.options.get(CONF_SCAN_INTERVAL, DEFAULT_SCAN_INTERVAL)
telegram_cache_ttl = entry.options.get(CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL)
# Store hub data
entry.runtime_data = ImmichHubData(
@@ -62,6 +66,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: ImmichConfigEntry) -> bo
url=url,
api_key=api_key,
scan_interval=scan_interval,
telegram_cache_ttl=telegram_cache_ttl,
)
# Create storage for persisting album state across restarts
@@ -107,6 +112,12 @@ async def _async_setup_subentry_coordinator(
_LOGGER.debug("Setting up coordinator for album: %s (%s)", album_name, album_id)
# Create and load Telegram file cache for this album
# TTL is in hours from config, convert to seconds
cache_ttl_seconds = hub_data.telegram_cache_ttl * 60 * 60
telegram_cache = TelegramFileCache(hass, album_id, ttl_seconds=cache_ttl_seconds)
await telegram_cache.async_load()
# Create coordinator for this album
coordinator = ImmichAlbumWatcherCoordinator(
hass,
@@ -117,6 +128,7 @@ async def _async_setup_subentry_coordinator(
scan_interval=hub_data.scan_interval,
hub_name=hub_data.name,
storage=storage,
telegram_cache=telegram_cache,
)
# Load persisted state before first refresh to detect changes during downtime


@@ -27,7 +27,9 @@ from .const import (
CONF_IMMICH_URL,
CONF_SCAN_INTERVAL,
CONF_TELEGRAM_BOT_TOKEN,
CONF_TELEGRAM_CACHE_TTL,
DEFAULT_SCAN_INTERVAL,
DEFAULT_TELEGRAM_CACHE_TTL,
DOMAIN,
SUBENTRY_TYPE_ALBUM,
)
@@ -252,6 +254,9 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
CONF_TELEGRAM_BOT_TOKEN: user_input.get(
CONF_TELEGRAM_BOT_TOKEN, ""
),
CONF_TELEGRAM_CACHE_TTL: user_input.get(
CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL
),
},
)
@@ -261,6 +266,9 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
current_bot_token = self._config_entry.options.get(
CONF_TELEGRAM_BOT_TOKEN, ""
)
current_cache_ttl = self._config_entry.options.get(
CONF_TELEGRAM_CACHE_TTL, DEFAULT_TELEGRAM_CACHE_TTL
)
return self.async_show_form(
step_id="init",
@@ -272,6 +280,9 @@ class ImmichAlbumWatcherOptionsFlow(OptionsFlow):
vol.Optional(
CONF_TELEGRAM_BOT_TOKEN, default=current_bot_token
): str,
vol.Optional(
CONF_TELEGRAM_CACHE_TTL, default=current_cache_ttl
): vol.All(vol.Coerce(int), vol.Range(min=1, max=168)),
}
),
)


@@ -14,12 +14,14 @@ CONF_ALBUM_ID: Final = "album_id"
CONF_ALBUM_NAME: Final = "album_name"
CONF_SCAN_INTERVAL: Final = "scan_interval"
CONF_TELEGRAM_BOT_TOKEN: Final = "telegram_bot_token"
CONF_TELEGRAM_CACHE_TTL: Final = "telegram_cache_ttl"
# Subentry type
SUBENTRY_TYPE_ALBUM: Final = "album"
# Defaults
DEFAULT_SCAN_INTERVAL: Final = 60 # seconds
DEFAULT_TELEGRAM_CACHE_TTL: Final = 48 # hours
NEW_ASSETS_RESET_DELAY: Final = 300 # 5 minutes
DEFAULT_SHARE_PASSWORD: Final = "immich123"
@@ -47,7 +49,7 @@ ATTR_REMOVED_COUNT: Final = "removed_count"
ATTR_ADDED_ASSETS: Final = "added_assets"
ATTR_REMOVED_ASSETS: Final = "removed_assets"
ATTR_CHANGE_TYPE: Final = "change_type"
ATTR_LAST_UPDATED: Final = "last_updated"
ATTR_LAST_UPDATED: Final = "last_updated_at"
ATTR_CREATED_AT: Final = "created_at"
ATTR_THUMBNAIL_URL: Final = "thumbnail_url"
ATTR_SHARED: Final = "shared"
@@ -68,6 +70,11 @@ ATTR_ASSET_PLAYBACK_URL: Final = "playback_url"
ATTR_ASSET_DESCRIPTION: Final = "description"
ATTR_ASSET_IS_FAVORITE: Final = "is_favorite"
ATTR_ASSET_RATING: Final = "rating"
ATTR_ASSET_LATITUDE: Final = "latitude"
ATTR_ASSET_LONGITUDE: Final = "longitude"
ATTR_ASSET_CITY: Final = "city"
ATTR_ASSET_STATE: Final = "state"
ATTR_ASSET_COUNTRY: Final = "country"
# Asset types
ASSET_TYPE_IMAGE: Final = "IMAGE"


@@ -8,7 +8,7 @@ from datetime import datetime, timedelta
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from .storage import ImmichAlbumStorage
from .storage import ImmichAlbumStorage, TelegramFileCache
import aiohttp
@@ -29,6 +29,11 @@ from .const import (
ATTR_ASSET_DOWNLOAD_URL,
ATTR_ASSET_FILENAME,
ATTR_ASSET_IS_FAVORITE,
ATTR_ASSET_LATITUDE,
ATTR_ASSET_LONGITUDE,
ATTR_ASSET_CITY,
ATTR_ASSET_STATE,
ATTR_ASSET_COUNTRY,
ATTR_ASSET_OWNER,
ATTR_ASSET_OWNER_ID,
ATTR_ASSET_PLAYBACK_URL,
@@ -120,6 +125,11 @@ class AssetInfo:
people: list[str] = field(default_factory=list)
is_favorite: bool = False
rating: int | None = None
latitude: float | None = None
longitude: float | None = None
city: str | None = None
state: str | None = None
country: str | None = None
is_processed: bool = True # Whether asset is fully processed by Immich
@classmethod
@@ -147,6 +157,15 @@ class AssetInfo:
is_favorite = data.get("isFavorite", False)
rating = exif_info.get("rating") if exif_info else None
# Get geolocation
latitude = exif_info.get("latitude") if exif_info else None
longitude = exif_info.get("longitude") if exif_info else None
# Get reverse geocoded location
city = exif_info.get("city") if exif_info else None
state = exif_info.get("state") if exif_info else None
country = exif_info.get("country") if exif_info else None
# Check if asset is fully processed by Immich
asset_type = data.get("type", ASSET_TYPE_IMAGE)
is_processed = cls._check_processing_status(data, asset_type)
@@ -162,47 +181,69 @@ class AssetInfo:
people=people,
is_favorite=is_favorite,
rating=rating,
latitude=latitude,
longitude=longitude,
city=city,
state=state,
country=country,
is_processed=is_processed,
)
@staticmethod
def _check_processing_status(data: dict[str, Any], asset_type: str) -> bool:
def _check_processing_status(data: dict[str, Any], _asset_type: str) -> bool:
"""Check if asset has been fully processed by Immich.
For photos: Check if thumbnails/previews have been generated
For videos: Check if video transcoding is complete
For all assets: Check if thumbnails have been generated (thumbhash exists).
Immich generates thumbnails for both photos and videos regardless of
whether video transcoding is needed.
Args:
data: Asset data from API response
asset_type: Asset type (IMAGE or VIDEO)
_asset_type: Asset type (IMAGE or VIDEO) - unused but kept for API stability
Returns:
True if asset is fully processed, False otherwise
True if asset is fully processed and not trashed/offline/archived, False otherwise
"""
if asset_type == ASSET_TYPE_VIDEO:
# For videos, check if transcoding is complete
# Video is processed if it has an encoded video path or if isOffline is False
asset_id = data.get("id", "unknown")
asset_type = data.get("type", "unknown")
is_offline = data.get("isOffline", False)
is_trashed = data.get("isTrashed", False)
is_archived = data.get("isArchived", False)
thumbhash = data.get("thumbhash")
_LOGGER.debug(
"Asset %s (%s): isOffline=%s, isTrashed=%s, isArchived=%s, thumbhash=%s",
asset_id,
asset_type,
is_offline,
is_trashed,
is_archived,
bool(thumbhash),
)
# Exclude offline assets
if is_offline:
_LOGGER.debug("Asset %s excluded: offline", asset_id)
return False
# Check if video has been transcoded (has encoded video path)
# Immich uses "encodedVideoPath" or similar field when transcoding is done
has_encoded_video = bool(data.get("encodedVideoPath"))
return has_encoded_video
else: # ASSET_TYPE_IMAGE
# For photos, check if thumbnails have been generated
# Photos are processed if they have thumbnail/preview paths
is_offline = data.get("isOffline", False)
if is_offline:
# Exclude trashed assets
if is_trashed:
_LOGGER.debug("Asset %s excluded: trashed", asset_id)
return False
# Check if thumbnails exist
has_thumbhash = bool(data.get("thumbhash"))
has_thumbnail = has_thumbhash # If thumbhash exists, thumbnails should exist
# Exclude archived assets
if is_archived:
_LOGGER.debug("Asset %s excluded: archived", asset_id)
return False
return has_thumbnail
# Check if thumbnails have been generated
# This works for both photos and videos - Immich always generates thumbnails
# Note: The API doesn't expose video transcoding status (encodedVideoPath),
# but thumbhash is sufficient since Immich generates thumbnails for all assets
is_processed = bool(thumbhash)
if not is_processed:
_LOGGER.debug("Asset %s excluded: no thumbhash", asset_id)
return is_processed
@dataclass
@@ -294,6 +335,7 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
scan_interval: int,
hub_name: str = "Immich",
storage: ImmichAlbumStorage | None = None,
telegram_cache: TelegramFileCache | None = None,
) -> None:
"""Initialize the coordinator."""
super().__init__(
@@ -313,13 +355,45 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
self._users_cache: dict[str, str] = {} # user_id -> name
self._shared_links: list[SharedLinkInfo] = []
self._storage = storage
self._telegram_cache = telegram_cache
self._persisted_asset_ids: set[str] | None = None
self._external_domain: str | None = None # Fetched from server config
self._pending_asset_ids: set[str] = set() # Assets detected but not yet processed
@property
def immich_url(self) -> str:
"""Return the Immich URL."""
"""Return the Immich URL (for API calls)."""
return self._url
@property
def external_url(self) -> str:
"""Return the external URL for links.
Uses externalDomain from Immich server config if set,
otherwise falls back to the connection URL.
"""
if self._external_domain:
return self._external_domain.rstrip("/")
return self._url
def get_internal_download_url(self, url: str) -> str:
"""Convert an external URL to internal URL for faster downloads.
If the URL starts with the external domain, replace it with the
internal connection URL to download via local network.
Args:
url: The URL to convert (may be external or internal)
Returns:
URL using internal connection for downloads
"""
if self._external_domain:
external = self._external_domain.rstrip("/")
if url.startswith(external):
return url.replace(external, self._url, 1)
return url
@property
def api_key(self) -> str:
"""Return the API key."""
@@ -335,6 +409,11 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
"""Return the album name."""
return self._album_name
@property
def telegram_cache(self) -> TelegramFileCache | None:
"""Return the Telegram file cache."""
return self._telegram_cache
def update_scan_interval(self, scan_interval: int) -> None:
"""Update the scan interval."""
self.update_interval = timedelta(seconds=scan_interval)
@@ -363,6 +442,7 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
async def async_get_assets(
self,
limit: int = 10,
offset: int = 0,
favorite_only: bool = False,
filter_min_rating: int = 1,
order_by: str = "date",
@@ -370,11 +450,16 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
asset_type: str = "all",
min_date: str | None = None,
max_date: str | None = None,
memory_date: str | None = None,
city: str | None = None,
state: str | None = None,
country: str | None = None,
) -> list[dict[str, Any]]:
"""Get assets from the album with optional filtering and ordering.
Args:
limit: Maximum number of assets to return (1-100)
offset: Number of assets to skip before returning results (for pagination)
favorite_only: Filter to show only favorite assets
filter_min_rating: Minimum rating for assets (1-5)
order_by: Field to sort by - 'date', 'rating', or 'name'
@@ -382,6 +467,10 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
asset_type: Asset type filter - 'all', 'photo', or 'video'
min_date: Filter assets created on or after this date (ISO 8601 format)
max_date: Filter assets created on or before this date (ISO 8601 format)
memory_date: Filter assets by matching month and day, excluding the same year (memories filter)
city: Filter assets by city (case-insensitive substring match)
state: Filter assets by state/region (case-insensitive substring match)
country: Filter assets by country (case-insensitive substring match)
Returns:
List of asset data dictionaries
@@ -412,13 +501,50 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
if max_date:
assets = [a for a in assets if a.created_at <= max_date]
# Apply memory date filtering (match month and day, exclude same year)
if memory_date:
try:
# Parse the reference date (supports ISO 8601 format)
ref_date = datetime.fromisoformat(memory_date.replace("Z", "+00:00"))
ref_year = ref_date.year
ref_month = ref_date.month
ref_day = ref_date.day
def matches_memory(asset: AssetInfo) -> bool:
"""Check if asset matches memory criteria (same month/day, different year)."""
try:
asset_date = datetime.fromisoformat(
asset.created_at.replace("Z", "+00:00")
)
# Match month and day, but exclude same year (true memories behavior)
return (
asset_date.month == ref_month
and asset_date.day == ref_day
and asset_date.year != ref_year
)
except (ValueError, AttributeError):
return False
assets = [a for a in assets if matches_memory(a)]
except ValueError:
_LOGGER.warning("Invalid memory_date format: %s", memory_date)
# Apply geolocation filtering (case-insensitive substring match)
if city:
city_lower = city.lower()
assets = [a for a in assets if a.city and city_lower in a.city.lower()]
if state:
state_lower = state.lower()
assets = [a for a in assets if a.state and state_lower in a.state.lower()]
if country:
country_lower = country.lower()
assets = [a for a in assets if a.country and country_lower in a.country.lower()]
# Apply ordering
if order == "random":
if order_by == "random":
import random
random.shuffle(assets)
else:
# Determine sort key based on order_by
if order_by == "rating":
elif order_by == "rating":
# Sort by rating, putting None values last
assets = sorted(
assets,
@@ -438,8 +564,8 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
reverse=(order == "descending")
)
# Limit results
assets = assets[:limit]
# Apply offset and limit for pagination
assets = assets[offset : offset + limit]
# Build result with all available asset data (matching event data)
result = []
@@ -495,6 +621,36 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
return self._users_cache
async def _async_fetch_server_config(self) -> None:
"""Fetch server config from Immich to get external domain."""
if self._session is None:
self._session = async_get_clientsession(self.hass)
headers = {"x-api-key": self._api_key}
try:
async with self._session.get(
f"{self._url}/api/server/config",
headers=headers,
) as response:
if response.status == 200:
data = await response.json()
external_domain = data.get("externalDomain", "") or ""
self._external_domain = external_domain
if external_domain:
_LOGGER.debug(
"Using external domain from Immich: %s", external_domain
)
else:
_LOGGER.debug(
"No external domain configured in Immich, using connection URL"
)
else:
_LOGGER.warning(
"Failed to fetch server config: HTTP %s", response.status
)
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to fetch server config: %s", err)
async def _async_fetch_shared_links(self) -> list[SharedLinkInfo]:
"""Fetch shared links for this album from Immich."""
if self._session is None:
@@ -538,29 +694,29 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
"""Get the public URL if album has an accessible shared link."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/share/{accessible_links[0].key}"
return f"{self.external_url}/share/{accessible_links[0].key}"
return None
def get_any_url(self) -> str | None:
"""Get any non-expired URL (prefers accessible, falls back to protected)."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self._url}/share/{accessible_links[0].key}"
return f"{self.external_url}/share/{accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self._url}/share/{non_expired[0].key}"
return f"{self.external_url}/share/{non_expired[0].key}"
return None
def get_protected_url(self) -> str | None:
"""Get a protected URL if any password-protected link exists."""
protected_links = self._get_protected_links()
if protected_links:
return f"{self.external_url}/share/{protected_links[0].key}"
return None
def get_protected_urls(self) -> list[str]:
"""Get all password-protected URLs."""
return [f"{self.external_url}/share/{link.key}" for link in self._get_protected_links()]
def get_protected_password(self) -> str | None:
"""Get the password for the first protected link."""
@@ -571,13 +727,13 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
def get_public_urls(self) -> list[str]:
"""Get all accessible public URLs."""
return [f"{self.external_url}/share/{link.key}" for link in self._get_accessible_links()]
def get_shared_links_info(self) -> list[dict[str, Any]]:
"""Get detailed info about all shared links."""
return [
{
"url": f"{self.external_url}/share/{link.key}",
"has_password": link.has_password,
"is_expired": link.is_expired,
"expires_at": link.expires_at.isoformat() if link.expires_at else None,
@@ -590,40 +746,40 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
"""Get the public viewer URL for an asset (web page)."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self.external_url}/share/{accessible_links[0].key}/photos/{asset_id}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self.external_url}/share/{non_expired[0].key}/photos/{asset_id}"
return None
def _get_asset_download_url(self, asset_id: str) -> str | None:
"""Get the direct download URL for an asset (media file)."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self.external_url}/api/assets/{asset_id}/original?key={accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self.external_url}/api/assets/{asset_id}/original?key={non_expired[0].key}"
return None
def _get_asset_video_url(self, asset_id: str) -> str | None:
"""Get the transcoded video playback URL for a video asset."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self.external_url}/api/assets/{asset_id}/video/playback?key={accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self.external_url}/api/assets/{asset_id}/video/playback?key={non_expired[0].key}"
return None
def _get_asset_photo_url(self, asset_id: str) -> str | None:
"""Get the preview-sized thumbnail URL for a photo asset."""
accessible_links = self._get_accessible_links()
if accessible_links:
return f"{self.external_url}/api/assets/{asset_id}/thumbnail?size=preview&key={accessible_links[0].key}"
non_expired = [link for link in self._shared_links if not link.is_expired]
if non_expired:
return f"{self.external_url}/api/assets/{asset_id}/thumbnail?size=preview&key={non_expired[0].key}"
return None
def _build_asset_detail(
@@ -650,11 +806,16 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
ATTR_PEOPLE: asset.people,
ATTR_ASSET_IS_FAVORITE: asset.is_favorite,
ATTR_ASSET_RATING: asset.rating,
ATTR_ASSET_LATITUDE: asset.latitude,
ATTR_ASSET_LONGITUDE: asset.longitude,
ATTR_ASSET_CITY: asset.city,
ATTR_ASSET_STATE: asset.state,
ATTR_ASSET_COUNTRY: asset.country,
}
# Add thumbnail URL if requested
if include_thumbnail:
asset_detail[ATTR_THUMBNAIL_URL] = f"{self.external_url}/api/assets/{asset.id}/thumbnail"
# Add public viewer URL (web page)
asset_url = self._get_asset_public_url(asset.id)
@@ -683,6 +844,10 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
if self._session is None:
self._session = async_get_clientsession(self.hass)
# Fetch server config to get external domain (once)
if self._external_domain is None:
await self._async_fetch_server_config()
# Fetch users to resolve owner names
if not self._users_cache:
await self._async_fetch_users()
@@ -736,11 +901,16 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
elif removed_ids and not added_ids:
change_type = "assets_removed"
added_assets = []
for aid in added_ids:
if aid not in album.assets:
continue
asset = album.assets[aid]
if asset.is_processed:
added_assets.append(asset)
else:
# Track unprocessed assets for later
self._pending_asset_ids.add(aid)
change = AlbumChange(
album_id=album.id,
@@ -797,12 +967,54 @@ class ImmichAlbumWatcherCoordinator(DataUpdateCoordinator[AlbumData | None]):
added_ids = new_state.asset_ids - old_state.asset_ids
removed_ids = old_state.asset_ids - new_state.asset_ids
_LOGGER.debug(
"Change detection: added_ids=%d, removed_ids=%d, pending=%d",
len(added_ids),
len(removed_ids),
len(self._pending_asset_ids),
)
# Track new unprocessed assets and collect processed ones
added_assets = []
for aid in added_ids:
if aid not in new_state.assets:
_LOGGER.debug("Asset %s: not in assets dict", aid)
continue
asset = new_state.assets[aid]
_LOGGER.debug(
"New asset %s (%s): is_processed=%s, filename=%s",
aid,
asset.type,
asset.is_processed,
asset.filename,
)
if asset.is_processed:
added_assets.append(asset)
else:
# Track unprocessed assets for later
self._pending_asset_ids.add(aid)
_LOGGER.debug("Asset %s added to pending (not yet processed)", aid)
# Check if any pending assets are now processed
newly_processed = []
for aid in list(self._pending_asset_ids):
if aid not in new_state.assets:
# Asset was removed, no longer pending
self._pending_asset_ids.discard(aid)
continue
asset = new_state.assets[aid]
if asset.is_processed:
_LOGGER.debug(
"Pending asset %s (%s) is now processed: filename=%s",
aid,
asset.type,
asset.filename,
)
newly_processed.append(asset)
self._pending_asset_ids.discard(aid)
# Include newly processed pending assets
added_assets.extend(newly_processed)
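The change detection above can be exercised as plain set logic, independent of the coordinator. A minimal sketch, assuming a `processed` set stands in for `asset.is_processed` (all names here are hypothetical, not the integration's API):

```python
def detect_changes(
    old_ids: set[str],
    new_ids: set[str],
    processed: set[str],
    pending: set[str],
) -> tuple[list[str], set[str]]:
    """Set-difference change detection with a pending set for assets
    Immich has not finished processing yet (thumbnails/transcodes)."""
    added_ids = new_ids - old_ids
    removed_ids = old_ids - new_ids
    added: list[str] = []
    for aid in added_ids:
        if aid in processed:
            added.append(aid)
        else:
            pending.add(aid)          # revisit on a later poll
    for aid in list(pending):
        if aid not in new_ids:        # asset removed while still pending
            pending.discard(aid)
        elif aid in processed:        # processing finished since last poll
            added.append(aid)
            pending.discard(aid)
    return added, removed_ids


pending: set[str] = set()
added, removed = detect_changes({"a"}, {"a", "b", "c"}, {"a", "b"}, pending)
assert sorted(added) == ["b"] and pending == {"c"}
added, removed = detect_changes({"a", "b", "c"}, {"a", "b", "c"}, {"a", "b", "c"}, pending)
assert added == ["c"] and pending == set()
```

The second call shows why the pending set exists: asset `c` produces no `added_ids` delta on the later poll, yet is still reported once it finishes processing.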
# Detect metadata changes
name_changed = old_state.name != new_state.name


@@ -8,5 +8,5 @@
"iot_class": "cloud_polling",
"issue_tracker": "https://github.com/DolgolyovAlexei/haos-hacs-immich-album-watcher/issues",
"requirements": [],
"version": "2.7.1"
}


@@ -2,6 +2,7 @@
from __future__ import annotations
import asyncio
import logging
from datetime import datetime
from typing import Any
@@ -24,6 +25,7 @@ from homeassistant.util import slugify
from .const import (
ATTR_ALBUM_ID,
ATTR_ALBUM_NAME,
ATTR_ALBUM_PROTECTED_URL,
ATTR_ALBUM_URLS,
ATTR_ASSET_COUNT,
@@ -94,14 +96,29 @@ async def async_setup_entry(
platform.async_register_entity_service(
SERVICE_GET_ASSETS,
{
vol.Optional("limit", default=10): vol.All(
vol.Coerce(int), vol.Range(min=1, max=100)
),
vol.Optional("offset", default=0): vol.All(
vol.Coerce(int), vol.Range(min=0)
),
vol.Optional("favorite_only", default=False): bool,
vol.Optional("filter_min_rating", default=1): vol.All(
vol.Coerce(int), vol.Range(min=1, max=5)
),
vol.Optional("order_by", default="date"): vol.In(
["date", "rating", "name", "random"]
),
vol.Optional("order", default="descending"): vol.In(
["ascending", "descending"]
),
vol.Optional("asset_type", default="all"): vol.In(["all", "photo", "video"]),
vol.Optional("min_date"): str,
vol.Optional("max_date"): str,
vol.Optional("memory_date"): str,
vol.Optional("city"): str,
vol.Optional("state"): str,
vol.Optional("country"): str,
},
"async_get_assets",
supports_response=SupportsResponse.ONLY,
@@ -124,6 +141,13 @@ async def async_setup_entry(
vol.Coerce(int), vol.Range(min=0, max=60000)
),
vol.Optional("wait_for_response", default=True): bool,
vol.Optional("max_asset_data_size"): vol.All(
vol.Coerce(int), vol.Range(min=1, max=52428800)
),
vol.Optional("send_large_photos_as_documents", default=False): bool,
vol.Optional("chat_action", default="typing"): vol.Any(
None, "", vol.In(["typing", "upload_photo", "upload_video", "upload_document"])
),
},
"async_send_telegram_notification",
supports_response=SupportsResponse.OPTIONAL,
@@ -183,6 +207,7 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
async def async_get_assets(
self,
limit: int = 10,
offset: int = 0,
favorite_only: bool = False,
filter_min_rating: int = 1,
order_by: str = "date",
@@ -190,10 +215,15 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
asset_type: str = "all",
min_date: str | None = None,
max_date: str | None = None,
memory_date: str | None = None,
city: str | None = None,
state: str | None = None,
country: str | None = None,
) -> ServiceResponse:
"""Get assets for this album with optional filtering and ordering."""
assets = await self.coordinator.async_get_assets(
limit=limit,
offset=offset,
favorite_only=favorite_only,
filter_min_rating=filter_min_rating,
order_by=order_by,
@@ -201,6 +231,10 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
asset_type=asset_type,
min_date=min_date,
max_date=max_date,
memory_date=memory_date,
city=city,
state=state,
country=country,
)
return {"assets": assets}
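The `memory_date`, `offset`, and `limit` parameters forwarded above can be approximated with plain date arithmetic and list slicing. A sketch under the assumption that each asset is represented by its taken-at `datetime` (the `filter_assets` helper is illustrative, not the coordinator's actual method):

```python
from datetime import datetime


def filter_assets(assets, memory_date=None, offset=0, limit=10):
    """Sketch: memory_date keeps assets whose month/day match the reference
    but whose year differs (Google-Photos-style memories); offset/limit
    then paginate the result."""
    if memory_date is not None:
        ref = datetime.fromisoformat(memory_date)
        assets = [
            a for a in assets
            if a.month == ref.month and a.day == ref.day and a.year != ref.year
        ]
    return assets[offset:offset + limit]


assets = [
    datetime(2022, 2, 14),
    datetime(2023, 2, 14),
    datetime(2024, 2, 14),  # same year as the reference: excluded
    datetime(2024, 3, 1),   # different day: excluded
]
memories = filter_assets(assets, memory_date="2024-02-14")
assert memories == [datetime(2022, 2, 14), datetime(2023, 2, 14)]
assert filter_assets(assets, offset=2, limit=1) == [datetime(2024, 2, 14)]
```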
@@ -218,6 +252,7 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
wait_for_response: bool = True,
max_asset_data_size: int | None = None,
send_large_photos_as_documents: bool = False,
chat_action: str | None = "typing",
) -> ServiceResponse:
"""Send notification to Telegram.
@@ -248,6 +283,7 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
chunk_delay=chunk_delay,
max_asset_data_size=max_asset_data_size,
send_large_photos_as_documents=send_large_photos_as_documents,
chat_action=chat_action,
)
)
return {"success": True, "status": "queued", "message": "Notification queued for background processing"}
@@ -265,6 +301,7 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
chunk_delay=chunk_delay,
max_asset_data_size=max_asset_data_size,
send_large_photos_as_documents=send_large_photos_as_documents,
chat_action=chat_action,
)
async def _execute_telegram_notification(
@@ -280,6 +317,7 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
chunk_delay: int = 0,
max_asset_data_size: int | None = None,
send_large_photos_as_documents: bool = False,
chat_action: str | None = "typing",
) -> ServiceResponse:
"""Execute the Telegram notification (internal method)."""
import json
@@ -297,12 +335,18 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
session = async_get_clientsession(self.hass)
# Handle empty URLs - send simple text message (no typing indicator needed)
if not urls:
return await self._send_telegram_message(
session, token, chat_id, caption or "", reply_to_message_id, disable_web_page_preview, parse_mode
)
# Start chat action indicator for media notifications (before downloading assets)
typing_task = None
if chat_action:
typing_task = self._start_typing_indicator(session, token, chat_id, chat_action)
try:
# Handle single photo
if len(urls) == 1 and urls[0].get("type", "photo") == "photo":
return await self._send_telegram_photo(
@@ -321,6 +365,14 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
session, token, chat_id, urls, caption, reply_to_message_id, max_group_size, chunk_delay, parse_mode,
max_asset_data_size, send_large_photos_as_documents
)
finally:
# Stop chat action indicator when done (success or error)
if typing_task:
typing_task.cancel()
try:
await typing_task
except asyncio.CancelledError:
pass
async def _send_telegram_message(
self,
@@ -370,6 +422,74 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
_LOGGER.error("Telegram message send failed: %s", err)
return {"success": False, "error": str(err)}
async def _send_telegram_chat_action(
self,
session: Any,
token: str,
chat_id: str,
action: str = "typing",
) -> bool:
"""Send a chat action to Telegram (e.g., typing indicator).
Args:
session: aiohttp client session
token: Telegram bot token
chat_id: Target chat ID
action: Chat action type (typing, upload_photo, upload_video, etc.)
Returns:
True if successful, False otherwise
"""
import aiohttp
telegram_url = f"https://api.telegram.org/bot{token}/sendChatAction"
payload = {"chat_id": chat_id, "action": action}
try:
async with session.post(telegram_url, json=payload) as response:
result = await response.json()
if response.status == 200 and result.get("ok"):
_LOGGER.debug("Sent chat action '%s' to chat %s", action, chat_id)
return True
else:
_LOGGER.debug("Failed to send chat action: %s", result.get("description"))
return False
except aiohttp.ClientError as err:
_LOGGER.debug("Chat action request failed: %s", err)
return False
def _start_typing_indicator(
self,
session: Any,
token: str,
chat_id: str,
action: str = "typing",
) -> asyncio.Task:
"""Start a background task that sends chat action indicator periodically.
The chat action indicator expires after ~5 seconds, so we refresh it every 4 seconds.
Args:
session: aiohttp client session
token: Telegram bot token
chat_id: Target chat ID
action: Chat action type (typing, upload_photo, upload_video, etc.)
Returns:
The background task (cancel it when done)
"""
async def action_loop() -> None:
"""Keep sending chat action until cancelled."""
try:
while True:
await self._send_telegram_chat_action(session, token, chat_id, action)
await asyncio.sleep(4)
except asyncio.CancelledError:
_LOGGER.debug("Chat action indicator stopped for action '%s'", action)
return asyncio.create_task(action_loop())
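The start/cancel lifecycle of the indicator task can be exercised standalone. A sketch with a stubbed sender in place of the Telegram API call (interval shortened for the demo; the real integration refreshes every 4 seconds):

```python
import asyncio


async def demo() -> list[str]:
    sent: list[str] = []

    async def keep_alive(action: str) -> None:
        # The chat action expires after ~5 s, so refresh it on an interval
        try:
            while True:
                sent.append(action)        # stand-in for sendChatAction
                await asyncio.sleep(0.01)
        except asyncio.CancelledError:
            pass                           # normal shutdown path

    task = asyncio.create_task(keep_alive("typing"))
    await asyncio.sleep(0.05)              # simulate the media download/upload
    task.cancel()
    try:
        await task                         # swallow the cancellation, as above
    except asyncio.CancelledError:
        pass
    return sent


sent = asyncio.run(demo())
assert sent and set(sent) == {"typing"}
```

Cancelling and then awaiting the task (while tolerating `CancelledError`) is what guarantees the indicator loop has fully stopped before the method returns, matching the `finally` block in `_execute_telegram_notification`.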
def _log_telegram_error(
self,
error_code: int | None,
@@ -487,10 +607,46 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
if not url:
return {"success": False, "error": "Missing 'url' for photo"}
# Check cache for file_id
cache = self.coordinator.telegram_cache
cached = cache.get(url) if cache else None
if cached and cached.get("file_id"):
# Use cached file_id - no download needed
file_id = cached["file_id"]
_LOGGER.debug("Using cached Telegram file_id for photo")
payload = {
"chat_id": chat_id,
"photo": file_id,
"parse_mode": parse_mode,
}
if caption:
payload["caption"] = caption
if reply_to_message_id:
payload["reply_to_message_id"] = reply_to_message_id
telegram_url = f"https://api.telegram.org/bot{token}/sendPhoto"
try:
async with session.post(telegram_url, json=payload) as response:
result = await response.json()
if response.status == 200 and result.get("ok"):
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
"cached": True,
}
else:
# Cache might be stale, fall through to upload
_LOGGER.debug("Cached file_id failed, will re-upload: %s", result.get("description"))
except aiohttp.ClientError as err:
_LOGGER.debug("Cached file_id request failed: %s", err)
try:
# Download the photo using internal URL for faster local network transfer
download_url = self.coordinator.get_internal_download_url(url)
_LOGGER.debug("Downloading photo from %s", download_url[:80])
async with session.get(download_url) as resp:
if resp.status != 200:
return {
"success": False,
@@ -519,7 +675,7 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
_LOGGER.info("Photo %s, sending as document", reason)
return await self._send_telegram_document(
session, token, chat_id, data, "photo.jpg",
caption, reply_to_message_id, parse_mode, url
)
else:
# Skip oversized photo
@@ -550,6 +706,14 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
result = await response.json()
_LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
if response.status == 200 and result.get("ok"):
# Extract and cache file_id
photos = result.get("result", {}).get("photo", [])
if photos and cache:
# Use the largest photo's file_id
file_id = photos[-1].get("file_id")
if file_id:
await cache.async_set(url, file_id, "photo")
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
@@ -589,10 +753,46 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
if not url:
return {"success": False, "error": "Missing 'url' for video"}
# Check cache for file_id
cache = self.coordinator.telegram_cache
cached = cache.get(url) if cache else None
if cached and cached.get("file_id"):
# Use cached file_id - no download needed
file_id = cached["file_id"]
_LOGGER.debug("Using cached Telegram file_id for video")
payload = {
"chat_id": chat_id,
"video": file_id,
"parse_mode": parse_mode,
}
if caption:
payload["caption"] = caption
if reply_to_message_id:
payload["reply_to_message_id"] = reply_to_message_id
telegram_url = f"https://api.telegram.org/bot{token}/sendVideo"
try:
async with session.post(telegram_url, json=payload) as response:
result = await response.json()
if response.status == 200 and result.get("ok"):
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
"cached": True,
}
else:
# Cache might be stale, fall through to upload
_LOGGER.debug("Cached file_id failed, will re-upload: %s", result.get("description"))
except aiohttp.ClientError as err:
_LOGGER.debug("Cached file_id request failed: %s", err)
try:
# Download the video using internal URL for faster local network transfer
download_url = self.coordinator.get_internal_download_url(url)
_LOGGER.debug("Downloading video from %s", download_url[:80])
async with session.get(download_url) as resp:
if resp.status != 200:
return {
"success": False,
@@ -633,6 +833,13 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
result = await response.json()
_LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
if response.status == 200 and result.get("ok"):
# Extract and cache file_id
video = result.get("result", {}).get("video", {})
if video and cache:
file_id = video.get("file_id")
if file_id:
await cache.async_set(url, file_id, "video")
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
@@ -664,11 +871,46 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
caption: str | None = None,
reply_to_message_id: int | None = None,
parse_mode: str = "HTML",
source_url: str | None = None,
) -> ServiceResponse:
"""Send a photo as a document to Telegram (for oversized photos)."""
import aiohttp
from aiohttp import FormData
# Check cache for file_id if source_url is provided
cache = self.coordinator.telegram_cache
if source_url:
cached = cache.get(source_url) if cache else None
if cached and cached.get("file_id") and cached.get("type") == "document":
# Use cached file_id
file_id = cached["file_id"]
_LOGGER.debug("Using cached Telegram file_id for document")
payload = {
"chat_id": chat_id,
"document": file_id,
"parse_mode": parse_mode,
}
if caption:
payload["caption"] = caption
if reply_to_message_id:
payload["reply_to_message_id"] = reply_to_message_id
telegram_url = f"https://api.telegram.org/bot{token}/sendDocument"
try:
async with session.post(telegram_url, json=payload) as response:
result = await response.json()
if response.status == 200 and result.get("ok"):
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
"cached": True,
}
else:
_LOGGER.debug("Cached file_id failed, will re-upload: %s", result.get("description"))
except aiohttp.ClientError as err:
_LOGGER.debug("Cached file_id request failed: %s", err)
try:
# Build multipart form
form = FormData()
@@ -690,6 +932,13 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
result = await response.json()
_LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
if response.status == 200 and result.get("ok"):
# Extract and cache file_id
if source_url and cache:
document = result.get("result", {}).get("document", {})
file_id = document.get("file_id")
if file_id:
await cache.async_set(source_url, file_id, "document")
return {
"success": True,
"message_id": result.get("result", {}).get("message_id"),
@@ -782,9 +1031,14 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
# Multi-item chunk: use sendMediaGroup
_LOGGER.debug("Sending chunk %d/%d as media group (%d items)", chunk_idx + 1, len(chunks), len(chunk))
# Download all media files for this chunk
# Get cache reference
cache = self.coordinator.telegram_cache
# Collect media items - either from cache (file_id) or by downloading
# Each item: (type, media_ref, filename, url, is_cached)
# media_ref is either file_id (str) or data (bytes)
media_items: list[tuple[str, str | bytes, str, str, bool]] = []
oversized_photos: list[tuple[bytes, str | None, str]] = [] # (data, caption, url)
skipped_count = 0
for i, item in enumerate(chunk):
@@ -803,9 +1057,21 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
"error": f"Invalid type '{media_type}' in item {chunk_idx * max_group_size + i}. Must be 'photo' or 'video'.",
}
# Check cache first
cached = cache.get(url) if cache else None
if cached and cached.get("file_id"):
# Use cached file_id
ext = "jpg" if media_type == "photo" else "mp4"
filename = f"media_{chunk_idx * max_group_size + i}.{ext}"
media_items.append((media_type, cached["file_id"], filename, url, True))
_LOGGER.debug("Using cached file_id for media %d", chunk_idx * max_group_size + i)
continue
try:
# Download using internal URL for faster local network transfer
download_url = self.coordinator.get_internal_download_url(url)
_LOGGER.debug("Downloading media %d from %s", chunk_idx * max_group_size + i, download_url[:80])
async with session.get(download_url) as resp:
if resp.status != 200:
return {
"success": False,
@@ -830,8 +1096,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
if send_large_photos_as_documents:
# Separate this photo to send as document later
# Caption only on first item of first chunk
photo_caption = caption if chunk_idx == 0 and i == 0 and len(media_items) == 0 else None
oversized_photos.append((data, photo_caption, url))
_LOGGER.info("Photo %d %s, will send as document", i, reason)
continue
else:
@@ -842,7 +1108,7 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
ext = "jpg" if media_type == "photo" else "mp4"
filename = f"media_{chunk_idx * max_group_size + i}.{ext}"
media_items.append((media_type, data, filename, url, False))
except aiohttp.ClientError as err:
return {
"success": False,
@@ -850,14 +1116,56 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
}
# Skip this chunk if all files were filtered out
if not media_items and not oversized_photos:
_LOGGER.info("Chunk %d/%d: all %d media items skipped",
chunk_idx + 1, len(chunks), len(chunk))
continue
# Send media group if we have normal-sized files
if media_items:
# Check if all items are cached (can use simple JSON payload)
all_cached = all(is_cached for _, _, _, _, is_cached in media_items)
if all_cached:
# All items cached - use simple JSON payload with file_ids
_LOGGER.debug("All %d items cached, using file_ids", len(media_items))
media_json = []
for i, (media_type, file_id, _, _, _) in enumerate(media_items):
media_item_json: dict[str, Any] = {
"type": media_type,
"media": file_id,
}
if chunk_idx == 0 and i == 0 and caption and not oversized_photos:
media_item_json["caption"] = caption
media_item_json["parse_mode"] = parse_mode
media_json.append(media_item_json)
payload = {
"chat_id": chat_id,
"media": media_json,
}
if chunk_idx == 0 and reply_to_message_id:
payload["reply_to_message_id"] = reply_to_message_id
telegram_url = f"https://api.telegram.org/bot{token}/sendMediaGroup"
try:
async with session.post(telegram_url, json=payload) as response:
result = await response.json()
if response.status == 200 and result.get("ok"):
chunk_message_ids = [
msg.get("message_id") for msg in result.get("result", [])
]
all_message_ids.extend(chunk_message_ids)
else:
# Cache might be stale - fall through to upload path
_LOGGER.debug("Cached file_ids failed, will re-upload: %s", result.get("description"))
all_cached = False # Force re-upload
except aiohttp.ClientError as err:
_LOGGER.debug("Cached file_ids request failed: %s", err)
all_cached = False
if not all_cached:
# Build multipart form with mix of cached file_ids and uploaded data
form = FormData()
form.add_field("chat_id", chat_id)
@@ -865,22 +1173,34 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
if chunk_idx == 0 and reply_to_message_id:
form.add_field("reply_to_message_id", str(reply_to_message_id))
# Build media JSON - use file_id for cached, attach:// for uploaded
media_json = []
upload_idx = 0
urls_to_cache: list[tuple[str, int, str]] = [] # (url, result_idx, type)
for i, (media_type, media_ref, filename, url, is_cached) in enumerate(media_items):
if is_cached:
# Use file_id directly
media_item_json: dict[str, Any] = {
"type": media_type,
"media": media_ref, # file_id
}
else:
# Upload this file
attach_name = f"file{upload_idx}"
media_item_json = {
"type": media_type,
"media": f"attach://{attach_name}",
}
content_type = "image/jpeg" if media_type == "photo" else "video/mp4"
form.add_field(attach_name, media_ref, filename=filename, content_type=content_type)
urls_to_cache.append((url, i, media_type))
upload_idx += 1
if chunk_idx == 0 and i == 0 and caption and not oversized_photos:
media_item_json["caption"] = caption
media_item_json["parse_mode"] = parse_mode
media_json.append(media_item_json)
form.add_field("media", json.dumps(media_json))
@@ -888,8 +1208,8 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
telegram_url = f"https://api.telegram.org/bot{token}/sendMediaGroup"
try:
_LOGGER.debug("Uploading media group chunk %d/%d (%d files, %d cached) to Telegram",
chunk_idx + 1, len(chunks), len(media_items), len(media_items) - upload_idx)
async with session.post(telegram_url, data=form) as response:
result = await response.json()
_LOGGER.debug("Telegram API response: status=%d, ok=%s", response.status, result.get("ok"))
@@ -898,24 +1218,40 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
msg.get("message_id") for msg in result.get("result", [])
]
all_message_ids.extend(chunk_message_ids)
# Cache the newly uploaded file_ids
if cache and urls_to_cache:
result_messages = result.get("result", [])
for url, result_idx, m_type in urls_to_cache:
if result_idx < len(result_messages):
msg = result_messages[result_idx]
if m_type == "photo":
photos = msg.get("photo", [])
if photos:
await cache.async_set(url, photos[-1].get("file_id"), "photo")
elif m_type == "video":
video = msg.get("video", {})
if video.get("file_id"):
await cache.async_set(url, video["file_id"], "video")
else:
# Log detailed error for media group with total size info
uploaded_data = [m for m in media_items if not m[4]]
total_size = sum(len(d) for _, d, _, _, _ in uploaded_data if isinstance(d, bytes))
_LOGGER.error(
"Telegram API error for chunk %d/%d: %s | Media count: %d | Uploaded size: %d bytes (%.2f MB)",
chunk_idx + 1, len(chunks),
result.get("description", "Unknown Telegram error"),
len(media_items),
total_size,
total_size / (1024 * 1024) if total_size else 0
)
# Log detailed diagnostics for the first photo in the group
for media_type, media_ref, _, _, is_cached in media_items:
if media_type == "photo" and not is_cached and isinstance(media_ref, bytes):
self._log_telegram_error(
error_code=result.get("error_code"),
description=result.get("description", "Unknown Telegram error"),
data=media_ref,
media_type="photo",
)
break # Only log details for first photo
@@ -934,11 +1270,11 @@ class ImmichAlbumBaseSensor(CoordinatorEntity[ImmichAlbumWatcherCoordinator], Se
}
# Send oversized photos as documents
for i, (data, photo_caption, photo_url) in enumerate(oversized_photos):
_LOGGER.debug("Sending oversized photo %d/%d as document", i + 1, len(oversized_photos))
result = await self._send_telegram_document(
session, token, chat_id, data, f"photo_{i}.jpg",
photo_caption, None, parse_mode, photo_url
)
if result.get("success"):
all_message_ids.append(result.get("message_id"))
@@ -983,7 +1319,10 @@ class ImmichAlbumIdSensor(ImmichAlbumBaseSensor):
return {}
attrs: dict[str, Any] = {
ATTR_ALBUM_NAME: self._album_data.name,
ATTR_ASSET_COUNT: self._album_data.asset_count,
ATTR_LAST_UPDATED: self._album_data.updated_at,
ATTR_CREATED_AT: self._album_data.created_at,
}
# Primary share URL (prefers public, falls back to protected)


@@ -24,6 +24,15 @@ get_assets:
min: 1
max: 100
mode: slider
offset:
name: Offset
description: Number of assets to skip before returning results (for pagination). Use with limit to fetch assets in pages.
required: false
default: 0
selector:
number:
min: 0
mode: box
favorite_only:
name: Favorite Only
description: Filter to show only favorite assets.
@@ -55,6 +64,8 @@ get_assets:
value: "rating"
- label: "Name"
value: "name"
- label: "Random"
value: "random"
order:
name: Order
description: Sort direction.
@@ -67,8 +78,6 @@ get_assets:
value: "ascending"
- label: "Descending"
value: "descending"
- label: "Random"
value: "random"
asset_type:
name: Asset Type
description: Filter assets by type (all, photo, or video).
@@ -95,6 +104,30 @@ get_assets:
required: false
selector:
text:
memory_date:
name: Memory Date
description: Filter assets by matching month and day, excluding the same year (memories filter like Google Photos). Provide a date in ISO 8601 format (e.g., 2024-02-14) to get assets from February 14th of previous years.
required: false
selector:
text:
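Per the description above, memory_date matches month and day while excluding the reference year itself; a standalone sketch of that predicate (the function name is hypothetical, not from the integration):

```python
from datetime import date

def matches_memory_date(asset_date: date, reference: date) -> bool:
    """True when asset falls on the same month/day as reference but in a different year."""
    return (
        asset_date.month == reference.month
        and asset_date.day == reference.day
        and asset_date.year != reference.year
    )
```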
city:
name: City
description: Filter assets by city name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
required: false
selector:
text:
state:
name: State
description: Filter assets by state/region name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
required: false
selector:
text:
country:
name: Country
description: Filter assets by country name (case-insensitive substring match). Based on reverse geocoded location from asset GPS data.
required: false
selector:
text:
send_telegram_notification:
name: Send Telegram Notification
@@ -205,3 +238,21 @@ send_telegram_notification:
default: false
selector:
boolean:
chat_action:
name: Chat Action
description: Chat action to display while processing (typing, upload_photo, upload_video, upload_document). Set to empty to disable.
required: false
default: "typing"
selector:
select:
options:
- label: "Typing"
value: "typing"
- label: "Uploading Photo"
value: "upload_photo"
- label: "Uploading Video"
value: "upload_video"
- label: "Uploading Document"
value: "upload_document"
- label: "Disabled"
value: ""


@@ -14,6 +14,9 @@ _LOGGER = logging.getLogger(__name__)
STORAGE_VERSION = 1
STORAGE_KEY_PREFIX = "immich_album_watcher"
# Default TTL for Telegram file_id cache (48 hours in seconds)
DEFAULT_TELEGRAM_CACHE_TTL = 48 * 60 * 60
class ImmichAlbumStorage:
"""Handles persistence of album state across restarts."""
@@ -63,3 +66,116 @@ class ImmichAlbumStorage:
"""Remove all storage data."""
await self._store.async_remove()
self._data = None
class TelegramFileCache:
"""Cache for Telegram file_ids to avoid re-uploading media.
When a file is uploaded to Telegram, it returns a file_id that can be reused
to send the same file without re-uploading. This cache stores these file_ids
keyed by the source URL.
"""
def __init__(
self,
hass: HomeAssistant,
album_id: str,
ttl_seconds: int = DEFAULT_TELEGRAM_CACHE_TTL,
) -> None:
"""Initialize the Telegram file cache.
Args:
hass: Home Assistant instance
album_id: Album ID for scoping the cache
ttl_seconds: Time-to-live for cache entries in seconds (default: 48 hours)
"""
self._store: Store[dict[str, Any]] = Store(
hass, STORAGE_VERSION, f"{STORAGE_KEY_PREFIX}.telegram_cache.{album_id}"
)
self._data: dict[str, Any] | None = None
self._ttl_seconds = ttl_seconds
async def async_load(self) -> None:
"""Load cache data from storage."""
self._data = await self._store.async_load() or {"files": {}}
# Clean up expired entries on load
await self._cleanup_expired()
_LOGGER.debug(
"Loaded Telegram file cache with %d entries",
len(self._data.get("files", {})),
)
async def _cleanup_expired(self) -> None:
"""Remove expired cache entries."""
if not self._data or "files" not in self._data:
return
now = datetime.now(timezone.utc)
expired_keys = []
for url, entry in self._data["files"].items():
cached_at_str = entry.get("cached_at")
if cached_at_str:
cached_at = datetime.fromisoformat(cached_at_str)
age_seconds = (now - cached_at).total_seconds()
if age_seconds > self._ttl_seconds:
expired_keys.append(url)
if expired_keys:
for key in expired_keys:
del self._data["files"][key]
await self._store.async_save(self._data)
_LOGGER.debug("Cleaned up %d expired Telegram cache entries", len(expired_keys))
def get(self, url: str) -> dict[str, Any] | None:
"""Get cached file_id for a URL.
Args:
url: The source URL of the media
Returns:
Dict with 'file_id' and 'type' if cached and not expired, None otherwise
"""
if not self._data or "files" not in self._data:
return None
entry = self._data["files"].get(url)
if not entry:
return None
# Check if expired
cached_at_str = entry.get("cached_at")
if cached_at_str:
cached_at = datetime.fromisoformat(cached_at_str)
age_seconds = (datetime.now(timezone.utc) - cached_at).total_seconds()
if age_seconds > self._ttl_seconds:
return None
return {
"file_id": entry.get("file_id"),
"type": entry.get("type"),
}
async def async_set(self, url: str, file_id: str, media_type: str) -> None:
"""Store a file_id for a URL.
Args:
url: The source URL of the media
file_id: The Telegram file_id
media_type: The type of media ('photo', 'video', 'document')
"""
if self._data is None:
self._data = {"files": {}}
self._data["files"][url] = {
"file_id": file_id,
"type": media_type,
"cached_at": datetime.now(timezone.utc).isoformat(),
}
await self._store.async_save(self._data)
_LOGGER.debug("Cached Telegram file_id for URL (type: %s)", media_type)
async def async_remove(self) -> None:
"""Remove all cache data."""
await self._store.async_remove()
self._data = None


@@ -116,14 +116,16 @@
"step": {
"init": {
"title": "Immich Album Watcher Options",
"description": "Configure the polling interval for all albums.",
"description": "Configure the polling interval and Telegram settings for all albums.",
"data": {
"scan_interval": "Scan interval (seconds)",
"telegram_bot_token": "Telegram Bot Token"
"telegram_bot_token": "Telegram Bot Token",
"telegram_cache_ttl": "Telegram Cache TTL (hours)"
},
"data_description": {
"scan_interval": "How often to check for album changes (10-3600 seconds)",
"telegram_bot_token": "Bot token for sending notifications to Telegram"
"telegram_bot_token": "Bot token for sending notifications to Telegram",
"telegram_cache_ttl": "How long to cache uploaded file IDs to avoid re-uploading (1-168 hours, default: 48)"
}
}
}
@@ -141,6 +143,10 @@
"name": "Limit",
"description": "Maximum number of assets to return (1-100)."
},
"offset": {
"name": "Offset",
"description": "Number of assets to skip (for pagination)."
},
"favorite_only": {
"name": "Favorite Only",
"description": "Filter to show only favorite assets."
@@ -151,11 +157,11 @@
},
"order_by": {
"name": "Order By",
"description": "Field to sort assets by (date, rating, or name)."
"description": "Field to sort assets by (date, rating, name, or random)."
},
"order": {
"name": "Order",
"description": "Sort direction (ascending, descending, or random)."
"description": "Sort direction (ascending or descending)."
},
"asset_type": {
"name": "Asset Type",
@@ -168,6 +174,22 @@
"max_date": {
"name": "Maximum Date",
"description": "Filter assets created on or before this date (ISO 8601 format)."
},
"memory_date": {
"name": "Memory Date",
"description": "Filter assets by matching month and day, excluding the same year (memories filter)."
},
"city": {
"name": "City",
"description": "Filter assets by city name (case-insensitive)."
},
"state": {
"name": "State",
"description": "Filter assets by state/region name (case-insensitive)."
},
"country": {
"name": "Country",
"description": "Filter assets by country name (case-insensitive)."
}
}
},
@@ -222,6 +244,10 @@
"send_large_photos_as_documents": {
"name": "Send Large Photos As Documents",
"description": "How to handle photos exceeding Telegram's limits (10MB or 10000px dimension sum). If true, send as documents. If false, downsize to fit limits."
},
"chat_action": {
"name": "Chat Action",
"description": "Chat action to display while processing (typing, upload_photo, upload_video, upload_document). Set to empty to disable."
}
}
}


@@ -116,14 +116,16 @@
"step": {
"init": {
"title": "Настройки Immich Album Watcher",
"description": "Настройте интервал опроса для всех альбомов.",
"description": "Настройте интервал опроса и параметры Telegram для всех альбомов.",
"data": {
"scan_interval": "Интервал сканирования (секунды)",
"telegram_bot_token": "Токен Telegram бота"
"telegram_bot_token": "Токен Telegram бота",
"telegram_cache_ttl": "Время жизни кэша Telegram (часы)"
},
"data_description": {
"scan_interval": "Как часто проверять изменения в альбомах (10-3600 секунд)",
"telegram_bot_token": "Токен бота для отправки уведомлений в Telegram"
"telegram_bot_token": "Токен бота для отправки уведомлений в Telegram",
"telegram_cache_ttl": "Сколько хранить ID загруженных файлов для повторной отправки без загрузки (1-168 часов, по умолчанию: 48)"
}
}
}
@@ -141,6 +143,10 @@
"name": "Лимит",
"description": "Максимальное количество возвращаемых файлов (1-100)."
},
"offset": {
"name": "Смещение",
"description": "Количество файлов для пропуска (для пагинации)."
},
"favorite_only": {
"name": "Только избранные",
"description": "Фильтр для отображения только избранных файлов."
@@ -151,11 +157,11 @@
},
"order_by": {
"name": "Сортировать по",
"description": "Поле для сортировки файлов (date - дата, rating - рейтинг, name - имя)."
"description": "Поле для сортировки файлов (date - дата, rating - рейтинг, name - имя, random - случайный)."
},
"order": {
"name": "Порядок",
"description": "Направление сортировки (ascending - по возрастанию, descending - по убыванию, random - случайный)."
"description": "Направление сортировки (ascending - по возрастанию, descending - по убыванию)."
},
"asset_type": {
"name": "Тип файла",
@@ -168,6 +174,22 @@
"max_date": {
"name": "Максимальная дата",
"description": "Фильтровать файлы, созданные в эту дату или до (формат ISO 8601)."
},
"memory_date": {
"name": "Дата воспоминания",
"description": "Фильтр по совпадению месяца и дня, исключая тот же год (воспоминания)."
},
"city": {
"name": "Город",
"description": "Фильтр по названию города (без учёта регистра)."
},
"state": {
"name": "Регион",
"description": "Фильтр по названию региона/области (без учёта регистра)."
},
"country": {
"name": "Страна",
"description": "Фильтр по названию страны (без учёта регистра)."
}
}
},
@@ -222,6 +244,10 @@
"send_large_photos_as_documents": {
"name": "Большие фото как документы",
"description": "Как обрабатывать фото, превышающие лимиты Telegram (10МБ или сумма размеров 10000пкс). Если true, отправлять как документы. Если false, уменьшать для соответствия лимитам."
},
"chat_action": {
"name": "Действие в чате",
"description": "Действие для отображения во время обработки (typing, upload_photo, upload_video, upload_document). Оставьте пустым для отключения."
}
}
}