Compare commits

...

10 Commits

SHA1 Message Date
a7a2da2eb2 not ideal but working bat charging 2026-02-03 15:05:29 +01:00
5b39f80862 Update config_flow.py 2025-09-30 16:39:28 +02:00
d9e03384ff Create api_client.py 2025-09-30 16:13:31 +02:00
85bad099e8 Update update_coordinator.py 2025-09-30 16:13:13 +02:00
f57bc622da Update services.yaml 2025-09-30 16:13:00 +02:00
86a4f1ad0c Update services.py 2025-09-30 16:12:46 +02:00
bcf36a439a Update sensor.py 2025-09-30 16:12:28 +02:00
85c0e6ce24 Update mqtt_publisher.py 2025-09-30 16:12:15 +02:00
af05a78cb1 Update mqtt_common.py 2025-09-30 16:12:00 +02:00
eacdf707f1 Update manifest.json 2025-09-30 16:11:40 +02:00
12 changed files with 1922 additions and 590 deletions

README.md

@@ -7,7 +7,7 @@ Use my code E3WOTQ in the app's cart. The bonus will go to your Wallet
!!! Dedicated card for this integration: !!!
https://github.com/balgerion/ha_Pstryk_card
-[![Wersja](https://img.shields.io/badge/wersja-1.7.2-blue)](https://github.com/balgerion/ha_Pstryk/)
+[![Wersja](https://img.shields.io/badge/wersja-1.8.0-blue)](https://github.com/balgerion/ha_Pstryk/)
A Home Assistant integration for tracking current electricity prices and forecasts from the Pstryk platform.
@@ -27,7 +27,10 @@ A Home Assistant integration for tracking current electricity prices and forecasts from the Pstryk platform.
- 📡 The integration publishes price tables over local MQTT in the native EVCC format
- 📊 Average purchase and sale price - monthly/yearly
- 📈 Monthly/yearly balance
- 🛡️ Debug and logging
- 🔋 **NEW:** Battery recommendation sensor (charge/discharge/standby)
- ⚡ Intra-day arbitrage algorithm (charging during cheap hours, discharging during expensive ones)
- 📊 24h SoC forecast with automatic planning
## Installation
@@ -84,6 +87,7 @@ logo.png (optional)
| `sensor.pstryk_daily_financial_balance` | Daily purchase/sale balance |
| `sensor.pstryk_monthly_financial_balance`| Monthly purchase/sale balance |
| `sensor.pstryk_yearly_financial_balance` | Yearly purchase/sale balance |
| `sensor.pstryk_battery_recommendation` | **NEW:** Battery recommendation (charge/discharge/standby) |
Example automation:
@@ -148,6 +152,88 @@ actions:
```
## Battery Recommendation Sensor 🔋
The `sensor.pstryk_battery_recommendation` sensor automatically calculates when to charge and discharge your energy storage, based on dynamic Pstryk prices.
### Sensor states
| State | Description |
|------|------|
| `charge` | Charge the battery (cheap energy) |
| `discharge` | Discharge the battery to the home/grid (expensive energy) |
| `standby` | No action |
### Intra-day Arbitrage Algorithm
The algorithm detects **multiple arbitrage windows per day**:
1. **Night charging** (00:00-05:59) - the cheapest hours
2. **Morning peak** (06:00-10:59) - discharge
3. **Midday valley** (11:00-14:59) - charge if profitable vs. the evening
4. **Evening peak** (15:00-20:59) - discharge
**Example with typical Polish prices:**
```
Night (0.80 PLN)   → CHARGE
Morning (2.58 PLN) → DISCHARGE
Midday (1.46 PLN)  → CHARGE (arbitrage: 1.46 × 1.25 = 1.83 < 2.63 evening avg)
Evening (3.10 PLN) → DISCHARGE
```
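For intuition, here is a minimal Python sketch of the basic hour classification: pick the N cheapest hours for charging, and discharge above a threshold derived from the average charge price. This is an illustration only, not the integration's actual code (the midday-valley check against the evening average is omitted), and the function name `classify_hours` is made up for this example.

```python
def classify_hours(prices: list[float], charge_hours: int = 6, multiplier: float = 1.3) -> dict[int, str]:
    """Classify each hour of the day as charge/discharge/standby.

    prices: 24 hourly buy prices (PLN/kWh), index = hour of day.
    charge_hours: how many of the cheapest hours to use for charging.
    multiplier: discharge when price >= avg charge price * multiplier.
    """
    # Pick the N cheapest hours for charging
    cheapest = sorted(range(24), key=lambda h: prices[h])[:charge_hours]
    avg_charge = sum(prices[h] for h in cheapest) / len(cheapest)
    threshold = avg_charge * multiplier

    plan = {}
    for hour in range(24):
        if hour in cheapest:
            plan[hour] = "charge"
        elif prices[hour] >= threshold:
            plan[hour] = "discharge"
        else:
            plan[hour] = "standby"
    return plan

# Example: cheap night, expensive morning/evening peaks, quiet late evening
prices = [0.80] * 6 + [2.58] * 5 + [1.46] * 4 + [3.10] * 6 + [1.00] * 3
print(classify_hours(prices))  # night hours -> charge, peaks -> discharge
```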
### Battery Configuration
The following settings are available in the integration options:
| Parameter | Default | Description |
|----------|-----------|------|
| Enable battery sensor | false | Activates the sensor |
| SoC entity | - | Battery state-of-charge sensor |
| Capacity | 15 kWh | Storage capacity |
| Charge rate | 28 %/h | How fast the battery charges |
| Discharge rate | 10 %/h | How fast the battery discharges |
| Minimum SoC | 20% | Threshold below which the battery is never discharged |
| Number of charge hours | 6 | How many of the cheapest hours to use for charging |
| Discharge threshold multiplier | 1.3 | Price must be 1.3× the average charge price |
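To see how the rate parameters interact, a quick back-of-the-envelope calculation with the defaults above (example values only, not taken from the integration's code):

```python
# Assumed example values matching the defaults above
capacity_kwh = 15      # battery capacity
charge_rate = 28       # %/h
discharge_rate = 10    # %/h
soc = 35               # current state of charge, %
min_soc = 20           # never discharge below this

# Hours to reach 100% at the configured charge rate
hours_to_full = (100 - soc) / charge_rate          # ≈ 2.3 h
# Usable energy before hitting the minimum SoC
usable_kwh = (soc - min_soc) / 100 * capacity_kwh  # 2.25 kWh
# Hours of discharge available at the configured rate
discharge_hours = (soc - min_soc) / discharge_rate # 1.5 h
print(hours_to_full, usable_kwh, discharge_hours)
```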
### Sensor attributes
```yaml
sensor.pstryk_battery_recommendation:
state: "charge"
attributes:
current_price: 0.45
current_soc: 65
avg_charge_price: 0.25
discharge_threshold: 0.325
charge_hours: [0,1,2,3,4,11,12,13,14,23]
discharge_hours: [6,7,8,9,10,15,16,17,18,19,20]
standby_hours: [5,21,22]
midday_arbitrage:
profitable: true
midday_charge_hours: [11,12,13,14]
reason: "Mid-day arbitrage charge..."
next_state_change: "15:00"
next_state: "discharge"
```
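If you consume the sensor from Python (for example in a custom component or an AppDaemon-style script) rather than from YAML, the state and attributes can be read via the standard Home Assistant state machine. A sketch under that assumption; the `select.inverter_charger_priority` entity is hypothetical:

```python
# Sketch: reacting to the recommendation from Python (hypothetical entity below)
state = hass.states.get("sensor.pstryk_battery_recommendation")
if state is not None and state.state == "charge":
    soc = state.attributes.get("current_soc")
    threshold = state.attributes.get("discharge_threshold")
    if soc is not None and soc < 100:
        await hass.services.async_call(
            "select",
            "select_option",
            {
                "entity_id": "select.inverter_charger_priority",  # hypothetical
                "option": "PV priority",
            },
        )
```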
### Inverter control automation
An example automation for controlling inverters is available in the file:
📁 `automations/battery_control_pstryk.yaml`
**Features:**
- Instant reaction to sensor changes (30 s debounce)
- Grid-connection overload protection (e.g., Tesla charging > 2000 W → standby)
- Control of multiple inverters
- Logbook logging
**How to use:**
1. Copy the contents of `automations/battery_control_pstryk.yaml`
2. In HA: Settings → Automations → Create → Edit in YAML → Paste
3. Adjust the `device_id` and `entity_id` values to match your devices
4. Save and enable
## EVCC
### Screenshots

automations/battery_control_pstryk.yaml

@ -0,0 +1,237 @@
alias: "Battery Control - Pstryk Recommendations"
description: "React instantly to Pstryk battery recommendations with Tesla charging protection"
mode: single
trigger:
# React instantly when recommendation changes
- platform: state
entity_id: sensor.pstryk_battery_recommendation
to:
- "charge"
- "discharge"
- "standby"
for:
seconds: 30
# Also react when Tesla power changes significantly
- platform: numeric_state
entity_id: 9eed3a28cda747219c2d04d079725d9e
above: 2000
for:
seconds: 10
- platform: numeric_state
entity_id: 9eed3a28cda747219c2d04d079725d9e
below: 2000
for:
seconds: 60
condition:
- condition: template
value_template: >
{{ states('sensor.pstryk_battery_recommendation') not in ['unavailable', 'unknown'] }}
action:
- choose:
# ==========================================
# OVERRIDE: Tesla charging > 2000W = STANDBY
# ==========================================
- conditions:
- type: is_power
condition: device
device_id: 371785f33a0d9b3ea38ed224f9e17a4b
entity_id: 9eed3a28cda747219c2d04d079725d9e
domain: sensor
above: 2000
sequence:
- service: logbook.log
data:
name: "Battery STANDBY (Tesla Override)"
message: >
Tesla charging detected - switching to standby to protect grid connection.
SoC: {{ states('sensor.wifiplug_battery_state_of_charge') }}%
          # Inverter 1 - STANDBY mode
- device_id: ef18bab10bdf401736c3e075d9bdf9b5
domain: select
entity_id: 563f86d007910857cbd24d428ff665b0
type: select_option
option: PV-Utility-Battery (SUB)
- delay:
seconds: 15
- device_id: ef18bab10bdf401736c3e075d9bdf9b5
domain: select
entity_id: 3ae13e5dc606d367078291bda9b40274
type: select_option
option: Only PV charging is allowed
- delay:
seconds: 15
          # Inverter 2 - STANDBY mode
- device_id: d65f655bdd00e2cdb019739f974b8c7c
domain: select
entity_id: c94531b376614314af08b17931f69980
type: select_option
option: PV-Utility-Battery (SUB)
- delay:
seconds: 15
- device_id: d65f655bdd00e2cdb019739f974b8c7c
domain: select
entity_id: b069e234e5478ed26733d4d85b2d00a5
type: select_option
option: Only PV charging is allowed
- service: input_boolean.turn_off
target:
entity_id: input_boolean.1h_battery_boost
# ==========================================
# CHARGE (when Tesla < 2000W)
# ==========================================
- conditions:
- condition: state
entity_id: sensor.pstryk_battery_recommendation
state: "charge"
- type: is_power
condition: device
device_id: 371785f33a0d9b3ea38ed224f9e17a4b
entity_id: 9eed3a28cda747219c2d04d079725d9e
domain: sensor
below: 2000
sequence:
- service: logbook.log
data:
name: "Battery CHARGE"
message: >
{{ state_attr('sensor.pstryk_battery_recommendation', 'reason') }}
SoC: {{ states('sensor.wifiplug_battery_state_of_charge') }}%
Price: {{ states('sensor.pstryk_current_buy_price') }} PLN/kWh
          # Inverter 1 - CHARGE mode
- device_id: ef18bab10bdf401736c3e075d9bdf9b5
domain: select
entity_id: 563f86d007910857cbd24d428ff665b0
type: select_option
option: PV-Utility-Battery (SUB)
- delay:
seconds: 15
- device_id: ef18bab10bdf401736c3e075d9bdf9b5
domain: select
entity_id: 3ae13e5dc606d367078291bda9b40274
type: select_option
option: PV priority
- delay:
seconds: 15
          # Inverter 2 - CHARGE mode
- device_id: d65f655bdd00e2cdb019739f974b8c7c
domain: select
entity_id: c94531b376614314af08b17931f69980
type: select_option
option: PV-Utility-Battery (SUB)
- delay:
seconds: 15
- device_id: d65f655bdd00e2cdb019739f974b8c7c
domain: select
entity_id: b069e234e5478ed26733d4d85b2d00a5
type: select_option
option: PV priority
# ==========================================
# DISCHARGE
# ==========================================
- conditions:
- condition: state
entity_id: sensor.pstryk_battery_recommendation
state: "discharge"
sequence:
- service: logbook.log
data:
name: "Battery DISCHARGE"
message: >
{{ state_attr('sensor.pstryk_battery_recommendation', 'reason') }}
SoC: {{ states('sensor.wifiplug_battery_state_of_charge') }}%
Price: {{ states('sensor.pstryk_current_buy_price') }} PLN/kWh
          # Inverter 1 - DISCHARGE mode
- device_id: ef18bab10bdf401736c3e075d9bdf9b5
domain: select
entity_id: 563f86d007910857cbd24d428ff665b0
type: select_option
option: PV-Battery-Utility (SBU)
- delay:
seconds: 15
- device_id: ef18bab10bdf401736c3e075d9bdf9b5
domain: select
entity_id: 3ae13e5dc606d367078291bda9b40274
type: select_option
option: Only PV charging is allowed
- delay:
seconds: 15
          # Inverter 2 - DISCHARGE mode
- device_id: d65f655bdd00e2cdb019739f974b8c7c
domain: select
entity_id: c94531b376614314af08b17931f69980
type: select_option
option: PV-Battery-Utility (SBU)
- delay:
seconds: 15
- device_id: d65f655bdd00e2cdb019739f974b8c7c
domain: select
entity_id: b069e234e5478ed26733d4d85b2d00a5
type: select_option
option: Only PV charging is allowed
- service: input_boolean.turn_off
target:
entity_id: input_boolean.1h_battery_boost
# ==========================================
# STANDBY
# ==========================================
- conditions:
- condition: state
entity_id: sensor.pstryk_battery_recommendation
state: "standby"
sequence:
- service: logbook.log
data:
name: "Battery STANDBY"
message: >
{{ state_attr('sensor.pstryk_battery_recommendation', 'reason') }}
SoC: {{ states('sensor.wifiplug_battery_state_of_charge') }}%
Price: {{ states('sensor.pstryk_current_buy_price') }} PLN/kWh
          # Inverter 1 - STANDBY mode
- device_id: ef18bab10bdf401736c3e075d9bdf9b5
domain: select
entity_id: 563f86d007910857cbd24d428ff665b0
type: select_option
option: PV-Utility-Battery (SUB)
- delay:
seconds: 15
- device_id: ef18bab10bdf401736c3e075d9bdf9b5
domain: select
entity_id: 3ae13e5dc606d367078291bda9b40274
type: select_option
option: Only PV charging is allowed
- delay:
seconds: 15
          # Inverter 2 - STANDBY mode
- device_id: d65f655bdd00e2cdb019739f974b8c7c
domain: select
entity_id: c94531b376614314af08b17931f69980
type: select_option
option: PV-Utility-Battery (SUB)
- delay:
seconds: 15
- device_id: d65f655bdd00e2cdb019739f974b8c7c
domain: select
entity_id: b069e234e5478ed26733d4d85b2d00a5
type: select_option
option: Only PV charging is allowed
- service: input_boolean.turn_off
target:
entity_id: input_boolean.1h_battery_boost

api_client.py

@ -0,0 +1,370 @@
"""Shared API client for Pstryk Energy integration with caching and rate limiting."""
import logging
import asyncio
import random
from datetime import datetime, timedelta
from typing import Any, Dict, Optional
from email.utils import parsedate_to_datetime
import aiohttp
from homeassistant.core import HomeAssistant
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers.update_coordinator import UpdateFailed
from homeassistant.helpers.translation import async_get_translations
from .const import API_URL, API_TIMEOUT, DOMAIN
_LOGGER = logging.getLogger(__name__)
class PstrykAPIClient:
"""Shared API client with caching, rate limiting, and proper error handling."""
def __init__(self, hass: HomeAssistant, api_key: str):
"""Initialize the API client."""
self.hass = hass
self.api_key = api_key
self._session: Optional[aiohttp.ClientSession] = None
self._translations: Dict[str, str] = {}
self._translations_loaded = False
# Rate limiting: {endpoint_key: {"retry_after": datetime, "backoff": float}}
self._rate_limits: Dict[str, Dict[str, Any]] = {}
self._rate_limit_lock = asyncio.Lock()
# Request throttling - limit concurrent requests
self._request_semaphore = asyncio.Semaphore(3) # Max 3 concurrent requests
# Deduplication - track in-flight requests
self._in_flight: Dict[str, asyncio.Task] = {}
self._in_flight_lock = asyncio.Lock()
@property
def session(self) -> aiohttp.ClientSession:
"""Get or create aiohttp session."""
if self._session is None:
self._session = async_get_clientsession(self.hass)
return self._session
async def _load_translations(self):
"""Load translations for error messages."""
if not self._translations_loaded:
try:
self._translations = await async_get_translations(
self.hass, self.hass.config.language, DOMAIN
)
self._translations_loaded = True
# Debug: log sample keys to understand the format
if self._translations:
sample_keys = list(self._translations.keys())[:3]
_LOGGER.debug("Loaded %d translation keys, samples: %s",
len(self._translations), sample_keys)
except Exception as ex:
_LOGGER.warning("Failed to load translations for API client: %s", ex)
self._translations = {}
self._translations_loaded = True
def _t(self, key: str, **kwargs) -> str:
"""Get translated string with fallback."""
# Try different key formats as async_get_translations may return different formats
possible_keys = [
f"component.{DOMAIN}.debug.{key}", # Full format: component.pstryk.debug.key
f"{DOMAIN}.debug.{key}", # Domain format: pstryk.debug.key
f"debug.{key}", # Short format: debug.key
key # Just the key
]
template = None
for possible_key in possible_keys:
template = self._translations.get(possible_key)
if template:
break
# If translation not found, create a fallback message
if not template:
# Fallback patterns for common error types
if key == "api_error_html":
template = "API error {status} for {endpoint} (HTML error page received)"
elif key == "rate_limited":
template = "Endpoint '{endpoint}' is rate limited. Will retry after {seconds} seconds"
elif key == "waiting_rate_limit":
template = "Waiting {seconds} seconds for rate limit to clear"
else:
_LOGGER.debug("Translation key not found: %s (tried formats: %s)", key, possible_keys)
template = key
try:
return template.format(**kwargs)
except (KeyError, ValueError) as e:
_LOGGER.warning("Failed to format translation template '%s': %s", template, e)
return template
def _get_endpoint_key(self, url: str) -> str:
"""Extract endpoint key from URL for rate limiting."""
        # Extract the main endpoint (e.g., "pricing", "prosumer-pricing", "energy-cost").
        # Check "prosumer-pricing" first: a prosumer URL also contains
        # "pricing/?resolution" as a substring, so the order matters.
        if "prosumer-pricing/?resolution" in url:
            return "prosumer-pricing"
        elif "pricing/?resolution" in url:
            return "pricing"
        elif "meter-data/energy-cost" in url:
            return "energy-cost"
        elif "meter-data/energy-usage" in url:
            return "energy-usage"
return "unknown"
async def _check_rate_limit(self, endpoint_key: str) -> Optional[float]:
"""Check if we're rate limited and return wait time if needed."""
async with self._rate_limit_lock:
if endpoint_key in self._rate_limits:
limit_info = self._rate_limits[endpoint_key]
retry_after = limit_info.get("retry_after")
if retry_after and datetime.now() < retry_after:
wait_time = (retry_after - datetime.now()).total_seconds()
return wait_time
elif retry_after and datetime.now() >= retry_after:
# Rate limit expired, clear it
del self._rate_limits[endpoint_key]
return None
def _calculate_backoff(self, attempt: int, base_delay: float = 20.0) -> float:
"""Calculate exponential backoff with jitter."""
# Exponential backoff: base_delay * (2 ^ attempt)
backoff = base_delay * (2 ** attempt)
# Add jitter: ±20% randomization
jitter = backoff * 0.2 * (2 * random.random() - 1)
return max(1.0, backoff + jitter)
async def _handle_rate_limit(self, response: aiohttp.ClientResponse, endpoint_key: str):
"""Handle 429 rate limit response."""
# Ensure translations are loaded
await self._load_translations()
retry_after_header = response.headers.get("Retry-After")
wait_time = None
if retry_after_header:
try:
# Try parsing as seconds
wait_time = int(retry_after_header)
except ValueError:
# Try parsing as HTTP date
try:
retry_date = parsedate_to_datetime(retry_after_header)
wait_time = (retry_date - datetime.now()).total_seconds()
except Exception:
pass
# Fallback to 3600 seconds (1 hour) if not specified
if wait_time is None:
wait_time = 3600
retry_after_dt = datetime.now() + timedelta(seconds=wait_time)
async with self._rate_limit_lock:
self._rate_limits[endpoint_key] = {
"retry_after": retry_after_dt,
"backoff": wait_time
}
_LOGGER.warning(
self._t("rate_limited", endpoint=endpoint_key, seconds=int(wait_time))
)
async def _make_request(
self,
url: str,
max_retries: int = 3,
base_delay: float = 20.0
) -> Dict[str, Any]:
"""Make API request with retries, rate limiting, and deduplication."""
# Load translations if not already loaded
await self._load_translations()
endpoint_key = self._get_endpoint_key(url)
# Check if we're rate limited
wait_time = await self._check_rate_limit(endpoint_key)
if wait_time and wait_time > 0:
# If wait time is reasonable, wait
if wait_time <= 60:
_LOGGER.info(
self._t("waiting_rate_limit", seconds=int(wait_time))
)
await asyncio.sleep(wait_time)
else:
raise UpdateFailed(
f"API rate limited for {endpoint_key}. Please try again in {int(wait_time/60)} minutes."
)
headers = {
"Authorization": self.api_key,
"Accept": "application/json"
}
last_exception = None
for attempt in range(max_retries):
try:
# Use semaphore to limit concurrent requests
async with self._request_semaphore:
async with asyncio.timeout(API_TIMEOUT):
async with self.session.get(url, headers=headers) as response:
# Handle different status codes
if response.status == 200:
data = await response.json()
return data
elif response.status == 429:
# Handle rate limiting
await self._handle_rate_limit(response, endpoint_key)
# Retry with exponential backoff
if attempt < max_retries - 1:
backoff = self._calculate_backoff(attempt, base_delay)
_LOGGER.debug(
"Rate limited, retrying in %.1f seconds (attempt %d/%d)",
backoff, attempt + 1, max_retries
)
await asyncio.sleep(backoff)
continue
else:
raise UpdateFailed(
f"API rate limit exceeded after {max_retries} attempts"
)
elif response.status == 500:
error_text = await response.text()
# Extract plain text from HTML if present
                                if error_text.strip().lower().startswith(('<!doctype', '<html')):
# Just log that it's HTML, not the whole HTML
_LOGGER.error(
self._t("api_error_html", status=500, endpoint=endpoint_key)
)
else:
# Log actual error text (truncated)
_LOGGER.error(
"API returned 500 for %s: %s",
endpoint_key, error_text[:100]
)
# Retry with backoff
if attempt < max_retries - 1:
backoff = self._calculate_backoff(attempt, base_delay)
_LOGGER.debug(
"Retrying after 500 error in %.1f seconds (attempt %d/%d)",
backoff, attempt + 1, max_retries
)
await asyncio.sleep(backoff)
continue
else:
raise UpdateFailed(
f"API server error (500) for {endpoint_key} after {max_retries} attempts"
)
elif response.status in (401, 403):
raise UpdateFailed(
f"Authentication failed (status {response.status}). Please check your API key."
)
elif response.status == 404:
raise UpdateFailed(
f"API endpoint not found (404): {endpoint_key}"
)
else:
error_text = await response.text()
# Clean HTML from error messages
                                if error_text.strip().lower().startswith(('<!doctype', '<html')):
_LOGGER.error(
self._t("api_error_html", status=response.status, endpoint=endpoint_key)
)
else:
_LOGGER.error(
"API error %d for %s: %s",
response.status, endpoint_key, error_text[:100]
)
# For other errors, retry with backoff
if attempt < max_retries - 1:
backoff = self._calculate_backoff(attempt, base_delay)
await asyncio.sleep(backoff)
continue
else:
raise UpdateFailed(
f"API error {response.status} for {endpoint_key}"
)
except asyncio.TimeoutError as err:
last_exception = err
_LOGGER.warning(
"Timeout fetching from %s (attempt %d/%d)",
endpoint_key, attempt + 1, max_retries
)
if attempt < max_retries - 1:
backoff = self._calculate_backoff(attempt, base_delay)
await asyncio.sleep(backoff)
continue
except aiohttp.ClientError as err:
last_exception = err
_LOGGER.warning(
"Network error fetching from %s: %s (attempt %d/%d)",
endpoint_key, err, attempt + 1, max_retries
)
if attempt < max_retries - 1:
backoff = self._calculate_backoff(attempt, base_delay)
await asyncio.sleep(backoff)
continue
except Exception as err:
last_exception = err
_LOGGER.exception(
"Unexpected error fetching from %s: %s",
endpoint_key, err
)
break
# All retries exhausted, raise the error
if last_exception:
raise UpdateFailed(
f"Failed to fetch data from {endpoint_key} after {max_retries} attempts"
) from last_exception
raise UpdateFailed(f"Failed to fetch data from {endpoint_key}")
async def fetch(
self,
url: str,
max_retries: int = 3,
base_delay: float = 20.0
) -> Dict[str, Any]:
"""Fetch data with deduplication of concurrent requests."""
# Check if there's already an in-flight request for this URL
async with self._in_flight_lock:
if url in self._in_flight:
_LOGGER.debug("Deduplicating request for %s", url)
# Wait for the existing request to complete
try:
return await self._in_flight[url]
except Exception:
# If the in-flight request failed, create a new one
pass
# Create new request task
task = asyncio.create_task(
self._make_request(url, max_retries, base_delay)
)
self._in_flight[url] = task
try:
result = await task
return result
finally:
# Remove from in-flight requests
async with self._in_flight_lock:
self._in_flight.pop(url, None)
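For context, a sketch of how a coordinator would typically consume this shared client; the window timestamps below are made-up example values, and only the `fetch` method defined above is assumed:

```python
# Sketch: using the shared client (example window values, not real data)
from .api_client import PstrykAPIClient
from .const import API_URL

async def fetch_buy_frames(hass, api_key: str):
    client = PstrykAPIClient(hass, api_key)
    url = f"{API_URL}pricing/?resolution=hour&window_start=2025-09-30T00:00:00Z&window_end=2025-10-02T00:00:00Z"
    # Concurrent callers of the same URL are deduplicated,
    # and 429/5xx responses are retried with exponential backoff
    data = await client.fetch(url, max_retries=3, base_delay=20.0)
    return data.get("frames", [])
```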

config_flow.py

@@ -8,6 +8,7 @@ from homeassistant.core import callback
from homeassistant.components import mqtt
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers import selector
from .const import (
    DOMAIN,
    API_URL,
@@ -25,7 +26,34 @@ from .const import (
    MIN_RETRY_ATTEMPTS,
    MAX_RETRY_ATTEMPTS,
    MIN_RETRY_DELAY,
-    MAX_RETRY_DELAY
+    MAX_RETRY_DELAY,
# Battery recommendation constants
CONF_BATTERY_ENABLED,
CONF_BATTERY_SOC_ENTITY,
CONF_BATTERY_CAPACITY,
CONF_BATTERY_CHARGE_RATE,
CONF_BATTERY_DISCHARGE_RATE,
CONF_BATTERY_MIN_SOC,
CONF_BATTERY_CHARGE_HOURS,
CONF_BATTERY_DISCHARGE_MULTIPLIER,
DEFAULT_BATTERY_CAPACITY,
DEFAULT_BATTERY_CHARGE_RATE,
DEFAULT_BATTERY_DISCHARGE_RATE,
DEFAULT_BATTERY_MIN_SOC,
DEFAULT_BATTERY_CHARGE_HOURS,
DEFAULT_BATTERY_DISCHARGE_MULTIPLIER,
MIN_BATTERY_CAPACITY,
MAX_BATTERY_CAPACITY,
MIN_BATTERY_CHARGE_RATE,
MAX_BATTERY_CHARGE_RATE,
MIN_BATTERY_DISCHARGE_RATE,
MAX_BATTERY_DISCHARGE_RATE,
MIN_BATTERY_MIN_SOC,
MAX_BATTERY_MIN_SOC,
MIN_BATTERY_CHARGE_HOURS,
MAX_BATTERY_CHARGE_HOURS,
MIN_BATTERY_DISCHARGE_MULTIPLIER,
MAX_BATTERY_DISCHARGE_MULTIPLIER,
)
class MQTTNotConfiguredError(HomeAssistantError):
@@ -192,16 +220,12 @@ class PstrykConfigFlow(config_entries.ConfigFlow, domain=DOMAIN):
    @staticmethod
    def async_get_options_flow(config_entry):
        """Get the options flow for this handler."""
-        return PstrykOptionsFlowHandler(config_entry)
+        return PstrykOptionsFlowHandler()
class PstrykOptionsFlowHandler(config_entries.OptionsFlow):
    """Handle options flow for Pstryk Energy - single page view."""
-    def __init__(self, config_entry):
-        """Initialize options flow."""
-        super().__init__(config_entry)
    async def async_step_init(self, user_input=None):
        """Manage the options - single page for quick configuration."""
        errors = {}
@@ -265,6 +289,34 @@ class PstrykOptionsFlowHandler(config_entries.OptionsFlow):
            vol.All(vol.Coerce(int), vol.Range(min=MIN_RETRY_DELAY, max=MAX_RETRY_DELAY)),
        })
# Battery Recommendation Configuration
schema.update({
vol.Optional(CONF_BATTERY_ENABLED, default=self.config_entry.options.get(
CONF_BATTERY_ENABLED, False)): bool,
vol.Optional(CONF_BATTERY_SOC_ENTITY, default=self.config_entry.options.get(
CONF_BATTERY_SOC_ENTITY, "")): selector.EntitySelector(
selector.EntitySelectorConfig(domain="sensor")
),
vol.Optional(CONF_BATTERY_CAPACITY, default=self.config_entry.options.get(
CONF_BATTERY_CAPACITY, DEFAULT_BATTERY_CAPACITY)):
vol.All(vol.Coerce(int), vol.Range(min=MIN_BATTERY_CAPACITY, max=MAX_BATTERY_CAPACITY)),
vol.Optional(CONF_BATTERY_CHARGE_RATE, default=self.config_entry.options.get(
CONF_BATTERY_CHARGE_RATE, DEFAULT_BATTERY_CHARGE_RATE)):
vol.All(vol.Coerce(int), vol.Range(min=MIN_BATTERY_CHARGE_RATE, max=MAX_BATTERY_CHARGE_RATE)),
vol.Optional(CONF_BATTERY_DISCHARGE_RATE, default=self.config_entry.options.get(
CONF_BATTERY_DISCHARGE_RATE, DEFAULT_BATTERY_DISCHARGE_RATE)):
vol.All(vol.Coerce(int), vol.Range(min=MIN_BATTERY_DISCHARGE_RATE, max=MAX_BATTERY_DISCHARGE_RATE)),
vol.Optional(CONF_BATTERY_MIN_SOC, default=self.config_entry.options.get(
CONF_BATTERY_MIN_SOC, DEFAULT_BATTERY_MIN_SOC)):
vol.All(vol.Coerce(int), vol.Range(min=MIN_BATTERY_MIN_SOC, max=MAX_BATTERY_MIN_SOC)),
vol.Optional(CONF_BATTERY_CHARGE_HOURS, default=self.config_entry.options.get(
CONF_BATTERY_CHARGE_HOURS, DEFAULT_BATTERY_CHARGE_HOURS)):
vol.All(vol.Coerce(int), vol.Range(min=MIN_BATTERY_CHARGE_HOURS, max=MAX_BATTERY_CHARGE_HOURS)),
vol.Optional(CONF_BATTERY_DISCHARGE_MULTIPLIER, default=self.config_entry.options.get(
CONF_BATTERY_DISCHARGE_MULTIPLIER, DEFAULT_BATTERY_DISCHARGE_MULTIPLIER)):
vol.All(vol.Coerce(float), vol.Range(min=MIN_BATTERY_DISCHARGE_MULTIPLIER, max=MAX_BATTERY_DISCHARGE_MULTIPLIER)),
})
        # Add description with section information
        description_text = "Configure your energy price monitoring settings"
        if mqtt_enabled:

const.py

@@ -2,7 +2,7 @@
DOMAIN = "pstryk"
API_URL = "https://api.pstryk.pl/integrations/"
-API_TIMEOUT = 60
+API_TIMEOUT = 30  # Reduced from 60 to allow faster startup
BUY_ENDPOINT = "pricing/?resolution=hour&window_start={start}&window_end={end}"
SELL_ENDPOINT = "prosumer-pricing/?resolution=hour&window_start={start}&window_end={end}"
@@ -32,3 +32,33 @@ MIN_RETRY_ATTEMPTS = 1
MAX_RETRY_ATTEMPTS = 10
MIN_RETRY_DELAY = 5  # seconds
MAX_RETRY_DELAY = 300  # seconds (5 minutes)
# Battery recommendation sensor constants
CONF_BATTERY_ENABLED = "battery_enabled"
CONF_BATTERY_SOC_ENTITY = "battery_soc_entity"
CONF_BATTERY_CAPACITY = "battery_capacity"
CONF_BATTERY_CHARGE_RATE = "battery_charge_rate"
CONF_BATTERY_DISCHARGE_RATE = "battery_discharge_rate"
CONF_BATTERY_MIN_SOC = "battery_min_soc"
CONF_BATTERY_CHARGE_HOURS = "battery_charge_hours"
CONF_BATTERY_DISCHARGE_MULTIPLIER = "battery_discharge_multiplier"
DEFAULT_BATTERY_CAPACITY = 15 # kWh
DEFAULT_BATTERY_CHARGE_RATE = 28 # %/h
DEFAULT_BATTERY_DISCHARGE_RATE = 10 # %/h
DEFAULT_BATTERY_MIN_SOC = 20 # %
DEFAULT_BATTERY_CHARGE_HOURS = 6 # number of cheapest hours to charge
DEFAULT_BATTERY_DISCHARGE_MULTIPLIER = 1.3 # discharge when price >= avg_charge_price * multiplier
MIN_BATTERY_CAPACITY = 1
MAX_BATTERY_CAPACITY = 100
MIN_BATTERY_CHARGE_RATE = 5
MAX_BATTERY_CHARGE_RATE = 100
MIN_BATTERY_DISCHARGE_RATE = 5
MAX_BATTERY_DISCHARGE_RATE = 50
MIN_BATTERY_MIN_SOC = 5
MAX_BATTERY_MIN_SOC = 50
MIN_BATTERY_CHARGE_HOURS = 3
MAX_BATTERY_CHARGE_HOURS = 12
MIN_BATTERY_DISCHARGE_MULTIPLIER = 1.1
MAX_BATTERY_DISCHARGE_MULTIPLIER = 2.0

manifest.json

@@ -1,7 +1,7 @@
{
  "domain": "pstryk",
  "name": "Pstryk Energy",
-  "version": "1.7.2",
+  "version": "1.8.0",
  "codeowners": ["@balgerion"],
  "requirements": ["aiohttp>=3.7"],
  "dependencies": ["mqtt"],

mqtt_publisher.py

@@ -1,9 +1,8 @@
"""MQTT Publisher for Pstryk Energy integration."""
import logging
import json
-from datetime import datetime, timedelta
+from datetime import timedelta
import asyncio
-from homeassistant.helpers.entity import Entity
from homeassistant.core import HomeAssistant
from homeassistant.util import dt as dt_util
from homeassistant.components import mqtt
@@ -47,7 +46,7 @@ class PstrykMqttPublisher:
        # Load translations
        try:
            self._translations = await async_get_translations(
-                self.hass, self.hass.config.language, DOMAIN, ["mqtt"]
+                self.hass, self.hass.config.language, DOMAIN
            )
        except Exception as ex:
            _LOGGER.warning("Failed to load translations for MQTT publisher: %s", ex)
@@ -203,7 +202,8 @@ class PstrykMqttPublisher:
            last_time = formatted_prices[-1]["start"]
            _LOGGER.debug(f"Formatted {len(formatted_prices)} prices for MQTT from {first_time} to {last_time}")
-            # Verify we have complete days
+            # Verify we have complete days (debug only, not critical)
            today = dt_util.now().strftime("%Y-%m-%d")
            hours_by_date = {}
            for fp in formatted_prices:
                date_part = fp["start"][:10]  # YYYY-MM-DD
@@ -213,7 +213,15 @@
            for date, hours in hours_by_date.items():
                if hours != 24:
-                    _LOGGER.warning(f"Incomplete day {date}: only {hours} hours instead of 24")
+                    # Only log as debug - incomplete days are normal for past/future data
# Past days get cleaned up, future days may not be available yet
if date < today:
_LOGGER.debug(f"Past day {date}: {hours} hours (old data being cleaned)")
elif date == today and hours >= 20:
# Today with 20+ hours is acceptable (may be missing 1-2 hours at edges)
_LOGGER.debug(f"Today {date}: {hours}/24 hours (acceptable)")
else:
_LOGGER.debug(f"Incomplete day {date}: {hours}/24 hours")
        else:
            _LOGGER.warning("No prices formatted for MQTT")

File diff suppressed because it is too large.

services.py

@@ -49,7 +49,7 @@ async def async_setup_services(hass: HomeAssistant) -> None:
    # Get translations for logs
    try:
        translations = await async_get_translations(
-            hass, hass.config.language, DOMAIN, ["mqtt"]
+            hass, hass.config.language, DOMAIN
        )
    except Exception as e:
        _LOGGER.warning("Failed to load translations for services: %s", e)

en.json

@@ -76,7 +76,15 @@
        "mqtt_topic_sell": "MQTT Topic for Sell Prices",
        "mqtt_48h_mode": "Enable 48h mode for MQTT",
        "retry_attempts": "API retry attempts",
-        "retry_delay": "API retry delay (seconds)"
+        "retry_delay": "API retry delay (seconds)",
"battery_enabled": "Enable Battery Recommendation",
"battery_soc_entity": "Battery SoC Sensor",
"battery_capacity": "Battery Capacity (kWh)",
"battery_charge_rate": "Charge Rate (%/h)",
"battery_discharge_rate": "Discharge Rate (%/h)",
"battery_min_soc": "Minimum SoC (%)",
"battery_charge_hours": "Number of Charge Hours",
"battery_discharge_multiplier": "Discharge Price Multiplier"
      },
      "data_description": {
        "buy_top": "How many cheapest buy prices to highlight (1-24 hours)",
@@ -88,7 +96,15 @@
        "mqtt_topic_sell": "MQTT topic where sell prices will be published",
        "mqtt_48h_mode": "Publish 48 hours of prices (today + tomorrow) instead of just today",
        "retry_attempts": "How many times to retry API requests on failure",
-        "retry_delay": "Wait time between API retry attempts"
+        "retry_delay": "Wait time between API retry attempts",
"battery_enabled": "Enable smart battery charging recommendation sensor",
"battery_soc_entity": "Entity that provides current battery State of Charge (%)",
"battery_capacity": "Total battery capacity in kWh",
"battery_charge_rate": "How fast the battery charges (% per hour)",
"battery_discharge_rate": "How fast the battery discharges (% per hour)",
"battery_min_soc": "Never discharge below this level (%)",
"battery_charge_hours": "How many cheapest hours to use for charging (3-12)",
"battery_discharge_multiplier": "Discharge when price >= avg_charge_price * this value"
      }
    },
    "price_settings": {

pl.json

@@ -65,7 +65,47 @@
  "step": {
    "init": {
      "title": "Opcje Pstryk Energy",
-      "description": "Zmodyfikuj konfigurację Pstryk Energy"
+      "description": "Zmodyfikuj konfigurację Pstryk Energy",
"data": {
"buy_top": "Liczba najlepszych cen zakupu",
"sell_top": "Liczba najlepszych cen sprzedaży",
"buy_worst": "Liczba najgorszych cen zakupu",
"sell_worst": "Liczba najgorszych cen sprzedaży",
"mqtt_enabled": "Włącz mostek MQTT",
"mqtt_topic_buy": "Temat MQTT dla cen zakupu",
"mqtt_topic_sell": "Temat MQTT dla cen sprzedaży",
"mqtt_48h_mode": "Włącz tryb 48h dla MQTT",
"retry_attempts": "Liczba prób API",
"retry_delay": "Opóźnienie między próbami (sekundy)",
"battery_enabled": "Włącz rekomendacje baterii",
"battery_soc_entity": "Sensor SoC baterii",
"battery_capacity": "Pojemność baterii (kWh)",
"battery_charge_rate": "Tempo ładowania (%/h)",
"battery_discharge_rate": "Tempo rozładowania (%/h)",
"battery_min_soc": "Minimalny SoC (%)",
"battery_charge_hours": "Liczba godzin ładowania",
"battery_discharge_multiplier": "Mnożnik progu rozładowania"
},
"data_description": {
"buy_top": "Ile najtańszych cen zakupu wyróżnić (1-24 godzin)",
"sell_top": "Ile najwyższych cen sprzedaży wyróżnić (1-24 godzin)",
"buy_worst": "Ile najdroższych cen zakupu wyróżnić (1-24 godzin)",
"sell_worst": "Ile najniższych cen sprzedaży wyróżnić (1-24 godzin)",
"mqtt_enabled": "Publikuj ceny do MQTT dla systemów zewnętrznych jak EVCC",
"mqtt_topic_buy": "Temat MQTT gdzie będą publikowane ceny zakupu",
"mqtt_topic_sell": "Temat MQTT gdzie będą publikowane ceny sprzedaży",
"mqtt_48h_mode": "Publikuj 48 godzin cen (dziś + jutro) zamiast tylko dzisiaj",
"retry_attempts": "Ile razy ponawiać żądania API w przypadku błędu",
"retry_delay": "Czas oczekiwania między próbami połączenia z API",
"battery_enabled": "Włącz inteligentny sensor rekomendacji ładowania baterii",
"battery_soc_entity": "Encja dostarczająca aktualny stan naładowania baterii (%)",
"battery_capacity": "Całkowita pojemność baterii w kWh",
"battery_charge_rate": "Jak szybko ładuje się bateria (% na godzinę)",
"battery_discharge_rate": "Jak szybko rozładowuje się bateria (% na godzinę)",
"battery_min_soc": "Nigdy nie rozładowuj poniżej tego poziomu (%)",
"battery_charge_hours": "Ile najtańszych godzin wykorzystać do ładowania (3-12)",
"battery_discharge_multiplier": "Rozładowuj gdy cena >= średnia_ładowania × ta wartość"
}
    },
    "price_settings": {
      "title": "Ustawienia Monitorowania Cen",

update_coordinator.py

@@ -2,106 +2,22 @@
import logging
from datetime import timedelta
import asyncio
-import aiohttp
-import async_timeout
from homeassistant.helpers.update_coordinator import DataUpdateCoordinator, UpdateFailed
from homeassistant.helpers.event import async_track_point_in_time
from homeassistant.util import dt as dt_util
from homeassistant.helpers.translation import async_get_translations
from .const import (
    API_URL,
-    API_TIMEOUT,
    BUY_ENDPOINT,
    SELL_ENDPOINT,
    DOMAIN,
-    CONF_MQTT_48H_MODE,
-    CONF_RETRY_ATTEMPTS,
-    CONF_RETRY_DELAY,
    DEFAULT_RETRY_ATTEMPTS,
    DEFAULT_RETRY_DELAY
)
from .api_client import PstrykAPIClient
_LOGGER = logging.getLogger(__name__)
class ExponentialBackoffRetry:
"""Implementacja wykładniczego opóźnienia przy ponawianiu prób."""
def __init__(self, max_retries=DEFAULT_RETRY_ATTEMPTS, base_delay=DEFAULT_RETRY_DELAY):
"""Inicjalizacja mechanizmu ponowień.
Args:
max_retries: Maksymalna liczba prób
base_delay: Podstawowe opóźnienie w sekundach (zwiększane wykładniczo)
"""
self.max_retries = max_retries
self.base_delay = base_delay
self._translations = {}
async def load_translations(self, hass):
"""Załaduj tłumaczenia dla aktualnego języka."""
try:
self._translations = await async_get_translations(
hass, hass.config.language, DOMAIN, ["debug"]
)
except Exception as ex:
_LOGGER.warning("Failed to load translations for retry mechanism: %s", ex)
async def execute(self, func, *args, price_type=None, **kwargs):
"""Wykonaj funkcję z ponawianiem prób.
Args:
func: Funkcja asynchroniczna do wykonania
args, kwargs: Argumenty funkcji
price_type: Typ ceny (do logów)
Returns:
Wynik funkcji
Raises:
UpdateFailed: Po wyczerpaniu wszystkich prób
"""
last_exception = None
for retry in range(self.max_retries):
try:
return await func(*args, **kwargs)
except Exception as err:
last_exception = err
                # Don't wait after the last attempt
if retry < self.max_retries - 1:
delay = self.base_delay * (2 ** retry)
                    # Use the translated message if available
retry_msg = self._translations.get(
"debug.retry_attempt",
"Retry {retry}/{max_retries} after error: {error} (delay: {delay}s)"
).format(
retry=retry + 1,
max_retries=self.max_retries,
error=str(err),
delay=round(delay, 1)
)
_LOGGER.debug(retry_msg)
await asyncio.sleep(delay)
        # If all attempts failed and the last error was a timeout
if isinstance(last_exception, asyncio.TimeoutError) and price_type:
timeout_msg = self._translations.get(
"debug.timeout_after_retries",
"Timeout fetching {price_type} data from API after {retries} retries"
).format(price_type=price_type, retries=self.max_retries)
_LOGGER.error(timeout_msg)
api_timeout_msg = self._translations.get(
"debug.api_timeout_message",
"API timeout after {timeout} seconds (tried {retries} times)"
).format(timeout=API_TIMEOUT, retries=self.max_retries)
raise UpdateFailed(api_timeout_msg)
        # For other error types
raise last_exception
def convert_price(value):
    """Convert price string to float."""
@@ -111,478 +27,277 @@ def convert_price(value):
        _LOGGER.warning("Price conversion error: %s", e)
        return None
class PstrykDataUpdateCoordinator(DataUpdateCoordinator):
    """Coordinator to fetch both current price and today's table."""
-    def __del__(self):
-        """Properly clean up when object is deleted."""
-        if hasattr(self, '_unsub_hourly') and self._unsub_hourly:
-            self._unsub_hourly()
-        if hasattr(self, '_unsub_midnight') and self._unsub_midnight:
-            self._unsub_midnight()
-        if hasattr(self, '_unsub_afternoon') and self._unsub_afternoon:
-            self._unsub_afternoon()
-    def __init__(self, hass, api_key, price_type, mqtt_48h_mode=False, retry_attempts=None, retry_delay=None):
+    def __init__(self, hass, api_client: PstrykAPIClient, price_type, mqtt_48h_mode=False, retry_attempts=None, retry_delay=None):
        """Initialize the coordinator."""
        self.hass = hass
-        self.api_key = api_key
+        self.api_client = api_client
        self.price_type = price_type
        self.mqtt_48h_mode = mqtt_48h_mode
        self._unsub_hourly = None
        self._unsub_midnight = None
        self._unsub_afternoon = None
-        # Initialize translations
        self._translations = {}
-        # Track if we had tomorrow prices in last update
        self._had_tomorrow_prices = False
-        # Get retry configuration from entry options
-        if retry_attempts is None or retry_delay is None:
-            # Try to find the config entry to get retry options
-            for entry in hass.config_entries.async_entries(DOMAIN):
-                if entry.data.get("api_key") == api_key:
-                    retry_attempts = entry.options.get(CONF_RETRY_ATTEMPTS, DEFAULT_RETRY_ATTEMPTS)
-                    retry_delay = entry.options.get(CONF_RETRY_DELAY, DEFAULT_RETRY_DELAY)
-                    break
-            else:
-                # Use defaults if no matching entry found
-                retry_attempts = DEFAULT_RETRY_ATTEMPTS
-                retry_delay = DEFAULT_RETRY_DELAY
-        # Initialize the retry mechanism with configurable values
-        self.retry_mechanism = ExponentialBackoffRetry(max_retries=retry_attempts, base_delay=retry_delay)
+        # Get retry configuration
+        if retry_attempts is None:
+            retry_attempts = DEFAULT_RETRY_ATTEMPTS
+        if retry_delay is None:
+            retry_delay = DEFAULT_RETRY_DELAY
+        self.retry_attempts = retry_attempts
+        self.retry_delay = retry_delay
-        # Set a default update interval as a fallback (1 hour)
-        # This ensures data is refreshed even if scheduled updates fail
+        # Set update interval as fallback
        update_interval = timedelta(hours=1)
        super().__init__(
            hass,
            _LOGGER,
            name=f"{DOMAIN}_{price_type}",
-            update_interval=update_interval,  # Add fallback interval
+            update_interval=update_interval,
        )
async def _make_api_request(self, url):
"""Make API request with proper error handling."""
async with aiohttp.ClientSession() as session:
async with async_timeout.timeout(API_TIMEOUT):
resp = await session.get(
url,
headers={"Authorization": self.api_key, "Accept": "application/json"}
)
                # Handle the various error codes
if resp.status == 401:
error_msg = self._translations.get(
"debug.api_error_401",
"API authentication failed for {price_type} - invalid API key"
).format(price_type=self.price_type)
_LOGGER.error(error_msg)
raise UpdateFailed(self._translations.get(
"debug.api_error_401_user",
"API authentication failed - invalid API key"
))
elif resp.status == 403:
error_msg = self._translations.get(
"debug.api_error_403",
"API access forbidden for {price_type} - permissions issue"
).format(price_type=self.price_type)
_LOGGER.error(error_msg)
raise UpdateFailed(self._translations.get(
"debug.api_error_403_user",
"API access forbidden - check permissions"
))
elif resp.status == 404:
error_msg = self._translations.get(
"debug.api_error_404",
"API endpoint not found for {price_type} - check URL"
).format(price_type=self.price_type)
_LOGGER.error(error_msg)
raise UpdateFailed(self._translations.get(
"debug.api_error_404_user",
"API endpoint not found"
))
elif resp.status == 429:
error_msg = self._translations.get(
"debug.api_error_429",
"API rate limit exceeded for {price_type}"
).format(price_type=self.price_type)
_LOGGER.error(error_msg)
raise UpdateFailed(self._translations.get(
"debug.api_error_429_user",
"API rate limit exceeded - try again later"
))
elif resp.status == 502:
error_msg = self._translations.get(
"debug.api_error_502",
"API Gateway error (502) for {price_type} - server may be down"
).format(price_type=self.price_type)
_LOGGER.error(error_msg)
raise UpdateFailed(self._translations.get(
"debug.api_error_502_user",
"API Gateway error (502) - server may be down"
))
elif 500 <= resp.status < 600:
error_msg = self._translations.get(
"debug.api_error_5xx",
"API server error ({status}) for {price_type} - server issue"
).format(status=resp.status, price_type=self.price_type)
_LOGGER.error(error_msg)
raise UpdateFailed(self._translations.get(
"debug.api_error_5xx_user",
"API server error ({status}) - server issue"
).format(status=resp.status))
elif resp.status != 200:
error_text = await resp.text()
                    # Show only the first 50 characters of the error to keep the log short
short_error = error_text[:50] + ("..." if len(error_text) > 50 else "")
error_msg = self._translations.get(
"debug.api_error_generic",
"API error {status} for {price_type}: {error}"
).format(status=resp.status, price_type=self.price_type, error=short_error)
_LOGGER.error(error_msg)
raise UpdateFailed(self._translations.get(
"debug.api_error_generic_user",
"API error {status}: {error}"
).format(status=resp.status, error=short_error))
return await resp.json()
    def _is_likely_placeholder_data(self, prices_for_day):
-        """Check if prices for a day are likely placeholders.
-        Returns True if:
-        - There are no prices
-        - ALL prices have exactly the same value (suggesting API returned default values)
-        - There are too many consecutive hours with the same value (e.g., 10+ hours)
-        """
+        """Check if prices for a day are likely placeholders."""
        if not prices_for_day:
            return True
-        # Get all price values
        price_values = [p.get("price") for p in prices_for_day if p.get("price") is not None]
        if not price_values:
            return True
-        # If we have less than 20 prices for a day, it's incomplete data
        if len(price_values) < 20:
            _LOGGER.debug(f"Only {len(price_values)} prices for the day, likely incomplete data")
            return True
-        # Check if ALL values are identical
        unique_values = set(price_values)
        if len(unique_values) == 1:
            _LOGGER.debug(f"All {len(price_values)} prices have the same value ({price_values[0]}), likely placeholders")
            return True
-        # Additional check: if more than 90% of values are the same, it's suspicious
        most_common_value = max(set(price_values), key=price_values.count)
        count_most_common = price_values.count(most_common_value)
        if count_most_common / len(price_values) > 0.9:
            _LOGGER.debug(f"{count_most_common}/{len(price_values)} prices have value {most_common_value}, likely placeholders")
            return True
        return False
def _count_consecutive_same_values(self, prices):
"""Count maximum consecutive hours with the same price."""
if not prices:
return 0
# Sort by time to ensure consecutive checking
sorted_prices = sorted(prices, key=lambda x: x.get("start", ""))
max_consecutive = 1
current_consecutive = 1
last_value = None
for price in sorted_prices:
value = price.get("price")
if value is not None:
if value == last_value:
current_consecutive += 1
max_consecutive = max(max_consecutive, current_consecutive)
else:
current_consecutive = 1
last_value = value
return max_consecutive
    async def _check_and_publish_mqtt(self, new_data):
        """Check if we should publish to MQTT after update."""
        if not self.mqtt_48h_mode:
            return
        now = dt_util.now()
        tomorrow = (now + timedelta(days=1)).strftime("%Y-%m-%d")
-        # Check if tomorrow prices are available in new data
        all_prices = new_data.get("prices", [])
        tomorrow_prices = [p for p in all_prices if p["start"].startswith(tomorrow)]
-        # Check if tomorrow's data is valid (not placeholders)
        has_valid_tomorrow_prices = (
            len(tomorrow_prices) >= 20 and
            not self._is_likely_placeholder_data(tomorrow_prices)
        )
-        # Log what we found for debugging
-        if tomorrow_prices:
-            unique_values = set(p.get("price") for p in tomorrow_prices if p.get("price") is not None)
-            consecutive = self._count_consecutive_same_values(tomorrow_prices)
-            _LOGGER.debug(
-                f"Tomorrow has {len(tomorrow_prices)} prices, "
-                f"{len(unique_values)} unique values, "
-                f"max {consecutive} consecutive same values, "
-                f"valid: {has_valid_tomorrow_prices}"
-            )
-        # If we didn't have valid tomorrow prices before, but now we do, publish to MQTT immediately
        if not self._had_tomorrow_prices and has_valid_tomorrow_prices:
            _LOGGER.info("Valid tomorrow prices detected for %s, triggering immediate MQTT publish", self.price_type)
            # Find our config entry
            entry_id = None
            for entry in self.hass.config_entries.async_entries(DOMAIN):
-                if entry.data.get("api_key") == self.api_key:
+                if self.api_client.api_key == entry.data.get("api_key"):
                    entry_id = entry.entry_id
                    break
            if entry_id:
-                # Check if both coordinators are initialized before publishing
                buy_coordinator = self.hass.data[DOMAIN].get(f"{entry_id}_buy")
                sell_coordinator = self.hass.data[DOMAIN].get(f"{entry_id}_sell")
                if not buy_coordinator or not sell_coordinator:
                    _LOGGER.debug("Coordinators not yet initialized, skipping MQTT publish for now")
-                    # Don't update _had_tomorrow_prices so we'll try again on next update
                    return
-                # Get MQTT topics from config
                from .const import CONF_MQTT_TOPIC_BUY, CONF_MQTT_TOPIC_SELL, DEFAULT_MQTT_TOPIC_BUY, DEFAULT_MQTT_TOPIC_SELL
                entry = self.hass.config_entries.async_get_entry(entry_id)
                mqtt_topic_buy = entry.options.get(CONF_MQTT_TOPIC_BUY, DEFAULT_MQTT_TOPIC_BUY)
                mqtt_topic_sell = entry.options.get(CONF_MQTT_TOPIC_SELL, DEFAULT_MQTT_TOPIC_SELL)
-                # Wait a moment for both coordinators to update
                await asyncio.sleep(5)
-                # Publish to MQTT
                from .mqtt_common import publish_mqtt_prices
                success = await publish_mqtt_prices(self.hass, entry_id, mqtt_topic_buy, mqtt_topic_sell)
                if success:
                    _LOGGER.info("Successfully published 48h prices to MQTT after detecting valid tomorrow prices")
                else:
                    _LOGGER.error("Failed to publish to MQTT after detecting tomorrow prices")
-        # Update state for next check
        self._had_tomorrow_prices = has_valid_tomorrow_prices
    async def _async_update_data(self):
        """Fetch 48h of frames and extract current + today's list."""
        _LOGGER.debug("Starting %s price update (48h mode: %s)", self.price_type, self.mqtt_48h_mode)
# Store the previous data for fallback
previous_data = None
if hasattr(self, 'data') and self.data:
previous_data = self.data.copy() if self.data else None
if previous_data:
previous_data["is_cached"] = True
# Get today's start in LOCAL time, then convert to UTC
# This ensures we get the correct "today" for the user's timezone
today_local = dt_util.now().replace(hour=0, minute=0, second=0, microsecond=0)
window_end_local = today_local + timedelta(days=2)
# Convert to UTC for API request
start_utc = dt_util.as_utc(today_local)
end_utc = dt_util.as_utc(window_end_local)
start_str = start_utc.strftime("%Y-%m-%dT%H:%M:%SZ")
end_str = end_utc.strftime("%Y-%m-%dT%H:%M:%SZ")
endpoint_tpl = BUY_ENDPOINT if self.price_type == "buy" else SELL_ENDPOINT
endpoint = endpoint_tpl.format(start=start_str, end=end_str)
url = f"{API_URL}{endpoint}"
_LOGGER.debug("Requesting %s data from %s", self.price_type, url)
# Get current time in UTC for price comparison
now_utc = dt_util.utcnow()
try:
# Load translations
await self.retry_mechanism.load_translations(self.hass)
try:
self._translations = await async_get_translations(
self.hass, self.hass.config.language, DOMAIN, ["debug"]
)
except Exception as ex:
_LOGGER.warning("Failed to load translations for coordinator: %s", ex)
# Use retry mechanism
data = await self.retry_mechanism.execute(
self._make_api_request,
url,
price_type=self.price_type
)
frames = data.get("frames", [])
if not frames:
_LOGGER.warning("No frames returned for %s prices", self.price_type)
prices = []
current_price = None
for f in frames:
val = convert_price(f.get("price_gross"))
if val is None:
continue
start = dt_util.parse_datetime(f["start"])
end = dt_util.parse_datetime(f["end"])
if not start or not end:
_LOGGER.warning("Invalid datetime format in frames for %s", self.price_type)
continue
# Convert to local time for display (we need this for hourly data)
local_start = dt_util.as_local(start).strftime("%Y-%m-%dT%H:%M:%S")
prices.append({"start": local_start, "price": val})
# Check if this is the current price
if start <= now_utc < end:
current_price = val
# Filter today's prices
today_local = dt_util.now().strftime("%Y-%m-%d")
prices_today = [p for p in prices if p["start"].startswith(today_local)]
_LOGGER.debug("Successfully fetched %s price data: current=%s, today_prices=%d, total_prices=%d",
self.price_type, current_price, len(prices_today), len(prices))
new_data = {
"prices_today": prices_today,
"prices": prices,
"current": current_price,
"is_cached": False,
}
# Check if we should publish to MQTT
if self.mqtt_48h_mode:
await self._check_and_publish_mqtt(new_data)
return new_data
except aiohttp.ClientError as err:
error_msg = self._translations.get(
"debug.network_error",
"Network error fetching {price_type} data: {error}"
).format(price_type=self.price_type, error=str(err))
_LOGGER.error(error_msg)
if previous_data:
cache_msg = self._translations.get(
"debug.using_cache",
"Using cached data from previous update due to API failure"
)
_LOGGER.warning(cache_msg)
return previous_data
raise UpdateFailed(self._translations.get(
"debug.network_error_user",
"Network error: {error}"
).format(error=err))
except Exception as err:
error_msg = self._translations.get(
"debug.unexpected_error",
"Unexpected error fetching {price_type} data: {error}"
).format(price_type=self.price_type, error=str(err))
_LOGGER.exception(error_msg)
if previous_data:
cache_msg = self._translations.get(
"debug.using_cache",
"Using cached data from previous update due to API failure"
)
_LOGGER.warning(cache_msg)
return previous_data
raise UpdateFailed(self._translations.get(
"debug.unexpected_error_user",
"Error: {error}"
).format(error=err))
previous_data = None
if hasattr(self, 'data') and self.data:
previous_data = self.data.copy() if self.data else None
if previous_data:
previous_data["is_cached"] = True
today_local = dt_util.now().replace(hour=0, minute=0, second=0, microsecond=0)
window_end_local = today_local + timedelta(days=2)
start_utc = dt_util.as_utc(today_local)
end_utc = dt_util.as_utc(window_end_local)
start_str = start_utc.strftime("%Y-%m-%dT%H:%M:%SZ")
end_str = end_utc.strftime("%Y-%m-%dT%H:%M:%SZ")
endpoint_tpl = BUY_ENDPOINT if self.price_type == "buy" else SELL_ENDPOINT
endpoint = endpoint_tpl.format(start=start_str, end=end_str)
url = f"{API_URL}{endpoint}"
_LOGGER.debug("Requesting %s data from %s", self.price_type, url)
now_utc = dt_util.utcnow()
try:
# Load translations
try:
self._translations = await async_get_translations(
self.hass, self.hass.config.language, DOMAIN
)
except Exception as ex:
_LOGGER.warning("Failed to load translations for coordinator: %s", ex)
# Use shared API client
data = await self.api_client.fetch(
url,
max_retries=self.retry_attempts,
base_delay=self.retry_delay
)
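# Each frame is expected to carry a gross price plus ISO start/end timestamps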
frames = data.get("frames", [])
if not frames:
_LOGGER.warning("No frames returned for %s prices", self.price_type)
prices = []
current_price = None
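# Convert each frame's price, localize its start for display, and remember the frame spanning the current moment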
for f in frames:
val = convert_price(f.get("price_gross"))
if val is None:
continue
start = dt_util.parse_datetime(f["start"])
end = dt_util.parse_datetime(f["end"])
if not start or not end:
_LOGGER.warning("Invalid datetime format in frames for %s", self.price_type)
continue
local_start = dt_util.as_local(start).strftime("%Y-%m-%dT%H:%M:%S")
prices.append({"start": local_start, "price": val})
if start <= now_utc < end:
current_price = val
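# Keep only the entries whose local start date matches today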
today_local = dt_util.now().strftime("%Y-%m-%d")
prices_today = [p for p in prices if p["start"].startswith(today_local)]
_LOGGER.debug("Successfully fetched %s price data: current=%s, today_prices=%d, total_prices=%d",
self.price_type, current_price, len(prices_today), len(prices))
new_data = {
"prices_today": prices_today,
"prices": prices,
"current": current_price,
"is_cached": False,
}
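# In 48h mode, push the refreshed price table to the MQTT publisher (EVCC price feed)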
if self.mqtt_48h_mode:
await self._check_and_publish_mqtt(new_data)
return new_data
except UpdateFailed:
# UpdateFailed already has proper error message from API client
if previous_data:
_LOGGER.warning("Using cached data from previous update due to API failure")
return previous_data
raise
except Exception as err:
error_msg = self._translations.get(
"debug.unexpected_error",
"Unexpected error fetching {price_type} data: {error}"
).format(price_type=self.price_type, error=str(err))
_LOGGER.exception(error_msg)
if previous_data:
_LOGGER.warning("Using cached data from previous update due to API failure")
return previous_data
raise UpdateFailed(self._translations.get(
"debug.unexpected_error_user",
"Error: {error}"
).format(error=err))
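# Shape of the payload this method returns (illustrative values, not real prices):
# {
#     "prices_today": [{"start": "2025-09-30T00:00:00", "price": 0.62}, ...],
#     "prices": [...],          # up to 48 hourly entries (today + tomorrow)
#     "current": 0.71,          # price of the frame covering "now", if any
#     "is_cached": False,       # True only when a cached copy is served after a failure
# }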
def schedule_hourly_update(self):
"""Schedule next refresh 1 min after each full hour."""
if self._unsub_hourly:
self._unsub_hourly()
self._unsub_hourly = None
now = dt_util.now()
# Keep original timing: 1 minute past the hour
next_run = (now.replace(minute=0, second=0, microsecond=0)
+ timedelta(hours=1, minutes=1))
_LOGGER.debug("Scheduling next hourly update for %s at %s",
self.price_type, next_run.strftime("%Y-%m-%d %H:%M:%S"))
self._unsub_hourly = async_track_point_in_time(
self.hass, self._handle_hourly_update, dt_util.as_utc(next_run)
)
async def _check_tomorrow_prices_after_15(self):
"""Check if tomorrow prices are available and publish to MQTT if needed.
This runs after 15:00 until midnight in 48h mode.
"""
if not self.mqtt_48h_mode:
return
now = dt_util.now()
# Only run between 15:00 and 23:59
if now.hour < 15:
return
tomorrow = (now + timedelta(days=1)).strftime("%Y-%m-%d")
# Check if we already have valid tomorrow data
if self.data and self.data.get("prices"):
tomorrow_prices = [p for p in self.data.get("prices", []) if p.get("start", "").startswith(tomorrow)]
# Check if tomorrow's data is valid (not placeholders)
has_valid_tomorrow = (
len(tomorrow_prices) >= 20 and
not self._is_likely_placeholder_data(tomorrow_prices)
)
if has_valid_tomorrow:
# We already have valid data, nothing to do
return
else:
_LOGGER.info("Missing or invalid tomorrow prices at %s, will refresh data", now.strftime("%H:%M"))
# If we don't have valid tomorrow data, force a refresh
# The refresh will automatically trigger MQTT publish via _check_and_publish_mqtt
await self.async_request_refresh()
async def _handle_hourly_update(self, _):
"""Handle hourly update."""
_LOGGER.debug("Running scheduled hourly update for %s", self.price_type)
# First do the regular update
await self.async_request_refresh()
# Then check if we need tomorrow prices (only in 48h mode after 15:00)
await self._check_tomorrow_prices_after_15()
# Schedule next hourly update
self.schedule_hourly_update()
def schedule_midnight_update(self):
@ -590,14 +305,13 @@ class PstrykDataUpdateCoordinator(DataUpdateCoordinator):
if self._unsub_midnight:
self._unsub_midnight()
self._unsub_midnight = None
now = dt_util.now()
# Keep original timing: 1 minute past midnight
next_mid = (now + timedelta(days=1)).replace(hour=0, minute=1, second=0, microsecond=0)
_LOGGER.debug("Scheduling next midnight update for %s at %s",
self.price_type, next_mid.strftime("%Y-%m-%d %H:%M:%S"))
self._unsub_midnight = async_track_point_in_time(
self.hass, self._handle_midnight_update, dt_util.as_utc(next_mid)
)
@ -607,26 +321,22 @@ class PstrykDataUpdateCoordinator(DataUpdateCoordinator):
_LOGGER.debug("Running scheduled midnight update for %s", self.price_type) _LOGGER.debug("Running scheduled midnight update for %s", self.price_type)
await self.async_request_refresh() await self.async_request_refresh()
self.schedule_midnight_update() self.schedule_midnight_update()
def schedule_afternoon_update(self): def schedule_afternoon_update(self):
"""Schedule frequent updates between 14:00-15:00 for 48h mode.""" """Schedule frequent updates between 14:00-15:00 for 48h mode."""
if self._unsub_afternoon: if self._unsub_afternoon:
self._unsub_afternoon() self._unsub_afternoon()
self._unsub_afternoon = None self._unsub_afternoon = None
if not self.mqtt_48h_mode: if not self.mqtt_48h_mode:
_LOGGER.debug("Afternoon updates not scheduled for %s - 48h mode is disabled", self.price_type) _LOGGER.debug("Afternoon updates not scheduled for %s - 48h mode is disabled", self.price_type)
return return
now = dt_util.now() now = dt_util.now()
# Determine next check time
# If we're before 14:00, start at 14:00
if now.hour < 14: if now.hour < 14:
next_check = now.replace(hour=14, minute=0, second=0, microsecond=0) next_check = now.replace(hour=14, minute=0, second=0, microsecond=0)
# If we're between 14:00-15:00, find next 15-minute slot
elif now.hour == 14: elif now.hour == 14:
# Calculate minutes to next 15-minute mark
current_minutes = now.minute current_minutes = now.minute
if current_minutes < 15: if current_minutes < 15:
next_minutes = 15 next_minutes = 15
@ -635,24 +345,20 @@ class PstrykDataUpdateCoordinator(DataUpdateCoordinator):
elif current_minutes < 45:
next_minutes = 45
else:
# Move to 15:00
next_check = now.replace(hour=15, minute=0, second=0, microsecond=0)
next_minutes = None
if next_minutes is not None:
next_check = now.replace(minute=next_minutes, second=0, microsecond=0)
# If we're at 15:00 or later, schedule for tomorrow 14:00
else:
next_check = (now + timedelta(days=1)).replace(hour=14, minute=0, second=0, microsecond=0)
# Make sure next_check is in the future
if next_check <= now:
# This shouldn't happen, but just in case
next_check = next_check + timedelta(minutes=15)
_LOGGER.info("Scheduling afternoon update check for %s at %s (48h mode, checking every 15min between 14:00-15:00)",
self.price_type, next_check.strftime("%Y-%m-%d %H:%M:%S"))
self._unsub_afternoon = async_track_point_in_time(
self.hass, self._handle_afternoon_update, dt_util.as_utc(next_check)
)
@ -660,28 +366,13 @@ class PstrykDataUpdateCoordinator(DataUpdateCoordinator):
async def _handle_afternoon_update(self, _):
"""Handle afternoon update for 48h mode."""
now = dt_util.now()
_LOGGER.debug("Running scheduled afternoon update check for %s at %s",
self.price_type, now.strftime("%H:%M"))
# Perform the update
await self.async_request_refresh()
# Schedule next check only if we're still in the 14:00-15:00 window
if now.hour < 15:
self.schedule_afternoon_update()
else:
# We've finished the afternoon window, schedule for tomorrow
_LOGGER.info("Finished afternoon update window for %s, next cycle tomorrow", self.price_type)
self.schedule_afternoon_update()
def unschedule_all_updates(self):
"""Unschedule all updates."""
if self._unsub_hourly:
self._unsub_hourly()
self._unsub_hourly = None
if self._unsub_midnight:
self._unsub_midnight()
self._unsub_midnight = None
if self._unsub_afternoon:
self._unsub_afternoon()
self._unsub_afternoon = None