# Guide: Connecting to Synology PostgreSQL from Local PC

This guide explains how to connect to your Synology's TimescaleDB from your local PC to work with historical 1m candle data.

## Prerequisites

### 1. Install PostgreSQL Client on Your PC

**Windows:**
```bash
# Download from: https://www.postgresql.org/download/windows/
# Or use chocolatey:
choco install postgresql
```

**Mac:**
```bash
brew install postgresql
# Or download Postgres.app from postgresapp.com
```

**Linux:**
```bash
# Ubuntu/Debian
sudo apt-get install postgresql-client

# Or use Docker:
docker run -it --rm postgres:15 psql --version
```

### 2. Install Python Dependencies

```bash
pip install asyncpg pandas numpy
# Or use requirements.txt from the project
pip install -r requirements.txt
```
## Step 1: Configure Synology for Remote Access

### Open PostgreSQL Port

1. **SSH into your Synology:**
   ```bash
   ssh admin@YOUR_SYNOLOGY_IP
   ```

2. **Edit PostgreSQL configuration:**
   ```bash
   # Find postgresql.conf (usually in /var/lib/postgresql/data/)
   sudo vim /var/lib/postgresql/data/postgresql.conf

   # Change:
   # listen_addresses = 'localhost'
   # To:
   listen_addresses = '*'
   ```

3. **Edit pg_hba.conf to allow remote connections:**
   ```bash
   sudo vim /var/lib/postgresql/data/pg_hba.conf

   # Add at the end:
   host all all 0.0.0.0/0 md5
   # Or for a specific IP:
   host all all YOUR_PC_IP/32 md5
   ```

4. **Restart PostgreSQL:**
   ```bash
   sudo systemctl restart postgresql
   # Or if using Docker:
   cd ~/btc_bot/docker && docker-compose restart timescaledb
   ```

### Configure Synology Firewall

1. Open **Control Panel** → **Security** → **Firewall**
2. Click **Edit Rules**
3. Create a new rule:
   - **Ports:** Custom → TCP → 5433 (or your PostgreSQL port)
   - **Source IP:** Your PC's IP address (or allow all)
   - **Action:** Allow
4. Apply the rule
## Step 2: Test Connection

### Using psql CLI

```bash
# Replace with your Synology IP
export DB_HOST=192.168.1.100  # Your Synology IP
export DB_PORT=5433
export DB_NAME=btc_data
export DB_USER=btc_bot
export DB_PASSWORD=your_password
export PGPASSWORD=$DB_PASSWORD  # psql reads PGPASSWORD, not DB_PASSWORD

# Test connection
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -c "SELECT version();"
```
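If you prefer not to keep the password in your shell environment, psql can also read it from a password file. A minimal sketch, using the example credentials from above and a temp file via `PGPASSFILE` (the default location is `~/.pgpass`):

```shell
# Format is host:port:database:user:password, one entry per line.
PGPASSFILE="$(mktemp)"
printf '192.168.1.100:5433:btc_data:btc_bot:your_password\n' > "$PGPASSFILE"
chmod 600 "$PGPASSFILE"   # psql ignores the file unless permissions are 0600
export PGPASSFILE
```

With the file in place, `psql -h $DB_HOST ...` connects without prompting for a password.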
### Using Python

Create `test_connection.py`:

```python
import asyncio
import asyncpg

async def test():
    conn = await asyncpg.connect(
        host='192.168.1.100',  # Your Synology IP
        port=5433,
        database='btc_data',
        user='btc_bot',
        password='your_password'
    )

    version = await conn.fetchval('SELECT version()')
    print(f"Connected! PostgreSQL version: {version}")

    # Test candle count
    count = await conn.fetchval(
        "SELECT COUNT(*) FROM candles WHERE interval = '1m'"
    )
    print(f"Total 1m candles: {count:,}")

    await conn.close()

asyncio.run(test())
```

Run it:
```bash
python test_connection.py
```
## Step 3: Export Historical 1m Data

### Option A: Using psql (Quick Export)

```bash
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -c "
COPY (
    SELECT time, symbol, interval, open, high, low, close, volume
    FROM candles
    WHERE symbol = 'BTC'
      AND interval = '1m'
      AND time >= '2025-01-01'
    ORDER BY time
) TO STDOUT WITH CSV HEADER;
" > btc_1m_candles.csv
```

### Option B: Using Python (With Progress Bar)

Create `export_candles.py`:

```python
import asyncio
import asyncpg
import csv
from datetime import datetime, timezone
from tqdm import tqdm

async def export_candles(
    host: str,
    port: int,
    database: str,
    user: str,
    password: str,
    symbol: str = 'BTC',
    interval: str = '1m',
    start_date: str = '2025-01-01',
    output_file: str = 'candles.csv'
):
    """Export candles to CSV with progress bar"""

    conn = await asyncpg.connect(
        host=host, port=port, database=database,
        user=user, password=password
    )

    try:
        start = datetime.fromisoformat(start_date).replace(tzinfo=timezone.utc)

        # Get total count
        total = await conn.fetchval("""
            SELECT COUNT(*) FROM candles
            WHERE symbol = $1 AND interval = $2 AND time >= $3
        """, symbol, interval, start)

        print(f"Exporting {total:,} candles...")

        # Export in batches via a server-side cursor
        with open(output_file, 'w', newline='') as f:
            writer = csv.writer(f)
            writer.writerow(['time', 'symbol', 'interval', 'open', 'high', 'low', 'close', 'volume'])

            async with conn.transaction():
                cursor = await conn.cursor(
                    "SELECT time, symbol, interval, open, high, low, close, volume "
                    "FROM candles WHERE symbol = $1 AND interval = $2 AND time >= $3 "
                    "ORDER BY time",
                    symbol, interval, start
                )

                with tqdm(total=total, unit='candles') as pbar:
                    while True:
                        rows = await cursor.fetch(1000)
                        if not rows:
                            break
                        for row in rows:
                            writer.writerow(row)
                        pbar.update(len(rows))

        print(f"✓ Exported to {output_file}")
        print(f"  Total rows: {total:,}")

    finally:
        await conn.close()

if __name__ == "__main__":
    asyncio.run(export_candles(
        host='192.168.1.100',  # Change to your Synology IP
        port=5433,
        database='btc_data',
        user='btc_bot',
        password='your_password',
        symbol='BTC',
        interval='1m',
        start_date='2025-01-01',
        output_file='btc_1m_candles.csv'
    ))
```

Run:
```bash
pip install tqdm
python export_candles.py
```
## Step 4: Calculate Indicators on PC

### Using pandas-ta (Recommended)

Create `calculate_indicators.py`:

```python
import pandas as pd
import pandas_ta as ta

def calculate_indicators(input_file: str, output_file: str):
    """Calculate technical indicators from candle data"""

    # Read candles
    print(f"Reading {input_file}...")
    df = pd.read_csv(input_file)
    df['time'] = pd.to_datetime(df['time'])
    df = df.sort_values('time')

    print(f"Loaded {len(df):,} candles")

    # Calculate indicators
    print("Calculating indicators...")

    # Moving Averages
    df['ma44'] = ta.sma(df['close'], length=44)
    df['ma125'] = ta.sma(df['close'], length=125)
    df['ma200'] = ta.sma(df['close'], length=200)

    # RSI
    df['rsi'] = ta.rsi(df['close'], length=14)

    # Bollinger Bands
    bb = ta.bbands(df['close'], length=20, std=2)
    df['bb_upper'] = bb['BBU_20_2.0']
    df['bb_middle'] = bb['BBM_20_2.0']
    df['bb_lower'] = bb['BBL_20_2.0']

    # MACD
    macd = ta.macd(df['close'], fast=12, slow=26, signal=9)
    df['macd'] = macd['MACD_12_26_9']
    df['macd_signal'] = macd['MACDs_12_26_9']
    df['macd_histogram'] = macd['MACDh_12_26_9']

    # Save
    print(f"Saving to {output_file}...")
    df.to_csv(output_file, index=False)

    print(f"✓ Calculated {len(df.columns) - 8} indicators")  # 8 original candle columns
    print(f"  Output: {output_file}")

    return df

if __name__ == "__main__":
    df = calculate_indicators(
        input_file='btc_1m_candles.csv',
        output_file='btc_1m_with_indicators.csv'
    )

    # Show sample
    print("\nSample data:")
    print(df[['time', 'close', 'ma44', 'ma125', 'rsi']].tail(10))
```

Install dependencies:
```bash
pip install pandas pandas-ta
```

Run:
```bash
python calculate_indicators.py
```
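If pandas-ta isn't available (it can lag behind new pandas releases), the moving averages and RSI above can be reproduced with plain pandas. A sketch, using Wilder's smoothing for RSI to match the usual definition; `sma` and `rsi` here are stand-in helpers, not project code:

```python
import pandas as pd

def sma(close: pd.Series, length: int) -> pd.Series:
    """Simple moving average over `length` periods."""
    return close.rolling(length).mean()

def rsi(close: pd.Series, length: int = 14) -> pd.Series:
    """RSI with Wilder's smoothing (an EMA with alpha = 1/length)."""
    delta = close.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / length, min_periods=length).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / length, min_periods=length).mean()
    return 100 - 100 / (1 + gain / loss)
```

`df['ma44'] = sma(df['close'], 44)` then drops in wherever `ta.sma` was used.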
### Performance Tips

- **Chunk processing** for very large files (> 1GB):
  ```python
  # Process 100k rows at a time
  chunksize = 100000
  for chunk in pd.read_csv(input_file, chunksize=chunksize):
      # Calculate indicators for the chunk here
      pass
  ```

- **Use multiple cores:**
  ```python
  from multiprocessing import Pool
  # Parallelize indicator calculation across symbols or date ranges
  ```
## Step 5: Import Indicators Back to Synology

### Option A: Direct SQL Insert (Fastest)

Create `import_indicators.py`:

```python
import asyncio
import asyncpg
import pandas as pd
from tqdm import tqdm

async def import_indicators(
    host: str,
    port: int,
    database: str,
    user: str,
    password: str,
    input_file: str,
    batch_size: int = 1000
):
    """Import calculated indicators to Synology database"""

    # Read calculated indicators
    print(f"Reading {input_file}...")
    df = pd.read_csv(input_file)
    df['time'] = pd.to_datetime(df['time'], utc=True)  # asyncpg expects datetime objects, not strings
    print(f"Loaded {len(df):,} rows with {len(df.columns)} columns")

    # Connect to database
    conn = await asyncpg.connect(
        host=host, port=port, database=database,
        user=user, password=password
    )

    try:
        # Get indicator columns (exclude candle data)
        indicator_cols = [c for c in df.columns if c not in
                          ['time', 'symbol', 'interval', 'open', 'high', 'low', 'close', 'volume']]

        print(f"Importing {len(indicator_cols)} indicators: {indicator_cols}")

        # Prepare data
        symbol = df['symbol'].iloc[0]
        interval = df['interval'].iloc[0]

        total_inserted = 0

        with tqdm(total=len(df) * len(indicator_cols), unit='indicators') as pbar:
            for col in indicator_cols:
                # Prepare batch
                values = []
                for _, row in df.iterrows():
                    if pd.notna(row[col]):  # Skip NaN values
                        values.append((
                            row['time'],
                            symbol,
                            interval,
                            col,  # indicator_name
                            float(row[col]),
                            '{}'  # parameters (JSONB: asyncpg expects a JSON string by default)
                        ))

                # Insert in batches
                for i in range(0, len(values), batch_size):
                    batch = values[i:i + batch_size]
                    await conn.executemany(
                        """
                        INSERT INTO indicators (time, symbol, interval, indicator_name, value, parameters)
                        VALUES ($1, $2, $3, $4, $5, $6)
                        ON CONFLICT (time, symbol, interval, indicator_name) DO UPDATE SET
                            value = EXCLUDED.value,
                            updated_at = NOW()
                        """,
                        batch
                    )
                    total_inserted += len(batch)
                    pbar.update(len(batch))

        print(f"\n✓ Imported {total_inserted:,} indicator values")

    finally:
        await conn.close()

if __name__ == "__main__":
    asyncio.run(import_indicators(
        host='192.168.1.100',
        port=5433,
        database='btc_data',
        user='btc_bot',
        password='your_password',
        input_file='btc_1m_with_indicators.csv',
        batch_size=1000
    ))
```

Run:
```bash
python import_indicators.py
```
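For initial loads where no conflicts need resolving, asyncpg's `copy_records_to_table` uses the binary COPY protocol and is typically much faster than `executemany`. A sketch assuming the same `indicators` table; note COPY has no `ON CONFLICT` handling, so it suits an empty table or a staging table merged afterwards (`records_from_df` and `bulk_load` are illustrative helpers):

```python
import pandas as pd

def records_from_df(df: pd.DataFrame, indicator_cols: list) -> list:
    """Build (time, symbol, interval, indicator_name, value) tuples,
    skipping NaN values, ready for bulk loading."""
    records = []
    for col in indicator_cols:
        sub = df[df[col].notna()]
        records.extend(zip(sub['time'], sub['symbol'], sub['interval'],
                           [col] * len(sub), sub[col].astype(float)))
    return records

async def bulk_load(conn, records):
    # conn is an asyncpg.Connection; COPY bypasses per-row INSERT overhead
    await conn.copy_records_to_table(
        'indicators',
        records=records,
        columns=['time', 'symbol', 'interval', 'indicator_name', 'value'],
    )
```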
### Option B: Using psql COPY (For Large Files)

```bash
# Convert to long format for the COPY command
python -c "
import pandas as pd
df = pd.read_csv('btc_1m_with_indicators.csv')
# Transform to long format for the indicators table
indicators = []
for col in ['ma44', 'ma125', 'rsi', 'bb_upper', 'bb_lower']:
    temp = df[['time', 'symbol', 'interval', col]].copy()
    temp['indicator_name'] = col
    temp = temp.rename(columns={col: 'value'})
    indicators.append(temp)
result = pd.concat(indicators)
result = result[result['value'].notna()]
result.to_csv('indicators_for_import.csv', index=False)
"

# Upload the CSV to the Synology first (COPY FROM reads a server-side path), then import:
psql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME -c "
COPY indicators (time, symbol, interval, indicator_name, value)
FROM '/path/to/indicators_for_import.csv'
DELIMITER ',' CSV HEADER;
"
```
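The wide-to-long loop above can also be written with `pandas.melt`, which does the same reshaping in one call. A sketch with the same column names (`to_long_format` is an illustrative helper):

```python
import pandas as pd

def to_long_format(df: pd.DataFrame, indicator_cols: list) -> pd.DataFrame:
    """Reshape wide indicator columns into (time, symbol, interval,
    indicator_name, value) rows, dropping NaN values."""
    long_df = df.melt(
        id_vars=['time', 'symbol', 'interval'],
        value_vars=indicator_cols,
        var_name='indicator_name',
        value_name='value',
    )
    return long_df[long_df['value'].notna()]
```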
## Complete Workflow Script

Create `sync_indicators.py` for a one-command workflow:

```python
#!/usr/bin/env python3
"""
Complete workflow: Export candles → Calculate indicators → Import to Synology
"""

import asyncio
import asyncpg
import pandas as pd
import pandas_ta as ta
from datetime import datetime, timezone
from tqdm import tqdm

class IndicatorSync:
    def __init__(self, host: str, port: int, database: str, user: str, password: str):
        self.db_config = {
            'host': host, 'port': port, 'database': database,
            'user': user, 'password': password
        }

    async def export_and_calculate(
        self,
        symbol: str = 'BTC',
        interval: str = '1m',
        start_date: str = '2025-01-01',
        indicators_config: dict = None
    ):
        """Main workflow"""

        print("=" * 70)
        print(f"INDICATOR SYNC: {symbol}/{interval}")
        print(f"Period: {start_date} to now")
        print("=" * 70)

        # Connect to database
        conn = await asyncpg.connect(**self.db_config)

        try:
            # Step 1: Export candles
            print("\n📥 Step 1: Exporting candles...")
            candles = await self._export_candles(conn, symbol, interval, start_date)
            print(f"   Exported {len(candles):,} candles")

            # Step 2: Calculate indicators
            print("\n⚙️ Step 2: Calculating indicators...")
            indicators_df = self._calculate_indicators(candles, indicators_config)
            print(f"   Calculated {len(indicators_df):,} indicator values")

            # Step 3: Import indicators
            print("\n📤 Step 3: Importing to database...")
            await self._import_indicators(conn, indicators_df)
            print("   Import complete!")

            print("\n" + "=" * 70)
            print("✅ SYNC COMPLETE")
            print("=" * 70)

        finally:
            await conn.close()

    async def _export_candles(self, conn, symbol, interval, start_date):
        """Export candles from database"""
        rows = await conn.fetch(
            "SELECT time, symbol, interval, open, high, low, close, volume "
            "FROM candles WHERE symbol = $1 AND interval = $2 AND time >= $3 "
            "ORDER BY time",
            symbol, interval, datetime.fromisoformat(start_date).replace(tzinfo=timezone.utc)
        )
        return pd.DataFrame(rows, columns=['time', 'symbol', 'interval', 'open', 'high', 'low', 'close', 'volume'])

    def _calculate_indicators(self, candles_df, config=None):
        """Calculate technical indicators"""
        df = candles_df.copy()

        # Default indicators
        config = config or {
            'ma44': lambda d: ta.sma(d['close'], length=44),
            'ma125': lambda d: ta.sma(d['close'], length=125),
            'rsi': lambda d: ta.rsi(d['close'], length=14),
        }

        # Calculate each indicator
        for name, func in config.items():
            df[name] = func(df)

        # Transform to long format
        indicators = []
        indicator_cols = list(config.keys())

        for col in indicator_cols:
            temp = df[['time', 'symbol', 'interval', col]].copy()
            temp['indicator_name'] = col
            temp = temp.rename(columns={col: 'value'})
            temp = temp[temp['value'].notna()]
            indicators.append(temp)

        return pd.concat(indicators, ignore_index=True)

    async def _import_indicators(self, conn, indicators_df):
        """Import indicators to database"""
        # Convert to a list of tuples
        values = [
            (row['time'], row['symbol'], row['interval'], row['indicator_name'],
             float(row['value']), '{}')  # parameters (JSONB: asyncpg expects a JSON string by default)
            for _, row in indicators_df.iterrows()
        ]

        # Insert in batches
        batch_size = 1000
        for i in tqdm(range(0, len(values), batch_size), desc="Importing"):
            batch = values[i:i + batch_size]
            await conn.executemany(
                """
                INSERT INTO indicators (time, symbol, interval, indicator_name, value, parameters)
                VALUES ($1, $2, $3, $4, $5, $6)
                ON CONFLICT (time, symbol, interval, indicator_name) DO UPDATE SET
                    value = EXCLUDED.value,
                    updated_at = NOW()
                """,
                batch
            )

# Usage
if __name__ == "__main__":
    sync = IndicatorSync(
        host='192.168.1.100',  # Your Synology IP
        port=5433,
        database='btc_data',
        user='btc_bot',
        password='your_password'
    )

    asyncio.run(sync.export_and_calculate(
        symbol='BTC',
        interval='1m',
        start_date='2025-01-01'
    ))
```

Run the complete workflow:
```bash
python sync_indicators.py
```
## Troubleshooting

### Connection Refused
```
Error: connection refused
```
- Check if PostgreSQL is running: `docker ps | grep timescale`
- Verify the port is open: `telnet SYNOLOGY_IP 5433`
- Check firewall rules on Synology
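`telnet` is often missing on modern systems; a few lines of Python run the same reachability check (`port_open` is a hypothetical helper, not part of the project):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

`port_open('192.168.1.100', 5433)` should return True once the firewall rule is in place.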
### Permission Denied
```
Error: password authentication failed
```
- Verify the password is correct
- Check that pg_hba.conf has the proper entries
- Restart PostgreSQL after config changes

### Slow Performance
- Use batch inserts (1000 rows at a time)
- Process large datasets in chunks
- Consider the COPY command for very large imports
- Close cursors after use

### Memory Issues
- Don't load the entire table into memory
- Use server-side cursors
- Process in smaller date ranges
- Use `chunksize` when reading CSV
## Security Notes

⚠️ **Important:**
- Use a strong password for the database user
- Limit pg_hba.conf to specific IPs when possible
- Use a VPN if accessing over the internet
- Consider an SSL connection for remote access
- Don't commit passwords to git

## Next Steps

Once indicators are calculated and imported:

1. **Backtests will be fast** - the server just reads pre-calculated values
2. **The dashboard will load quickly** - no on-the-fly calculation needed
3. **You can add more indicators** - just re-run the sync with new calculations
## Alternative: SSH Tunnel (More Secure)

Instead of opening the PostgreSQL port, use an SSH tunnel:

```bash
# On your PC (-N keeps the tunnel open without an interactive shell)
ssh -N -L 5433:localhost:5433 admin@SYNOLOGY_IP

# In another terminal, connect to localhost:5433 (tunneled to the Synology)
export DB_HOST=localhost
export DB_PORT=5433
python sync_indicators.py
```

This encrypts all traffic and doesn't require opening the PostgreSQL port.
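If you tunnel often, the forwarding can live in `~/.ssh/config` instead of the command line (the `synology-db` host alias is illustrative):

```
# ~/.ssh/config
Host synology-db
    HostName SYNOLOGY_IP
    User admin
    LocalForward 5433 localhost:5433
```

After that, `ssh -N synology-db` starts the same tunnel.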
---

**Questions or issues?** Check the logs and verify each step works before proceeding to the next.