docs: add DOCKER_GUIDE.md and fix .env parsing; chore: update docker and script configurations

BTC Bot
2026-03-05 08:20:18 +01:00
parent e41afcf005
commit 30aeda0901
25 changed files with 1806 additions and 0 deletions

DOCKER_GUIDE.md Normal file

@@ -0,0 +1,89 @@
# Docker Management & Troubleshooting Guide
This guide provides the necessary commands to build, manage, and troubleshoot the BTC Bot Docker environment.
## 1. Manual Build Commands
Always execute these commands from the **project root** directory.
```bash
# Build the Data Collector
docker build --network host -f docker/Dockerfile.collector -t btc_collector .
# Build the API Server
docker build --network host -f docker/Dockerfile.api -t btc_api .
# Build the Bot (Ensure the tag matches docker-compose.yml)
docker build --no-cache --network host -f docker/Dockerfile.bot -t btc_ping_pong_bot .
```
---
## 2. Managing Containers
Run these commands from the **docker/** directory (`~/btc_bot/docker`).
### Restart All Services
```bash
# Full reset: Stop, remove, and recreate all containers
docker-compose down
docker-compose up -d
```
### Partial Restart (Specific Service)
```bash
# Rebuild and restart only the bot (ignores dependencies like DB)
docker-compose up -d --no-deps ping_pong_bot
```
### Stop/Start Services
```bash
docker-compose stop <service_name> # Temporarily stop
docker-compose start <service_name> # Start a stopped container
```
---
## 3. Checking Logs
Use these commands to diagnose why a service might be crashing or restarting.
```bash
# Follow live logs for the Bot (last 100 lines)
docker-compose logs -f --tail 100 ping_pong_bot
# Follow live logs for the Collector
docker-compose logs -f btc_collector
# Follow live logs for the API Server
docker-compose logs -f api_server
# View logs for ALL services combined
docker-compose logs -f
```
---
## 4. Troubleshooting Checklist
| Symptom | Common Cause & Solution |
| :--- | :--- |
| **`.env` Parsing Warning** | Check for `//` comments (use `#` instead) or hidden characters at the start of the file. |
| **Container "Restarting" Loop** | Check logs! Usually missing `API_KEY`/`API_SECRET` or DB connection failure. |
| **"No containers to restart"** | Use `docker-compose up -d` first. `restart` only works for existing containers. |
| **Database Connection Refused** | Ensure `DB_PORT=5433` is used for `host` network mode. Check if port is open with `netstat`. |
| **Code Changes Not Applying** | Rebuild the image (`--no-cache`) if you changed `requirements.txt` or the `Dockerfile`. |
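To confirm the first symptom quickly, a small helper like this can scan `.env` for `//` comments and hidden leading bytes. The `check_env` function is illustrative, not part of the repo:

```shell
# Illustrative helper (not part of the repo): scan an env file for
# '//' comments and for hidden leading bytes such as a UTF-8 BOM.
check_env() {
  local f=${1:-.env}
  [ -f "$f" ] || { echo "no $f found"; return 0; }
  grep -n '//' "$f" && echo "Replace '//' comments with '#'"
  head -c 3 "$f" | od -An -tx1   # 'ef bb bf' here means a UTF-8 BOM
}
check_env
```

A clean file prints only the hex of its first three bytes; any `//` lines are listed with their line numbers.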
---
## 5. Useful Debugging Commands
```bash
# Check status of all containers
docker-compose ps
# List all local docker images
docker images
# Check if the database port is listening on the host
netstat -tulnp | grep 5433
# Access the shell inside a running container
docker exec -it btc_ping_pong_bot /bin/bash
```

@@ -0,0 +1,33 @@
# Ping-Pong Strategy Configuration
# Trading Pair & Timeframe
symbol: BTCUSDT
interval: "1" # Minutes (1, 3, 5, 15, 30, 60, 120, 240, 360, 720, D, W, M)
# Indicator Settings
rsi:
  period: 14
  overbought: 70
  oversold: 30
  enabled_for_open: true
  enabled_for_close: false
hurst:
  period: 30
  multiplier: 1.8
  enabled_for_open: true
  enabled_for_close: false
# Strategy Settings
direction: "long" # "long" or "short"
capital: 1000.0 # Initial capital for calculations (informational)
exchange_leverage: 1.0 # Multiplier for each 'ping' size
max_effective_leverage: 5.0 # Cap on total position size relative to equity
pos_size_margin: 10.0 # Margin per 'ping' (USD)
take_profit_pct: 1.5 # Target profit percentage per exit (1.5 = 1.5%)
partial_exit_pct: 0.15 # 15% of position closed on each TP hit
min_position_value_usd: 15.0 # Minimum remaining value to keep position open
# Execution Settings
loop_interval_seconds: 5 # How often to check for new data
debug_mode: false
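The sizing knobs above interact: each "ping" adds roughly `pos_size_margin × exchange_leverage` USD of position, while `max_effective_leverage` caps total position value relative to equity. A stdlib-only Python sketch of that arithmetic — the helper and its exact semantics are illustrative assumptions, not taken from the bot's source:

```python
def next_ping_value(equity: float, position_value: float,
                    pos_size_margin: float = 10.0,
                    exchange_leverage: float = 1.0,
                    max_effective_leverage: float = 5.0) -> float:
    """USD value of the next 'ping', clipped so total position value
    never exceeds max_effective_leverage * equity (illustrative)."""
    ping = pos_size_margin * exchange_leverage
    headroom = max_effective_leverage * equity - position_value
    return max(0.0, min(ping, headroom))
```

With the defaults above, 1000 USD of equity allows pings until the position reaches 5000 USD of value; e.g. `next_ping_value(1000, 4995)` returns `5.0`, shrinking the last ping to fit under the cap.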

docker/Dockerfile.api Normal file

@@ -0,0 +1,23 @@
FROM python:3.11-slim
WORKDIR /app
# Copy requirements first (for better caching)
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY src/ ./src/
COPY config/ ./config/
COPY scripts/ ./scripts/
# Set Python path
ENV PYTHONPATH=/app
# Expose API port
EXPOSE 8000
# Run the API server
CMD ["uvicorn", "src.api.server:app", "--host", "0.0.0.0", "--port", "8000"]

docker/Dockerfile.bot Normal file

@@ -0,0 +1,20 @@
FROM python:3.11-slim
WORKDIR /app
# Copy requirements first
COPY requirements_bot.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements_bot.txt
# Copy application code
COPY src/ ./src/
COPY config/ ./config/
COPY .env .
# Set Python path
ENV PYTHONPATH=/app
# Run the bot
CMD ["python", "src/strategies/ping_pong_bot.py"]

@@ -0,0 +1,21 @@
FROM python:3.11-slim
WORKDIR /app
# Copy requirements first (for better caching)
COPY requirements.txt .
# Install Python dependencies
# --no-cache-dir reduces image size
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY src/ ./src/
COPY config/ ./config/
COPY scripts/ ./scripts/
# Set Python path
ENV PYTHONPATH=/app
# Run the collector
CMD ["python", "-m", "src.data_collector.main"]

@@ -0,0 +1 @@
timescale/timescaledb:2.11.2-pg15

docker/docker-compose.yml Normal file

@@ -0,0 +1,110 @@
# Update docker-compose.yml to mount source code as volume
version: '3.8'
services:
  timescaledb:
    image: timescale/timescaledb:2.11.2-pg15
    container_name: btc_timescale
    environment:
      POSTGRES_USER: btc_bot
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: btc_data
      TZ: Europe/Warsaw
    volumes:
      - /volume1/btc_bot/data:/var/lib/postgresql/data
      - /volume1/btc_bot/backups:/backups
      - ./timescaledb.conf:/etc/postgresql/postgresql.conf
      - ./init-scripts:/docker-entrypoint-initdb.d
    ports:
      - "5433:5432"
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 1.5G
        reservations:
          memory: 512M
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U btc_bot -d btc_data"]
      interval: 10s
      timeout: 5s
      retries: 5

  data_collector:
    build:
      context: ..
      dockerfile: docker/Dockerfile.collector
    image: btc_collector
    container_name: btc_collector
    network_mode: host
    environment:
      - DB_HOST=localhost
      - DB_PORT=5433
      - DB_NAME=btc_data
      - DB_USER=btc_bot
      - DB_PASSWORD=${DB_PASSWORD}
      - LOG_LEVEL=INFO
    volumes:
      - ../src:/app/src
      - /volume1/btc_bot/logs:/app/logs
      - ../config:/app/config:ro
    depends_on:
      timescaledb:
        condition: service_healthy
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 128M

  api_server:
    build:
      context: ..
      dockerfile: docker/Dockerfile.api
    image: btc_api
    container_name: btc_api
    network_mode: host
    environment:
      - DB_HOST=localhost
      - DB_PORT=5433
      - DB_NAME=btc_data
      - DB_USER=btc_bot
      - DB_PASSWORD=${DB_PASSWORD}
    volumes:
      - ../src:/app/src
      - /volume1/btc_bot/exports:/app/exports
      - ../config:/app/config:ro
    depends_on:
      - timescaledb
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M

  ping_pong_bot:
    build:
      context: ..
      dockerfile: docker/Dockerfile.bot
    image: btc_ping_pong_bot
    container_name: btc_ping_pong_bot
    network_mode: host
    environment:
      - API_KEY=${API_KEY}
      - API_SECRET=${API_SECRET}
      - LOG_LEVEL=INFO
    volumes:
      - ../src:/app/src
      - /volume1/btc_bot/logs:/app/logs
      - ../config:/app/config:ro
      - ../.env:/app/.env:ro
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 128M

@@ -0,0 +1,199 @@
-- 1. Enable TimescaleDB extension
CREATE EXTENSION IF NOT EXISTS timescaledb;
-- 2. Create candles table (main data storage)
CREATE TABLE IF NOT EXISTS candles (
time TIMESTAMPTZ NOT NULL,
symbol TEXT NOT NULL,
interval TEXT NOT NULL,
open DECIMAL(18,8) NOT NULL,
high DECIMAL(18,8) NOT NULL,
low DECIMAL(18,8) NOT NULL,
close DECIMAL(18,8) NOT NULL,
volume DECIMAL(18,8) NOT NULL,
validated BOOLEAN DEFAULT FALSE,
source TEXT DEFAULT 'hyperliquid',
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- 3. Convert to hypertable (partitioned by time)
SELECT create_hypertable('candles', 'time',
chunk_time_interval => INTERVAL '7 days',
if_not_exists => TRUE
);
-- 4. Create unique constraint for upserts (required by ON CONFLICT)
ALTER TABLE candles
ADD CONSTRAINT candles_unique_candle
UNIQUE (time, symbol, interval);
-- 5. Create indexes for efficient queries
CREATE INDEX IF NOT EXISTS idx_candles_symbol_time
ON candles (symbol, interval, time DESC);
CREATE INDEX IF NOT EXISTS idx_candles_validated
ON candles (validated) WHERE validated = FALSE;
-- 6. Create indicators table (computed values)
CREATE TABLE IF NOT EXISTS indicators (
time TIMESTAMPTZ NOT NULL,
symbol TEXT NOT NULL,
interval TEXT NOT NULL,
indicator_name TEXT NOT NULL,
value DECIMAL(18,8) NOT NULL,
parameters JSONB,
computed_at TIMESTAMPTZ DEFAULT NOW()
);
-- 7. Convert indicators to hypertable
SELECT create_hypertable('indicators', 'time',
chunk_time_interval => INTERVAL '7 days',
if_not_exists => TRUE
);
-- 8. Create unique constraint + index for indicators (required for upserts)
ALTER TABLE indicators
ADD CONSTRAINT indicators_unique
UNIQUE (time, symbol, interval, indicator_name);
CREATE INDEX IF NOT EXISTS idx_indicators_lookup
ON indicators (symbol, interval, indicator_name, time DESC);
-- 9. Create data quality log table
CREATE TABLE IF NOT EXISTS data_quality (
time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
check_type TEXT NOT NULL,
severity TEXT NOT NULL,
symbol TEXT,
details JSONB,
resolved BOOLEAN DEFAULT FALSE
);
CREATE INDEX IF NOT EXISTS idx_quality_unresolved
ON data_quality (resolved) WHERE resolved = FALSE;
CREATE INDEX IF NOT EXISTS idx_quality_time
ON data_quality (time DESC);
-- 10. Create collector state tracking table
CREATE TABLE IF NOT EXISTS collector_state (
id SERIAL PRIMARY KEY,
symbol TEXT NOT NULL UNIQUE,
last_candle_time TIMESTAMPTZ,
last_validation_time TIMESTAMPTZ,
total_candles BIGINT DEFAULT 0,
updated_at TIMESTAMPTZ DEFAULT NOW()
);
-- 11. Insert initial state for cbBTC
INSERT INTO collector_state (symbol, last_candle_time)
VALUES ('cbBTC', NULL)
ON CONFLICT (symbol) DO NOTHING;
-- 12. Enable compression for old data (after 7 days)
ALTER TABLE candles SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'symbol,interval'
);
ALTER TABLE indicators SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'symbol,interval,indicator_name'
);
-- 13. Add compression policies
SELECT add_compression_policy('candles', INTERVAL '7 days', if_not_exists => TRUE);
SELECT add_compression_policy('indicators', INTERVAL '7 days', if_not_exists => TRUE);
-- 14. Create function to update collector state
CREATE OR REPLACE FUNCTION update_collector_state()
RETURNS TRIGGER AS $$
BEGIN
INSERT INTO collector_state (symbol, last_candle_time, total_candles)
VALUES (NEW.symbol, NEW.time, 1)
ON CONFLICT (symbol)
DO UPDATE SET
last_candle_time = NEW.time,
total_candles = collector_state.total_candles + 1,
updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- 15. Create trigger to auto-update state
DROP TRIGGER IF EXISTS trigger_update_state ON candles;
CREATE TRIGGER trigger_update_state
AFTER INSERT ON candles
FOR EACH ROW
EXECUTE FUNCTION update_collector_state();
-- 16. Create view for data health check
CREATE OR REPLACE VIEW data_health AS
SELECT
symbol,
COUNT(*) as total_candles,
COUNT(*) FILTER (WHERE validated) as validated_candles,
MAX(time) as latest_candle,
MIN(time) as earliest_candle,
NOW() - MAX(time) as time_since_last
FROM candles
GROUP BY symbol;
-- 17. Create decisions table (brain outputs - buy/sell/hold with full context)
CREATE TABLE IF NOT EXISTS decisions (
time TIMESTAMPTZ NOT NULL,
symbol TEXT NOT NULL,
interval TEXT NOT NULL,
decision_type TEXT NOT NULL,
strategy TEXT NOT NULL,
confidence DECIMAL(5,4),
price_at_decision DECIMAL(18,8),
indicator_snapshot JSONB NOT NULL,
candle_snapshot JSONB NOT NULL,
reasoning TEXT,
backtest_id TEXT,
executed BOOLEAN DEFAULT FALSE,
execution_price DECIMAL(18,8),
execution_time TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- 18. Convert decisions to hypertable
SELECT create_hypertable('decisions', 'time',
chunk_time_interval => INTERVAL '7 days',
if_not_exists => TRUE
);
-- 19. Indexes for decisions - separate live from backtest queries
CREATE INDEX IF NOT EXISTS idx_decisions_live
ON decisions (symbol, interval, time DESC) WHERE backtest_id IS NULL;
CREATE INDEX IF NOT EXISTS idx_decisions_backtest
ON decisions (backtest_id, symbol, time DESC) WHERE backtest_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_decisions_type
ON decisions (symbol, decision_type, time DESC);
-- 20. Create backtest_runs metadata table
CREATE TABLE IF NOT EXISTS backtest_runs (
id TEXT PRIMARY KEY,
strategy TEXT NOT NULL,
symbol TEXT NOT NULL DEFAULT 'BTC',
start_time TIMESTAMPTZ NOT NULL,
end_time TIMESTAMPTZ NOT NULL,
intervals TEXT[] NOT NULL,
config JSONB,
results JSONB,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- 21. Compression for decisions
ALTER TABLE decisions SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'symbol,interval,strategy'
);
SELECT add_compression_policy('decisions', INTERVAL '7 days', if_not_exists => TRUE);
-- Success message
SELECT 'Database schema initialized successfully' as status;

@@ -0,0 +1,43 @@
-- Create a read-only user for API access (optional security)
DO $$
BEGIN
IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = 'btc_api') THEN
CREATE USER btc_api WITH PASSWORD 'api_password_change_me';
END IF;
END
$$;
-- Grant read-only permissions
GRANT CONNECT ON DATABASE btc_data TO btc_api;
GRANT USAGE ON SCHEMA public TO btc_api;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO btc_api;
-- Grant sequence access for ID columns
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO btc_api;
-- Apply to future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO btc_api;
-- Create continuous aggregate for hourly stats (optional optimization)
CREATE MATERIALIZED VIEW IF NOT EXISTS hourly_stats
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', time) as bucket,
symbol,
interval,
FIRST(open, time) as first_open,
MAX(high) as max_high,
MIN(low) as min_low,
LAST(close, time) as last_close,
SUM(volume) as total_volume,
COUNT(*) as candle_count
FROM candles
GROUP BY bucket, symbol, interval;
-- Add refresh policy for continuous aggregate
SELECT add_continuous_aggregate_policy('hourly_stats',
start_offset => INTERVAL '1 month',
end_offset => INTERVAL '1 hour',
schedule_interval => INTERVAL '1 hour',
if_not_exists => TRUE
);

docker/timescaledb.conf Normal file

@@ -0,0 +1,41 @@
# Optimized for Synology DS218+ (2GB RAM, dual-core CPU)
# Required for TimescaleDB
shared_preload_libraries = 'timescaledb'
# Memory settings
shared_buffers = 256MB
effective_cache_size = 768MB
work_mem = 16MB
maintenance_work_mem = 128MB
# Connection settings
listen_addresses = '*'
max_connections = 50
max_locks_per_transaction = 256
max_worker_processes = 2
max_parallel_workers_per_gather = 1
max_parallel_workers = 2
max_parallel_maintenance_workers = 1
# Write performance
wal_buffers = 16MB
checkpoint_completion_target = 0.9
random_page_cost = 1.1
effective_io_concurrency = 200
# TimescaleDB settings
timescaledb.max_background_workers = 4
# Logging (use default pg_log directory inside PGDATA)
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_min_messages = warning
log_min_error_statement = error
# Auto-vacuum for hypertables
autovacuum_max_workers = 2
autovacuum_naptime = 10s

requirements_bot.txt Normal file

@@ -0,0 +1,6 @@
pybit
pandas
numpy
pyyaml
python-dotenv
rich

scripts/backfill.sh Normal file

@@ -0,0 +1,36 @@
#!/bin/bash
# Backfill script for Hyperliquid historical data
# Usage: ./backfill.sh [coin] [days|max] [intervals...]
# Examples:
# ./backfill.sh BTC 7 "1m" # Last 7 days of 1m candles
# ./backfill.sh BTC max "1m 1h 1d" # Maximum available data for all intervals
set -e
COIN=${1:-BTC}
DAYS=${2:-7}
INTERVALS=${3:-"1m"}
echo "=== Hyperliquid Historical Data Backfill ==="
echo "Coin: $COIN"
if [ "$DAYS" == "max" ]; then
echo "Mode: MAXIMUM (up to 5000 candles per interval)"
else
echo "Days: $DAYS"
fi
echo "Intervals: $INTERVALS"
echo ""
# Change to project root
cd "$(dirname "$0")/.."
# Run backfill inside Docker container
docker exec btc_collector python -m src.data_collector.backfill \
--coin "$COIN" \
--days "$DAYS" \
--intervals $INTERVALS \
--db-host localhost \
--db-port 5433
echo ""
echo "=== Backfill Complete ==="

scripts/backup.sh Normal file

@@ -0,0 +1,37 @@
#!/bin/bash
# Backup script for Synology DS218+
# Run via Task Scheduler every 6 hours
BACKUP_DIR="/volume1/btc_bot/backups"
DB_NAME="btc_data"
DB_USER="btc_bot"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d_%H%M)
echo "Starting backup at $(date)"
# Create backup directory if not exists
mkdir -p $BACKUP_DIR
# Create backup
docker exec btc_timescale pg_dump -U $DB_USER -Fc $DB_NAME > $BACKUP_DIR/btc_data_$DATE.dump
# Compress
if [ -f "$BACKUP_DIR/btc_data_$DATE.dump" ]; then
gzip $BACKUP_DIR/btc_data_$DATE.dump
echo "Backup created: btc_data_$DATE.dump.gz"
# Calculate size
SIZE=$(du -h $BACKUP_DIR/btc_data_$DATE.dump.gz | cut -f1)
echo "Backup size: $SIZE"
else
echo "Error: Backup file not created"
exit 1
fi
# Delete old backups
DELETED=$(find $BACKUP_DIR -name "*.dump.gz" -mtime +$RETENTION_DAYS | wc -l)
find $BACKUP_DIR -name "*.dump.gz" -mtime +$RETENTION_DAYS -delete
echo "Deleted $DELETED old backup(s)"
echo "Backup completed at $(date)"

scripts/check_db_stats.py Normal file

@@ -0,0 +1,107 @@
#!/usr/bin/env python3
"""
Quick database statistics checker
Shows oldest date, newest date, and count for each interval
"""
import asyncio
import asyncpg
import os
from datetime import datetime
async def check_database_stats():
    # Database connection (uses same env vars as your app)
    conn = await asyncpg.connect(
        host=os.getenv('DB_HOST', 'localhost'),
        port=int(os.getenv('DB_PORT', 5432)),
        database=os.getenv('DB_NAME', 'btc_data'),
        user=os.getenv('DB_USER', 'btc_bot'),
        password=os.getenv('DB_PASSWORD', '')
    )
    try:
        print("=" * 70)
        print("DATABASE STATISTICS")
        print("=" * 70)
        print()

        # Check for each interval
        intervals = ['1m', '3m', '5m', '15m', '30m', '37m', '1h', '2h', '4h', '8h', '12h', '1d']
        for interval in intervals:
            stats = await conn.fetchrow("""
                SELECT
                    COUNT(*) as count,
                    MIN(time) as oldest,
                    MAX(time) as newest
                FROM candles
                WHERE symbol = 'BTC' AND interval = $1
            """, interval)
            if stats['count'] > 0:
                oldest = stats['oldest'].strftime('%Y-%m-%d %H:%M') if stats['oldest'] else 'N/A'
                newest = stats['newest'].strftime('%Y-%m-%d %H:%M') if stats['newest'] else 'N/A'
                count = stats['count']
                # Calculate days of data
                if stats['oldest'] and stats['newest']:
                    days = (stats['newest'] - stats['oldest']).days
                    print(f"{interval:6} | {count:>8,} candles | {days:>4} days | {oldest} to {newest}")

        print()
        print("=" * 70)

        # Check indicators
        print("\nINDICATORS AVAILABLE:")
        indicators = await conn.fetch("""
            SELECT DISTINCT indicator_name, interval, COUNT(*) as count
            FROM indicators
            WHERE symbol = 'BTC'
            GROUP BY indicator_name, interval
            ORDER BY interval, indicator_name
        """)
        if indicators:
            for ind in indicators:
                print(f"  {ind['indicator_name']:10} on {ind['interval']:6} | {ind['count']:>8,} values")
        else:
            print("  No indicators found in database")

        print()
        print("=" * 70)

        # Check 1m specifically with more detail
        print("\n1-MINUTE DATA DETAIL:")
        one_min_stats = await conn.fetchrow("""
            SELECT
                COUNT(*) as count,
                MIN(time) as oldest,
                MAX(time) as newest,
                COUNT(*) FILTER (WHERE time > NOW() - INTERVAL '24 hours') as last_24h
            FROM candles
            WHERE symbol = 'BTC' AND interval = '1m'
        """)
        if one_min_stats['count'] > 0:
            total_days = (one_min_stats['newest'] - one_min_stats['oldest']).days
            expected_candles = total_days * 24 * 60  # 1 candle per minute
            actual_candles = one_min_stats['count']
            coverage = (actual_candles / expected_candles) * 100 if expected_candles > 0 else 0
            print(f"  Total candles: {actual_candles:,}")
            print(f"  Date range: {one_min_stats['oldest'].strftime('%Y-%m-%d')} to {one_min_stats['newest'].strftime('%Y-%m-%d')}")
            print(f"  Total days: {total_days}")
            print(f"  Expected candles: {expected_candles:,} (if complete)")
            print(f"  Coverage: {coverage:.1f}%")
            print(f"  Last 24 hours: {one_min_stats['last_24h']:,} candles")
        else:
            print("  No 1m data found")

        print()
        print("=" * 70)
    finally:
        await conn.close()

if __name__ == "__main__":
    asyncio.run(check_database_stats())

scripts/check_status.sh Normal file

@@ -0,0 +1,18 @@
#!/bin/bash
# Check the status of the indicators table (constraints and compression)
docker exec -i btc_timescale psql -U btc_bot -d btc_data <<EOF
\x
SELECT 'Checking constraints...' as step;
SELECT conname, pg_get_constraintdef(oid)
FROM pg_constraint
WHERE conrelid = 'indicators'::regclass;
SELECT 'Checking compression settings...' as step;
SELECT * FROM timescaledb_information.hypertables
WHERE hypertable_name = 'indicators';
SELECT 'Checking compression jobs...' as step;
SELECT * FROM timescaledb_information.jobs
WHERE hypertable_name = 'indicators';
EOF

scripts/deploy.sh Normal file

@@ -0,0 +1,59 @@
#!/bin/bash
# Deployment script for Synology DS218+
set -e
echo "=== BTC Bot Data Collector Deployment ==="
echo ""
# Check if running on Synology
if [ ! -d "/volume1" ]; then
echo "Warning: This script is designed for Synology NAS"
echo "Continuing anyway..."
fi
# Create directories
echo "Creating directories..."
mkdir -p /volume1/btc_bot/{data,backups,logs,exports}
# Check if Docker is installed
if ! command -v docker &> /dev/null; then
echo "Error: Docker not found. Please install Docker package from Synology Package Center"
exit 1
fi
# Copy configuration
echo "Setting up configuration..."
if [ ! -f "/volume1/btc_bot/.env" ]; then
cp .env.example /volume1/btc_bot/.env
echo "Created .env file. Please edit /volume1/btc_bot/.env with your settings"
fi
# Build and start services
echo "Building and starting services..."
cd docker
docker-compose pull
docker-compose build --no-cache
docker-compose up -d
# Wait for database
echo "Waiting for database to be ready..."
sleep 10
# Check status
echo ""
echo "=== Status ==="
docker-compose ps
echo ""
echo "=== Logs (last 20 lines) ==="
docker-compose logs --tail=20
echo ""
echo "=== Deployment Complete ==="
echo "Database available at: localhost:5433"
echo "API available at: http://localhost:8000"
echo ""
echo "To view logs: docker-compose logs -f"
echo "To stop: docker-compose down"
echo "To backup: ./scripts/backup.sh"

@@ -0,0 +1,54 @@
#!/bin/bash
# Fix indicators table schema - Version 2 (Final)
# Handles TimescaleDB compression constraints properly
echo "Fixing indicators table schema (v2)..."
# 1. Decompress chunks individually (safest method)
# We fetch the list of compressed chunks and process them one by one
echo "Checking for compressed chunks..."
CHUNKS=$(docker exec -i btc_timescale psql -U btc_bot -d btc_data -t -c "SELECT chunk_schema || '.' || chunk_name FROM timescaledb_information.chunks WHERE hypertable_name = 'indicators' AND is_compressed = true;")
for chunk in $CHUNKS; do
# Trim whitespace
chunk=$(echo "$chunk" | xargs)
if [[ ! -z "$chunk" ]]; then
echo "Decompressing chunk: $chunk"
docker exec -i btc_timescale psql -U btc_bot -d btc_data -c "SELECT decompress_chunk('$chunk');"
fi
done
# 2. Execute the schema changes
docker exec -i btc_timescale psql -U btc_bot -d btc_data <<EOF
BEGIN;
-- Remove policy first
SELECT remove_compression_policy('indicators', if_exists => true);
-- Disable compression setting (REQUIRED to add unique constraint)
ALTER TABLE indicators SET (timescaledb.compress = false);
-- Deduplicate data (just in case duplicates exist)
DELETE FROM indicators a USING indicators b
WHERE a.ctid < b.ctid
AND a.time = b.time
AND a.symbol = b.symbol
AND a.interval = b.interval
AND a.indicator_name = b.indicator_name;
-- Add the unique constraint
ALTER TABLE indicators ADD CONSTRAINT indicators_unique UNIQUE (time, symbol, interval, indicator_name);
-- Re-enable compression configuration
ALTER TABLE indicators SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'symbol,interval,indicator_name'
);
-- Re-add compression policy (7 days)
SELECT add_compression_policy('indicators', INTERVAL '7 days', if_not_exists => true);
COMMIT;
SELECT 'Indicators schema fix v2 completed successfully' as status;
EOF

@@ -0,0 +1,65 @@
import asyncio
import logging
import os
import sys
# Add src to path
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
from src.data_collector.database import DatabaseManager
from src.data_collector.custom_timeframe_generator import CustomTimeframeGenerator
# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

async def main():
    logger.info("Starting custom timeframe generation...")

    # DB connection settings from env or defaults
    db_host = os.getenv('DB_HOST', 'localhost')
    db_port = int(os.getenv('DB_PORT', 5432))
    db_name = os.getenv('DB_NAME', 'btc_data')
    db_user = os.getenv('DB_USER', 'btc_bot')
    db_password = os.getenv('DB_PASSWORD', '')

    db = DatabaseManager(
        host=db_host,
        port=db_port,
        database=db_name,
        user=db_user,
        password=db_password
    )
    await db.connect()
    try:
        generator = CustomTimeframeGenerator(db)
        await generator.initialize()

        # Generate 37m from 1m
        logger.info("Generating 37m candles from 1m data...")
        count_37m = await generator.generate_historical('37m')
        logger.info(f"Generated {count_37m} candles for 37m")

        # Generate 148m from 37m
        # Note: 148m generation relies on 37m data existing
        logger.info("Generating 148m candles from 37m data...")
        count_148m = await generator.generate_historical('148m')
        logger.info(f"Generated {count_148m} candles for 148m")

        logger.info("Done!")
    except Exception as e:
        logger.error(f"Error generating custom timeframes: {e}")
        import traceback
        traceback.print_exc()
    finally:
        await db.disconnect()

if __name__ == "__main__":
    asyncio.run(main())

@@ -0,0 +1,87 @@
#!/usr/bin/env python3
"""
Generate custom timeframes (37m, 148m) from historical 1m data
Run once to backfill all historical data
"""
import asyncio
import argparse
import logging
import sys
from pathlib import Path
# Add parent to path
sys.path.insert(0, str(Path(__file__).parent.parent / 'src'))
from data_collector.database import DatabaseManager
from data_collector.custom_timeframe_generator import CustomTimeframeGenerator
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

async def main():
    parser = argparse.ArgumentParser(description='Generate custom timeframe candles')
    parser.add_argument('--interval',
                        default='all',
                        help='Which interval to generate (default: all, choices: 3m, 5m, 1h, 37m, etc.)')
    parser.add_argument('--batch-size', type=int, default=5000,
                        help='Number of source candles per batch')
    parser.add_argument('--verify', action='store_true',
                        help='Verify integrity after generation')
    args = parser.parse_args()

    # Initialize database
    db = DatabaseManager()
    await db.connect()
    try:
        generator = CustomTimeframeGenerator(db)
        await generator.initialize()

        if not generator.first_1m_time:
            logger.error("No 1m data found in database. Cannot generate custom timeframes.")
            return 1

        if args.interval == 'all':
            intervals = list(generator.STANDARD_INTERVALS.keys()) + list(generator.CUSTOM_INTERVALS.keys())
        else:
            intervals = [args.interval]

        for interval in intervals:
            logger.info("=" * 60)
            logger.info(f"Generating {interval} candles")
            logger.info("=" * 60)

            # Generate historical data
            count = await generator.generate_historical(
                interval=interval,
                batch_size=args.batch_size
            )
            logger.info(f"Generated {count} {interval} candles")

            # Verify if requested
            if args.verify:
                logger.info(f"Verifying {interval} integrity...")
                stats = await generator.verify_integrity(interval)
                logger.info(f"Stats: {stats}")
    except Exception as e:
        logger.error(f"Error: {e}", exc_info=True)
        return 1
    finally:
        await db.disconnect()

    logger.info("Custom timeframe generation complete!")
    return 0

if __name__ == '__main__':
    exit_code = asyncio.run(main())
    sys.exit(exit_code)

scripts/health_check.sh Normal file

@@ -0,0 +1,31 @@
#!/bin/bash
# Health check script for cron/scheduler
# Check if containers are running
if ! docker ps | grep -q "btc_timescale"; then
echo "ERROR: TimescaleDB container not running"
# Send notification (if configured)
exit 1
fi
if ! docker ps | grep -q "btc_collector"; then
echo "ERROR: Data collector container not running"
exit 1
fi
# Check database connectivity
docker exec btc_timescale pg_isready -U btc_bot -d btc_data > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo "ERROR: Cannot connect to database"
exit 1
fi
# Check if recent data exists
LATEST=$(docker exec btc_timescale psql -U btc_bot -d btc_data -t -A -c "SELECT MAX(time) FROM candles WHERE time > NOW() - INTERVAL '5 minutes';" 2>/dev/null | xargs)
if [ -z "$LATEST" ]; then
echo "WARNING: No recent data in database"
exit 1
fi
echo "OK: All systems operational"
exit 0
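The header mentions cron/scheduler; with plain cron (instead of Synology's Task Scheduler) an entry like the following would run the check every 5 minutes. The script and log paths are assumptions based on the deploy layout under `/volume1/btc_bot`:

```
*/5 * * * * /volume1/btc_bot/scripts/health_check.sh >> /volume1/btc_bot/logs/health_check.log 2>&1
```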

scripts/run_test.sh Normal file

@@ -0,0 +1,11 @@
#!/bin/bash
# Run performance test inside Docker container
# Usage: ./run_test.sh [days] [interval]
DAYS=${1:-7}
INTERVAL=${2:-1m}
echo "Running MA44 performance test: ${DAYS} days of ${INTERVAL} data"
echo "=================================================="
docker exec btc_collector python scripts/test_ma44_performance.py --days $DAYS --interval $INTERVAL

@@ -0,0 +1,187 @@
#!/usr/bin/env python3
"""
Performance Test Script for MA44 Strategy
Tests backtesting performance on Synology DS218+ with 6GB RAM
Usage:
python test_ma44_performance.py [--days DAYS] [--interval INTERVAL]
Example:
python test_ma44_performance.py --days 7 --interval 1m
"""
import asyncio
import argparse
import time
import sys
import os
from datetime import datetime, timedelta, timezone
# Add src to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))
from data_collector.database import DatabaseManager
from data_collector.indicator_engine import IndicatorEngine, IndicatorConfig
from data_collector.brain import Brain
from data_collector.backtester import Backtester
async def run_performance_test(days: int = 7, interval: str = "1m"):
    """Run MA44 backtest and measure performance"""
    print("=" * 70)
    print(f"PERFORMANCE TEST: MA44 Strategy")
    print(f"Timeframe: {interval}")
    print(f"Period: Last {days} days")
    print(f"Hardware: Synology DS218+ (6GB RAM)")
    print("=" * 70)
    print()

    # Database connection (adjust these if needed)
    db = DatabaseManager(
        host=os.getenv('DB_HOST', 'localhost'),
        port=int(os.getenv('DB_PORT', 5432)),
        database=os.getenv('DB_NAME', 'btc_data'),
        user=os.getenv('DB_USER', 'btc_bot'),
        password=os.getenv('DB_PASSWORD', '')
    )
    try:
        await db.connect()
        print("✓ Database connected")

        # Calculate date range
        end_date = datetime.now(timezone.utc)
        start_date = end_date - timedelta(days=days)
        print(f"✓ Date range: {start_date.date()} to {end_date.date()}")
        print(f"✓ Symbol: BTC")
        print(f"✓ Strategy: MA44 (44-period SMA)")
        print()

        # Check data availability
        async with db.acquire() as conn:
            count = await conn.fetchval("""
                SELECT COUNT(*) FROM candles
                WHERE symbol = 'BTC'
                AND interval = $1
                AND time >= $2
                AND time <= $3
            """, interval, start_date, end_date)

        print(f"📊 Data points: {count:,} {interval} candles")
        if count == 0:
            print("❌ ERROR: No data found for this period!")
            print(f"   Run: python -m data_collector.backfill --days {days} --intervals {interval}")
            return
        print(f"   (Expected: ~{count * int(interval.replace('m','').replace('h','').replace('d',''))} minutes of data)")
        print()

        # Setup indicator configuration
        indicator_configs = [
            IndicatorConfig("ma44", "sma", 44, [interval])
        ]
        engine = IndicatorEngine(db, indicator_configs)
        brain = Brain(db, engine)
        backtester = Backtester(db, engine, brain)

        print("⚙️ Running backtest...")
        print("-" * 70)

        # Measure execution time
        start_time = time.time()
        await backtester.run("BTC", [interval], start_date, end_date)
        end_time = time.time()
        execution_time = end_time - start_time

        print("-" * 70)
        print()

        # Fetch results from database
        async with db.acquire() as conn:
            latest_backtest = await conn.fetchrow("""
                SELECT id, strategy, start_time, end_time, intervals, results, created_at
                FROM backtest_runs
                WHERE strategy LIKE '%ma44%'
                ORDER BY created_at DESC
                LIMIT 1
            """)

        if latest_backtest and latest_backtest['results']:
            import json
            results = json.loads(latest_backtest['results'])

            print("📈 RESULTS:")
            print("=" * 70)
            print(f" Total Trades: {results.get('total_trades', 'N/A')}")
            print(f" Win Rate: {results.get('win_rate', 0):.1f}%")
            print(f" Win Count: {results.get('win_count', 0)}")
            print(f" Loss Count: {results.get('loss_count', 0)}")
            print(f" Total P&L: ${results.get('total_pnl', 0):.2f}")
            print(f" P&L Percent: {results.get('total_pnl_pct', 0):.2f}%")
            print(f" Initial Balance: ${results.get('initial_balance', 1000):.2f}")
            print(f" Final Balance: ${results.get('final_balance', 1000):.2f}")
            print(f" Max Drawdown: {results.get('max_drawdown', 0):.2f}%")
            print()
            print("⏱️ PERFORMANCE:")
            print(f" Execution Time: {execution_time:.2f} seconds")
            print(f" Candles/Second: {count / execution_time:.0f}")
            print(f" Backtest ID: {latest_backtest['id']}")
            print()

            # Performance assessment
            if execution_time < 30:
                print("✅ PERFORMANCE: Excellent (< 30s)")
            elif execution_time < 60:
                print("✅ PERFORMANCE: Good (< 60s)")
            elif execution_time < 300:
                print("⚠️ PERFORMANCE: Acceptable (1-5 min)")
            else:
                print("❌ PERFORMANCE: Slow (> 5 min) - Consider shorter periods or higher TFs")
            print()

            print("💡 RECOMMENDATIONS:")
            if execution_time > 60:
                print(" • For faster results, use higher timeframes (15m, 1h, 4h)")
                print(" • Or reduce date range (< 7 days)")
            else:
                print(" • Hardware is sufficient for this workload")
                print(" • Can handle larger date ranges or multiple timeframes")
else:
print("❌ ERROR: No results found in database!")
print(" The backtest may have failed. Check server logs.")
except Exception as e:
print(f"\n❌ ERROR: {e}")
import traceback
traceback.print_exc()
finally:
await db.disconnect()
print()
print("=" * 70)
print("Test completed")
print("=" * 70)
def main():
parser = argparse.ArgumentParser(description='Test MA44 backtest performance')
parser.add_argument('--days', type=int, default=7,
help='Number of days to backtest (default: 7)')
parser.add_argument('--interval', type=str, default='1m',
help='Candle interval (default: 1m)')
args = parser.parse_args()
# Run the async test
asyncio.run(run_performance_test(args.days, args.interval))
if __name__ == "__main__":
main()
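The data-availability estimate in the script above converts an interval string such as `1m`, `4h`, or `1d` into minutes before multiplying by the candle count. A minimal standalone sketch of that conversion (the helper names `interval_minutes` and `expected_candles` are illustrative, not part of the script):

```python
def interval_minutes(interval: str) -> int:
    """Convert an interval string like '1m', '4h' or '1d' to minutes."""
    unit_minutes = {'m': 1, 'h': 60, 'd': 1440}
    return int(interval[:-1]) * unit_minutes[interval[-1]]

def expected_candles(days: int, interval: str) -> int:
    """How many candles a fully backfilled range should contain."""
    return days * 1440 // interval_minutes(interval)

print(expected_candles(7, '1m'))   # 10080 one-minute candles in 7 days
print(expected_candles(7, '4h'))   # 42 four-hour candles
```

Comparing `expected_candles(days, interval)` against the `COUNT(*)` query result is a quick way to spot gaps before trusting a backtest.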

87
scripts/update_schema.sh Normal file

@ -0,0 +1,87 @@
#!/bin/bash
# Apply schema updates to a running TimescaleDB container without wiping data
echo "Applying schema updates to btc_timescale container..."
# Execute the schema SQL inside the container
# We use psql with the environment variables set in docker-compose
docker exec -i btc_timescale psql -U btc_bot -d btc_data <<EOF
-- 1. Unique constraint for indicators (if not exists)
DO \$\$
BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'indicators_unique') THEN
ALTER TABLE indicators ADD CONSTRAINT indicators_unique UNIQUE (time, symbol, interval, indicator_name);
END IF;
END \$\$;
-- 2. Index for indicators
CREATE INDEX IF NOT EXISTS idx_indicators_lookup ON indicators (symbol, interval, indicator_name, time DESC);
-- 3. Data health view update
CREATE OR REPLACE VIEW data_health AS
SELECT
symbol,
COUNT(*) as total_candles,
COUNT(*) FILTER (WHERE validated) as validated_candles,
MAX(time) as latest_candle,
MIN(time) as earliest_candle,
NOW() - MAX(time) as time_since_last
FROM candles
GROUP BY symbol;
-- 4. Decisions table
CREATE TABLE IF NOT EXISTS decisions (
time TIMESTAMPTZ NOT NULL,
symbol TEXT NOT NULL,
interval TEXT NOT NULL,
decision_type TEXT NOT NULL,
strategy TEXT NOT NULL,
confidence DECIMAL(5,4),
price_at_decision DECIMAL(18,8),
indicator_snapshot JSONB NOT NULL,
candle_snapshot JSONB NOT NULL,
reasoning TEXT,
backtest_id TEXT,
executed BOOLEAN DEFAULT FALSE,
execution_price DECIMAL(18,8),
execution_time TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- 5. Decisions hypertable (ignore error if already exists)
DO \$\$
BEGIN
PERFORM create_hypertable('decisions', 'time', chunk_time_interval => INTERVAL '7 days', if_not_exists => TRUE);
EXCEPTION WHEN OTHERS THEN
NULL; -- Ignore if already hypertable
END \$\$;
-- 6. Decisions indexes
CREATE INDEX IF NOT EXISTS idx_decisions_live ON decisions (symbol, interval, time DESC) WHERE backtest_id IS NULL;
CREATE INDEX IF NOT EXISTS idx_decisions_backtest ON decisions (backtest_id, symbol, time DESC) WHERE backtest_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_decisions_type ON decisions (symbol, decision_type, time DESC);
-- 7. Backtest runs table
CREATE TABLE IF NOT EXISTS backtest_runs (
id TEXT PRIMARY KEY,
strategy TEXT NOT NULL,
symbol TEXT NOT NULL DEFAULT 'BTC',
start_time TIMESTAMPTZ NOT NULL,
end_time TIMESTAMPTZ NOT NULL,
intervals TEXT[] NOT NULL,
config JSONB,
results JSONB,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- 8. Compression policies
DO \$\$
BEGIN
ALTER TABLE decisions SET (timescaledb.compress, timescaledb.compress_segmentby = 'symbol,interval,strategy');
PERFORM add_compression_policy('decisions', INTERVAL '7 days', if_not_exists => TRUE);
EXCEPTION WHEN OTHERS THEN
NULL; -- Ignore compression errors if already set
END \$\$;
SELECT 'Schema update completed successfully' as status;
EOF

33
scripts/verify_files.sh Normal file

@ -0,0 +1,33 @@
#!/bin/bash
# BTC Bot Dashboard Setup Script
# Run this from ~/btc_bot to verify all files exist
echo "=== BTC Bot File Verification ==="
echo ""
FILES=(
"src/api/server.py"
"src/api/websocket_manager.py"
"src/api/dashboard/static/index.html"
"docker/Dockerfile.api"
"docker/Dockerfile.collector"
)
for file in "${FILES[@]}"; do
if [ -f "$file" ]; then
size=$(stat -f%z "$file" 2>/dev/null || stat -c%s "$file" 2>/dev/null || echo "unknown")
echo "$file (${size} bytes)"
else
echo "$file (MISSING)"
fi
done
echo ""
echo "=== Next Steps ==="
echo "1. If all files exist, rebuild:"
echo " cd ~/btc_bot"
echo " docker build --network host --no-cache -f docker/Dockerfile.api -t btc_api ."
echo " cd docker && docker-compose up -d"
echo ""
echo "2. Check logs:"
echo " docker logs btc_api --tail 20"


@ -0,0 +1,408 @@
import os
import yaml
import logging
import asyncio
import pandas as pd
from datetime import datetime, timezone
from dotenv import load_dotenv
from rich.console import Console
from rich.table import Table
from rich.live import Live
from rich.panel import Panel
from rich.layout import Layout
from rich import box
# Try to import pybit, if not available, we'll suggest installing it
try:
from pybit.unified_trading import HTTP
except ImportError:
print("Error: 'pybit' library not found. Please install it with: pip install pybit")
exit(1)
# Load environment variables
load_dotenv()
# Setup Logging (create the log directory first so basicConfig doesn't fail)
os.makedirs('logs', exist_ok=True)
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    filename='logs/ping_pong_bot.log'
)
logger = logging.getLogger("PingPongBot")
console = Console()
class PingPongBot:
def __init__(self, config_path="config/ping_pong_config.yaml"):
with open(config_path, 'r') as f:
self.config = yaml.safe_load(f)
self.api_key = os.getenv("API_KEY")
self.api_secret = os.getenv("API_SECRET")
if not self.api_key or not self.api_secret:
raise ValueError("API_KEY and API_SECRET must be set in .env file")
self.session = HTTP(
testnet=False,
api_key=self.api_key,
api_secret=self.api_secret,
)
self.symbol = self.config['symbol']
self.interval = self.config['interval']
self.direction = self.config['direction'].lower()
# State
self.last_candle_time = None
self.current_indicators = {}
self.position = None
self.wallet_balance = 0
self.status_msg = "Initializing..."
self.last_signal = None
self.start_time = datetime.now()
# Grid parameters from config
self.tp_pct = self.config['take_profit_pct'] / 100.0
self.partial_exit_pct = self.config['partial_exit_pct']
self.min_val_usd = self.config['min_position_value_usd']
self.pos_size_margin = self.config['pos_size_margin']
self.leverage = self.config['exchange_leverage']
self.max_eff_lev = self.config['max_effective_leverage']
def rma(self, series, length):
"""Rolling Moving Average (Wilder's Smoothing) - matches Pine Script ta.rma"""
alpha = 1 / length
return series.ewm(alpha=alpha, adjust=False).mean()
def calculate_indicators(self, df):
"""Calculate RSI and Hurst Bands matching the JS/Dashboard implementation"""
# 1. RSI
rsi_cfg = self.config['rsi']
delta = df['close'].diff()
gain = (delta.where(delta > 0, 0))
loss = (-delta.where(delta < 0, 0))
avg_gain = self.rma(gain, rsi_cfg['period'])
avg_loss = self.rma(loss, rsi_cfg['period'])
rs = avg_gain / avg_loss
df['rsi'] = 100 - (100 / (1 + rs))
# 2. Hurst Bands
hurst_cfg = self.config['hurst']
mcl_t = hurst_cfg['period']
mcm = hurst_cfg['multiplier']
mcl = mcl_t / 2
mcl_2 = int(round(mcl / 2))
# True Range
df['h_l'] = df['high'] - df['low']
df['h_pc'] = abs(df['high'] - df['close'].shift(1))
df['l_pc'] = abs(df['low'] - df['close'].shift(1))
df['tr'] = df[['h_l', 'h_pc', 'l_pc']].max(axis=1)
# RMA of Close and ATR
df['ma_mcl'] = self.rma(df['close'], mcl)
df['atr_mcl'] = self.rma(df['tr'], mcl)
# Historical Offset
df['center'] = df['ma_mcl'].shift(mcl_2)
# Fill first values where shift produces NaN with the MA itself (as done in JS: historical_ma || src)
df['center'] = df['center'].fillna(df['ma_mcl'])
mcm_off = mcm * df['atr_mcl']
df['hurst_upper'] = df['center'] + mcm_off
df['hurst_lower'] = df['center'] - mcm_off
return df
async def fetch_data(self):
"""Fetch latest Klines from Bybit V5"""
try:
# We fetch 200 candles to ensure indicators stabilize
response = self.session.get_kline(
category="linear",
symbol=self.symbol,
interval=self.interval,
limit=200
)
if response['retCode'] != 0:
self.status_msg = f"API Error: {response['retMsg']}"
return None
klines = response['result']['list']
# Bybit returns newest first, we need oldest first
df = pd.DataFrame(klines, columns=['start_time', 'open', 'high', 'low', 'close', 'volume', 'turnover'])
df = df.astype(float)
df = df.iloc[::-1].reset_index(drop=True)
return self.calculate_indicators(df)
except Exception as e:
logger.error(f"Error fetching data: {e}")
self.status_msg = f"Fetch Error: {str(e)}"
return None
async def update_account_info(self):
"""Update position and balance information"""
try:
# Get Position
pos_response = self.session.get_positions(
category="linear",
symbol=self.symbol
)
if pos_response['retCode'] == 0:
positions = pos_response['result']['list']
# Filter by side or just take the one with size > 0
active_pos = [p for p in positions if float(p['size']) > 0]
if active_pos:
self.position = active_pos[0]
else:
self.position = None
# Get Balance
wallet_response = self.session.get_wallet_balance(
category="linear",
coin="USDT"
)
if wallet_response['retCode'] == 0:
self.wallet_balance = float(wallet_response['result']['list'][0]['coin'][0]['walletBalance'])
except Exception as e:
logger.error(f"Error updating account info: {e}")
def check_signals(self, df):
"""Determine if we should Open or Close based on indicators"""
if len(df) < 2:
return None
last = df.iloc[-1]
prev = df.iloc[-2]
rsi_cfg = self.config['rsi']
hurst_cfg = self.config['hurst']
open_signal = False
close_signal = False
# 1. RSI Signals
rsi_buy = prev['rsi'] < rsi_cfg['oversold'] and last['rsi'] >= rsi_cfg['oversold']
rsi_sell = prev['rsi'] > rsi_cfg['overbought'] and last['rsi'] <= rsi_cfg['overbought']
# 2. Hurst Signals
hurst_buy = prev['close'] > prev['hurst_lower'] and last['close'] <= last['hurst_lower']
hurst_sell = prev['close'] > prev['hurst_upper'] and last['close'] <= last['hurst_upper']
# Logic for LONG
if self.direction == 'long':
if (rsi_cfg['enabled_for_open'] and rsi_buy) or (hurst_cfg['enabled_for_open'] and hurst_buy):
open_signal = True
if (rsi_cfg['enabled_for_close'] and rsi_sell) or (hurst_cfg['enabled_for_close'] and hurst_sell):
close_signal = True
# Logic for SHORT
else:
if (rsi_cfg['enabled_for_open'] and rsi_sell) or (hurst_cfg['enabled_for_open'] and hurst_sell):
open_signal = True
if (rsi_cfg['enabled_for_close'] and rsi_buy) or (hurst_cfg['enabled_for_close'] and hurst_buy):
close_signal = True
return "open" if open_signal else ("close" if close_signal else None)
async def execute_trade_logic(self, df, signal):
"""Apply the Ping-Pong strategy logic (Accumulation + TP)"""
last_price = float(df.iloc[-1]['close'])
# 1. Check Take Profit (TP)
if self.position:
avg_price = float(self.position['avgPrice'])
current_qty = float(self.position['size'])
is_tp = False
if self.direction == 'long':
if last_price >= avg_price * (1 + self.tp_pct):
is_tp = True
else:
if last_price <= avg_price * (1 - self.tp_pct):
is_tp = True
if is_tp:
qty_to_close = current_qty * self.partial_exit_pct
remaining_qty = current_qty - qty_to_close
# Min size check
if (remaining_qty * last_price) < self.min_val_usd:
qty_to_close = current_qty
self.status_msg = "TP: Closing Full Position (Min Size reached)"
else:
self.status_msg = f"TP: Closing Partial {self.partial_exit_pct*100}%"
self.place_order(qty_to_close, last_price, is_close=True)
return
# 2. Check Close Signal
if signal == "close" and self.position:
current_qty = float(self.position['size'])
qty_to_close = current_qty * self.partial_exit_pct
if (current_qty - qty_to_close) * last_price < self.min_val_usd:
qty_to_close = current_qty
self.status_msg = "Signal: Closing Position (Partial/Full)"
self.place_order(qty_to_close, last_price, is_close=True)
return
# 3. Check Open/Accumulate Signal
if signal == "open":
# Check Max Effective Leverage
current_qty = float(self.position['size']) if self.position else 0
current_notional = current_qty * last_price
entry_notional = self.pos_size_margin * self.leverage
projected_notional = current_notional + entry_notional
effective_leverage = projected_notional / max(self.wallet_balance, 1.0)
if effective_leverage <= self.max_eff_lev:
qty_to_open = entry_notional / last_price
# Round qty based on symbol precision (simplified)
qty_to_open = round(qty_to_open, 3)
self.status_msg = f"Signal: Opening/Accumulating {qty_to_open} units"
self.place_order(qty_to_open, last_price, is_close=False)
else:
self.status_msg = f"Signal Ignored: Max Leverage {effective_leverage:.2f} > {self.max_eff_lev}"
def place_order(self, qty, price, is_close=False):
"""Send order to Bybit V5"""
side = ""
if self.direction == "long":
side = "Sell" if is_close else "Buy"
else:
side = "Buy" if is_close else "Sell"
try:
response = self.session.place_order(
category="linear",
symbol=self.symbol,
side=side,
orderType="Market",
qty=str(qty),
timeInForce="GTC",
reduceOnly=is_close
)
if response['retCode'] == 0:
logger.info(f"Order Placed: {side} {qty} {self.symbol}")
self.last_signal = f"{side} {qty} @ Market"
else:
logger.error(f"Order Failed: {response['retMsg']}")
self.status_msg = f"Order Error: {response['retMsg']}"
except Exception as e:
logger.error(f"Execution Error: {e}")
self.status_msg = f"Exec Error: {str(e)}"
def create_dashboard(self, df):
"""Create a Rich layout for status display"""
layout = Layout()
layout.split_column(
Layout(name="header", size=3),
Layout(name="main", ratio=1),
Layout(name="footer", size=3)
)
# Header
header_table = Table.grid(expand=True)
header_table.add_column(justify="left", ratio=1)
header_table.add_column(justify="right", ratio=1)
runtime = str(datetime.now() - self.start_time).split('.')[0]
header_table.add_row(
f"[bold cyan]Ping-Pong Bot v1.0[/bold cyan] | Symbol: [yellow]{self.symbol}[/yellow] | TF: [yellow]{self.interval}m[/yellow]",
f"Runtime: [green]{runtime}[/green] | Time: {datetime.now().strftime('%H:%M:%S')}"
)
layout["header"].update(Panel(header_table, style="white on blue"))
# Main Content
main_table = Table(box=box.SIMPLE, expand=True)
main_table.add_column("Category", style="cyan")
main_table.add_column("Value", style="white")
# Indicators
last = df.iloc[-1]
rsi_val = f"{last['rsi']:.2f}"
rsi_status = "[green]Oversold[/green]" if last['rsi'] < self.config['rsi']['oversold'] else ("[red]Overbought[/red]" if last['rsi'] > self.config['rsi']['overbought'] else "Neutral")
main_table.add_row("Price", f"{last['close']:.2f}")
main_table.add_row("RSI", f"{rsi_val} ({rsi_status})")
main_table.add_row("Hurst Upper", f"{last['hurst_upper']:.2f}")
main_table.add_row("Hurst Lower", f"{last['hurst_lower']:.2f}")
main_table.add_section()
# Position Info
if self.position:
size = self.position['size']
avg_p = self.position['avgPrice']
upnl = float(self.position['unrealisedPnl'])
upnl_style = "green" if upnl >= 0 else "red"
main_table.add_row("Position Size", f"{size}")
main_table.add_row("Avg Entry", f"{avg_p}")
main_table.add_row("Unrealized PnL", f"[{upnl_style}]{upnl:.2f} USDT[/{upnl_style}]")
else:
main_table.add_row("Position", "None")
main_table.add_row("Wallet Balance", f"{self.wallet_balance:.2f} USDT")
layout["main"].update(Panel(main_table, title="Current Status", border_style="cyan"))
# Footer
footer_text = f"Status: [bold white]{self.status_msg}[/bold white]"
if self.last_signal:
footer_text += f" | Last Action: [yellow]{self.last_signal}[/yellow]"
layout["footer"].update(Panel(footer_text, border_style="yellow"))
return layout
async def run(self):
"""Main loop"""
with Live(console=console, refresh_per_second=1) as live:
while True:
# 1. Update Account
await self.update_account_info()
# 2. Fetch Data & Calculate Indicators
df = await self.fetch_data()
if df is not None:
# 3. Check for New Candle (open/close signals fire once per candle; TP checks run every loop)
current_time = df.iloc[-1]['start_time']
is_new_candle = current_time != self.last_candle_time
self.last_candle_time = current_time
# 4. Strategy Logic
signal = self.check_signals(df) if is_new_candle else None
await self.execute_trade_logic(df, signal)
# 5. Update UI
live.update(self.create_dashboard(df))
await asyncio.sleep(self.config.get('loop_interval_seconds', 5))
if __name__ == "__main__":
try:
bot = PingPongBot()
asyncio.run(bot.run())
except KeyboardInterrupt:
console.print("\n[bold red]Bot Stopped by User[/bold red]")
except Exception as e:
console.print(f"\n[bold red]Critical Error: {e}[/bold red]")
logger.exception("Critical Error in main loop")