# BTC Accumulation Bot - Data Collection Phase

High-performance data collection system for cbBTC on Hyperliquid, with TimescaleDB storage on a Synology DS218+.
## Architecture Overview

- **Data source**: Hyperliquid WebSocket (primary)
- **Database**: TimescaleDB (PostgreSQL extension) on the NAS
- **Collection**: 1-minute candles with automatic batching
- **API**: FastAPI with a real-time dashboard
- **Deployment**: Docker Compose on Synology
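The batching mentioned above can be pictured as a small write buffer that flushes to the database once it holds enough candles or enough time has passed. A minimal sketch, assuming size/interval thresholds like those in `config/data_config.yaml` (the class and its defaults are illustrative, not the project's actual implementation):

```python
import time

class CandleBuffer:
    """Illustrative write buffer: flush when full or when the interval elapses."""

    def __init__(self, max_size=1000, flush_interval_seconds=30, writer=None):
        self.max_size = max_size
        self.flush_interval = flush_interval_seconds
        self.writer = writer or (lambda batch: None)  # e.g. one batched INSERT
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, candle: dict) -> None:
        self.buffer.append(candle)
        if (len(self.buffer) >= self.max_size
                or time.monotonic() - self.last_flush >= self.flush_interval):
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.writer(self.buffer)  # one round trip instead of one per candle
            self.buffer = []
        self.last_flush = time.monotonic()
```

Batching trades a little latency for far fewer database round trips, which matters on a low-power NAS.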
## Project Structure

```
btc_bot/
├── docker/                  # Docker configurations
│   ├── docker-compose.yml
│   ├── Dockerfile.collector
│   ├── Dockerfile.api
│   └── init-scripts/        # Database initialization
├── config/                  # YAML configurations
├── src/
│   ├── data_collector/      # WebSocket client & database writer
│   └── api/                 # REST API & dashboard
├── scripts/                 # Deployment & backup scripts
└── requirements.txt
```
## Prerequisites

- **Synology DS218+** with:
  - Docker package installed
  - SSH access enabled
  - 6 GB RAM recommended (upgrade from the stock 2 GB)
- **Network**:
  - Static IP for the NAS (recommended)
  - Ports 5432 (database) and 8000 (API) available
## Installation

### 1. Clone the Repository on the NAS

```bash
ssh user@your-nas-ip
cd /volume1
mkdir -p btc_bot
cd btc_bot
# Copy project files here
```

### 2. Configure the Environment

```bash
# Copy the example environment file
cp .env.example .env

# Edit with your settings
nano .env
```
Required settings:

- `DB_PASSWORD`: strong password for the database
- `BASE_RPC_URL`: Alchemy/Infura API key for Base chain validation
- `TELEGRAM_BOT_TOKEN` and `CHAT_ID`: for notifications (optional)
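It is worth failing fast at startup if a required variable is missing. A minimal sketch of such a check (the helper below is illustrative; the project may validate its environment differently):

```python
import os

REQUIRED_VARS = ["DB_PASSWORD", "BASE_RPC_URL"]
OPTIONAL_VARS = ["TELEGRAM_BOT_TOKEN", "CHAT_ID"]  # notifications only

def check_env(env=os.environ):
    """Return the required variables that are missing, so the caller can abort early."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

missing = check_env({"DB_PASSWORD": "s3cret"})  # BASE_RPC_URL absent
print(missing)  # → ['BASE_RPC_URL']
```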
### 3. Deploy

```bash
chmod +x scripts/deploy.sh
./scripts/deploy.sh
```

This will:

- Create the necessary directories
- Build the Docker images
- Start TimescaleDB
- Initialize the database schema
- Start the data collector
- Start the API server
### 4. Verify the Installation

```bash
# Check container status
cd docker
docker-compose ps

# View logs
docker-compose logs -f data_collector
docker-compose logs -f api_server

# Test the database connection
docker exec btc_timescale psql -U btc_bot -d btc_data -c "SELECT COUNT(*) FROM candles;"
```
## Usage

### Web Dashboard

Access the dashboard at `http://your-nas-ip:8000/dashboard`.

Features:

- Real-time price chart
- 24h statistics
- Recent candles table
- CSV export
- Auto-refresh every 30 seconds
### REST API

Get candles:

```bash
curl "http://your-nas-ip:8000/api/v1/candles?symbol=cbBTC-PERP&interval=1m&limit=100"
```

Get the latest candle:

```bash
curl "http://your-nas-ip:8000/api/v1/candles/latest?symbol=cbBTC-PERP&interval=1m"
```

Export CSV:

```bash
curl "http://your-nas-ip:8000/api/v1/export/csv?symbol=cbBTC-PERP&days=7" -o cbBTC_7d.csv
```

Health check:

```bash
curl "http://your-nas-ip:8000/api/v1/health"
```
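From a script, the candles endpoint can be consumed with any HTTP client and parsed as JSON. The sketch below parses a response body into typed records; the field names (`time`, `open`, `high`, `low`, `close`, `volume`) are an assumption about the payload shape, not a documented contract:

```python
import json
from dataclasses import dataclass

@dataclass
class Candle:
    time: str
    open: float
    high: float
    low: float
    close: float
    volume: float

def parse_candles(body: str) -> list:
    """Parse a JSON array of candle objects (field names assumed) into Candle records."""
    return [
        Candle(
            time=row["time"],
            open=float(row["open"]),
            high=float(row["high"]),
            low=float(row["low"]),
            close=float(row["close"]),
            volume=float(row["volume"]),
        )
        for row in json.loads(body)
    ]

# Example payload (shape assumed for illustration)
body = '[{"time": "2024-01-01T00:00:00Z", "open": 42000, "high": 42100, "low": 41950, "close": 42050, "volume": 3.2}]'
candles = parse_candles(body)
print(candles[0].close)  # → 42050.0
```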
### API Documentation

Interactive API docs are available at `http://your-nas-ip:8000/docs`.
## Database Access

Connect directly to TimescaleDB:

```bash
# From the NAS
docker exec -it btc_timescale psql -U btc_bot -d btc_data

# From a remote host (if port 5432 is forwarded)
psql -h your-nas-ip -p 5432 -U btc_bot -d btc_data
```
### Useful Queries

```sql
-- Check the latest data
SELECT * FROM candles ORDER BY time DESC LIMIT 10;

-- Check data gaps (last 24h)
SELECT * FROM data_quality
WHERE time > NOW() - INTERVAL '24 hours'
  AND resolved = false;

-- Database statistics
SELECT * FROM data_health;

-- Compression status
SELECT chunk_name, compression_status
FROM timescaledb_information.chunks
WHERE hypertable_name = 'candles';
```
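The idea behind the gap check is simple: consecutive 1-minute candles should be exactly one minute apart, and anything larger is a gap. A sketch of that logic (how the `data_quality` table is actually populated may differ):

```python
from datetime import datetime, timedelta

def find_gaps(times, step=timedelta(minutes=1)):
    """Return (start, end) pairs where consecutive candle times are more than one step apart."""
    times = sorted(times)
    return [(a, b) for a, b in zip(times, times[1:]) if b - a > step]

# Five candles with minutes 00:03 and 00:04 missing
ts = [datetime(2024, 1, 1, 0, m) for m in (0, 1, 2, 5, 6)]
print(find_gaps(ts))  # one gap: from 00:02 to 00:05
```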
## Backup & Maintenance

### Automated Backups

Set up a scheduled task in Synology DSM:

1. Open Control Panel → Task Scheduler
2. Create Triggered Task → User-defined script
3. Schedule: every 6 hours
4. Command:

```bash
/volume1/btc_bot/scripts/backup.sh
```

### Manual Backup

```bash
cd /volume1/btc_bot
./scripts/backup.sh
```

Backups are stored in `/volume1/btc_bot/backups/`.
### Health Monitoring

Add to Task Scheduler (every 5 minutes):

```bash
/volume1/btc_bot/scripts/health_check.sh
```

### Database Maintenance

```bash
# Manual compression (runs automatically after 7 days)
docker exec btc_timescale psql -U btc_bot -d btc_data -c "SELECT compress_chunk(i) FROM show_chunks('candles') i;"

# Vacuum and analyze
docker exec btc_timescale psql -U btc_bot -d btc_data -c "VACUUM ANALYZE candles;"
```
## Troubleshooting

### High Memory Usage

If the DS218+ runs out of memory, reduce the memory limits in `docker/docker-compose.yml`:

```yaml
deploy:
  resources:
    limits:
      memory: 1G  # Reduce from 1.5G
```

Then restart:

```bash
cd docker
docker-compose down
docker-compose up -d
```
### Data Gaps

If gaps are detected:

```bash
# Check logs
docker-compose logs data_collector | grep -i gap

# Manual backfill (not yet implemented - will be in Phase 2)
```

### WebSocket Disconnections

Disconnections are normal behavior; the client auto-reconnects. Check connection health:

```bash
docker-compose logs data_collector | grep -i "reconnect"
```
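A typical auto-reconnect policy uses exponential backoff with a cap, so a flapping connection does not hammer the exchange. A sketch of such a delay schedule (the actual client's reconnect parameters are not documented here):

```python
def backoff_delays(base=1.0, factor=2.0, cap=60.0, attempts=8):
    """Exponential backoff schedule: base, base*factor, ... capped at `cap` seconds."""
    delay = base
    delays = []
    for _ in range(attempts):
        delays.append(min(delay, cap))
        delay *= factor
    return delays

print(backoff_delays())  # → [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```

Capping the delay keeps recovery fast once the network stabilizes, while the exponential ramp avoids tight reconnect loops during an outage.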
### Disk Space

Monitor usage:

```bash
du -sh /volume1/btc_bot/data
du -sh /volume1/btc_bot/backups
```

Expected growth:

- 1m candles: ~50 MB/year (compressed)
- Indicators: ~100 MB/year
- Backups: varies with retention
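The ~50 MB/year figure is consistent with a quick back-of-envelope check (the per-row size below is a rough assumption, not a measurement):

```python
rows_per_year = 60 * 24 * 365   # one 1-minute candle per minute
bytes_per_row = 100             # rough assumption after compression
mb_per_year = rows_per_year * bytes_per_row / 1e6
print(rows_per_year, round(mb_per_year, 1))  # → 525600 52.6
```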
## Performance Tuning

For a DS218+ with limited resources:

1. **Buffer size**: reduce in `config/data_config.yaml`:

   ```yaml
   buffer:
     max_size: 500                # from 1000
     flush_interval_seconds: 60   # from 30
   ```

2. **Database connections**: reduce the pool size:

   ```yaml
   database:
     pool_size: 3  # from 5
   ```

3. **Compression**: already enabled after 7 days
## Security Considerations

- **Environment file**: `.env` contains secrets; never commit it to git
- **Database**: not exposed externally by default
- **API**: no authentication (local network only)
- **Firewall**: only open port 8000 externally if needed (use a VPN instead)
## Next Steps (Phase 2)

- **Backfill system**: REST API integration for gap filling
- **Indicators**: RSI, MACD, EMA computation engine
- **Brain**: decision engine with configurable rules
- **Execution**: EVM wallet integration for cbBTC trading
- **Aave**: automatic yield generation on collected cbBTC
## Support

- **API issues**: check logs with `docker-compose logs api_server`
- **Data issues**: check logs with `docker-compose logs data_collector`
- **Database issues**: check logs with `docker-compose logs timescaledb`

## License

Private project; not for redistribution.