BTC Bot Project - Migration Context
Created: 2024-02-11
Phase: 1 of 4 (Data Collection) - COMPLETE
Status: Ready for deployment on Synology DS218+
📁 Project Structure
```
btc_bot/
├── docker/                      # Docker configurations
│   ├── docker-compose.yml       # Main orchestration file
│   ├── Dockerfile.collector     # Data collector service (no apt-get)
│   ├── Dockerfile.api           # API server service
│   ├── timescaledb.conf         # Database optimization for NAS
│   └── init-scripts/            # Auto-run SQL on first start
│       ├── 01-schema.sql        # Main tables & hypertables
│       └── 02-optimization.sql  # Indexes & compression
│
├── config/
│   └── data_config.yaml         # Data collection settings
│
├── src/
│   ├── data_collector/          # Data ingestion module
│   │   ├── __init__.py
│   │   ├── main.py              # Entry point & orchestrator
│   │   ├── websocket_client.py  # Hyperliquid WebSocket client
│   │   ├── candle_buffer.py     # In-memory circular buffer
│   │   └── database.py          # TimescaleDB interface
│   │
│   └── api/                     # REST API & dashboard
│       ├── server.py            # FastAPI application
│       └── dashboard/
│           └── static/
│               └── index.html   # Real-time web dashboard
│
├── scripts/                     # Operations
│   ├── deploy.sh                # One-command deployment
│   ├── backup.sh                # Automated backup script
│   └── health_check.sh          # Health monitoring
│
├── requirements.txt             # Python dependencies
├── .env.example                 # Environment template
└── README.md                    # Full documentation
```
✅ Completed Features
Phase 1: Data Collection (DONE)
Components Built:

- **Hyperliquid WebSocket Client**
  - Real-time cbBTC-PERP 1m candles
  - Auto-reconnection with exponential backoff
  - Connection health monitoring
  - File: `src/data_collector/websocket_client.py`
- **Candle Buffer System**
  - Circular buffer (1000 candles max)
  - Automatic batching (every 30s or 100 candles)
  - Gap detection
  - File: `src/data_collector/candle_buffer.py`
- **TimescaleDB Integration**
  - Hypertables with weekly partitioning
  - Automatic compression after 7 days
  - Connection pooling
  - Batch inserts with conflict resolution
  - File: `src/data_collector/database.py`
- **Main Orchestrator**
  - Async event loop
  - Health monitoring (every 60s)
  - Gap detection (every 5 min)
  - Graceful shutdown handling
  - File: `src/data_collector/main.py`
- **REST API**
  - FastAPI with auto-generated docs
  - Endpoints: `/candles`, `/candles/latest`, `/health`, `/export/csv`
  - Real-time dashboard with charts
  - File: `src/api/server.py`
- **Database Schema**
  - `candles` - Main price data (hypertable)
  - `indicators` - Computed values (hypertable)
  - `data_quality` - Issues & gaps log
  - `collector_state` - Metadata tracking
  - Compression enabled for old data
  - Files: `docker/init-scripts/*.sql`
- **Operations Scripts**
  - Automated deployment
  - Backup with retention
  - Health monitoring
  - Files: `scripts/*.sh`
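The WebSocket client's auto-reconnection can be pictured as a capped exponential backoff with jitter. This is an illustrative sketch, not the actual `websocket_client.py` implementation; the base delay and cap are assumed values:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay before reconnect attempt `attempt` (0-based): exponential growth
    capped at `cap` seconds, with jitter to spread out reconnect storms."""
    delay = min(cap, base * (2 ** attempt))
    return delay * (0.5 + random.random() / 2)  # jitter: 50-100% of the delay
```

Successive attempts wait roughly 1s, 2s, 4s, ... up to the 60s cap, so a flaky connection retries quickly while a long outage does not hammer the API.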
⚙️ Configuration
Environment Variables (.env)
```bash
# Database
DB_HOST=timescaledb  # Use 'timescaledb' for Docker, 'localhost' for direct
DB_PORT=5432
DB_NAME=btc_data
DB_USER=btc_bot
DB_PASSWORD=your_secure_password_here

# Validation (optional)
BASE_RPC_URL=https://base-mainnet.g.alchemy.com/v2/YOUR_KEY

# Notifications (optional)
TELEGRAM_BOT_TOKEN=your_token
TELEGRAM_CHAT_ID=your_chat_id
```
Data Collection Settings (config/data_config.yaml)
```yaml
# Key settings:
# - Primary: Hyperliquid WebSocket
# - Symbol: cbBTC-PERP
# - Interval: 1m (base), custom intervals computed
# - Buffer: 1000 candles, 30s flush
# - Validation: Every 5 minutes
```
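A minimal sketch of what `config/data_config.yaml` could look like given the settings above; the key names here are assumptions for illustration, not the actual file:

```yaml
source:
  primary: hyperliquid_websocket
  symbol: cbBTC-PERP
  interval: 1m            # base interval; custom intervals computed from this
buffer:
  max_candles: 1000
  flush_interval_s: 30
  flush_batch_size: 100
validation:
  interval_s: 300         # gap/quality check every 5 minutes
```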
🚀 Deployment Steps
Prerequisites
- Synology DS218+ (or similar NAS)
- Docker package installed
- 6GB RAM recommended (upgraded from 2GB)
- SSH access enabled
Deploy Command
```bash
# On NAS:
cd /volume1/btc_bot
chmod +x scripts/deploy.sh
./scripts/deploy.sh
```
Post-Deployment Verification
```bash
# Check services
cd docker
docker-compose ps

# View logs
docker-compose logs -f data_collector
docker-compose logs -f api_server

# Test database
docker exec btc_timescale psql -U btc_bot -d btc_data -c "SELECT COUNT(*) FROM candles;"

# Access dashboard
http://your-nas-ip:8000/dashboard
```
📊 Database Access
Direct Connection
```bash
# From NAS
docker exec -it btc_timescale psql -U btc_bot -d btc_data

# Useful queries:
# Latest data: SELECT * FROM candles ORDER BY time DESC LIMIT 10;
# Check gaps:  SELECT * FROM data_quality WHERE resolved = false;
# Health:      SELECT * FROM data_health;
```
Connection String
```
postgresql://btc_bot:password@localhost:5432/btc_data
```
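For Python clients, the same connection string can be assembled from the `.env` variables. A standard-library-only sketch (the variable names match the `.env` template above; the helper itself is hypothetical):

```python
import os

def build_dsn() -> str:
    """Build a PostgreSQL DSN from the environment, falling back to this
    project's defaults ('localhost' when connecting from outside Docker)."""
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    name = os.environ.get("DB_NAME", "btc_data")
    user = os.environ.get("DB_USER", "btc_bot")
    password = os.environ.get("DB_PASSWORD", "password")
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"
```

The resulting DSN can be passed directly to `asyncpg.connect()` or `asyncpg.create_pool()`.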
🔧 Known Issues & Solutions
1. Docker DNS Resolution (FIXED)
Problem: apt-get update fails in containers
Solution: Removed apt-get from Dockerfiles - using pre-compiled Python packages only
Files Modified: docker/Dockerfile.collector, docker/Dockerfile.api
2. CPU Architecture (opencode incompatibility)
Problem: Intel Atom D2701 lacks SSE4.2 instructions
Solution: Run opencode on modern PC, connect to NAS via VS Code Remote-SSH
Workflow: Edit on PC → Files on NAS via SSH → Docker sees changes
3. Memory Constraints on DS218+
Mitigation:
- TimescaleDB limited to 1.5GB RAM
- Collector limited to 256MB
- API limited to 512MB
- Compression enabled after 7 days
📈 Storage Estimates
| Data Type | Growth Rate | Compression |
|---|---|---|
| 1m Candles | ~50MB/year | ~70% reduction |
| Indicators | ~100MB/year | ~70% reduction |
| Backups | Configurable | gzip compressed |
Total with 1 year retention: ~200MB compressed
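The candle figure above is easy to sanity-check: one row per minute for a year, with an assumed ~100 bytes per uncompressed row (the actual row size depends on the schema):

```python
MINUTES_PER_YEAR = 60 * 24 * 365   # 525,600 one-minute candles per year
ROW_BYTES = 100                    # assumed average uncompressed row size

raw_mb = MINUTES_PER_YEAR * ROW_BYTES / 1_000_000
compressed_mb = raw_mb * 0.30      # ~70% reduction from compression

print(f"raw: ~{raw_mb:.0f} MB/year, compressed: ~{compressed_mb:.0f} MB/year")
# → raw: ~53 MB/year, compressed: ~16 MB/year
```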
🎯 Next Phases (TODO)
Phase 2: Indicators & Brain
Status: Not Started
Components:
- RSI, MACD, EMA calculations
- Custom interval builder (37m, etc.)
- Indicator storage in database
- Backfill system for gaps
- Brain/decision engine
- Weighted signal combination
Files to Create:
- `src/indicators/*.py` (base, rsi, macd, ema, bollinger, volume)
- `src/brain/decision_engine.py`
- `src/brain/weights.py`
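The custom interval builder planned above would aggregate the base 1m candles into OHLCV bars of arbitrary length (37m, etc.). A minimal sketch of that resampling, assuming candles arrive as dicts with `time` (epoch minutes), `open`, `high`, `low`, `close`, and `volume`; the real implementation would likely live in `src/indicators/`:

```python
def resample(candles: list[dict], interval: int) -> list[dict]:
    """Aggregate sorted 1m candles into `interval`-minute OHLCV bars,
    bucketing each candle by floor(time / interval)."""
    bars: dict[int, dict] = {}
    for c in candles:
        bucket = c["time"] // interval
        bar = bars.get(bucket)
        if bar is None:
            # First candle in the bucket seeds open/high/low/close/volume
            bars[bucket] = dict(c, time=bucket * interval)
        else:
            bar["high"] = max(bar["high"], c["high"])
            bar["low"] = min(bar["low"], c["low"])
            bar["close"] = c["close"]       # last candle in bucket wins
            bar["volume"] += c["volume"]
    return [bars[k] for k in sorted(bars)]
```

Because buckets are anchored at multiples of the interval, recomputing a 37m series after a backfill yields identical bars.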
Phase 3: Wallet & Execution
Status: Not Started
Components:
- Web3.py EVM integration
- Wallet management (EOA)
- Uniswap V3 swap execution
- Gas management
- Aave V3 integration (supply cbBTC)
Files to Create:
- `src/execution/wallet.py`
- `src/execution/uniswap.py`
- `src/execution/aave.py`
- `src/execution/gas_manager.py`
Phase 4: Trading Bot Integration
Status: Not Started
Components:
- Signal → Trade execution flow
- $25 USDC trade size
- Automatic Aave deposit
- Risk management
- Telegram notifications
- Performance tracking
🔌 Integration Points
Hyperliquid API
- WebSocket: `wss://api.hyperliquid.xyz/ws`
- REST: `https://api.hyperliquid.xyz/info`
- Subscription: `{"method": "subscribe", "subscription": {"type": "candle", "coin": "cbBTC", "interval": "1m"}}`
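The subscription payload can be generated programmatically; a sketch where the message shape comes from the subscription example above, while the helper function itself is hypothetical:

```python
import json

def candle_subscription(coin: str, interval: str = "1m") -> str:
    """Serialize a candle-subscription message for the given coin."""
    return json.dumps({
        "method": "subscribe",
        "subscription": {"type": "candle", "coin": coin, "interval": interval},
    })
```

The returned string would be sent as the first message after connecting to `wss://api.hyperliquid.xyz/ws`.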
Base Chain (for validation)
- RPC: Alchemy/Infura/QuickNode
- Contract: cbBTC token on Base
- Purpose: Cross-validate Hyperliquid prices
Aave V3 (Phase 3)
- Network: Base
- Pool: `0xA238Dd80C259a72e81d7e4664a9801593F98d1c5`
- Action: Supply cbBTC as collateral
📋 Dependencies
Python Packages (requirements.txt)
Key packages:
- `websockets` - WebSocket client
- `asyncpg` - PostgreSQL async driver
- `fastapi` + `uvicorn` - API server
- `pandas` + `numpy` - Data processing
- `web3` - Ethereum integration (Phase 3)
- `pydantic` - Data validation
- `pyyaml` - Configuration
Docker Images
- `timescale/timescaledb:2.11.2-pg15`
- `python:3.11-slim` (for custom builds)
🎨 Development Workflow
Recommended Setup
PC (opencode + VS Code) ──SSH──► NAS (Docker containers)
VS Code Extensions
- Remote - SSH
- Python
- Docker (optional)
File Editing Options
- VS Code Remote-SSH: Edit directly on NAS
- SSHFS: Mount NAS locally
- WinSCP: Sync local ↔ remote
- Synology Drive: Bidirectional sync
🔒 Security Notes
- Environment File: `.env` contains secrets - never commit to git
- Database: Not exposed externally by default
- API: No authentication (assumes local network)
- Wallet Keys: Will be in `.env` for Phase 3 - use hardware wallet for large amounts
📞 Troubleshooting Guide
Container Won't Start
```bash
# Check logs
docker-compose logs service_name

# Common fixes:
docker-compose down
docker system prune -f
docker-compose build --no-cache
docker-compose up -d
```
Database Connection Issues
```bash
# Check if DB is ready
docker exec btc_timescale pg_isready -U btc_bot

# Reset (WARNING: deletes all data!)
docker-compose down -v
docker-compose up -d
```
High Memory Usage
- Edit `docker/timescaledb.conf` - reduce `shared_buffers`
- Edit `docker/docker-compose.yml` - reduce memory limits
- Enable more aggressive compression
Data Gaps Detected
- Check WebSocket logs for disconnections
- Verify Hyperliquid API status
- Consider implementing REST API backfill (Phase 2)
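The gap check itself is straightforward once candle timestamps are in hand; a sketch of finding missing minutes in a sorted series (illustrative, not the collector's actual gap detector):

```python
def find_gaps(times: list[int]) -> list[tuple[int, int]]:
    """Return inclusive (start, end) ranges of missing minute-timestamps
    between consecutive candles; `times` is sorted epoch-minute timestamps."""
    gaps = []
    for prev, cur in zip(times, times[1:]):
        if cur - prev > 1:
            gaps.append((prev + 1, cur - 1))
    return gaps
```

Each returned range maps directly onto a `data_quality` row and, once Phase 2 lands, a REST backfill request.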
📝 Additional Notes
Design Decisions
- 1m candles as base: Custom intervals computed on-demand
- TimescaleDB over InfluxDB: Better SQL support, handles custom intervals
- Docker over native: Easier deployment, resource isolation
- Asyncio: Handles many concurrent connections efficiently
- Batch writes: Minimizes database load on NAS
Performance Targets
- Latency: < 1s from trade to database
- Throughput: Handle 1 candle/minute easily
- Memory: < 2.5GB total usage
- Storage: < 1GB/year with compression
Future Enhancements
- Multi-asset support (ETH, SOL, etc.)
- Historical backfill from REST API
- Machine learning layer for signals
- WebSocket multiplexing for multiple symbols
- Prometheus metrics export
- Grafana dashboard (alternative to custom UI)
✅ Pre-Migration Checklist
Before moving to new folder/location:
- All files listed in structure present
- Dockerfiles don't contain `apt-get` commands
- `.env` file configured with your passwords
- `scripts/*.sh` have execute permissions (chmod +x)
- `docker/init-scripts/*.sql` present
- `requirements.txt` includes all dependencies
- Tested on current location (if possible)
🚀 Quick Start in New Location
```bash
# 1. Copy all files to new location
cp -r btc_bot /new/location/

# 2. Set permissions
chmod +x /new/location/btc_bot/scripts/*.sh

# 3. Configure environment
cd /new/location/btc_bot
cp .env.example .env
nano .env  # Edit passwords

# 4. Deploy
cd docker
docker-compose up -d

# 5. Verify
docker-compose ps
curl http://localhost:8000/api/v1/health
```
End of context file. Ready for migration to new location.