# BTC Accumulation Bot - Data Collection Phase

High-performance data collection system for cbBTC on Hyperliquid with TimescaleDB storage on a Synology DS218+.

## Architecture Overview

- **Data Source**: Hyperliquid WebSocket (primary)
- **Database**: TimescaleDB (PostgreSQL extension) on the NAS
- **Collection**: 1-minute candles with automatic batching
- **API**: FastAPI with real-time dashboard
- **Deployment**: Docker Compose on Synology
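The services above map onto Docker Compose roughly as follows. This is an illustrative fragment only, not the project's actual `docker-compose.yml`; service names, image tag, and volume paths are assumptions:

```yaml
# Illustrative sketch - the real docker/docker-compose.yml may differ.
services:
  timescaledb:
    image: timescale/timescaledb:latest-pg15
    environment:
      POSTGRES_USER: btc_bot
      POSTGRES_DB: btc_data
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - /volume1/btc_bot/data:/var/lib/postgresql/data
  data_collector:
    build:
      context: ..
      dockerfile: docker/Dockerfile.collector
    depends_on:
      - timescaledb
  api_server:
    build:
      context: ..
      dockerfile: docker/Dockerfile.api
    ports:
      - "8000:8000"
    depends_on:
      - timescaledb
```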
## Project Structure

```
btc_bot/
├── docker/                  # Docker configurations
│   ├── docker-compose.yml
│   ├── Dockerfile.collector
│   ├── Dockerfile.api
│   └── init-scripts/        # Database initialization
├── config/                  # YAML configurations
├── src/
│   ├── data_collector/      # WebSocket client & database writer
│   └── api/                 # REST API & dashboard
├── scripts/                 # Deployment & backup scripts
└── requirements.txt
```
## Prerequisites

1. **Synology DS218+** with:
   - Docker package installed
   - SSH access enabled
   - 6GB RAM recommended (upgraded from the stock 2GB)

2. **Network**:
   - Static IP for the NAS (recommended)
   - Ports 5432 (database) and 8000 (API) available
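A quick way to confirm both ports are actually free before deploying (a minimal sketch; run on the NAS with Python 3):

```python
import socket

def port_free(port, host="127.0.0.1"):
    """Return True if nothing is currently accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) != 0

for port in (5432, 8000):
    print(port, "free" if port_free(port) else "in use")
```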
## Installation

### 1. Copy the Project to the NAS

```bash
ssh user@your-nas-ip
cd /volume1
mkdir -p btc_bot
cd btc_bot
# Copy project files here
```
### 2. Configure Environment

```bash
# Copy the example environment file
cp .env.example .env

# Edit with your settings
nano .env
```

Required settings:
- `DB_PASSWORD`: Strong password for the database
- `BASE_RPC_URL`: RPC endpoint URL (e.g. from Alchemy or Infura) for Base chain validation
- `TELEGRAM_BOT_TOKEN` and `CHAT_ID`: For notifications (optional)
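For reference, a populated `.env` might look like this (all values are placeholders; only the three variables above are documented):

```bash
DB_PASSWORD=change-me-to-a-strong-password
BASE_RPC_URL=https://base-mainnet.g.alchemy.com/v2/your-api-key
TELEGRAM_BOT_TOKEN=your-bot-token    # optional
CHAT_ID=your-chat-id                 # optional
```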
### 3. Deploy

```bash
chmod +x scripts/deploy.sh
./scripts/deploy.sh
```

This will:
1. Create necessary directories
2. Build Docker images
3. Start TimescaleDB
4. Initialize database schema
5. Start data collector
6. Start API server
### 4. Verify Installation

```bash
# Check container status
cd docker
docker-compose ps

# View logs
docker-compose logs -f data_collector
docker-compose logs -f api_server

# Test database connection
docker exec btc_timescale psql -U btc_bot -d btc_data -c "SELECT COUNT(*) FROM candles;"
```
## Usage

### Web Dashboard

Access the dashboard at: `http://your-nas-ip:8000/dashboard`

Features:
- Real-time price chart
- 24h statistics
- Recent candles table
- CSV export
- Auto-refresh every 30 seconds
### REST API

#### Get Candles
```bash
curl "http://your-nas-ip:8000/api/v1/candles?symbol=cbBTC-PERP&interval=1m&limit=100"
```

#### Get Latest Candle
```bash
curl "http://your-nas-ip:8000/api/v1/candles/latest?symbol=cbBTC-PERP&interval=1m"
```

#### Export CSV
```bash
curl "http://your-nas-ip:8000/api/v1/export/csv?symbol=cbBTC-PERP&days=7" -o cbBTC_7d.csv
```

#### Health Check
```bash
curl "http://your-nas-ip:8000/api/v1/health"
```
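The same endpoints can be scripted. A minimal Python sketch using only the standard library (the JSON response shape is an assumption; adjust to whatever the API actually returns):

```python
import json
import urllib.request
from urllib.parse import urlencode

BASE = "http://your-nas-ip:8000/api/v1"

def candles_url(symbol="cbBTC-PERP", interval="1m", limit=100):
    """Build the query URL for the /candles endpoint."""
    return f"{BASE}/candles?" + urlencode(
        {"symbol": symbol, "interval": interval, "limit": limit}
    )

def fetch_candles(**kwargs):
    """Fetch and decode candles; raises on HTTP errors."""
    with urllib.request.urlopen(candles_url(**kwargs)) as resp:
        return json.load(resp)
```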
### API Documentation

Interactive API docs available at: `http://your-nas-ip:8000/docs`
## Database Access

Connect directly to TimescaleDB:

```bash
# From the NAS
docker exec -it btc_timescale psql -U btc_bot -d btc_data

# From a remote machine (if port 5432 is forwarded)
psql -h your-nas-ip -p 5432 -U btc_bot -d btc_data
```
### Useful Queries

```sql
-- Check latest data
SELECT * FROM candles ORDER BY time DESC LIMIT 10;

-- Check data gaps (last 24h)
SELECT * FROM data_quality
WHERE time > NOW() - INTERVAL '24 hours'
AND resolved = false;

-- Database statistics
SELECT * FROM data_health;

-- Compression status (TimescaleDB 2.x)
SELECT chunk_name, is_compressed
FROM timescaledb_information.chunks
WHERE hypertable_name = 'candles';
```
## Backup & Maintenance

### Automated Backups

Set up a scheduled task in Synology DSM:

1. Open **Control Panel** → **Task Scheduler**
2. Create **Scheduled Task** → **User-defined script**
3. Schedule: Every 6 hours
4. Command: `/volume1/btc_bot/scripts/backup.sh`
### Manual Backup

```bash
cd /volume1/btc_bot
./scripts/backup.sh
```

Backups stored in: `/volume1/btc_bot/backups/`
### Health Monitoring

Add to Task Scheduler (every 5 minutes):

```bash
/volume1/btc_bot/scripts/health_check.sh
```
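If you prefer to script the check yourself, the `/api/v1/health` endpoint can be polled and its JSON inspected. A sketch; the `status` field name is an assumption about the response shape:

```python
import json
import urllib.request

def is_healthy(payload):
    """Interpret a decoded /health response; the 'status' key is assumed."""
    return payload.get("status") == "ok"

def check(url="http://your-nas-ip:8000/api/v1/health"):
    """Return True only if the API responds and reports healthy."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return is_healthy(json.load(resp))
    except OSError:
        # Connection refused, timeout, HTTP error, etc.
        return False
```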
### Database Maintenance

```bash
# Manual compression (also runs automatically after 7 days)
docker exec btc_timescale psql -U btc_bot -d btc_data -c "SELECT compress_chunk(i, if_not_compressed => true) FROM show_chunks('candles') i;"

# Vacuum and analyze
docker exec btc_timescale psql -U btc_bot -d btc_data -c "VACUUM ANALYZE candles;"
```
## Troubleshooting

### High Memory Usage

If the DS218+ runs out of memory, reduce the container memory limits in `docker/docker-compose.yml`:

```yaml
deploy:
  resources:
    limits:
      memory: 1G  # Reduce from 1.5G
```

Then restart:
```bash
cd docker
docker-compose down
docker-compose up -d
```
### Data Gaps

If gaps are detected:

```bash
# Check logs
docker-compose logs data_collector | grep -i gap

# Manual backfill (not yet implemented - will be in Phase 2)
```
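Until the Phase 2 backfill lands, missing minutes can be located from exported candle timestamps with a few lines of Python (a standalone sketch, not part of the project):

```python
from datetime import datetime, timedelta

def find_gaps(times, step=timedelta(minutes=1)):
    """Return (start, end) ranges of missing candles, given sorted
    timestamps that should be exactly `step` apart."""
    gaps = []
    for prev, cur in zip(times, times[1:]):
        if cur - prev > step:
            gaps.append((prev + step, cur - step))
    return gaps
```

Feed it the `time` column of a CSV export to see exactly which minute ranges need backfilling.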
### WebSocket Disconnections

Occasional disconnections are normal behavior - the client auto-reconnects. Check:

```bash
# Connection health
docker-compose logs data_collector | grep -i "reconnect"
```
### Disk Space

Monitor usage:

```bash
du -sh /volume1/btc_bot/data
du -sh /volume1/btc_bot/backups
```

Expected growth:
- 1m candles: ~50MB/year (compressed)
- Indicators: ~100MB/year
- Backups: Varies based on retention
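The ~50MB/year figure checks out as a back-of-envelope estimate: a year of 1-minute candles is 525,600 rows, so at ~100 bytes per row (an assumed illustrative size for time + OHLCV + overhead) the raw data is only ~53MB even before TimescaleDB compression:

```python
minutes_per_year = 365 * 24 * 60   # 525,600 one-minute candles per year
bytes_per_row = 100                # assumed: timestamp + OHLCV + row overhead
total_mb = minutes_per_year * bytes_per_row / 1e6
print(f"{minutes_per_year} rows/year ~ {total_mb:.1f} MB uncompressed")
```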
## Performance Tuning

For a DS218+ with limited resources:

1. **Buffer size**: Reduce in `config/data_config.yaml`:
   ```yaml
   buffer:
     max_size: 500                # From 1000
     flush_interval_seconds: 60   # From 30
   ```

2. **Database connections**: Reduce the pool size:
   ```yaml
   database:
     pool_size: 3  # From 5
   ```

3. **Compression**: Already enabled; chunks compress after 7 days
## Security Considerations

1. **Environment file**: `.env` contains secrets - never commit it to git
2. **Database**: Not exposed externally by default
3. **API**: No authentication - keep it on the local network only
4. **Firewall**: Only open port 8000 externally if needed (prefer a VPN instead)
## Next Steps (Phase 2)

1. **Backfill system**: REST API integration for gap filling
2. **Indicators**: RSI, MACD, EMA computation engine
3. **Brain**: Decision engine with configurable rules
4. **Execution**: EVM wallet integration for cbBTC trading
5. **Aave**: Automatic yield generation on collected cbBTC
## Support

- **API Issues**: Check logs with `docker-compose logs api_server`
- **Data Issues**: Check logs with `docker-compose logs data_collector`
- **Database Issues**: Check logs with `docker-compose logs timescaledb`
## License

Private project - not for redistribution