# AGENTS.md - AI Coding Assistant Guidelines
## Project Overview
BTC Accumulation Bot - Data Collection & Backtesting Phase. High-performance system for
cbBTC on Hyperliquid with TimescaleDB. Core components: Data Collector (WS),
Indicator Engine (SMA, etc.), Brain (Decision Logic), and Backtester.
## Build/Run Commands
### Docker (Primary deployment - Synology DS218+)
```bash
cd docker && docker-compose up -d --build # Build and start all services
docker-compose logs -f data_collector # View logs
bash scripts/deploy.sh # Full deploy
```
### Development
```bash
cd src/api && uvicorn server:app --reload --host 0.0.0.0 --port 8000
# Docs: http://localhost:8000/docs | Dashboard: http://localhost:8000/dashboard
cd src/data_collector && python -m data_collector.main
```
### Testing
```bash
pytest # All tests
pytest tests/data_collector/test_websocket_client.py # Single file
pytest --cov=src --cov-report=html # With coverage
```
## Project Structure
```
src/
├── data_collector/                   # WebSocket client, buffer, database
│   ├── __init__.py                   # Package exports (all public classes)
│   ├── main.py                       # Entry point, orchestration
│   ├── websocket_client.py           # Hyperliquid WS client
│   ├── candle_buffer.py              # Circular buffer with async flush
│   ├── database.py                   # asyncpg/TimescaleDB interface
│   ├── backfill.py                   # Historical data backfill
│   ├── custom_timeframe_generator.py # 37m, 148m, 1d aggregation
│   ├── indicator_engine.py           # SMA/EMA computation & storage
│   ├── brain.py                      # Strategy evaluation & decision logging
│   └── backtester.py                 # Historical replay driver
└── api/
    ├── server.py                     # FastAPI app, endpoints for data/backtests
    └── dashboard/static/index.html   # Real-time web dashboard
config/data_config.yaml               # Operational config & indicator settings
docker/                               # Docker orchestration & init-scripts
scripts/                              # Deploy, backup, & utility scripts
```
## Architecture & Data Flow
```
Live:     WS -> Buffer -> DB -> CustomTF -> IndicatorEngine -> Brain -> Decisions
                                                  ▲              ▲
Backtest: DB (History) -> Backtester ─────────────┴──────────────┘
```
- **Stateless Logic**: `IndicatorEngine` and `Brain` are driver-agnostic: they read from
  and write to the DB, unaware whether the trigger is live WS or backtest replay.
- **Consistency**: Indicators are computed exactly the same way for live and backtest.
- **Visualization**: Dashboard queries `indicators` and `decisions` tables directly.
Decisions contain a JSON snapshot of indicators at the moment of decision.
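The driver-agnostic pattern can be sketched as follows. This is illustrative only: the stores here are in-memory dicts standing in for the `candles` and `indicators` tables, and the function name and signature are assumptions, not the project's actual API.

```python
from typing import Dict, List, Tuple

# In-memory stand-ins for the candles/indicators tables; the real engine
# reads and writes TimescaleDB (presumably via asyncpg).
CandleStore = Dict[Tuple[str, str], List[float]]    # (symbol, interval) -> closes
IndicatorStore = Dict[Tuple[str, str, str], float]  # (symbol, interval, name) -> value


def compute_sma(db: CandleStore, out: IndicatorStore,
                symbol: str, interval: str, period: int) -> None:
    """Driver-agnostic: reads the DB, writes the DB, never asks who triggered it."""
    closes = db[(symbol, interval)]
    if len(closes) >= period:
        out[(symbol, interval, f"sma_{period}")] = sum(closes[-period:]) / period


db: CandleStore = {("BTC", "37m"): [100.0, 102.0, 104.0]}
indicators: IndicatorStore = {}
# A live WS tick and a backtest replay step make the identical call:
compute_sma(db, indicators, "BTC", "37m", 3)
```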
## Key Dataclasses
```python
@dataclass
class Candle:  # Standard OHLCV
    time: datetime
    symbol: str
    interval: str
    ...

@dataclass
class Decision:  # Brain output
    time: datetime
    symbol: str
    decision_type: str
    confidence: float
    indicator_snapshot: Dict    # Values seen by Brain at decision time
    backtest_id: Optional[str]  # UUID if backtest, None if live
```
## Database Schema (TimescaleDB)
| Table | Purpose | Key Columns |
|-------|---------|-------------|
| `candles` | OHLCV data | `(time, symbol, interval)` UNIQUE |
| `indicators` | Computed values | `(time, symbol, interval, indicator_name)` UNIQUE |
| `decisions` | Buy/sell signals | `(time, symbol, interval, backtest_id)` |
| `backtest_runs` | Run metadata | `(id, strategy, config, results)` |
- `decisions` table stores `indicator_snapshot` JSONB for exact replay/audit.
- Compression enabled on all hypertables (7-day policy).
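The JSONB audit trail amounts to serializing the Brain's indicator view to JSON at decision time and recovering it verbatim on replay. A minimal sketch, assuming illustrative indicator names and column values (check `brain.py` and `database.py` for the real insert path):

```python
import json
from datetime import datetime, timezone

# Hypothetical snapshot as the Brain might record it.
snapshot = {"sma_44": 101.2, "sma_125": 99.7}

row = {
    "time": datetime(2025, 1, 1, tzinfo=timezone.utc).isoformat(),
    "symbol": "BTC",
    "interval": "37m",
    "decision_type": "buy",
    "confidence": 0.8,
    "indicator_snapshot": json.dumps(snapshot),  # stored in the JSONB column
    "backtest_id": None,                         # None => live decision
}

# Exact replay/audit: recover precisely what the Brain saw.
replayed = json.loads(row["indicator_snapshot"])
```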
## API Endpoints (src/api/server.py)
| Method | Path | Description |
|--------|------|-------------|
| GET | `/api/v1/candles` | Query raw candles |
| GET | `/api/v1/indicators` | Query computed indicators (MA, RSI, etc.) |
| GET | `/api/v1/decisions` | Query signals (live or backtest) |
| GET | `/api/v1/backtests` | List historical backtest runs |
| POST | `/api/v1/backtests` | Trigger a new backtest (async background task) |
| GET | `/api/v1/stats` | 24h trading stats |
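The GET endpoints take query parameters that are not documented in this guide, so the names below (`symbol`, `interval`, `limit`) are assumptions; verify them against `src/api/server.py`. A sketch of building a query URL:

```python
from urllib.parse import urlencode

BASE = "http://localhost:8000"

# Assumed query parameters; check the endpoint signatures for the real names.
params = {"symbol": "BTC", "interval": "37m", "limit": 100}
url = f"{BASE}/api/v1/indicators?{urlencode(params)}"
```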
## Code Style Guidelines
- **Imports**: Stdlib, then Third-party, then Local (relative within package).
- **Async**: Use `async/await` for all I/O. Use `asyncpg` pool.
- **Typing**: Strict type hints required; use `Optional[T]`, `List[T]`.
- **Logging**: Use `logger = logging.getLogger(__name__)`.
- **Config**: Load from `config/data_config.yaml` or env vars.
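A minimal module skeleton illustrating the conventions above (the function itself is a made-up example, not project code):

```python
# Stdlib imports first...
import logging
from typing import List, Optional

# ...then third-party and local imports would follow here.

logger = logging.getLogger(__name__)


def tail(values: List[float], limit: Optional[int] = None) -> List[float]:
    """Return the last `limit` values, or all of them when limit is None."""
    if limit is None:
        return values
    logger.debug("truncating to last %d values", limit)
    return values[-limit:]
```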
## Common Tasks
### Add New Indicator
1. Add to `config/data_config.yaml` under `indicators`.
2. Update `IndicatorEngine._compute_indicator` in `src/data_collector/indicator_engine.py` if new type (non-SMA).
3. No DB schema change needed (rows are generic).
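For step 2, a non-SMA indicator mostly means new math inside the engine. As an example, an EMA helper could look like the sketch below; the actual `_compute_indicator` signature is not shown in this guide, so treat this as the math only, not the integration:

```python
from typing import List


def ema(closes: List[float], period: int) -> float:
    """Exponential moving average, seeded with an SMA of the first `period` closes."""
    if len(closes) < period:
        raise ValueError("not enough data")
    alpha = 2 / (period + 1)                # standard EMA smoothing factor
    value = sum(closes[:period]) / period   # SMA seed
    for close in closes[period:]:
        value = alpha * close + (1 - alpha) * value
    return value
```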
### Run Backtest
```bash
# CLI
python -m data_collector.backtester --symbol BTC --intervals 37m --start 2025-01-01
# API
curl -X POST http://localhost:8000/api/v1/backtests \
  -H "Content-Type: application/json" \
  -d '{"symbol": "BTC", "intervals": ["37m"], "start_date": "2025-01-01"}'
```