Compare commits
14 Commits
web_socket
...
88e1dfff55
| SHA1 |
|---|
| 88e1dfff55 |
| d412099753 |
| 048affb163 |
| 1cf05a5b69 |
| e7c7158c68 |
| a30f75fae0 |
| 6f822483cc |
| fa5b06d1f6 |
| 2a3cf5cacf |
| 2269fbed9e |
| 7ef6bd4d14 |
| a69581a07b |
| 76c993ce76 |
| 903f4ff434 |
24
.env.example
Normal file
@@ -0,0 +1,24 @@
# Example environment variables for the Hyperliquid trading toolkit
# Copy this file to .env and fill in real values. Do NOT commit your real .env file.

# Main wallet (used only to authorize agents on-chain)
# Example: MAIN_WALLET_PRIVATE_KEY=0x...
MAIN_WALLET_PRIVATE_KEY=
MAIN_WALLET_ADDRESS=

# Agent keys (private keys authorized via create_agent.py)
# Preferred patterns:
# - AGENT_PRIVATE_KEY: default agent
# - <NAME>_AGENT_PK or <NAME>_AGENT_PRIVATE_KEY: per-agent keys (e.g., SCALPER_AGENT_PK)
# Example: AGENT_PRIVATE_KEY=0x...
AGENT_PRIVATE_KEY=
# Example per-agent key:
# SCALPER_AGENT_PK=
# SWING_AGENT_PK=

# Optional: CoinGecko API key to reduce rate limits for market cap fetches
COINGECKO_API_KEY=

# Optional: Set a custom environment for development/testing
# E.g., DEBUG=true
DEBUG=
88
README.md
Normal file
@@ -0,0 +1,88 @@
# Automated Crypto Trading Bot

This project is a sophisticated, multi-process automated trading bot designed to interact with the Hyperliquid decentralized exchange. It features a robust data pipeline, a flexible strategy engine, multi-agent trade execution, and a live terminal dashboard for real-time monitoring.

<!-- It's a good idea to take a screenshot of your dashboard and upload it to a service like Imgur to include here -->

## Features

* **Multi-Process Architecture**: Core components (data fetching, trading, strategies) run in parallel processes for maximum performance and stability.
* **Comprehensive Data Pipeline**:
    * Live price feeds for all assets.
    * Historical candle data collection for any coin and timeframe.
    * Historical market cap data fetching from the CoinGecko API.
* **High-Performance Database**: Uses SQLite with pandas for fast, indexed storage and retrieval of all market data.
* **Configuration-Driven Strategies**: Trading strategies are defined and managed in a simple JSON file (`_data/strategies.json`), allowing for easy configuration without code changes.
* **Multi-Agent Trading**: Supports multiple, independent trading agents for advanced risk segregation and PNL tracking.
* **Live Terminal Dashboard**: A real-time, flicker-free dashboard to monitor live prices, market caps, strategy signals, and the status of all background processes.
* **Secure Key Management**: Uses a `.env` file to securely manage all private keys and API keys, keeping them separate from the codebase.

## Project Structure

The project is composed of several key scripts that work together:

* **`main_app.py`**: The central orchestrator. It launches all background processes and displays the main monitoring dashboard.
* **`trade_executor.py`**: The trading "brain." It reads signals from all active strategies and executes trades using the appropriate agent.
* **`data_fetcher.py`**: A background service that collects 1-minute historical candle data and saves it to the SQLite database.
* **`resampler.py`**: A background service that reads the 1-minute data and generates all other required timeframes (e.g., 5m, 1h, 1d).
* **`market_cap_fetcher.py`**: A scheduled service that downloads daily market cap data.
* **`strategy_*.py`**: Individual files containing the logic for different types of trading strategies (e.g., SMA Crossover).
* **`_data/strategies.json`**: The configuration file for defining and enabling/disabling your trading strategies.
* **`.env`**: The secure file for storing all your private keys and API keys.

## Installation

1. **Clone the Repository**

   ```bash
   git clone https://github.com/your-username/your-repo-name.git
   cd your-repo-name
   ```

2. **Create and Activate a Virtual Environment**

   ```bash
   # For Windows
   python -m venv .venv
   .\.venv\Scripts\activate

   # For macOS/Linux
   python3 -m venv .venv
   source .venv/bin/activate
   ```

3. **Install Dependencies**

   ```bash
   pip install -r requirements.txt
   ```

## Getting Started: Configuration

Before running the application, you must configure your wallets, agents, and API keys.

1. **Create the `.env` File**

   In the root of the project, create a file named `.env` (copying `.env.example` is the easiest way) and replace the placeholder values with your actual keys.

2. **Activate Your Main Wallet on Hyperliquid**

   The `trade_executor.py` script will fail if your main wallet is not registered.

   * Go to the Hyperliquid website, connect your main wallet, and make a small deposit. This is a one-time setup step.

3. **Create and Authorize Trading Agents**

   `trade_executor.py` uses secure "agent" keys that can trade but cannot withdraw. You need to generate these and authorize them with your main wallet.

   * Run the `create_agent.py` script:

   ```bash
   python create_agent.py
   ```

   The script will output a new Agent Private Key. Copy this key and add it to your `.env` file (e.g., as `SCALPER_AGENT_PK`). Repeat this for each agent you want to create.

4. **Configure Your Strategies**

   Open the `_data/strategies.json` file to define which strategies you want to run; an example entry is shown below.

   * Set `"enabled": true` to activate a strategy.
   * Assign an `"agent"` (e.g., `"scalper"`, `"swing"`) to each strategy. The agent name must correspond to a key in your `.env` file (e.g., `SCALPER_AGENT_PK` -> `"scalper"`).
   * Configure the parameters for each strategy, such as the coin, timeframe, and any indicator settings.
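   A minimal entry in `_data/strategies.json` looks like the following (this mirrors the sample configuration shipped in `_data/strategies.json`; adjust names and parameters to your own setup):

   ```json
   {
     "sma_cross_eth_5m": {
       "enabled": true,
       "script": "strategy_runner.py",
       "class": "strategies.ma_cross_strategy.MaCrossStrategy",
       "agent": "scalper_agent",
       "parameters": {
         "coin": "ETH",
         "timeframe": "1m",
         "short_ma": 7,
         "long_ma": 44,
         "size": 0.0055,
         "leverage_long": 5,
         "leverage_short": 5
       }
     }
   }
   ```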
## Usage

Once everything is configured, you can run the main application from your terminal:

```bash
python main_app.py
```

## Documentation

Detailed project documentation is available in the `WIKI/` directory. Start with the summary page:

`WIKI/SUMMARY.md`

This contains links and explanations for `OVERVIEW.md`, `SETUP.md`, `SCRIPTS.md`, and other helpful pages that describe usage, data layout, agent management, development notes, and troubleshooting.
5
WIKI/.gitattributes
vendored
Normal file
@@ -0,0 +1,5 @@
# Treat markdown files as text with LF normalization
*.md text eol=lf

# Ensure JSON files are treated as text
*.json text
34
WIKI/AGENTS.md
Normal file
@@ -0,0 +1,34 @@
Agents and Keys

This project supports running multiple agent identities (private keys) to place orders on Hyperliquid. Agents are lightweight keys authorized on-chain by your main wallet.

Agent storage and environment

- For security, agent private keys should be stored as environment variables and not checked into source control.
- Supported patterns:
  - `AGENT_PRIVATE_KEY` (single default agent)
  - `<NAME>_AGENT_PK` or `<NAME>_AGENT_PRIVATE_KEY` (per-agent keys)

Discovering agents

- `trade_executor.py` scans environment variables for agent keys and loads them into `Exchange` objects so each agent can sign orders independently (a sketch of this pattern follows below).
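A minimal sketch of that discovery step, assuming the naming patterns above (the actual loading logic lives in `trade_executor.py` and may differ):

```python
import os

def discover_agent_keys() -> dict[str, str]:
    """Collect agent private keys from the environment using the documented patterns."""
    agents = {}
    default_key = os.getenv("AGENT_PRIVATE_KEY")
    if default_key:
        agents["default"] = default_key
    for name, value in os.environ.items():
        if value and (name.endswith("_AGENT_PK") or name.endswith("_AGENT_PRIVATE_KEY")):
            # e.g. EXECUTOR_SCALPER_AGENT_PK -> "executor_scalper"
            agent_name = name.removesuffix("_AGENT_PRIVATE_KEY").removesuffix("_AGENT_PK").lower()
            agents[agent_name] = value
    return agents
```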
Creating and authorizing agents

- Use `create_agent.py` with your `MAIN_WALLET_PRIVATE_KEY` to authorize a new agent name. The script will attempt to call `exchange.approve_agent(agent_name)` and print the returned agent private key.

Security notes

- Never commit private keys to Git. Keep them in a secure secrets store or local `.env` file excluded from version control.
- Rotate keys if they are ever exposed and re-authorize agents using your main wallet.

Example `.env` snippet

MAIN_WALLET_PRIVATE_KEY=<your-main-wallet-private-key>
MAIN_WALLET_ADDRESS=<your-main-wallet-address>
AGENT_PRIVATE_KEY=<agent-private-key>
EXECUTOR_SCALPER_AGENT_PK=<agent-private-key-for-scalper>

File `agents`

- This repository may contain a local `agents` file used as a quick snapshot; treat it as insecure and remove it from the repo or add it to `.gitignore` if it contains secrets.
20
WIKI/CONTRIBUTING.md
Normal file
@@ -0,0 +1,20 @@
Contributing

Thanks for considering contributing! Please follow these guidelines to make the process smooth.

How to contribute

1. Fork the repository and create a feature branch for your change.
2. Keep changes focused and add tests where appropriate.
3. Submit a Pull Request with a clear description and the reason for the change.

Coding standards

- Keep functions small and well-documented.
- Use the existing logging utilities for consistent output.
- Prefer safe, incremental changes for financial code.

Security and secrets

- Never commit private keys, API keys, or secrets. Use environment variables or a secrets manager.
- If you accidentally commit secrets, rotate them immediately.
31
WIKI/DATA.md
Normal file
@@ -0,0 +1,31 @@
Data layout and formats

This section describes the `_data/` directory and the important files used by the scripts.

Important files

- `_data/market_data.db` — SQLite database that stores candle tables. Tables are typically named `<COIN>_<INTERVAL>` (e.g., `BTC_1m`, `ETH_5m`).
- `_data/coin_precision.json` — Mapping of coin names to their size precision (created by `list_coins.py`).
- `_data/current_prices.json` — Latest market prices that `market.py` writes.
- `_data/fetcher_status.json` — Last-run metadata from `data_fetcher.py`.
- `_data/market_cap_data.json` — Market cap summary saved by `market_cap_fetcher.py`.
- `_data/strategies.json` — Configuration for strategies (enabled flag, parameters).
- `_data/strategy_status_<name>.json` — Per-strategy runtime status including last signal and price.
- `_data/executor_managed_positions.json` — Which strategy is currently managing which live position (used by `trade_executor`).

Candle schema

Each candle table contains columns similar to the following (see the query sketch below):
- `timestamp_ms` (INTEGER) — milliseconds since epoch
- `open`, `high`, `low`, `close` (FLOAT)
- `volume` (FLOAT)
- `number_of_trades` (INTEGER)
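For illustration, a table following this schema could be created and queried as below. This is a sketch only; the real tables are created and populated by `data_fetcher.py`, and the exact column set may vary per table.

```python
import sqlite3

# The 10s timeout matches the value the fetcher uses and avoids "database is locked" errors
conn = sqlite3.connect("_data/market_data.db", timeout=10)
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS "BTC_1m" (
        timestamp_ms INTEGER PRIMARY KEY,
        open FLOAT, high FLOAT, low FLOAT, close FLOAT,
        volume FLOAT,
        number_of_trades INTEGER
    )
    """
)
conn.commit()
# Most recent 100 one-minute candles for BTC
rows = conn.execute(
    'SELECT timestamp_ms, open, high, low, close FROM "BTC_1m" '
    "ORDER BY timestamp_ms DESC LIMIT 100"
).fetchall()
conn.close()
```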
Trade logs

- Persistent trade history is stored in `_logs/trade_history.csv` with the following columns: `timestamp_utc`, `strategy`, `coin`, `action`, `price`, `size`, `signal`, `pnl`. It can be inspected as shown below.
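A quick way to inspect the trade history, assuming pandas (which the project already uses) is installed:

```python
import pandas as pd

trades = pd.read_csv("_logs/trade_history.csv", parse_dates=["timestamp_utc"])
# Total recorded PNL per strategy
print(trades.groupby("strategy")["pnl"].sum())
```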
Backups and maintenance

- Periodically back up `_data/market_data.db`. The WAL and SHM files are also present when SQLite uses WAL mode.
- Keep JSON config/state files under version control only if they contain no secrets.
24
WIKI/DEVELOPMENT.md
Normal file
@@ -0,0 +1,24 @@
Development and testing

Code style and conventions

- Python 3.11+ with typing hints where helpful.
- Use `logging_utils.setup_logging` for consistent logs across scripts.

Running tests

- This repository doesn't currently include a formal test suite. Suggested quick checks:
  - Run `python list_coins.py` to verify connectivity to Hyperliquid Info.
  - Run `python -m pyflakes .` or `python -m pylint` if you have linters installed.

Adding a new strategy

1. Create a new script following the pattern in `strategy_template.py`.
2. Add an entry to `_data/strategies.json` with `enabled: true` and relevant parameters.
3. Ensure the strategy writes a status JSON file (`_data/strategy_status_<name>.json`) and uses `trade_log.log_trade` to record actions. A skeleton of the status-writing step is sketched below.
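A hypothetical skeleton for the status file in step 3, based on the status files found under `_data/` (the field names match those files; the helper name is illustrative, and the exact signature of `trade_log.log_trade` should be checked against `trade_log.py`):

```python
import json
from datetime import datetime, timezone

def write_status(name: str, signal: str, price: float) -> None:
    """Write the per-strategy status JSON that the dashboard and executor read."""
    now = datetime.now(timezone.utc).isoformat()
    status = {
        "strategy_name": name,
        "current_signal": signal,        # e.g. "BUY", "SELL", "HOLD", "FLAT"
        "last_signal_change_utc": now,   # simplified: a real strategy only updates this when the signal changes
        "signal_price": price,
        "last_checked_utc": now,
    }
    with open(f"_data/strategy_status_{name}.json", "w") as f:
        json.dump(status, f, indent=4)
```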
Recommended improvements (low-risk)

- Add a lightweight unit test suite (pytest) for core functions like timeframe parsing, SQL helpers, and signal calculation.
- Add CI (GitHub Actions) to run flake/pylint and unit tests on PRs.
- Move secrets handling to a `.env.example` and document environment variables in `WIKI/SETUP.md`.
29
WIKI/OVERVIEW.md
Normal file
@@ -0,0 +1,29 @@
Hyperliquid Trading Toolkit

This repository contains a collection of utility scripts, data fetchers, resamplers, trading strategies, and a trade executor for working with Hyperliquid trading APIs and crawled data. It is organized to support data collection, transformation, strategy development, and automated execution via agents.

Key components

- Data fetching and management: `data_fetcher.py`, `market.py`, `resampler.py`, `market_cap_fetcher.py`, `list_coins.py`
- Strategies: `strategy_sma_cross.py`, `strategy_template.py`, `strategy_sma_125d.py` (if present)
- Execution: `trade_executor.py`, `create_agent.py`, `agents` helper
- Utilities: `logging_utils.py`, `trade_log.py`
- Data storage: SQLite database in `_data/market_data.db` and JSON files in `_data`

Intended audience

- Developers building strategies and automations on Hyperliquid
- Data engineers collecting and processing market data
- Operators running the fetchers and executors on a scheduler or as system services

Project goals

- Reliable collection of 1m candles and resampling to common timeframes
- Clean separation between data, strategies, and execution
- Lightweight logging and traceable trade records

Where to start

- Read `WIKI/SETUP.md` to prepare your environment
- Use `WIKI/SCRIPTS.md` for a description of individual scripts and how to run them
- Inspect `WIKI/AGENTS.md` to understand agent keys and how to manage them
47
WIKI/SCRIPTS.md
Normal file
@@ -0,0 +1,47 @@
Scripts and How to Use Them

This file documents the main scripts in the repository: their purpose, typical runtime parameters, and key notes.

list_coins.py
- Purpose: Fetches asset metadata from Hyperliquid (name and size/precision) and saves `_data/coin_precision.json`.
- Usage: `python list_coins.py`
- Notes: Reads `hyperliquid.info.Info` and writes a JSON file. Useful to run before the market feeders.

market.py (MarketDataFeeder)
- Purpose: Fetches live prices from Hyperliquid and writes `_data/current_prices.json` while printing a live table.
- Usage: `python market.py --log-level normal`
- Notes: Expects `_data/coin_precision.json` to exist.

data_fetcher.py (CandleFetcherDB)
- Purpose: Fetches historical 1m candles and stores them in `_data/market_data.db` using a table-per-coin naming convention.
- Usage: `python data_fetcher.py --coins BTC ETH --interval 1m --days 7`
- Notes: Can be run regularly by a scheduler to keep the DB up to date.

resampler.py (Resampler)
- Purpose: Reads 1m candles from SQLite and resamples them to configured timeframes (e.g. 5m, 15m, 1h), appending new candles to tables (see the sketch below).
- Usage: `python resampler.py --coins BTC ETH --timeframes 5m 15m 1h --log-level normal`
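Conceptually the resampling step is the standard pandas OHLCV aggregation; a minimal sketch follows (illustrative only — the script itself reads from and writes back to the SQLite database):

```python
import pandas as pd

def resample_ohlcv(df_1m: pd.DataFrame, rule: str = "5min") -> pd.DataFrame:
    """Aggregate 1m candles (DatetimeIndex required) into a larger timeframe."""
    agg = {"open": "first", "high": "max", "low": "min", "close": "last", "volume": "sum"}
    return df_1m.resample(rule).agg(agg).dropna()
```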
market_cap_fetcher.py (MarketCapFetcher)
- Purpose: Pulls CoinGecko market cap numbers and maintains historical daily tables in the same SQLite DB.
- Usage: `python market_cap_fetcher.py --coins BTC ETH --log-level normal`
- Notes: An optional `COINGECKO_API_KEY` in `.env` avoids throttling.

strategy_sma_cross.py (SmaCrossStrategy)
- Purpose: Runs an SMA-based trading strategy. Reads candles from `_data/market_data.db` and writes status to `_data/strategy_status_<name>.json`.
- Usage: `python strategy_sma_cross.py --name sma_cross_1 --params '{"coin":"BTC","timeframe":"1m","fast":5,"slow":20}' --log-level normal`

trade_executor.py (TradeExecutor)
- Purpose: Orchestrates agent-based order execution using agent private keys found in environment variables. Uses `_data/strategies.json` to determine active strategies.
- Usage: `python trade_executor.py --log-level normal`
- Notes: Requires `MAIN_WALLET_ADDRESS` and agent keys. See `create_agent.py` to authorize agents on-chain.

create_agent.py
- Purpose: Authorizes a new on-chain agent using your main wallet (requires `MAIN_WALLET_PRIVATE_KEY`).
- Usage: `python create_agent.py`
- Notes: Prints the new agent private key to stdout — save it securely.

trade_log.py
- Purpose: Provides a thread-safe CSV trade history logger. Used by the executor and strategies to record actions.

Other utility scripts
- import_csv.py, fix_timestamps.py, list_coins.py, etc. — see file headers for details.
42
WIKI/SETUP.md
Normal file
@@ -0,0 +1,42 @@
Setup and Installation

Prerequisites

- Python 3.11+ (project uses modern dependencies)
- Git (optional)
- A Hyperliquid account and an activated main wallet if you want to authorize agents and trade

Virtual environment

1. Create a virtual environment:

   python -m venv .venv

2. Activate the virtual environment (PowerShell on Windows):

   .\.venv\Scripts\Activate.ps1

3. Upgrade pip and install dependencies:

   python -m pip install --upgrade pip
   pip install -r requirements.txt

Configuration

- Copy `.env.example` to `.env` and set the following variables as required (a loading sketch follows this list):
  - MAIN_WALLET_PRIVATE_KEY (used by `create_agent.py` to authorize agents)
  - MAIN_WALLET_ADDRESS (used by `trade_executor.py`)
  - AGENT_PRIVATE_KEY or per-agent keys like `EXECUTOR_SCALPER_AGENT_PK`
  - Optional: COINGECKO_API_KEY for `market_cap_fetcher.py` to avoid rate limits
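As an illustration, a script can pick these up at startup roughly as follows. This is a sketch that assumes the `python-dotenv` package (or an equivalent loader); check the individual scripts for how they actually read the environment.

```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # loads key=value pairs from .env into the process environment

main_wallet_address = os.getenv("MAIN_WALLET_ADDRESS")
agent_key = os.getenv("AGENT_PRIVATE_KEY") or os.getenv("EXECUTOR_SCALPER_AGENT_PK")
if not agent_key:
    raise SystemExit("No agent key found in the environment; see WIKI/AGENTS.md")
```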
Data directory

- The project writes and reads data from the `_data/` folder. Ensure the directory exists and is writable by the user running the scripts.

Quick test

After installing packages, run `list_coins.py` as a quick check to verify connectivity to the Hyperliquid info API:

   python list_coins.py

If you encounter import errors, ensure the virtual environment is active and the `requirements.txt` dependencies are installed.
15
WIKI/SUMMARY.md
Normal file
@@ -0,0 +1,15 @@
Project Wiki Summary

This directory contains human-friendly documentation for the project. Files:

- `OVERVIEW.md` — High-level overview and where to start
- `SETUP.md` — Environment setup and quick test steps
- `SCRIPTS.md` — Per-script documentation and usage examples
- `AGENTS.md` — How agents work and secure handling of keys
- `DATA.md` — Data folder layout and schema notes
- `DEVELOPMENT.md` — Developer guidance and recommended improvements
- `CONTRIBUTING.md` — How to contribute safely
- `TROUBLESHOOTING.md` — Common problems and solutions

Notes:
- These pages were generated from repository source files and common patterns in trading/data projects. Validate any sensitive information (agent keys) and remove it from the repository when sharing.
21
WIKI/TROUBLESHOOTING.md
Normal file
@@ -0,0 +1,21 @@
Troubleshooting common issues

1. Import errors
- Ensure the virtual environment is active.
- Run `pip install -r requirements.txt`.

2. Agent authorization failures
- Ensure your main wallet is activated on Hyperliquid and has funds.
- The `create_agent.py` script will print helpful messages if the vault (main wallet) cannot act.

3. SQLite locked errors
- Increase the SQLite timeout when opening connections (this project uses a 10s timeout in the fetcher); see the example below. Close other programs that may hold the DB open.
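For example, when opening your own connections:

```python
import sqlite3

# A longer timeout lets the connection wait for the fetcher to release its write lock
conn = sqlite3.connect("_data/market_data.db", timeout=10)
```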
4. Missing coin precision file
- Run `python list_coins.py` to regenerate `_data/coin_precision.json`.

5. Rate limits from CoinGecko
- Set `COINGECKO_API_KEY` in your `.env` file and ensure the fetcher respects backoff.

6. Agent keys in `agents` file or other local files
- Treat any `agents` file with private keys as compromised; rotate keys and remove the file from the repository.
BIN
__pycache__/trade_log.cpython-313.pyc
Normal file
Binary file not shown.
18
_data/backtesting_conf.json
Normal file
@@ -0,0 +1,18 @@
{
    "sma_cross_eth_5m": {
        "strategy_name": "sma_cross_2",
        "script": "strategies.ma_cross_strategy.MaCrossStrategy",
        "optimization_params": {
            "fast": {
                "start": 5,
                "end": 150,
                "step": 1
            },
            "slow": {
                "start": 0,
                "end": 0,
                "step": 1
            }
        }
    }
}
231
_data/candles/hyperliquid-historical.py
Normal file
@@ -0,0 +1,231 @@
import boto3
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import ClientError
import os
import argparse
from datetime import datetime, timedelta
import asyncio
import lz4.frame
from pathlib import Path
import csv
import json


# MUST USE PATHLIB INSTEAD
DIR_PATH = Path(__file__).parent
BUCKET = "hyperliquid-archive"
CSV_HEADER = ["datetime", "timestamp", "level", "price", "size", "number"]

# s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
# s3.download_file('hyperliquid-archive', 'market_data/20230916/9/l2Book/SOL.lz4', f"{dir_path}/SOL.lz4")

# earliest date: 20230415/0/


def get_args():
    parser = argparse.ArgumentParser(description="Retrieve historical tick level market data from Hyperliquid exchange")
    subparser = parser.add_subparsers(dest="tool", required=True, help="tool: download, decompress, to_csv")

    global_parser = subparser.add_parser("global_settings", add_help=False)
    global_parser.add_argument("t", metavar="Tickers", help="Tickers of assets to be downloaded separated by spaces. e.g. BTC ETH", nargs="+")
    global_parser.add_argument("--all", help="Apply action to all available dates and times.", action="store_true", default=False)
    global_parser.add_argument("--anonymous", help="Use anonymous (unsigned) S3 requests. Defaults to signed requests if not provided.", action="store_true", default=False)
    global_parser.add_argument("-sd", metavar="Start date", help="Starting date as one unbroken string formatted: YYYYMMDD. e.g. 20230916")
    global_parser.add_argument("-sh", metavar="Start hour", help="Hour of the starting day as an integer between 0 and 23. e.g. 9 Default: 0", type=int, default=0)
    global_parser.add_argument("-ed", metavar="End date", help="Ending date as one unbroken string formatted: YYYYMMDD. e.g. 20230916")
    global_parser.add_argument("-eh", metavar="End hour", help="Hour of the ending day as an integer between 0 and 23. e.g. 9 Default: 23", type=int, default=23)

    download_parser = subparser.add_parser("download", help="Download historical market data", parents=[global_parser])
    decompress_parser = subparser.add_parser("decompress", help="Decompress downloaded lz4 data", parents=[global_parser])
    to_csv_parser = subparser.add_parser("to_csv", help="Convert decompressed downloads into formatted CSV", parents=[global_parser])

    return parser.parse_args()


def make_date_list(start_date, end_date):
    start_date = datetime.strptime(start_date, '%Y%m%d')
    end_date = datetime.strptime(end_date, '%Y%m%d')

    date_list = []

    current_date = start_date
    while current_date <= end_date:
        date_list.append(current_date.strftime('%Y%m%d'))
        current_date += timedelta(days=1)

    return date_list


def make_date_hour_list(date_list, start_hour, end_hour, delimiter="/"):
    date_hour_list = []
    end_date = date_list[-1]
    hour = start_hour
    end = 23
    for date in date_list:
        if date == end_date:
            end = end_hour

        while hour <= end:
            date_hour = date + delimiter + str(hour)
            date_hour_list.append(date_hour)
            hour += 1

        hour = 0

    return date_hour_list


async def download_object(s3, asset, date_hour):
    date_and_hour = date_hour.split("/")
    key = f"market_data/{date_hour}/l2Book/{asset}.lz4"
    dest = f"{DIR_PATH}/downloads/{asset}/{date_and_hour[0]}-{date_and_hour[1]}.lz4"
    try:
        s3.download_file(BUCKET, key, dest)
    except ClientError as e:
        # Print a concise message and continue. Common errors: 403 Forbidden, 404 Not Found.
        code = e.response.get('Error', {}).get('Code') if hasattr(e, 'response') else 'Unknown'
        print(f"Failed to download {key}: {code} - {e}")
        return


async def download_objects(s3, assets, date_hour_list):
    print(f"Downloading {len(date_hour_list)} objects...")
    for asset in assets:
        await asyncio.gather(*[download_object(s3, asset, date_hour) for date_hour in date_hour_list])


async def decompress_file(asset, date_hour):
    lz_file_path = DIR_PATH / "downloads" / asset / f"{date_hour}.lz4"
    file_path = DIR_PATH / "downloads" / asset / date_hour

    if not lz_file_path.is_file():
        print(f"decompress_file: file not found: {lz_file_path}")
        return

    with lz4.frame.open(lz_file_path, mode='r') as lzfile:
        data = lzfile.read()
    with open(file_path, "wb") as file:
        file.write(data)


async def decompress_files(assets, date_hour_list):
    print(f"Decompressing {len(date_hour_list)} files...")
    for asset in assets:
        await asyncio.gather(*[decompress_file(asset, date_hour) for date_hour in date_hour_list])


def write_rows(csv_writer, line):
    rows = []
    entry = json.loads(line)
    date_time = entry["time"]
    timestamp = str(entry["raw"]["data"]["time"])
    all_orders = entry["raw"]["data"]["levels"]

    for i, order_level in enumerate(all_orders):
        level = str(i + 1)
        for order in order_level:
            price = order["px"]
            size = order["sz"]
            number = str(order["n"])

            rows.append([date_time, timestamp, level, price, size, number])

    for row in rows:
        csv_writer.writerow(row)


async def convert_file(asset, date_hour):
    file_path = DIR_PATH / "downloads" / asset / date_hour
    csv_path = DIR_PATH / "csv" / asset / f"{date_hour}.csv"

    with open(csv_path, "w", newline='') as csv_file:
        csv_writer = csv.writer(csv_file, dialect="excel")
        csv_writer.writerow(CSV_HEADER)

        with open(file_path) as file:
            for line in file:
                write_rows(csv_writer, line)


async def files_to_csv(assets, date_hour_list):
    print(f"Converting {len(date_hour_list)} files to CSV...")
    for asset in assets:
        await asyncio.gather(*[convert_file(asset, date_hour) for date_hour in date_hour_list])


def main():
    print(DIR_PATH)
    args = get_args()

    # Create S3 client according to whether anonymous access was requested.
    if getattr(args, 'anonymous', False):
        s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
    else:
        s3 = boto3.client('s3')

    downloads_path = DIR_PATH / "downloads"
    downloads_path.mkdir(exist_ok=True)

    csv_path = DIR_PATH / "csv"
    csv_path.mkdir(exist_ok=True)

    for asset in args.t:
        downloads_asset_path = downloads_path / asset
        downloads_asset_path.mkdir(exist_ok=True)
        csv_asset_path = csv_path / asset
        csv_asset_path.mkdir(exist_ok=True)

    date_list = make_date_list(args.sd, args.ed)
    loop = asyncio.new_event_loop()

    if args.tool == "download":
        date_hour_list = make_date_hour_list(date_list, args.sh, args.eh)
        loop.run_until_complete(download_objects(s3, args.t, date_hour_list))
        loop.close()

    if args.tool == "decompress":
        date_hour_list = make_date_hour_list(date_list, args.sh, args.eh, delimiter="-")
        loop.run_until_complete(decompress_files(args.t, date_hour_list))
        loop.close()

    if args.tool == "to_csv":
        date_hour_list = make_date_hour_list(date_list, args.sh, args.eh, delimiter="-")
        loop.run_until_complete(files_to_csv(args.t, date_hour_list))
        loop.close()

    print("Done")


if __name__ == "__main__":
    main()
8
_data/candles/requirements.txt
Normal file
@@ -0,0 +1,8 @@
boto3==1.34.131
botocore==1.34.131
jmespath==1.0.1
lz4==4.3.3
python-dateutil==2.9.0.post0
s3transfer==0.10.1
six==1.16.0
urllib3==2.2.2
7
_data/executor_managed_positions.json
Normal file
@@ -0,0 +1,7 @@
{
    "sma_cross_2": {
        "coin": "BTC",
        "side": "short",
        "size": 0.0001
    }
}
47
_data/market_cap_data.json
Normal file
@@ -0,0 +1,47 @@
{
    "BTC_market_cap": {
        "datetime_utc": "2025-10-14 19:07:32",
        "market_cap": 2254100854707.6426
    },
    "ETH_market_cap": {
        "datetime_utc": "2025-10-14 19:07:45",
        "market_cap": 498260644977.71
    },
    "SOL_market_cap": {
        "datetime_utc": "2025-10-14 19:07:54",
        "market_cap": 110493585034.85222
    },
    "BNB_market_cap": {
        "datetime_utc": "2025-10-14 19:08:01",
        "market_cap": 169461959349.39044
    },
    "ZEC_market_cap": {
        "datetime_utc": "2025-10-14 19:08:32",
        "market_cap": 3915238492.7266335
    },
    "SUI_market_cap": {
        "datetime_utc": "2025-10-14 19:08:51",
        "market_cap": 10305847774.680008
    },
    "STABLECOINS_market_cap": {
        "datetime_utc": "2025-10-14 00:00:00",
        "market_cap": 551315140796.8396
    },
    "ASTER_market_cap": {
        "datetime_utc": "2025-10-14 20:47:18",
        "market_cap": 163953008.77347806
    },
    "HYPE_market_cap": {
        "datetime_utc": "2025-10-14 20:55:21",
        "market_cap": 10637373991.458858
    },
    "TOTAL_market_cap_daily": {
        "datetime_utc": "2025-10-16 00:00:00",
        "market_cap": 3849619103702.8604
    },
    "PUMP_market_cap": {
        "datetime_utc": "2025-10-14 21:02:30",
        "market_cap": 1454398647.593871
    },
    "summary_last_updated_utc": "2025-10-16T00:16:09.640449+00:00"
}
BIN
_data/market_data.db-shm
Normal file
Binary file not shown.
BIN
_data/market_data.db-wal
Normal file
Binary file not shown.
32
_data/strategies.json
Normal file
@@ -0,0 +1,32 @@
{
    "sma_cross_eth_5m": {
        "enabled": true,
        "script": "strategy_runner.py",
        "class": "strategies.ma_cross_strategy.MaCrossStrategy",
        "agent": "scalper_agent",
        "parameters": {
            "coin": "ETH",
            "timeframe": "1m",
            "short_ma": 7,
            "long_ma": 44,
            "size": 0.0055,
            "leverage_long": 5,
            "leverage_short": 5
        }
    },
    "sma_125d_btc": {
        "enabled": true,
        "script": "strategy_runner.py",
        "class": "strategies.single_sma_strategy.SingleSmaStrategy",
        "agent": "swing_agent",
        "parameters": {
            "coin": "BTC",
            "timeframe": "1d",
            "sma_period": 44,
            "size": 0.0001,
            "leverage_long": 2,
            "leverage_short": 1
        }
    }
}
7
_data/strategy_status_ma_cross_btc.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "ma_cross_btc",
    "current_signal": "HOLD",
    "last_signal_change_utc": "2025-10-12T17:00:00+00:00",
    "signal_price": 114286.0,
    "last_checked_utc": "2025-10-15T11:48:55.092260+00:00"
}
7
_data/strategy_status_sma_125d_btc.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_125d_btc",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-14T00:00:00+00:00",
    "signal_price": 113026.0,
    "last_checked_utc": "2025-10-16T10:42:03.203292+00:00"
}
7
_data/strategy_status_sma_125d_eth.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_125d_eth",
    "current_signal": "BUY",
    "last_signal_change_utc": "2025-08-26T00:00:00+00:00",
    "signal_price": 4600.63,
    "last_checked_utc": "2025-10-15T17:35:17.663159+00:00"
}
7
_data/strategy_status_sma_44d_btc.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_44d_btc",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-14T00:00:00+00:00",
    "signal_price": 113026.0,
    "last_checked_utc": "2025-10-16T10:42:03.202977+00:00"
}
7
_data/strategy_status_sma_5m_eth.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_5m_eth",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-15T17:30:00+00:00",
    "signal_price": 3937.5,
    "last_checked_utc": "2025-10-15T17:35:05.035566+00:00"
}
7
_data/strategy_status_sma_cross.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_cross",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-15T11:45:00+00:00",
    "signal_price": 111957.0,
    "last_checked_utc": "2025-10-15T12:10:05.048434+00:00"
}
7
_data/strategy_status_sma_cross_1.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_cross_1",
    "current_signal": "FLAT",
    "last_signal_change_utc": "2025-10-18T20:22:00+00:00",
    "signal_price": 3893.9,
    "last_checked_utc": "2025-10-18T20:30:05.021192+00:00"
}
7
_data/strategy_status_sma_cross_2.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_cross_2",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-20T00:00:00+00:00",
    "signal_price": 110811.0,
    "last_checked_utc": "2025-10-20T18:45:51.578502+00:00"
}
7
_data/strategy_status_sma_cross_eth_5m.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_cross_eth_5m",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-15T11:45:00+00:00",
    "signal_price": 4106.1,
    "last_checked_utc": "2025-10-15T12:05:05.022308+00:00"
}
221
address_monitor.py
Normal file
@@ -0,0 +1,221 @@
import os
import sys
import time
import json
import argparse
from datetime import datetime, timezone
from hyperliquid.info import Info
from hyperliquid.utils import constants
from collections import deque
import logging
import csv

from logging_utils import setup_logging

# --- Configuration ---
DEFAULT_ADDRESSES_TO_WATCH = [
    #"0xd4c1f7e8d876c4749228d515473d36f919583d1d",
    "0x0fd468a73084daa6ea77a9261e40fdec3e67e0c7",
    # "0x4d69495d16fab95c3c27b76978affa50301079d0",
    # "0x09bc1cf4d9f0b59e1425a8fde4d4b1f7d3c9410d",
    "0xc6ac58a7a63339898aeda32499a8238a46d88e84",
    "0xa8ef95dbd3db55911d3307930a84b27d6e969526",
    # "0x4129c62faf652fea61375dcd9ca8ce24b2bb8b95",
    "0xbf1935fe7ab6d0aa3ee8d3da47c2f80e215b2a1c",
]
MAX_FILLS_TO_DISPLAY = 10
LOGS_DIR = "_logs"
recent_fills = {}
_lines_printed = 0

TABLE_HEADER = f"{'Time (UTC)':<10} | {'Coin':<6} | {'Side':<5} | {'Size':>15} | {'Price':>15} | {'Value (USD)':>20}"
TABLE_WIDTH = len(TABLE_HEADER)

def log_fill_to_csv(address: str, fill_data: dict):
    """Appends a single fill record to the CSV file for a specific address."""
    log_file_path = os.path.join(LOGS_DIR, f"fills_{address}.csv")
    file_exists = os.path.exists(log_file_path)

    # The CSV will store a flattened version of the decoded fill
    csv_row = {
        'time_utc': fill_data['time'].isoformat(),
        'coin': fill_data['coin'],
        'side': fill_data['side'],
        'price': fill_data['price'],
        'size': fill_data['size'],
        'value_usd': fill_data['value']
    }

    try:
        with open(log_file_path, 'a', newline='', encoding='utf-8') as f:
            writer = csv.DictWriter(f, fieldnames=csv_row.keys())
            if not file_exists:
                writer.writeheader()
            writer.writerow(csv_row)
    except IOError as e:
        logging.error(f"Failed to write to CSV log for {address}: {e}")

def on_message(message):
    """
    Callback function to process incoming userEvents from the WebSocket.
    """
    try:
        logging.debug(f"Received message: {message}")
        channel = message.get("channel")
        if channel in ("user", "userFills"):
            data = message.get("data")
            if not data:
                return

            user_address = data.get("user", "").lower()
            fills = data.get("fills", [])

            if user_address in recent_fills and fills:
                logging.info(f"Fill detected for user: {user_address}")
                for fill_data in fills:
                    decoded_fill = {
                        "time": datetime.fromtimestamp(fill_data['time'] / 1000, tz=timezone.utc),
                        "coin": fill_data['coin'],
                        "side": "BUY" if fill_data['side'] == "B" else "SELL",
                        "price": float(fill_data['px']),
                        "size": float(fill_data['sz']),
                        "value": float(fill_data['px']) * float(fill_data['sz']),
                    }
                    recent_fills[user_address].append(decoded_fill)
                    # --- ADDED: Log every fill to its CSV file ---
                    log_fill_to_csv(user_address, decoded_fill)

    except (KeyError, TypeError, ValueError) as e:
        logging.error(f"Error processing message: {e} | Data: {message}")

def build_fills_table(address: str, fills: deque) -> list:
    """Builds the formatted lines for a single address's fills table."""
    lines = []
    short_address = f"{address[:6]}...{address[-4:]}"

    lines.append(f"--- Fills for {short_address} ---")
    lines.append(TABLE_HEADER)
    lines.append("-" * TABLE_WIDTH)

    for fill in list(fills):
        lines.append(
            f"{fill['time'].strftime('%H:%M:%S'):<10} | "
            f"{fill['coin']:<6} | "
            f"{fill['side']:<5} | "
            f"{fill['size']:>15.4f} | "
            f"{fill['price']:>15,.2f} | "
            f"${fill['value']:>18,.2f}"
        )

    padding_needed = MAX_FILLS_TO_DISPLAY - len(fills)
    for _ in range(padding_needed):
        lines.append("")

    return lines

def display_dashboard():
    """
    Clears the screen and prints a two-column layout of recent fills tables.
    """
    global _lines_printed

    if _lines_printed > 0:
        print(f"\x1b[{_lines_printed}A", end="")

    output_lines = ["--- Live Address Fill Monitor ---", ""]

    addresses_to_display = list(recent_fills.keys())
    num_addresses = len(addresses_to_display)
    mid_point = (num_addresses + 1) // 2
    left_column_addresses = addresses_to_display[:mid_point]
    right_column_addresses = addresses_to_display[mid_point:]

    separator = " | "

    for i in range(mid_point):
        left_address = left_column_addresses[i]
        left_table_lines = build_fills_table(left_address, recent_fills[left_address])

        right_table_lines = []
        if i < len(right_column_addresses):
            right_address = right_column_addresses[i]
            right_table_lines = build_fills_table(right_address, recent_fills[right_address])

        table_height = 3 + MAX_FILLS_TO_DISPLAY
        for j in range(table_height):
            left_part = left_table_lines[j] if j < len(left_table_lines) else ""
            right_part = right_table_lines[j] if j < len(right_table_lines) else ""
            output_lines.append(f"{left_part:<{TABLE_WIDTH}}{separator}{right_part}")
        output_lines.append("")

    final_output = "\n".join(output_lines) + "\n\x1b[J"
    print(final_output, end="")

    _lines_printed = len(output_lines)
    sys.stdout.flush()

def main():
    """
    Main function to set up the WebSocket and run the display loop.
    """
    global recent_fills
    parser = argparse.ArgumentParser(description="Monitor live fills for specific wallet addresses on Hyperliquid.")
    parser.add_argument(
        "--addresses",
        nargs='+',
        default=DEFAULT_ADDRESSES_TO_WATCH,
        help="A space-separated list of Ethereum addresses to monitor."
    )
    parser.add_argument(
        "--log-level",
        default="normal",
        choices=['off', 'normal', 'debug'],
        help="Set the logging level for the script."
    )
    args = parser.parse_args()

    setup_logging(args.log_level, 'AddressMonitor')

    # --- ADDED: Ensure the logs directory exists ---
    if not os.path.exists(LOGS_DIR):
        os.makedirs(LOGS_DIR)

    addresses_to_watch = []
    for addr in args.addresses:
        clean_addr = addr.strip().lower()
        if len(clean_addr) == 42 and clean_addr.startswith('0x'):
            addresses_to_watch.append(clean_addr)
        else:
            logging.warning(f"Invalid or malformed address provided: '{addr}'. Skipping.")

    recent_fills = {addr: deque(maxlen=MAX_FILLS_TO_DISPLAY) for addr in addresses_to_watch}

    if not addresses_to_watch:
        print("No valid addresses configured to watch. Exiting.", file=sys.stderr)
        return

    info = Info(constants.MAINNET_API_URL, skip_ws=False)

    for addr in addresses_to_watch:
        try:
            info.subscribe({"type": "userFills", "user": addr}, on_message)
            logging.debug(f"Queued subscribe for userFills: {addr}")
            time.sleep(0.02)
        except Exception as e:
            logging.error(f"Failed to subscribe for {addr}: {e}")

    logging.info(f"Subscribed to userFills for {len(addresses_to_watch)} addresses")

    print("\nDisplaying live fill data... Press Ctrl+C to stop.")
    try:
        while True:
            display_dashboard()
            time.sleep(0.2)
    except KeyboardInterrupt:
        print("\nStopping WebSocket listener...")
        info.ws_manager.stop()
        print("Listener stopped.")

if __name__ == "__main__":
    main()
22
agents
@@ -1,3 +1,19 @@
-agent 001
-wallet: 0x7773833262f020c7979ec8aae38455c17ba4040c
-Private Key: 0x659326d719a4322244d6e7f28e7fa2780f034e9f6a342ef1919664817e6248df
+==================================================
+SAVE THESE SECURELY. This is what your bot will use.
+Name: trade_executor
+(Agent has a default long-term validity)
+🔑 Agent Private Key: 0xabed7379ec33253694eba50af8a392a88ea32b72b5f4f9cddceb0f5879428b69
+🏠 Agent Address: 0xcB262CeAaE5D8A99b713f87a43Dd18E6Be892739
+==================================================
+SAVE THESE SECURELY. This is what your bot will use.
+Name: executor_scalper
+(Agent has a default long-term validity)
+🔑 Agent Private Key: 0xe7bd4f3a1e29252ec40edff1bf796beaf13993d23a0c288a75d79c53e3c97812
+🏠 Agent Address: 0xD211ba67162aD4E785cd4894D00A1A7A32843094
+==================================================
+SAVE THESE SECURELY. This is what your bot will use.
+Name: executor_swing
+(Agent has a default long-term validity)
+🔑 Agent Private Key: 0xb6811c8b4a928556b3b95ccfaf72eb452b0d89a903f251b86955654672a3b6ab
+🏠 Agent Address: 0xAD27c936672Fa368c2d96a47FDA34e8e3A0f318C
+==================================================
368
backtester.py
Normal file
368
backtester.py
Normal file
@ -0,0 +1,368 @@
|
|||||||
|
import argparse
|
||||||
|
import logging
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import sqlite3
|
||||||
|
import pandas as pd
|
||||||
|
import json
|
||||||
|
from datetime import datetime, timedelta
|
||||||
|
import itertools
|
||||||
|
import multiprocessing
|
||||||
|
from functools import partial
|
||||||
|
import time
|
||||||
|
import importlib
|
||||||
|
import signal
|
||||||
|
|
||||||
|
from logging_utils import setup_logging
|
||||||
|
|
||||||
|
def _run_trade_simulation(df: pd.DataFrame, capital: float, size_pct: float, leverage_long: int, leverage_short: int, taker_fee_pct: float, maker_fee_pct: float) -> tuple[float, list]:
|
||||||
|
"""
|
||||||
|
Simulates a trading strategy with portfolio management, including capital,
|
||||||
|
position sizing, leverage, and fees.
|
||||||
|
"""
|
||||||
|
df.dropna(inplace=True)
|
||||||
|
if df.empty: return capital, []
|
||||||
|
|
||||||
|
df['position_change'] = df['signal'].diff()
|
||||||
|
trades = []
|
||||||
|
entry_price = 0
|
||||||
|
asset_size = 0
|
||||||
|
current_position = 0 # 0=flat, 1=long, -1=short
|
||||||
|
equity = capital
|
||||||
|
|
||||||
|
for i, row in df.iterrows():
|
||||||
|
# --- Close Positions ---
|
||||||
|
if (current_position == 1 and row['signal'] != 1) or \
|
||||||
|
(current_position == -1 and row['signal'] != -1):
|
||||||
|
|
||||||
|
exit_value = asset_size * row['close']
|
||||||
|
fee = exit_value * (taker_fee_pct / 100)
|
||||||
|
|
||||||
|
if current_position == 1: # Closing a long
|
||||||
|
pnl_usd = (row['close'] - entry_price) * asset_size
|
||||||
|
equity += pnl_usd - fee
|
||||||
|
trades.append({'pnl_usd': pnl_usd, 'pnl_pct': (row['close'] - entry_price) / entry_price, 'type': 'long'})
|
||||||
|
|
||||||
|
elif current_position == -1: # Closing a short
|
||||||
|
pnl_usd = (entry_price - row['close']) * asset_size
|
||||||
|
equity += pnl_usd - fee
|
||||||
|
trades.append({'pnl_usd': pnl_usd, 'pnl_pct': (entry_price - row['close']) / entry_price, 'type': 'short'})
|
||||||
|
|
||||||
|
entry_price = 0
|
||||||
|
asset_size = 0
|
||||||
|
current_position = 0
|
||||||
|
|
||||||
|
# --- Open New Positions ---
|
||||||
|
if current_position == 0:
|
||||||
|
if row['signal'] == 1: # Open Long
|
||||||
|
margin_to_use = equity * (size_pct / 100)
|
||||||
|
trade_value = margin_to_use * leverage_long
|
||||||
|
asset_size = trade_value / row['close']
|
||||||
|
fee = trade_value * (taker_fee_pct / 100)
|
||||||
|
equity -= fee
|
||||||
|
entry_price = row['close']
|
||||||
|
current_position = 1
|
||||||
|
elif row['signal'] == -1: # Open Short
|
||||||
|
margin_to_use = equity * (size_pct / 100)
|
||||||
|
trade_value = margin_to_use * leverage_short
|
||||||
|
asset_size = trade_value / row['close']
|
||||||
|
fee = trade_value * (taker_fee_pct / 100)
|
||||||
|
equity -= fee
|
||||||
|
entry_price = row['close']
|
||||||
|
current_position = -1
|
||||||
|
|
||||||
|
return equity, trades
|
||||||


def simulation_worker(params: dict, db_path: str, coin: str, timeframe: str, start_date: str, end_date: str, strategy_class, sim_params: dict) -> tuple[dict, float, list]:
    """
    Worker function that loads data, runs the full simulation, and returns results.
    """
    df = pd.DataFrame()
    try:
        with sqlite3.connect(db_path) as conn:
            query = f'SELECT datetime_utc, open, high, low, close FROM "{coin}_{timeframe}" WHERE datetime_utc >= ? AND datetime_utc <= ? ORDER BY datetime_utc'
            df = pd.read_sql(query, conn, params=(start_date, end_date), parse_dates=['datetime_utc'])
            if not df.empty:
                df.set_index('datetime_utc', inplace=True)
    except Exception as e:
        print(f"Worker error loading data for params {params}: {e}")
        return (params, sim_params['capital'], [])

    if df.empty:
        return (params, sim_params['capital'], [])

    strategy_instance = strategy_class(params)
    df_with_signals = strategy_instance.calculate_signals(df)

    final_equity, trades = _run_trade_simulation(df_with_signals, **sim_params)
    return (params, final_equity, trades)


def init_worker():
    signal.signal(signal.SIGINT, signal.SIG_IGN)


class Backtester:
    def __init__(self, log_level: str, strategy_name_to_test: str, start_date: str, sim_params: dict):
        setup_logging(log_level, 'Backtester')
        self.db_path = os.path.join("_data", "market_data.db")
        self.simulation_params = sim_params

        self.backtest_config = self._load_backtest_config(strategy_name_to_test)
        # ... (rest of __init__ is unchanged)
        self.strategy_name = self.backtest_config.get('strategy_name')
        self.strategy_config = self._load_strategy_config()
        self.params = self.strategy_config.get('parameters', {})
        self.coin = self.params.get('coin')
        self.timeframe = self.params.get('timeframe')
        self.pool = None
        self.full_history_start_date = start_date
        try:
            module_path, class_name = self.backtest_config['script'].rsplit('.', 1)
            module = importlib.import_module(module_path)
            self.strategy_class = getattr(module, class_name)
            logging.info(f"Successfully loaded strategy class '{class_name}'.")
        except (ImportError, AttributeError, KeyError) as e:
            logging.error(f"Could not load strategy script '{self.backtest_config.get('script')}': {e}")
            sys.exit(1)

    def _load_backtest_config(self, name_to_test: str):
        # ... (unchanged)
        config_path = os.path.join("_data", "backtesting_conf.json")
        try:
            with open(config_path, 'r') as f: return json.load(f).get(name_to_test)
        except (FileNotFoundError, json.JSONDecodeError) as e:
            logging.error(f"Could not load backtesting configuration: {e}")
            return None

    def _load_strategy_config(self):
        # ... (unchanged)
        config_path = os.path.join("_data", "strategies.json")
        try:
            with open(config_path, 'r') as f: return json.load(f).get(self.strategy_name)
        except (FileNotFoundError, json.JSONDecodeError) as e:
            logging.error(f"Could not load strategy configuration: {e}")
            return None

    def run_walk_forward_optimization(self, optimization_weeks: int, testing_weeks: int, step_weeks: int):
        # ... (unchanged, will now use the new simulation logic via the worker)
        full_df = self.load_data(self.full_history_start_date, datetime.now().strftime("%Y-%m-%d"))
        if full_df.empty: return

        optimization_delta = timedelta(weeks=optimization_weeks)
        testing_delta = timedelta(weeks=testing_weeks)
        step_delta = timedelta(weeks=step_weeks)

        all_out_of_sample_trades = []
        all_period_summaries = []

        current_date = full_df.index[0]
        end_date = full_df.index[-1]

        period_num = 1
        while current_date + optimization_delta + testing_delta <= end_date:
            logging.info(f"\n--- Starting Walk-Forward Period {period_num} ---")

            in_sample_start = current_date
            in_sample_end = in_sample_start + optimization_delta
            out_of_sample_end = in_sample_end + testing_delta

            in_sample_df = full_df[in_sample_start:in_sample_end]
            out_of_sample_df = full_df[in_sample_end:out_of_sample_end]

            if in_sample_df.empty or out_of_sample_df.empty:
                break

            logging.info(f"In-Sample (Optimization): {in_sample_df.index[0].date()} to {in_sample_df.index[-1].date()}")
            logging.info(f"Out-of-Sample (Testing): {out_of_sample_df.index[0].date()} to {out_of_sample_df.index[-1].date()}")

            best_result = self._find_best_params(in_sample_df)
            if not best_result:
                all_period_summaries.append({"period": period_num, "params": "None Found"})
                current_date += step_delta
                period_num += 1
                continue

            print("\n--- [1] In-Sample Optimization Result ---")
            print(f"Best Parameters Found: {best_result['params']}")
            self._generate_report(best_result['final_equity'], best_result['trades_list'], "In-Sample Performance with Best Params")

            logging.info(f"\n--- [2] Forward Testing on Out-of-Sample Data ---")
            df_with_signals = self.strategy_class(best_result['params']).calculate_signals(out_of_sample_df.copy())
            final_equity_oos, out_of_sample_trades = _run_trade_simulation(df_with_signals, **self.simulation_params)

            all_out_of_sample_trades.extend(out_of_sample_trades)
            oos_summary = self._generate_report(final_equity_oos, out_of_sample_trades, "Out-of-Sample Performance")

            # Store the summary for the final table
            summary_to_store = {"period": period_num, "params": best_result['params'], **oos_summary}
            all_period_summaries.append(summary_to_store)

            current_date += step_delta
            period_num += 1

        # ... (Final reports will be generated here, but need to adapt to equity tracking)
        print("\n" + "="*50)
        # self._generate_report(all_out_of_sample_trades, "FINAL AGGREGATE WALK-FORWARD PERFORMANCE")
        print("="*50)

        # --- ADDED: Final summary table of best parameters and performance per period ---
        print("\n--- Summary of Best Parameters and Performance per Period ---")
        header = f"{'#':<3} | {'Best Parameters':<30} | {'Trades':>8} | {'Longs':>6} | {'Shorts':>7} | {'Win %':>8} | {'L Win %':>9} | {'S Win %':>9} | {'Return %':>10} | {'Equity':>15}"
        print(header)
        print("-" * len(header))
        for item in all_period_summaries:
            params_str = str(item.get('params', 'N/A'))
            trades = item.get('num_trades', 'N/A')
            longs = item.get('num_longs', 'N/A')
            shorts = item.get('num_shorts', 'N/A')
            win_rate = f"{item.get('win_rate', 0):.2f}%" if 'win_rate' in item else 'N/A'
            long_win_rate = f"{item.get('long_win_rate', 0):.2f}%" if 'long_win_rate' in item else 'N/A'
            short_win_rate = f"{item.get('short_win_rate', 0):.2f}%" if 'short_win_rate' in item else 'N/A'
            return_pct = f"{item.get('return_pct', 0):.2f}%" if 'return_pct' in item else 'N/A'
            equity = f"${item.get('final_equity', 0):,.2f}" if 'final_equity' in item else 'N/A'
            print(f"{item['period']:<3} | {params_str:<30} | {trades:>8} | {longs:>6} | {shorts:>7} | {win_rate:>8} | {long_win_rate:>9} | {short_win_rate:>9} | {return_pct:>10} | {equity:>15}")

    def _find_best_params(self, df: pd.DataFrame) -> dict:
        param_configs = self.backtest_config.get('optimization_params', {})
        param_names = list(param_configs.keys())
        param_ranges = [range(p['start'], p['end'] + 1, p['step']) for p in param_configs.values()]

        all_combinations = list(itertools.product(*param_ranges))
        param_dicts = [dict(zip(param_names, combo)) for combo in all_combinations]

        logging.info(f"Optimizing on {len(all_combinations)} combinations...")

        num_cores = 60
        self.pool = multiprocessing.Pool(processes=num_cores, initializer=init_worker)

        worker = partial(
            simulation_worker,
            db_path=self.db_path, coin=self.coin, timeframe=self.timeframe,
            start_date=df.index[0].isoformat(), end_date=df.index[-1].isoformat(),
            strategy_class=self.strategy_class,
            sim_params=self.simulation_params
        )

        all_results = self.pool.map(worker, param_dicts)

        self.pool.close()
        self.pool.join()
        self.pool = None

        results = [{'params': params, 'final_equity': final_equity, 'trades_list': trades} for params, final_equity, trades in all_results if trades]
        if not results: return None
        return max(results, key=lambda x: x['final_equity'])

    def load_data(self, start_date, end_date):
        # ... (unchanged)
        table_name = f"{self.coin}_{self.timeframe}"
        logging.info(f"Loading full dataset for {table_name}...")
        try:
            with sqlite3.connect(self.db_path) as conn:
                query = f'SELECT * FROM "{table_name}" WHERE datetime_utc >= ? AND datetime_utc <= ? ORDER BY datetime_utc'
                df = pd.read_sql(query, conn, params=(start_date, end_date), parse_dates=['datetime_utc'])
                if df.empty: return pd.DataFrame()
                df.set_index('datetime_utc', inplace=True)
                return df
        except Exception as e:
            logging.error(f"Failed to load data for backtest: {e}")
            return pd.DataFrame()

    def _generate_report(self, final_equity: float, trades: list, title: str) -> dict:
        """Calculates, prints, and returns a detailed performance report."""
        print(f"\n--- {title} ---")

        initial_capital = self.simulation_params['capital']

        if not trades:
            print("No trades were executed during this period.")
            print(f"Final Equity: ${initial_capital:,.2f}")
            return {"num_trades": 0, "num_longs": 0, "num_shorts": 0, "win_rate": 0, "long_win_rate": 0, "short_win_rate": 0, "return_pct": 0, "final_equity": initial_capital}

        num_trades = len(trades)
        long_trades = [t for t in trades if t.get('type') == 'long']
        short_trades = [t for t in trades if t.get('type') == 'short']

        pnls_pct = pd.Series([t['pnl_pct'] for t in trades])

        wins = pnls_pct[pnls_pct > 0]
        win_rate = (len(wins) / num_trades) * 100 if num_trades > 0 else 0

        long_wins = len([t for t in long_trades if t['pnl_pct'] > 0])
        short_wins = len([t for t in short_trades if t['pnl_pct'] > 0])
        long_win_rate = (long_wins / len(long_trades)) * 100 if long_trades else 0
        short_win_rate = (short_wins / len(short_trades)) * 100 if short_trades else 0

        total_return_pct = ((final_equity - initial_capital) / initial_capital) * 100

        print(f"Final Equity: ${final_equity:,.2f}")
        print(f"Total Return: {total_return_pct:.2f}%")
        print(f"Total Trades: {num_trades} (Longs: {len(long_trades)}, Shorts: {len(short_trades)})")
        print(f"Win Rate (Overall): {win_rate:.2f}%")
        print(f"Win Rate (Longs): {long_win_rate:.2f}%")
        print(f"Win Rate (Shorts): {short_win_rate:.2f}%")

        # Return a dictionary of the key metrics for the summary table
        return {
            "num_trades": num_trades,
            "num_longs": len(long_trades),
            "num_shorts": len(short_trades),
            "win_rate": win_rate,
            "long_win_rate": long_win_rate,
            "short_win_rate": short_win_rate,
            "return_pct": total_return_pct,
            "final_equity": final_equity
        }


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run a Walk-Forward Optimization for a trading strategy.")
    parser.add_argument("--strategy", required=True, help="The name of the backtest config to run.")
    parser.add_argument("--start-date", default="2020-08-01", help="The overall start date for historical data.")
    parser.add_argument("--optimization-weeks", type=int, default=4)
    parser.add_argument("--testing-weeks", type=int, default=1)
    parser.add_argument("--step-weeks", type=int, default=1)
    parser.add_argument("--log-level", default="normal", choices=['off', 'normal', 'debug'])

    parser.add_argument("--capital", type=float, default=1000)
    parser.add_argument("--size-pct", type=float, default=50)
    parser.add_argument("--leverage-long", type=int, default=3)
    parser.add_argument("--leverage-short", type=int, default=2)
    parser.add_argument("--taker-fee-pct", type=float, default=0.045)
    parser.add_argument("--maker-fee-pct", type=float, default=0.015)

    args = parser.parse_args()

    sim_params = {
        "capital": args.capital,
        "size_pct": args.size_pct,
        "leverage_long": args.leverage_long,
        "leverage_short": args.leverage_short,
        "taker_fee_pct": args.taker_fee_pct,
        "maker_fee_pct": args.maker_fee_pct
    }

    backtester = Backtester(
        log_level=args.log_level,
        strategy_name_to_test=args.strategy,
        start_date=args.start_date,
        sim_params=sim_params
    )

    try:
        backtester.run_walk_forward_optimization(
            optimization_weeks=args.optimization_weeks,
            testing_weeks=args.testing_weeks,
            step_weeks=args.step_weeks
        )
    except KeyboardInterrupt:
        logging.info("\nBacktest optimization cancelled by user.")
    finally:
        if backtester.pool:
            logging.info("Terminating worker processes...")
            backtester.pool.terminate()
            backtester.pool.join()
            logging.info("Worker processes terminated.")
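A typical invocation of the walk-forward optimizer above (the script filename backtester.py is an assumption; the flags and defaults come from the argparse block at the end of the file, and my_btc_strategy is a placeholder for a key in _data/backtesting_conf.json):

    python backtester.py --strategy my_btc_strategy --start-date 2022-01-01 --optimization-weeks 4 --testing-weeks 1 --step-weeks 1 --capital 1000 --size-pct 50 --log-level normal

Any fee or leverage flag left unspecified falls back to its default value.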
31
basic_ws.py
Normal file
@ -0,0 +1,31 @@
import os
import sys
import time
import json
from datetime import datetime, timezone
from hyperliquid.info import Info
from hyperliquid.utils import constants
from collections import deque
import example_utils  # NOTE: assumed to be the helper module shipped with the Hyperliquid SDK examples; it is not included in this diff

def main():
    address, info, _ = example_utils.setup(constants.MAINNET_API_URL)
    # An example showing how to subscribe to the different subscription types and prints the returned messages
    # Some subscriptions do not return snapshots, so you will not receive a message until something happens
    info.subscribe({"type": "allMids"}, print)
    info.subscribe({"type": "l2Book", "coin": "ETH"}, print)
    info.subscribe({"type": "trades", "coin": "PURR/USDC"}, print)
    info.subscribe({"type": "userEvents", "user": address}, print)
    info.subscribe({"type": "userFills", "user": address}, print)
    info.subscribe({"type": "candle", "coin": "ETH", "interval": "1m"}, print)
    info.subscribe({"type": "orderUpdates", "user": address}, print)
    info.subscribe({"type": "userFundings", "user": address}, print)
    info.subscribe({"type": "userNonFundingLedgerUpdates", "user": address}, print)
    info.subscribe({"type": "webData2", "user": address}, print)
    info.subscribe({"type": "bbo", "coin": "ETH"}, print)
    info.subscribe({"type": "activeAssetCtx", "coin": "BTC"}, print) # Perp
    info.subscribe({"type": "activeAssetCtx", "coin": "@1"}, print) # Spot
    info.subscribe({"type": "activeAssetData", "user": address, "coin": "BTC"}, print) # Perp only


if __name__ == "__main__":
    main()
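Note that main() returns right after subscribing and relies on the SDK's websocket manager to keep delivering messages. If the process exits immediately in your environment, a keep-alive loop like the one used in live_market.py can be added (a sketch, not part of the committed file):

    if __name__ == "__main__":
        main()
        while True:      # keep the process alive so the subscription callbacks keep printing
            time.sleep(1)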
70
create_agent.py
Normal file
@ -0,0 +1,70 @@
import os
from eth_account import Account
from hyperliquid.exchange import Exchange
from hyperliquid.utils import constants
from dotenv import load_dotenv
from datetime import datetime, timedelta
import json

# Load environment variables from a .env file if it exists
load_dotenv()

def create_and_authorize_agent():
    """
    Creates and authorizes a new agent key pair using your main wallet,
    following the correct SDK pattern.
    """
    # --- STEP 1: Load your main wallet ---
    # This is the wallet that holds the funds and has been activated on Hyperliquid.
    main_wallet_private_key = os.environ.get("MAIN_WALLET_PRIVATE_KEY")
    if not main_wallet_private_key:
        main_wallet_private_key = input("Please enter the private key of your MAIN trading wallet: ")

    try:
        main_account = Account.from_key(main_wallet_private_key)
        print(f"\n✅ Loaded main wallet: {main_account.address}")
    except Exception as e:
        print(f"❌ Error: Invalid main wallet private key provided. Details: {e}")
        return

    # --- STEP 2: Initialize the Exchange with your MAIN account ---
    # This object is used to send the authorization transaction.
    exchange = Exchange(main_account, constants.MAINNET_API_URL, account_address=main_account.address)

    # --- STEP 3: Create and approve the agent with a specific name ---
    # agent name must be between 1 and 16 characters long
    agent_name = "executor_swing"

    print(f"\n🔗 Authorizing a new agent named '{agent_name}'...")
    try:
        # --- FIX: Pass only the agent name string to the function ---
        approve_result, agent_private_key = exchange.approve_agent(agent_name)

        if approve_result.get("status") == "ok":
            # Derive the agent's public address from the key we received
            agent_account = Account.from_key(agent_private_key)

            print("\n🎉 SUCCESS! Agent has been authorized on-chain.")
            print("="*50)
            print("SAVE THESE SECURELY. This is what your bot will use.")
            print(f" Name: {agent_name}")
            print(f" (Agent has a default long-term validity)")
            print(f"🔑 Agent Private Key: {agent_private_key}")
            print(f"🏠 Agent Address: {agent_account.address}")
            print("="*50)
            print("\nYou can now set this private key as the AGENT_PRIVATE_KEY environment variable.")
        else:
            print("\n❌ ERROR: Agent authorization failed.")
            print(" Response:", approve_result)
            if "Vault may not perform this action" in str(approve_result):
                print("\n ACTION REQUIRED: This error means your main wallet (vault) has not been activated. "
                      "Please go to the Hyperliquid website, connect this wallet, and make a deposit to activate it.")

    except Exception as e:
        print(f"\nAn unexpected error occurred during authorization: {e}")


if __name__ == "__main__":
    create_and_authorize_agent()
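A minimal sketch of consuming the printed key elsewhere in the toolkit, assuming it was stored as AGENT_PRIVATE_KEY in a .env file (the variable name is taken from the message this script prints; dotenv and eth_account are the same libraries imported above):

    import os
    from dotenv import load_dotenv
    from eth_account import Account

    load_dotenv()
    agent_account = Account.from_key(os.environ["AGENT_PRIVATE_KEY"])
    print(agent_account.address)  # should match the "Agent Address" printed by create_agent.py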
118
fix_timestamps.py
Normal file
@ -0,0 +1,118 @@
import argparse
import logging
import os
import sys
import sqlite3
import pandas as pd
# script to fix missing millisecond timestamps in the database after import from CSVs (this is already fixed in import_csv.py)
# Assuming logging_utils.py is in the same directory
from logging_utils import setup_logging

class DatabaseFixer:
    """
    Scans the SQLite database for rows with missing millisecond timestamps
    and updates them based on the datetime_utc column.
    """

    def __init__(self, log_level: str, coin: str):
        setup_logging(log_level, 'TimestampFixer')
        self.coin = coin
        self.table_name = f"{self.coin}_1m"
        self.db_path = os.path.join("_data", "market_data.db")

    def run(self):
        """Orchestrates the entire database update and verification process."""
        logging.info(f"Starting timestamp fix process for table '{self.table_name}'...")

        if not os.path.exists(self.db_path):
            logging.error(f"Database file not found at '{self.db_path}'. Exiting.")
            sys.exit(1)

        try:
            with sqlite3.connect(self.db_path) as conn:
                conn.execute("PRAGMA journal_mode=WAL;")

                # 1. Check how many rows need fixing
                rows_to_fix_count = self._count_rows_to_fix(conn)
                if rows_to_fix_count == 0:
                    logging.info(f"No rows with missing timestamps found in '{self.table_name}'. No action needed.")
                    return

                logging.info(f"Found {rows_to_fix_count:,} rows with missing timestamps to update.")

                # 2. Process the table in chunks to conserve memory
                updated_count = self._process_in_chunks(conn)

                # 3. Provide a final summary
                self._summarize_update(rows_to_fix_count, updated_count)

        except Exception as e:
            logging.error(f"A critical error occurred: {e}")

    def _count_rows_to_fix(self, conn) -> int:
        """Counts the number of rows where timestamp_ms is NULL."""
        try:
            return pd.read_sql(f'SELECT COUNT(*) FROM "{self.table_name}" WHERE timestamp_ms IS NULL', conn).iloc[0, 0]
        except pd.io.sql.DatabaseError:
            logging.error(f"Table '{self.table_name}' not found in the database. Cannot fix timestamps.")
            sys.exit(1)

    def _process_in_chunks(self, conn) -> int:
        """Reads, calculates, and updates timestamps in manageable chunks."""
        total_updated = 0
        chunk_size = 50000  # Process 50,000 rows at a time

        # We select the special 'rowid' column to uniquely identify each row for updating
        query = f'SELECT rowid, datetime_utc FROM "{self.table_name}" WHERE timestamp_ms IS NULL'

        for chunk_df in pd.read_sql_query(query, conn, chunksize=chunk_size):
            if chunk_df.empty:
                break

            logging.info(f"Processing a chunk of {len(chunk_df)} rows...")

            # Calculate the missing timestamps
            chunk_df['datetime_utc'] = pd.to_datetime(chunk_df['datetime_utc'])
            chunk_df['timestamp_ms'] = (chunk_df['datetime_utc'].astype('int64') // 10**6)

            # Prepare data for the update command: a list of (timestamp, rowid) tuples
            update_data = list(zip(chunk_df['timestamp_ms'], chunk_df['rowid']))

            # Use executemany for a fast bulk update
            cursor = conn.cursor()
            cursor.executemany(f'UPDATE "{self.table_name}" SET timestamp_ms = ? WHERE rowid = ?', update_data)
            conn.commit()

            total_updated += len(chunk_df)
            logging.info(f"Updated {total_updated} rows so far...")

        return total_updated

    def _summarize_update(self, expected_count: int, actual_count: int):
        """Prints a final summary of the update process."""
        logging.info("--- Timestamp Fix Summary ---")
        print(f"\n{'Status':<25}: COMPLETE")
        print("-" * 40)
        print(f"{'Table Processed':<25}: {self.table_name}")
        print(f"{'Rows Needing Update':<25}: {expected_count:,}")
        print(f"{'Rows Successfully Updated':<25}: {actual_count:,}")

        if expected_count == actual_count:
            logging.info("Verification successful: All necessary rows have been updated.")
        else:
            logging.warning("Verification warning: The number of updated rows does not match the expected count.")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Fix missing millisecond timestamps in the SQLite database.")
    parser.add_argument("--coin", default="BTC", help="The coin symbol for the table to fix (e.g., BTC).")
    parser.add_argument(
        "--log-level",
        default="normal",
        choices=['off', 'normal', 'debug'],
        help="Set the logging level for the script."
    )
    args = parser.parse_args()

    fixer = DatabaseFixer(log_level=args.log_level, coin=args.coin)
    fixer.run()
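Typical usage, based on the argparse definition above:

    python fix_timestamps.py --coin BTC --log-level debug

The script only touches rows whose timestamp_ms is NULL, so re-running it on an already-fixed table is a no-op.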
154
import_csv.py
Normal file
@ -0,0 +1,154 @@
import argparse
import logging
import os
import sys
import sqlite3
import pandas as pd
from datetime import datetime

# Assuming logging_utils.py is in the same directory
from logging_utils import setup_logging

class CsvImporter:
    """
    Imports historical candle data from a large CSV file into the SQLite database,
    intelligently adding only the missing data.
    """

    def __init__(self, log_level: str, csv_path: str, coin: str):
        setup_logging(log_level, 'CsvImporter')
        if not os.path.exists(csv_path):
            logging.error(f"CSV file not found at '{csv_path}'. Please check the path.")
            sys.exit(1)

        self.csv_path = csv_path
        self.coin = coin
        # --- FIX: Corrected the f-string syntax for the table name ---
        self.table_name = f"{self.coin}_1m"
        self.db_path = os.path.join("_data", "market_data.db")
        self.column_mapping = {
            'Open time': 'datetime_utc',
            'Open': 'open',
            'High': 'high',
            'Low': 'low',
            'Close': 'close',
            'Volume': 'volume',
            'Number of trades': 'number_of_trades'
        }

    def run(self):
        """Orchestrates the entire import and verification process."""
        logging.info(f"Starting import process for '{self.coin}' from '{self.csv_path}'...")

        with sqlite3.connect(self.db_path) as conn:
            conn.execute("PRAGMA journal_mode=WAL;")

            # 1. Get the current state of the database
            db_oldest, db_newest, initial_row_count = self._get_db_state(conn)

            # 2. Read, clean, and filter the CSV data
            new_data_df = self._process_and_filter_csv(db_oldest, db_newest)

            if new_data_df.empty:
                logging.info("No new data to import. Database is already up-to-date with the CSV file.")
                return

            # 3. Append the new data to the database
            self._append_to_db(new_data_df, conn)

            # 4. Summarize and verify the import
            self._summarize_import(initial_row_count, len(new_data_df), conn)

    def _get_db_state(self, conn) -> (datetime, datetime, int):
        """Gets the oldest and newest timestamps and total row count from the DB table."""
        try:
            oldest = pd.read_sql(f'SELECT MIN(datetime_utc) FROM "{self.table_name}"', conn).iloc[0, 0]
            newest = pd.read_sql(f'SELECT MAX(datetime_utc) FROM "{self.table_name}"', conn).iloc[0, 0]
            count = pd.read_sql(f'SELECT COUNT(*) FROM "{self.table_name}"', conn).iloc[0, 0]

            oldest_dt = pd.to_datetime(oldest) if oldest else None
            newest_dt = pd.to_datetime(newest) if newest else None

            if oldest_dt:
                logging.info(f"Database contains data from {oldest_dt} to {newest_dt}.")
            else:
                logging.info("Database table is empty. A full import will be performed.")

            return oldest_dt, newest_dt, count
        except pd.io.sql.DatabaseError:
            logging.info(f"Table '{self.table_name}' not found. It will be created.")
            return None, None, 0

    def _process_and_filter_csv(self, db_oldest: datetime, db_newest: datetime) -> pd.DataFrame:
        """Reads the CSV and returns a DataFrame of only the missing data."""
        logging.info("Reading and processing CSV file. This may take a moment for large files...")
        df = pd.read_csv(self.csv_path, usecols=self.column_mapping.keys())

        # Clean and format the data
        df.rename(columns=self.column_mapping, inplace=True)
        df['datetime_utc'] = pd.to_datetime(df['datetime_utc'])

        # --- FIX: Calculate the millisecond timestamp from the datetime column ---
        # This converts the datetime to nanoseconds and then to milliseconds.
        df['timestamp_ms'] = (df['datetime_utc'].astype('int64') // 10**6)

        # Filter the data to find only rows that are outside the range of what's already in the DB
        if db_oldest and db_newest:
            # Get data from before the oldest record and after the newest record
            df_filtered = df[(df['datetime_utc'] < db_oldest) | (df['datetime_utc'] > db_newest)]
        else:
            # If the DB is empty, all data is new
            df_filtered = df

        logging.info(f"Found {len(df_filtered):,} new rows to import.")
        return df_filtered

    def _append_to_db(self, df: pd.DataFrame, conn):
        """Appends the DataFrame to the SQLite table."""
        logging.info(f"Appending {len(df):,} new rows to the database...")
        df.to_sql(self.table_name, conn, if_exists='append', index=False)
        logging.info("Append operation complete.")

    def _summarize_import(self, initial_count: int, added_count: int, conn):
        """Prints a final summary and verification of the import."""
        logging.info("--- Import Summary & Verification ---")

        try:
            final_count = pd.read_sql(f'SELECT COUNT(*) FROM "{self.table_name}"', conn).iloc[0, 0]
            new_oldest = pd.read_sql(f'SELECT MIN(datetime_utc) FROM "{self.table_name}"', conn).iloc[0, 0]
            new_newest = pd.read_sql(f'SELECT MAX(datetime_utc) FROM "{self.table_name}"', conn).iloc[0, 0]

            print(f"\n{'Status':<20}: SUCCESS")
            print("-" * 40)
            print(f"{'Initial Row Count':<20}: {initial_count:,}")
            print(f"{'Rows Added':<20}: {added_count:,}")
            print(f"{'Final Row Count':<20}: {final_count:,}")
            print("-" * 40)
            print(f"{'New Oldest Record':<20}: {new_oldest}")
            print(f"{'New Newest Record':<20}: {new_newest}")

            # Verification check
            if final_count == initial_count + added_count:
                logging.info("Verification successful: Final row count matches expected count.")
            else:
                logging.warning("Verification warning: Final row count does not match expected count.")
        except Exception as e:
            logging.error(f"Could not generate summary. Error: {e}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Import historical CSV data into the SQLite database.")
    parser.add_argument("--file", required=True, help="Path to the large CSV file to import.")
    parser.add_argument("--coin", default="BTC", help="The coin symbol for this data (e.g., BTC).")
    parser.add_argument(
        "--log-level",
        default="normal",
        choices=['off', 'normal', 'debug'],
        help="Set the logging level for the script."
    )
    args = parser.parse_args()

    importer = CsvImporter(log_level=args.log_level, csv_path=args.file, coin=args.coin)
    importer.run()
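Typical usage, based on the argparse definition above (the CSV filename is a placeholder):

    python import_csv.py --file btc_1m_history.csv --coin BTC

Only rows older than the oldest or newer than the newest record already in the BTC_1m table are appended; gaps in the middle of the existing range are not filled.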
238
live_candle_fetcher.py
Normal file
@ -0,0 +1,238 @@
import argparse
import logging
import os
import sys
import json
import time
from datetime import datetime, timezone
from hyperliquid.info import Info
from hyperliquid.utils import constants
import sqlite3
from queue import Queue
from threading import Thread

from logging_utils import setup_logging

class LiveCandleFetcher:
    """
    Connects to Hyperliquid to maintain a complete and up-to-date database of
    1-minute candles using a robust producer-consumer architecture to prevent
    data corruption and duplication.
    """

    def __init__(self, log_level: str, coins: list):
        setup_logging(log_level, 'LiveCandleFetcher')
        self.db_path = os.path.join("_data", "market_data.db")
        self.coins_to_watch = set(coins)
        if not self.coins_to_watch:
            logging.error("No coins provided to watch. Exiting.")
            sys.exit(1)

        self.info = Info(constants.MAINNET_API_URL, skip_ws=False)
        self.candle_queue = Queue()  # Thread-safe queue for candles
        self._ensure_tables_exist()

    def _ensure_tables_exist(self):
        """
        Ensures that all necessary tables are created with the correct schema and PRIMARY KEY.
        If a table exists with an incorrect schema, it attempts to migrate the data.
        """
        with sqlite3.connect(self.db_path) as conn:
            for coin in self.coins_to_watch:
                table_name = f"{coin}_1m"
                cursor = conn.cursor()
                cursor.execute(f"PRAGMA table_info('{table_name}')")
                columns = cursor.fetchall()

                if columns:
                    pk_found = any(col[1] == 'timestamp_ms' and col[5] == 1 for col in columns)
                    if not pk_found:
                        logging.warning(f"Schema migration needed for table '{table_name}': 'timestamp_ms' is not the PRIMARY KEY.")
                        logging.warning("Attempting to automatically rebuild the table...")
                        try:
                            # 1. Rename old table
                            conn.execute(f'ALTER TABLE "{table_name}" RENAME TO "{table_name}_old"')
                            logging.info(f" -> Renamed existing table to '{table_name}_old'.")

                            # 2. Create new table with correct schema
                            self._create_candle_table(conn, table_name)
                            logging.info(f" -> Created new '{table_name}' table with correct schema.")

                            # 3. Copy unique data from old table to new table
                            conn.execute(f'''
                                INSERT OR IGNORE INTO "{table_name}" (datetime_utc, timestamp_ms, open, high, low, close, volume, number_of_trades)
                                SELECT datetime_utc, timestamp_ms, open, high, low, close, volume, number_of_trades
                                FROM "{table_name}_old"
                            ''')
                            conn.commit()
                            logging.info(" -> Copied data to new table.")

                            # 4. Drop the old table
                            conn.execute(f'DROP TABLE "{table_name}_old"')
                            logging.info(f" -> Removed old table. Migration for '{table_name}' complete.")
                        except Exception as e:
                            logging.error(f"FATAL: Automatic schema migration for '{table_name}' failed: {e}")
                            logging.error("Please delete the database file '_data/market_data.db' manually and restart.")
                            sys.exit(1)
                else:
                    # If table does not exist, create it
                    self._create_candle_table(conn, table_name)
        logging.info("Database tables verified.")

    def _create_candle_table(self, conn, table_name: str):
        """Creates a new candle table with the correct schema."""
        conn.execute(f'''
            CREATE TABLE "{table_name}" (
                datetime_utc TEXT,
                timestamp_ms INTEGER PRIMARY KEY,
                open REAL,
                high REAL,
                low REAL,
                close REAL,
                volume REAL,
                number_of_trades INTEGER
            )
        ''')

    def on_message(self, message):
        """
        Callback function to process incoming candle messages. This is the "Producer".
        It puts the raw message onto the queue for the DB writer.
        """
        try:
            if message.get("channel") == "candle":
                candle_data = message.get("data", {})
                if candle_data:
                    self.candle_queue.put(candle_data)
        except Exception as e:
            logging.error(f"Error in on_message: {e}")

    def _database_writer_thread(self):
        """
        This is the "Consumer" thread. It runs forever, pulling candles from the
        queue and writing them to the database, ensuring all writes are serial.
        """
        while True:
            try:
                candle = self.candle_queue.get()
                if candle is None:  # A signal to stop the thread
                    break

                coin = candle.get('coin')
                if not coin:
                    continue

                table_name = f"{coin}_1m"
                record = (
                    datetime.fromtimestamp(candle['t'] / 1000, tz=timezone.utc).strftime('%Y-%m-%d %H:%M:%S'),
                    candle['t'],
                    candle.get('o'), candle.get('h'), candle.get('l'), candle.get('c'),
                    candle.get('v'), candle.get('n')
                )

                with sqlite3.connect(self.db_path) as conn:
                    conn.execute(f'''
                        INSERT OR REPLACE INTO "{table_name}" (datetime_utc, timestamp_ms, open, high, low, close, volume, number_of_trades)
                        VALUES (?, ?, ?, ?, ?, ?, ?, ?)
                    ''', record)
                    conn.commit()
                logging.debug(f"Upserted candle for {coin} at {record[0]}")

            except Exception as e:
                logging.error(f"Error in database writer thread: {e}")

    def _get_last_timestamp_from_db(self, coin: str) -> int:
        """Gets the most recent millisecond timestamp from a coin's 1m table."""
        table_name = f"{coin}_1m"
        try:
            with sqlite3.connect(self.db_path) as conn:
                result = conn.execute(f'SELECT MAX(timestamp_ms) FROM "{table_name}"').fetchone()
                return int(result[0]) if result and result[0] is not None else None
        except Exception as e:
            logging.error(f"Could not read last timestamp from table '{table_name}': {e}")
            return None

    def _fetch_historical_candles(self, coin: str, start_ms: int, end_ms: int):
        """Fetches historical candles and puts them on the queue for the writer."""
        logging.info(f"Fetching historical data for {coin} from {datetime.fromtimestamp(start_ms/1000)}...")
        current_start = start_ms

        while current_start < end_ms:
            try:
                http_info = Info(constants.MAINNET_API_URL, skip_ws=True)
                batch = http_info.candles_snapshot(coin, "1m", current_start, end_ms)
                if not batch:
                    break

                for candle in batch:
                    candle['coin'] = coin
                    self.candle_queue.put(candle)

                last_ts = batch[-1]['t']
                if last_ts < current_start:
                    break
                current_start = last_ts + 1
                time.sleep(0.5)
            except Exception as e:
                logging.error(f"Error fetching historical chunk for {coin}: {e}")
                break

        logging.info(f"Historical data fetching for {coin} is complete.")

    def run(self):
        """
        Starts the database writer, catches up on historical data, then
        subscribes to the WebSocket for live updates.
        """
        db_writer = Thread(target=self._database_writer_thread, daemon=True)
        db_writer.start()

        logging.info("--- Starting Historical Data Catch-Up Phase ---")
        now_ms = int(time.time() * 1000)
        for coin in self.coins_to_watch:
            last_ts = self._get_last_timestamp_from_db(coin)
            start_ts = last_ts + 60000 if last_ts else now_ms - (7 * 24 * 60 * 60 * 1000)
            if start_ts < now_ms:
                self._fetch_historical_candles(coin, start_ts, now_ms)

        logging.info("--- Historical Catch-Up Complete. Starting Live WebSocket Feed ---")
        for coin in self.coins_to_watch:
            # --- FIX: Use a lambda to create a unique callback for each subscription ---
            # This captures the 'coin' variable and adds it to the message data.
            callback = lambda msg, c=coin: self.on_message({**msg, 'data': {**msg.get('data',{}), 'coin': c}})
            subscription = {"type": "candle", "coin": coin, "interval": "1m"}
            self.info.subscribe(subscription, callback)
            logging.info(f"Subscribed to 1m candles for {coin}")
            time.sleep(0.2)

        print("\nListening for live candle data... Press Ctrl+C to stop.")
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            print("\nStopping WebSocket listener...")
            self.info.ws_manager.stop()
            self.candle_queue.put(None)
            db_writer.join()
            print("Listener stopped.")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="A hybrid historical and live candle data fetcher for Hyperliquid.")
    parser.add_argument(
        "--coins",
        nargs='+',
        required=True,
        help="List of coin symbols to fetch (e.g., BTC ETH)."
    )
    parser.add_argument(
        "--log-level",
        default="normal",
        choices=['off', 'normal', 'debug'],
        help="Set the logging level for the script."
    )
    args = parser.parse_args()

    fetcher = LiveCandleFetcher(log_level=args.log_level, coins=args.coins)
    fetcher.run()
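Typical usage, based on the argparse definition above:

    python live_candle_fetcher.py --coins BTC ETH --log-level normal

On start-up the fetcher backfills from the last stored candle (or the last 7 days for a new table) via HTTP snapshots, then switches to the 1-minute WebSocket feed; both paths push candles through the same single writer thread.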
258
live_market.py
Normal file
@ -0,0 +1,258 @@
import os
import sys
import time
import json
import argparse
from datetime import datetime, timedelta, timezone
from hyperliquid.info import Info
from hyperliquid.utils import constants
from collections import deque, defaultdict

# --- Configuration ---
MAX_TRADE_HISTORY = 100000
all_trades = {
    "BTC": deque(maxlen=MAX_TRADE_HISTORY),
    "ETH": deque(maxlen=MAX_TRADE_HISTORY),
}
latest_raw_trades = {
    "BTC": None,
    "ETH": None,
}
decoded_trade_output = []
_lines_printed = 0

def get_coins_from_strategies() -> set:
    """
    Reads the strategies.json file and returns a unique set of coin symbols
    from all enabled strategies.
    """
    coins = set()
    config_path = os.path.join("_data", "strategies.json")
    try:
        with open(config_path, 'r') as f:
            all_configs = json.load(f)
            for name, config in all_configs.items():
                if config.get("enabled", False):
                    coin = config.get("parameters", {}).get("coin")
                    if coin:
                        coins.add(coin)
        print(f"Found {len(coins)} unique coins to watch from enabled strategies: {list(coins)}")
        return coins
    except (FileNotFoundError, json.JSONDecodeError) as e:
        print(f"ERROR: Could not load or parse '{config_path}': {e}", file=sys.stderr)
        return set()

def on_message(message):
    """
    Callback function to process incoming trades from the WebSocket and store them.
    """
    try:
        if message.get("channel") == "trades":
            for trade in message["data"]:
                coin = trade['coin']
                if coin in all_trades:
                    latest_raw_trades[coin] = trade
                    price = float(trade['px'])
                    size = float(trade['sz'])
                    decoded_trade = {
                        "time": datetime.fromtimestamp(trade['time'] / 1000, tz=timezone.utc),
                        "side": "BUY" if trade['side'] == "B" else "SELL",
                        "value": price * size,
                        "users": trade.get('users', [])
                    }
                    all_trades[coin].append(decoded_trade)
    except (KeyError, TypeError, ValueError):
        pass

def build_top_trades_table(title: str, trades: list) -> list:
    """Builds the formatted lines for a top-5 trades by value table."""
    lines = []
    header = f"{'Time (UTC)':<10} | {'Side':<5} | {'Value (USD)':>20}"
    lines.append(f"--- {title} ---")
    lines.append(header)
    lines.append("-" * len(header))

    top_trades = sorted(trades, key=lambda x: x['value'], reverse=True)[:5]

    for trade in top_trades:
        lines.append(
            f"{trade['time'].strftime('%H:%M:%S'):<10} | "
            f"{trade['side']:<5} | "
            f"${trade['value']:>18,.2f}"
        )
    while len(lines) < 8: lines.append(" " * len(header))
    return lines

def build_top_takers_table(title: str, trades: list) -> list:
    """Analyzes a list of trades to find the top 5 takers by total volume."""
    lines = []
    header = f"{'#':<2} | {'Taker Address':<15} | {'Total Volume (USD)':>20}"
    lines.append(f"--- {title} ---")
    lines.append(header)
    lines.append("-" * len(header))

    volumes = defaultdict(float)
    for trade in trades:
        for user in trade['users']:
            volumes[user] += trade['value']

    top_takers = sorted(volumes.items(), key=lambda item: item[1], reverse=True)[:5]

    for i, (address, volume) in enumerate(top_takers, 1):
        short_address = f"{address[:6]}...{address[-4:]}"
        lines.append(f"{i:<2} | {short_address:<15} | ${volume:>18,.2f}")

    while len(lines) < 8: lines.append(" " * len(header))
    return lines

def build_top_active_takers_table(title: str, trades: list) -> list:
    """Analyzes a list of trades to find the top 5 takers by trade count."""
    lines = []
    header = f"{'#':<2} | {'Taker Address':<42} | {'Trade Count':>12} | {'Total Volume (USD)':>20}"
    lines.append(f"--- {title} ---")
    lines.append(header)
    lines.append("-" * len(header))

    taker_data = defaultdict(lambda: {'count': 0, 'volume': 0.0})
    for trade in trades:
        for user in trade['users']:
            taker_data[user]['count'] += 1
            taker_data[user]['volume'] += trade['value']

    top_takers = sorted(taker_data.items(), key=lambda item: item[1]['count'], reverse=True)[:5]

    for i, (address, data) in enumerate(top_takers, 1):
        lines.append(f"{i:<2} | {address:<42} | {data['count']:>12} | ${data['volume']:>18,.2f}")

    while len(lines) < 8: lines.append(" " * len(header))
    return lines


def build_decoded_trade_lines(coin: str) -> list:
    """Builds a formatted, multi-line string for a single decoded trade."""
    trade = latest_raw_trades[coin]
    if not trade: return ["No trade data yet..."] * 7

    return [
        f"Time: {datetime.fromtimestamp(trade['time'] / 1000, tz=timezone.utc)}",
        f"Side: {'BUY' if trade.get('side') == 'B' else 'SELL'}",
        f"Price: {trade.get('px', 'N/A')}",
        f"Size: {trade.get('sz', 'N/A')}",
        f"Trade ID: {trade.get('tid', 'N/A')}",
        f"Hash: {trade.get('hash', 'N/A')}",
        f"Users: {', '.join(trade.get('users', []))}"
    ]

def update_decoded_trade_display():
    """
    Updates the global variable holding the decoded trade output, but only
    at the 40-second mark of each minute.
    """
    global decoded_trade_output
    if datetime.now().second == 40:
        lines = []
        lines.append("--- Last BTC Trade (Decoded) ---")
        lines.extend(build_decoded_trade_lines("BTC"))
        lines.append("")
        lines.append("--- Last ETH Trade (Decoded) ---")
        lines.extend(build_decoded_trade_lines("ETH"))
        decoded_trade_output = lines

def display_dashboard(view: str):
    """Clears the screen and prints the selected dashboard view."""
    global _lines_printed
    if _lines_printed > 0: print(f"\x1b[{_lines_printed}A", end="")

    now_utc = datetime.now(timezone.utc)
    output_lines = []
    separator = " | "

    time_windows = [
        ("All Time", None), ("Last 24h", timedelta(hours=24)),
        ("Last 1h", timedelta(hours=1)), ("Last 5m", timedelta(minutes=5)),
        ("Last 1m", timedelta(minutes=1)),
    ]

    btc_trades_copy = list(all_trades["BTC"])
    eth_trades_copy = list(all_trades["ETH"])

    if view == "trades":
        output_lines.append("--- Top 5 Trades by Value ---")
        for title, delta in time_windows:
            btc_trades = [t for t in btc_trades_copy if not delta or t['time'] > now_utc - delta]
            eth_trades = [t for t in eth_trades_copy if not delta or t['time'] > now_utc - delta]
            btc_lines = build_top_trades_table(f"BTC - {title}", btc_trades)
            eth_lines = build_top_trades_table(f"ETH - {title}", eth_trades)
            for i in range(len(btc_lines)):
                output_lines.append(f"{btc_lines[i]:<45}{separator}{eth_lines[i] if i < len(eth_lines) else ''}")
            output_lines.append("")

    elif view == "takers":
        output_lines.append("--- Top 5 Takers by Volume (Rolling Windows) ---")
        for title, delta in time_windows[1:]:
            btc_trades = [t for t in btc_trades_copy if t['time'] > now_utc - delta]
            eth_trades = [t for t in eth_trades_copy if t['time'] > now_utc - delta]
            btc_lines = build_top_takers_table(f"BTC - {title}", btc_trades)
            eth_lines = build_top_takers_table(f"ETH - {title}", eth_trades)
            for i in range(len(btc_lines)):
                output_lines.append(f"{btc_lines[i]:<45}{separator}{eth_lines[i] if i < len(eth_lines) else ''}")
            output_lines.append("")

    elif view == "active_takers":
        output_lines.append("--- Top 5 Active Takers by Trade Count (Rolling Windows) ---")
        for title, delta in time_windows[1:]:
            btc_trades = [t for t in btc_trades_copy if t['time'] > now_utc - delta]
            eth_trades = [t for t in eth_trades_copy if t['time'] > now_utc - delta]
            btc_lines = build_top_active_takers_table(f"BTC - {title}", btc_trades)
            eth_lines = build_top_active_takers_table(f"ETH - {title}", eth_trades)
            header_width = 85
            for i in range(len(btc_lines)):
                output_lines.append(f"{btc_lines[i]:<{header_width}}{separator}{eth_lines[i] if i < len(eth_lines) else ''}")
            output_lines.append("")

    if decoded_trade_output:
        output_lines.extend(decoded_trade_output)
    else:
        for _ in range(17): output_lines.append("")

    final_output = "\n".join(output_lines) + "\n\x1b[J"
    print(final_output, end="")

    _lines_printed = len(output_lines)
    sys.stdout.flush()


def main():
    """Main function to set up the WebSocket and run the display loop."""
    parser = argparse.ArgumentParser(description="Live market data dashboard for Hyperliquid.")
    parser.add_argument("--view", default="trades", choices=['trades', 'takers', 'active_takers'],
                        help="The data view to display: 'trades' (default), 'takers', or 'active_takers'.")
    args = parser.parse_args()

    coins_to_watch = get_coins_from_strategies()
    if not ("BTC" in coins_to_watch and "ETH" in coins_to_watch):
        print("This script is configured to display BTC and ETH. Please ensure they are in your strategies.", file=sys.stderr)
        return

    info = Info(constants.MAINNET_API_URL, skip_ws=False)

    for coin in ["BTC", "ETH"]:
        trade_subscription = {"type": "trades", "coin": coin}
        info.subscribe(trade_subscription, on_message)
        print(f"Subscribed to Trades for {coin}")
        time.sleep(0.2)

    print(f"\nDisplaying live '{args.view}' summary... Press Ctrl+C to stop.")
    try:
        while True:
            update_decoded_trade_display()
            display_dashboard(view=args.view)
            time.sleep(1)
    except KeyboardInterrupt:
        print("\nStopping WebSocket listener...")
        info.ws_manager.stop()
        print("Listener stopped.")

if __name__ == "__main__":
    main()
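Typical usage, based on the argparse definition above:

    python live_market.py --view takers

The dashboard requires enabled BTC and ETH strategies in _data/strategies.json, since the watched coins are derived from that file.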
49
live_market_utils.py
Normal file
@ -0,0 +1,49 @@
import logging
import json
import time
from hyperliquid.info import Info
from hyperliquid.utils import constants

from logging_utils import setup_logging

def on_message(message, shared_prices_dict):
    """
    Callback function to process incoming 'allMids' messages and update the
    shared memory dictionary directly.
    """
    try:
        if message.get("channel") == "allMids":
            new_prices = message.get("data", {}).get("mids", {})
            # Update the shared dictionary with the new price data
            shared_prices_dict.update(new_prices)
    except Exception as e:
        # It's important to log errors inside the process
        logging.error(f"Error in WebSocket on_message: {e}")

def start_live_feed(shared_prices_dict, log_level='off'):
    """
    Main function for the WebSocket process. It takes a shared dictionary
    and continuously feeds it with live market data.
    """
    setup_logging(log_level, 'LiveMarketFeed')

    # The Info object manages the WebSocket connection.
    info = Info(constants.MAINNET_API_URL, skip_ws=False)

    # We need to wrap the callback in a lambda to pass our shared dictionary
    callback = lambda msg: on_message(msg, shared_prices_dict)

    # Subscribe to the allMids channel
    subscription = {"type": "allMids"}
    info.subscribe(subscription, callback)
    logging.info("Subscribed to 'allMids' for live mark prices.")

    logging.info("Starting live price feed process. Press Ctrl+C in main app to stop.")
    try:
        # The background thread in the SDK handles messages. This loop just keeps the process alive.
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        logging.info("Stopping WebSocket listener...")
        info.ws_manager.stop()
        logging.info("Listener stopped.")
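A minimal sketch of how a parent process can consume start_live_feed with a shared dictionary (the Manager-based dict is an assumption for illustration; main_app.py may wire this differently):

    import multiprocessing
    import time
    from live_market_utils import start_live_feed

    if __name__ == "__main__":
        manager = multiprocessing.Manager()
        shared_prices = manager.dict()
        feed = multiprocessing.Process(target=start_live_feed, args=(shared_prices, 'off'), daemon=True)
        feed.start()
        time.sleep(5)
        print(shared_prices.get("BTC"))  # latest BTC mid price once the feed has populated it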
logging_utils.py
@ -1,5 +1,29 @@
 import logging
 import sys
+from datetime import datetime
+
+class LocalTimeFormatter(logging.Formatter):
+    """
+    Custom formatter to display time with milliseconds and a (UTC+HH) offset.
+    """
+    def formatTime(self, record, datefmt=None):
+        # Convert log record's creation time to a local, timezone-aware datetime object
+        dt = datetime.fromtimestamp(record.created).astimezone()
+
+        # Format the main time part
+        time_part = dt.strftime('%Y-%m-%d %H:%M:%S')
+
+        # Get the UTC offset and format it as (UTC+HH)
+        offset = dt.utcoffset()
+        offset_str = ""
+        if offset is not None:
+            offset_hours = int(offset.total_seconds() / 3600)
+            sign = '+' if offset_hours >= 0 else ''
+            offset_str = f" (UTC{sign}{offset_hours})"
+
+        # --- FIX: Cast record.msecs from float to int before formatting ---
+        # Combine time, milliseconds, and the offset string
+        return f"{time_part},{int(record.msecs):03d}{offset_str}"
+
 def setup_logging(log_level: str, process_name: str):
     """
@ -29,10 +53,9 @@ def setup_logging(log_level: str, process_name: str):
     handler = logging.StreamHandler(sys.stdout)
 
-    # --- FIX: Added a date format that includes the timezone name (%Z) ---
-    formatter = logging.Formatter(
-        f'%(asctime)s - {process_name} - %(levelname)s - %(message)s',
-        datefmt='%Y-%m-%d %H:%M:%S %Z'
+    # This will produce timestamps like: 2025-10-13 14:30:00,123 (UTC+2)
+    formatter = LocalTimeFormatter(
+        f'%(asctime)s - {process_name} - %(levelname)s - %(message)s'
     )
     handler.setFormatter(formatter)
     logger.addHandler(handler)
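A quick usage sketch of the shared logger setup (the output line in the comment is indicative; the exact offset depends on the local timezone):

    import logging
    from logging_utils import setup_logging

    setup_logging('normal', 'Demo')
    logging.info("hello")
    # Example output: 2025-10-13 14:30:00,123 (UTC+2) - Demo - INFO - hello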
432
main_app.py
432
main_app.py
@ -11,195 +11,351 @@ import pandas as pd
|
|||||||
from datetime import datetime, timezone
|
from datetime import datetime, timezone
|
||||||
|
|
||||||
from logging_utils import setup_logging
|
from logging_utils import setup_logging
|
||||||
|
# --- Using the high-performance WebSocket utility for live prices ---
|
||||||
|
from live_market_utils import start_live_feed
|
||||||
|
|
||||||
# --- Configuration ---
|
# --- Configuration ---
|
||||||
WATCHED_COINS = ["BTC", "ETH", "SOL", "BNB", "HYPE", "ASTER", "ZEC", "PUMP", "SUI"]
|
WATCHED_COINS = ["BTC", "ETH", "SOL", "BNB", "HYPE", "ASTER", "ZEC", "PUMP", "SUI"]
|
||||||
COIN_LISTER_SCRIPT = "list_coins.py"
|
# --- FIX: Replaced old data_fetcher with the new live_candle_fetcher ---
|
||||||
MARKET_FEEDER_SCRIPT = "market.py"
|
LIVE_CANDLE_FETCHER_SCRIPT = "live_candle_fetcher.py"
|
||||||
DATA_FETCHER_SCRIPT = "data_fetcher.py"
|
RESAMPLER_SCRIPT = "resampler.py"
|
||||||
RESAMPLER_SCRIPT = "resampler.py" # Restored resampler script
|
MARKET_CAP_FETCHER_SCRIPT = "market_cap_fetcher.py"
|
||||||
PRICE_DATA_FILE = os.path.join("_data", "current_prices.json")
|
TRADE_EXECUTOR_SCRIPT = "trade_executor.py"
|
||||||
|
STRATEGY_CONFIG_FILE = os.path.join("_data", "strategies.json")
|
||||||
DB_PATH = os.path.join("_data", "market_data.db")
|
DB_PATH = os.path.join("_data", "market_data.db")
|
||||||
STATUS_FILE = os.path.join("_data", "fetcher_status.json")
|
MARKET_CAP_SUMMARY_FILE = os.path.join("_data", "market_cap_data.json")
|
||||||
|
LOGS_DIR = "_logs"
|
||||||
|
TRADE_EXECUTOR_STATUS_FILE = os.path.join(LOGS_DIR, "trade_executor_status.json")
|
||||||
|
|
||||||
|
|
||||||
def run_market_feeder():
|
def format_market_cap(mc_value):
|
||||||
"""Target function to run the market.py script in a separate process."""
|
"""Formats a large number into a human-readable market cap string."""
|
||||||
setup_logging('off', 'MarketFeedProcess')
|
if not isinstance(mc_value, (int, float)) or mc_value == 0:
|
||||||
logging.info("Market feeder process started.")
|
return "N/A"
|
||||||
try:
|
if mc_value >= 1_000_000_000_000:
|
||||||
# Pass the log level to the script
|
return f"${mc_value / 1_000_000_000_000:.2f}T"
|
||||||
subprocess.run([sys.executable, MARKET_FEEDER_SCRIPT, "--log-level", "off"], check=True)
|
if mc_value >= 1_000_000_000:
|
||||||
except subprocess.CalledProcessError as e:
|
return f"${mc_value / 1_000_000_000:.2f}B"
|
||||||
logging.error(f"Market feeder script failed with error: {e}")
|
if mc_value >= 1_000_000:
|
||||||
except KeyboardInterrupt:
|
return f"${mc_value / 1_000_000:.2f}M"
|
||||||
logging.info("Market feeder process stopping.")
|
return f"${mc_value:,.2f}"
|
||||||
|
|
||||||
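Expected behaviour of format_market_cap for a few representative inputs (worked examples, not captured output):

format_market_cap(2_150_000_000_000)   # -> "$2.15T"
format_market_cap(43_700_000_000)      # -> "$43.70B"
format_market_cap(915_000_000)         # -> "$915.00M"
format_market_cap(0)                   # -> "N/A"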
|
|
||||||
def run_data_fetcher_job():
|
def run_live_candle_fetcher():
|
||||||
"""Defines the job to be run by the scheduler for the data fetcher."""
|
"""Target function to run the live_candle_fetcher.py script in a resilient loop."""
|
||||||
logging.info(f"Scheduler starting data_fetcher.py task for {', '.join(WATCHED_COINS)}...")
|
log_file = os.path.join(LOGS_DIR, "live_candle_fetcher.log")
|
||||||
try:
|
|
||||||
command = [sys.executable, DATA_FETCHER_SCRIPT, "--coins"] + WATCHED_COINS + ["--days", "7", "--log-level", "off"]
|
|
||||||
subprocess.run(command, check=True)
|
|
||||||
logging.info("data_fetcher.py task finished successfully.")
|
|
||||||
except Exception as e:
|
|
||||||
logging.error(f"Failed to run data_fetcher.py job: {e}")
|
|
||||||
|
|
||||||
|
|
||||||
def data_fetcher_scheduler():
|
|
||||||
"""Schedules and runs the data_fetcher.py script periodically."""
|
|
||||||
setup_logging('off', 'DataFetcherScheduler')
|
|
||||||
run_data_fetcher_job()
|
|
||||||
schedule.every(1).minutes.do(run_data_fetcher_job)
|
|
||||||
logging.info("Data fetcher scheduled to run every 1 minute.")
|
|
||||||
while True:
|
while True:
|
||||||
schedule.run_pending()
|
try:
|
||||||
time.sleep(1)
|
with open(log_file, 'a') as f:
|
||||||
|
command = [sys.executable, LIVE_CANDLE_FETCHER_SCRIPT, "--coins"] + WATCHED_COINS + ["--log-level", "off"]
|
||||||
|
f.write(f"\n--- Starting {LIVE_CANDLE_FETCHER_SCRIPT} at {datetime.now()} ---\n")
|
||||||
|
subprocess.run(command, check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||||
|
except (subprocess.CalledProcessError, Exception) as e:
|
||||||
|
with open(log_file, 'a') as f:
|
||||||
|
f.write(f"\n--- PROCESS ERROR at {datetime.now()} ---\n")
|
||||||
|
f.write(f"Live candle fetcher failed: {e}. Restarting...\n")
|
||||||
|
time.sleep(5)
|
||||||
|
|
||||||
# --- Restored Resampler Functions ---
|
|
||||||
def run_resampler_job():
|
def run_resampler_job(timeframes_to_generate: list):
|
||||||
"""Defines the job to be run by the scheduler for the resampler."""
|
"""Defines the job for the resampler, redirecting output to a log file."""
|
||||||
logging.info(f"Scheduler starting resampler.py task for {', '.join(WATCHED_COINS)}...")
|
log_file = os.path.join(LOGS_DIR, "resampler.log")
|
||||||
try:
|
try:
|
||||||
# Uses default timeframes configured within resampler.py
|
command = [sys.executable, RESAMPLER_SCRIPT, "--coins"] + WATCHED_COINS + ["--timeframes"] + timeframes_to_generate + ["--log-level", "off"]
|
||||||
command = [sys.executable, RESAMPLER_SCRIPT, "--coins"] + WATCHED_COINS + ["--log-level", "off"]
|
with open(log_file, 'a') as f:
|
||||||
subprocess.run(command, check=True)
|
f.write(f"\n--- Starting resampler.py job at {datetime.now()} ---\n")
|
||||||
logging.info("resampler.py task finished successfully.")
|
subprocess.run(command, check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logging.error(f"Failed to run resampler.py job: {e}")
|
with open(log_file, 'a') as f:
|
||||||
|
f.write(f"\n--- SCHEDULER ERROR at {datetime.now()} ---\n")
|
||||||
|
f.write(f"Failed to run resampler.py job: {e}\n")
|
||||||
|
|
||||||
|
|
||||||
def resampler_scheduler():
|
def resampler_scheduler(timeframes_to_generate: list):
|
||||||
"""Schedules and runs the resampler.py script periodically."""
|
"""Schedules the resampler.py script."""
|
||||||
setup_logging('off', 'ResamplerScheduler')
|
setup_logging('off', 'ResamplerScheduler')
|
||||||
run_resampler_job()
|
run_resampler_job(timeframes_to_generate)
|
||||||
schedule.every(4).minutes.do(run_resampler_job)
|
schedule.every(4).minutes.do(run_resampler_job, timeframes_to_generate)
|
||||||
logging.info("Resampler scheduled to run every 4 minutes.")
|
|
||||||
while True:
|
while True:
|
||||||
schedule.run_pending()
|
schedule.run_pending()
|
||||||
time.sleep(1)
|
time.sleep(1)
|
||||||
# --- End of Restored Functions ---
|
|
||||||
|
|
||||||
|
def run_market_cap_fetcher_job():
|
||||||
|
"""Defines the job for the market cap fetcher, redirecting output."""
|
||||||
|
log_file = os.path.join(LOGS_DIR, "market_cap_fetcher.log")
|
||||||
|
try:
|
||||||
|
command = [sys.executable, MARKET_CAP_FETCHER_SCRIPT, "--coins"] + WATCHED_COINS + ["--log-level", "off"]
|
||||||
|
with open(log_file, 'a') as f:
|
||||||
|
f.write(f"\n--- Starting {MARKET_CAP_FETCHER_SCRIPT} job at {datetime.now()} ---\n")
|
||||||
|
subprocess.run(command, check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||||
|
except Exception as e:
|
||||||
|
with open(log_file, 'a') as f:
|
||||||
|
f.write(f"\n--- SCHEDULER ERROR at {datetime.now()} ---\n")
|
||||||
|
f.write(f"Failed to run {MARKET_CAP_FETCHER_SCRIPT} job: {e}\n")
|
||||||
|
|
||||||
|
|
||||||
|
def market_cap_fetcher_scheduler():
|
||||||
|
"""Schedules the market_cap_fetcher.py script to run daily at a specific UTC time."""
|
||||||
|
setup_logging('off', 'MarketCapScheduler')
|
||||||
|
schedule.every().day.at("00:15", "UTC").do(run_market_cap_fetcher_job)
|
||||||
|
while True:
|
||||||
|
schedule.run_pending()
|
||||||
|
time.sleep(60)
|
||||||
|
|
||||||
|
|
||||||
|
def run_strategy(strategy_name: str, config: dict):
|
||||||
|
"""Target function to run a strategy, redirecting its output to a log file."""
|
||||||
|
log_file = os.path.join(LOGS_DIR, f"strategy_{strategy_name}.log")
|
||||||
|
script_name = config['script']
|
||||||
|
command = [sys.executable, script_name, "--name", strategy_name, "--log-level", "normal"]
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
with open(log_file, 'a') as f:
|
||||||
|
f.write(f"\n--- Starting strategy '{strategy_name}' at {datetime.now()} ---\n")
|
||||||
|
subprocess.run(command, check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||||
|
except (subprocess.CalledProcessError, Exception) as e:
|
||||||
|
with open(log_file, 'a') as f:
|
||||||
|
f.write(f"\n--- PROCESS ERROR at {datetime.now()} ---\n")
|
||||||
|
f.write(f"Strategy '{strategy_name}' failed: {e}. Restarting...\n")
|
||||||
|
time.sleep(10)
|
||||||
|
|
||||||
|
def run_trade_executor():
|
||||||
|
"""Target function to run the trade_executor.py script in a resilient loop."""
|
||||||
|
log_file = os.path.join(LOGS_DIR, "trade_executor.log")
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
with open(log_file, 'a') as f:
|
||||||
|
f.write(f"\n--- Starting Trade Executor at {datetime.now()} ---\n")
|
||||||
|
subprocess.run([sys.executable, TRADE_EXECUTOR_SCRIPT, "--log-level", "normal"], check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||||
|
except (subprocess.CalledProcessError, Exception) as e:
|
||||||
|
with open(log_file, 'a') as f:
|
||||||
|
f.write(f"\n--- PROCESS ERROR at {datetime.now()} ---\n")
|
||||||
|
f.write(f"Trade Executor failed: {e}. Restarting...\n")
|
||||||
|
time.sleep(10)
|
||||||
|
|
||||||
|
|
||||||
class MainApp:
|
class MainApp:
|
||||||
def __init__(self, coins_to_watch: list):
|
def __init__(self, coins_to_watch: list, processes: dict, strategy_configs: dict, shared_prices: dict):
|
||||||
self.watched_coins = coins_to_watch
|
self.watched_coins = coins_to_watch
|
||||||
|
self.shared_prices = shared_prices
|
||||||
self.prices = {}
|
self.prices = {}
|
||||||
self.last_db_update_info = "Initializing..."
|
self.market_caps = {}
|
||||||
self._lines_printed = 0 # To track how many lines we printed last time
|
self.open_positions = {}
|
||||||
|
self.background_processes = processes
|
||||||
|
self.process_status = {}
|
||||||
|
self.strategy_configs = strategy_configs
|
||||||
|
self.strategy_statuses = {}
|
||||||
|
|
||||||
def read_prices(self):
|
def read_prices(self):
|
||||||
"""Reads the latest prices from the JSON file."""
|
"""Reads the latest prices directly from the shared memory dictionary."""
|
||||||
if not os.path.exists(PRICE_DATA_FILE):
|
|
||||||
return
|
|
||||||
try:
|
try:
|
||||||
with open(PRICE_DATA_FILE, 'r', encoding='utf-8') as f:
|
self.prices = dict(self.shared_prices)
|
||||||
self.prices = json.load(f)
|
except Exception as e:
|
||||||
except (json.JSONDecodeError, IOError):
|
logging.debug(f"Could not read from shared prices dict: {e}")
|
||||||
logging.debug("Could not read price file (might be locked).")
|
|
||||||
|
|
||||||
def get_overall_db_status(self):
|
def read_market_caps(self):
|
||||||
"""Reads the fetcher status from the status file."""
|
if os.path.exists(MARKET_CAP_SUMMARY_FILE):
|
||||||
if not os.path.exists(STATUS_FILE):
|
try:
|
||||||
self.last_db_update_info = "Status file not found."
|
with open(MARKET_CAP_SUMMARY_FILE, 'r', encoding='utf-8') as f:
|
||||||
return
|
summary_data = json.load(f)
|
||||||
try:
|
for coin in self.watched_coins:
|
||||||
with open(STATUS_FILE, 'r', encoding='utf-8') as f:
|
table_key = f"{coin}_market_cap"
|
||||||
status = json.load(f)
|
if table_key in summary_data:
|
||||||
coin = status.get("last_updated_coin")
|
self.market_caps[coin] = summary_data[table_key].get('market_cap')
|
||||||
timestamp_utc_str = status.get("last_run_timestamp_utc")
|
except (json.JSONDecodeError, IOError):
|
||||||
num_candles = status.get("num_updated_candles", 0)
|
logging.debug("Could not read market cap summary file.")
|
||||||
|
|
||||||
if timestamp_utc_str:
|
def read_strategy_statuses(self):
|
||||||
dt_naive = datetime.strptime(timestamp_utc_str, '%Y-%m-%d %H:%M:%S')
|
enabled_statuses = {}
|
||||||
dt_utc = dt_naive.replace(tzinfo=timezone.utc)
|
for name, config in self.strategy_configs.items():
|
||||||
dt_local = dt_utc.astimezone(None)
|
if config.get("enabled", False):
|
||||||
timestamp_display = dt_local.strftime('%Y-%m-%d %H:%M:%S %Z')
|
status_file = os.path.join("_data", f"strategy_status_{name}.json")
|
||||||
else:
|
if os.path.exists(status_file):
|
||||||
timestamp_display = "N/A"
|
try:
|
||||||
|
with open(status_file, 'r', encoding='utf-8') as f:
|
||||||
|
enabled_statuses[name] = json.load(f)
|
||||||
|
except (IOError, json.JSONDecodeError):
|
||||||
|
enabled_statuses[name] = {"error": "Could not read status file."}
|
||||||
|
else:
|
||||||
|
enabled_statuses[name] = {"current_signal": "Initializing..."}
|
||||||
|
self.strategy_statuses = enabled_statuses
|
||||||
|
|
||||||
self.last_db_update_info = f"{coin} at {timestamp_display} ({num_candles} candles)"
|
def read_executor_status(self):
|
||||||
except (IOError, json.JSONDecodeError) as e:
|
if os.path.exists(TRADE_EXECUTOR_STATUS_FILE):
|
||||||
self.last_db_update_info = "Error reading status file."
|
try:
|
||||||
logging.error(f"Could not read status file: {e}")
|
with open(TRADE_EXECUTOR_STATUS_FILE, 'r', encoding='utf-8') as f:
|
||||||
|
self.open_positions = json.load(f)
|
||||||
|
except (IOError, json.JSONDecodeError):
|
||||||
|
logging.debug("Could not read trade executor status file.")
|
||||||
|
else:
|
||||||
|
self.open_positions = {}
|
||||||
|
|
||||||
|
def check_process_status(self):
|
||||||
|
for name, process in self.background_processes.items():
|
||||||
|
self.process_status[name] = "Running" if process.is_alive() else "STOPPED"
|
||||||
|
|
||||||
def display_dashboard(self):
|
def display_dashboard(self):
|
||||||
"""Displays a formatted table for prices and DB status without blinking."""
|
print("\x1b[H\x1b[J", end="")
|
||||||
# Move the cursor up to overwrite the previous output
|
|
||||||
if self._lines_printed > 0:
|
|
||||||
print(f"\x1b[{self._lines_printed}A", end="")
|
|
||||||
|
|
||||||
# Build the output as a single string
|
left_table_lines = ["--- Market Dashboard ---"]
|
||||||
output_lines = []
|
left_table_width = 44
|
||||||
output_lines.append("--- Market Dashboard ---")
|
left_table_lines.append("-" * left_table_width)
|
||||||
table_width = 26
|
left_table_lines.append(f"{'#':<2} | {'Coin':^6} | {'Live Price':>10} | {'Market Cap':>15} |")
|
||||||
output_lines.append("-" * table_width)
|
left_table_lines.append("-" * left_table_width)
|
||||||
output_lines.append(f"{'#':<2} | {'Coin':<6} | {'Live Price':>10} |")
|
|
||||||
output_lines.append("-" * table_width)
|
|
||||||
for i, coin in enumerate(self.watched_coins, 1):
|
for i, coin in enumerate(self.watched_coins, 1):
|
||||||
price = self.prices.get(coin, "Loading...")
|
price = self.prices.get(coin, "Loading...")
|
||||||
output_lines.append(f"{i:<2} | {coin:<6} | {price:>10} |")
|
market_cap = self.market_caps.get(coin)
|
||||||
output_lines.append("-" * table_width)
|
formatted_mc = format_market_cap(market_cap)
|
||||||
output_lines.append(f"DB Status: Last coin updated -> {self.last_db_update_info}")
|
left_table_lines.append(f"{i:<2} | {coin:^6} | {price:>10} | {formatted_mc:>15} |")
|
||||||
|
left_table_lines.append("-" * left_table_width)
|
||||||
|
|
||||||
# Join lines and add a code to clear from cursor to end of screen
|
right_table_lines = ["--- Strategy Status ---"]
|
||||||
# This prevents artifacts if the new output is shorter than the old one.
|
right_table_width = 154
|
||||||
final_output = "\n".join(output_lines) + "\n\x1b[J"
|
right_table_lines.append("-" * right_table_width)
|
||||||
print(final_output, end="")
|
right_table_lines.append(f"{'#':^2} | {'Strategy Name':<25} | {'Coin':^6} | {'Signal':^8} | {'Signal Price':>12} | {'Last Change':>17} | {'TF':^5} | {'Size':^8} | {'Parameters':<45} |")
|
||||||
|
right_table_lines.append("-" * right_table_width)
|
||||||
|
for i, (name, status) in enumerate(self.strategy_statuses.items(), 1):
|
||||||
|
signal = status.get('current_signal', 'N/A')
|
||||||
|
price = status.get('signal_price')
|
||||||
|
price_display = f"{price:.4f}" if isinstance(price, (int, float)) else "-"
|
||||||
|
last_change = status.get('last_signal_change_utc')
|
||||||
|
last_change_display = 'Never'
|
||||||
|
if last_change:
|
||||||
|
dt_utc = datetime.fromisoformat(last_change.replace('Z', '+00:00')).replace(tzinfo=timezone.utc)
|
||||||
|
dt_local = dt_utc.astimezone(None)
|
||||||
|
last_change_display = dt_local.strftime('%Y-%m-%d %H:%M')
|
||||||
|
|
||||||
# Store the number of lines printed for the next iteration
|
config_params = self.strategy_configs.get(name, {}).get('parameters', {})
|
||||||
self._lines_printed = len(output_lines)
|
coin = config_params.get('coin', 'N/A')
|
||||||
|
timeframe = config_params.get('timeframe', 'N/A')
|
||||||
|
size = config_params.get('size', 'N/A')
|
||||||
|
|
||||||
|
other_params = {k: v for k, v in config_params.items() if k not in ['coin', 'timeframe', 'size']}
|
||||||
|
params_str = ", ".join([f"{k}={v}" for k, v in other_params.items()])
|
||||||
|
right_table_lines.append(f"{i:^2} | {name:<25} | {coin:^6} | {signal:^8} | {price_display:>12} | {last_change_display:>17} | {timeframe:^5} | {size:>8} | {params_str:<45} |")
|
||||||
|
right_table_lines.append("-" * right_table_width)
|
||||||
|
|
||||||
|
output_lines = []
|
||||||
|
max_rows = max(len(left_table_lines), len(right_table_lines))
|
||||||
|
separator = " "
|
||||||
|
indent = " " * 10
|
||||||
|
for i in range(max_rows):
|
||||||
|
left_part = left_table_lines[i] if i < len(left_table_lines) else " " * left_table_width
|
||||||
|
right_part = indent + right_table_lines[i] if i < len(right_table_lines) else ""
|
||||||
|
output_lines.append(f"{left_part}{separator}{right_part}")
|
||||||
|
|
||||||
|
output_lines.append("\n--- Open Positions ---")
|
||||||
|
pos_table_width = 100
|
||||||
|
output_lines.append("-" * pos_table_width)
|
||||||
|
output_lines.append(f"{'Account':<10} | {'Coin':<6} | {'Size':>15} | {'Entry Price':>12} | {'Mark Price':>12} | {'PNL':>15} | {'Leverage':>10} |")
|
||||||
|
output_lines.append("-" * pos_table_width)
|
||||||
|
|
||||||
|
perps_positions = self.open_positions.get('perpetuals_account', {}).get('open_positions', [])
|
||||||
|
spot_positions = self.open_positions.get('spot_account', {}).get('positions', [])
|
||||||
|
|
||||||
|
if not perps_positions and not spot_positions:
|
||||||
|
output_lines.append("No open positions found.")
|
||||||
|
else:
|
||||||
|
for pos in perps_positions:
|
||||||
|
try:
|
||||||
|
pnl = float(pos.get('pnl', 0.0))
|
||||||
|
pnl_str = f"${pnl:,.2f}"
|
||||||
|
except (ValueError, TypeError):
|
||||||
|
pnl_str = "Error"
|
||||||
|
|
||||||
|
coin = pos.get('coin') or '-'
|
||||||
|
size = pos.get('size') or '-'
|
||||||
|
entry_price = pos.get('entry_price') or '-'
|
||||||
|
mark_price = pos.get('mark_price') or '-'
|
||||||
|
leverage = pos.get('leverage') or '-'
|
||||||
|
|
||||||
|
output_lines.append(f"{'Perps':<10} | {coin:<6} | {size:>15} | {entry_price:>12} | {mark_price:>12} | {pnl_str:>15} | {leverage:>10} |")
|
||||||
|
|
||||||
|
for pos in spot_positions:
|
||||||
|
pnl = pos.get('pnl', 'N/A')
|
||||||
|
coin = pos.get('coin') or '-'
|
||||||
|
balance_size = pos.get('balance_size') or '-'
|
||||||
|
output_lines.append(f"{'Spot':<10} | {coin:<6} | {balance_size:>15} | {'-':>12} | {'-':>12} | {pnl:>15} | {'-':>10} |")
|
||||||
|
output_lines.append("-" * pos_table_width)
|
||||||
|
|
||||||
|
output_lines.append("\n--- Background Processes ---")
|
||||||
|
for name, status in self.process_status.items():
|
||||||
|
output_lines.append(f"{name:<25}: {status}")
|
||||||
|
|
||||||
|
final_output = "\n".join(output_lines)
|
||||||
|
print(final_output)
|
||||||
sys.stdout.flush()
|
sys.stdout.flush()
|
||||||
|
|
||||||
def run(self):
|
def run(self):
|
||||||
"""Main loop to read and display data."""
|
"""Main loop to read data, display dashboard, and check processes."""
|
||||||
while True:
|
while True:
|
||||||
self.read_prices()
|
self.read_prices()
|
||||||
self.get_overall_db_status()
|
self.read_market_caps()
|
||||||
|
self.read_strategy_statuses()
|
||||||
|
self.read_executor_status()
|
||||||
|
self.check_process_status()
|
||||||
self.display_dashboard()
|
self.display_dashboard()
|
||||||
time.sleep(2)
|
time.sleep(0.5)
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
setup_logging('normal', 'MainApp')
|
setup_logging('normal', 'MainApp')
|
||||||
|
|
||||||
logging.info(f"Running coin lister: '{COIN_LISTER_SCRIPT}'...")
|
if not os.path.exists(LOGS_DIR):
|
||||||
|
os.makedirs(LOGS_DIR)
|
||||||
|
|
||||||
|
processes = {}
|
||||||
|
strategy_configs = {}
|
||||||
|
|
||||||
try:
|
try:
|
||||||
subprocess.run([sys.executable, COIN_LISTER_SCRIPT], check=True, capture_output=True, text=True)
|
with open(STRATEGY_CONFIG_FILE, 'r') as f:
|
||||||
except subprocess.CalledProcessError as e:
|
strategy_configs = json.load(f)
|
||||||
logging.error(f"Failed to run '{COIN_LISTER_SCRIPT}'. Error: {e.stderr}")
|
except (FileNotFoundError, json.JSONDecodeError) as e:
|
||||||
|
logging.error(f"Could not load strategies from '{STRATEGY_CONFIG_FILE}': {e}")
|
||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
|
|
||||||
logging.info(f"Starting market feeder ('{MARKET_FEEDER_SCRIPT}')...")
|
required_timeframes = set()
|
||||||
market_process = multiprocessing.Process(target=run_market_feeder, daemon=True)
|
for name, config in strategy_configs.items():
|
||||||
market_process.start()
|
if config.get("enabled", False):
|
||||||
|
tf = config.get("parameters", {}).get("timeframe")
|
||||||
|
if tf:
|
||||||
|
required_timeframes.add(tf)
|
||||||
|
|
||||||
logging.info(f"Starting historical data fetcher ('{DATA_FETCHER_SCRIPT}')...")
|
if not required_timeframes:
|
||||||
fetcher_process = multiprocessing.Process(target=data_fetcher_scheduler, daemon=True)
|
logging.warning("No timeframes required by any enabled strategy.")
|
||||||
fetcher_process.start()
|
|
||||||
|
|
||||||
# --- Restored Resampler Process Start ---
|
with multiprocessing.Manager() as manager:
|
||||||
logging.info(f"Starting resampler ('{RESAMPLER_SCRIPT}')...")
|
shared_prices = manager.dict()
|
||||||
resampler_process = multiprocessing.Process(target=resampler_scheduler, daemon=True)
|
|
||||||
resampler_process.start()
|
|
||||||
# --- End Resampler Process Start ---
|
|
||||||
|
|
||||||
time.sleep(3)
|
processes["Live Market Feed"] = multiprocessing.Process(target=start_live_feed, args=(shared_prices, 'off'), daemon=True)
|
||||||
|
processes["Live Candle Fetcher"] = multiprocessing.Process(target=run_live_candle_fetcher, daemon=True)
|
||||||
|
processes["Resampler"] = multiprocessing.Process(target=resampler_scheduler, args=(list(required_timeframes),), daemon=True)
|
||||||
|
processes["Market Cap Fetcher"] = multiprocessing.Process(target=market_cap_fetcher_scheduler, daemon=True)
|
||||||
|
processes["Trade Executor"] = multiprocessing.Process(target=run_trade_executor, daemon=True)
|
||||||
|
|
||||||
app = MainApp(coins_to_watch=WATCHED_COINS)
|
for name, config in strategy_configs.items():
|
||||||
try:
|
if config.get("enabled", False):
|
||||||
app.run()
|
if not os.path.exists(config['script']):
|
||||||
except KeyboardInterrupt:
|
logging.error(f"Strategy script '{config['script']}' for '{name}' not found. Skipping.")
|
||||||
logging.info("Shutting down...")
|
continue
|
||||||
market_process.terminate()
|
proc = multiprocessing.Process(target=run_strategy, args=(name, config), daemon=True)
|
||||||
fetcher_process.terminate()
|
processes[f"Strategy: {name}"] = proc
|
||||||
# --- Restored Resampler Shutdown ---
|
|
||||||
resampler_process.terminate()
|
for name, proc in processes.items():
|
||||||
market_process.join()
|
logging.info(f"Starting process '{name}'...")
|
||||||
fetcher_process.join()
|
proc.start()
|
||||||
resampler_process.join()
|
|
||||||
# --- End Resampler Shutdown ---
|
time.sleep(3)
|
||||||
logging.info("Shutdown complete.")
|
|
||||||
sys.exit(0)
|
app = MainApp(coins_to_watch=WATCHED_COINS, processes=processes, strategy_configs=strategy_configs, shared_prices=shared_prices)
|
||||||
|
try:
|
||||||
|
app.run()
|
||||||
|
except KeyboardInterrupt:
|
||||||
|
logging.info("Shutting down...")
|
||||||
|
for proc in processes.values():
|
||||||
|
if proc.is_alive(): proc.terminate()
|
||||||
|
for proc in processes.values():
|
||||||
|
if proc.is_alive(): proc.join()
|
||||||
|
logging.info("Shutdown complete.")
|
||||||
|
sys.exit(0)
|
||||||
|
|
||||||
|
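main_app.py above reads strategy definitions from _data/strategies.json, checking 'enabled', 'script' and 'parameters' (coin, timeframe, size), while strategy_runner.py added later in this changeset also reads 'class'. A sketch of one entry consistent with those keys; the strategy name and parameter values are illustrative, not taken from the repository:

# Hypothetical strategies.json entry; only the key names are taken from this changeset.
import json

example_entry = {
    "ma_cross_eth_15m": {
        "enabled": True,
        "script": "strategy_runner.py",
        "class": "strategies.ma_cross_strategy.MaCrossStrategy",
        "parameters": {"coin": "ETH", "timeframe": "15m", "size": 0.05, "fast": 20, "slow": 50}
    }
}
print(json.dumps(example_entry, indent=4))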
283 market_cap_fetcher.py Normal file
@@ -0,0 +1,283 @@
|
|||||||
|
import argparse
|
||||||
|
import logging
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import sqlite3
|
||||||
|
import pandas as pd
|
||||||
|
import requests
|
||||||
|
import time
|
||||||
|
from datetime import datetime, timezone, timedelta
|
||||||
|
import json
|
||||||
|
|
||||||
|
# Assuming logging_utils.py is in the same directory
|
||||||
|
from logging_utils import setup_logging
|
||||||
|
|
||||||
|
class MarketCapFetcher:
|
||||||
|
"""
|
||||||
|
Fetches historical daily market cap data from the CoinGecko API and
|
||||||
|
intelligently updates the SQLite database. It processes individual coins,
|
||||||
|
aggregates stablecoins, and captures total market cap metrics.
|
||||||
|
"""
|
||||||
|
|
||||||
|
COIN_ID_MAP = {
|
||||||
|
"BTC": "bitcoin",
|
||||||
|
"ETH": "ethereum",
|
||||||
|
"SOL": "solana",
|
||||||
|
"BNB": "binancecoin",
|
||||||
|
"HYPE": "hyperliquid",
|
||||||
|
"ASTER": "astar",
|
||||||
|
"ZEC": "zcash",
|
||||||
|
"PUMP": "pump-fun", # Correct ID is 'pump-fun'
|
||||||
|
"SUI": "sui"
|
||||||
|
}
|
||||||
|
|
||||||
|
STABLECOIN_ID_MAP = {
|
||||||
|
"USDT": "tether",
|
||||||
|
"USDC": "usd-coin",
|
||||||
|
"USDE": "ethena-usde",
|
||||||
|
"DAI": "dai",
|
||||||
|
"PYUSD": "paypal-usd"
|
||||||
|
}
|
||||||
|
|
||||||
|
def __init__(self, log_level: str, coins: list):
|
||||||
|
setup_logging(log_level, 'MarketCapFetcher')
|
||||||
|
self.coins_to_fetch = coins
|
||||||
|
self.db_path = os.path.join("_data", "market_data.db")
|
||||||
|
self.api_base_url = "https://api.coingecko.com/api/v3"
|
||||||
|
self.api_key = os.environ.get("COINGECKO_API_KEY")
|
||||||
|
|
||||||
|
if not self.api_key:
|
||||||
|
logging.error("CoinGecko API key not found. Please set the COINGECKO_API_KEY environment variable.")
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
def run(self):
|
||||||
|
"""
|
||||||
|
Main execution function to process all configured coins and update the database.
|
||||||
|
"""
|
||||||
|
logging.info("Starting historical market cap fetch process from CoinGecko...")
|
||||||
|
with sqlite3.connect(self.db_path) as conn:
|
||||||
|
conn.execute("PRAGMA journal_mode=WAL;")
|
||||||
|
|
||||||
|
# 1. Process individual coins
|
||||||
|
for coin_symbol in self.coins_to_fetch:
|
||||||
|
coin_id = self.COIN_ID_MAP.get(coin_symbol.upper())
|
||||||
|
if not coin_id:
|
||||||
|
logging.warning(f"No CoinGecko ID found for '{coin_symbol}'. Skipping.")
|
||||||
|
continue
|
||||||
|
logging.info(f"--- Processing {coin_symbol} ({coin_id}) ---")
|
||||||
|
try:
|
||||||
|
self._update_market_cap_for_coin(coin_id, coin_symbol, conn)
|
||||||
|
except Exception as e:
|
||||||
|
logging.error(f"An unexpected error occurred while processing {coin_symbol}: {e}")
|
||||||
|
time.sleep(2)
|
||||||
|
|
||||||
|
# 2. Process and aggregate stablecoins
|
||||||
|
self._update_stablecoin_aggregate(conn)
|
||||||
|
|
||||||
|
# 3. Process total market cap metrics
|
||||||
|
self._update_total_market_cap(conn)
|
||||||
|
|
||||||
|
# 4. Save a summary of the latest data
|
||||||
|
self._save_summary(conn)
|
||||||
|
|
||||||
|
logging.info("--- Market cap fetch process complete ---")
|
||||||
|
|
||||||
|
def _save_summary(self, conn):
|
||||||
|
"""
|
||||||
|
Queries the last record from each market cap table and saves a summary to a JSON file.
|
||||||
|
"""
|
||||||
|
logging.info("--- Generating Market Cap Summary ---")
|
||||||
|
summary_data = {}
|
||||||
|
summary_file_path = os.path.join("_data", "market_cap_data.json")
|
||||||
|
|
||||||
|
try:
|
||||||
|
cursor = conn.cursor()
|
||||||
|
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND (name LIKE '%_market_cap' OR name LIKE 'TOTAL_%');")
|
||||||
|
tables = [row[0] for row in cursor.fetchall()]
|
||||||
|
|
||||||
|
for table_name in tables:
|
||||||
|
try:
|
||||||
|
df_last = pd.read_sql(f'SELECT * FROM "{table_name}" ORDER BY datetime_utc DESC LIMIT 1', conn)
|
||||||
|
if not df_last.empty:
|
||||||
|
summary_data[table_name] = df_last.to_dict('records')[0]
|
||||||
|
except Exception as e:
|
||||||
|
logging.error(f"Could not read last record from table '{table_name}': {e}")
|
||||||
|
|
||||||
|
if summary_data:
|
||||||
|
summary_data['summary_last_updated_utc'] = datetime.now(timezone.utc).isoformat()
|
||||||
|
|
||||||
|
with open(summary_file_path, 'w', encoding='utf-8') as f:
|
||||||
|
json.dump(summary_data, f, indent=4)
|
||||||
|
logging.info(f"Successfully saved market cap summary to '{summary_file_path}'")
|
||||||
|
else:
|
||||||
|
logging.warning("No data found to create a summary.")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logging.error(f"Failed to generate summary: {e}")
|
||||||
|
|
||||||
|
def _update_total_market_cap(self, conn):
|
||||||
|
"""
|
||||||
|
Fetches the current total market cap and saves it for the current date.
|
||||||
|
"""
|
||||||
|
logging.info("--- Processing Total Market Cap ---")
|
||||||
|
table_name = "TOTAL_market_cap_daily"
|
||||||
|
|
||||||
|
try:
|
||||||
|
# --- FIX: Use the current date instead of yesterday's ---
|
||||||
|
today_date = datetime.now(timezone.utc).date()
|
||||||
|
|
||||||
|
cursor = conn.cursor()
|
||||||
|
cursor.execute(f"SELECT name FROM sqlite_master WHERE type='table' AND name='{table_name}';")
|
||||||
|
table_exists = cursor.fetchone()
|
||||||
|
|
||||||
|
if table_exists:
|
||||||
|
# Check if we already have a record for today
|
||||||
|
cursor.execute(f"SELECT 1 FROM \"{table_name}\" WHERE date(datetime_utc) = ? LIMIT 1", (today_date.isoformat(),))
|
||||||
|
if cursor.fetchone():
|
||||||
|
logging.info(f"Total market cap for {today_date} already exists. Skipping.")
|
||||||
|
return
|
||||||
|
|
||||||
|
logging.info("Fetching current global market data...")
|
||||||
|
url = f"{self.api_base_url}/global"
|
||||||
|
headers = {"x-cg-demo-api-key": self.api_key}
|
||||||
|
response = requests.get(url, headers=headers)
|
||||||
|
response.raise_for_status()
|
||||||
|
global_data = response.json().get('data', {})
|
||||||
|
total_mc = global_data.get('total_market_cap', {}).get('usd')
|
||||||
|
|
||||||
|
if total_mc:
|
||||||
|
df_total = pd.DataFrame([{
|
||||||
|
'datetime_utc': pd.to_datetime(today_date),
|
||||||
|
'market_cap': total_mc
|
||||||
|
}])
|
||||||
|
df_total.to_sql(table_name, conn, if_exists='append', index=False)
|
||||||
|
logging.info(f"Saved total market cap for {today_date}: ${total_mc:,.2f}")
|
||||||
|
|
||||||
|
except requests.exceptions.RequestException as e:
|
||||||
|
logging.error(f"Failed to fetch global market data: {e}")
|
||||||
|
except Exception as e:
|
||||||
|
logging.error(f"An error occurred while updating total market cap: {e}")
|
||||||
|
|
||||||
|
|
||||||
|
def _update_stablecoin_aggregate(self, conn):
|
||||||
|
"""Fetches data for all stablecoins and saves the aggregated market cap."""
|
||||||
|
logging.info("--- Processing aggregated stablecoin market cap ---")
|
||||||
|
all_stablecoin_df = pd.DataFrame()
|
||||||
|
|
||||||
|
for symbol, coin_id in self.STABLECOIN_ID_MAP.items():
|
||||||
|
logging.info(f"Fetching historical data for stablecoin: {symbol}...")
|
||||||
|
df = self._fetch_historical_data(coin_id, days=365)
|
||||||
|
if not df.empty:
|
||||||
|
df['coin'] = symbol
|
||||||
|
all_stablecoin_df = pd.concat([all_stablecoin_df, df])
|
||||||
|
time.sleep(2)
|
||||||
|
|
||||||
|
if all_stablecoin_df.empty:
|
||||||
|
logging.warning("No data fetched for any stablecoins. Cannot create aggregate.")
|
||||||
|
return
|
||||||
|
|
||||||
|
aggregated_df = all_stablecoin_df.groupby(all_stablecoin_df['datetime_utc'].dt.date)['market_cap'].sum().reset_index()
|
||||||
|
aggregated_df['datetime_utc'] = pd.to_datetime(aggregated_df['datetime_utc'])
|
||||||
|
|
||||||
|
table_name = "STABLECOINS_market_cap"
|
||||||
|
last_date_in_db = self._get_last_date_from_db(table_name, conn)
|
||||||
|
|
||||||
|
if last_date_in_db:
|
||||||
|
aggregated_df = aggregated_df[aggregated_df['datetime_utc'] > last_date_in_db]
|
||||||
|
|
||||||
|
if not aggregated_df.empty:
|
||||||
|
aggregated_df.to_sql(table_name, conn, if_exists='append', index=False)
|
||||||
|
logging.info(f"Successfully saved {len(aggregated_df)} daily records to '{table_name}'.")
|
||||||
|
else:
|
||||||
|
logging.info("Aggregated stablecoin data is already up-to-date.")
|
||||||
|
|
||||||
|
|
||||||
|
def _update_market_cap_for_coin(self, coin_id: str, coin_symbol: str, conn):
|
||||||
|
"""Fetches and appends new market cap data for a single coin."""
|
||||||
|
table_name = f"{coin_symbol}_market_cap"
|
||||||
|
|
||||||
|
last_date_in_db = self._get_last_date_from_db(table_name, conn)
|
||||||
|
|
||||||
|
days_to_fetch = 365
|
||||||
|
if last_date_in_db:
|
||||||
|
delta_days = (datetime.now() - last_date_in_db).days
|
||||||
|
if delta_days <= 0:
|
||||||
|
logging.info(f"Market cap data for '{coin_symbol}' is already up-to-date.")
|
||||||
|
return
|
||||||
|
days_to_fetch = min(delta_days + 1, 365)
|
||||||
|
else:
|
||||||
|
logging.info(f"No existing data found. Fetching initial {days_to_fetch} days for {coin_symbol}.")
|
||||||
|
|
||||||
|
df = self._fetch_historical_data(coin_id, days=days_to_fetch)
|
||||||
|
|
||||||
|
if df.empty:
|
||||||
|
logging.warning(f"No market cap data returned from API for {coin_symbol}.")
|
||||||
|
return
|
||||||
|
|
||||||
|
if last_date_in_db:
|
||||||
|
df = df[df['datetime_utc'] > last_date_in_db]
|
||||||
|
|
||||||
|
if not df.empty:
|
||||||
|
df.to_sql(table_name, conn, if_exists='append', index=False)
|
||||||
|
logging.info(f"Successfully saved {len(df)} new daily market cap records for {coin_symbol}.")
|
||||||
|
else:
|
||||||
|
logging.info(f"Data was fetched, but no new records needed saving for '{coin_symbol}'.")
|
||||||
|
|
||||||
|
def _get_last_date_from_db(self, table_name: str, conn) -> pd.Timestamp:
|
||||||
|
"""Gets the most recent date from a market cap table as a pandas Timestamp."""
|
||||||
|
try:
|
||||||
|
cursor = conn.cursor()
|
||||||
|
cursor.execute(f"SELECT name FROM sqlite_master WHERE type='table' AND name='{table_name}';")
|
||||||
|
if not cursor.fetchone():
|
||||||
|
return None
|
||||||
|
|
||||||
|
last_date_str = pd.read_sql(f'SELECT MAX(datetime_utc) FROM "{table_name}"', conn).iloc[0, 0]
|
||||||
|
return pd.to_datetime(last_date_str) if last_date_str else None
|
||||||
|
except Exception as e:
|
||||||
|
logging.error(f"Could not read last date from table '{table_name}': {e}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
def _fetch_historical_data(self, coin_id: str, days: int) -> pd.DataFrame:
|
||||||
|
"""Fetches historical market chart data from CoinGecko for a specified number of days."""
|
||||||
|
url = f"{self.api_base_url}/coins/{coin_id}/market_chart"
|
||||||
|
params = { "vs_currency": "usd", "days": days, "interval": "daily" }
|
||||||
|
headers = {"x-cg-demo-api-key": self.api_key}
|
||||||
|
|
||||||
|
try:
|
||||||
|
logging.debug(f"Fetching last {days} days for {coin_id}...")
|
||||||
|
response = requests.get(url, headers=headers)
|
||||||
|
response.raise_for_status()
|
||||||
|
data = response.json()
|
||||||
|
|
||||||
|
market_caps = data.get('market_caps', [])
|
||||||
|
if not market_caps: return pd.DataFrame()
|
||||||
|
|
||||||
|
df = pd.DataFrame(market_caps, columns=['timestamp_ms', 'market_cap'])
|
||||||
|
df['datetime_utc'] = pd.to_datetime(df['timestamp_ms'], unit='ms')
|
||||||
|
df.drop_duplicates(subset=['datetime_utc'], keep='last', inplace=True)
|
||||||
|
return df[['datetime_utc', 'market_cap']]
|
||||||
|
|
||||||
|
except requests.exceptions.RequestException as e:
|
||||||
|
logging.error(f"API request failed for {coin_id}: {e}.")
|
||||||
|
return pd.DataFrame()
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
parser = argparse.ArgumentParser(description="Fetch historical market cap data from CoinGecko.")
|
||||||
|
parser.add_argument(
|
||||||
|
"--coins",
|
||||||
|
nargs='+',
|
||||||
|
default=["BTC", "ETH", "SOL", "BNB", "HYPE", "ASTER", "ZEC", "PUMP", "SUI"],
|
||||||
|
help="List of coin symbols to fetch (e.g., BTC ETH)."
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--log-level",
|
||||||
|
default="normal",
|
||||||
|
choices=['off', 'normal', 'debug'],
|
||||||
|
help="Set the logging level for the script."
|
||||||
|
)
|
||||||
|
args = parser.parse_args()
|
||||||
|
|
||||||
|
fetcher = MarketCapFetcher(log_level=args.log_level, coins=args.coins)
|
||||||
|
fetcher.run()
|
||||||
|
|
||||||
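A worked example of the incremental fetch logic in _update_market_cap_for_coin above, assuming the last stored daily record is three days old; the scheduler invokes the script as python market_cap_fetcher.py --coins <symbols> --log-level off.

from datetime import datetime, timedelta

last_date_in_db = datetime.now() - timedelta(days=3)
delta_days = (datetime.now() - last_date_in_db).days   # 3 -> data is stale
days_to_fetch = min(delta_days + 1, 365)                # request 4 days, capped at one year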
BIN requirements.txt Normal file
Binary file not shown.
169 resampler.py
@@ -12,8 +12,8 @@ from logging_utils import setup_logging
|
|||||||
|
|
||||||
class Resampler:
|
class Resampler:
|
||||||
"""
|
"""
|
||||||
Reads 1-minute candle data directly from the SQLite database, resamples
|
Reads new 1-minute candle data from the SQLite database, resamples it to
|
||||||
it to various timeframes, and stores the results back in the database.
|
various timeframes, and appends the new candles to the corresponding tables.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(self, log_level: str, coins: list, timeframes: dict):
|
def __init__(self, log_level: str, coins: list, timeframes: dict):
|
||||||
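The incremental update described in the revised docstring amounts to resampling only the 1-minute rows newer than the target table's last timestamp and appending the result. A minimal sketch, assuming the usual OHLCV aggregation (only 'number_of_trades': 'sum' is visible in the next hunk; the other rules are assumed):

import pandas as pd

# Assumed aggregation rules; the class's aggregation_logic is only partially shown here.
AGG = {'open': 'first', 'high': 'max', 'low': 'min',
       'close': 'last', 'volume': 'sum', 'number_of_trades': 'sum'}

def resample_new_rows(df_1m: pd.DataFrame, last_timestamp, tf_code: str) -> pd.DataFrame:
    """Resample only the 1-minute candles newer than what the target table already holds."""
    new_rows = df_1m[df_1m.index > last_timestamp] if last_timestamp else df_1m
    if new_rows.empty:
        return pd.DataFrame()
    return new_rows.resample(tf_code).agg(AGG).dropna(how='all')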
@@ -31,13 +31,14 @@ class Resampler:
|
|||||||
'number_of_trades': 'sum'
|
'number_of_trades': 'sum'
|
||||||
}
|
}
|
||||||
self.resampling_status = self._load_existing_status()
|
self.resampling_status = self._load_existing_status()
|
||||||
|
self.job_start_time = None
|
||||||
|
|
||||||
def _load_existing_status(self) -> dict:
|
def _load_existing_status(self) -> dict:
|
||||||
"""Loads the existing status file if it exists, otherwise returns an empty dict."""
|
"""Loads the existing status file if it exists, otherwise returns an empty dict."""
|
||||||
if os.path.exists(self.status_file_path):
|
if os.path.exists(self.status_file_path):
|
||||||
try:
|
try:
|
||||||
with open(self.status_file_path, 'r', encoding='utf-8') as f:
|
with open(self.status_file_path, 'r', encoding='utf-8') as f:
|
||||||
logging.info(f"Loading existing status from '{self.status_file_path}'")
|
logging.debug(f"Loading existing status from '{self.status_file_path}'")
|
||||||
return json.load(f)
|
return json.load(f)
|
||||||
except (IOError, json.JSONDecodeError) as e:
|
except (IOError, json.JSONDecodeError) as e:
|
||||||
logging.warning(f"Could not read existing status file. Starting fresh. Error: {e}")
|
logging.warning(f"Could not read existing status file. Starting fresh. Error: {e}")
|
||||||
@@ -47,77 +48,111 @@ class Resampler:
|
|||||||
"""
|
"""
|
||||||
Main execution function to process all configured coins and update the database.
|
Main execution function to process all configured coins and update the database.
|
||||||
"""
|
"""
|
||||||
|
self.job_start_time = datetime.now(timezone.utc)
|
||||||
|
logging.info(f"--- Resampling job started at {self.job_start_time.strftime('%Y-%m-%d %H:%M:%S %Z')} ---")
|
||||||
|
|
||||||
if not os.path.exists(self.db_path):
|
if not os.path.exists(self.db_path):
|
||||||
logging.error(f"Database file '{self.db_path}' not found. "
|
logging.error(f"Database file '{self.db_path}' not found.")
|
||||||
"Please run the data fetcher script first.")
|
return
|
||||||
sys.exit(1)
|
|
||||||
|
|
||||||
with sqlite3.connect(self.db_path) as conn:
|
with sqlite3.connect(self.db_path) as conn:
|
||||||
conn.execute("PRAGMA journal_mode=WAL;")
|
conn.execute("PRAGMA journal_mode=WAL;")
|
||||||
|
|
||||||
logging.info(f"Processing {len(self.coins_to_process)} coins: {', '.join(self.coins_to_process)}")
|
logging.debug(f"Processing {len(self.coins_to_process)} coins...")
|
||||||
|
|
||||||
for coin in self.coins_to_process:
|
for coin in self.coins_to_process:
|
||||||
source_table_name = f"{coin}_1m"
|
source_table_name = f"{coin}_1m"
|
||||||
logging.info(f"--- Processing {coin} ---")
|
logging.debug(f"--- Processing {coin} ---")
|
||||||
|
|
||||||
try:
|
try:
|
||||||
df = pd.read_sql(f'SELECT * FROM "{source_table_name}"', conn)
|
# Load the full 1m history once per coin
|
||||||
|
df_1m = pd.read_sql(f'SELECT * FROM "{source_table_name}"', conn, parse_dates=['datetime_utc'])
|
||||||
if df.empty:
|
if df_1m.empty:
|
||||||
logging.warning(f"Source table '{source_table_name}' is empty or does not exist. Skipping.")
|
logging.warning(f"Source table '{source_table_name}' is empty. Skipping.")
|
||||||
continue
|
continue
|
||||||
|
df_1m.set_index('datetime_utc', inplace=True)
|
||||||
df['datetime_utc'] = pd.to_datetime(df['datetime_utc'])
|
|
||||||
df.set_index('datetime_utc', inplace=True)
|
|
||||||
|
|
||||||
for tf_name, tf_code in self.timeframes.items():
|
for tf_name, tf_code in self.timeframes.items():
|
||||||
logging.info(f" Resampling to {tf_name}...")
|
target_table_name = f"{coin}_{tf_name}"
|
||||||
|
logging.debug(f" Updating {tf_name} table...")
|
||||||
|
|
||||||
resampled_df = df.resample(tf_code).agg(self.aggregation_logic)
|
last_timestamp = self._get_last_timestamp(conn, target_table_name)
|
||||||
|
|
||||||
|
# Get the new 1-minute data that needs to be processed
|
||||||
|
new_df_1m = df_1m[df_1m.index > last_timestamp] if last_timestamp else df_1m
|
||||||
|
|
||||||
|
if new_df_1m.empty:
|
||||||
|
logging.debug(f" -> No new 1-minute data for {tf_name}. Table is up to date.")
|
||||||
|
continue
|
||||||
|
|
||||||
|
resampled_df = new_df_1m.resample(tf_code).agg(self.aggregation_logic)
|
||||||
resampled_df.dropna(how='all', inplace=True)
|
resampled_df.dropna(how='all', inplace=True)
|
||||||
|
|
||||||
if coin not in self.resampling_status:
|
|
||||||
self.resampling_status[coin] = {}
|
|
||||||
|
|
||||||
if not resampled_df.empty:
|
if not resampled_df.empty:
|
||||||
target_table_name = f"{coin}_{tf_name}"
|
# Append the newly resampled data to the target table
|
||||||
resampled_df.to_sql(
|
resampled_df.to_sql(target_table_name, conn, if_exists='append', index=True)
|
||||||
target_table_name,
|
logging.debug(f" -> Appended {len(resampled_df)} new candles to '{target_table_name}'.")
|
||||||
conn,
|
|
||||||
if_exists='replace',
|
|
||||||
index=True
|
|
||||||
)
|
|
||||||
|
|
||||||
last_timestamp = resampled_df.index[-1].strftime('%Y-%m-%d %H:%M:%S')
|
|
||||||
num_candles = len(resampled_df)
|
|
||||||
|
|
||||||
|
if coin not in self.resampling_status: self.resampling_status[coin] = {}
|
||||||
|
total_candles = int(self._get_table_count(conn, target_table_name))
|
||||||
self.resampling_status[coin][tf_name] = {
|
self.resampling_status[coin][tf_name] = {
|
||||||
"last_candle_utc": last_timestamp,
|
"last_candle_utc": resampled_df.index[-1].strftime('%Y-%m-%d %H:%M:%S'),
|
||||||
"total_candles": num_candles
|
"total_candles": total_candles
|
||||||
}
|
|
||||||
else:
|
|
||||||
logging.info(f" -> No data to save for '{coin}_{tf_name}'.")
|
|
||||||
self.resampling_status[coin][tf_name] = {
|
|
||||||
"last_candle_utc": "N/A",
|
|
||||||
"total_candles": 0
|
|
||||||
}
|
}
|
||||||
|
|
||||||
except pd.io.sql.DatabaseError as e:
|
|
||||||
logging.warning(f"Could not read source table '{source_table_name}': {e}")
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logging.error(f"Failed to process coin '{coin}': {e}")
|
logging.error(f"Failed to process coin '{coin}': {e}")
|
||||||
|
|
||||||
|
self._log_summary()
|
||||||
self._save_status()
|
self._save_status()
|
||||||
logging.info("--- Resampling process complete ---")
|
logging.info(f"--- Resampling job finished at {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S %Z')} ---")
|
||||||
|
|
||||||
|
def _log_summary(self):
|
||||||
|
"""Logs a summary of the total candles for each timeframe."""
|
||||||
|
logging.info("--- Resampling Job Summary ---")
|
||||||
|
timeframe_totals = {}
|
||||||
|
# Iterate through coins, skipping metadata keys
|
||||||
|
for coin, tfs in self.resampling_status.items():
|
||||||
|
if not isinstance(tfs, dict): continue
|
||||||
|
for tf_name, tf_data in tfs.items():
|
||||||
|
total = tf_data.get("total_candles", 0)
|
||||||
|
if tf_name not in timeframe_totals:
|
||||||
|
timeframe_totals[tf_name] = 0
|
||||||
|
timeframe_totals[tf_name] += total
|
||||||
|
|
||||||
|
if not timeframe_totals:
|
||||||
|
logging.info("No candles were resampled in this run.")
|
||||||
|
return
|
||||||
|
|
||||||
|
logging.info("Total candles per timeframe across all processed coins:")
|
||||||
|
for tf_name, total in sorted(timeframe_totals.items()):
|
||||||
|
logging.info(f" - {tf_name:<10}: {total:,} candles")
|
||||||
|
|
||||||
|
def _get_last_timestamp(self, conn, table_name):
|
||||||
|
"""Gets the timestamp of the last entry in a table."""
|
||||||
|
try:
|
||||||
|
return pd.read_sql(f'SELECT MAX(datetime_utc) FROM "{table_name}"', conn).iloc[0, 0]
|
||||||
|
except (pd.io.sql.DatabaseError, IndexError):
|
||||||
|
return None
|
||||||
|
|
||||||
|
def _get_table_count(self, conn, table_name):
|
||||||
|
"""Gets the total row count of a table."""
|
||||||
|
try:
|
||||||
|
return pd.read_sql(f'SELECT COUNT(*) FROM "{table_name}"', conn).iloc[0, 0]
|
||||||
|
except (pd.io.sql.DatabaseError, IndexError):
|
||||||
|
return 0
|
||||||
|
|
||||||
def _save_status(self):
|
def _save_status(self):
|
||||||
"""Saves the final resampling status to a JSON file."""
|
"""Saves the final resampling status to a JSON file."""
|
||||||
if not self.resampling_status:
|
if not self.resampling_status:
|
||||||
logging.warning("No data was resampled, skipping status file creation.")
|
|
||||||
return
|
return
|
||||||
|
|
||||||
self.resampling_status['last_completed_utc'] = datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')
|
stop_time = datetime.now(timezone.utc)
|
||||||
|
self.resampling_status['job_start_time_utc'] = self.job_start_time.strftime('%Y-%m-%d %H:%M:%S')
|
||||||
|
self.resampling_status['job_stop_time_utc'] = stop_time.strftime('%Y-%m-%d %H:%M:%S')
|
||||||
|
|
||||||
|
# Clean up old key if it exists from previous versions
|
||||||
|
self.resampling_status.pop('last_completed_utc', None)
|
||||||
|
|
||||||
try:
|
try:
|
||||||
with open(self.status_file_path, 'w', encoding='utf-8') as f:
|
with open(self.status_file_path, 'w', encoding='utf-8') as f:
|
||||||
@@ -135,55 +170,23 @@ def parse_timeframes(tf_strings: list) -> dict:
|
|||||||
unit = ''.join(filter(str.isalpha, tf_str)).lower()
|
unit = ''.join(filter(str.isalpha, tf_str)).lower()
|
||||||
|
|
||||||
code = ''
|
code = ''
|
||||||
if unit == 'm':
|
if unit == 'm': code = f"{numeric_part}min"
|
||||||
code = f"{numeric_part}min"
|
elif unit == 'w': code = f"{numeric_part}W"
|
||||||
elif unit == 'w':
|
elif unit in ['h', 'd']: code = f"{numeric_part}{unit}"
|
||||||
# --- FIX: Use uppercase 'W' for weeks to avoid deprecation warning ---
|
else: code = tf_str
|
||||||
code = f"{numeric_part}W"
|
|
||||||
elif unit in ['h', 'd']:
|
|
||||||
code = f"{numeric_part}{unit}"
|
|
||||||
else:
|
|
||||||
code = tf_str
|
|
||||||
logging.warning(f"Unrecognized timeframe unit in '{tf_str}'. Using as-is.")
|
|
||||||
|
|
||||||
tf_map[tf_str] = code
|
tf_map[tf_str] = code
|
||||||
return tf_map
|
return tf_map
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
parser = argparse.ArgumentParser(description="Resample 1-minute candle data from SQLite to other timeframes.")
|
parser = argparse.ArgumentParser(description="Resample 1-minute candle data from SQLite to other timeframes.")
|
||||||
parser.add_argument(
|
parser.add_argument("--coins", nargs='+', required=True, help="List of coins to process.")
|
||||||
"--coins",
|
parser.add_argument("--timeframes", nargs='+', required=True, help="List of timeframes to generate.")
|
||||||
nargs='+',
|
parser.add_argument("--log-level", default="normal", choices=['off', 'normal', 'debug'])
|
||||||
default=["BTC", "ETH", "SOL", "BNB", "HYPE", "ASTER", "ZEC", "PUMP", "SUI"],
|
|
||||||
help="List of coins to process."
|
|
||||||
)
|
|
||||||
parser.add_argument(
|
|
||||||
"--timeframes",
|
|
||||||
nargs='+',
|
|
||||||
default=['4m', '5m', '15m', '30m', '37m', '148m', '4h', '12h', '1d', '1w'],
|
|
||||||
help="List of timeframes to generate (e.g., 5m 1h 1d)."
|
|
||||||
)
|
|
||||||
parser.add_argument(
|
|
||||||
"--timeframe",
|
|
||||||
dest="timeframes",
|
|
||||||
nargs='+',
|
|
||||||
help=argparse.SUPPRESS
|
|
||||||
)
|
|
||||||
parser.add_argument(
|
|
||||||
"--log-level",
|
|
||||||
default="normal",
|
|
||||||
choices=['off', 'normal', 'debug'],
|
|
||||||
help="Set the logging level for the script."
|
|
||||||
)
|
|
||||||
args = parser.parse_args()
|
args = parser.parse_args()
|
||||||
|
|
||||||
timeframes_dict = parse_timeframes(args.timeframes)
|
timeframes_dict = parse_timeframes(args.timeframes)
|
||||||
|
|
||||||
resampler = Resampler(
|
resampler = Resampler(log_level=args.log_level, coins=args.coins, timeframes=timeframes_dict)
|
||||||
log_level=args.log_level,
|
|
||||||
coins=args.coins,
|
|
||||||
timeframes=timeframes_dict
|
|
||||||
)
|
|
||||||
resampler.run()
|
resampler.run()
|
||||||
|
|
||||||
|
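Expected mapping produced by parse_timeframes for a typical set of inputs (the uppercase 'W' is what avoids the pandas deprecation warning mentioned in the removed comment):

parse_timeframes(['5m', '15m', '4h', '1d', '1w'])
# -> {'5m': '5min', '15m': '15min', '4h': '4h', '1d': '1d', '1w': '1W'}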
73 strategies/base_strategy.py Normal file
@@ -0,0 +1,73 @@
from abc import ABC, abstractmethod
import pandas as pd
import json
import os
import logging
from datetime import datetime, timezone
import sqlite3

class BaseStrategy(ABC):
    """
    An abstract base class that defines the blueprint for all trading strategies.
    It provides common functionality like loading data and saving status.
    """

    def __init__(self, strategy_name: str, params: dict, log_level: str):
        self.strategy_name = strategy_name
        self.params = params
        self.coin = params.get("coin", "N/A")
        self.timeframe = params.get("timeframe", "N/A")
        self.db_path = os.path.join("_data", "market_data.db")
        self.status_file_path = os.path.join("_data", f"strategy_status_{self.strategy_name}.json")

        # --- ADDED: State variables required for status reporting ---
        self.current_signal = "INIT"
        self.last_signal_change_utc = None
        self.signal_price = None

        # This will be set up by the child class after it's initialized
        # setup_logging(log_level, f"Strategy-{self.strategy_name}")
        # logging.info(f"Initializing with parameters: {self.params}")

    def load_data(self) -> pd.DataFrame:
        """Loads historical data for the configured coin and timeframe."""
        table_name = f"{self.coin}_{self.timeframe}"

        # Dynamically determine the number of candles needed based on all possible period parameters
        periods = [v for k, v in self.params.items() if 'period' in k or '_ma' in k or 'slow' in k]
        limit = max(periods) + 50 if periods else 500

        try:
            with sqlite3.connect(f"file:{self.db_path}?mode=ro", uri=True) as conn:
                query = f'SELECT * FROM "{table_name}" ORDER BY datetime_utc DESC LIMIT {limit}'
                df = pd.read_sql(query, conn, parse_dates=['datetime_utc'])
                if df.empty: return pd.DataFrame()
                df.set_index('datetime_utc', inplace=True)
                df.sort_index(inplace=True)
                return df
        except Exception as e:
            logging.error(f"Failed to load data from table '{table_name}': {e}")
            return pd.DataFrame()

    @abstractmethod
    def calculate_signals(self, df: pd.DataFrame) -> pd.DataFrame:
        """
        The core logic of the strategy. Must be implemented by child classes.
        """
        pass

    def _save_status(self):
        """Saves the current strategy state to its JSON file."""
        status = {
            "strategy_name": self.strategy_name,
            "current_signal": self.current_signal,
            "last_signal_change_utc": self.last_signal_change_utc,
            "signal_price": self.signal_price,
            "last_checked_utc": datetime.now(timezone.utc).isoformat()
        }
        try:
            with open(self.status_file_path, 'w', encoding='utf-8') as f:
                json.dump(status, f, indent=4)
        except IOError as e:
            logging.error(f"Failed to write status file for {self.strategy_name}: {e}")
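A worked example of how load_data sizes its query window from the strategy parameters (values are illustrative):

params = {"coin": "ETH", "timeframe": "15m", "short_ma": 20, "long_ma": 50}
periods = [v for k, v in params.items() if 'period' in k or '_ma' in k or 'slow' in k]  # [20, 50]
limit = max(periods) + 50 if periods else 500                                           # -> 100 candles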
35 strategies/ma_cross_strategy.py Normal file
@@ -0,0 +1,35 @@
import pandas as pd
from strategies.base_strategy import BaseStrategy
import logging

class MaCrossStrategy(BaseStrategy):
    """
    A strategy based on a fast Simple Moving Average (SMA) crossing
    a slow SMA.
    """
    def calculate_signals(self, df: pd.DataFrame) -> pd.DataFrame:
        # Support multiple naming conventions: some configs use 'fast'/'slow'
        # while others use 'short_ma'/'long_ma'. Normalize here so both work.
        fast_ma_period = self.params.get('short_ma') or self.params.get('fast') or 0
        slow_ma_period = self.params.get('long_ma') or self.params.get('slow') or 0

        # If parameters are missing, return a neutral signal frame.
        if not fast_ma_period or not slow_ma_period:
            logging.warning(f"Missing MA period parameters (fast={fast_ma_period}, slow={slow_ma_period}).")
            df['signal'] = 0
            return df

        if len(df) < slow_ma_period:
            logging.warning(f"Not enough data for MA periods {fast_ma_period}/{slow_ma_period}. Need {slow_ma_period}, have {len(df)}.")
            df['signal'] = 0
            return df

        df['fast_sma'] = df['close'].rolling(window=fast_ma_period).mean()
        df['slow_sma'] = df['close'].rolling(window=slow_ma_period).mean()

        # Signal is 1 for Golden Cross (fast > slow), -1 for Death Cross
        df['signal'] = 0
        df.loc[df['fast_sma'] > df['slow_sma'], 'signal'] = 1
        df.loc[df['fast_sma'] < df['slow_sma'], 'signal'] = -1

        return df
24 strategies/single_sma_strategy.py Normal file
@@ -0,0 +1,24 @@
import pandas as pd
from strategies.base_strategy import BaseStrategy
import logging

class SingleSmaStrategy(BaseStrategy):
    """
    A strategy based on the price crossing a single Simple Moving Average (SMA).
    """
    def calculate_signals(self, df: pd.DataFrame) -> pd.DataFrame:
        sma_period = self.params.get('sma_period', 0)

        if not sma_period or len(df) < sma_period:
            logging.warning(f"Not enough data for SMA period {sma_period}. Need {sma_period}, have {len(df)}.")
            df['signal'] = 0
            return df

        df['sma'] = df['close'].rolling(window=sma_period).mean()

        # Signal is 1 when price is above SMA, -1 when below
        df['signal'] = 0
        df.loc[df['close'] > df['sma'], 'signal'] = 1
        df.loc[df['close'] < df['sma'], 'signal'] = -1

        return df
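Because the strategy classes only depend on a DataFrame with a 'close' column, they can be exercised offline without the database. An illustrative check of MaCrossStrategy on synthetic data (the strategy name and parameters are made up for the example):

import numpy as np
import pandas as pd
from strategies.ma_cross_strategy import MaCrossStrategy

idx = pd.date_range("2025-01-01", periods=200, freq="15min", tz="UTC")
df = pd.DataFrame({"close": np.linspace(100, 120, 200)}, index=idx)

strategy = MaCrossStrategy("example_ma_cross", {"coin": "ETH", "timeframe": "15m", "fast": 20, "slow": 50}, "off")
signals = strategy.calculate_signals(df.copy())
print(int(signals["signal"].iloc[-1]))  # 1 on this steadily rising series (fast SMA above slow SMA)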
85 strategy_runner.py Normal file
@@ -0,0 +1,85 @@
|
|||||||
|
import argparse
|
||||||
|
import logging
|
||||||
|
import sys
|
||||||
|
import time
|
||||||
|
import pandas as pd
|
||||||
|
import sqlite3
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
from datetime import datetime, timezone
|
||||||
|
import importlib
|
||||||
|
|
||||||
|
from logging_utils import setup_logging
|
||||||
|
from strategies.base_strategy import BaseStrategy
|
||||||
|
|
||||||
|
class StrategyRunner:
|
||||||
|
"""
|
||||||
|
A generic runner that can execute any strategy that adheres to the
|
||||||
|
BaseStrategy blueprint. It handles the main logic loop, including data
|
||||||
|
loading, signal calculation, status saving, and sleeping.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, strategy_name: str, log_level: str):
|
||||||
|
self.strategy_name = strategy_name
|
||||||
|
self.log_level = log_level
|
||||||
|
self.config = self._load_strategy_config()
|
||||||
|
if not self.config:
|
||||||
|
print(f"FATAL: Strategy '{strategy_name}' not found in configuration.")
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
# Dynamically import and instantiate the strategy logic class
|
||||||
|
try:
|
||||||
|
module_path, class_name = self.config['class'].rsplit('.', 1)
|
||||||
|
module = importlib.import_module(module_path)
|
||||||
|
StrategyClass = getattr(module, class_name)
|
||||||
|
self.strategy_instance = StrategyClass(strategy_name, self.config['parameters'], self.log_level)
|
||||||
|
except (ImportError, AttributeError, KeyError) as e:
|
||||||
|
print(f"FATAL: Could not load strategy class for '{strategy_name}': {e}")
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
def _load_strategy_config(self) -> dict:
|
||||||
|
"""Loads the configuration for the specified strategy."""
|
||||||
|
config_path = os.path.join("_data", "strategies.json")
|
||||||
|
try:
|
||||||
|
with open(config_path, 'r') as f:
|
||||||
|
all_configs = json.load(f)
|
||||||
|
return all_configs.get(self.strategy_name)
|
||||||
|
except (FileNotFoundError, json.JSONDecodeError) as e:
|
||||||
|
print(f"FATAL: Could not load strategy configuration: {e}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
def run(self):
|
||||||
|
"""Main loop: loads data, calculates signals, saves status, and sleeps."""
|
||||||
|
logging.info(f"Starting main logic loop for {self.strategy_instance.coin} on {self.strategy_instance.timeframe}.")
|
||||||
|
while True:
|
||||||
|
df = self.strategy_instance.load_data()
|
||||||
|
if df.empty:
|
||||||
|
logging.warning("No data loaded. Waiting 1 minute before retrying...")
|
||||||
|
time.sleep(60)
|
||||||
|
continue
|
||||||
|
|
||||||
|
# The strategy instance calculates signals and updates its internal state
|
||||||
|
self.strategy_instance.calculate_signals_and_state(df.copy())
|
||||||
|
self.strategy_instance._save_status() # Save the new state
|
||||||
|
|
||||||
|
logging.info(f"Current Signal: {self.strategy_instance.current_signal}")
|
||||||
|
|
||||||
|
# Simple 1-minute wait for the next cycle
|
||||||
|
# A more precise timing mechanism could be implemented here if needed
|
||||||
|
time.sleep(60)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
parser = argparse.ArgumentParser(description="A generic runner for trading strategies.")
|
||||||
|
parser.add_argument("--name", required=True, help="The name of the strategy instance from strategies.json.")
|
||||||
|
parser.add_argument("--log-level", default="normal", choices=['off', 'normal', 'debug'])
|
||||||
|
args = parser.parse_args()
|
||||||
|
|
||||||
|
try:
|
||||||
|
runner = StrategyRunner(strategy_name=args.name, log_level=args.log_level)
|
||||||
|
runner.run()
|
||||||
|
except KeyboardInterrupt:
|
||||||
|
logging.info("Strategy runner stopped.")
|
||||||
|
except Exception as e:
|
||||||
|
logging.error(f"A critical error occurred in the strategy runner: {e}")
|
||||||
|
sys.exit(1)
|
||||||
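For context, a minimal sketch of the shape of a `_data/strategies.json` entry this runner and the trade executor below expect. The field names (`class`, `parameters`, `enabled`, `agent`, plus `coin`, `timeframe`, `size`, `leverage_long`, `leverage_short`) come from how the code reads the config; the module path and all concrete values are placeholders, not recommendations:

import json

# Hypothetical entry; every value below is illustrative only.
example_entry = {
    "sma_cross_btc_1h": {
        "enabled": True,  # the trade executor only loads enabled strategies
        "class": "strategies.ma_cross_strategy.MaCrossStrategy",  # module.ClassName (path assumed)
        "agent": "scalper",  # lower-cased name of a <NAME>_AGENT_PK key in .env
        "parameters": {
            "coin": "BTC",
            "timeframe": "1h",
            "fast": 10,
            "slow": 50,
            "size": 0.01,
            "leverage_long": 3,
            "leverage_short": 3
        }
    }
}

print(json.dumps(example_entry, indent=4))  # shape of one strategies.json entry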
219
strategy_sma_cross.py
Normal file
@ -0,0 +1,219 @@
import argparse
import logging
import sys
import time
import pandas as pd
import sqlite3
import json
import os
from datetime import datetime, timezone, timedelta

from logging_utils import setup_logging


class SmaCrossStrategy:
    """
    A flexible strategy that can operate in two modes:
    1. Fast SMA / Slow SMA Crossover (if both 'fast' and 'slow' params are set)
    2. Price / Single SMA Crossover (if only one of 'fast' or 'slow' is set)
    """

    def __init__(self, strategy_name: str, params: dict, log_level: str):
        self.strategy_name = strategy_name
        self.params = params
        self.coin = params.get("coin", "N/A")
        self.timeframe = params.get("timeframe", "N/A")

        # Load fast and slow SMA periods, defaulting to 0 if not present
        self.fast_ma_period = params.get("fast", 0)
        self.slow_ma_period = params.get("slow", 0)

        self.db_path = os.path.join("_data", "market_data.db")
        self.status_file_path = os.path.join("_data", f"strategy_status_{self.strategy_name}.json")

        # Strategy state variables
        self.current_signal = "INIT"
        self.last_signal_change_utc = None
        self.signal_price = None
        self.fast_ma_value = None
        self.slow_ma_value = None

        setup_logging(log_level, f"Strategy-{self.strategy_name}")
        logging.info("Initializing SMA Crossover strategy with parameters:")
        for key, value in self.params.items():
            logging.info(f"  - {key}: {value}")

    def load_data(self) -> pd.DataFrame:
        """Loads historical data, ensuring enough for the longest SMA calculation."""
        table_name = f"{self.coin}_{self.timeframe}"

        # Determine the longest period needed for calculations
        longest_period = max(self.fast_ma_period or 0, self.slow_ma_period or 0)
        if longest_period == 0:
            logging.error("No valid SMA periods ('fast' or 'slow' > 0) are defined in parameters.")
            return pd.DataFrame()

        limit = longest_period + 50

        try:
            with sqlite3.connect(f"file:{self.db_path}?mode=ro", uri=True) as conn:
                query = f'SELECT * FROM "{table_name}" ORDER BY datetime_utc DESC LIMIT {limit}'
                df = pd.read_sql(query, conn)
                if df.empty: return pd.DataFrame()

                df['datetime_utc'] = pd.to_datetime(df['datetime_utc'])
                df.set_index('datetime_utc', inplace=True)
                df.sort_index(inplace=True)
                return df
        except Exception as e:
            logging.error(f"Failed to load data from table '{table_name}': {e}")
            return pd.DataFrame()

    def _calculate_signals(self, data: pd.DataFrame):
        """
        Analyzes historical data to find the last crossover event based on the
        configured parameters (either dual or single SMA mode).
        """
        # --- DUAL SMA CROSSOVER LOGIC ---
        if self.fast_ma_period and self.slow_ma_period:
            if len(data) < self.slow_ma_period + 1:
                self.current_signal = "INSUFFICIENT DATA"
                return

            data['fast_sma'] = data['close'].rolling(window=self.fast_ma_period).mean()
            data['slow_sma'] = data['close'].rolling(window=self.slow_ma_period).mean()
            self.fast_ma_value = data['fast_sma'].iloc[-1]
            self.slow_ma_value = data['slow_sma'].iloc[-1]

            # Position is 1 for Golden Cross (fast > slow), -1 for Death Cross
            data['position'] = 0
            data.loc[data['fast_sma'] > data['slow_sma'], 'position'] = 1
            data.loc[data['fast_sma'] < data['slow_sma'], 'position'] = -1

        # --- SINGLE SMA PRICE CROSS LOGIC ---
        else:
            sma_period = self.fast_ma_period or self.slow_ma_period
            if len(data) < sma_period + 1:
                self.current_signal = "INSUFFICIENT DATA"
                return

            data['sma'] = data['close'].rolling(window=sma_period).mean()
            self.slow_ma_value = data['sma'].iloc[-1]  # Use slow_ma_value to store the single SMA
            self.fast_ma_value = None  # Ensure fast is None

            # Position is 1 when price is above SMA, -1 when below
            data['position'] = 0
            data.loc[data['close'] > data['sma'], 'position'] = 1
            data.loc[data['close'] < data['sma'], 'position'] = -1

        # --- COMMON LOGIC for determining signal and last change ---
        data['crossover'] = data['position'].diff()
        last_position = data['position'].iloc[-1]

        if last_position == 1: self.current_signal = "BUY"
        elif last_position == -1: self.current_signal = "SELL"
        else: self.current_signal = "HOLD"

        last_cross_series = data[data['crossover'] != 0]
        if not last_cross_series.empty:
            last_cross_row = last_cross_series.iloc[-1]
            self.last_signal_change_utc = last_cross_row.name.tz_localize('UTC').isoformat()
            self.signal_price = last_cross_row['close']
            if last_cross_row['position'] == 1: self.current_signal = "BUY"
            elif last_cross_row['position'] == -1: self.current_signal = "SELL"
        else:
            self.last_signal_change_utc = data.index[0].tz_localize('UTC').isoformat()
            self.signal_price = data['close'].iloc[0]

    def _save_status(self):
        """Saves the current strategy state to its JSON file."""
        status = {
            "strategy_name": self.strategy_name,
            "current_signal": self.current_signal,
            "last_signal_change_utc": self.last_signal_change_utc,
            "signal_price": self.signal_price,
            "last_checked_utc": datetime.now(timezone.utc).isoformat()
        }
        try:
            with open(self.status_file_path, 'w', encoding='utf-8') as f:
                json.dump(status, f, indent=4)
        except IOError as e:
            logging.error(f"Failed to write status file: {e}")

    def get_sleep_duration(self) -> float:
        """Calculates seconds to sleep until the next full candle closes."""
        tf_value = int(''.join(filter(str.isdigit, self.timeframe)))
        tf_unit = ''.join(filter(str.isalpha, self.timeframe))

        if tf_unit == 'm': interval_seconds = tf_value * 60
        elif tf_unit == 'h': interval_seconds = tf_value * 3600
        elif tf_unit == 'd': interval_seconds = tf_value * 86400
        else: return 60

        now = datetime.now(timezone.utc)
        timestamp = now.timestamp()

        next_candle_ts = ((timestamp // interval_seconds) + 1) * interval_seconds
        sleep_seconds = (next_candle_ts - timestamp) + 5

        logging.info(f"Next candle closes at {datetime.fromtimestamp(next_candle_ts, tz=timezone.utc)}. "
                     f"Sleeping for {sleep_seconds:.2f} seconds.")
        return sleep_seconds

    def run_logic(self):
        """Main loop: loads data, calculates signals, saves status, and sleeps."""
        logging.info(f"Starting logic loop for {self.coin} on the {self.timeframe} timeframe.")
        while True:
            data = self.load_data()
            if data.empty:
                logging.warning("No data loaded. Waiting 1 minute before retrying...")
                self.current_signal = "NO DATA"
                self._save_status()
                time.sleep(60)
                continue

            self._calculate_signals(data)
            self._save_status()

            last_close = data['close'].iloc[-1]

            # --- Log based on which mode the strategy is running in ---
            if self.fast_ma_period and self.slow_ma_period:
                fast_ma_str = f"{self.fast_ma_value:.4f}" if self.fast_ma_value is not None else "N/A"
                slow_ma_str = f"{self.slow_ma_value:.4f}" if self.slow_ma_value is not None else "N/A"
                logging.info(
                    f"Signal: {self.current_signal} | Price: {last_close:.4f} | "
                    f"Fast SMA({self.fast_ma_period}): {fast_ma_str} | Slow SMA({self.slow_ma_period}): {slow_ma_str}"
                )
            else:
                sma_period = self.fast_ma_period or self.slow_ma_period
                sma_val_str = f"{self.slow_ma_value:.4f}" if self.slow_ma_value is not None else "N/A"
                logging.info(
                    f"Signal: {self.current_signal} | Price: {last_close:.4f} | "
                    f"SMA({sma_period}): {sma_val_str}"
                )

            sleep_time = self.get_sleep_duration()
            time.sleep(sleep_time)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run an SMA Crossover trading strategy.")
    parser.add_argument("--name", required=True, help="The name of the strategy instance from the config.")
    parser.add_argument("--params", required=True, help="A JSON string of the strategy's parameters.")
    parser.add_argument("--log-level", default="normal", choices=['off', 'normal', 'debug'])
    args = parser.parse_args()

    try:
        strategy_params = json.loads(args.params)
        strategy = SmaCrossStrategy(
            strategy_name=args.name,
            params=strategy_params,
            log_level=args.log_level
        )
        strategy.run_logic()
    except KeyboardInterrupt:
        logging.info("Strategy process stopped.")
    except Exception as e:
        logging.error(f"A critical error occurred: {e}")
        sys.exit(1)
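As a quick sanity check on the candle-alignment arithmetic in `get_sleep_duration`, here is a worked sketch with a hypothetical timestamp (the 5-second buffer rationale is an inference, not stated in the code):

from datetime import datetime, timezone

# Assume a '15m' timeframe -> interval_seconds = 15 * 60 = 900.
interval_seconds = 900
timestamp = 1_700_000_000.0  # hypothetical "now" as a UTC epoch value

# Flooring to the interval finds the current candle's open; +1 interval is the next close.
next_candle_ts = ((timestamp // interval_seconds) + 1) * interval_seconds  # 1_700_000_100.0
sleep_seconds = (next_candle_ts - timestamp) + 5                           # 105.0

# The extra 5 seconds is a buffer, presumably so the data fetcher can write the
# freshly closed candle to the database before the strategy reads it.
print(datetime.fromtimestamp(next_candle_ts, tz=timezone.utc), sleep_seconds)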
186
strategy_template.py
Normal file
@ -0,0 +1,186 @@
import argparse
import logging
import sys
import time
import pandas as pd
import sqlite3
import json
import os
from datetime import datetime, timezone, timedelta

from logging_utils import setup_logging


class TradingStrategy:
    """
    A template for a trading strategy that reads data from the SQLite database
    and executes its logic in a loop, running once per candle.
    """

    def __init__(self, strategy_name: str, params: dict, log_level: str):
        self.strategy_name = strategy_name
        self.params = params
        self.coin = params.get("coin", "N/A")
        self.timeframe = params.get("timeframe", "N/A")
        self.db_path = os.path.join("_data", "market_data.db")
        self.status_file_path = os.path.join("_data", f"strategy_status_{self.strategy_name}.json")

        # Strategy state variables
        self.current_signal = "INIT"
        self.last_signal_change_utc = None
        self.signal_price = None
        self.indicator_value = None

        # Load strategy-specific parameters from config
        self.rsi_period = params.get("rsi_period")
        self.short_ma = params.get("short_ma")
        self.long_ma = params.get("long_ma")
        self.sma_period = params.get("sma_period")

        setup_logging(log_level, f"Strategy-{self.strategy_name}")
        logging.info(f"Initializing strategy with parameters: {self.params}")

    def load_data(self) -> pd.DataFrame:
        """Loads historical data, ensuring enough for the longest indicator period."""
        table_name = f"{self.coin}_{self.timeframe}"
        limit = 500
        # Determine the required data limit based on the longest configured indicator
        periods = [p for p in [self.sma_period, self.long_ma, self.rsi_period] if p is not None]
        if periods:
            limit = max(periods) + 50

        try:
            with sqlite3.connect(f"file:{self.db_path}?mode=ro", uri=True) as conn:
                query = f'SELECT * FROM "{table_name}" ORDER BY datetime_utc DESC LIMIT {limit}'
                df = pd.read_sql(query, conn)
                if df.empty: return pd.DataFrame()

                df['datetime_utc'] = pd.to_datetime(df['datetime_utc'])
                df.set_index('datetime_utc', inplace=True)
                df.sort_index(inplace=True)
                return df
        except Exception as e:
            logging.error(f"Failed to load data from table '{table_name}': {e}")
            return pd.DataFrame()

    def _calculate_signals(self, data: pd.DataFrame):
        """
        Analyzes historical data to find the last signal crossover event.
        This method should be expanded to handle different strategy types.
        """
        if self.sma_period:
            if len(data) < self.sma_period + 1:
                self.current_signal = "INSUFFICIENT DATA"
                return

            data['sma'] = data['close'].rolling(window=self.sma_period).mean()
            self.indicator_value = data['sma'].iloc[-1]

            data['position'] = 0
            data.loc[data['close'] > data['sma'], 'position'] = 1
            data.loc[data['close'] < data['sma'], 'position'] = -1
            data['crossover'] = data['position'].diff()

            last_position = data['position'].iloc[-1]
            if last_position == 1: self.current_signal = "BUY"
            elif last_position == -1: self.current_signal = "SELL"
            else: self.current_signal = "HOLD"

            last_cross_series = data[data['crossover'] != 0]
            if not last_cross_series.empty:
                last_cross_row = last_cross_series.iloc[-1]
                self.last_signal_change_utc = last_cross_row.name.tz_localize('UTC').isoformat()
                self.signal_price = last_cross_row['close']
                if last_cross_row['position'] == 1: self.current_signal = "BUY"
                elif last_cross_row['position'] == -1: self.current_signal = "SELL"
            else:
                self.last_signal_change_utc = data.index[0].tz_localize('UTC').isoformat()
                self.signal_price = data['close'].iloc[0]

        elif self.rsi_period:
            logging.info(f"RSI logic not implemented for period {self.rsi_period}.")
            self.current_signal = "NOT IMPLEMENTED"

        elif self.short_ma and self.long_ma:
            logging.info(f"MA Cross logic not implemented for {self.short_ma}/{self.long_ma}.")
            self.current_signal = "NOT IMPLEMENTED"

    def _save_status(self):
        """Saves the current strategy state to its JSON file."""
        status = {
            "strategy_name": self.strategy_name,
            "current_signal": self.current_signal,
            "last_signal_change_utc": self.last_signal_change_utc,
            "signal_price": self.signal_price,
            "last_checked_utc": datetime.now(timezone.utc).isoformat()
        }
        try:
            with open(self.status_file_path, 'w', encoding='utf-8') as f:
                json.dump(status, f, indent=4)
        except IOError as e:
            logging.error(f"Failed to write status file: {e}")

    def get_sleep_duration(self) -> float:
        """Calculates seconds to sleep until the next full candle closes."""
        if not self.timeframe: return 60
        tf_value = int(''.join(filter(str.isdigit, self.timeframe)))
        tf_unit = ''.join(filter(str.isalpha, self.timeframe))

        if tf_unit == 'm': interval_seconds = tf_value * 60
        elif tf_unit == 'h': interval_seconds = tf_value * 3600
        elif tf_unit == 'd': interval_seconds = tf_value * 86400
        else: return 60

        now = datetime.now(timezone.utc)
        timestamp = now.timestamp()

        next_candle_ts = ((timestamp // interval_seconds) + 1) * interval_seconds
        sleep_seconds = (next_candle_ts - timestamp) + 5

        logging.info(f"Next candle closes at {datetime.fromtimestamp(next_candle_ts, tz=timezone.utc)}. "
                     f"Sleeping for {sleep_seconds:.2f} seconds.")
        return sleep_seconds

    def run_logic(self):
        """Main loop: loads data, calculates signals, saves status, and sleeps."""
        logging.info(f"Starting main logic loop for {self.coin} on the {self.timeframe} timeframe.")
        while True:
            data = self.load_data()
            if data.empty:
                logging.warning("No data loaded. Waiting 1 minute before retrying...")
                self.current_signal = "NO DATA"
                self._save_status()
                time.sleep(60)
                continue

            self._calculate_signals(data)
            self._save_status()

            last_close = data['close'].iloc[-1]
            indicator_val_str = f"{self.indicator_value:.4f}" if self.indicator_value is not None else "N/A"
            logging.info(f"Signal: {self.current_signal} | Price: {last_close:.4f} | Indicator: {indicator_val_str}")

            sleep_time = self.get_sleep_duration()
            time.sleep(sleep_time)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run a trading strategy.")
    parser.add_argument("--name", required=True, help="The name of the strategy instance from the config.")
    parser.add_argument("--params", required=True, help="A JSON string of the strategy's parameters.")
    parser.add_argument("--log-level", default="normal", choices=['off', 'normal', 'debug'])
    args = parser.parse_args()

    try:
        strategy_params = json.loads(args.params)
        strategy = TradingStrategy(
            strategy_name=args.name,
            params=strategy_params,
            log_level=args.log_level
        )
        strategy.run_logic()
    except KeyboardInterrupt:
        logging.info("Strategy process stopped.")
    except Exception as e:
        logging.error(f"A critical error occurred: {e}")
        sys.exit(1)
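For illustration, a minimal sketch of how another process (for example, the dashboard or the trade executor below) could read the status file that `_save_status` writes. The file-name pattern and keys mirror the code above; the sample strategy name is hypothetical:

import json
import os

strategy_name = "sma_cross_btc_1h"  # hypothetical instance name
status_path = os.path.join("_data", f"strategy_status_{strategy_name}.json")

with open(status_path, "r", encoding="utf-8") as f:
    status = json.load(f)

# Keys written by _save_status(): strategy_name, current_signal,
# last_signal_change_utc, signal_price, last_checked_utc.
print(status["current_signal"], status["signal_price"], status["last_checked_utc"])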
200
trade_executor.py
Normal file
@ -0,0 +1,200 @@
import argparse
import logging
import os
import sys
import json
import time
from datetime import datetime

from eth_account import Account
from hyperliquid.exchange import Exchange
from hyperliquid.info import Info
from hyperliquid.utils import constants
from dotenv import load_dotenv

from logging_utils import setup_logging
from trade_log import log_trade

# Load environment variables from a .env file
load_dotenv()


class TradeExecutor:
    """
    Monitors strategy signals and executes trades using a multi-agent,
    multi-strategy position management system. Each strategy's position is
    tracked independently.
    """

    def __init__(self, log_level: str):
        setup_logging(log_level, 'TradeExecutor')

        self.vault_address = os.environ.get("MAIN_WALLET_ADDRESS")
        if not self.vault_address:
            logging.error("MAIN_WALLET_ADDRESS not set.")
            sys.exit(1)

        self.info = Info(constants.MAINNET_API_URL, skip_ws=True)
        self.exchanges = self._load_agents()
        if not self.exchanges:
            logging.error("No trading agents found in .env file.")
            sys.exit(1)

        strategy_config_path = os.path.join("_data", "strategies.json")
        try:
            with open(strategy_config_path, 'r') as f:
                self.strategy_configs = {name: config for name, config in json.load(f).items() if config.get("enabled")}
            logging.info(f"Loaded {len(self.strategy_configs)} enabled strategies.")
        except (FileNotFoundError, json.JSONDecodeError) as e:
            logging.error(f"Could not load strategies from '{strategy_config_path}': {e}")
            sys.exit(1)

        self.status_file_path = os.path.join("_logs", "trade_executor_status.json")
        self.managed_positions_path = os.path.join("_data", "executor_managed_positions.json")
        self.managed_positions = self._load_managed_positions()

    def _load_agents(self) -> dict:
        """Discovers and initializes agents from environment variables."""
        exchanges = {}
        logging.info("Discovering agents from environment variables...")
        for env_var, private_key in os.environ.items():
            agent_name = None
            if env_var == "AGENT_PRIVATE_KEY":
                agent_name = "default"
            elif env_var.endswith("_AGENT_PK"):
                agent_name = env_var.replace("_AGENT_PK", "").lower()

            if agent_name and private_key:
                try:
                    agent_account = Account.from_key(private_key)
                    exchanges[agent_name] = Exchange(agent_account, constants.MAINNET_API_URL, account_address=self.vault_address)
                    logging.info(f"Initialized agent '{agent_name}' with address: {agent_account.address}")
                except Exception as e:
                    logging.error(f"Failed to initialize agent '{agent_name}': {e}")
        return exchanges

    def _load_managed_positions(self) -> dict:
        """Loads the state of which strategy manages which position."""
        if os.path.exists(self.managed_positions_path):
            try:
                with open(self.managed_positions_path, 'r') as f:
                    logging.info("Loading existing managed positions state.")
                    return json.load(f)
            except (IOError, json.JSONDecodeError):
                logging.warning("Could not read managed positions file. Starting fresh.")
        return {}

    def _save_managed_positions(self):
        """Saves the current state of managed positions."""
        try:
            with open(self.managed_positions_path, 'w') as f:
                json.dump(self.managed_positions, f, indent=4)
        except IOError as e:
            logging.error(f"Failed to save managed positions state: {e}")

    def _save_executor_status(self, perpetuals_state, spot_state, all_market_contexts):
        """Saves the current balances and open positions to a live status file."""
        # This function is correct and does not need changes.
        pass

    def run(self):
        """The main execution loop with advanced position management."""
        logging.info("Starting Trade Executor loop...")
        while True:
            try:
                perpetuals_state = self.info.user_state(self.vault_address)
                open_positions_api = {pos['position'].get('coin'): pos['position'] for pos in perpetuals_state.get('assetPositions', []) if float(pos.get('position', {}).get('szi', 0)) != 0}

                for name, config in self.strategy_configs.items():
                    coin = config['parameters'].get('coin')
                    size = config['parameters'].get('size')
                    # --- ADDED: Load leverage parameters from config ---
                    leverage_long = config['parameters'].get('leverage_long')
                    leverage_short = config['parameters'].get('leverage_short')

                    status_file = os.path.join("_data", f"strategy_status_{name}.json")
                    if not os.path.exists(status_file): continue
                    with open(status_file, 'r') as f: status = json.load(f)

                    desired_signal = status.get('current_signal')
                    current_position = self.managed_positions.get(name)

                    agent_name = config.get("agent", "default").lower()
                    exchange_to_use = self.exchanges.get(agent_name)
                    if not exchange_to_use:
                        logging.error(f"[{name}] Agent '{agent_name}' not found. Skipping trade.")
                        continue

                    # --- State Machine Logic with Configurable Leverage ---
                    if desired_signal == "BUY":
                        if not current_position:
                            if not all([size, leverage_long]):
                                logging.error(f"[{name}] 'size' or 'leverage_long' not defined. Skipping.")
                                continue

                            logging.warning(f"[{name}] ACTION: Open LONG for {coin} with {leverage_long}x leverage.")
                            exchange_to_use.update_leverage(int(leverage_long), coin)
                            exchange_to_use.market_open(coin, True, size, None, 0.01)
                            self.managed_positions[name] = {"coin": coin, "side": "long", "size": size}
                            log_trade(strategy=name, coin=coin, action="OPEN_LONG", price=status.get('signal_price', 0), size=size, signal=desired_signal)

                        elif current_position['side'] == 'short':
                            if not all([size, leverage_long]):
                                logging.error(f"[{name}] 'size' or 'leverage_long' not defined. Skipping.")
                                continue

                            logging.warning(f"[{name}] ACTION: Close SHORT and open LONG for {coin} with {leverage_long}x leverage.")
                            exchange_to_use.update_leverage(int(leverage_long), coin)
                            exchange_to_use.market_open(coin, True, current_position['size'] + size, None, 0.01)
                            self.managed_positions[name] = {"coin": coin, "side": "long", "size": size}
                            log_trade(strategy=name, coin=coin, action="CLOSE_SHORT_&_REVERSE", price=status.get('signal_price', 0), size=size, signal=desired_signal)

                    elif desired_signal == "SELL":
                        if not current_position:
                            if not all([size, leverage_short]):
                                logging.error(f"[{name}] 'size' or 'leverage_short' not defined. Skipping.")
                                continue

                            logging.warning(f"[{name}] ACTION: Open SHORT for {coin} with {leverage_short}x leverage.")
                            exchange_to_use.update_leverage(int(leverage_short), coin)
                            exchange_to_use.market_open(coin, False, size, None, 0.01)
                            self.managed_positions[name] = {"coin": coin, "side": "short", "size": size}
                            log_trade(strategy=name, coin=coin, action="OPEN_SHORT", price=status.get('signal_price', 0), size=size, signal=desired_signal)

                        elif current_position['side'] == 'long':
                            if not all([size, leverage_short]):
                                logging.error(f"[{name}] 'size' or 'leverage_short' not defined. Skipping.")
                                continue

                            logging.warning(f"[{name}] ACTION: Close LONG and open SHORT for {coin} with {leverage_short}x leverage.")
                            exchange_to_use.update_leverage(int(leverage_short), coin)
                            exchange_to_use.market_open(coin, False, current_position['size'] + size, None, 0.01)
                            self.managed_positions[name] = {"coin": coin, "side": "short", "size": size}
                            log_trade(strategy=name, coin=coin, action="CLOSE_LONG_&_REVERSE", price=status.get('signal_price', 0), size=size, signal=desired_signal)

                    elif desired_signal == "FLAT":
                        if current_position:
                            logging.warning(f"[{name}] ACTION: Close {current_position['side']} position for {coin}.")
                            is_buy = current_position['side'] == 'short'
                            exchange_to_use.market_open(coin, is_buy, current_position['size'], None, 0.01)
                            del self.managed_positions[name]
                            log_trade(strategy=name, coin=coin, action=f"CLOSE_{current_position['side'].upper()}", price=status.get('signal_price', 0), size=current_position['size'], signal=desired_signal)

                self._save_managed_positions()

            except Exception as e:
                logging.error(f"An error occurred in the main executor loop: {e}")

            time.sleep(15)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run the Trade Executor.")
    parser.add_argument("--log-level", default="normal", choices=['off', 'normal', 'debug'])
    args = parser.parse_args()

    executor = TradeExecutor(log_level=args.log_level)
    try:
        executor.run()
    except KeyboardInterrupt:
        logging.info("Trade Executor stopped.")
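A small sketch of the env-var naming rule `_load_agents` applies, and how it lines up with the `agent` field in strategies.json (key names and values here are placeholders; the fallback to "default" mirrors `config.get("agent", "default")` in `run()`):

# Hypothetical .env entries (never commit real keys):
#   AGENT_PRIVATE_KEY=0x...   -> agent name "default"
#   SCALPER_AGENT_PK=0x...    -> agent name "scalper"
env_keys = ["AGENT_PRIVATE_KEY", "SCALPER_AGENT_PK", "MAIN_WALLET_ADDRESS"]
agent_names = [
    "default" if k == "AGENT_PRIVATE_KEY" else k.replace("_AGENT_PK", "").lower()
    for k in env_keys
    if k == "AGENT_PRIVATE_KEY" or k.endswith("_AGENT_PK")
]
print(agent_names)  # ['default', 'scalper'] -- MAIN_WALLET_ADDRESS is not an agent key

A strategy configured with "agent": "scalper" is therefore routed to the Exchange object built from SCALPER_AGENT_PK, while strategies without an "agent" field use the default agent.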
55
trade_log.py
Normal file
@ -0,0 +1,55 @@
import os
import csv
from datetime import datetime, timezone
import threading

# A lock to prevent race conditions if multiple strategies log at once in the future.
log_lock = threading.Lock()


def log_trade(strategy: str, coin: str, action: str, price: float, size: float, signal: str, pnl: float = 0.0):
    """
    Appends a record of a trade action to a persistent CSV log file.

    Args:
        strategy (str): The name of the strategy that triggered the action.
        coin (str): The coin being traded (e.g., 'BTC').
        action (str): The action taken (e.g., 'OPEN_LONG', 'CLOSE_LONG').
        price (float): The execution price of the trade.
        size (float): The size of the trade.
        signal (str): The signal that triggered the trade (e.g., 'BUY', 'SELL').
        pnl (float, optional): The realized profit and loss for closing trades. Defaults to 0.0.
    """
    log_dir = "_logs"
    file_path = os.path.join(log_dir, "trade_history.csv")

    # Ensure the logs directory exists
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)

    # Define the headers for the CSV file
    headers = ["timestamp_utc", "strategy", "coin", "action", "price", "size", "signal", "pnl"]

    # Check if the file needs a header
    file_exists = os.path.isfile(file_path)

    with log_lock:
        try:
            with open(file_path, 'a', newline='', encoding='utf-8') as f:
                writer = csv.DictWriter(f, fieldnames=headers)

                if not file_exists:
                    writer.writeheader()

                writer.writerow({
                    "timestamp_utc": datetime.now(timezone.utc).isoformat(),
                    "strategy": strategy,
                    "coin": coin,
                    "action": action,
                    "price": price,
                    "size": size,
                    "signal": signal,
                    "pnl": pnl
                })
        except IOError as e:
            # If logging fails, print an error to the main console as a fallback.
            print(f"CRITICAL: Failed to write to trade log file: {e}")
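For example, a call like the following (all values illustrative) appends one row to _logs/trade_history.csv, writing the header on first use:

from trade_log import log_trade

# Hypothetical fill; in the executor, price comes from the strategy's signal_price.
log_trade(strategy="sma_cross_btc_1h", coin="BTC", action="OPEN_LONG",
          price=65000.0, size=0.01, signal="BUY")

# Resulting CSV, roughly:
# timestamp_utc,strategy,coin,action,price,size,signal,pnl
# 2024-01-01T00:00:00+00:00,sma_cross_btc_1h,BTC,OPEN_LONG,65000.0,0.01,BUY,0.0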