Compare commits
14 Commits
optymaliza... (88e1dfff55)
| Author | SHA1 | Date |
|---|---|---|
| | 88e1dfff55 | |
| | d412099753 | |
| | 048affb163 | |
| | 1cf05a5b69 | |
| | e7c7158c68 | |
| | a30f75fae0 | |
| | 6f822483cc | |
| | fa5b06d1f6 | |
| | 2a3cf5cacf | |
| | 2269fbed9e | |
| | 7ef6bd4d14 | |
| | a69581a07b | |
| | 76c993ce76 | |
| | 903f4ff434 | |
45
.gitignore
vendored
@@ -1,45 +0,0 @@
# --- Secrets & Environment ---
# Ignore local environment variables
.env
# Ignore virtual environment folders
.venv/
venv/

# --- Python ---
# Ignore cache files
__pycache__/
*.py[cod]

# --- Data & Logs ---
# Ignore all database files (db, write-ahead log, shared memory)
_data/*.db
_data/*.db-shm
_data/*.db-wal

# Ignore all JSON files in the data folder
_data/*.json

# Ignore all log files
_logs/

# --- SDK ---
# Ignore all contents of the sdk directory
sdk/

# --- Other ---
# Ignore custom agents directory
agents/

# Ignore temporary files and examples
.temp/

# Ignore Jekyll files
.nojekyll

# --- Editor & OS Files ---
# Ignore VSCode, JetBrains, and macOS/Windows system files
.vscode/
.idea/
.DS_Store
Thumbs.db
.opencode/
@@ -1,117 +0,0 @@
# Model Comparison: Session Summary Styles

## OpenCode Zen (Bigpickle) Style Example

```markdown
## Session Summary

**Date:** 2025-11-10

**Objective(s):**
Fix urllib3 SSL compatibility warning and implement sessionsummary agent.

**Key Accomplishments:**
* Resolved NotOpenSSLWarning by downgrading urllib3 from 2.5.0 to 1.26.20
* Updated requirements.txt with compatible dependency version
* Created sessionsummary agent in .opencode/agent/ following OpenCode.ai specifications
* Configured agent with proper permissions and tool restrictions

**Decisions Made:**
* Selected urllib3 downgrade over SSL environment upgrade for system stability
* Implemented OpenCode.ai markdown agent instead of custom Python solution
* Set bash permission to "deny" for security constraints

**Key Files Modified:**
* `requirements.txt`
* `GEMINI.md`
* `.opencode/agent/sessionsummary.md`

**Next Steps/Open Questions:**
* Validate trading bot functionality post-SSL fix
* Test sessionsummary agent integration with OpenCode.ai interface
```

## Gemini 2.5 Pro Style Example

```markdown
## Session Summary

**Date:** 2025-11-10

**Objective(s):**
We worked on resolving a critical SSL compatibility issue with urllib3 that was preventing smooth development, and then created a proper session summary system following OpenCode.ai guidelines to better track our project's progress.

**Key Accomplishments:**
* Successfully diagnosed and fixed the NotOpenSSLWarning that was appearing when using urllib3 v2.5.0 with LibreSSL 2.8.3 on macOS by strategically downgrading to urllib3 v1.26.20, which maintains full compatibility while preserving all required functionality.
* Updated the requirements.txt file to lock in the compatible version, preventing future dependency conflicts and ensuring consistent builds across different environments.
* Created a comprehensive sessionsummary agent using the OpenCode.ai framework, placing it in the proper .opencode/agent/ directory structure with appropriate permissions, tool configurations, and detailed documentation.
* Properly configured the agent with security-conscious settings, denying bash access while allowing write/edit capabilities for documentation updates.

**Decisions Made:**
* Made the strategic decision to downgrade urllib3 rather than attempting to upgrade the system SSL environment, as this approach provides better stability and avoids potential system-wide compatibility issues.
* Chose to follow OpenCode.ai agent guidelines instead of building a custom Python implementation, ensuring better integration with the development ecosystem and reducing maintenance overhead.
* Implemented a security-first approach by configuring restrictive permissions that prevent accidental system modifications while still allowing the agent to fulfill its documentation purpose.

**Key Files Modified:**
* `requirements.txt` - Updated urllib3 version from 2.5.0 to 1.26.20
* `GEMINI.md` - Added structured session summary following the specified format
* `.opencode/agent/sessionsummary.md` - Created comprehensive agent configuration with proper OpenCode.ai formatting

**Next Steps/Open Questions:**
* Need to thoroughly test the trading bot functionality to ensure the urllib3 downgrade doesn't introduce any regressions in API connectivity or data processing.
* Should verify that the sessionsummary agent works correctly within the OpenCode.ai interface and can be invoked properly using @mentions.
* Consider establishing a regular workflow for session documentation to maintain consistent project tracking.
```

## Key Differences Analysis

### **Technical Precision vs. Contextual Richness**

| Aspect | OpenCode Zen (Bigpickle) | Gemini 2.5 Pro |
|---------|-------------------------|------------------|
| **Length** | Concise, bullet-point focused | More detailed, narrative style |
| **Technical Detail** | High precision, version-specific | High precision with explanatory context |
| **Decision Rationale** | Brief, action-oriented | Comprehensive, explains "why" |
| **Language Style** | Technical, direct | Descriptive, explanatory |
| **Context** | Assumes technical audience | Provides background and implications |

### **When to Choose Each Model**

#### **OpenCode Zen (Bigpickle) - Best For:**
- ✅ **Daily quick summaries** when you need fast, accurate documentation
- ✅ **Technical teams** who prefer concise, scannable information
- ✅ **API documentation** where precision matters more than narrative
- ✅ **Time-sensitive sessions** where speed is important
- ✅ **Highly technical work** where context is already understood

#### **Gemini 2.5 Pro - Best For:**
- ✅ **Complex sessions** with multiple decision points
- ✅ **Learning/documentation** where context helps future understanding
- ✅ **Team collaboration** where others need full background
- ✅ **Strategic planning** where rationale is crucial
- ✅ **Knowledge transfer** when onboarding new developers

### **Practical Impact on Your Use Case**

For your **Hyperliquid trading bot project**, consider:

1. **Quick bug fixes**: Use OpenCode Zen for fast, precise documentation
2. **Strategy development**: Use Gemini 2.5 Pro for detailed decision tracking
3. **Performance optimization**: Gemini 2.5 Pro to document complex trade-offs
4. **Daily maintenance**: OpenCode Zen for efficient progress tracking
5. **Architecture changes**: Gemini 2.5 Pro for comprehensive rationale

### **Recommendation**

**Use OpenCode Zen (Bigpickle) as your default** for:
- Day-to-day development
- Bug fixes and small features
- Technical documentation

**Switch to Gemini 2.5 Pro for:**
- Major architectural decisions
- Complex problem-solving sessions
- Strategic planning
- When creating comprehensive documentation

The sessionsummary agent will work with both models - just expect different levels of detail and narrative style based on the model's strengths!
246
GEMINI.md
@@ -1,246 +0,0 @@
# Project Overview

This project is a sophisticated, multi-process automated trading bot for the Hyperliquid decentralized exchange. It is written in Python and uses a modular architecture to separate concerns like data fetching, strategy execution, and trade management.

The bot uses a high-performance data pipeline with SQLite for storing market data. Trading strategies are defined and configured in a JSON file, allowing for easy adjustments without code changes. The system supports multiple, independent trading agents for risk segregation and PNL tracking. A live terminal dashboard provides real-time monitoring of market data, strategy signals, and the status of all background processes.

## Building and Running

### 1. Setup

1. **Create and activate a virtual environment:**
   ```bash
   # For Windows
   python -m venv .venv
   .\.venv\Scripts\activate

   # For macOS/Linux
   python3 -m venv .venv
   source .venv/bin/activate
   ```

2. **Install dependencies:**
   ```bash
   pip install -r requirements.txt
   ```

3. **Configure environment variables:**
   Create a `.env` file in the root of the project (you can copy `.env.example`) and add your Hyperliquid wallet private key and any agent keys.

4. **Configure strategies:**
   Edit `_data/strategies.json` to enable and configure your desired trading strategies.

### 2. Running the Bot

To run the main application, which includes the dashboard and all background processes, execute the following command:

```bash
python main_app.py
```

## Development Conventions

* **Modularity:** The project is divided into several scripts, each with a specific responsibility (e.g., `data_fetcher.py`, `trade_executor.py`).
* **Configuration-driven:** Strategies are defined in `_data/strategies.json`, not hardcoded. This allows for easy management of strategies.
* **Multi-processing:** The application uses the `multiprocessing` module to run different components in parallel for performance and stability.
* **Strategies:** Custom strategies should inherit from the `BaseStrategy` class (defined in `strategies/base_strategy.py`) and implement the `calculate_signals` method, as in the sketch below.
* **Documentation:** The `WIKI/` directory contains detailed documentation for the project. Start with `WIKI/SUMMARY.md`.
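A minimal sketch of that convention; the `BaseStrategy` surface beyond `calculate_signals` and the candle format are assumptions, not verified against this repository:

```python
# Minimal sketch, assuming BaseStrategy only requires calculate_signals()
# and that candles arrive as a pandas DataFrame with a "close" column.
import pandas as pd

from strategies.base_strategy import BaseStrategy


class ExampleSmaStrategy(BaseStrategy):
    """Toy example: BUY when the last close is above its 20-bar mean."""

    def calculate_signals(self, candles: pd.DataFrame) -> str:
        sma = candles["close"].rolling(20).mean()
        if candles["close"].iloc[-1] > sma.iloc[-1]:
            return "BUY"
        return "SELL"
```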
## Session Summary

**Date:** 2025-11-10

**Objective(s):**
Fix urllib3 SSL compatibility warning and create sessionsummary agent following OpenCode.ai guidelines

**Key Accomplishments:**
* Resolved NotOpenSSLWarning by downgrading urllib3 from 2.5.0 to 1.26.20
* Updated requirements.txt to prevent future SSL compatibility issues
* Created sessionsummary agent in .opencode/agent/ following OpenCode.ai specifications
* Removed incorrect Python implementation and created proper markdown agent configuration

**Decisions Made:**
* Chose to downgrade urllib3 instead of upgrading SSL environment for stability
* Followed OpenCode.ai agent guidelines instead of creating custom Python implementation
* Configured sessionsummary as subagent with proper permissions and tools

**Key Files Modified:**
* `requirements.txt`
* `GEMINI.md`
* `.opencode/agent/sessionsummary.md`

**Next Steps/Open Questions:**
* Test trading bot functionality after SSL fix to ensure no regressions
* Integrate sessionsummary agent into regular development workflow
* Add .opencode/ to .gitignore if not already present

## Session Summary

**Date:** 2025-11-11

**Objective(s):**
Start new Gemini session and organize project files by creating .temp folder for examples and temporary files

**Key Accomplishments:**
* Created .temp folder for organizing examples and temporary files
* Updated .gitignore to include .temp/ directory
* Moved model_comparison_examples.md to .temp folder for better organization
* Established file management practices for future development

**Decisions Made:**
* Chose to use .temp folder instead of mixing examples with main project files
* Added .temp to .gitignore to prevent accidental commits of temporary files
* Followed user instruction to organize project structure for better maintainability

**Key Files Modified:**
* `.gitignore`
* `.temp/` (created)
* `model_comparison_examples.md` (moved to .temp/)

**Next Steps/Open Questions:**
* Continue organizing any other example or temporary files into .temp folder
* Maintain consistent file organization practices in future development
* Consider creating additional organizational directories if needed

## Session Summary

**Date:** 2025-11-11

**Objective(s):**
Fix DashboardDataFetcher path resolution error causing file operation failures

**Key Accomplishments:**
* Identified root cause of file path error in dashboard_data_fetcher.py subprocess execution
* Fixed path resolution by using absolute paths instead of relative paths
* Added os.makedirs() call to ensure _logs directory exists before file operations
* Tested fix and confirmed DashboardDataFetcher now works correctly
* Committed and pushed fix to remote repository

**Decisions Made:**
* Used os.path.dirname(os.path.abspath(__file__)) to get correct project root
* Ensured backward compatibility while fixing the path resolution issue
* Maintained atomic file write pattern for data integrity

**Key Files Modified:**
* `dashboard_data_fetcher.py`
* `GEMINI.md`

**Next Steps/Open Questions:**
* Monitor DashboardDataFetcher to ensure no further path-related errors occur
* Consider reviewing other subprocess scripts for similar path resolution issues
* Test main_app.py to ensure dashboard displays data correctly

## Session Summary

**Date:** 2025-11-11

**Objective(s):**
Debug and fix DashboardDataFetcher path resolution error causing file operation failures

**Key Accomplishments:**
* Identified root cause of file path error in dashboard_data_fetcher.py subprocess execution
* Fixed path resolution by using absolute paths instead of relative paths
* Added os.makedirs() call to ensure _logs directory exists before file operations
* Tested fix and confirmed DashboardDataFetcher now works correctly
* Committed and pushed fix to remote repository
* Organized project files with .temp folder for better structure

**Decisions Made:**
* Used os.path.dirname(os.path.abspath(__file__)) to get correct project root (see the sketch after this summary)
* Ensured backward compatibility while fixing path resolution issue
* Maintained atomic file write pattern for data integrity
* Added proper directory existence checks to prevent runtime errors

**Key Files Modified:**
* `dashboard_data_fetcher.py`
* `GEMINI.md`
* `.gitignore`
* `.temp/` (created)

**Next Steps/Open Questions:**
* Monitor DashboardDataFetcher to ensure no further path-related errors occur
* Consider reviewing other subprocess scripts for similar path resolution issues
* Test main_app.py to ensure dashboard displays data correctly
* Continue improving project organization and file management practices
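A minimal sketch of the path-resolution pattern described in these summaries; the status file name is hypothetical:

```python
# Sketch of the fix described above; "dashboard_status.json" is a hypothetical name.
import os

# Resolve paths from this file's location rather than the current working
# directory, so the script also works when launched as a subprocess.
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
LOGS_DIR = os.path.join(PROJECT_ROOT, "_logs")

# Ensure the _logs directory exists before any file operations.
os.makedirs(LOGS_DIR, exist_ok=True)

status_path = os.path.join(LOGS_DIR, "dashboard_status.json")
```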
---

# Project Review and Recommendations

This review provides an analysis of the current state of the automated trading bot project, proposes specific code improvements, and identifies files that appear to be unused or are one-off utilities that could be reorganized.

The project is a well-structured, multi-process Python application for crypto trading. It has a clear separation of concerns between data fetching, strategy execution, and trade management. The use of `multiprocessing` and a centralized `main_app.py` orchestrator is a solid architectural choice.

The following sections detail recommendations for improving configuration management, code structure, and robustness, along with a list of files recommended for cleanup.

---

## Proposed Code Changes

### 1. Centralize Configuration

- **Issue:** Key configuration variables like `WATCHED_COINS` and `required_timeframes` are hardcoded in `main_app.py`. This makes them difficult to change without modifying the source code.
- **Proposal:**
  - Create a central configuration file, e.g., `_data/config.json`.
  - Move `WATCHED_COINS` and `required_timeframes` into this new file.
  - Load this configuration in `main_app.py` at startup (see the sketch after this list).
- **Benefit:** Decouples configuration from code, making the application more flexible and easier to manage.
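A minimal sketch of the proposed loading step, assuming hypothetical key names in `_data/config.json`:

```python
# Hypothetical _data/config.json loader; the key names are illustrative.
# The file might contain, e.g.:
# {"watched_coins": ["BTC", "ETH"], "required_timeframes": ["1m", "5m", "1h", "1d"]}
import json

with open("_data/config.json", "r", encoding="utf-8") as f:
    config = json.load(f)

WATCHED_COINS = config["watched_coins"]
required_timeframes = config["required_timeframes"]
```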
### 2. Refactor `main_app.py` for Clarity

- **Issue:** `main_app.py` is long and handles multiple responsibilities: process orchestration, dashboard rendering, and data reading.
- **Proposal:**
  - **Abstract Process Management:** The functions for running subprocesses (e.g., `run_live_candle_fetcher`, `run_resampler_job`) contain repetitive logic for logging, shutdown handling, and process looping. This could be abstracted into a generic `ProcessRunner` class (see the sketch after this list).
  - **Create a Dashboard Class:** The complex dashboard rendering logic could be moved into a separate `Dashboard` class to improve separation of concerns and make the main application loop cleaner.
- **Benefit:** Improves code readability, reduces duplication, and makes the application easier to maintain and extend.
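A sketch of the proposed `ProcessRunner` abstraction, assuming each subprocess is a plain callable; the interface is illustrative, not existing code:

```python
# Illustrative ProcessRunner sketch; the worker interface is an assumption.
import logging
import multiprocessing
import time


class ProcessRunner:
    """Wraps one background worker with shared launch/log/restart logic."""

    def __init__(self, name: str, target, restart_delay: float = 5.0):
        self.name = name
        self.target = target              # callable executed in a child process
        self.restart_delay = restart_delay
        self.process = None

    def start(self) -> None:
        logging.info("starting %s", self.name)
        self.process = multiprocessing.Process(target=self.target, name=self.name)
        self.process.start()

    def ensure_alive(self) -> None:
        # Restart the child if it exited, after a short back-off.
        if self.process is not None and not self.process.is_alive():
            logging.warning("%s exited; restarting", self.name)
            time.sleep(self.restart_delay)
            self.start()
```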
### 3. Improve Project Structure

- **Issue:** The root directory is cluttered with numerous Python scripts, making it difficult to distinguish between core application files, utility scripts, and old/example files.
- **Proposal:**
  - Create a `scripts/` directory and move all one-off utility and maintenance scripts into it.
  - Consider creating a `src/` or `app/` directory to house the core application source code (`main_app.py`, `trade_executor.py`, etc.), separating it clearly from configuration, data, and documentation.
- **Benefit:** A cleaner, more organized project structure that is easier for new developers to understand.

### 4. Enhance Robustness and Error Handling

- **Issue:** The agent loading in `trade_executor.py` relies on discovering environment variables by a naming convention (`_AGENT_PK`). This is clever but can be brittle if environment variables are named incorrectly.
- **Proposal:**
  - Explicitly define the agent names and their corresponding environment variable keys in the proposed `_data/config.json` file (see the sketch below). The `trade_executor` would then load only the agents specified in the configuration.
- **Benefit:** Makes agent configuration more explicit and less prone to errors from stray environment variables.
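A sketch of the explicit mapping and its consumption; the `"agents"` key and the environment variable names are assumptions:

```python
# Hypothetical explicit agent mapping; key and variable names are illustrative.
# _data/config.json might contain:
# {"agents": {"scalper": "SCALPER_AGENT_PK", "swing": "SWING_AGENT_PK"}}
import json
import os

with open("_data/config.json", "r", encoding="utf-8") as f:
    config = json.load(f)

agent_keys = {}
for name, env_key in config.get("agents", {}).items():
    pk = os.environ.get(env_key)
    if pk is None:
        raise RuntimeError(f"Missing environment variable {env_key} for agent {name!r}")
    agent_keys[name] = pk  # trade_executor would wrap these in Exchange objects
```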
---

## Identified Unused/Utility Files

The following files were identified as likely being unused by the core application, being obsolete, or serving as one-off utilities. It is recommended to **move them to a `scripts/` directory** or **delete them** if they are obsolete.

### Obsolete / Old Versions:
- `data_fetcher_old.py`
- `market_old.py`
- `base_strategy.py` (The one in the root directory; the one in `strategies/` is used).

### One-Off Utility Scripts (Recommend moving to `scripts/`):
- `!migrate_to_sqlite.py`
- `import_csv.py`
- `del_market_cap_tables.py`
- `fix_timestamps.py`
- `list_coins.py`
- `create_agent.py`

### Examples / Unused Code:
- `basic_ws.py` (Appears to be an example file).
- `backtester.py`
- `strategy_sma_cross.py` (A strategy file in the root, not in the `strategies` folder).
- `strategy_template.py`

### Standalone / Potentially Unused Core Files:
The following files seem to have their logic already integrated into the main multi-process application. They might be remnants of a previous architecture and may not be needed as standalone scripts.
- `address_monitor.py`
- `position_monitor.py`
- `trade_log.py`
- `wallet_data.py`
- `whale_tracker.py`

### Data / Log Files (Recommend archiving or deleting):
- `hyperliquid_wallet_data_*.json` (These appear to be backups or logs).
88
README.md
Normal file
@@ -0,0 +1,88 @@
# Automated Crypto Trading Bot

This project is a sophisticated, multi-process automated trading bot designed to interact with the Hyperliquid decentralized exchange. It features a robust data pipeline, a flexible strategy engine, multi-agent trade execution, and a live terminal dashboard for real-time monitoring.

<!-- It's a good idea to take a screenshot of your dashboard and upload it to a service like Imgur to include here -->

## Features

* **Multi-Process Architecture**: Core components (data fetching, trading, strategies) run in parallel processes for maximum performance and stability.
* **Comprehensive Data Pipeline**:
    * Live price feeds for all assets.
    * Historical candle data collection for any coin and timeframe.
    * Historical market cap data fetching from the CoinGecko API.
* **High-Performance Database**: Uses SQLite with pandas for fast, indexed storage and retrieval of all market data.
* **Configuration-Driven Strategies**: Trading strategies are defined and managed in a simple JSON file (`_data/strategies.json`), allowing for easy configuration without code changes.
* **Multi-Agent Trading**: Supports multiple, independent trading agents for advanced risk segregation and PNL tracking.
* **Live Terminal Dashboard**: A real-time, flicker-free dashboard to monitor live prices, market caps, strategy signals, and the status of all background processes.
* **Secure Key Management**: Uses a `.env` file to securely manage all private keys and API keys, keeping them separate from the codebase.

## Project Structure

The project is composed of several key scripts that work together:

* **`main_app.py`**: The central orchestrator. It launches all background processes and displays the main monitoring dashboard.
* **`trade_executor.py`**: The trading "brain." It reads signals from all active strategies and executes trades using the appropriate agent.
* **`data_fetcher.py`**: A background service that collects 1-minute historical candle data and saves it to the SQLite database.
* **`resampler.py`**: A background service that reads the 1-minute data and generates all other required timeframes (e.g., 5m, 1h, 1d).
* **`market_cap_fetcher.py`**: A scheduled service to download daily market cap data.
* **`strategy_*.py`**: Individual files containing the logic for different types of trading strategies (e.g., SMA Crossover).
* **`_data/strategies.json`**: The configuration file for defining and enabling/disabling your trading strategies.
* **`.env`**: The secure file for storing all your private keys and API keys.

## Installation

1. **Clone the Repository**
   ```bash
   git clone https://github.com/your-username/your-repo-name.git
   cd your-repo-name
   ```
2. **Create and Activate a Virtual Environment**
   ```bash
   # For Windows
   python -m venv .venv
   .\.venv\Scripts\activate

   # For macOS/Linux
   python3 -m venv .venv
   source .venv/bin/activate
   ```
3. **Install Dependencies**
   ```bash
   pip install -r requirements.txt
   ```

## Getting Started: Configuration

Before running the application, you must configure your wallets, agents, and API keys.

1. **Create the `.env` File**
   In the root of the project, create a file named `.env`. Copy the following content into it and replace the placeholder values with your actual keys.
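   A minimal example, based on the variables documented in `WIKI/AGENTS.md` and `WIKI/SETUP.md` (the exact set you need depends on your agents):
   ```
   MAIN_WALLET_PRIVATE_KEY=<your-main-wallet-private-key>
   MAIN_WALLET_ADDRESS=<your-main-wallet-address>
   SCALPER_AGENT_PK=<agent-private-key-for-scalper>
   # Optional: avoids CoinGecko rate limits
   COINGECKO_API_KEY=<your-coingecko-api-key>
   ```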
2. **Activate Your Main Wallet on Hyperliquid**
   The `trade_executor.py` script will fail if your main wallet is not registered.
   * Go to the Hyperliquid website, connect your main wallet, and make a small deposit. This is a one-time setup step.
3. **Create and Authorize Trading Agents**
   `trade_executor.py` uses secure "agent" keys that can trade but cannot withdraw. You need to generate these and authorize them with your main wallet.
   * Run the `create_agent.py` script:
   ```bash
   python create_agent.py
   ```
   The script will output a new Agent Private Key. Copy this key and add it to your `.env` file (e.g., as `SCALPER_AGENT_PK`). Repeat this for each agent you want to create.
4. **Configure Your Strategies**
   Open the `_data/strategies.json` file to define which strategies you want to run.
   * Set `"enabled": true` to activate a strategy.
   * Assign an `"agent"` (e.g., "scalper", "swing") to each strategy. The agent name must correspond to a key in your `.env` file (e.g., `SCALPER_AGENT_PK` -> "scalper").
   * Configure the parameters for each strategy, such as the coin, timeframe, and any indicator settings.

## Usage

Once everything is configured, you can run the main application from your terminal:

```bash
python main_app.py
```

## Documentation

Detailed project documentation is available in the `WIKI/` directory. Start with the summary page:

`WIKI/SUMMARY.md`

This contains links and explanations for `OVERVIEW.md`, `SETUP.md`, `SCRIPTS.md`, and other helpful pages that describe usage, data layout, agent management, development notes, and troubleshooting.
5
WIKI/.gitattributes
vendored
Normal file
@@ -0,0 +1,5 @@
# Treat markdown files as text with LF normalization
*.md text eol=lf

# Ensure JSON files are treated as text
*.json text
34
WIKI/AGENTS.md
Normal file
@@ -0,0 +1,34 @@
Agents and Keys

This project supports running multiple agent identities (private keys) to place orders on Hyperliquid. Agents are lightweight keys authorized on-chain by your main wallet.

Agent storage and environment

- For security, agent private keys should be stored as environment variables and not checked into source control.
- Supported patterns:
  - `AGENT_PRIVATE_KEY` (single default agent)
  - `<NAME>_AGENT_PK` or `<NAME>_AGENT_PRIVATE_KEY` (per-agent keys)

Discovering agents

- `trade_executor.py` scans environment variables for agent keys and loads them into `Exchange` objects so each agent can sign orders independently.
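A minimal sketch of that discovery pattern, using only the naming conventions listed above (illustrative, not the actual `trade_executor.py` code):

```python
# Sketch of agent-key discovery by naming convention.
import os

agent_keys = {}
for key, value in os.environ.items():
    if key == "AGENT_PRIVATE_KEY":
        agent_keys["default"] = value
    elif key.endswith("_AGENT_PK"):
        agent_keys[key[: -len("_AGENT_PK")].lower()] = value
    elif key.endswith("_AGENT_PRIVATE_KEY"):
        agent_keys[key[: -len("_AGENT_PRIVATE_KEY")].lower()] = value
# Each key would then be wrapped in an Exchange object for signing.
```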
Creating and authorizing agents

- Use `create_agent.py` with your `MAIN_WALLET_PRIVATE_KEY` to authorize a new agent name. The script will attempt to call `exchange.approve_agent(agent_name)` and print the returned agent private key.

Security notes

- Never commit private keys to Git. Keep them in a secure secrets store or a local `.env` file excluded from version control.
- Rotate keys if they are ever exposed and re-authorize agents using your main wallet.

Example `.env` snippet

    MAIN_WALLET_PRIVATE_KEY=<your-main-wallet-private-key>
    MAIN_WALLET_ADDRESS=<your-main-wallet-address>
    AGENT_PRIVATE_KEY=<agent-private-key>
    EXECUTOR_SCALPER_AGENT_PK=<agent-private-key-for-scalper>

File `agents`

- This repository may contain a local `agents` file used as a quick snapshot; treat it as insecure and remove it from the repo or add it to `.gitignore` if it contains secrets.
20
WIKI/CONTRIBUTING.md
Normal file
@@ -0,0 +1,20 @@
Contributing

Thanks for considering contributing! Please follow these guidelines to make the process smooth.

How to contribute

1. Fork the repository and create a feature branch for your change.
2. Keep changes focused and add tests where appropriate.
3. Submit a Pull Request with a clear description and the reason for the change.

Coding standards

- Keep functions small and well-documented.
- Use the existing logging utilities for consistent output.
- Prefer safe, incremental changes for financial code.

Security and secrets

- Never commit private keys, API keys, or secrets. Use environment variables or a secrets manager.
- If you accidentally commit secrets, rotate them immediately.
31
WIKI/DATA.md
Normal file
@@ -0,0 +1,31 @@
Data layout and formats

This section describes the `_data/` directory and the important files used by the scripts.

Important files

- `_data/market_data.db` — SQLite database that stores candle tables. Tables are typically named `<COIN>_<INTERVAL>` (e.g., `BTC_1m`, `ETH_5m`).
- `_data/coin_precision.json` — Mapping of coin names to their size precision (created by `list_coins.py`).
- `_data/current_prices.json` — Latest market prices that `market.py` writes.
- `_data/fetcher_status.json` — Last run metadata from `data_fetcher.py`.
- `_data/market_cap_data.json` — Market cap summary saved by `market_cap_fetcher.py`.
- `_data/strategies.json` — Configuration for strategies (enabled flag, parameters).
- `_data/strategy_status_<name>.json` — Per-strategy runtime status including last signal and price.
- `_data/executor_managed_positions.json` — Which strategy is currently managing which live position (used by `trade_executor`).

Candle schema

Each candle table contains columns similar to:
- `timestamp_ms` (INTEGER) — milliseconds since epoch
- `open`, `high`, `low`, `close` (FLOAT)
- `volume` (FLOAT)
- `number_of_trades` (INTEGER)
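For illustration, one candle table under this schema could be created like this (the actual DDL lives in `data_fetcher.py` and may differ):

```python
# Illustrative only; real tables are created by data_fetcher.py.
import sqlite3

conn = sqlite3.connect("_data/market_data.db", timeout=10)  # 10s timeout, per project convention
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS BTC_1m (
        timestamp_ms     INTEGER PRIMARY KEY,  -- milliseconds since epoch
        open             FLOAT,
        high             FLOAT,
        low              FLOAT,
        close            FLOAT,
        volume           FLOAT,
        number_of_trades INTEGER
    )
    """
)
conn.commit()
conn.close()
```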
Trade logs

- Persistent trade history is stored in `_logs/trade_history.csv` with the following columns: `timestamp_utc`, `strategy`, `coin`, `action`, `price`, `size`, `signal`, `pnl`.

Backups and maintenance

- Periodically back up `_data/market_data.db`. The WAL and SHM files are also present when SQLite uses WAL mode.
- Keep JSON config/state files under version control only if they contain no secrets.
24
WIKI/DEVELOPMENT.md
Normal file
@@ -0,0 +1,24 @@
Development and testing

Code style and conventions

- Python 3.11+ with type hints where helpful.
- Use `logging_utils.setup_logging` for consistent logs across scripts.

Running tests

- This repository doesn't currently include a formal test suite. Suggested quick checks:
  - Run `python list_coins.py` to verify connectivity to Hyperliquid Info.
  - Run `python -m pyflakes .` or `python -m pylint` if you have linters installed.

Adding a new strategy

1. Create a new script following the pattern in `strategy_template.py`.
2. Add an entry to `_data/strategies.json` with `enabled: true` and relevant parameters (see the example after this list).
3. Ensure the strategy writes a status JSON file (`_data/strategy_status_<name>.json`) and uses `trade_log.log_trade` to record actions.
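An illustrative `_data/strategies.json` entry matching the shape used elsewhere in this repository; the strategy name, class path, and parameters are placeholders:

```json
{
  "my_new_strategy": {
    "enabled": true,
    "script": "strategy_runner.py",
    "class": "strategies.my_new_strategy.MyNewStrategy",
    "agent": "scalper_agent",
    "parameters": {
      "coin": "ETH",
      "timeframe": "5m"
    }
  }
}
```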
Recommended improvements (low-risk)

- Add a lightweight unit test suite (pytest) for core functions like timeframe parsing, SQL helpers, and signal calculation.
- Add CI (GitHub Actions) to run flake/pylint and unit tests on PRs.
- Move secrets handling to a `.env.example` and document environment variables in `WIKI/SETUP.md`.
29
WIKI/OVERVIEW.md
Normal file
@@ -0,0 +1,29 @@
Hyperliquid Trading Toolkit

This repository contains a collection of utility scripts, data fetchers, resamplers, trading strategies, and a trade executor for working with Hyperliquid trading APIs and crawled data. It is organized to support data collection, transformation, strategy development, and automated execution via agents.

Key components

- Data fetching and management: `data_fetcher.py`, `market.py`, `resampler.py`, `market_cap_fetcher.py`, `list_coins.py`
- Strategies: `strategy_sma_cross.py`, `strategy_template.py`, `strategy_sma_125d.py` (if present)
- Execution: `trade_executor.py`, `create_agent.py`, `agents` helper
- Utilities: `logging_utils.py`, `trade_log.py`
- Data storage: SQLite database in `_data/market_data.db` and JSON files in `_data`

Intended audience

- Developers building strategies and automations on Hyperliquid
- Data engineers collecting and processing market data
- Operators running the fetchers and executors on a scheduler or as system services

Project goals

- Reliable collection of 1m candles and resampling to common timeframes
- Clean separation between data, strategies, and execution
- Lightweight logging and traceable trade records

Where to start

- Read `WIKI/SETUP.md` to prepare your environment
- Use `WIKI/SCRIPTS.md` for a description of individual scripts and how to run them
- Inspect `WIKI/AGENTS.md` to understand agent keys and how to manage them
47
WIKI/SCRIPTS.md
Normal file
@@ -0,0 +1,47 @@
Scripts and How to Use Them

This file documents the main scripts in the repository and their purpose, typical runtime parameters, and key notes.

list_coins.py
- Purpose: Fetches asset metadata from Hyperliquid (name and size/precision) and saves `_data/coin_precision.json`.
- Usage: `python list_coins.py`
- Notes: Reads `hyperliquid.info.Info` and writes a JSON file. Useful to run before market feeders.

market.py (MarketDataFeeder)
- Purpose: Fetches live prices from Hyperliquid and writes `_data/current_prices.json` while printing a live table.
- Usage: `python market.py --log-level normal`
- Notes: Expects `_data/coin_precision.json` to exist.

data_fetcher.py (CandleFetcherDB)
- Purpose: Fetches historical 1m candles and stores them in `_data/market_data.db` using a table-per-coin naming convention.
- Usage: `python data_fetcher.py --coins BTC ETH --interval 1m --days 7`
- Notes: Can be run regularly by a scheduler to keep the DB up to date.

resampler.py (Resampler)
- Purpose: Reads 1m candles from SQLite and resamples to configured timeframes (e.g. 5m, 15m, 1h), appending new candles to tables.
- Usage: `python resampler.py --coins BTC ETH --timeframes 5m 15m 1h --log-level normal`

market_cap_fetcher.py (MarketCapFetcher)
- Purpose: Pulls CoinGecko market cap numbers and maintains historical daily tables in the same SQLite DB.
- Usage: `python market_cap_fetcher.py --coins BTC ETH --log-level normal`
- Notes: Optional `COINGECKO_API_KEY` in `.env` avoids throttling.

strategy_sma_cross.py (SmaCrossStrategy)
- Purpose: Runs an SMA-based trading strategy. Reads candles from `_data/market_data.db` and writes status to `_data/strategy_status_<name>.json`.
- Usage: `python strategy_sma_cross.py --name sma_cross_1 --params '{"coin":"BTC","timeframe":"1m","fast":5,"slow":20}' --log-level normal`

trade_executor.py (TradeExecutor)
- Purpose: Orchestrates agent-based order execution using agent private keys found in environment variables. Uses `_data/strategies.json` to determine active strategies.
- Usage: `python trade_executor.py --log-level normal`
- Notes: Requires `MAIN_WALLET_ADDRESS` and agent keys. See `create_agent.py` to authorize agents on-chain.

create_agent.py
- Purpose: Authorizes a new on-chain agent using your main wallet (requires `MAIN_WALLET_PRIVATE_KEY`).
- Usage: `python create_agent.py`
- Notes: Prints the new agent private key to stdout — save it securely.

trade_log.py
- Purpose: Provides a thread-safe CSV trade history logger. Used by the executor and strategies to record actions.

Other utility scripts
- import_csv.py, fix_timestamps.py, del_market_cap_tables.py, etc. — see file headers for details.
42
WIKI/SETUP.md
Normal file
@@ -0,0 +1,42 @@
Setup and Installation

Prerequisites

- Python 3.11+ (project uses modern dependencies)
- Git (optional)
- A Hyperliquid account and an activated main wallet if you want to authorize agents and trade

Virtual environment

1. Create a virtual environment:

       python -m venv .venv

2. Activate the virtual environment (PowerShell on Windows):

       .\.venv\Scripts\Activate.ps1

3. Upgrade pip and install dependencies:

       python -m pip install --upgrade pip
       pip install -r requirements.txt

Configuration

- Copy `.env.example` to `.env` and set the following variables as required:
  - MAIN_WALLET_PRIVATE_KEY (used by `create_agent.py` to authorize agents)
  - MAIN_WALLET_ADDRESS (used by `trade_executor.py`)
  - AGENT_PRIVATE_KEY or per-agent keys like `EXECUTOR_SCALPER_AGENT_PK`
  - Optional: COINGECKO_API_KEY for `market_cap_fetcher.py` to avoid rate limits

Data directory

- The project writes and reads data from the `_data/` folder. Ensure the directory exists and is writable by the user running the scripts.

Quick test

After installing packages, run `list_coins.py` in a dry run to verify connectivity to the Hyperliquid info API:

    python list_coins.py

If you encounter import errors, ensure the virtual environment is active and the `requirements.txt` dependencies are installed.
15
WIKI/SUMMARY.md
Normal file
@@ -0,0 +1,15 @@
Project Wiki Summary

This directory contains human-friendly documentation for the project. Files:

- `OVERVIEW.md` — High-level overview and where to start
- `SETUP.md` — Environment setup and quick test steps
- `SCRIPTS.md` — Per-script documentation and usage examples
- `AGENTS.md` — How agents work and secure handling of keys
- `DATA.md` — Data folder layout and schema notes
- `DEVELOPMENT.md` — Developer guidance and recommended improvements
- `CONTRIBUTING.md` — How to contribute safely
- `TROUBLESHOOTING.md` — Common problems and solutions

Notes:
- These pages were generated from repository source files and common patterns in trading/data projects. Validate any sensitive information (agent keys) and remove it from the repository when sharing.
21
WIKI/TROUBLESHOOTING.md
Normal file
@@ -0,0 +1,21 @@
Troubleshooting common issues

1. Import errors
   - Ensure the virtual environment is active.
   - Run `pip install -r requirements.txt`.

2. Agent authorization failures
   - Ensure your main wallet is activated on Hyperliquid and has funds.
   - The `create_agent.py` script will print helpful messages if the vault (main wallet) cannot act.

3. SQLite locked errors
   - Increase the SQLite timeout when opening connections (this project uses a 10s timeout in the fetcher). Close other programs that may hold the DB open.

4. Missing coin precision file
   - Run `python list_coins.py` to regenerate `_data/coin_precision.json`.

5. Rate limits from CoinGecko
   - Set `COINGECKO_API_KEY` in your `.env` file and ensure the fetcher respects backoff.

6. Agent keys in `agents` file or other local files
   - Treat any `agents` file with private keys as compromised; rotate keys and remove the file from the repository.
@@ -1,6 +1,6 @@
 {
   "sma_cross_eth_5m": {
-    "strategy_name": "sma_cross_1",
+    "strategy_name": "sma_cross_2",
     "script": "strategies.ma_cross_strategy.MaCrossStrategy",
     "optimization_params": {
       "fast": {
@@ -1,208 +0,0 @@
{
    "0G": "zero-gravity",
    "2Z": "doublezero",
    "AAVE": "aave",
    "ACE": "endurance",
    "ADA": "ada-the-dog",
    "AI": "sleepless-ai",
    "AI16Z": "ai16z",
    "AIXBT": "aixbt",
    "ALGO": "dear-algorithm",
    "ALT": "altlayer",
    "ANIME": "anime-token",
    "APE": "ape-3",
    "APEX": "apex-token-2",
    "APT": "aptos",
    "AR": "arweave",
    "ARB": "osmosis-allarb",
    "ARK": "ark-3",
    "ASTER": "astar",
    "ATOM": "lost-bitcoin-layer",
    "AVAX": "binance-peg-avalanche",
    "AVNT": "avantis",
    "BABY": "baby-2",
    "BADGER": "badger-dao",
    "BANANA": "nforbanana",
    "BCH": "bitcoin-cash",
    "BERA": "berachain-bera",
    "BIGTIME": "big-time",
    "BIO": "bio-protocol",
    "BLAST": "blast",
    "BLUR": "blur",
    "BLZ": "bluzelle",
    "BNB": "binancecoin",
    "BNT": "bancor",
    "BOME": "book-of-meme",
    "BRETT": "brett",
    "BSV": "bitcoin-cash-sv",
    "BTC": "bitcoin",
    "CAKE": "pancakeswap-token",
    "CANTO": "canto",
    "CATI": "catizen",
    "CELO": "celo",
    "CFX": "cosmic-force-token-v2",
    "CHILLGUY": "just-a-chill-guy",
    "COMP": "compound-governance-token",
    "CRV": "curve-dao-token",
    "CYBER": "cyberconnect",
    "DOGE": "doge-on-pulsechain",
    "DOOD": "doodles",
    "DOT": "xcdot",
    "DYDX": "dydx-chain",
    "DYM": "dymension",
    "EIGEN": "eigenlayer",
    "ENA": "ethena",
    "ENS": "ethereum-name-service",
    "ETC": "ethereum-classic",
    "ETH": "ethereum",
    "ETHFI": "ether-fi",
    "FARTCOIN": "fartcoin-2",
    "FET": "fetch-ai",
    "FIL": "filecoin",
    "FRIEND": "friend-tech",
    "FTM": "fantom",
    "FTT": "ftx-token",
    "GALA": "gala",
    "GAS": "gas",
    "GMT": "stepn",
    "GMX": "gmx",
    "GOAT": "goat",
    "GRASS": "grass-3",
    "GRIFFAIN": "griffain",
    "HBAR": "hedera-hashgraph",
    "HEMI": "hemi",
    "HMSTR": "hamster-kombat",
    "HYPE": "hyperliquid",
    "HYPER": "hyper-4",
    "ILV": "illuvium",
    "IMX": "immutable-x",
    "INIT": "initia",
    "INJ": "injective-protocol",
    "IO": "io",
    "IOTA": "iota-2",
    "IP": "story-2",
    "JELLY": "jelly-time",
    "JTO": "jito-governance-token",
    "JUP": "jupiter-exchange-solana",
    "KAITO": "kaito",
    "KAS": "wrapped-kaspa",
    "LAUNCHCOIN": "ben-pasternak",
    "LAYER": "unilayer",
    "LDO": "linea-bridged-ldo-linea",
    "LINEA": "linea",
    "LINK": "osmosis-alllink",
    "LISTA": "lista",
    "LOOM": "loom",
    "LTC": "litecoin",
    "MANTA": "manta-network",
    "MATIC": "matic-network",
    "MAV": "maverick-protocol",
    "MAVIA": "heroes-of-mavia",
    "ME": "magic-eden",
    "MEGA": "megaeth",
    "MELANIA": "melania-meme",
    "MEME": "mpx6900",
    "MERL": "merlin-chain",
    "MET": "metya",
    "MEW": "cat-in-a-dogs-world",
    "MINA": "mina-protocol",
    "MKR": "maker",
    "MNT": "mynth",
    "MON": "mon-protocol",
    "MOODENG": "moo-deng-2",
    "MORPHO": "morpho",
    "MOVE": "movement",
    "MYRO": "myro",
    "NEAR": "near",
    "NEO": "neo",
    "NIL": "nillion",
    "NOT": "nothing-3",
    "NTRN": "neutron-3",
    "NXPC": "nexpace",
    "OGN": "origin-protocol",
    "OM": "mantra-dao",
    "OMNI": "omni-2",
    "ONDO": "ondo-finance",
    "OP": "optimism",
    "ORBS": "orbs",
    "ORDI": "ordinals",
    "OX": "ox-fun",
    "PANDORA": "pandora",
    "PAXG": "pax-gold",
    "PENDLE": "pendle",
    "PENGU": "pudgy-penguins",
    "PEOPLE": "constitutiondao-wormhole",
    "PIXEL": "pixel-3",
    "PNUT": "pnut",
    "POL": "proof-of-liquidity",
    "POLYX": "polymesh",
    "POPCAT": "popcat",
    "PROMPT": "wayfinder",
    "PROVE": "succinct",
    "PUMP": "pump-fun",
    "PURR": "purr-2",
    "PYTH": "pyth-network",
    "RDNT": "radiant-capital",
    "RENDER": "render-token",
    "REQ": "request-network",
    "RESOLV": "resolv",
    "REZ": "renzo",
    "RLB": "rollbit-coin",
    "RSR": "reserve-rights-token",
    "RUNE": "thorchain",
    "S": "token-s",
    "SAGA": "saga-2",
    "SAND": "the-sandbox-wormhole",
    "SCR": "scroll",
    "SEI": "sei-network",
    "SHIA": "shiba-saga",
    "SKY": "sky",
    "SNX": "havven",
    "SOL": "solana",
    "SOPH": "sophon",
    "SPX": "spx6900",
    "STBL": "stbl",
    "STG": "stargate-finance",
    "STRAX": "stratis",
    "STRK": "starknet",
    "STX": "stox",
    "SUI": "sui",
    "SUPER": "superfarm",
    "SUSHI": "sushi",
    "SYRUP": "syrup",
    "TAO": "the-anthropic-order",
    "TIA": "tia",
    "TNSR": "tensorium",
    "TON": "tontoken",
    "TRB": "tellor",
    "TRUMP": "trumpeffect69420",
    "TRX": "tron-bsc",
    "TST": "test-3",
    "TURBO": "turbo",
    "UMA": "uma",
    "UNI": "uni",
    "UNIBOT": "unibot",
    "USTC": "wrapped-ust",
    "USUAL": "usual",
    "VINE": "vine",
    "VIRTUAL": "virtual-protocol",
    "VVV": "venice-token",
    "W": "w",
    "WCT": "connect-token-wct",
    "WIF": "wif-secondchance",
    "WLD": "worldcoin-wld",
    "WLFI": "world-liberty-financial",
    "XAI": "xai-blockchain",
    "XLM": "stellar",
    "XPL": "pulse-2",
    "XRP": "ripple",
    "YGG": "yield-guild-games",
    "YZY": "yzy",
    "ZEC": "zcash",
    "ZEN": "zenith-3",
    "ZEREBRO": "zerebro",
    "ZETA": "zeta",
    "ZK": "zksync",
    "ZORA": "zora",
    "ZRO": "layerzero"
}
@@ -101,7 +101,6 @@
     "MAV": 0,
     "MAVIA": 1,
     "ME": 1,
-    "MEGA": 0,
     "MELANIA": 1,
     "MEME": 0,
     "MERL": 0,
7
_data/executor_managed_positions.json
Normal file
@@ -0,0 +1,7 @@
{
    "sma_cross_2": {
        "coin": "BTC",
        "side": "short",
        "size": 0.0001
    }
}
File diff suppressed because it is too large
BIN
_data/market_data.db-shm
Normal file
Binary file not shown.
BIN
_data/market_data.db-wal
Normal file
Binary file not shown.
@@ -1,11 +0,0 @@
{
    "copy_trader_eth_ETH": {
        "strategy": "copy_trader_eth",
        "coin": "ETH",
        "side": "long",
        "open_time_utc": "2025-11-02T20:35:02.988272+00:00",
        "open_price": 3854.9,
        "amount": 0.0055,
        "leverage": 3
    }
}
@@ -1,11 +1,12 @@
 {
-    "sma_cross_1": {
-        "enabled": false,
+    "sma_cross_eth_5m": {
+        "enabled": true,
         "script": "strategy_runner.py",
         "class": "strategies.ma_cross_strategy.MaCrossStrategy",
+        "agent": "scalper_agent",
         "parameters": {
             "coin": "ETH",
-            "timeframe": "15m",
+            "timeframe": "1m",
             "short_ma": 7,
             "long_ma": 44,
             "size": 0.0055,
@@ -13,39 +14,19 @@
             "leverage_short": 5
         }
     },
-    "sma_44d_btc": {
-        "enabled": false,
+    "sma_125d_btc": {
+        "enabled": true,
         "script": "strategy_runner.py",
         "class": "strategies.single_sma_strategy.SingleSmaStrategy",
+        "agent": "swing_agent",
         "parameters": {
-            "agent": "swing",
             "coin": "BTC",
             "timeframe": "1d",
             "sma_period": 44,
             "size": 0.0001,
-            "leverage_long": 3,
+            "leverage_long": 2,
             "leverage_short": 1
         }
-    },
-    "copy_trader_eth": {
-        "enabled": true,
-        "is_event_driven": true,
-        "class": "strategies.copy_trader_strategy.CopyTraderStrategy",
-        "parameters": {
-            "agent": "scalper",
-            "target_address": "0x32885a6adac4375858E6edC092EfDDb0Ef46484C",
-            "coins_to_copy": {
-                "ETH": {
-                    "size": 0.0055,
-                    "leverage_long": 3,
-                    "leverage_short": 3
-                },
-                "BTC": {
-                    "size": 0.0002,
-                    "leverage_long": 1,
-                    "leverage_short": 1
-                }
-            }
-        }
+    }
 }
@@ -1,7 +0,0 @@
{
    "ETH": {
        "side": "long",
        "size": 0.018,
        "entry": 3864.2
    }
}
@@ -1,7 +0,0 @@
{
    "strategy_name": "copy_trader_eth",
    "current_signal": "WAIT",
    "last_signal_change_utc": null,
    "signal_price": null,
    "last_checked_utc": "2025-11-02T09:55:08.460168+00:00"
}
7
_data/strategy_status_ma_cross_btc.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "ma_cross_btc",
    "current_signal": "HOLD",
    "last_signal_change_utc": "2025-10-12T17:00:00+00:00",
    "signal_price": 114286.0,
    "last_checked_utc": "2025-10-15T11:48:55.092260+00:00"
}
7
_data/strategy_status_sma_125d_btc.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_125d_btc",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-14T00:00:00+00:00",
    "signal_price": 113026.0,
    "last_checked_utc": "2025-10-16T10:42:03.203292+00:00"
}
7
_data/strategy_status_sma_125d_eth.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_125d_eth",
    "current_signal": "BUY",
    "last_signal_change_utc": "2025-08-26T00:00:00+00:00",
    "signal_price": 4600.63,
    "last_checked_utc": "2025-10-15T17:35:17.663159+00:00"
}
7
_data/strategy_status_sma_44d_btc.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_44d_btc",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-14T00:00:00+00:00",
    "signal_price": 113026.0,
    "last_checked_utc": "2025-10-16T10:42:03.202977+00:00"
}
7
_data/strategy_status_sma_5m_eth.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_5m_eth",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-15T17:30:00+00:00",
    "signal_price": 3937.5,
    "last_checked_utc": "2025-10-15T17:35:05.035566+00:00"
}
7
_data/strategy_status_sma_cross.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_cross",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-15T11:45:00+00:00",
    "signal_price": 111957.0,
    "last_checked_utc": "2025-10-15T12:10:05.048434+00:00"
}
7
_data/strategy_status_sma_cross_1.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_cross_1",
    "current_signal": "FLAT",
    "last_signal_change_utc": "2025-10-18T20:22:00+00:00",
    "signal_price": 3893.9,
    "last_checked_utc": "2025-10-18T20:30:05.021192+00:00"
}
7
_data/strategy_status_sma_cross_2.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_cross_2",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-20T00:00:00+00:00",
    "signal_price": 110811.0,
    "last_checked_utc": "2025-10-20T18:45:51.578502+00:00"
}
7
_data/strategy_status_sma_cross_eth_5m.json
Normal file
@@ -0,0 +1,7 @@
{
    "strategy_name": "sma_cross_eth_5m",
    "current_signal": "SELL",
    "last_signal_change_utc": "2025-10-15T11:45:00+00:00",
    "signal_price": 4106.1,
    "last_checked_utc": "2025-10-15T12:05:05.022308+00:00"
}
@@ -1,290 +0,0 @@
{
  "Whale 1 (BTC Maxi)": {
    "address": "0xb83de012dba672c76a7dbbbf3e459cb59d7d6e36",
    "core_state": {
      "raw_state": {
        "marginSummary": {
          "accountValue": "30018881.1193690002",
          "totalNtlPos": "182930683.6996490061",
          "totalRawUsd": "212949564.8190180063",
          "totalMarginUsed": "22969943.9848450013"
        },
        "crossMarginSummary": {
          "accountValue": "30018881.1193690002",
          "totalNtlPos": "182930683.6996490061",
          "totalRawUsd": "212949564.8190180063",
          "totalMarginUsed": "22969943.9848450013"
        },
        "crossMaintenanceMarginUsed": "5420634.4984849999",
        "withdrawable": "7043396.1885489998",
        "assetPositions": [
          {
            "type": "oneWay",
            "position": {
              "coin": "BTC", "szi": "-546.94441",
              "leverage": { "type": "cross", "value": 10 },
              "entryPx": "115183.2", "positionValue": "62795781.6009199992",
              "unrealizedPnl": "203045.067519", "returnOnEquity": "0.0322299761",
              "liquidationPx": "159230.7089577085", "marginUsed": "6279578.1600919999",
              "maxLeverage": 40,
              "cumFunding": { "allTime": "-6923407.0911370004", "sinceOpen": "-6923407.0970780002", "sinceChange": "-1574.188052" }
            }
          },
          {
            "type": "oneWay",
            "position": {
              "coin": "ETH", "szi": "-13938.989",
              "leverage": { "type": "cross", "value": 10 },
              "entryPx": "4106.64", "positionValue": "58064252.5784000009",
              "unrealizedPnl": "-821803.895073", "returnOnEquity": "-0.1435654683",
              "liquidationPx": "5895.7059682083", "marginUsed": "5806425.2578400001",
              "maxLeverage": 25,
              "cumFunding": { "allTime": "-6610045.8844170002", "sinceOpen": "-6610045.8844170002", "sinceChange": "-730.403023" }
            }
          },
          {
            "type": "oneWay",
            "position": {
              "coin": "SOL", "szi": "-75080.68",
              "leverage": { "type": "cross", "value": 10 },
              "entryPx": "201.3063", "positionValue": "14975592.4328000005",
              "unrealizedPnl": "138627.573942", "returnOnEquity": "0.0917199656",
              "liquidationPx": "519.0933515657", "marginUsed": "1497559.2432800001",
              "maxLeverage": 20,
              "cumFunding": { "allTime": "-792893.154387", "sinceOpen": "-922.301401", "sinceChange": "-187.682929" }
            }
          },
          {
            "type": "oneWay",
            "position": {
              "coin": "DOGE", "szi": "-109217.0",
              "leverage": { "type": "cross", "value": 10 },
              "entryPx": "0.279959", "positionValue": "22081.49306",
              "unrealizedPnl": "8494.879599", "returnOnEquity": "2.7782496288",
              "liquidationPx": "213.2654356057", "marginUsed": "2208.149306",
              "maxLeverage": 10,
              "cumFunding": { "allTime": "-1875.469799", "sinceOpen": "-1875.469799", "sinceChange": "45.79339" }
            }
          },
          {
            "type": "oneWay",
            "position": {
              "coin": "INJ", "szi": "-18747.2",
              "leverage": { "type": "cross", "value": 3 },
              "entryPx": "13.01496", "positionValue": "162200.7744",
              "unrealizedPnl": "81793.4435", "returnOnEquity": "1.005680924",
              "liquidationPx": "1208.3529290194", "marginUsed": "54066.9248",
              "maxLeverage": 10,
              "cumFunding": { "allTime": "-539.133533", "sinceOpen": "-539.133533", "sinceChange": "-7.367325" }
            }
          },
          {
            "type": "oneWay",
            "position": {
              "coin": "SUI", "szi": "-376577.6",
              "leverage": { "type": "cross", "value": 3 },
              "entryPx": "3.85881", "positionValue": "989495.3017599999",
              "unrealizedPnl": "463648.956001", "returnOnEquity": "0.9571980625",
              "liquidationPx": "64.3045458208", "marginUsed": "329831.767253",
              "maxLeverage": 10,
              "cumFunding": { "allTime": "-45793.455728", "sinceOpen": "-45793.450891", "sinceChange": "-1233.875821" }
            }
          },
          {
            "type": "oneWay",
            "position": {
              "coin": "XRP", "szi": "-39691.0",
              "leverage": { "type": "cross", "value": 20 },
              "entryPx": "2.468585", "positionValue": "105486.7707",
              "unrealizedPnl": "-7506.1484", "returnOnEquity": "-1.5321699789",
              "liquidationPx": "607.2856858464", "marginUsed": "5274.338535",
              "maxLeverage": 20,
              "cumFunding": { "allTime": "-2645.400002", "sinceOpen": "-116.036833", "sinceChange": "-116.036833" }
            }
          },
          {
            "type": "oneWay",
            "position": {
              "coin": "HYPE", "szi": "-750315.16",
              "leverage": { "type": "cross", "value": 5 },
              "entryPx": "43.3419", "positionValue": "34957933.6195600033",
              "unrealizedPnl": "-2437823.0249080001", "returnOnEquity": "-0.3748177636",
              "liquidationPx": "76.3945326684", "marginUsed": "6991586.7239119997",
              "maxLeverage": 5,
              "cumFunding": { "allTime": "-1881584.4214250001", "sinceOpen": "-1881584.4214250001", "sinceChange": "-45247.838743" }
            }
          },
          {
            "type": "oneWay",
            "position": {
              "coin": "FARTCOIN", "szi": "-4122236.7999999998",
              "leverage": { "type": "cross", "value": 10 },
              "entryPx": "0.80127", "positionValue": "1681584.057824",
              "unrealizedPnl": "1621478.3279619999", "returnOnEquity": "4.9090151459",
              "liquidationPx": "6.034656163", "marginUsed": "168158.405782",
              "maxLeverage": 10,
              "cumFunding": { "allTime": "-72941.395024", "sinceOpen": "-51271.5204", "sinceChange": "-6504.295598" }
            }
          },
          {
            "type": "oneWay",
            "position": {
              "coin": "PUMP", "szi": "-1921732999.0",
              "leverage": { "type": "cross", "value": 5 },
              "entryPx": "0.005551", "positionValue": "9176275.0702250004",
              "unrealizedPnl": "1491738.24016", "returnOnEquity": "0.6991640321",
              "liquidationPx": "0.0166674064", "marginUsed": "1835255.0140450001",
              "maxLeverage": 10,
              "cumFunding": { "allTime": "-196004.534539", "sinceOpen": "-196004.534539", "sinceChange": "-9892.654861" }
            }
          }
        ],
        "time": 1761595358385
      },
      "account_value": 30018881.119369,
      "margin_used": 22969943.984845,
      "margin_utilization": 0.765183215640378,
      "available_margin": 7048937.134523999,
      "total_position_value": 0.0,
      "portfolio_leverage": 0.0
    },
    "open_orders": {
      "raw_orders": [
        {
          "coin": "WLFI",
          "side": "B",
          "limitPx": "0.10447",
          "sz": "2624.0",
          "oid": 194029229960,
          "timestamp": 1760131688558,
          "origSz": "12760.0",
          "cloid": "0x00000000000000000000001261000016"
        },
        {
          "coin": "@166",
          "side": "A",
          "limitPx": "1.01",
          "sz": "103038.77",
          "oid": 174787748753,
          "timestamp": 1758819420037,
          "origSz": "3000000.0"
        }
      ]
    },
    "account_metrics": {
      "cumVlm": "2823125892.6900000572",
      "nRequestsUsed": 1766294,
      "nRequestsCap": 2823135892
    }
  }
}
@@ -1,7 +0,0 @@
[
  {
    "name": "Whale 1 (BTC Maxi)",
    "address": "0xb83de012dba672c76a7dbbbf3e459cb59d7d6e36",
    "tags": ["btc", "high_leverage"]
  }
]
@@ -15,13 +15,13 @@ from logging_utils import setup_logging

# --- Configuration ---
DEFAULT_ADDRESSES_TO_WATCH = [
    #"0xd4c1f7e8d876c4749228d515473d36f919583d1d",
    "0x47930c76790c865217472f2ddb4d14c640ee450a",
    "0x0fd468a73084daa6ea77a9261e40fdec3e67e0c7",
    # "0x4d69495d16fab95c3c27b76978affa50301079d0",
    # "0x09bc1cf4d9f0b59e1425a8fde4d4b1f7d3c9410d",
    "0xc6ac58a7a63339898aeda32499a8238a46d88e84",
    "0xa8ef95dbd3db55911d3307930a84b27d6e969526",
    # "0x4129c62faf652fea61375dcd9ca8ce24b2bb8b95",
    "0x32885a6adac4375858E6edC092EfDDb0Ef46484C",
    "0xbf1935fe7ab6d0aa3ee8d3da47c2f80e215b2a1c",
]
MAX_FILLS_TO_DISPLAY = 10
LOGS_DIR = "_logs"
165
base_strategy.py
@@ -1,165 +0,0 @@
from abc import ABC, abstractmethod
import pandas as pd
import json
import os
import logging
from datetime import datetime, timezone
import sqlite3
import multiprocessing
import time

from logging_utils import setup_logging
from hyperliquid.info import Info
from hyperliquid.utils import constants

class BaseStrategy(ABC):
    """
    An abstract base class that defines the blueprint for all trading strategies.
    It provides common functionality like loading data, saving status, and state management.
    """

    def __init__(self, strategy_name: str, params: dict, trade_signal_queue: multiprocessing.Queue = None, shared_status: dict = None):
        self.strategy_name = strategy_name
        self.params = params
        self.trade_signal_queue = trade_signal_queue
        # Optional multiprocessing.Manager().dict() to hold live status (avoids file IO)
        self.shared_status = shared_status

        self.coin = params.get("coin", "N/A")
        self.timeframe = params.get("timeframe", "N/A")
        self.db_path = os.path.join("_data", "market_data.db")
        self.status_file_path = os.path.join("_data", f"strategy_status_{self.strategy_name}.json")

        self.current_signal = "INIT"
        self.last_signal_change_utc = None
        self.signal_price = None

        # Note: Logging is set up by the run_strategy function

    def load_data(self) -> pd.DataFrame:
        """Loads historical data for the configured coin and timeframe."""
        table_name = f"{self.coin}_{self.timeframe}"

        periods = [v for k, v in self.params.items() if 'period' in k or '_ma' in k or 'slow' in k or 'fast' in k]
        limit = max(periods) + 50 if periods else 500

        try:
            with sqlite3.connect(f"file:{self.db_path}?mode=ro", uri=True) as conn:
                query = f'SELECT * FROM "{table_name}" ORDER BY datetime_utc DESC LIMIT {limit}'
                df = pd.read_sql(query, conn, parse_dates=['datetime_utc'])
            if df.empty: return pd.DataFrame()
            df.set_index('datetime_utc', inplace=True)
            df.sort_index(inplace=True)
            return df
        except Exception as e:
            logging.error(f"Failed to load data from table '{table_name}': {e}")
            return pd.DataFrame()

    @abstractmethod
    def calculate_signals(self, df: pd.DataFrame) -> pd.DataFrame:
        """The core logic of the strategy. Must be implemented by child classes."""
        pass

    def calculate_signals_and_state(self, df: pd.DataFrame) -> bool:
        """
        A wrapper that calls the strategy's signal calculation, determines
        the last signal change, and returns True if the signal has changed.
        """
        df_with_signals = self.calculate_signals(df)
        df_with_signals.dropna(inplace=True)
        if df_with_signals.empty:
            return False

        df_with_signals['position_change'] = df_with_signals['signal'].diff()

        last_signal_int = df_with_signals['signal'].iloc[-1]
        new_signal_str = "HOLD"
        if last_signal_int == 1: new_signal_str = "BUY"
        elif last_signal_int == -1: new_signal_str = "SELL"

        signal_changed = False
        if self.current_signal == "INIT":
            if new_signal_str == "BUY": self.current_signal = "INIT_BUY"
            elif new_signal_str == "SELL": self.current_signal = "INIT_SELL"
            else: self.current_signal = "HOLD"
            signal_changed = True
        elif new_signal_str != self.current_signal:
            self.current_signal = new_signal_str
            signal_changed = True

        if signal_changed:
            last_change_series = df_with_signals[df_with_signals['position_change'] != 0]
            if not last_change_series.empty:
                last_change_row = last_change_series.iloc[-1]
                self.last_signal_change_utc = last_change_row.name.tz_localize('UTC').isoformat()
                self.signal_price = last_change_row['close']

        return signal_changed

    def _save_status(self):
        """Saves the current strategy state to its JSON file."""
        status = {
            "strategy_name": self.strategy_name,
            "current_signal": self.current_signal,
            "last_signal_change_utc": self.last_signal_change_utc,
            "signal_price": self.signal_price,
            "last_checked_utc": datetime.now(timezone.utc).isoformat()
        }
        # If a shared status dict is provided (Manager.dict()), update it instead of writing files
        try:
            if self.shared_status is not None:
                try:
                    # Store the status under the strategy name for easy lookup
                    self.shared_status[self.strategy_name] = status
                except Exception:
                    # Manager proxies may not accept nested mutable objects consistently; assign a copy
                    self.shared_status[self.strategy_name] = dict(status)
            else:
                with open(self.status_file_path, 'w', encoding='utf-8') as f:
                    json.dump(status, f, indent=4)
        except IOError as e:
            logging.error(f"Failed to write status file for {self.strategy_name}: {e}")

    def run_polling_loop(self):
        """
        The default execution loop for polling-based strategies (e.g., SMAs).
        """
        while True:
            df = self.load_data()
            if df.empty:
                logging.warning("No data loaded. Waiting 1 minute...")
                time.sleep(60)
                continue

            signal_changed = self.calculate_signals_and_state(df.copy())
            self._save_status()

            if signal_changed or self.current_signal == "INIT_BUY" or self.current_signal == "INIT_SELL":
                logging.warning(f"New signal detected: {self.current_signal}")
                self.trade_signal_queue.put({
                    "strategy_name": self.strategy_name,
                    "signal": self.current_signal,
                    "coin": self.coin,
                    "signal_price": self.signal_price,
                    "config": {"agent": self.params.get("agent"), "parameters": self.params}
                })
                if self.current_signal == "INIT_BUY": self.current_signal = "BUY"
                if self.current_signal == "INIT_SELL": self.current_signal = "SELL"

            logging.info(f"Current Signal: {self.current_signal}")
            time.sleep(60)

    def run_event_loop(self):
        """
        A placeholder for event-driven (WebSocket) strategies.
        Child classes must override this.
        """
        logging.error("run_event_loop() is not implemented for this strategy.")
        time.sleep(3600)  # Sleep for an hour to prevent rapid error loops

    def on_fill_message(self, message):
        """
        Placeholder for the WebSocket callback.
        Child classes must override this.
        """
        pass
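For context, here is a minimal illustrative subclass (hypothetical, not part of this diff) showing the contract `calculate_signals` must satisfy: return the frame with a `signal` column where 1 marks a BUY regime and -1 a SELL regime, matching the mapping in `calculate_signals_and_state` above.

```python
import pandas as pd

# Hypothetical example subclass -- names and params are illustrative only.
class SmaCrossStrategy(BaseStrategy):
    def calculate_signals(self, df: pd.DataFrame) -> pd.DataFrame:
        fast = self.params.get("fast_ma", 10)
        slow = self.params.get("slow_ma", 50)
        df["fast_ma"] = df["close"].rolling(fast).mean()
        df["slow_ma"] = df["close"].rolling(slow).mean()
        # Boolean cross -> {1, -1}: True maps to 1 (BUY), False to -1 (SELL)
        df["signal"] = (df["fast_ma"] > df["slow_ma"]).astype(int) * 2 - 1
        return df
```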
@@ -1,6 +0,0 @@
2025-12-11 14:29:08,607 - INFO - Strategy Initialized. Liquidity (L): 1236.4542
2025-12-11 14:29:09,125 - INFO - CLP Hedger initialized. Agent: 0xcB262CeAaE5D8A99b713f87a43Dd18E6Be892739. Coin: ETH (Decimals: 4)
2025-12-11 14:29:09,126 - INFO - Starting Hedge Monitor Loop. Interval: 30s
2025-12-11 14:29:09,126 - INFO - Hedging Range: 2844.11 - 3477.24 | Static Long: 0.4
2025-12-11 14:29:09,769 - INFO - Price: 3201.85 | Pool Delta: 0.883 | Tgt Short: 1.283 | Act Short: 0.000 | Diff: 1.283
2025-12-11 14:29:11,987 - ERROR - Order API Error: Order has invalid price.
@@ -1,86 +0,0 @@
# Session Summary

**Date:** 2025-12-11

**Objective(s):**
Fix API errors, enhance bot functionality with safety features (auto-close), and add leverage/funding monitoring.

**Key Accomplishments:**
* **Fixed API Price Error:** Implemented `round_to_sig_figs` to ensure limit prices meet Hyperliquid's 5-significant-figure requirement, resolving the "Order has invalid price" error (see the sketch after this summary).
* **Safety Shutdown:** Added a `close_all_positions` method and linked it to `KeyboardInterrupt`. The bot now automatically closes its hedge position when stopped manually.
* **Leverage Management:** Configured the bot to automatically set leverage to **4x Cross** (`LEVERAGE = 4`) upon initialization.
* **Market Monitoring:** Added a real-time **Funding Rate** display to the main logging loop using `meta_and_asset_ctxs`.

**Key Files Modified:**
* `clp_hedger.py`

**Decisions Made:**
* Used a `math.log10`-based calculation for significant figures to ensure broad compatibility across asset price ranges.
* Implemented `close_all_positions` as a blocking call during shutdown to prioritize safety over an immediate exit.
* Hardcoded `LEVERAGE` in the configuration for now, with a plan to move it to a config file later if needed.
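For reference, a minimal, self-contained sketch of the `math.log10`-based significant-figure rounding described above. It mirrors the `round_to_sig_figs` helper that appears in the `clp_hedger.py` diff below; the sample prices are illustrative only.

```python
import math

def round_to_sig_figs(x: float, sig_figs: int = 5) -> float:
    """Round x to sig_figs significant figures (Hyperliquid prices allow 5)."""
    if x == 0:
        return 0.0
    return round(x, sig_figs - int(math.floor(math.log10(abs(x)))) - 1)

# A raw crossing limit price such as 3201.85 * 1.05 = 3361.9425 has seven
# significant figures and is rejected by the API; rounded, it becomes valid.
print(round_to_sig_figs(3361.9425))   # 3361.9
print(round_to_sig_figs(0.00555149))  # 0.0055515 (works below 1 as well)
```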
# Session Summary

**Date:** 2025-12-11

**Objective(s):**
Implement a dynamic gap-recovery strategy to neutralize initial losses from delayed hedging.

**Key Accomplishments:**
* Implemented "Gap Recovery" logic to dynamically adjust hedging based on the current price relative to the CLP `ENTRY_PRICE` and the initial `START_PRICE`.
* Defined three distinct hedging zones (a worked example follows this summary):
    * **NORMAL (below Entry):** 100% hedge for safety.
    * **RECOVERY (between Entry and Recovery Target):** 0% hedge (naked long) to maximize recovery.
    * **NORMAL (above Recovery Target):** 100% hedge after the gap is neutralized.
* Introduced `PRICE_BUFFER_PCT` and `TIME_BUFFER_SECONDS` to prevent trade churn around zone boundaries.

**Key Files Modified:**
* `clp_hedger.py`

**Decisions Made:**
* Chose a dynamic `START_PRICE` capture at bot initialization to calculate the `GAP`.
* Opted for a 0% hedge in the recovery zone for faster loss neutralization, accepting higher short-term risk.
* Implemented price and time buffers for robust mode switching.
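A worked example of the gap-recovery arithmetic above; the prices are hypothetical and only illustrate how the zones fall out of the two captured values.

```python
# Hypothetical values: the CLP was entered at 3300 but the hedger only
# starts once price has already slipped to 3200.
ENTRY_PRICE = 3300.0   # CLP entry (from the hedge config)
START_PRICE = 3200.0   # price captured at bot initialization

gap = max(0.0, ENTRY_PRICE - START_PRICE)   # 100.0; zero if we start above entry
recovery_target = ENTRY_PRICE + 2 * gap     # 3500.0

# Resulting zones:
#   price < 3300          -> NORMAL   (100% hedge)
#   3300 <= price < 3500  -> RECOVERY (0% hedge, naked long)
#   price >= 3500         -> NORMAL   (100% hedge, gap neutralized)
```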
# Session Summary

**Date:** 2025-12-12

**Objective(s):**
Develop a Uniswap V3 position manager script (formerly a monitor) for Arbitrum, covering fee collection, closing positions, and automated opening of new positions with auto-swapping. Refine the hedging architecture for multi-position management.

**Key Accomplishments:**
* **`uniswap_manager.py` (Unified Lifecycle Manager):**
    * Transformed into a continuous lifecycle manager for AUTOMATIC positions.
    * **Features:**
        * Manages "AUTOMATIC" CLP positions (Open, Monitor, Close, Collect Fees).
        * Reads/writes state to `hedge_status.json`.
        * Implemented auto-wrapping of native ETH to WETH when needed.
        * Includes robust auto-swapping (WETH <-> USDC) to balance tokens before minting.
        * Implemented robust event parsing using `process_receipt` to extract the exact `amount0` and `amount1` from mint transactions.
        * **Fixed `web3.py` v7 `raw_transaction` access across all transaction types.**
        * **Fixed Uniswap V3 math precision** in `calculate_mint_amounts` for accurate token splits.
    * **Troubleshooting & Resolution:**
        * **Address Validation:** Replaced the hardcoded factory address with a dynamic lookup.
        * **ABI Mismatch:** Updated the NPM ABI with event definitions for `IncreaseLiquidity` and `Transfer`.
        * **Typo/Indentation Errors:** Resolved multiple `NameError` (`target_tick_lower`, `w3_instance`, `position_details`) and `IndentationError` issues during script refactoring.
        * **JSON Update Failure:** Fixed `mint_new_position`'s log parsing for the Token ID so that `hedge_status.json` is correctly updated after a successful mint.
* **`clp_scalper_hedger.py` (Dedicated Automatic Hedger):**
    * Created as a new script to hedge `type: "AUTOMATIC"` positions defined in `hedge_status.json`.
    * Uses `SCALPER_AGENT_PK` from `.env`.
    * **Accurate L Calculation:** Calculates Uniswap V3 liquidity (`L`) using `amount0_initial` or `amount1_initial` from `hedge_status.json`, falling back to a heuristic based on `target_value` if the amounts are missing (see the sketch after this summary).
    * **Dynamic Rebalance Threshold:** The threshold adapts to 5% of the position's maximum ETH risk (`max_potential_eth`).
    * **Minimum Order Value:** Enforces a minimum order size of $10 to prevent dust trades and API errors.
* **`clp_hedger.py` (Updated Manual Hedger):**
    * Modified to load its configuration entirely from the `type: "MANUAL"` entry in `hedge_status.json`.
    * Respects the `hedge_enabled` flag from the JSON.
    * Idles if hedging is disabled or no manual position is found.
* **`hedge_status.json`:**
    * Becomes the central source of truth for all (MANUAL and AUTOMATIC) CLP positions, including their type, status, ranges, `entry_price`, `target_value` (for automatic), and `hedge_enabled` flag.
* **.env File Location:** All scripts were updated to load `.env` from the current working directory (`clp_hedger/`).

**Decisions Made:**
* Adopted a multi-script architecture for clarity and separation of concerns (Manager vs. Hedgers).
* Used `hedge_status.json` as the centralized state manager for all CLP positions.
* Implemented robust error handling and debugging throughout the development process.
* Ensured `clp_scalper_hedger.py` is resilient to missing initial-amount data in `hedge_status.json` by implementing fallback `L` calculation methods.
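A minimal sketch of the liquidity (`L`) calculation described above, using the standard Uniswap V3 single-side formulas that also appear in the hedger code below. The position values are hypothetical, and the token amounts are assumed to already be in human units (ETH/USDC), as in the scripts.

```python
import math

def liquidity_from_amount0(amount0_eth: float, price: float, price_upper: float) -> float:
    """WETH-side formula: L = x / (1/sqrt(P) - 1/sqrt(Pb))."""
    return amount0_eth / (1 / math.sqrt(price) - 1 / math.sqrt(price_upper))

def liquidity_from_amount1(amount1_usdc: float, price: float, price_lower: float) -> float:
    """USDC-side formula: L = y / (sqrt(P) - sqrt(Pa))."""
    return amount1_usdc / (math.sqrt(price) - math.sqrt(price_lower))

# Hypothetical position: 0.45 WETH deposited at P = 3100 in a 3050-3150 range.
L = liquidity_from_amount0(0.45, 3100.0, 3150.0)

# Maximum ETH exposure if price falls through the bottom of the range, and
# the 5%-of-risk dynamic rebalance threshold mentioned in the summary.
max_potential_eth = L * (1 / math.sqrt(3050.0) - 1 / math.sqrt(3150.0))
rebalance_threshold = max(0.01, 0.05 * max_potential_eth)
```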
@@ -1,469 +0,0 @@
import os
import time
import logging
import sys
import math
import json
from dotenv import load_dotenv

# --- FIX: Add project root to sys.path to import local modules ---
current_dir = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.dirname(current_dir)
sys.path.append(project_root)

# Now we can import from root
from logging_utils import setup_logging
from eth_account import Account
from hyperliquid.exchange import Exchange
from hyperliquid.info import Info
from hyperliquid.utils import constants

# Load environment variables from .env in the current directory
dotenv_path = os.path.join(current_dir, '.env')
if os.path.exists(dotenv_path):
    load_dotenv(dotenv_path)
else:
    # Fallback to default search
    load_dotenv()

# Setup logging using the project convention
setup_logging("normal", "CLP_HEDGER")

# --- CONFIGURATION DEFAULTS (can be overridden by JSON) ---
REBALANCE_THRESHOLD = 0.15  # ETH
CHECK_INTERVAL = 30  # Seconds
LEVERAGE = 5
STATUS_FILE = "hedge_status.json"

# Gap Recovery Configuration
PRICE_BUFFER_PCT = 0.004  # 0.4% buffer to prevent churn
TIME_BUFFER_SECONDS = 120  # 2 minutes wait between mode switches

def get_manual_position_config():
    """Reads hedge_status.json and returns the first OPEN MANUAL position dict, or None."""
    if not os.path.exists(STATUS_FILE):
        return None
    try:
        with open(STATUS_FILE, 'r') as f:
            data = json.load(f)
        for entry in data:
            if entry.get('type') == 'MANUAL' and entry.get('status') == 'OPEN':
                return entry
    except Exception as e:
        logging.error(f"ERROR reading status file: {e}")
    return None

class HyperliquidStrategy:
    def __init__(self, entry_weth, entry_price, low_range, high_range, start_price, static_long=0.4):
        # Pool configuration
        self.entry_weth = entry_weth
        self.entry_price = entry_price
        self.low_range = low_range
        self.high_range = high_range
        self.static_long = static_long

        # Gap Recovery State
        self.start_price = start_price
        # GAP = max(0, ENTRY - START). If Start > Entry (we are winning), Gap is 0.
        self.gap = max(0.0, entry_price - start_price)
        self.recovery_target = entry_price + (2 * self.gap)

        self.current_mode = "NORMAL"  # "NORMAL" (100% Hedge) or "RECOVERY" (0% Hedge)
        self.last_switch_time = 0

        logging.info(f"Strategy Init. Start Px: {start_price:.2f} | Gap: {self.gap:.2f} | Recovery Tgt: {self.recovery_target:.2f}")

        # Calculate constant liquidity (L) once
        # Formula: L = x / (1/sqrt(P) - 1/sqrt(Pb))
        try:
            sqrt_P = math.sqrt(entry_price)
            sqrt_Pb = math.sqrt(high_range)
            self.L = entry_weth / ((1 / sqrt_P) - (1 / sqrt_Pb))
            logging.info(f"Liquidity (L): {self.L:.4f}")
        except Exception as e:
            logging.error(f"Error calculating liquidity: {e}")
            sys.exit(1)

    def get_pool_delta(self, current_price):
        """Calculates how much ETH the pool currently holds (the risk)."""
        # If price is above the range, the pool holds 0 ETH (100% USDC)
        if current_price >= self.high_range:
            return 0.0

        # If price is below the range, the pool holds the maximum ETH
        if current_price <= self.low_range:
            sqrt_Pa = math.sqrt(self.low_range)
            sqrt_Pb = math.sqrt(self.high_range)
            return self.L * ((1 / sqrt_Pa) - (1 / sqrt_Pb))

        # If in range, calculate the active ETH
        sqrt_P = math.sqrt(current_price)
        sqrt_Pb = math.sqrt(self.high_range)
        return self.L * ((1 / sqrt_P) - (1 / sqrt_Pb))

    def calculate_rebalance(self, current_price, current_short_position_size):
        """
        Determines whether we need to trade and the exact order size.
        """
        # 1. Base Target (Full Hedge)
        pool_delta = self.get_pool_delta(current_price)
        raw_target_short = pool_delta + self.static_long

        # 2. Determine Mode (Normal vs Recovery)
        # Buffers
        entry_upper = self.entry_price * (1 + PRICE_BUFFER_PCT)
        entry_lower = self.entry_price * (1 - PRICE_BUFFER_PCT)

        desired_mode = self.current_mode  # Default to staying the same

        if self.current_mode == "NORMAL":
            # Switch to RECOVERY if:
            # Price > Entry + Buffer AND Price < Recovery Target
            if current_price > entry_upper and current_price < self.recovery_target:
                desired_mode = "RECOVERY"

        elif self.current_mode == "RECOVERY":
            # Switch back to NORMAL if:
            # Price < Entry - Buffer (fell back down) OR Price >= Recovery Target (finished)
            if current_price < entry_lower or current_price >= self.recovery_target:
                desired_mode = "NORMAL"

        # 3. Apply Time Buffer
        now = time.time()
        if desired_mode != self.current_mode:
            if (now - self.last_switch_time) >= TIME_BUFFER_SECONDS:
                logging.info(f"🔄 MODE SWITCH: {self.current_mode} -> {desired_mode} (Px: {current_price:.2f})")
                self.current_mode = desired_mode
                self.last_switch_time = now
            else:
                logging.info(f"⏳ Mode Switch Delayed (Time Buffer). Pending: {desired_mode}")

        # 4. Set Final Target based on Mode
        if self.current_mode == "RECOVERY":
            target_short_size = 0.0
            logging.info(f"🩹 RECOVERY MODE ACTIVE (0% Hedge). Target: {self.recovery_target:.2f}")
        else:
            target_short_size = raw_target_short

        # 5. Calculate Difference
        diff = target_short_size - abs(current_short_position_size)

        return {
            "current_price": current_price,
            "pool_delta": pool_delta,
            "target_short": target_short_size,
            "raw_target": raw_target_short,
            "current_short": abs(current_short_position_size),
            "diff": diff,  # Positive = SELL more (Add Short), Negative = BUY (Reduce Short)
            "action": "SELL" if diff > 0 else "BUY",
            "mode": self.current_mode
        }

def round_to_sz_decimals(amount, sz_decimals=4):
    """
    Hyperliquid requires sizes rounded to the asset's 'szDecimals'.
    For ETH this is usually 4 (e.g., 1.2345). Standard rounding is
    sufficient for these small adjustments.
    """
    return round(abs(amount), sz_decimals)

def round_to_sig_figs(x, sig_figs=5):
    """
    Rounds a number to a specified number of significant figures.
    Hyperliquid prices generally require 5 significant figures.
    """
    if x == 0:
        return 0.0
    return round(x, sig_figs - int(math.floor(math.log10(abs(x)))) - 1)

class CLPHedger:
    def __init__(self):
        self.private_key = os.environ.get("HEDGER_PRIVATE_KEY") or os.environ.get("AGENT_PRIVATE_KEY")
        self.vault_address = os.environ.get("MAIN_WALLET_ADDRESS")

        if not self.private_key:
            logging.error("No private key found (HEDGER_PRIVATE_KEY or AGENT_PRIVATE_KEY) in .env")
            sys.exit(1)
        if not self.vault_address:
            logging.warning("MAIN_WALLET_ADDRESS not found in .env. Assuming the Agent is the Vault (not recommended for CLPs).")

        self.account = Account.from_key(self.private_key)

        # API Connection
        self.info = Info(constants.MAINNET_API_URL, skip_ws=True)

        # Note: If this agent is trading on behalf of a Vault (Main Account),
        # the exchange object needs the vault's address as `account_address`.
        self.exchange = Exchange(self.account, constants.MAINNET_API_URL, account_address=self.vault_address)

        # Load Manual Config from JSON
        self.manual_config = get_manual_position_config()
        self.coin_symbol = "ETH"  # Default, but will try to read from JSON
        self.sz_decimals = 4
        self.strategy = None

        if self.manual_config:
            self.coin_symbol = self.manual_config.get('coin_symbol', 'ETH')

            if self.manual_config.get('hedge_enabled', False):
                self._init_strategy()
            else:
                logging.warning("MANUAL position found but 'hedge_enabled' is FALSE. Hedger will remain idle.")
        else:
            logging.warning("No MANUAL position found in hedge_status.json. Hedger will remain idle.")

        # Set Leverage on Initialization (if the coin symbol is known)
        try:
            logging.info(f"Setting leverage to {LEVERAGE}x (Cross) for {self.coin_symbol}...")
            self.exchange.update_leverage(LEVERAGE, self.coin_symbol, is_cross=True)
        except Exception as e:
            logging.error(f"Failed to update leverage: {e}")

        # Fetch meta once to get szDecimals
        self.sz_decimals = self._get_sz_decimals(self.coin_symbol)
        logging.info(f"CLP Hedger initialized. Agent: {self.account.address}. Coin: {self.coin_symbol} (Decimals: {self.sz_decimals})")

    def _init_strategy(self):
        try:
            entry_p = self.manual_config['entry_price']
            lower = self.manual_config['range_lower']
            upper = self.manual_config['range_upper']
            static_long = self.manual_config.get('static_long', 0.0)
            # Require entry_amount0 (or entry_weth)
            entry_weth = self.manual_config.get('entry_amount0', 0.45)  # Default to 0.45 if missing for now

            start_price = self.get_market_price(self.coin_symbol)
            if start_price is None:
                logging.warning("Waiting for initial price to start strategy...")
                # Logic will retry in the run loop
                return

            self.strategy = HyperliquidStrategy(
                entry_weth=entry_weth,
                entry_price=entry_p,
                low_range=lower,
                high_range=upper,
                start_price=start_price,
                static_long=static_long
            )
            logging.info(f"Strategy Initialized for {self.coin_symbol}.")
        except Exception as e:
            logging.error(f"Failed to init strategy: {e}")
            self.strategy = None

    def _get_sz_decimals(self, coin):
        try:
            meta = self.info.meta()
            for asset in meta["universe"]:
                if asset["name"] == coin:
                    return asset["szDecimals"]
            logging.warning(f"Could not find szDecimals for {coin}, defaulting to 4.")
            return 4
        except Exception as e:
            logging.error(f"Failed to fetch meta: {e}")
            return 4

    def get_funding_rate(self, coin):
        try:
            meta, asset_ctxs = self.info.meta_and_asset_ctxs()
            for i, asset in enumerate(meta["universe"]):
                if asset["name"] == coin:
                    # The funding rate is in the asset context at the same index
                    return float(asset_ctxs[i]["funding"])
            return 0.0
        except Exception as e:
            logging.error(f"Error fetching funding rate: {e}")
            return 0.0

    def get_market_price(self, coin):
        try:
            # Getting all mids is efficient
            mids = self.info.all_mids()
            if coin in mids:
                return float(mids[coin])
            else:
                logging.error(f"Price for {coin} not found in all_mids.")
                return None
        except Exception as e:
            logging.error(f"Error fetching price: {e}")
            return None

    def get_current_position(self, coin):
        try:
            # We need the user state of the Vault (or the account we are trading for)
            user_state = self.info.user_state(self.vault_address or self.account.address)
            for pos in user_state["assetPositions"]:
                if pos["position"]["coin"] == coin:
                    # szi is the size. Positive = Long, Negative = Short.
                    return float(pos["position"]["szi"])
            return 0.0  # No position
        except Exception as e:
            logging.error(f"Error fetching position: {e}")
            return 0.0

    def execute_trade(self, coin, is_buy, size, price):
        logging.info(f"🚀 EXECUTING: {coin} {'BUY' if is_buy else 'SELL'} {size} @ ~{price}")

        # reduceOnly logic: since we are managing a short hedge,
        # BUY = reducing the hedge -> reduce_only=True
        # SELL = increasing the hedge -> reduce_only=False
        reduce_only = is_buy

        try:
            # "Market" order approximated as an aggressive IOC limit that
            # crosses the book with a slippage allowance.
            slippage = 0.05  # 5% slippage tolerance
            raw_limit_px = price * ((1 + slippage) if is_buy else (1 - slippage))
            limit_px = round_to_sig_figs(raw_limit_px, 5)

            order_result = self.exchange.order(
                coin,
                is_buy,
                size,
                limit_px,
                {"limit": {"tif": "Ioc"}},
                reduce_only=reduce_only
            )

            status = order_result["status"]
            if status == "ok":
                response_data = order_result["response"]["data"]
                if "statuses" in response_data and "error" in response_data["statuses"][0]:
                    logging.error(f"Order API Error: {response_data['statuses'][0]['error']}")
                else:
                    logging.info(f"✅ Trade Success: {response_data}")
            else:
                logging.error(f"Order Failed: {order_result}")

        except Exception as e:
            logging.error(f"Exception during trade execution: {e}")

    def close_all_positions(self):
        logging.info("Attempting to close all open positions...")
        try:
            # 1. Get the latest price
            price = self.get_market_price(self.coin_symbol)
            if price is None:
                logging.error("Could not fetch price to close positions. Aborting close.")
                return

            # 2. Get the current position
            current_pos = self.get_current_position(self.coin_symbol)
            if current_pos == 0:
                logging.info("No open positions to close.")
                return

            # 3. Determine Side and Size
            # If Short (-), we need to Buy (+).
            # If Long (+), we need to Sell (-).
            is_buy = current_pos < 0
            abs_size = abs(current_pos)

            # Ensure the size is rounded correctly for the API
            final_size = round_to_sz_decimals(abs_size, self.sz_decimals)

            if final_size == 0:
                logging.info("Position size effectively 0 after rounding.")
                return

            logging.info(f"Closing Position: {current_pos} {self.coin_symbol} -> Action: {'BUY' if is_buy else 'SELL'} {final_size}")

            # 4. Execute
            self.execute_trade(self.coin_symbol, is_buy, final_size, price)

        except Exception as e:
            logging.error(f"Error during close_all_positions: {e}")

    def run(self):
        logging.info(f"Starting Hedge Monitor Loop. Interval: {CHECK_INTERVAL}s")

        while True:
            try:
                # Reload config periodically
                self.manual_config = get_manual_position_config()

                # Check the global enable switch
                if not self.manual_config or not self.manual_config.get('hedge_enabled', False):
                    # If previously active, close out -- safety first.
                    if self.strategy is not None:
                        logging.info("Hedge Disabled. Closing any remaining positions.")
                        self.close_all_positions()
                        self.strategy = None

                    time.sleep(CHECK_INTERVAL)
                    continue

                # If enabled but the strategy is not initialized, init it.
                if self.strategy is None:
                    self._init_strategy()
                    if self.strategy is None:  # Init failed
                        time.sleep(CHECK_INTERVAL)
                        continue

                # 1. Get Data
                price = self.get_market_price(self.coin_symbol)
                if price is None:
                    time.sleep(5)
                    continue

                funding_rate = self.get_funding_rate(self.coin_symbol)
                current_pos_size = self.get_current_position(self.coin_symbol)

                # 2. Calculate Logic
                # Pass the raw size (e.g. -1.5). The strategy handles the logic.
                calc = self.strategy.calculate_rebalance(price, current_pos_size)

                diff_abs = abs(calc['diff'])
                trade_size = round_to_sz_decimals(diff_abs, self.sz_decimals)

                # Logging Status
                status_msg = (
                    f"Price: {price:.2f} | Fund: {funding_rate:.6f} | "
                    f"Mode: {calc['mode']} | "
                    f"Pool Delta: {calc['pool_delta']:.3f} | "
                    f"Tgt Short: {calc['target_short']:.3f} | "
                    f"Act Short: {calc['current_short']:.3f} | "
                    f"Diff: {calc['diff']:.3f}"
                )
                if calc['mode'] == "RECOVERY":
                    status_msg += f" | 🩹 REC MODE ({calc['raw_target']:.3f} -> {calc['target_short']:.3f})"

                logging.info(status_msg)

                # 3. Check Threshold
                if diff_abs >= REBALANCE_THRESHOLD:
                    if trade_size > 0:
                        logging.info(f"⚡ THRESHOLD TRIGGERED ({diff_abs:.3f} >= {REBALANCE_THRESHOLD})")
                        is_buy = (calc['action'] == "BUY")
                        self.execute_trade(self.coin_symbol, is_buy, trade_size, price)
                    else:
                        logging.info("Trade size rounds to 0. Skipping.")

                time.sleep(CHECK_INTERVAL)

            except KeyboardInterrupt:
                logging.info("Stopping Hedger...")
                self.close_all_positions()
                break
            except Exception as e:
                logging.error(f"Loop Error: {e}", exc_info=True)
                time.sleep(10)

if __name__ == "__main__":
    hedger = CLPHedger()
    hedger.run()
@ -1,562 +0,0 @@
|
||||
import os
|
||||
import time
|
||||
import logging
|
||||
import sys
|
||||
import math
|
||||
import json
|
||||
from dotenv import load_dotenv
|
||||
|
||||
# --- FIX: Add project root to sys.path to import local modules ---
|
||||
current_dir = os.path.dirname(os.path.abspath(__file__))
|
||||
project_root = os.path.dirname(current_dir)
|
||||
sys.path.append(project_root)
|
||||
|
||||
# Now we can import from root
|
||||
from logging_utils import setup_logging
|
||||
from eth_account import Account
|
||||
from hyperliquid.exchange import Exchange
|
||||
from hyperliquid.info import Info
|
||||
from hyperliquid.utils import constants
|
||||
|
||||
# Load environment variables from .env in current directory
|
||||
dotenv_path = os.path.join(current_dir, '.env')
|
||||
if os.path.exists(dotenv_path):
|
||||
load_dotenv(dotenv_path)
|
||||
else:
|
||||
# Fallback to default search
|
||||
load_dotenv()
|
||||
|
||||
setup_logging("normal", "SCALPER_HEDGER")
|
||||
|
||||
# --- CONFIGURATION ---
|
||||
COIN_SYMBOL = "ETH"
|
||||
CHECK_INTERVAL = 1 # Faster check for scalper
|
||||
LEVERAGE = 5 # 3x Leverage
|
||||
STATUS_FILE = "hedge_status.json"
|
||||
|
||||
# --- STRATEGY ZONES (Percent of Range Width) ---
|
||||
# Bottom Hedge Zone: 0% to 15% -> Active Hedging
|
||||
ZONE_BOTTOM_HEDGE_LIMIT = 0.5
|
||||
|
||||
# Close Zone: 15% to 20% -> Close All Hedges (Flatten)
|
||||
ZONE_CLOSE_START = 0.51
|
||||
ZONE_CLOSE_END = 0.52
|
||||
|
||||
# Middle Zone: 20% to 85% -> Idle (No new orders, keep existing)
|
||||
# Implied by gaps between other zones.
|
||||
|
||||
# Top Hedge Zone: 85% to 100% -> Active Hedging
|
||||
ZONE_TOP_HEDGE_START = 0.8
|
||||
|
||||
# --- ORDER SETTINGS ---
|
||||
PRICE_BUFFER_PCT = 0.0005 # 0.05% price move triggers order update
|
||||
MIN_THRESHOLD_ETH = 0.01 # Minimum trade size in ETH
|
||||
MIN_ORDER_VALUE_USD = 10.0 # Minimum order value for API safety
|
||||
|
||||
def get_active_automatic_position():
|
||||
if not os.path.exists(STATUS_FILE):
|
||||
return None
|
||||
try:
|
||||
with open(STATUS_FILE, 'r') as f:
|
||||
data = json.load(f)
|
||||
for entry in data:
|
||||
if entry.get('type') == 'AUTOMATIC' and entry.get('status') == 'OPEN':
|
||||
return entry
|
||||
except Exception as e:
|
||||
logging.error(f"ERROR reading status file: {e}")
|
||||
return None
|
||||
|
||||
def update_position_zones_in_json(token_id, zones_data):
|
||||
"""Updates the active position in JSON with calculated zone prices and formats the entry."""
|
||||
if not os.path.exists(STATUS_FILE): return
|
||||
try:
|
||||
with open(STATUS_FILE, 'r') as f:
|
||||
data = json.load(f)
|
||||
|
||||
updated = False
|
||||
for i, entry in enumerate(data):
|
||||
if entry.get('type') == 'AUTOMATIC' and entry.get('status') == 'OPEN' and entry.get('token_id') == token_id:
|
||||
|
||||
# Merge Zones
|
||||
for k, v in zones_data.items():
|
||||
entry[k] = v
|
||||
|
||||
# Format & Reorder
|
||||
open_ts = entry.get('timestamp_open', int(time.time()))
|
||||
opened_str = time.strftime('%H:%M %d/%m/%y', time.localtime(open_ts))
|
||||
|
||||
# Reconstruct Dict in Order
|
||||
new_entry = {
|
||||
"type": entry.get('type'),
|
||||
"token_id": entry.get('token_id'),
|
||||
"opened": opened_str,
|
||||
"status": entry.get('status'),
|
||||
"entry_price": round(entry.get('entry_price', 0), 2),
|
||||
"target_value": round(entry.get('target_value', 0), 2),
|
||||
# Amounts might be string or float or int. Ensure float.
|
||||
"amount0_initial": round(float(entry.get('amount0_initial', 0)), 4),
|
||||
"amount1_initial": round(float(entry.get('amount1_initial', 0)), 2),
|
||||
|
||||
"range_upper": round(entry.get('range_upper', 0), 2),
|
||||
"zone_top_start_price": entry.get('zone_top_start_price'),
|
||||
"zone_close_top_price": entry.get('zone_close_top_price'),
|
||||
"zone_close_bottom_price": entry.get('zone_close_bottom_price'),
|
||||
"zone_bottom_limit_price": entry.get('zone_bottom_limit_price'),
|
||||
"range_lower": round(entry.get('range_lower', 0), 2),
|
||||
|
||||
"static_long": entry.get('static_long', 0.0),
|
||||
"timestamp_open": open_ts,
|
||||
"timestamp_close": entry.get('timestamp_close')
|
||||
}
|
||||
|
||||
data[i] = new_entry
|
||||
updated = True
|
||||
break
|
||||
|
||||
if updated:
|
||||
with open(STATUS_FILE, 'w') as f:
|
||||
json.dump(data, f, indent=2)
|
||||
logging.info(f"Updated JSON with Formatted Zone Prices for Position {token_id}")
|
||||
except Exception as e:
|
||||
logging.error(f"Error updating JSON zones: {e}")
|
||||
|
||||
def round_to_sig_figs(x, sig_figs=5):
|
||||
if x == 0: return 0.0
|
||||
return round(x, sig_figs - int(math.floor(math.log10(abs(x)))) - 1)
|
||||
|
||||
def round_to_sz_decimals(amount, sz_decimals=4):
|
||||
return round(abs(amount), sz_decimals)
|
||||
|
||||
class HyperliquidStrategy:
|
||||
def __init__(self, entry_amount0, entry_amount1, target_value, entry_price, low_range, high_range, start_price, static_long=0.0):
|
||||
self.entry_amount0 = entry_amount0
|
||||
self.entry_amount1 = entry_amount1
|
||||
self.target_value = target_value
|
||||
self.entry_price = entry_price
|
||||
self.low_range = low_range
|
||||
self.high_range = high_range
|
||||
self.static_long = static_long
|
||||
|
||||
self.start_price = start_price
|
||||
self.gap = max(0.0, entry_price - start_price)
|
||||
self.recovery_target = entry_price + (2 * self.gap)
|
||||
|
||||
self.current_mode = "NORMAL"
|
||||
self.last_switch_time = 0
|
||||
|
||||
logging.info(f"Strategy Init. Start Px: {start_price:.2f} | Gap: {self.gap:.2f} | Recovery Tgt: {self.recovery_target:.2f}")
|
||||
|
||||
try:
|
||||
sqrt_P = math.sqrt(entry_price)
|
||||
sqrt_Pa = math.sqrt(low_range)
|
||||
sqrt_Pb = math.sqrt(high_range)
|
||||
|
||||
self.L = 0.0
|
||||
|
||||
# Method 1: Use Amount0 (WETH)
|
||||
if entry_amount0 > 0:
|
||||
# If amount is huge (Wei), scale it. If small (ETH), use as is.
|
||||
if entry_amount0 > 1000: amount0_eth = entry_amount0 / 10**18
|
||||
else: amount0_eth = entry_amount0
|
||||
|
||||
denom0 = (1/sqrt_P) - (1/sqrt_Pb)
|
||||
if denom0 > 0.00000001:
|
||||
self.L = amount0_eth / denom0
|
||||
logging.info(f"Calculated L from Amount0: {self.L:.4f}")
|
||||
|
||||
# Method 2: Use Amount1 (USDC)
|
||||
if self.L == 0.0 and entry_amount1 > 0:
|
||||
if entry_amount1 > 100000: amount1_usdc = entry_amount1 / 10**6
|
||||
else: amount1_usdc = entry_amount1
|
||||
|
||||
denom1 = sqrt_P - sqrt_Pa
|
||||
if denom1 > 0.00000001:
|
||||
self.L = amount1_usdc / denom1
|
||||
logging.info(f"Calculated L from Amount1: {self.L:.4f}")
|
||||
|
||||
# Method 3: Fallback Heuristic
|
||||
if self.L == 0.0:
|
||||
logging.warning("Amounts missing or 0. Using Target Value Heuristic.")
|
||||
max_eth_heuristic = target_value / low_range
|
||||
denom_h = (1/sqrt_Pa) - (1/sqrt_Pb)
|
||||
if denom_h > 0:
|
||||
self.L = max_eth_heuristic / denom_h
|
||||
logging.info(f"Calculated L from Target Value: {self.L:.4f}")
|
||||
else:
|
||||
logging.error("Critical: Denominator 0 in Heuristic. Invalid Range?")
|
||||
self.L = 0.0
|
||||
|
||||
except Exception as e:
|
||||
logging.error(f"Error calculating liquidity: {e}")
|
||||
sys.exit(1)
|
||||
|
||||
def get_pool_delta(self, current_price):
|
||||
if current_price >= self.high_range: return 0.0
|
||||
if current_price <= self.low_range:
|
||||
sqrt_Pa = math.sqrt(self.low_range)
|
||||
sqrt_Pb = math.sqrt(self.high_range)
|
||||
return self.L * ((1/sqrt_Pa) - (1/sqrt_Pb))
|
||||
|
||||
sqrt_P = math.sqrt(current_price)
|
||||
sqrt_Pb = math.sqrt(self.high_range)
|
||||
return self.L * ((1/sqrt_P) - (1/sqrt_Pb))
|
||||
|
||||
def calculate_rebalance(self, current_price, current_short_position_size):
|
||||
pool_delta = self.get_pool_delta(current_price)
|
||||
raw_target_short = pool_delta + self.static_long
|
||||
|
||||
target_short_size = raw_target_short
|
||||
diff = target_short_size - abs(current_short_position_size)
|
||||
|
||||
return {
|
||||
"current_price": current_price,
|
||||
"pool_delta": pool_delta,
|
||||
"target_short": target_short_size,
|
||||
"current_short": abs(current_short_position_size),
|
||||
"diff": diff,
|
||||
"action": "SELL" if diff > 0 else "BUY",
|
||||
"mode": "NORMAL"
|
||||
}
|
||||
|
||||
class ScalperHedger:
|
||||
def __init__(self):
|
||||
self.private_key = os.environ.get("SCALPER_AGENT_PK")
|
||||
self.vault_address = os.environ.get("MAIN_WALLET_ADDRESS")
|
||||
|
||||
if not self.private_key:
|
||||
logging.error("No SCALPER_AGENT_PK found in .env")
|
||||
sys.exit(1)
|
||||
|
||||
self.account = Account.from_key(self.private_key)
|
||||
self.info = Info(constants.MAINNET_API_URL, skip_ws=True)
|
||||
self.exchange = Exchange(self.account, constants.MAINNET_API_URL, account_address=self.vault_address)
|
||||
|
||||
try:
|
||||
logging.info(f"Setting leverage to {LEVERAGE}x (Cross)...")
|
||||
self.exchange.update_leverage(LEVERAGE, COIN_SYMBOL, is_cross=True)
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to update leverage: {e}")
|
||||
|
||||
self.strategy = None
|
||||
self.sz_decimals = self._get_sz_decimals(COIN_SYMBOL)
|
||||
self.active_position_id = None
|
||||
self.active_order = None
|
||||
|
||||
logging.info(f"Scalper Hedger initialized. Agent: {self.account.address}")
|
||||
|
||||
def _init_strategy(self, position_data):
|
||||
try:
|
||||
entry_amount0 = position_data.get('amount0_initial', 0)
|
||||
entry_amount1 = position_data.get('amount1_initial', 0)
|
||||
target_value = position_data.get('target_value', 50.0)
|
||||
|
||||
entry_price = position_data['entry_price']
|
||||
lower = position_data['range_lower']
|
||||
upper = position_data['range_upper']
|
||||
static_long = position_data.get('static_long', 0.0)
|
||||
|
||||
start_price = self.get_market_price(COIN_SYMBOL)
|
||||
if start_price is None:
|
||||
logging.warning("Waiting for initial price to start strategy...")
|
||||
return
|
||||
|
||||
self.strategy = HyperliquidStrategy(
|
||||
entry_amount0=entry_amount0,
|
||||
entry_amount1=entry_amount1,
|
||||
target_value=target_value,
|
||||
entry_price=entry_price,
|
||||
low_range=lower,
|
||||
high_range=upper,
|
||||
start_price=start_price,
|
||||
static_long=static_long
|
||||
)
|
||||
logging.info(f"Strategy Initialized for Position {position_data['token_id']}.")
|
||||
self.active_position_id = position_data['token_id']
|
||||
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to init strategy: {e}")
|
||||
self.strategy = None
|
||||
|
||||
def _get_sz_decimals(self, coin):
|
||||
try:
|
||||
meta = self.info.meta()
|
||||
for asset in meta["universe"]:
|
||||
if asset["name"] == coin:
|
||||
return asset["szDecimals"]
|
||||
return 4
|
||||
except: return 4
|
||||
|
||||
def get_market_price(self, coin):
|
||||
try:
|
||||
mids = self.info.all_mids()
|
||||
if coin in mids: return float(mids[coin])
|
||||
except: pass
|
||||
return None
|
||||
|
||||
def get_order_book_mid(self, coin):
|
||||
try:
|
||||
l2_snapshot = self.info.l2_snapshot(coin)
|
||||
if l2_snapshot and 'levels' in l2_snapshot:
|
||||
bids = l2_snapshot['levels'][0]
|
||||
asks = l2_snapshot['levels'][1]
|
||||
if bids and asks:
|
||||
best_bid = float(bids[0]['px'])
|
||||
best_ask = float(asks[0]['px'])
|
||||
return (best_bid + best_ask) / 2
|
||||
return self.get_market_price(coin)
|
||||
except:
|
||||
return self.get_market_price(coin)
|
||||
|
||||
def get_funding_rate(self, coin):
|
||||
try:
|
||||
meta, asset_ctxs = self.info.meta_and_asset_ctxs()
|
||||
for i, asset in enumerate(meta["universe"]):
|
||||
if asset["name"] == coin:
|
||||
return float(asset_ctxs[i]["funding"])
|
||||
return 0.0
|
||||
except: return 0.0
|
||||
|
||||
def get_current_position(self, coin):
|
||||
try:
|
||||
user_state = self.info.user_state(self.vault_address or self.account.address)
|
||||
for pos in user_state["assetPositions"]:
|
||||
if pos["position"]["coin"] == coin:
|
||||
return float(pos["position"]["szi"])
|
||||
return 0.0
|
||||
except: return 0.0
|
||||
|
||||
def get_open_orders(self):
|
||||
try:
|
||||
return self.info.open_orders(self.vault_address or self.account.address)
|
||||
except: return []
|
||||
|
||||
def cancel_order(self, coin, oid):
|
||||
logging.info(f"Cancelling order {oid}...")
|
||||
try:
|
||||
return self.exchange.cancel(coin, oid)
|
||||
except Exception as e:
|
||||
logging.error(f"Error cancelling order: {e}")
|
||||
|
||||
def place_limit_order(self, coin, is_buy, size, price):
|
||||
logging.info(f"🕒 PLACING LIMIT: {coin} {'BUY' if is_buy else 'SELL'} {size} @ {price:.2f}")
|
||||
reduce_only = is_buy
|
||||
try:
|
||||
# Gtc order (Maker)
|
||||
limit_px = round_to_sig_figs(price, 5)
|
||||
|
||||
order_result = self.exchange.order(coin, is_buy, size, limit_px, {"limit": {"tif": "Gtc"}}, reduce_only=reduce_only)
|
||||
status = order_result["status"]
|
||||
if status == "ok":
|
||||
response_data = order_result["response"]["data"]
|
||||
if "statuses" in response_data:
|
||||
status_obj = response_data["statuses"][0]
|
||||
|
||||
if "error" in status_obj:
|
||||
logging.error(f"Order API Error: {status_obj['error']}")
|
||||
return None
|
||||
|
||||
# Parse OID from nested structure
|
||||
oid = None
|
||||
if "resting" in status_obj:
|
||||
oid = status_obj["resting"]["oid"]
|
||||
elif "filled" in status_obj:
|
||||
oid = status_obj["filled"]["oid"]
|
||||
logging.info("Order filled immediately.")
|
||||
|
||||
if oid:
|
||||
logging.info(f"✅ Limit Order Placed: OID {oid}")
|
||||
return oid
|
||||
else:
|
||||
logging.warning(f"Order placed but OID not found in: {status_obj}")
|
||||
return None
|
||||
else:
|
||||
logging.error(f"Order Failed: {order_result}")
|
||||
return None
|
||||
except Exception as e:
|
||||
logging.error(f"Exception during trade: {e}")
|
||||
return None
|
||||
|
||||
def manage_orders(self):
|
||||
"""
|
||||
Checks open orders.
|
||||
Returns: True if an order exists and is valid (don't trade), False if no order (can trade).
|
||||
"""
|
||||
open_orders = self.get_open_orders()
|
||||
my_orders = [o for o in open_orders if o['coin'] == COIN_SYMBOL]
|
||||
|
||||
if not my_orders:
|
||||
self.active_order = None
|
||||
return False
|
||||
|
||||
if len(my_orders) > 1:
|
||||
logging.warning("Multiple open orders found. Cancelling all for safety.")
|
||||
for o in my_orders:
|
||||
self.cancel_order(COIN_SYMBOL, o['oid'])
|
||||
self.active_order = None
|
||||
return False
|
||||
|
||||
order = my_orders[0]
|
||||
oid = order['oid']
|
||||
order_price = float(order['limitPx'])
|
||||
|
||||
current_mid = self.get_order_book_mid(COIN_SYMBOL)
|
||||
pct_diff = abs(current_mid - order_price) / order_price
|
||||
|
||||
if pct_diff > PRICE_BUFFER_PCT:
|
||||
logging.info(f"Price moved {pct_diff*100:.3f}% > {PRICE_BUFFER_PCT*100}%. Cancelling/Replacing order {oid}.")
|
||||
self.cancel_order(COIN_SYMBOL, oid)
|
||||
self.active_order = None
|
||||
return False
|
||||
else:
|
||||
logging.info(f"Pending Order {oid} @ {order_price:.2f} is within range ({pct_diff*100:.3f}%). Waiting.")
|
||||
return True
|
||||
|
||||
def close_all_positions(self):
|
||||
logging.info("Closing all positions (Market Order)...")
|
||||
try:
|
||||
# Cancel open orders first
|
||||
open_orders = self.get_open_orders()
|
||||
for o in open_orders:
|
||||
if o['coin'] == COIN_SYMBOL:
|
||||
self.cancel_order(COIN_SYMBOL, o['oid'])
|
||||
|
||||
price = self.get_market_price(COIN_SYMBOL)
|
||||
current_pos = self.get_current_position(COIN_SYMBOL)
|
||||
if current_pos == 0: return
|
||||
|
||||
is_buy = current_pos < 0
|
||||
final_size = round_to_sz_decimals(abs(current_pos), self.sz_decimals)
|
||||
if final_size == 0: return
|
||||
|
||||
# Market order for closing
|
||||
self.exchange.order(COIN_SYMBOL, is_buy, final_size, round_to_sig_figs(price * (1.05 if is_buy else 0.95), 5), {"limit": {"tif": "Ioc"}}, reduce_only=True)
|
||||
            self.active_position_id = None
        except Exception as e:
            logging.error(f"Error closing: {e}")

    def run(self):
        logging.info(f"Starting Scalper Monitor Loop. Interval: {CHECK_INTERVAL}s")

        while True:
            try:
                active_pos = get_active_automatic_position()

                # Check the global enable switch
                if not active_pos or not active_pos.get('hedge_enabled', True):
                    if self.strategy is not None:
                        logging.info("Hedge disabled or position closed. Closing remaining positions.")
                        self.close_all_positions()
                        self.strategy = None
                    time.sleep(CHECK_INTERVAL)
                    continue

                if self.strategy is None or self.active_position_id != active_pos['token_id']:
                    logging.info(f"New position {active_pos['token_id']} detected or strategy not initialized. Initializing strategy.")
                    self._init_strategy(active_pos)
                    if self.strategy is None:
                        time.sleep(CHECK_INTERVAL)
                        continue

                # Defensive re-check (normally unreachable after the init block above)
                if self.strategy is None:
                    continue

                # 1. Order management
                if self.manage_orders():
                    time.sleep(CHECK_INTERVAL)
                    continue

                # 2. Market data
                price = self.get_order_book_mid(COIN_SYMBOL)
                if price is None:
                    time.sleep(5)
                    continue

                funding_rate = self.get_funding_rate(COIN_SYMBOL)  # not used in the rebalance logic below
                current_pos_size = self.get_current_position(COIN_SYMBOL)

                # 3. Rebalance calculation
                calc = self.strategy.calculate_rebalance(price, current_pos_size)
                diff_abs = abs(calc['diff'])

                # 4. Dynamic threshold: 5% of the maximum ETH exposure the CLP range
                # can reach, floored by MIN_THRESHOLD_ETH from config
                sqrt_Pa = math.sqrt(self.strategy.low_range)
                sqrt_Pb = math.sqrt(self.strategy.high_range)
                max_potential_eth = self.strategy.L * ((1 / sqrt_Pa) - (1 / sqrt_Pb))
                rebalance_threshold = max(MIN_THRESHOLD_ETH, max_potential_eth * 0.05)

                # 5. Determine the hedge zone
                clp_low_range = self.strategy.low_range
                clp_high_range = self.strategy.high_range
                range_width = clp_high_range - clp_low_range

                # Absolute prices for the zone boundaries
                zone_bottom_limit_price = clp_low_range + (range_width * ZONE_BOTTOM_HEDGE_LIMIT)
                zone_close_bottom_price = clp_low_range + (range_width * ZONE_CLOSE_START)
                zone_close_top_price = clp_low_range + (range_width * ZONE_CLOSE_END)
                zone_top_start_price = clp_low_range + (range_width * ZONE_TOP_HEDGE_START)

                # Persist the zone prices to the status JSON if missing
                if 'zone_bottom_limit_price' not in active_pos:
                    update_position_zones_in_json(active_pos['token_id'], {
                        'zone_top_start_price': round(zone_top_start_price, 2),
                        'zone_close_top_price': round(zone_close_top_price, 2),
                        'zone_close_bottom_price': round(zone_close_bottom_price, 2),
                        'zone_bottom_limit_price': round(zone_bottom_limit_price, 2)
                    })

                # Zone membership
                in_close_zone = zone_close_bottom_price <= price <= zone_close_top_price
                in_hedge_zone = (price <= zone_bottom_limit_price) or (price >= zone_top_start_price)

                # --- Execute logic ---
                if in_close_zone:
                    logging.info(f"ZONE: CLOSE ({price:.2f} in {zone_close_bottom_price:.2f}-{zone_close_top_price:.2f}). Closing all hedge positions.")
                    self.close_all_positions()
                    time.sleep(CHECK_INTERVAL)
                    continue

                elif in_hedge_zone:
                    # Hedge normally
                    if diff_abs > rebalance_threshold:
                        trade_size = round_to_sz_decimals(diff_abs, self.sz_decimals)

                        # --- SOFT START (bottom zone only) ---
                        # Opening a NEW short (SELL) from a flat position in the
                        # bottom zone: cut the initial size by 50%.
                        if (price <= zone_bottom_limit_price) and (current_pos_size == 0) and (calc['action'] == "SELL"):
                            logging.info("🔰 SOFT START: Reducing initial hedge size by 50% in Bottom Zone.")
                            trade_size = round_to_sz_decimals(trade_size * 0.5, self.sz_decimals)

                        min_trade_size = MIN_ORDER_VALUE_USD / price

                        if trade_size < min_trade_size:
                            logging.info(f"Idle. Trade size {trade_size} < Min Order Size {min_trade_size:.4f} (${MIN_ORDER_VALUE_USD:.2f})")
                        elif trade_size > 0:
                            logging.info(f"⚡ THRESHOLD TRIGGERED ({diff_abs:.4f} >= {rebalance_threshold:.4f}). In Hedge Zone.")
                            is_buy = (calc['action'] == "BUY")
                            self.place_limit_order(COIN_SYMBOL, is_buy, trade_size, price)
                        else:
                            logging.info("Trade size rounds to 0. Skipping.")
                    else:
                        logging.info(f"Idle. Diff {diff_abs:.4f} < Threshold {rebalance_threshold:.4f}. In Hedge Zone.")

                else:
                    # Middle zone: idle
                    pct_position = (price - clp_low_range) / range_width
                    logging.info(f"Idle. In Middle Zone ({pct_position*100:.1f}%). No Actions.")

                time.sleep(CHECK_INTERVAL)

            except KeyboardInterrupt:
                logging.info("Stopping Hedger...")
                self.close_all_positions()
                break
            except Exception as e:
                logging.error(f"Loop Error: {e}", exc_info=True)
                time.sleep(10)

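# --- Worked example (sketch) ---
# The zone boundaries above are plain linear interpolations across the CLP
# range: boundary = low + fraction * (high - low). The fractions 0.10 / 0.18 /
# 0.20 / 0.80 reproduce the zone prices stored for position 5156339 in
# hedge_status.json, so they are used here as assumed config values; the
# function name is illustrative, not part of the bot.
def zone_prices_example(low, high, bottom_limit=0.10, close_start=0.18, close_end=0.20, top_start=0.80):
    width = high - low
    return {
        'zone_bottom_limit_price': low + width * bottom_limit,
        'zone_close_start_price': low + width * close_start,
        'zone_close_end_price': low + width * close_end,
        'zone_top_start_price': low + width * top_start,
    }

# zone_prices_example(3096.4164892771637, 3127.5344286932063) gives approximately
# 3099.53 / 3102.02 / 3102.64 / 3121.31, matching the recorded entry.
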
if __name__ == "__main__":
    hedger = ScalperHedger()
    hedger.run()
@ -1,396 +0,0 @@
[
  {
    "type": "AUTOMATIC",
    "token_id": 5154921,
    "status": "CLOSED",
    "entry_price": 3088.180203068298,
    "range_lower": 3071.745207606606,
    "range_upper": 3102.615208978462,
    "target_value": 99.31729381997206,
    "amount0_initial": 0,
    "amount1_initial": 0,
    "static_long": 0.0,
    "timestamp_open": 1765575924,
    "timestamp_close": 1765613747
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155502,
    "status": "CLOSED",
    "entry_price": 3105.4778071503983,
    "range_lower": 3090.230154007496,
    "range_upper": 3118.1663529424395,
    "target_value": 81.22159710646565,
    "amount0_initial": 0,
    "amount1_initial": 0,
    "static_long": 0.0,
    "timestamp_open": 1765613789,
    "timestamp_close": 1765614083
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155511,
    "status": "CLOSED",
    "entry_price": 3122.1562247614547,
    "range_lower": 3105.7192207366634,
    "range_upper": 3136.930649460415,
    "target_value": 98.20653967768193,
    "amount0_initial": 0,
    "amount1_initial": 0,
    "static_long": 0.0,
    "timestamp_open": 1765614124,
    "timestamp_close": 1765617105
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155580,
    "status": "CLOSED",
    "entry_price": 3120.03330314008,
    "range_lower": 3111.93656358668,
    "range_upper": 3124.4086137206154,
    "target_value": 258.2420686245357,
    "amount0_initial": 0,
    "amount1_initial": 0,
    "static_long": 0.0,
    "timestamp_open": 1765617197,
    "timestamp_close": 1765617236
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155610,
    "status": "CLOSED",
    "entry_price": 3118.03462860249,
    "range_lower": 3056.425578524254,
    "range_upper": 3177.9749053788623,
    "target_value": 348.982123656927,
    "amount0_initial": 54654586929109032,
    "amount1_initial": 178567229,
    "static_long": 0.0,
    "timestamp_open": 1765619246,
    "timestamp_close": null
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155618,
    "status": "CLOSED",
    "entry_price": 3120.854321555066,
    "range_lower": 3111.93656358668,
    "range_upper": 3127.5344286932063,
    "target_value": 342.45943993806645,
    "amount0_initial": 46935127322790001,
    "amount1_initial": 195981745,
    "static_long": 0.0,
    "timestamp_open": 1765619616,
    "timestamp_close": 1765621159
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155660,
    "status": "CLOSED",
    "entry_price": 3129.521502331058,
    "range_lower": 3121.285922844486,
    "range_upper": 3136.930649460415,
    "target_value": 345.19101843135434,
    "amount0_initial": 52148054681776174,
    "amount1_initial": 181992560,
    "static_long": 0.0,
    "timestamp_open": 1765621204,
    "timestamp_close": 1765625900
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155742,
    "status": "CLOSED",
    "entry_price": 3120.452464830275,
    "range_lower": 3111.93656358668,
    "range_upper": 3127.5344286932063,
    "target_value": 330.2607520468071,
    "amount0_initial": 45273020063291068,
    "amount1_initial": 188988445,
    "static_long": 0.0,
    "timestamp_open": 1765625947,
    "timestamp_close": 1765629916
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155807,
    "status": "CLOSED",
    "entry_price": 3111.8306135157013,
    "range_lower": 3102.615208978462,
    "range_upper": 3118.1663529424395,
    "target_value": 342.2298529154781,
    "amount0_initial": 44749390699692539,
    "amount1_initial": 202977329,
    "static_long": 0.0,
    "timestamp_open": 1765629968,
    "timestamp_close": null
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155828,
    "status": "CLOSED",
    "entry_price": 3116.7126648332624,
    "range_lower": 3099.514299525495,
    "range_upper": 3130.663370887762,
    "target_value": 347.83537144876755,
    "amount0_initial": 49847371623870561,
    "amount1_initial": 192475437,
    "static_long": 0.0,
    "timestamp_open": 1765630905,
    "timestamp_close": 1765632623
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155863,
    "status": "CLOSED",
    "entry_price": 3097.40295247475,
    "range_lower": 3080.973817800786,
    "range_upper": 3111.93656358668,
    "target_value": 308.3116676933205,
    "amount0_initial": 39654626336294149,
    "amount1_initial": 185485311,
    "static_long": 0.0,
    "timestamp_open": 1765632672,
    "timestamp_close": 1765634422
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5155882,
    "status": "CLOSED",
    "entry_price": 3112.8609359236384,
    "range_lower": 3096.4164892771637,
    "range_upper": 3127.5344286932063,
    "target_value": 343.5299941433273,
    "amount0_initial": 51896697111974758,
    "amount1_initial": 181982793,
    "static_long": 0.0,
    "timestamp_open": 1765634468,
    "timestamp_close": 1765661569
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5156323,
    "status": "CLOSED",
    "entry_price": 3083.0072388847652,
    "range_lower": 3065.6081631285606,
    "range_upper": 3096.4164892771637,
    "target_value": 312.46495296583043,
    "amount0_initial": 37786473705449745,
    "amount1_initial": 195968981,
    "static_long": 0.0,
    "timestamp_open": 1765661623,
    "timestamp_close": 1765661755
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5156327,
    "status": "CLOSED",
    "entry_price": 3099.025060823837,
    "range_lower": 3080.973817800786,
    "range_upper": 3111.93656358668,
    "target_value": 341.5043895497362,
    "amount0_initial": 44705050404757454,
    "amount1_initial": 202962318,
    "static_long": 0.0,
    "timestamp_open": 1765661800,
    "timestamp_close": 1765663051
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5156339,
    "status": "CLOSED",
    "entry_price": 3114.5494347315303,
    "range_lower": 3096.4164892771637,
    "range_upper": 3127.5344286932063,
    "target_value": 313.18766451496026,
    "amount0_initial": 47209859594870944,
    "amount1_initial": 166150223,
    "static_long": 0.0,
    "timestamp_open": 1765663096,
    "timestamp_close": 1765675725,
    "zone_bottom_limit_price": 3099.528283218768,
    "zone_close_start_price": 3102.017718372051,
    "zone_close_end_price": 3102.640077160372,
    "zone_top_start_price": 3121.310840809998
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5156507,
    "status": "CLOSED",
    "entry_price": 3128.29006521609,
    "range_lower": 3111.93656358668,
    "range_upper": 3143.2104745051906,
    "target_value": 347.15268590066694,
    "amount0_initial": 52797230582023401,
    "amount1_initial": 181987634,
    "static_long": 0.0,
    "timestamp_open": 1765675770,
    "timestamp_close": 1765687389,
    "zone_bottom_limit_price": 3115.0639546785314,
    "zone_close_start_price": 3117.565867552012,
    "zone_close_end_price": 3118.191345770382,
    "zone_top_start_price": 3136.9556923214886
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5156576,
    "status": "CLOSED",
    "entry_price": 3109.1484174484244,
    "range_lower": 3093.3217751359653,
    "range_upper": 3124.4086137206154,
    "target_value": 349.75269804513647,
    "amount0_initial": 55081765825023475,
    "amount1_initial": 178495313,
    "static_long": 0.0,
    "timestamp_open": 1765687433,
    "timestamp_close": 1765712073,
    "zone_bottom_limit_price": 3096.4304589944304,
    "zone_close_start_price": 3098.9174060812024,
    "zone_close_end_price": 3099.539142852895,
    "zone_top_start_price": 3118.1912460036856
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5156880,
    "status": "CLOSED",
    "entry_price": 3092.1804685415204,
    "range_lower": 3074.8183354682296,
    "range_upper": 3105.7192207366634,
    "target_value": 348.0802699013006,
    "amount0_initial": 49191436738181486,
    "amount1_initial": 195971470,
    "static_long": 0.0,
    "timestamp_open": 1765712124,
    "timestamp_close": 1765712700,
    "zone_bottom_limit_price": 3077.908423995073,
    "zone_close_start_price": 3080.3804948165475,
    "zone_close_end_price": 3080.9985125219164,
    "zone_top_start_price": 3099.5390436829766
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5156912,
    "status": "CLOSED",
    "entry_price": 3080.3709911881006,
    "range_lower": 3062.5442403757074,
    "range_upper": 3093.3217751359653,
    "target_value": 291.15223765283383,
    "amount0_initial": 47732710466839755,
    "amount1_initial": 144117781,
    "static_long": 0.0,
    "timestamp_open": 1765712910,
    "timestamp_close": 1765714350,
    "zone_bottom_limit_price": 3065.6219938517334,
    "zone_close_start_price": 3068.084196632554,
    "zone_close_end_price": 3068.699747327759,
    "zone_top_start_price": 3087.166268183914
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5156972,
    "status": "CLOSED",
    "entry_price": 3090.0637108037877,
    "range_lower": 3074.8183354682296,
    "range_upper": 3102.615208978462,
    "target_value": 271.3892587233541,
    "amount0_initial": 51605992189032833,
    "amount1_initial": 111923455,
    "static_long": 0.0,
    "timestamp_open": 1765714399,
    "timestamp_close": 1765715701,
    "zone_bottom_limit_price": 3077.598022819253,
    "zone_close_start_price": 3079.8217727000715,
    "zone_close_end_price": 3080.3777101702763,
    "zone_top_start_price": 3097.055834276415
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5157018,
    "status": "CLOSED",
    "entry_price": 3101.5146208910464,
    "range_lower": 3084.056178426586,
    "range_upper": 3115.0499008952183,
    "target_value": 334.88770454868376,
    "amount0_initial": 49662753969037209,
    "amount1_initial": 180857947,
    "static_long": 0.0,
    "timestamp_open": 1765715747,
    "timestamp_close": 1765722919,
    "zone_bottom_limit_price": 3087.1555506734494,
    "zone_close_start_price": 3089.6350484709396,
    "zone_close_end_price": 3090.2549229203123,
    "zone_top_start_price": 3108.851156401492
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5157176,
    "status": "CLOSED",
    "entry_price": 3079.8157532039463,
    "range_lower": 3062.5442403757074,
    "range_upper": 3093.3217751359653,
    "target_value": 272.62430135026136,
    "amount0_initial": 24888578243851017,
    "amount1_initial": 195972066,
    "static_long": 0.0,
    "timestamp_open": 1765722970,
    "timestamp_close": 1765729241,
    "zone_bottom_limit_price": 3065.6219938517334,
    "zone_close_start_price": 3068.084196632554,
    "zone_close_end_price": 3068.699747327759,
    "zone_top_start_price": 3087.166268183914
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5157312,
    "status": "CLOSED",
    "entry_price": 3093.971464080226,
    "range_lower": 3077.8945378409912,
    "range_upper": 3108.8263379038003,
    "target_value": 326.92184420403566,
    "amount0_initial": 46843176767023226,
    "amount1_initial": 181990392,
    "static_long": 0.0,
    "timestamp_open": 1765729286,
    "timestamp_close": 1765733514,
    "zone_bottom_limit_price": 3080.987717847272,
    "zone_close_start_price": 3083.4622618522967,
    "zone_close_end_price": 3084.080897853553,
    "zone_top_start_price": 3102.6399778912387
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5157395,
    "status": "CLOSED",
    "entry_price": 3079.3931567773757,
    "range_lower": 3062.5442403757074,
    "range_upper": 3093.3217751359653,
    "target_value": 344.4599070677894,
    "amount0_initial": 50492037278704046,
    "amount1_initial": 188975073,
    "static_long": 0.0,
    "timestamp_open": 1765733564,
    "timestamp_close": 1765736225,
    "zone_bottom_limit_price": 3065.6219938517334,
    "zone_close_start_price": 3068.084196632554,
    "zone_close_end_price": 3068.699747327759,
    "zone_top_start_price": 3087.166268183914
  },
  {
    "type": "AUTOMATIC",
    "token_id": 5157445,
    "status": "CLOSED",
    "entry_price": 3095.4053081664565,
    "range_lower": 3077.8945378409912,
    "range_upper": 3108.8263379038003,
    "target_value": 332.600152414756,
    "amount0_initial": 44140371554667029,
    "amount1_initial": 195967812,
    "static_long": 0.0,
    "timestamp_open": 1765736272,
    "timestamp_close": 1765743062,
    "zone_bottom_limit_price": 3080.987717847272,
    "zone_close_start_price": 3083.4622618522967,
    "zone_close_end_price": 3084.080897853553,
    "zone_top_start_price": 3102.6399778912387
  }
]
@ -1,789 +0,0 @@
import os
import time
import json
import re
import math
from web3 import Web3
from eth_account import Account
from dotenv import load_dotenv

# --- Helper Functions ---
def clean_address(addr):
    return re.sub(r'[^0-9a-fA-FxX]', '', addr)

def price_from_sqrt_price_x96(sqrt_price_x96, token0_decimals, token1_decimals):
    # Convert a Uniswap V3 Q64.96 sqrt price to a human-readable price,
    # quoted as Token1 per Token0 and adjusted for token decimals.
    price = (sqrt_price_x96 / (2**96))**2
    price = price * (10**(token0_decimals - token1_decimals))
    return price

def price_from_tick(tick, token0_decimals, token1_decimals):
    # Convert a tick index to a price (Token1 per Token0), decimal-adjusted.
    price = 1.0001**tick
    price = price * (10**(token0_decimals - token1_decimals))
    return price

def from_wei(amount, decimals):
    return amount / (10**decimals)

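# --- Worked example (sketch; illustrative numbers, not real pool data) ---
# For the Arbitrum WETH/USDC pool (decimals 18 and 6), a tick near -196000
# corresponds to roughly 3000 USDC per WETH, and converting
# tick -> sqrtPriceX96 -> price should round-trip to the tick-based price.
def _demo_price_helpers():
    tick = -196000
    sqrt_price_x96 = int((1.0001 ** (tick / 2)) * (2 ** 96))
    p_from_tick = price_from_tick(tick, 18, 6)
    p_from_sqrt = price_from_sqrt_price_x96(sqrt_price_x96, 18, 6)
    assert abs(p_from_tick - p_from_sqrt) / p_from_tick < 1e-6
    return p_from_tick
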
# --- V3 Math Helpers ---
Q96 = 1 << 96  # 2^96, the Uniswap V3 fixed-point scaling factor

def get_sqrt_ratio_at_tick(tick):
    # Returns sqrt(price) as a Q96 number
    return int((1.0001 ** (tick / 2)) * (2 ** 96))

def get_liquidity_for_amount0(sqrt_ratio_a, sqrt_ratio_b, amount0):
    # Not used directly by calculate_mint_amounts below, but a common V3 helper.
    # Liquidity for a single-sided token0 deposit (current price below the range).
    if sqrt_ratio_a > sqrt_ratio_b:
        sqrt_ratio_a, sqrt_ratio_b = sqrt_ratio_b, sqrt_ratio_a
    return int(amount0 * (sqrt_ratio_a * sqrt_ratio_b // Q96) // (sqrt_ratio_b - sqrt_ratio_a))

def get_liquidity_for_amount1(sqrt_ratio_a, sqrt_ratio_b, amount1):
    # Not used directly by calculate_mint_amounts below, but a common V3 helper.
    # Liquidity for a single-sided token1 deposit (current price above the range).
    if sqrt_ratio_a > sqrt_ratio_b:
        sqrt_ratio_a, sqrt_ratio_b = sqrt_ratio_b, sqrt_ratio_a
    return int(amount1 * Q96 // (sqrt_ratio_b - sqrt_ratio_a))

def get_amounts_for_liquidity(sqrt_ratio_current, sqrt_ratio_a, sqrt_ratio_b, liquidity):
    # Calculates the amounts of token0 and token1 backing a given liquidity
    # over a price range, depending on where the current price sits.
    if sqrt_ratio_a > sqrt_ratio_b:
        sqrt_ratio_a, sqrt_ratio_b = sqrt_ratio_b, sqrt_ratio_a

    amount0 = 0
    amount1 = 0

    # Current price below the lower tick boundary: all token0
    if sqrt_ratio_current <= sqrt_ratio_a:
        amount0 = ((liquidity * Q96) // sqrt_ratio_a) - ((liquidity * Q96) // sqrt_ratio_b)
        amount1 = 0
    # Current price within the range: a mix of both tokens
    elif sqrt_ratio_current < sqrt_ratio_b:
        amount0 = ((liquidity * Q96) // sqrt_ratio_current) - ((liquidity * Q96) // sqrt_ratio_b)
        amount1 = (liquidity * (sqrt_ratio_current - sqrt_ratio_a)) // Q96
    # Current price above the upper tick boundary: all token1
    else:
        amount1 = (liquidity * (sqrt_ratio_b - sqrt_ratio_a)) // Q96
        amount0 = 0

    return amount0, amount1

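# --- Worked example (sketch; illustrative numbers) ---
# Round trip for an in-range position near tick -196000 with a roughly +/-1%
# band (100 ticks per side). The liquidity value is made up; the point is that
# both amounts come back non-zero when the price sits inside [lower, upper].
def _demo_v3_math():
    sqrt_lower = get_sqrt_ratio_at_tick(-196100)
    sqrt_current = get_sqrt_ratio_at_tick(-196000)
    sqrt_upper = get_sqrt_ratio_at_tick(-195900)
    liquidity = 10**18  # hypothetical
    amount0, amount1 = get_amounts_for_liquidity(sqrt_current, sqrt_lower, sqrt_upper, liquidity)
    assert amount0 > 0 and amount1 > 0
    return amount0, amount1
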
# --- Configuration ---
# RPC URL and Private Key are loaded from .env
RPC_URL = os.environ.get("MAINNET_RPC_URL")
PRIVATE_KEY = os.environ.get("MAIN_WALLET_PRIVATE_KEY") or os.environ.get("PRIVATE_KEY")

# Script behavior flags
MONITOR_INTERVAL_SECONDS = 451
COLLECT_FEES_ENABLED = False  # If True, will attempt to collect fees once and exit if no open auto position
CLOSE_POSITION_ENABLED = True  # If True, will attempt to close auto position when out of range
CLOSE_IF_OUT_OF_RANGE_ONLY = True  # If True, closes only if out of range; if False, closes immediately
OPEN_POSITION_ENABLED = True  # If True, will open a new position if no auto position exists
REBALANCE_ON_CLOSE_BELOW_RANGE = False  # If True, will sell 50% of WETH to USDC when closing below range

# New Position Parameters
TARGET_INVESTMENT_VALUE_TOKEN1 = 2000.0  # Target total investment value in Token1 terms (e.g. 2000 USDC)
RANGE_WIDTH_PCT = 0.01  # +/- 1% range for new positions

# JSON file for tracking position state
STATUS_FILE = "hedge_status.json"

# --- JSON State Helpers ---
def get_active_automatic_position():
    """Reads hedge_status.json and returns the first OPEN AUTOMATIC position dict, or None."""
    if not os.path.exists(STATUS_FILE):
        return None
    try:
        with open(STATUS_FILE, 'r') as f:
            data = json.load(f)
        for entry in data:
            if entry.get('type') == 'AUTOMATIC' and entry.get('status') == 'OPEN':
                return entry
    except Exception as e:
        print(f"ERROR reading status file: {e}")
    return None

def get_all_open_positions():
    """Reads hedge_status.json and returns a list of all OPEN positions (Manual and Automatic)."""
    if not os.path.exists(STATUS_FILE):
        return []
    try:
        with open(STATUS_FILE, 'r') as f:
            data = json.load(f)
        return [entry for entry in data if entry.get('status') == 'OPEN']
    except Exception as e:
        print(f"ERROR reading status file: {e}")
        return []

def update_hedge_status_file(action, position_data):
    """
    Updates the hedge_status.json file.
    action: "OPEN" or "CLOSE"
    position_data: Dict containing details (token_id, entry_price, range, etc.)
    """
    current_data = []
    if os.path.exists(STATUS_FILE):
        try:
            with open(STATUS_FILE, 'r') as f:
                current_data = json.load(f)
        except Exception:
            current_data = []

    if action == "OPEN":
        # Format timestamp
        open_ts = int(time.time())
        opened_str = time.strftime('%H:%M %d/%m/%y', time.localtime(open_ts))

        # Scale raw on-chain amounts to human units
        raw_amt0 = position_data.get('amount0_initial', 0)
        raw_amt1 = position_data.get('amount1_initial', 0)

        # Handle the case where they are already scaled (unlikely here, but safe)
        if raw_amt0 > 1000:
            fmt_amt0 = round(raw_amt0 / 10**18, 4)
        else:
            fmt_amt0 = round(raw_amt0, 4)

        if raw_amt1 > 1000:
            fmt_amt1 = round(raw_amt1 / 10**6, 2)
        else:
            fmt_amt1 = round(raw_amt1, 2)

        new_entry = {
            "type": "AUTOMATIC",
            "token_id": position_data['token_id'],
            "opened": opened_str,
            "status": "OPEN",
            "entry_price": round(position_data['entry_price'], 2),
            "target_value": round(position_data.get('target_value', 0.0), 2),
            "amount0_initial": fmt_amt0,
            "amount1_initial": fmt_amt1,

            "range_upper": round(position_data['range_upper'], 2),
            # Zone prices (None when not yet computed by the hedger)
            "zone_top_start_price": round(position_data['zone_top_start_price'], 2) if 'zone_top_start_price' in position_data else None,
            "zone_close_top_price": round(position_data['zone_close_end_price'], 2) if 'zone_close_end_price' in position_data else None,
            "zone_close_bottom_price": round(position_data['zone_close_start_price'], 2) if 'zone_close_start_price' in position_data else None,
            "zone_bottom_limit_price": round(position_data['zone_bottom_limit_price'], 2) if 'zone_bottom_limit_price' in position_data else None,
            "range_lower": round(position_data['range_lower'], 2),

            "static_long": 0.0,
            "timestamp_open": open_ts,
            "timestamp_close": None
        }
        # None keys are kept deliberately so every record shares the same structure.

        current_data.append(new_entry)
        print(f"Recorded new AUTOMATIC position {position_data['token_id']} in {STATUS_FILE}")

    elif action == "CLOSE":
        found = False
        for entry in current_data:
            if (
                entry.get('type') == "AUTOMATIC" and
                entry.get('status') == "OPEN" and
                entry.get('token_id') == position_data['token_id']
            ):
                entry['status'] = "CLOSED"
                entry['timestamp_close'] = int(time.time())
                found = True
                print(f"Marked position {entry['token_id']} as CLOSED in {STATUS_FILE}")
                break
        if not found:
            print(f"WARNING: Could not find open AUTOMATIC position {position_data['token_id']} to close.")

    with open(STATUS_FILE, 'w') as f:
        json.dump(current_data, f, indent=2)

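# --- Usage sketch (hypothetical values, not recorded data) ---
# A position's lifecycle writes one record: OPEN appends an entry, CLOSE flips
# its status and stamps timestamp_close.
#
#   update_hedge_status_file("OPEN", {
#       'token_id': 123, 'entry_price': 3100.0,
#       'range_lower': 3070.0, 'range_upper': 3130.0,
#       'target_value': 350.0, 'amount0_initial': 0, 'amount1_initial': 0,
#   })
#   ...
#   update_hedge_status_file("CLOSE", {'token_id': 123})
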
# --- ABIs ---
# Simplified for length; in practice these are loaded from full ABI JSON files.
NONFUNGIBLE_POSITION_MANAGER_ABI = json.loads('''
[
{"anonymous": false, "inputs": [{"indexed": true, "internalType": "uint256", "name": "tokenId", "type": "uint256"}, {"indexed": false, "internalType": "uint128", "name": "liquidity", "type": "uint128"}, {"indexed": false, "internalType": "uint256", "name": "amount0", "type": "uint256"}, {"indexed": false, "internalType": "uint256", "name": "amount1", "type": "uint256"}], "name": "IncreaseLiquidity", "type": "event"},
{"anonymous": false, "inputs": [{"indexed": true, "internalType": "address", "name": "from", "type": "address"}, {"indexed": true, "internalType": "address", "name": "to", "type": "address"}, {"indexed": true, "internalType": "uint256", "name": "tokenId", "type": "uint256"}], "name": "Transfer", "type": "event"},
{"inputs": [], "name": "factory", "outputs": [{"internalType": "address", "name": "", "type": "address"}], "stateMutability": "view", "type": "function"},
{"inputs": [{"internalType": "uint256", "name": "tokenId", "type": "uint256"}], "name": "positions", "outputs": [{"internalType": "uint96", "name": "nonce", "type": "uint96"}, {"internalType": "address", "name": "operator", "type": "address"}, {"internalType": "address", "name": "token0", "type": "address"}, {"internalType": "address", "name": "token1", "type": "address"}, {"internalType": "uint24", "name": "fee", "type": "uint24"}, {"internalType": "int24", "name": "tickLower", "type": "int24"}, {"internalType": "int24", "name": "tickUpper", "type": "int24"}, {"internalType": "uint128", "name": "liquidity", "type": "uint128"}, {"internalType": "uint256", "name": "feeGrowthInside0LastX128", "type": "uint256"}, {"internalType": "uint256", "name": "feeGrowthInside1LastX128", "type": "uint256"}, {"internalType": "uint128", "name": "tokensOwed0", "type": "uint128"}, {"internalType": "uint128", "name": "tokensOwed1", "type": "uint128"}], "stateMutability": "view", "type": "function"},
{"inputs": [{"components": [{"internalType": "uint256", "name": "tokenId", "type": "uint256"}, {"internalType": "address", "name": "recipient", "type": "address"}, {"internalType": "uint128", "name": "amount0Max", "type": "uint128"}, {"internalType": "uint128", "name": "amount1Max", "type": "uint128"}], "internalType": "struct INonfungiblePositionManager.CollectParams", "name": "params", "type": "tuple"}], "name": "collect", "outputs": [{"internalType": "uint256", "name": "amount0", "type": "uint256"}, {"internalType": "uint256", "name": "amount1", "type": "uint256"}], "stateMutability": "payable", "type": "function"},
{"inputs": [{"components": [{"internalType": "uint256", "name": "tokenId", "type": "uint256"}, {"internalType": "uint128", "name": "liquidity", "type": "uint128"}, {"internalType": "uint256", "name": "amount0Min", "type": "uint256"}, {"internalType": "uint256", "name": "amount1Min", "type": "uint256"}, {"internalType": "uint256", "name": "deadline", "type": "uint256"}], "internalType": "struct INonfungiblePositionManager.DecreaseLiquidityParams", "name": "params", "type": "tuple"}], "name": "decreaseLiquidity", "outputs": [{"internalType": "uint256", "name": "amount0", "type": "uint256"}, {"internalType": "uint256", "name": "amount1", "type": "uint256"}], "stateMutability": "payable", "type": "function"},
{"inputs": [{"components": [{"internalType": "address", "name": "token0", "type": "address"}, {"internalType": "address", "name": "token1", "type": "address"}, {"internalType": "uint24", "name": "fee", "type": "uint24"}, {"internalType": "int24", "name": "tickLower", "type": "int24"}, {"internalType": "int24", "name": "tickUpper", "type": "int24"}, {"internalType": "uint256", "name": "amount0Desired", "type": "uint256"}, {"internalType": "uint256", "name": "amount1Desired", "type": "uint256"}, {"internalType": "uint256", "name": "amount0Min", "type": "uint256"}, {"internalType": "uint256", "name": "amount1Min", "type": "uint256"}, {"internalType": "address", "name": "recipient", "type": "address"}, {"internalType": "uint256", "name": "deadline", "type": "uint256"}], "internalType": "struct INonfungiblePositionManager.MintParams", "name": "params", "type": "tuple"}], "name": "mint", "outputs": [{"internalType": "uint256", "name": "tokenId", "type": "uint256"}, {"internalType": "uint128", "name": "liquidity", "type": "uint128"}, {"internalType": "uint256", "name": "amount0", "type": "uint256"}, {"internalType": "uint256", "name": "amount1", "type": "uint256"}], "stateMutability": "payable", "type": "function"}
]
''')

UNISWAP_V3_POOL_ABI = json.loads('''
[
{"inputs": [], "name": "slot0", "outputs": [{"internalType": "uint160", "name": "sqrtPriceX96", "type": "uint160"}, {"internalType": "int24", "name": "tick", "type": "int24"}, {"internalType": "uint16", "name": "observationIndex", "type": "uint16"}, {"internalType": "uint16", "name": "observationCardinality", "type": "uint16"}, {"internalType": "uint16", "name": "observationCardinalityNext", "type": "uint16"}, {"internalType": "uint8", "name": "feeProtocol", "type": "uint8"}, {"internalType": "bool", "name": "unlocked", "type": "bool"}], "stateMutability": "view", "type": "function"},
{"inputs": [], "name": "token0", "outputs": [{"internalType": "address", "name": "", "type": "address"}], "stateMutability": "view", "type": "function"},
{"inputs": [], "name": "token1", "outputs": [{"internalType": "address", "name": "", "type": "address"}], "stateMutability": "view", "type": "function"},
{"inputs": [], "name": "fee", "outputs": [{"internalType": "uint24", "name": "", "type": "uint24"}], "stateMutability": "view", "type": "function"},
{"inputs": [], "name": "liquidity", "outputs": [{"internalType": "uint128", "name": "", "type": "uint128"}], "stateMutability": "view", "type": "function"}
]
''')

ERC20_ABI = json.loads('''
[
{"inputs": [], "name": "decimals", "outputs": [{"internalType": "uint8", "name": "", "type": "uint8"}], "stateMutability": "view", "type": "function"},
{"inputs": [], "name": "symbol", "outputs": [{"internalType": "string", "name": "", "type": "string"}], "stateMutability": "view", "type": "function"},
{"inputs": [{"internalType": "address", "name": "account", "type": "address"}], "name": "balanceOf", "outputs": [{"internalType": "uint256", "name": "", "type": "uint256"}], "stateMutability": "view", "type": "function"},
{"inputs": [{"internalType": "address", "name": "spender", "type": "address"}, {"internalType": "uint256", "name": "amount", "type": "uint256"}], "name": "approve", "outputs": [{"internalType": "bool", "name": "", "type": "bool"}], "stateMutability": "nonpayable", "type": "function"},
{"inputs": [{"internalType": "address", "name": "owner", "type": "address"}, {"internalType": "address", "name": "spender", "type": "address"}], "name": "allowance", "outputs": [{"internalType": "uint256", "name": "", "type": "uint256"}], "stateMutability": "view", "type": "function"}
]
''')

UNISWAP_V3_FACTORY_ABI = json.loads('''
[
{"inputs": [{"internalType": "address", "name": "tokenA", "type": "address"}, {"internalType": "address", "name": "tokenB", "type": "address"}, {"internalType": "uint24", "name": "fee", "type": "uint24"}], "name": "getPool", "outputs": [{"internalType": "address", "name": "pool", "type": "address"}], "stateMutability": "view", "type": "function"}
]
''')

SWAP_ROUTER_ABI = json.loads('''
[
{"inputs": [{"components": [{"internalType": "address", "name": "tokenIn", "type": "address"}, {"internalType": "address", "name": "tokenOut", "type": "address"}, {"internalType": "uint24", "name": "fee", "type": "uint24"}, {"internalType": "address", "name": "recipient", "type": "address"}, {"internalType": "uint256", "name": "deadline", "type": "uint256"}, {"internalType": "uint256", "name": "amountIn", "type": "uint256"}, {"internalType": "uint256", "name": "amountOutMinimum", "type": "uint256"}, {"internalType": "uint160", "name": "sqrtPriceLimitX96", "type": "uint160"}], "internalType": "struct ISwapRouter.ExactInputSingleParams", "name": "params", "type": "tuple"}], "name": "exactInputSingle", "outputs": [{"internalType": "uint256", "name": "amountOut", "type": "uint256"}], "stateMutability": "payable", "type": "function"}
]
''')

WETH9_ABI = json.loads('''
[
{"constant": false, "inputs": [], "name": "deposit", "outputs": [], "payable": true, "stateMutability": "payable", "type": "function"},
{"constant": false, "inputs": [{"name": "wad", "type": "uint256"}], "name": "withdraw", "outputs": [], "payable": false, "stateMutability": "nonpayable", "type": "function"}
]
''')

# Canonical Uniswap V3 contract addresses (checksummed hex strings, as web3.py expects)
NONFUNGIBLE_POSITION_MANAGER_ADDRESS = "0xC36442b4a4522E871399CD717aBDD847Ab11FE88"
UNISWAP_V3_SWAP_ROUTER_ADDRESS = "0xE592427A0AEce92De3Edee1F18E0157C05861564"
WETH_ADDRESS = "0x82aF49447D8a07e3bd95BD0d56f35241523fBab1"  # Arbitrum WETH

# --- Core Logic Functions ---
def get_position_details(w3_instance, npm_c, factory_c, token_id):
    try:
        position_data = npm_c.functions.positions(token_id).call()
        (nonce, operator, token0_address, token1_address, fee, tickLower, tickUpper, liquidity,
         feeGrowthInside0, feeGrowthInside1, tokensOwed0, tokensOwed1) = position_data

        token0_contract = w3_instance.eth.contract(address=token0_address, abi=ERC20_ABI)
        token1_contract = w3_instance.eth.contract(address=token1_address, abi=ERC20_ABI)
        token0_symbol = token0_contract.functions.symbol().call()
        token1_symbol = token1_contract.functions.symbol().call()
        token0_decimals = token0_contract.functions.decimals().call()
        token1_decimals = token1_contract.functions.decimals().call()

        pool_address = factory_c.functions.getPool(token0_address, token1_address, fee).call()
        if pool_address == '0x0000000000000000000000000000000000000000':
            return None, None

        pool_contract = w3_instance.eth.contract(address=pool_address, abi=UNISWAP_V3_POOL_ABI)

        return {
            "token0_address": token0_address, "token1_address": token1_address,
            "token0_symbol": token0_symbol, "token1_symbol": token1_symbol,
            "token0_decimals": token0_decimals, "token1_decimals": token1_decimals,
            "fee": fee, "tickLower": tickLower, "tickUpper": tickUpper, "liquidity": liquidity,
            "pool_address": pool_address
        }, pool_contract
    except Exception as e:
        print(f"ERROR fetching position details: {e}")
        return None, None

def get_pool_dynamic_data(pool_c):
    try:
        slot0_data = pool_c.functions.slot0().call()
        return {"sqrtPriceX96": slot0_data[0], "tick": slot0_data[1]}
    except Exception as e:
        print(f"ERROR fetching pool dynamic data: {e}")
        return None

def calculate_mint_amounts(current_tick, tick_lower, tick_upper, investment_value_token1, decimals0, decimals1, sqrt_price_current_x96):
    sqrt_price_current = get_sqrt_ratio_at_tick(current_tick)
    sqrt_price_lower = get_sqrt_ratio_at_tick(tick_lower)
    sqrt_price_upper = get_sqrt_ratio_at_tick(tick_upper)

    # 1. Price of Token0 in Token1 units
    price_of_token0_in_token1_units = price_from_sqrt_price_x96(sqrt_price_current_x96, decimals0, decimals1)

    # 2. Estimate amounts for a large test liquidity
    L_test = 1 << 128
    amt0_test, amt1_test = get_amounts_for_liquidity(sqrt_price_current, sqrt_price_lower, sqrt_price_upper, L_test)

    # 3. Adjust for decimals
    real_amt0_test = amt0_test / (10**decimals0)
    real_amt1_test = amt1_test / (10**decimals1)

    # 4. Total value of the test position in Token1 terms
    value_test = (real_amt0_test * price_of_token0_in_token1_units) + real_amt1_test

    if value_test == 0:
        return 0, 0

    # 5. Scale the test amounts to hit the target investment value
    scale = investment_value_token1 / value_test

    # 6. Final raw amounts
    final_amt0 = int(amt0_test * scale)
    final_amt1 = int(amt1_test * scale)

    return final_amt0, final_amt1

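# --- Usage sketch (offline; illustrative numbers) ---
# calculate_mint_amounts scales a probe position (L_test) linearly, which works
# because V3 token amounts are linear in liquidity at a fixed price. With
# assumed WETH/USDC parameters (decimals 18/6, ~3000 USDC/WETH, ~+/-1% range):
#
#   tick = -196000
#   sqrt_x96 = get_sqrt_ratio_at_tick(tick)
#   amt0, amt1 = calculate_mint_amounts(tick, tick - 100, tick + 100, 2000.0, 18, 6, sqrt_x96)
#   # amt0 is raw WETH wei and amt1 raw USDC units; together worth ~2000 USDC
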
def check_and_swap(w3_instance, router_contract, account, token0, token1, amount0_needed, amount1_needed):
    token0_contract = w3_instance.eth.contract(address=token0, abi=ERC20_ABI)
    token1_contract = w3_instance.eth.contract(address=token1, abi=ERC20_ABI)
    bal0 = token0_contract.functions.balanceOf(account.address).call()
    bal1 = token1_contract.functions.balanceOf(account.address).call()

    # Debug balances
    s0 = token0_contract.functions.symbol().call()
    s1 = token1_contract.functions.symbol().call()
    d0 = token0_contract.functions.decimals().call()
    d1 = token1_contract.functions.decimals().call()

    print("\n--- WALLET CHECK ---")
    print(f"Required: {from_wei(amount0_needed, d0):.6f} {s0} | {from_wei(amount1_needed, d1):.2f} {s1}")
    print(f"Balance : {from_wei(bal0, d0):.6f} {s0} | {from_wei(bal1, d1):.2f} {s1}")

    deficit0 = max(0, amount0_needed - bal0)
    deficit1 = max(0, amount1_needed - bal1)

    if deficit0 > 0:
        print(f"Deficit {s0}: {from_wei(deficit0, d0):.6f}")
    if deficit1 > 0:
        print(f"Deficit {s1}: {from_wei(deficit1, d1):.2f}")

    # --- AUTO-WRAP ETH LOGIC ---
    weth_addr_lower = WETH_ADDRESS.lower()

    # Wrap native ETH when Token0 is WETH
    if (deficit0 > 0 or deficit1 > 0) and token0.lower() == weth_addr_lower:
        native_bal = w3_instance.eth.get_balance(account.address)
        gas_reserve = 5 * 10**15  # 0.005 ETH (reduced for L2)
        available_native = max(0, native_bal - gas_reserve)

        amount_to_wrap = 0
        if deficit0 > 0:
            amount_to_wrap = deficit0
        if deficit1 > 0:
            # Token1 is short: wrap everything available so the surplus-swap
            # branch below has WETH to sell.
            amount_to_wrap = available_native

        amount_to_wrap = min(amount_to_wrap, available_native)

        if amount_to_wrap > 0:
            print(f"Auto-Wrapping {from_wei(amount_to_wrap, 18)} ETH to WETH...")
            weth_contract = w3_instance.eth.contract(address=token0, abi=WETH9_ABI)
            wrap_txn = weth_contract.functions.deposit().build_transaction({
                'from': account.address, 'value': amount_to_wrap, 'nonce': w3_instance.eth.get_transaction_count(account.address), 'gas': 100000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee, 'chainId': w3_instance.eth.chain_id
            })
            signed_wrap = w3_instance.eth.account.sign_transaction(wrap_txn, private_key=account.key)
            raw_wrap = signed_wrap.rawTransaction if hasattr(signed_wrap, 'rawTransaction') else signed_wrap.raw_transaction
            tx_hash = w3_instance.eth.send_raw_transaction(raw_wrap)
            print(f"Wrap Sent: {tx_hash.hex()}")
            w3_instance.eth.wait_for_transaction_receipt(tx_hash)
            bal0 = token0_contract.functions.balanceOf(account.address).call()
            deficit0 = max(0, amount0_needed - bal0)
        else:
            if deficit0 > 0:
                print(f"Insufficient Native ETH to wrap. Need: {from_wei(deficit0, 18)}, Available: {from_wei(available_native, 18)}")

    # Wrap for a Token1 deficit (if Token1 is WETH)
    if deficit1 > 0 and token1.lower() == weth_addr_lower:
        native_bal = w3_instance.eth.get_balance(account.address)
        gas_reserve = 5 * 10**15  # 0.005 ETH
        available_native = max(0, native_bal - gas_reserve)
        if available_native >= deficit1:
            print(f"Auto-Wrapping {from_wei(deficit1, 18)} ETH to WETH...")
            weth_contract = w3_instance.eth.contract(address=token1, abi=WETH9_ABI)
            wrap_txn = weth_contract.functions.deposit().build_transaction({
                'from': account.address, 'value': deficit1, 'nonce': w3_instance.eth.get_transaction_count(account.address), 'gas': 100000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee, 'chainId': w3_instance.eth.chain_id
            })
            signed_wrap = w3_instance.eth.account.sign_transaction(wrap_txn, private_key=account.key)
            raw_wrap = signed_wrap.rawTransaction if hasattr(signed_wrap, 'rawTransaction') else signed_wrap.raw_transaction
            tx_hash = w3_instance.eth.send_raw_transaction(raw_wrap)
            print(f"Wrap Sent: {tx_hash.hex()}")
            w3_instance.eth.wait_for_transaction_receipt(tx_hash)
            bal1 = token1_contract.functions.balanceOf(account.address).call()
            deficit1 = max(0, amount1_needed - bal1)

    if deficit0 == 0 and deficit1 == 0:
        return True

    if deficit0 > 0 and bal1 > amount1_needed:
        surplus1 = bal1 - amount1_needed
        print(f"Swapping surplus Token1 ({surplus1}) for Token0...")

        approve_txn = token1_contract.functions.approve(router_contract.address, surplus1).build_transaction({
            'from': account.address, 'nonce': w3_instance.eth.get_transaction_count(account.address),
            'gas': 100000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee,
            'chainId': w3_instance.eth.chain_id
        })
        signed = w3_instance.eth.account.sign_transaction(approve_txn, private_key=account.key)
        raw = signed.rawTransaction if hasattr(signed, 'rawTransaction') else signed.raw_transaction
        w3_instance.eth.send_raw_transaction(raw)
        time.sleep(2)

        params = (token1, token0, 500, account.address, int(time.time()) + 120, surplus1, 0, 0)
        swap_txn = router_contract.functions.exactInputSingle(params).build_transaction({
            'from': account.address, 'nonce': w3_instance.eth.get_transaction_count(account.address),
            'gas': 300000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee,
            'chainId': w3_instance.eth.chain_id
        })
        signed_swap = w3_instance.eth.account.sign_transaction(swap_txn, private_key=account.key)
        raw_swap = signed_swap.rawTransaction if hasattr(signed_swap, 'rawTransaction') else signed_swap.raw_transaction
        tx_hash = w3_instance.eth.send_raw_transaction(raw_swap)
        print(f"Swap Sent: {tx_hash.hex()}")
        w3_instance.eth.wait_for_transaction_receipt(tx_hash)

        # Verify balance after the swap
        bal0 = token0_contract.functions.balanceOf(account.address).call()
        if bal0 < amount0_needed:
            print(f"❌ Swap insufficient. Have {bal0}, Need {amount0_needed}")
            return False
        return True

    elif deficit1 > 0 and bal0 > amount0_needed:
        surplus0 = bal0 - amount0_needed
        print(f"Swapping surplus Token0 ({surplus0}) for Token1...")

        approve_txn = token0_contract.functions.approve(router_contract.address, surplus0).build_transaction({
            'from': account.address, 'nonce': w3_instance.eth.get_transaction_count(account.address),
            'gas': 100000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee,
            'chainId': w3_instance.eth.chain_id
        })
        signed = w3_instance.eth.account.sign_transaction(approve_txn, private_key=account.key)
        raw = signed.rawTransaction if hasattr(signed, 'rawTransaction') else signed.raw_transaction
        w3_instance.eth.send_raw_transaction(raw)
        time.sleep(2)

        params = (token0, token1, 500, account.address, int(time.time()) + 120, surplus0, 0, 0)
        swap_txn = router_contract.functions.exactInputSingle(params).build_transaction({
            'from': account.address, 'nonce': w3_instance.eth.get_transaction_count(account.address),
            'gas': 300000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee,
            'chainId': w3_instance.eth.chain_id
        })
        signed_swap = w3_instance.eth.account.sign_transaction(swap_txn, private_key=account.key)
        raw_swap = signed_swap.rawTransaction if hasattr(signed_swap, 'rawTransaction') else signed_swap.raw_transaction
        tx_hash = w3_instance.eth.send_raw_transaction(raw_swap)
        print(f"Swap Sent: {tx_hash.hex()}")
        w3_instance.eth.wait_for_transaction_receipt(tx_hash)

        # Verify balance after the swap
        bal1 = token1_contract.functions.balanceOf(account.address).call()
        if bal1 < amount1_needed:
            print(f"❌ Swap insufficient. Have {bal1}, Need {amount1_needed}")
            return False
        return True

    print("❌ Insufficient funds for required amounts.")
    return False

def get_token_balances(w3_instance, account_address, token0_address, token1_address):
    try:
        token0_contract = w3_instance.eth.contract(address=token0_address, abi=ERC20_ABI)
        token1_contract = w3_instance.eth.contract(address=token1_address, abi=ERC20_ABI)
        b0 = token0_contract.functions.balanceOf(account_address).call()
        b1 = token1_contract.functions.balanceOf(account_address).call()
        return b0, b1
    except Exception:
        return 0, 0

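# --- Decision summary (documentation only) ---
# check_and_swap resolves deficits in three steps: wrap native ETH when the
# missing side is WETH, then swap the surplus side through the 0.05% fee pool,
# then re-check balances. Compressed branch logic:
#
#   deficit0 > 0 and bal1 > amount1_needed  -> swap token1 surplus for token0
#   deficit1 > 0 and bal0 > amount0_needed  -> swap token0 surplus for token1
#   otherwise                               -> fail (insufficient funds)
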
def decrease_liquidity(w3_instance, npm_contract, account, position_id, liquidity_amount):
    try:
        txn = npm_contract.functions.decreaseLiquidity((position_id, liquidity_amount, 0, 0, int(time.time()) + 180)).build_transaction({
            'from': account.address, 'gas': 1000000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee, 'nonce': w3_instance.eth.get_transaction_count(account.address), 'chainId': w3_instance.eth.chain_id
        })
        signed = w3_instance.eth.account.sign_transaction(txn, private_key=account.key)
        raw = signed.rawTransaction if hasattr(signed, 'rawTransaction') else signed.raw_transaction
        tx_hash = w3_instance.eth.send_raw_transaction(raw)
        print(f"Decrease Sent: {tx_hash.hex()}")
        w3_instance.eth.wait_for_transaction_receipt(tx_hash)
        return True
    except Exception as e:
        print(f"Error decreasing: {e}")
        return False

def mint_new_position(w3_instance, npm_contract, account, token0, token1, amount0, amount1, tick_lower, tick_upper):
    print("\n--- Attempting to Mint ---")
    try:
        token0_c = w3_instance.eth.contract(address=token0, abi=ERC20_ABI)
        token1_c = w3_instance.eth.contract(address=token1, abi=ERC20_ABI)

        # Approve token0
        txn0 = token0_c.functions.approve(npm_contract.address, amount0).build_transaction({
            'from': account.address, 'nonce': w3_instance.eth.get_transaction_count(account.address),
            'gas': 100000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee, 'chainId': w3_instance.eth.chain_id
        })
        signed0 = w3_instance.eth.account.sign_transaction(txn0, private_key=account.key)
        raw0 = signed0.rawTransaction if hasattr(signed0, 'rawTransaction') else signed0.raw_transaction
        w3_instance.eth.send_raw_transaction(raw0)
        time.sleep(2)

        # Approve token1
        txn1 = token1_c.functions.approve(npm_contract.address, amount1).build_transaction({
            'from': account.address, 'nonce': w3_instance.eth.get_transaction_count(account.address),
            'gas': 100000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee, 'chainId': w3_instance.eth.chain_id
        })
        signed1 = w3_instance.eth.account.sign_transaction(txn1, private_key=account.key)
        raw1 = signed1.rawTransaction if hasattr(signed1, 'rawTransaction') else signed1.raw_transaction
        w3_instance.eth.send_raw_transaction(raw1)
        time.sleep(2)

        # Mint
        params = (token0, token1, 500, tick_lower, tick_upper, amount0, amount1, 0, 0, account.address, int(time.time()) + 180)
        mint_txn = npm_contract.functions.mint(params).build_transaction({
            'from': account.address, 'nonce': w3_instance.eth.get_transaction_count(account.address),
            'gas': 800000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee, 'chainId': w3_instance.eth.chain_id
        })
        signed_mint = w3_instance.eth.account.sign_transaction(mint_txn, private_key=account.key)
        raw_mint = signed_mint.rawTransaction if hasattr(signed_mint, 'rawTransaction') else signed_mint.raw_transaction
        tx_hash = w3_instance.eth.send_raw_transaction(raw_mint)
        print(f"Mint Sent: {tx_hash.hex()}")

        receipt = w3_instance.eth.wait_for_transaction_receipt(tx_hash)
        if receipt.status == 1:
            print("✅ Mint Successful!")

            result_data = {'token_id': None, 'liquidity': 0, 'amount0': 0, 'amount1': 0}

            # Process receipt events to capture the token ID and actual amounts
            try:
                # 1. Token ID from the Transfer event (a mint transfers from the zero address)
                transfer_events = npm_contract.events.Transfer().process_receipt(receipt)
                for event in transfer_events:
                    if event['args']['from'] == "0x0000000000000000000000000000000000000000":
                        result_data['token_id'] = event['args']['tokenId']
                        break

                # 2. Amounts from the IncreaseLiquidity event
                inc_liq_events = npm_contract.events.IncreaseLiquidity().process_receipt(receipt)
                for event in inc_liq_events:
                    if result_data['token_id'] and event['args']['tokenId'] == result_data['token_id']:
                        result_data['amount0'] = event['args']['amount0']
                        result_data['amount1'] = event['args']['amount1']
                        result_data['liquidity'] = event['args']['liquidity']
                        break

            except Exception as e:
                print(f"Event Processing Warning: {e}")

            if result_data['token_id']:
                print(f"Captured: ID {result_data['token_id']}, Amt0 {result_data['amount0']}, Amt1 {result_data['amount1']}")
                return result_data

            return None
        else:
            print("❌ Mint Failed!")
            return None
    except Exception as e:
        print(f"Mint Error: {e}")
        return None

def collect_fees(w3_instance, npm_contract, account, position_id):
    try:
        txn = npm_contract.functions.collect((position_id, account.address, 2**128 - 1, 2**128 - 1)).build_transaction({
            'from': account.address, 'gas': 1000000, 'maxFeePerGas': w3_instance.eth.gas_price * 2, 'maxPriorityFeePerGas': w3_instance.eth.max_priority_fee, 'nonce': w3_instance.eth.get_transaction_count(account.address), 'chainId': w3_instance.eth.chain_id
        })
        signed = w3_instance.eth.account.sign_transaction(txn, private_key=account.key)
        raw = signed.rawTransaction if hasattr(signed, 'rawTransaction') else signed.raw_transaction
        tx_hash = w3_instance.eth.send_raw_transaction(raw)
        print(f"Collect Sent: {tx_hash.hex()}")
        w3_instance.eth.wait_for_transaction_receipt(tx_hash)
        return True
    except Exception:
        return False

def main():
    print(f"CWD: {os.getcwd()}")
    # Load .env from the current directory
    load_dotenv(override=True)

    rpc_url = os.environ.get("MAINNET_RPC_URL")
    private_key = os.environ.get("MAIN_WALLET_PRIVATE_KEY") or os.environ.get("PRIVATE_KEY")

    if not rpc_url or not private_key:
        print("Missing RPC or Private Key.")
        return

    w3 = Web3(Web3.HTTPProvider(rpc_url))
    if not w3.is_connected():
        print("RPC Connection Failed")
        return
    print(f"Connected to Chain ID: {w3.eth.chain_id}")

    account = Account.from_key(private_key)
    w3.eth.default_account = account.address
    print(f"Wallet: {account.address}")

    npm_contract = w3.eth.contract(address=NONFUNGIBLE_POSITION_MANAGER_ADDRESS, abi=NONFUNGIBLE_POSITION_MANAGER_ABI)
    factory_addr = npm_contract.functions.factory().call()
    factory_contract = w3.eth.contract(address=factory_addr, abi=UNISWAP_V3_FACTORY_ABI)
    router_contract = w3.eth.contract(address=UNISWAP_V3_SWAP_ROUTER_ADDRESS, abi=SWAP_ROUTER_ABI)

    print("\n--- STARTING LIFECYCLE MANAGER ---")
    while True:
        try:
            # 1. Get all open positions
            all_positions = get_all_open_positions()

            # Check whether there is an active AUTOMATIC position
            active_automatic_position = next((p for p in all_positions if p['type'] == 'AUTOMATIC' and p['status'] == 'OPEN'), None)

            if all_positions:
                print("\n" + "=" * 60)
                print(f"Monitoring at: {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())}")

                for position in all_positions:
                    token_id = position['token_id']
                    pos_type = position['type']

                    # Fetch details
                    pos_details, pool_c = get_position_details(w3, npm_contract, factory_contract, token_id)
                    if not pos_details:
                        print(f"ERROR: Could not get details for Position {token_id}. Skipping.")
                        continue

                    pool_data = get_pool_dynamic_data(pool_c)
                    current_tick = pool_data['tick']

                    # Simulate a collect() call to read unclaimed fees
                    unclaimed0 = 0
                    unclaimed1 = 0
                    try:
                        fees_sim = npm_contract.functions.collect((token_id, "0x0000000000000000000000000000000000000000", 2**128 - 1, 2**128 - 1)).call()
                        unclaimed0 = from_wei(fees_sim[0], pos_details['token0_decimals'])
                        unclaimed1 = from_wei(fees_sim[1], pos_details['token1_decimals'])
                    except Exception:
                        pass

                    # Range check
                    is_out_of_range = False
                    status_str = "IN RANGE"
                    if current_tick < pos_details['tickLower']:
                        is_out_of_range = True
                        status_str = "OUT OF RANGE (BELOW)"
                    elif current_tick >= pos_details['tickUpper']:
                        is_out_of_range = True
                        status_str = "OUT OF RANGE (ABOVE)"

                    print(f"\nID: {token_id} | Type: {pos_type} | Status: {status_str}")
                    print(f" Range: {position['range_lower']:.2f} - {position['range_upper']:.2f}")
                    print(f" Fees: {unclaimed0:.4f} {pos_details['token0_symbol']} / {unclaimed1:.4f} {pos_details['token1_symbol']}")

                    # --- AUTO CLOSE LOGIC (AUTOMATIC positions only) ---
                    if pos_type == 'AUTOMATIC' and CLOSE_POSITION_ENABLED and is_out_of_range:
                        print(f"⚠️ Automatic Position {token_id} is OUT OF RANGE! Initiating Close...")
                        liq = pos_details['liquidity']
                        if liq > 0:
                            if decrease_liquidity(w3, npm_contract, account, token_id, liq):
                                time.sleep(5)
                                collect_fees(w3, npm_contract, account, token_id)
                                update_hedge_status_file("CLOSE", {'token_id': token_id})
                                print("Position Closed & Status Updated.")

                                # --- REBALANCE ON CLOSE (if price dropped below the range) ---
                                if REBALANCE_ON_CLOSE_BELOW_RANGE and status_str == "OUT OF RANGE (BELOW)":
                                    print("📉 Position closed BELOW range (100% ETH). Selling 50% of WETH inventory to USDC...")
                                    try:
                                        # Get WETH balance
                                        token0_c = w3.eth.contract(address=pos_details['token0_address'], abi=ERC20_ABI)
                                        weth_bal = token0_c.functions.balanceOf(account.address).call()

                                        amount_in = weth_bal // 2

                                        if amount_in > 0:
                                            # Approve the router
                                            approve_txn = token0_c.functions.approve(router_contract.address, amount_in).build_transaction({
                                                'from': account.address, 'nonce': w3.eth.get_transaction_count(account.address),
                                                'gas': 100000, 'maxFeePerGas': w3.eth.gas_price * 2, 'maxPriorityFeePerGas': w3.eth.max_priority_fee,
                                                'chainId': w3.eth.chain_id
                                            })
                                            signed = w3.eth.account.sign_transaction(approve_txn, private_key=account.key)
                                            raw = signed.rawTransaction if hasattr(signed, 'rawTransaction') else signed.raw_transaction
                                            w3.eth.send_raw_transaction(raw)
                                            time.sleep(2)

                                            # Swap WETH -> USDC
                                            params = (pos_details['token0_address'], pos_details['token1_address'], 500, account.address, int(time.time()) + 120, amount_in, 0, 0)
                                            swap_txn = router_contract.functions.exactInputSingle(params).build_transaction({
                                                'from': account.address, 'nonce': w3.eth.get_transaction_count(account.address),
                                                'gas': 300000, 'maxFeePerGas': w3.eth.gas_price * 2, 'maxPriorityFeePerGas': w3.eth.max_priority_fee,
                                                'chainId': w3.eth.chain_id
                                            })
                                            signed_swap = w3.eth.account.sign_transaction(swap_txn, private_key=account.key)
                                            raw_swap = signed_swap.rawTransaction if hasattr(signed_swap, 'rawTransaction') else signed_swap.raw_transaction
                                            tx_hash = w3.eth.send_raw_transaction(raw_swap)
                                            print(f"⚖️ Rebalance Swap Sent: {tx_hash.hex()}")
                                            w3.eth.wait_for_transaction_receipt(tx_hash)
                                            print("✅ Rebalance Complete.")
                                    except Exception as e:
                                        print(f"Error during rebalance swap: {e}")

                        else:
                            print("Liquidity 0. Marking closed.")
                            update_hedge_status_file("CLOSE", {'token_id': token_id})

            # 2. Opening logic (if there is no active automatic position)
            if not active_automatic_position and OPEN_POSITION_ENABLED:
                print("\n[OPENING] No active automatic position. Starting Open Sequence...")
                # WETH/USDC pool on Arbitrum
                token0 = "0x82aF49447D8a07e3bd95BD0d56f35241523fBab1"  # WETH
                token1 = "0xaf88d065e77c8cC2239327C5EDb3A432268e5831"  # USDC
                pool_addr = factory_contract.functions.getPool(token0, token1, 500).call()
                pool_c = w3.eth.contract(address=pool_addr, abi=UNISWAP_V3_POOL_ABI)

                pool_data = get_pool_dynamic_data(pool_c)
                tick = pool_data['tick']

                # Symmetric +/- RANGE_WIDTH_PCT range around the current tick,
                # snapped to the pool's tick spacing
                tick_delta = int(math.log(1 + RANGE_WIDTH_PCT) / math.log(1.0001))
                spacing = 10
                lower = (tick - tick_delta) // spacing * spacing
                upper = (tick + tick_delta) // spacing * spacing

                # Amounts
                try:
                    token0_c = w3.eth.contract(address=token0, abi=ERC20_ABI)
                    token1_c = w3.eth.contract(address=token1, abi=ERC20_ABI)
                    d0 = token0_c.functions.decimals().call()
                    d1 = token1_c.functions.decimals().call()
                except Exception as e:
                    print(f"Error fetching decimals: {e}")
                    time.sleep(MONITOR_INTERVAL_SECONDS)
                    continue

                amt0, amt1 = calculate_mint_amounts(tick, lower, upper, TARGET_INVESTMENT_VALUE_TOKEN1, d0, d1, pool_data['sqrtPriceX96'])
                amt0_buf, amt1_buf = int(amt0 * 1.02), int(amt1 * 1.02)  # 2% buffer

                if check_and_swap(w3, router_contract, account, token0, token1, amt0_buf, amt1_buf):
                    mint_result = mint_new_position(w3, npm_contract, account, token0, token1, amt0, amt1, lower, upper)

                    if mint_result:
                        # Compute the actual minted value
                        try:
                            s0 = token0_c.functions.symbol().call()
                            s1 = token1_c.functions.symbol().call()
                        except Exception:
                            s0, s1 = "T0", "T1"

                        real_amt0 = from_wei(mint_result['amount0'], d0)
                        real_amt1 = from_wei(mint_result['amount1'], d1)
                        entry_price = price_from_sqrt_price_x96(pool_data['sqrtPriceX96'], d0, d1)
                        actual_value = (real_amt0 * entry_price) + real_amt1
                        print(f"ACTUAL MINT VALUE: {actual_value:.2f} {s1}/{s0}")

                        pos_data = {
                            'token_id': mint_result['token_id'],
                            'entry_price': entry_price,
                            'range_lower': price_from_tick(lower, d0, d1),
                            'range_upper': price_from_tick(upper, d0, d1),
                            'target_value': actual_value,
                            'amount0_initial': mint_result['amount0'],
                            'amount1_initial': mint_result['amount1']
                        }
                        update_hedge_status_file("OPEN", pos_data)
                        print("Cycle Complete. Monitoring.")

            elif not all_positions:
                print("No open positions (Manual or Automatic). Waiting...")

            time.sleep(MONITOR_INTERVAL_SECONDS)

        except KeyboardInterrupt:
            print("\nManager stopped.")
            break
        except Exception as e:
            print(f"Error in Main Loop: {e}")
            time.sleep(MONITOR_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
@ -1,95 +0,0 @@
|
||||
import os
|
||||
import json
|
||||
import logging
|
||||
import requests
|
||||
from hyperliquid.info import Info
|
||||
from hyperliquid.utils import constants
|
||||
|
||||
from logging_utils import setup_logging
|
||||
|
||||
def update_coin_mapping():
|
||||
"""
|
||||
Fetches all assets from Hyperliquid and all coins from CoinGecko,
|
||||
then creates and saves a mapping from the Hyperliquid symbol to the
|
||||
CoinGecko ID using a robust matching algorithm.
|
||||
"""
|
||||
setup_logging('normal', 'CoinMapUpdater')
|
||||
logging.info("Starting coin mapping update process...")
|
||||
|
||||
# --- 1. Fetch all assets from Hyperliquid ---
|
||||
try:
|
||||
logging.info("Fetching assets from Hyperliquid...")
|
||||
info = Info(constants.MAINNET_API_URL, skip_ws=True)
|
||||
meta, asset_contexts = info.meta_and_asset_ctxs()
|
||||
hyperliquid_assets = meta['universe']
|
||||
logging.info(f"Found {len(hyperliquid_assets)} assets on Hyperliquid.")
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to fetch assets from Hyperliquid: {e}")
|
||||
return
|
||||
|
||||
# --- 2. Fetch all coins from CoinGecko ---
|
||||
try:
|
||||
logging.info("Fetching coin list from CoinGecko...")
|
||||
response = requests.get("https://api.coingecko.com/api/v3/coins/list")
|
||||
response.raise_for_status()
|
||||
coingecko_coins = response.json()
|
||||
|
||||
# Create more robust lookup tables
|
||||
cg_symbol_lookup = {coin['symbol'].upper(): coin['id'] for coin in coingecko_coins}
|
||||
cg_name_lookup = {coin['name'].upper(): coin['id'] for coin in coingecko_coins}
|
||||
|
||||
logging.info(f"Found {len(coingecko_coins)} coins on CoinGecko.")
|
||||
except requests.exceptions.RequestException as e:
|
||||
logging.error(f"Failed to fetch coin list from CoinGecko: {e}")
|
||||
return
|
||||
|
||||
# --- 3. Create the mapping ---
|
||||
final_mapping = {}
|
||||
# Use manual overrides for critical coins where symbols are ambiguous
|
||||
manual_overrides = {
|
||||
"BTC": "bitcoin",
|
||||
"ETH": "ethereum",
|
||||
"SOL": "solana",
|
||||
"BNB": "binancecoin",
|
||||
"HYPE": "hyperliquid",
|
||||
"PUMP": "pump-fun",
|
||||
"ASTER": "astar",
|
||||
"ZEC": "zcash",
|
||||
"SUI": "sui",
|
||||
"ACE": "endurance",
|
||||
# Add other important ones you watch here
|
||||
}
|
||||
|
||||
logging.info("Generating symbol-to-id mapping...")
|
||||
for asset in hyperliquid_assets:
|
||||
asset_symbol = asset['name'].upper()
|
||||
asset_name = asset.get('name', '').upper() # Use full name if available
|
||||
|
||||
# Priority 1: Manual Overrides
|
||||
if asset_symbol in manual_overrides:
|
||||
final_mapping[asset_symbol] = manual_overrides[asset_symbol]
|
||||
continue
|
||||
|
||||
# Priority 2: Exact Name Match
|
||||
if asset_name in cg_name_lookup:
|
||||
final_mapping[asset_symbol] = cg_name_lookup[asset_name]
|
||||
continue
|
||||
|
||||
# Priority 3: Symbol Match
|
||||
if asset_symbol in cg_symbol_lookup:
|
||||
final_mapping[asset_symbol] = cg_symbol_lookup[asset_symbol]
|
||||
else:
|
||||
logging.warning(f"No match found for '{asset_symbol}' on CoinGecko. It will be excluded.")
|
||||
|
||||
# --- 4. Save the mapping to a file ---
|
||||
map_file_path = os.path.join("_data", "coin_id_map.json")
|
||||
try:
|
||||
with open(map_file_path, 'w', encoding='utf-8') as f:
|
||||
json.dump(final_mapping, f, indent=4, sort_keys=True)
|
||||
logging.info(f"Successfully saved new coin mapping with {len(final_mapping)} entries to '{map_file_path}'.")
|
||||
except IOError as e:
|
||||
logging.error(f"Failed to write coin mapping file: {e}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
update_coin_mapping()
|
||||
|
||||
@ -33,7 +33,7 @@ def create_and_authorize_agent():
|
||||
|
||||
# --- STEP 3: Create and approve the agent with a specific name ---
|
||||
# agent name must be between 1 and 16 characters long
|
||||
agent_name = "executor_SCALPER"
|
||||
agent_name = "executor_swing"
|
||||
|
||||
print(f"\n🔗 Authorizing a new agent named '{agent_name}'...")
|
||||
try:
|
||||
|
||||
@ -1,143 +0,0 @@
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
import json
|
||||
import time
|
||||
import argparse # <-- THE FIX: Added this import
|
||||
from datetime import datetime
|
||||
from eth_account import Account
|
||||
from hyperliquid.info import Info
|
||||
from hyperliquid.utils import constants
|
||||
from dotenv import load_dotenv
|
||||
|
||||
from logging_utils import setup_logging
|
||||
|
||||
# Load .env file
|
||||
load_dotenv()
|
||||
|
||||
class DashboardDataFetcher:
|
||||
"""
|
||||
A dedicated, lightweight process that runs in a loop to fetch and save
|
||||
the account's state (balances, positions) for the main dashboard to display.
|
||||
"""
|
||||
|
||||
def __init__(self, log_level: str):
|
||||
setup_logging(log_level, 'DashboardDataFetcher')
|
||||
|
||||
self.vault_address = os.environ.get("MAIN_WALLET_ADDRESS")
|
||||
if not self.vault_address:
|
||||
logging.error("MAIN_WALLET_ADDRESS not set in .env file. Cannot proceed.")
|
||||
sys.exit(1)
|
||||
|
||||
self.info = Info(constants.MAINNET_API_URL, skip_ws=True)
|
||||
|
||||
# Use absolute path to ensure consistency across different working directories
|
||||
project_root = os.path.dirname(os.path.abspath(__file__))
|
||||
self.status_file_path = os.path.join(project_root, "_logs", "trade_executor_status.json")
|
||||
self.managed_positions_path = os.path.join(project_root, "_data", "executor_managed_positions.json")
|
||||
logging.info(f"Dashboard Data Fetcher initialized for vault: {self.vault_address}")
|
||||
|
||||
def load_managed_positions(self) -> dict:
|
||||
"""Loads the state of which strategy manages which position."""
|
||||
if os.path.exists(self.managed_positions_path):
|
||||
try:
|
||||
with open(self.managed_positions_path, 'r') as f:
|
||||
data = json.load(f)
|
||||
# Create a reverse map: {coin: strategy_name}
|
||||
return {v['coin']: k for k, v in data.items()}
|
||||
except (IOError, json.JSONDecodeError):
|
||||
logging.warning("Could not read managed positions file.")
|
||||
return {}
|
||||
|
||||
def fetch_and_save_status(self):
|
||||
"""Fetches all account data and saves it to JSON status file."""
|
||||
try:
|
||||
perpetuals_state = self.info.user_state(self.vault_address)
|
||||
spot_state = self.info.spot_user_state(self.vault_address)
|
||||
meta, all_market_contexts = self.info.meta_and_asset_ctxs()
|
||||
coin_to_strategy_map = self.load_managed_positions()
|
||||
|
||||
status = {
|
||||
"last_updated_utc": datetime.now().isoformat(),
|
||||
"perpetuals_account": { "balances": {}, "open_positions": [] },
|
||||
"spot_account": { "positions": [] }
|
||||
}
|
||||
|
||||
# 1. Extract Perpetuals Account Data
|
||||
margin_summary = perpetuals_state.get("marginSummary", {})
|
||||
status["perpetuals_account"]["balances"] = {
|
||||
"account_value": margin_summary.get("accountValue"),
|
||||
"total_margin_used": margin_summary.get("totalMarginUsed"),
|
||||
"withdrawable": margin_summary.get("withdrawable")
|
||||
}
|
||||
|
||||
asset_positions = perpetuals_state.get("assetPositions", [])
|
||||
for asset_pos in asset_positions:
|
||||
pos = asset_pos.get('position', {})
|
||||
if float(pos.get('szi', 0)) != 0:
|
||||
coin = pos.get('coin')
|
||||
position_value = float(pos.get('positionValue', 0))
|
||||
margin_used = float(pos.get('marginUsed', 0))
|
||||
leverage = position_value / margin_used if margin_used > 0 else 0
|
||||
|
||||
position_info = {
|
||||
"coin": coin,
|
||||
"strategy": coin_to_strategy_map.get(coin, "Unmanaged"),
|
||||
"size": pos.get('szi'),
|
||||
"position_value": pos.get('positionValue'),
|
||||
"entry_price": pos.get('entryPx'),
|
||||
"mark_price": pos.get('markPx'),
|
||||
"pnl": pos.get('unrealizedPnl'),
|
||||
"liq_price": pos.get('liquidationPx'),
|
||||
"margin": pos.get('marginUsed'),
|
||||
"funding": pos.get('fundingRate'),
|
||||
"leverage": f"{leverage:.1f}x"
|
||||
}
|
||||
status["perpetuals_account"]["open_positions"].append(position_info)
|
||||
|
||||
# 2. Extract Spot Account Data
|
||||
price_map = { asset.get("universe", {}).get("name"): asset.get("markPx") for asset in all_market_contexts if asset.get("universe", {}).get("name") }
|
||||
spot_balances = spot_state.get("balances", [])
|
||||
for bal in spot_balances:
|
||||
total_balance = float(bal.get('total', 0))
|
||||
if total_balance > 0:
|
||||
coin = bal.get('coin')
|
||||
mark_price = float(price_map.get(coin, 0))
|
||||
status["spot_account"]["positions"].append({
|
||||
"coin": coin, "balance_size": total_balance,
|
||||
"position_value": total_balance * mark_price, "pnl": "N/A"
|
||||
})
|
||||
|
||||
# 3. Ensure directory exists and write to file
|
||||
# Ensure the _logs directory exists
|
||||
logs_dir = os.path.dirname(self.status_file_path)
|
||||
os.makedirs(logs_dir, exist_ok=True)
|
||||
|
||||
# Use atomic write to prevent partial reads from main_app
|
||||
temp_file_path = self.status_file_path + ".tmp"
|
||||
with open(temp_file_path, 'w', encoding='utf-8') as f:
|
||||
json.dump(status, f, indent=4)
|
||||
# Rename is atomic
|
||||
os.replace(temp_file_path, self.status_file_path)
|
||||
|
||||
logging.debug(f"Successfully updated dashboard status file.")
|
||||
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to fetch or save account status: {e}")
|
||||
|
||||
def run(self):
|
||||
"""Main loop to periodically fetch and save data."""
|
||||
while True:
|
||||
self.fetch_and_save_status()
|
||||
time.sleep(5) # Update dashboard data every 5 seconds
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser(description="Run the Dashboard Data Fetcher.")
|
||||
parser.add_argument("--log-level", default="normal", choices=['off', 'normal', 'debug'])
|
||||
args = parser.parse_args()
|
||||
|
||||
fetcher = DashboardDataFetcher(log_level=args.log_level)
|
||||
try:
|
||||
fetcher.run()
|
||||
except KeyboardInterrupt:
|
||||
logging.info("Dashboard Data Fetcher stopped.")
|
||||
@ -1,56 +0,0 @@
|
||||
import sqlite3
|
||||
import logging
|
||||
import os
|
||||
|
||||
from logging_utils import setup_logging
|
||||
|
||||
def cleanup_market_cap_tables():
|
||||
"""
|
||||
Scans the database and drops all tables related to market cap data
|
||||
to allow for a clean refresh.
|
||||
"""
|
||||
setup_logging('normal', 'DBCleanup')
|
||||
db_path = os.path.join("_data", "market_data.db")
|
||||
|
||||
if not os.path.exists(db_path):
|
||||
logging.error(f"Database file not found at '{db_path}'. Nothing to clean.")
|
||||
return
|
||||
|
||||
logging.info(f"Connecting to database at '{db_path}'...")
|
||||
try:
|
||||
with sqlite3.connect(db_path) as conn:
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Find all tables that were created by the market cap fetcher
|
||||
cursor.execute("""
|
||||
SELECT name FROM sqlite_master
|
||||
WHERE type='table'
|
||||
AND (name LIKE '%_market_cap' OR name LIKE 'TOTAL_%')
|
||||
""")
|
||||
|
||||
tables_to_drop = cursor.fetchall()
|
||||
|
||||
if not tables_to_drop:
|
||||
logging.info("No market cap tables found to clean up. Database is already clean.")
|
||||
return
|
||||
|
||||
logging.warning(f"Found {len(tables_to_drop)} market cap tables to remove...")
|
||||
|
||||
for table in tables_to_drop:
|
||||
table_name = table[0]
|
||||
try:
|
||||
logging.info(f"Dropping table: {table_name}...")
|
||||
conn.execute(f'DROP TABLE IF EXISTS "{table_name}"')
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to drop table {table_name}: {e}")
|
||||
|
||||
conn.commit()
|
||||
logging.info("--- Database cleanup complete ---")
|
||||
|
||||
except sqlite3.Error as e:
|
||||
logging.error(f"A database error occurred: {e}")
|
||||
except Exception as e:
|
||||
logging.error(f"An unexpected error occurred: {e}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
cleanup_market_cap_tables()
|
||||
@ -1,187 +1,49 @@
|
||||
import logging
|
||||
import json
|
||||
import time
|
||||
import os
|
||||
import traceback
|
||||
import sys
|
||||
from hyperliquid.info import Info
|
||||
from hyperliquid.utils import constants
|
||||
|
||||
from logging_utils import setup_logging
|
||||
|
||||
# --- Configuration for standalone error logging ---
|
||||
LOGS_DIR = "_logs"
|
||||
ERROR_LOG_FILE = os.path.join(LOGS_DIR, "live_market_errors.log")
|
||||
|
||||
def log_error(error_message: str, include_traceback: bool = True):
|
||||
"""A simple, robust file logger for any errors."""
|
||||
try:
|
||||
if not os.path.exists(LOGS_DIR):
|
||||
os.makedirs(LOGS_DIR)
|
||||
|
||||
with open(ERROR_LOG_FILE, 'a') as f:
|
||||
timestamp = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime())
|
||||
f.write(f"--- ERROR at {timestamp} UTC ---\n")
|
||||
f.write(error_message + "\n")
|
||||
if include_traceback:
|
||||
f.write(traceback.format_exc() + "\n")
|
||||
f.write("="*50 + "\n")
|
||||
except Exception:
|
||||
print(f"CRITICAL: Failed to write to error log file: {error_message}", file=sys.stderr)
|
||||
|
||||
|
||||
def on_message(message, shared_prices_dict):
|
||||
"""
|
||||
Callback function to process incoming WebSocket messages for 'bbo' and 'trades'
|
||||
and update the shared memory dictionary.
|
||||
Callback function to process incoming 'allMids' messages and update the
|
||||
shared memory dictionary directly.
|
||||
"""
|
||||
try:
|
||||
logging.debug(f"Received WebSocket message: {message}")
|
||||
channel = message.get("channel")
|
||||
|
||||
# --- Parser 1: Handle Best Bid/Offer messages ---
|
||||
if channel == "bbo":
|
||||
data = message.get("data")
|
||||
if not data:
|
||||
logging.warning("BBO message received with no data.")
|
||||
return
|
||||
|
||||
coin = data.get("coin")
|
||||
if not coin:
|
||||
logging.warning("BBO data received with no coin identifier.")
|
||||
return
|
||||
|
||||
bid_ask_data = data.get("bbo")
|
||||
|
||||
if not bid_ask_data or not isinstance(bid_ask_data, list) or len(bid_ask_data) < 2:
|
||||
logging.warning(f"[{coin}] Received BBO message with invalid 'bbo' array: {bid_ask_data}")
|
||||
return
|
||||
|
||||
try:
|
||||
bid_price_str = bid_ask_data[0].get('px')
|
||||
ask_price_str = bid_ask_data[1].get('px')
|
||||
|
||||
if not bid_price_str or not ask_price_str:
|
||||
logging.warning(f"[{coin}] BBO data missing 'px' field.")
|
||||
return
|
||||
|
||||
bid_price = float(bid_price_str)
|
||||
ask_price = float(ask_price_str)
|
||||
|
||||
# Update the shared dictionary for Bid and Ask
|
||||
shared_prices_dict[f"{coin}_bid"] = bid_price
|
||||
shared_prices_dict[f"{coin}_ask"] = ask_price
|
||||
|
||||
logging.info(f"Updated {coin} (BBO): Bid={bid_price:.4f}, Ask={ask_price:.4f}")
|
||||
|
||||
except (ValueError, TypeError, IndexError) as e:
|
||||
logging.error(f"[{coin}] Error parsing BBO data: {e}. Data: {bid_ask_data}")
|
||||
|
||||
# --- Parser 2: Handle Live Trade messages ---
|
||||
elif channel == "trades":
|
||||
trade_list = message.get("data")
|
||||
|
||||
if not trade_list or not isinstance(trade_list, list) or len(trade_list) == 0:
|
||||
logging.warning(f"Received 'trades' message with invalid data: {trade_list}")
|
||||
return
|
||||
|
||||
# Process all trades in the batch
|
||||
for trade in trade_list:
|
||||
try:
|
||||
coin = trade.get("coin")
|
||||
price_str = trade.get("px")
|
||||
|
||||
if not coin or not price_str:
|
||||
logging.warning(f"Trade data missing 'coin' or 'px': {trade}")
|
||||
continue
|
||||
|
||||
price = float(price_str)
|
||||
|
||||
# Update the shared dictionary for the "Live Price" column
|
||||
shared_prices_dict[coin] = price
|
||||
|
||||
logging.info(f"Updated {coin} (Live Price) to last trade: {price:.4f}")
|
||||
|
||||
except (ValueError, TypeError) as e:
|
||||
logging.error(f"Error parsing trade data: {e}. Data: {trade}")
|
||||
|
||||
if message.get("channel") == "allMids":
|
||||
new_prices = message.get("data", {}).get("mids", {})
|
||||
# Update the shared dictionary with the new price data
|
||||
shared_prices_dict.update(new_prices)
|
||||
except Exception as e:
|
||||
log_error(f"Error in WebSocket on_message: {e}")
|
||||
# It's important to log errors inside the process
|
||||
logging.error(f"Error in WebSocket on_message: {e}")
|
||||
|
||||
def start_live_feed(shared_prices_dict, coins_to_watch: list, log_level='off'):
|
||||
def start_live_feed(shared_prices_dict, log_level='off'):
|
||||
"""
|
||||
Main function for the WebSocket process.
|
||||
Subscribes to BOTH 'bbo' and 'trades' for all watched coins.
|
||||
Main function for the WebSocket process. It takes a shared dictionary
|
||||
and continuously feeds it with live market data.
|
||||
"""
|
||||
setup_logging(log_level, 'LiveMarketFeed_Combined')
|
||||
setup_logging(log_level, 'LiveMarketFeed')
|
||||
|
||||
info = None
|
||||
# The Info object manages the WebSocket connection.
|
||||
info = Info(constants.MAINNET_API_URL, skip_ws=False)
|
||||
|
||||
# We need to wrap the callback in a lambda to pass our shared dictionary
|
||||
callback = lambda msg: on_message(msg, shared_prices_dict)
|
||||
|
||||
def connect_and_subscribe():
|
||||
"""Establishes a new WebSocket connection and subscribes to both streams."""
|
||||
try:
|
||||
logging.info("Connecting to Hyperliquid WebSocket...")
|
||||
new_info = Info(constants.MAINNET_API_URL, skip_ws=False)
|
||||
|
||||
# --- MODIFIED: Subscribe to 'bbo' AND 'trades' for each coin ---
|
||||
for coin in coins_to_watch:
|
||||
# Subscribe to Best Bid/Offer
|
||||
bbo_sub = {"type": "bbo", "coin": coin}
|
||||
new_info.subscribe(bbo_sub, callback)
|
||||
logging.info(f"Subscribed to 'bbo' for {coin}.")
|
||||
|
||||
# Subscribe to Live Trades
|
||||
trades_sub = {"type": "trades", "coin": coin}
|
||||
new_info.subscribe(trades_sub, callback)
|
||||
logging.info(f"Subscribed to 'trades' for {coin}.")
|
||||
|
||||
logging.info("WebSocket connected and all subscriptions sent.")
|
||||
return new_info
|
||||
except Exception as e:
|
||||
log_error(f"Failed to connect to WebSocket: {e}")
|
||||
return None
|
||||
|
||||
info = connect_and_subscribe()
|
||||
|
||||
if info is None:
|
||||
logging.critical("Initial WebSocket connection failed. Exiting process.")
|
||||
log_error("Initial WebSocket connection failed. Exiting process.", include_traceback=False)
|
||||
time.sleep(10) # Wait before letting the process manager restart it
|
||||
return
|
||||
|
||||
logging.info("Starting Combined (BBO + Trades) live price feed process.")
|
||||
|
||||
# Subscribe to the allMids channel
|
||||
subscription = {"type": "allMids"}
|
||||
info.subscribe(subscription, callback)
|
||||
logging.info("Subscribed to 'allMids' for live mark prices.")
|
||||
|
||||
logging.info("Starting live price feed process. Press Ctrl+C in main app to stop.")
|
||||
try:
|
||||
# The background thread in the SDK handles messages. This loop just keeps the process alive.
|
||||
while True:
|
||||
# --- Watchdog Logic ---
|
||||
time.sleep(15) # Check the connection every 15 seconds
|
||||
|
||||
if not info.ws_manager.is_alive():
|
||||
error_msg = "WebSocket connection lost. Attempting to reconnect..."
|
||||
logging.warning(error_msg)
|
||||
log_error(error_msg, include_traceback=False) # Log it to the file
|
||||
|
||||
try:
|
||||
info.ws_manager.stop() # Clean up old manager
|
||||
except Exception as e:
|
||||
log_error(f"Error stopping old ws_manager: {e}")
|
||||
|
||||
info = connect_and_subscribe()
|
||||
|
||||
if info is None:
|
||||
logging.error("Reconnect failed, will retry in 15s.")
|
||||
else:
|
||||
logging.info("Successfully reconnected to WebSocket.")
|
||||
else:
|
||||
logging.debug("Watchdog check: WebSocket connection is active.")
|
||||
|
||||
time.sleep(1)
|
||||
except KeyboardInterrupt:
|
||||
logging.info("Stopping WebSocket listener...")
|
||||
except Exception as e:
|
||||
log_error(f"Live Market Feed process crashed: {e}")
|
||||
finally:
|
||||
if info and info.ws_manager:
|
||||
info.ws_manager.stop()
|
||||
logging.info("Combined Listener stopped.")
|
||||
|
||||
logging.info("Listener stopped.")
|
||||
|
||||
576
main_app.py
576
main_app.py
@ -9,26 +9,21 @@ import schedule
|
||||
import sqlite3
|
||||
import pandas as pd
|
||||
from datetime import datetime, timezone
|
||||
import importlib
|
||||
# --- REMOVED: import signal ---
|
||||
# --- REMOVED: from queue import Empty ---
|
||||
|
||||
from logging_utils import setup_logging
|
||||
# --- Using the new high-performance WebSocket utility for live prices ---
|
||||
# --- Using the high-performance WebSocket utility for live prices ---
|
||||
from live_market_utils import start_live_feed
|
||||
# --- Import the base class for type hinting (optional but good practice) ---
|
||||
from strategies.base_strategy import BaseStrategy
|
||||
|
||||
# --- Configuration ---
|
||||
WATCHED_COINS = ["BTC", "ETH", "SOL", "BNB", "HYPE", "ASTER", "ZEC", "PUMP", "SUI"]
|
||||
# --- FIX: Replaced old data_fetcher with the new live_candle_fetcher ---
|
||||
LIVE_CANDLE_FETCHER_SCRIPT = "live_candle_fetcher.py"
|
||||
RESAMPLER_SCRIPT = "resampler.py"
|
||||
# --- REMOVED: Market Cap Fetcher ---
|
||||
# --- REMOVED: trade_executor.py is no longer a script ---
|
||||
DASHBOARD_DATA_FETCHER_SCRIPT = "dashboard_data_fetcher.py"
|
||||
MARKET_CAP_FETCHER_SCRIPT = "market_cap_fetcher.py"
|
||||
TRADE_EXECUTOR_SCRIPT = "trade_executor.py"
|
||||
STRATEGY_CONFIG_FILE = os.path.join("_data", "strategies.json")
|
||||
DB_PATH = os.path.join("_data", "market_data.db")
|
||||
# --- REMOVED: Market Cap File ---
|
||||
MARKET_CAP_SUMMARY_FILE = os.path.join("_data", "market_cap_data.json")
|
||||
LOGS_DIR = "_logs"
|
||||
TRADE_EXECUTOR_STATUS_FILE = os.path.join(LOGS_DIR, "trade_executor_status.json")
|
||||
|
||||
@ -48,62 +43,25 @@ def format_market_cap(mc_value):
|
||||
|
||||
def run_live_candle_fetcher():
|
||||
"""Target function to run the live_candle_fetcher.py script in a resilient loop."""
|
||||
|
||||
# --- GRACEFUL SHUTDOWN HANDLER ---
|
||||
import signal
|
||||
shutdown_requested = False
|
||||
|
||||
def handle_shutdown_signal(signum, frame):
|
||||
nonlocal shutdown_requested
|
||||
# Use print here as logging may not be set up
|
||||
print(f"[CandleFetcher] Shutdown signal ({signum}) received. Will stop after current run.")
|
||||
shutdown_requested = True
|
||||
|
||||
signal.signal(signal.SIGTERM, handle_shutdown_signal)
|
||||
signal.signal(signal.SIGINT, handle_shutdown_signal)
|
||||
# --- END GRACEFUL SHUTDOWN HANDLER ---
|
||||
|
||||
log_file = os.path.join(LOGS_DIR, "live_candle_fetcher.log")
|
||||
|
||||
while not shutdown_requested: # <-- MODIFIED
|
||||
process = None
|
||||
while True:
|
||||
try:
|
||||
with open(log_file, 'a') as f:
|
||||
command = [sys.executable, LIVE_CANDLE_FETCHER_SCRIPT, "--coins"] + WATCHED_COINS + ["--log-level", "off"]
|
||||
f.write(f"\n--- Starting {LIVE_CANDLE_FETCHER_SCRIPT} at {datetime.now()} ---\n")
|
||||
|
||||
# Use Popen instead of run to be non-blocking
|
||||
process = subprocess.Popen(command, stdout=f, stderr=subprocess.STDOUT)
|
||||
|
||||
# Poll the process and check for shutdown request
|
||||
while process.poll() is None and not shutdown_requested:
|
||||
time.sleep(0.5) # Poll every 500ms
|
||||
|
||||
if shutdown_requested and process.poll() is None:
|
||||
print(f"[CandleFetcher] Terminating subprocess {LIVE_CANDLE_FETCHER_SCRIPT}...")
|
||||
process.terminate() # Terminate the child script
|
||||
process.wait() # Wait for it to exit
|
||||
print(f"[CandleFetcher] Subprocess terminated.")
|
||||
|
||||
subprocess.run(command, check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||
except (subprocess.CalledProcessError, Exception) as e:
|
||||
if shutdown_requested:
|
||||
break # Don't restart if we're shutting down
|
||||
with open(log_file, 'a') as f:
|
||||
f.write(f"\n--- PROCESS ERROR at {datetime.now()} ---\n")
|
||||
f.write(f"Live candle fetcher failed: {e}. Restarting...\n")
|
||||
time.sleep(5)
|
||||
|
||||
if shutdown_requested:
|
||||
break # Exit outer loop
|
||||
|
||||
print("[CandleFetcher] Live candle fetcher shutting down.")
|
||||
|
||||
|
||||
def run_resampler_job(timeframes_to_generate: list):
|
||||
"""Defines the job for the resampler, redirecting output to a log file."""
|
||||
log_file = os.path.join(LOGS_DIR, "resampler.log")
|
||||
try:
|
||||
command = [sys.executable, RESAMPLER_SCRIPT, "--coins"] + WATCHED_COINS + ["--timeframes"] + timeframes_to_generate + ["--log-level", "normal"]
|
||||
command = [sys.executable, RESAMPLER_SCRIPT, "--coins"] + WATCHED_COINS + ["--timeframes"] + timeframes_to_generate + ["--log-level", "off"]
|
||||
with open(log_file, 'a') as f:
|
||||
f.write(f"\n--- Starting resampler.py job at {datetime.now()} ---\n")
|
||||
subprocess.run(command, check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||
@ -115,231 +73,65 @@ def run_resampler_job(timeframes_to_generate: list):
|
||||
|
||||
def resampler_scheduler(timeframes_to_generate: list):
|
||||
"""Schedules the resampler.py script."""
|
||||
|
||||
# --- GRACEFUL SHUTDOWN HANDLER ---
|
||||
import signal
|
||||
shutdown_requested = False
|
||||
|
||||
def handle_shutdown_signal(signum, frame):
|
||||
nonlocal shutdown_requested
|
||||
try:
|
||||
logging.info(f"Shutdown signal ({signum}) received. Exiting loop...")
|
||||
except NameError:
|
||||
print(f"[ResamplerScheduler] Shutdown signal ({signum}) received. Exiting loop...")
|
||||
shutdown_requested = True
|
||||
|
||||
signal.signal(signal.SIGTERM, handle_shutdown_signal)
|
||||
signal.signal(signal.SIGINT, handle_shutdown_signal)
|
||||
# --- END GRACEFUL SHUTDOWN HANDLER ---
|
||||
|
||||
setup_logging('off', 'ResamplerScheduler')
|
||||
run_resampler_job(timeframes_to_generate)
|
||||
# Schedule to run every minute at the :01 second mark
|
||||
schedule.every().minute.at(":01").do(run_resampler_job, timeframes_to_generate=timeframes_to_generate)
|
||||
logging.info("Resampler scheduled to run every minute at :01.")
|
||||
|
||||
while not shutdown_requested: # <-- MODIFIED
|
||||
schedule.every(4).minutes.do(run_resampler_job, timeframes_to_generate)
|
||||
while True:
|
||||
schedule.run_pending()
|
||||
time.sleep(0.5) # Check every 500ms to not miss the scheduled time and be responsive
|
||||
|
||||
logging.info("ResamplerScheduler shutting down.")
|
||||
time.sleep(1)
|
||||
|
||||
|
||||
# --- REMOVED: run_market_cap_fetcher_job function ---
|
||||
|
||||
# --- REMOVED: market_cap_fetcher_scheduler function ---
|
||||
|
||||
|
||||
def run_trade_executor(order_execution_queue: multiprocessing.Queue):
|
||||
"""
|
||||
Target function to run the TradeExecutor class in a resilient loop.
|
||||
It now consumes from the order_execution_queue.
|
||||
"""
|
||||
|
||||
# --- GRACEFUL SHUTDOWN HANDLER ---
|
||||
import signal
|
||||
|
||||
def handle_shutdown_signal(signum, frame):
|
||||
# We can just raise KeyboardInterrupt, as it's handled below
|
||||
logging.info(f"Shutdown signal ({signum}) received. Initiating graceful exit...")
|
||||
raise KeyboardInterrupt
|
||||
|
||||
signal.signal(signal.SIGTERM, handle_shutdown_signal)
|
||||
# --- END GRACEFUL SHUTDOWN HANDLER ---
|
||||
|
||||
log_file_path = os.path.join(LOGS_DIR, "trade_executor.log")
|
||||
def run_market_cap_fetcher_job():
|
||||
"""Defines the job for the market cap fetcher, redirecting output."""
|
||||
log_file = os.path.join(LOGS_DIR, "market_cap_fetcher.log")
|
||||
try:
|
||||
sys.stdout = open(log_file_path, 'a', buffering=1)
|
||||
sys.stderr = sys.stdout
|
||||
command = [sys.executable, MARKET_CAP_FETCHER_SCRIPT, "--coins"] + WATCHED_COINS + ["--log-level", "off"]
|
||||
with open(log_file, 'a') as f:
|
||||
f.write(f"\n--- Starting {MARKET_CAP_FETCHER_SCRIPT} job at {datetime.now()} ---\n")
|
||||
subprocess.run(command, check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||
except Exception as e:
|
||||
print(f"Failed to open log file for TradeExecutor: {e}")
|
||||
with open(log_file, 'a') as f:
|
||||
f.write(f"\n--- SCHEDULER ERROR at {datetime.now()} ---\n")
|
||||
f.write(f"Failed to run {MARKET_CAP_FETCHER_SCRIPT} job: {e}\n")
|
||||
|
||||
setup_logging('normal', f"TradeExecutor")
|
||||
logging.info("\n--- Starting Trade Executor process ---")
|
||||
|
||||
def market_cap_fetcher_scheduler():
|
||||
"""Schedules the market_cap_fetcher.py script to run daily at a specific UTC time."""
|
||||
setup_logging('off', 'MarketCapScheduler')
|
||||
schedule.every().day.at("00:15", "UTC").do(run_market_cap_fetcher_job)
|
||||
while True:
|
||||
try:
|
||||
from trade_executor import TradeExecutor
|
||||
|
||||
executor = TradeExecutor(log_level="normal", order_execution_queue=order_execution_queue)
|
||||
|
||||
# --- REVERTED: Call executor.run() directly ---
|
||||
executor.run()
|
||||
|
||||
except KeyboardInterrupt:
|
||||
logging.info("Trade Executor interrupted. Exiting.")
|
||||
return
|
||||
except Exception as e:
|
||||
logging.error(f"Trade Executor failed: {e}. Restarting...\n", exc_info=True)
|
||||
time.sleep(10)
|
||||
|
||||
def run_position_manager(trade_signal_queue: multiprocessing.Queue, order_execution_queue: multiprocessing.Queue):
|
||||
"""
|
||||
Target function to run the PositionManager class in a resilient loop.
|
||||
Consumes from trade_signal_queue, produces for order_execution_queue.
|
||||
"""
|
||||
|
||||
# --- GRACEFUL SHUTDOWN HANDLER ---
|
||||
import signal
|
||||
|
||||
def handle_shutdown_signal(signum, frame):
|
||||
# Raise KeyboardInterrupt, as it's handled by the loop
|
||||
logging.info(f"Shutdown signal ({signum}) received. Initiating graceful exit...")
|
||||
raise KeyboardInterrupt
|
||||
|
||||
signal.signal(signal.SIGTERM, handle_shutdown_signal)
|
||||
# --- END GRACEFUL SHUTDOWN HANDLER ---
|
||||
|
||||
log_file_path = os.path.join(LOGS_DIR, "position_manager.log")
|
||||
try:
|
||||
sys.stdout = open(log_file_path, 'a', buffering=1)
|
||||
sys.stderr = sys.stdout
|
||||
except Exception as e:
|
||||
print(f"Failed to open log file for PositionManager: {e}")
|
||||
|
||||
setup_logging('normal', f"PositionManager")
|
||||
logging.info("\n--- Starting Position Manager process ---")
|
||||
|
||||
while True:
|
||||
try:
|
||||
from position_manager import PositionManager
|
||||
|
||||
manager = PositionManager(
|
||||
log_level="normal",
|
||||
trade_signal_queue=trade_signal_queue,
|
||||
order_execution_queue=order_execution_queue
|
||||
)
|
||||
|
||||
# --- REVERTED: Call manager.run() directly ---
|
||||
manager.run()
|
||||
|
||||
except KeyboardInterrupt:
|
||||
logging.info("Position Manager interrupted. Exiting.")
|
||||
return
|
||||
except Exception as e:
|
||||
logging.error(f"Position Manager failed: {e}. Restarting...\n", exc_info=True)
|
||||
time.sleep(10)
|
||||
schedule.run_pending()
|
||||
time.sleep(60)
|
||||
|
||||
|
||||
def run_strategy(strategy_name: str, config: dict, trade_signal_queue: multiprocessing.Queue):
|
||||
"""
|
||||
This function BECOMES the strategy runner. It is executed as a separate
|
||||
process and pushes signals to the shared queue.
|
||||
"""
|
||||
# These imports only happen in the new, lightweight process
|
||||
import importlib
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
import logging
|
||||
import signal # <-- ADDED
|
||||
from logging_utils import setup_logging
|
||||
from strategies.base_strategy import BaseStrategy
|
||||
|
||||
# --- GRACEFUL SHUTDOWN HANDLER ---
|
||||
def handle_shutdown_signal(signum, frame):
|
||||
# Raise KeyboardInterrupt, as it's handled by the loop
|
||||
try:
|
||||
logging.info(f"Shutdown signal ({signum}) received. Initiating graceful exit...")
|
||||
except NameError:
|
||||
print(f"[Strategy-{strategy_name}] Shutdown signal ({signum}) received. Initiating graceful exit...")
|
||||
raise KeyboardInterrupt
|
||||
|
||||
signal.signal(signal.SIGTERM, handle_shutdown_signal)
|
||||
# --- END GRACEFUL SHUTDOWN HANDLER ---
|
||||
|
||||
# --- Setup logging to file for this specific process ---
|
||||
log_file_path = os.path.join(LOGS_DIR, f"strategy_{strategy_name}.log")
|
||||
try:
|
||||
sys.stdout = open(log_file_path, 'a', buffering=1) # 1 = line buffering
|
||||
sys.stderr = sys.stdout
|
||||
except Exception as e:
|
||||
print(f"Failed to open log file for {strategy_name}: {e}")
|
||||
|
||||
setup_logging('normal', f"Strategy-{strategy_name}")
|
||||
|
||||
while True:
|
||||
try:
|
||||
logging.info(f"--- Starting strategy '{strategy_name}' ---")
|
||||
|
||||
if 'class' not in config:
|
||||
logging.error(f"Strategy config for '{strategy_name}' is missing the 'class' key. Exiting.")
|
||||
return
|
||||
|
||||
module_path, class_name = config['class'].rsplit('.', 1)
|
||||
module = importlib.import_module(module_path)
|
||||
StrategyClass = getattr(module, class_name)
|
||||
|
||||
strategy = StrategyClass(strategy_name, config['parameters'], trade_signal_queue)
|
||||
|
||||
if config.get("is_event_driven", False):
|
||||
logging.info(f"Starting EVENT-DRIVEN logic loop...")
|
||||
strategy.run_event_loop() # This is a blocking call
|
||||
else:
|
||||
logging.info(f"Starting POLLING logic loop...")
|
||||
strategy.run_polling_loop() # This is the original blocking call
|
||||
|
||||
# --- REVERTED: Added back simple KeyboardInterrupt handler ---
|
||||
except KeyboardInterrupt:
|
||||
logging.info(f"Strategy {strategy_name} process stopping.")
|
||||
return
|
||||
except Exception as e:
|
||||
# --- REVERTED: Removed specific check for KeyboardInterrupt ---
|
||||
logging.error(f"Strategy '{strategy_name}' failed: {e}", exc_info=True)
|
||||
logging.info("Restarting strategy in 10 seconds...")
|
||||
time.sleep(10)
|
||||
|
||||
|
||||
def run_dashboard_data_fetcher():
|
||||
"""Target function to run the dashboard_data_fetcher.py script."""
|
||||
|
||||
# --- GRACEFUL SHUTDOWN HANDLER ---
|
||||
import signal
|
||||
|
||||
def handle_shutdown_signal(signum, frame):
|
||||
# Raise KeyboardInterrupt, as it's handled by the loop
|
||||
try:
|
||||
logging.info(f"Shutdown signal ({signum}) received. Initiating graceful exit...")
|
||||
except NameError:
|
||||
print(f"[DashboardDataFetcher] Shutdown signal ({signum}) received. Initiating graceful exit...")
|
||||
raise KeyboardInterrupt
|
||||
|
||||
signal.signal(signal.SIGTERM, handle_shutdown_signal)
|
||||
# --- END GRACEFUL SHUTDOWN HANDLER ---
|
||||
|
||||
log_file = os.path.join(LOGS_DIR, "dashboard_data_fetcher.log")
|
||||
def run_strategy(strategy_name: str, config: dict):
|
||||
"""Target function to run a strategy, redirecting its output to a log file."""
|
||||
log_file = os.path.join(LOGS_DIR, f"strategy_{strategy_name}.log")
|
||||
script_name = config['script']
|
||||
command = [sys.executable, script_name, "--name", strategy_name, "--log-level", "normal"]
|
||||
while True:
|
||||
try:
|
||||
with open(log_file, 'a') as f:
|
||||
f.write(f"\n--- Starting Dashboard Data Fetcher at {datetime.now()} ---\n")
|
||||
subprocess.run([sys.executable, DASHBOARD_DATA_FETCHER_SCRIPT, "--log-level", "normal"], check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||
except KeyboardInterrupt: # --- MODIFIED: Added to catch interrupt ---
|
||||
logging.info("Dashboard Data Fetcher stopping.")
|
||||
break
|
||||
f.write(f"\n--- Starting strategy '{strategy_name}' at {datetime.now()} ---\n")
|
||||
subprocess.run(command, check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||
except (subprocess.CalledProcessError, Exception) as e:
|
||||
with open(log_file, 'a') as f:
|
||||
f.write(f"\n--- PROCESS ERROR at {datetime.now()} ---\n")
|
||||
f.write(f"Dashboard Data Fetcher failed: {e}. Restarting...\n")
|
||||
f.write(f"Strategy '{strategy_name}' failed: {e}. Restarting...\n")
|
||||
time.sleep(10)
|
||||
|
||||
def run_trade_executor():
|
||||
"""Target function to run the trade_executor.py script in a resilient loop."""
|
||||
log_file = os.path.join(LOGS_DIR, "trade_executor.log")
|
||||
while True:
|
||||
try:
|
||||
with open(log_file, 'a') as f:
|
||||
f.write(f"\n--- Starting Trade Executor at {datetime.now()} ---\n")
|
||||
subprocess.run([sys.executable, TRADE_EXECUTOR_SCRIPT, "--log-level", "normal"], check=True, stdout=f, stderr=subprocess.STDOUT)
|
||||
except (subprocess.CalledProcessError, Exception) as e:
|
||||
with open(log_file, 'a') as f:
|
||||
f.write(f"\n--- PROCESS ERROR at {datetime.now()} ---\n")
|
||||
f.write(f"Trade Executor failed: {e}. Restarting...\n")
|
||||
time.sleep(10)
|
||||
|
||||
|
||||
@ -348,7 +140,7 @@ class MainApp:
|
||||
self.watched_coins = coins_to_watch
|
||||
self.shared_prices = shared_prices
|
||||
self.prices = {}
|
||||
# --- REMOVED: self.market_caps ---
|
||||
self.market_caps = {}
|
||||
self.open_positions = {}
|
||||
self.background_processes = processes
|
||||
self.process_status = {}
|
||||
@ -358,15 +150,23 @@ class MainApp:
|
||||
def read_prices(self):
|
||||
"""Reads the latest prices directly from the shared memory dictionary."""
|
||||
try:
|
||||
# --- FIX: Use .copy() for thread-safe iteration ---
|
||||
self.prices = self.shared_prices.copy()
|
||||
self.prices = dict(self.shared_prices)
|
||||
except Exception as e:
|
||||
logging.debug(f"Could not read from shared prices dict: {e}")
|
||||
|
||||
# --- REMOVED: read_market_caps method ---
|
||||
def read_market_caps(self):
|
||||
if os.path.exists(MARKET_CAP_SUMMARY_FILE):
|
||||
try:
|
||||
with open(MARKET_CAP_SUMMARY_FILE, 'r', encoding='utf-8') as f:
|
||||
summary_data = json.load(f)
|
||||
for coin in self.watched_coins:
|
||||
table_key = f"{coin}_market_cap"
|
||||
if table_key in summary_data:
|
||||
self.market_caps[coin] = summary_data[table_key].get('market_cap')
|
||||
except (json.JSONDecodeError, IOError):
|
||||
logging.debug("Could not read market cap summary file.")
|
||||
|
||||
def read_strategy_statuses(self):
|
||||
"""Reads the status JSON file for each enabled strategy."""
|
||||
enabled_statuses = {}
|
||||
for name, config in self.strategy_configs.items():
|
||||
if config.get("enabled", False):
|
||||
@ -382,84 +182,38 @@ class MainApp:
|
||||
self.strategy_statuses = enabled_statuses
|
||||
|
||||
def read_executor_status(self):
|
||||
"""Reads the live status file from the trade executor."""
|
||||
if os.path.exists(TRADE_EXECUTOR_STATUS_FILE):
|
||||
try:
|
||||
with open(TRADE_EXECUTOR_STATUS_FILE, 'r', encoding='utf-8') as f:
|
||||
# --- FIX: Read the 'open_positions' key from the file ---
|
||||
status_data = json.load(f)
|
||||
self.open_positions = status_data.get('open_positions', {})
|
||||
self.open_positions = json.load(f)
|
||||
except (IOError, json.JSONDecodeError):
|
||||
logging.debug("Could not read trade executor status file.")
|
||||
else:
|
||||
self.open_positions = {}
|
||||
|
||||
def check_process_status(self):
|
||||
"""Checks if the background processes are still running."""
|
||||
for name, process in self.background_processes.items():
|
||||
self.process_status[name] = "Running" if process.is_alive() else "STOPPED"
|
||||
|
||||
def _format_price(self, price_val, width=10):
|
||||
"""Helper function to format prices for the dashboard."""
|
||||
try:
|
||||
price_float = float(price_val)
|
||||
if price_float < 1:
|
||||
price_str = f"{price_float:>{width}.6f}"
|
||||
elif price_float < 100:
|
||||
price_str = f"{price_float:>{width}.4f}"
|
||||
else:
|
||||
price_str = f"{price_float:>{width}.2f}"
|
||||
except (ValueError, TypeError):
|
||||
price_str = f"{'Loading...':>{width}}"
|
||||
return price_str
|
||||
|
||||
def display_dashboard(self):
|
||||
"""Displays a formatted dashboard with side-by-side tables."""
|
||||
print("\x1b[H\x1b[J", end="") # Clear screen
|
||||
print("\x1b[H\x1b[J", end="")
|
||||
|
||||
left_table_lines = ["--- Market Dashboard ---"]
|
||||
# --- MODIFIED: Adjusted width for new columns ---
|
||||
left_table_width = 65
|
||||
left_table_width = 44
|
||||
left_table_lines.append("-" * left_table_width)
|
||||
# --- MODIFIED: Replaced Market Cap with Gap ---
|
||||
left_table_lines.append(f"{'#':<2} | {'Coin':^6} | {'Best Bid':>10} | {'Live Price':>10} | {'Best Ask':>10} | {'Gap':>10} |")
|
||||
left_table_lines.append(f"{'#':<2} | {'Coin':^6} | {'Live Price':>10} | {'Market Cap':>15} |")
|
||||
left_table_lines.append("-" * left_table_width)
|
||||
for i, coin in enumerate(self.watched_coins, 1):
|
||||
# --- MODIFIED: Fetch all three price types ---
|
||||
mid_price = self.prices.get(coin, "Loading...")
|
||||
bid_price = self.prices.get(f"{coin}_bid", "Loading...")
|
||||
ask_price = self.prices.get(f"{coin}_ask", "Loading...")
|
||||
|
||||
# --- MODIFIED: Use the new formatting helper ---
|
||||
formatted_mid = self._format_price(mid_price)
|
||||
formatted_bid = self._format_price(bid_price)
|
||||
formatted_ask = self._format_price(ask_price)
|
||||
|
||||
# --- MODIFIED: Calculate gap ---
|
||||
gap_str = f"{'Loading...':>10}"
|
||||
try:
|
||||
# Calculate the spread
|
||||
gap_val = float(ask_price) - float(bid_price)
|
||||
# Format gap with high precision, similar to price
|
||||
if gap_val < 1:
|
||||
gap_str = f"{gap_val:>{10}.6f}"
|
||||
else:
|
||||
gap_str = f"{gap_val:>{10}.4f}"
|
||||
except (ValueError, TypeError):
|
||||
pass # Keep 'Loading...'
|
||||
|
||||
# --- REMOVED: Market Cap logic ---
|
||||
|
||||
# --- MODIFIED: Print all price columns including gap ---
|
||||
left_table_lines.append(f"{i:<2} | {coin:^6} | {formatted_bid} | {formatted_mid} | {formatted_ask} | {gap_str} |")
|
||||
price = self.prices.get(coin, "Loading...")
|
||||
market_cap = self.market_caps.get(coin)
|
||||
formatted_mc = format_market_cap(market_cap)
|
||||
left_table_lines.append(f"{i:<2} | {coin:^6} | {price:>10} | {formatted_mc:>15} |")
|
||||
left_table_lines.append("-" * left_table_width)
|
||||
|
||||
right_table_lines = ["--- Strategy Status ---"]
|
||||
# --- FIX: Adjusted table width after removing parameters ---
|
||||
right_table_width = 105
|
||||
right_table_width = 154
|
||||
right_table_lines.append("-" * right_table_width)
|
||||
# --- FIX: Removed 'Parameters' from header ---
|
||||
right_table_lines.append(f"{'#':^2} | {'Strategy Name':<25} | {'Coin':^6} | {'Signal':^8} | {'Signal Price':>12} | {'Last Change':>17} | {'TF':^5} | {'Size':^8} |")
|
||||
right_table_lines.append(f"{'#':^2} | {'Strategy Name':<25} | {'Coin':^6} | {'Signal':^8} | {'Signal Price':>12} | {'Last Change':>17} | {'TF':^5} | {'Size':^8} | {'Parameters':<45} |")
|
||||
right_table_lines.append("-" * right_table_width)
|
||||
for i, (name, status) in enumerate(self.strategy_statuses.items(), 1):
|
||||
signal = status.get('current_signal', 'N/A')
|
||||
@ -473,37 +227,13 @@ class MainApp:
|
||||
last_change_display = dt_local.strftime('%Y-%m-%d %H:%M')
|
||||
|
||||
config_params = self.strategy_configs.get(name, {}).get('parameters', {})
|
||||
|
||||
# --- FIX: Read coin/size from status file first, fallback to config ---
|
||||
coin = status.get('coin', config_params.get('coin', 'N/A'))
|
||||
|
||||
# --- FIX: Handle nested 'coins_to_copy' logic for size ---
|
||||
# --- MODIFIED: Read 'size' from status first, then config, then 'Multi' ---
|
||||
size = status.get('size')
|
||||
if not size:
|
||||
if 'coins_to_copy' in config_params:
|
||||
size = 'Multi'
|
||||
else:
|
||||
coin = config_params.get('coin', 'N/A')
|
||||
timeframe = config_params.get('timeframe', 'N/A')
|
||||
size = config_params.get('size', 'N/A')
|
||||
|
||||
timeframe = config_params.get('timeframe', 'N/A')
|
||||
|
||||
# --- FIX: Removed parameter string logic ---
|
||||
|
||||
# --- FIX: Removed 'params_str' from the formatted line ---
|
||||
|
||||
size_display = f"{size:>8}"
|
||||
if isinstance(size, (int, float)):
|
||||
# --- MODIFIED: More flexible size formatting ---
|
||||
if size < 0.0001:
|
||||
size_display = f"{size:>8.6f}"
|
||||
elif size < 1:
|
||||
size_display = f"{size:>8.4f}"
|
||||
else:
|
||||
size_display = f"{size:>8.2f}"
|
||||
# --- END NEW LOGIC ---
|
||||
|
||||
right_table_lines.append(f"{i:^2} | {name:<25} | {coin:^6} | {signal:^8} | {price_display:>12} | {last_change_display:>17} | {timeframe:^5} | {size_display} |")
|
||||
other_params = {k: v for k, v in config_params.items() if k not in ['coin', 'timeframe', 'size']}
|
||||
params_str = ", ".join([f"{k}={v}" for k, v in other_params.items()])
|
||||
right_table_lines.append(f"{i:^2} | {name:<25} | {coin:^6} | {signal:^8} | {price_display:>12} | {last_change_display:>17} | {timeframe:^5} | {size:>8} | {params_str:<45} |")
|
||||
right_table_lines.append("-" * right_table_width)
|
||||
|
||||
output_lines = []
|
||||
@ -521,33 +251,38 @@ class MainApp:
|
||||
output_lines.append(f"{'Account':<10} | {'Coin':<6} | {'Size':>15} | {'Entry Price':>12} | {'Mark Price':>12} | {'PNL':>15} | {'Leverage':>10} |")
|
||||
output_lines.append("-" * pos_table_width)
|
||||
|
||||
# --- FIX: Correctly read and display open positions ---
|
||||
if not self.open_positions:
|
||||
output_lines.append(f"{'No open positions.':^{pos_table_width}}")
|
||||
perps_positions = self.open_positions.get('perpetuals_account', {}).get('open_positions', [])
|
||||
spot_positions = self.open_positions.get('spot_account', {}).get('positions', [])
|
||||
|
||||
if not perps_positions and not spot_positions:
|
||||
output_lines.append("No open positions found.")
|
||||
else:
|
||||
for account, positions in self.open_positions.items():
|
||||
if not positions:
|
||||
continue
|
||||
for coin, pos in positions.items():
|
||||
for pos in perps_positions:
|
||||
try:
|
||||
size_f = float(pos.get('size', 0))
|
||||
entry_f = float(pos.get('entry_price', 0))
|
||||
mark_f = float(self.prices.get(coin, 0))
|
||||
pnl_f = (mark_f - entry_f) * size_f if size_f > 0 else (entry_f - mark_f) * abs(size_f)
|
||||
lev = pos.get('leverage', 1)
|
||||
|
||||
size_str = f"{size_f:>{15}.5f}"
|
||||
entry_str = f"{entry_f:>{12}.2f}"
|
||||
mark_str = f"{mark_f:>{12}.2f}"
|
||||
pnl_str = f"{pnl_f:>{15}.2f}"
|
||||
lev_str = f"{lev}x"
|
||||
|
||||
output_lines.append(f"{account:<10} | {coin:<6} | {size_str} | {entry_str} | {mark_str} | {pnl_str} | {lev_str:>10} |")
|
||||
pnl = float(pos.get('pnl', 0.0))
|
||||
pnl_str = f"${pnl:,.2f}"
|
||||
except (ValueError, TypeError):
|
||||
output_lines.append(f"{account:<10} | {coin:<6} | {'Error parsing data...':^{pos_table_width-20}} |")
|
||||
pnl_str = "Error"
|
||||
|
||||
coin = pos.get('coin') or '-'
|
||||
size = pos.get('size') or '-'
|
||||
entry_price = pos.get('entry_price') or '-'
|
||||
mark_price = pos.get('mark_price') or '-'
|
||||
leverage = pos.get('leverage') or '-'
|
||||
|
||||
output_lines.append(f"{'Perps':<10} | {coin:<6} | {size:>15} | {entry_price:>12} | {mark_price:>12} | {pnl_str:>15} | {leverage:>10} |")
|
||||
|
||||
for pos in spot_positions:
|
||||
pnl = pos.get('pnl', 'N/A')
|
||||
coin = pos.get('coin') or '-'
|
||||
balance_size = pos.get('balance_size') or '-'
|
||||
output_lines.append(f"{'Spot':<10} | {coin:<6} | {balance_size:>15} | {'-':>12} | {'-':>12} | {pnl:>15} | {'-':>10} |")
|
||||
output_lines.append("-" * pos_table_width)
|
||||
|
||||
output_lines.append("\n--- Background Processes ---")
|
||||
for name, status in self.process_status.items():
|
||||
output_lines.append(f"{name:<25}: {status}")
|
||||
|
||||
final_output = "\n".join(output_lines)
|
||||
print(final_output)
|
||||
sys.stdout.flush()
|
||||
@ -556,10 +291,10 @@ class MainApp:
|
||||
"""Main loop to read data, display dashboard, and check processes."""
|
||||
while True:
|
||||
self.read_prices()
|
||||
# --- REMOVED: self.read_market_caps() ---
|
||||
self.read_market_caps()
|
||||
self.read_strategy_statuses()
|
||||
self.read_executor_status()
|
||||
# --- REMOVED: self.check_process_status() ---
|
||||
self.check_process_status()
|
||||
self.display_dashboard()
|
||||
time.sleep(0.5)
|
||||
|
||||
@ -570,7 +305,7 @@ if __name__ == "__main__":
|
||||
os.makedirs(LOGS_DIR)
|
||||
|
||||
processes = {}
|
||||
# --- REVERTED: Removed process groups ---
|
||||
strategy_configs = {}
|
||||
|
||||
try:
|
||||
with open(STRATEGY_CONFIG_FILE, 'r') as f:
|
||||
@ -579,53 +314,32 @@ if __name__ == "__main__":
|
||||
logging.error(f"Could not load strategies from '{STRATEGY_CONFIG_FILE}': {e}")
|
||||
sys.exit(1)
|
||||
|
||||
# --- FIX: Hardcoded timeframes ---
|
||||
required_timeframes = [
|
||||
"3m", "5m", "15m", "30m", "1h", "2h", "4h", "8h",
|
||||
"12h", "1d", "3d", "1w", "1M", "148m", "37m"
|
||||
]
|
||||
logging.info(f"Using fixed timeframes for resampler: {required_timeframes}")
|
||||
required_timeframes = set()
|
||||
for name, config in strategy_configs.items():
|
||||
if config.get("enabled", False):
|
||||
tf = config.get("parameters", {}).get("timeframe")
|
||||
if tf:
|
||||
required_timeframes.add(tf)
|
||||
|
||||
if not required_timeframes:
|
||||
logging.warning("No timeframes required by any enabled strategy.")
|
||||
|
||||
with multiprocessing.Manager() as manager:
|
||||
shared_prices = manager.dict()
|
||||
# --- FIX: Create TWO queues ---
|
||||
trade_signal_queue = manager.Queue()
|
||||
order_execution_queue = manager.Queue()
|
||||
|
||||
# --- REVERTED: All processes are daemon=True and in one dict ---
|
||||
|
||||
# --- FIX: Pass WATCHED_COINS to the start_live_feed process ---
|
||||
# --- MODIFICATION: Set log level back to 'off' ---
|
||||
processes["Live Market Feed"] = multiprocessing.Process(
|
||||
target=start_live_feed,
|
||||
args=(shared_prices, WATCHED_COINS, 'off'),
|
||||
daemon=True
|
||||
)
|
||||
processes["Live Market Feed"] = multiprocessing.Process(target=start_live_feed, args=(shared_prices, 'off'), daemon=True)
|
||||
processes["Live Candle Fetcher"] = multiprocessing.Process(target=run_live_candle_fetcher, daemon=True)
|
||||
processes["Resampler"] = multiprocessing.Process(target=resampler_scheduler, args=(list(required_timeframes),), daemon=True)
|
||||
# --- REMOVED: Market Cap Fetcher Process ---
|
||||
processes["Dashboard Data"] = multiprocessing.Process(target=run_dashboard_data_fetcher, daemon=True)
|
||||
|
||||
processes["Position Manager"] = multiprocessing.Process(
|
||||
target=run_position_manager,
|
||||
args=(trade_signal_queue, order_execution_queue),
|
||||
daemon=True
|
||||
)
|
||||
processes["Trade Executor"] = multiprocessing.Process(
|
||||
target=run_trade_executor,
|
||||
args=(order_execution_queue,),
|
||||
daemon=True
|
||||
)
|
||||
processes["Market Cap Fetcher"] = multiprocessing.Process(target=market_cap_fetcher_scheduler, daemon=True)
|
||||
processes["Trade Executor"] = multiprocessing.Process(target=run_trade_executor, daemon=True)
|
||||
|
||||
for name, config in strategy_configs.items():
|
||||
if config.get("enabled", False):
|
||||
if 'class' not in config:
|
||||
logging.error(f"Strategy '{name}' is missing 'class' key. Skipping.")
|
||||
if not os.path.exists(config['script']):
|
||||
logging.error(f"Strategy script '{config['script']}' for '{name}' not found. Skipping.")
|
||||
continue
|
||||
proc = multiprocessing.Process(target=run_strategy, args=(name, config, trade_signal_queue), daemon=True)
|
||||
processes[f"Strategy: {name}"] = proc # Add to strategy group
|
||||
|
||||
# --- REVERTED: Removed combined dict ---
|
||||
proc = multiprocessing.Process(target=run_strategy, args=(name, config), daemon=True)
|
||||
processes[f"Strategy: {name}"] = proc
|
||||
|
||||
for name, proc in processes.items():
|
||||
logging.info(f"Starting process '{name}'...")
|
||||
@ -637,49 +351,11 @@ if __name__ == "__main__":
|
||||
try:
|
||||
app.run()
|
||||
except KeyboardInterrupt:
|
||||
# --- MODIFIED: Staged shutdown ---
|
||||
logging.info("Shutting down...")
|
||||
|
||||
strategy_procs = {}
|
||||
other_procs = {}
|
||||
for name, proc in processes.items():
|
||||
if name.startswith("Strategy:"):
|
||||
strategy_procs[name] = proc
|
||||
else:
|
||||
other_procs[name] = proc
|
||||
|
||||
# --- 1. Terminate strategy processes ---
|
||||
logging.info("Shutting down strategy processes first...")
|
||||
for name, proc in strategy_procs.items():
|
||||
if proc.is_alive():
|
||||
logging.info(f"Terminating process: '{name}'...")
|
||||
proc.terminate()
|
||||
|
||||
# --- 2. Wait for 5 seconds ---
|
||||
logging.info("Waiting 5 seconds for strategies to close...")
|
||||
time.sleep(5)
|
||||
|
||||
# --- 3. Terminate all other processes ---
|
||||
logging.info("Shutting down remaining core processes...")
|
||||
for name, proc in other_procs.items():
|
||||
if proc.is_alive():
|
||||
logging.info(f"Terminating process: '{name}'...")
|
||||
proc.terminate()
|
||||
|
||||
# --- 4. Join all processes (strategies and others) ---
|
||||
logging.info("Waiting for all processes to join...")
|
||||
for name, proc in processes.items(): # Iterate over the original dict to get all
|
||||
if proc.is_alive():
|
||||
logging.info(f"Waiting for process '{name}' to join...")
|
||||
proc.join(timeout=5) # Wait up to 5 seconds
|
||||
if proc.is_alive():
|
||||
# If it's still alive, force kill
|
||||
logging.warning(f"Process '{name}' did not terminate, forcing kill.")
|
||||
proc.kill()
|
||||
# --- END MODIFIED ---
|
||||
|
||||
for proc in processes.values():
|
||||
if proc.is_alive(): proc.terminate()
|
||||
for proc in processes.values():
|
||||
if proc.is_alive(): proc.join()
|
||||
logging.info("Shutdown complete.")
|
||||
sys.exit(0)
@@ -8,107 +8,48 @@ import requests
import time
from datetime import datetime, timezone, timedelta
import json
from dotenv import load_dotenv

load_dotenv()

# Assuming logging_utils.py is in the same directory
from logging_utils import setup_logging

class MarketCapFetcher:
    """
    Fetches historical daily market cap data from the CoinGecko API and
    intelligently upserts it into the SQLite database for all coins.
    intelligently updates the SQLite database. It processes individual coins,
    aggregates stablecoins, and captures total market cap metrics.
    """

    def __init__(self, log_level: str):
        COIN_ID_MAP = {
            "BTC": "bitcoin",
            "ETH": "ethereum",
            "SOL": "solana",
            "BNB": "binancecoin",
            "HYPE": "hyperliquid",
            "ASTER": "astar",
            "ZEC": "zcash",
            "PUMP": "pump-fun",  # Correct ID is 'pump-fun'
            "SUI": "sui"
        }

        STABLECOIN_ID_MAP = {
            "USDT": "tether",
            "USDC": "usd-coin",
            "USDE": "ethena-usde",
            "DAI": "dai",
            "PYUSD": "paypal-usd"
        }

    def __init__(self, log_level: str, coins: list):
        setup_logging(log_level, 'MarketCapFetcher')
        self.coins_to_fetch = coins
        self.db_path = os.path.join("_data", "market_data.db")
        self.api_base_url = "https://api.coingecko.com/api/v3"
        self.api_key = os.environ.get("COINGECKO_API_KEY")

        if not self.api_key:
            logging.error("CoinGecko API key not found. Please set the COINGECKO_API_KEY environment variable.")
            sys.exit(1)

        self.COIN_ID_MAP = self._load_coin_id_map()
        if not self.COIN_ID_MAP:
            logging.error("Coin ID map is empty. Run 'update_coin_map.py' to generate it.")
            sys.exit(1)

        self.coins_to_fetch = list(self.COIN_ID_MAP.keys())

        self.STABLECOIN_ID_MAP = {
            "USDT": "tether", "USDC": "usd-coin", "USDE": "ethena-usde",
            "DAI": "dai", "PYUSD": "paypal-usd"
        }

        self._ensure_tables_exist()

    def _ensure_tables_exist(self):
        """Ensures all market cap tables exist with timestamp_ms as PRIMARY KEY."""
        all_tables_to_check = [f"{coin}_market_cap" for coin in self.coins_to_fetch]
        all_tables_to_check.extend(["STABLECOINS_market_cap", "TOTAL_market_cap_daily"])

        with sqlite3.connect(self.db_path) as conn:
            for table_name in all_tables_to_check:
                cursor = conn.cursor()
                cursor.execute(f"PRAGMA table_info('{table_name}')")
                columns = cursor.fetchall()
                if columns:
                    pk_found = any(col[1] == 'timestamp_ms' and col[5] == 1 for col in columns)
                    if not pk_found:
                        logging.warning(f"Schema for table '{table_name}' is incorrect. Dropping and recreating table.")
                        try:
                            conn.execute(f'DROP TABLE "{table_name}"')
                            self._create_market_cap_table(conn, table_name)
                            logging.info(f"Successfully recreated schema for '{table_name}'.")
                        except Exception as e:
                            logging.error(f"FATAL: Failed to recreate table '{table_name}': {e}. Please delete 'market_data.db' and restart.")
                            sys.exit(1)
                else:
                    self._create_market_cap_table(conn, table_name)
            logging.info("All market cap table schemas verified.")

    def _create_market_cap_table(self, conn, table_name):
        """Creates a new market cap table with the correct schema."""
        conn.execute(f'''
            CREATE TABLE IF NOT EXISTS "{table_name}" (
                datetime_utc TEXT,
                timestamp_ms INTEGER PRIMARY KEY,
                market_cap REAL
            )
        ''')

    def _load_coin_id_map(self) -> dict:
        """Loads the dynamically generated coin-to-id mapping."""
        map_file_path = os.path.join("_data", "coin_id_map.json")
        try:
            with open(map_file_path, 'r') as f:
                return json.load(f)
        except (FileNotFoundError, json.JSONDecodeError) as e:
            logging.error(f"Could not load '{map_file_path}'. Please run 'update_coin_map.py' first. Error: {e}")
            return {}

    def _upsert_market_cap_data(self, conn, table_name: str, df: pd.DataFrame):
        """Upserts a DataFrame of market cap data into the specified table."""
        if df.empty:
            return

        records_to_upsert = []
        for index, row in df.iterrows():
            records_to_upsert.append((
                row['datetime_utc'].strftime('%Y-%m-%d %H:%M:%S'),
                row['timestamp_ms'],
                row['market_cap']
            ))

        cursor = conn.cursor()
        cursor.executemany(f'''
            INSERT OR REPLACE INTO "{table_name}" (datetime_utc, timestamp_ms, market_cap)
            VALUES (?, ?, ?)
        ''', records_to_upsert)
        conn.commit()
        logging.info(f"Successfully upserted {len(records_to_upsert)} records into '{table_name}'.")
    def run(self):
        """
        Main execution function to process all configured coins and update the database.
@@ -117,6 +58,7 @@ class MarketCapFetcher:
        with sqlite3.connect(self.db_path) as conn:
            conn.execute("PRAGMA journal_mode=WAL;")

            # 1. Process individual coins
            for coin_symbol in self.coins_to_fetch:
                coin_id = self.COIN_ID_MAP.get(coin_symbol.upper())
                if not coin_id:
@@ -129,21 +71,30 @@ class MarketCapFetcher:
                    logging.error(f"An unexpected error occurred while processing {coin_symbol}: {e}")
                time.sleep(2)

            # 2. Process and aggregate stablecoins
            self._update_stablecoin_aggregate(conn)

            # 3. Process total market cap metrics
            self._update_total_market_cap(conn)

            # 4. Save a summary of the latest data
            self._save_summary(conn)

        logging.info("--- Market cap fetch process complete ---")

    def _save_summary(self, conn):
        # ... (This function is unchanged)
        """
        Queries the last record from each market cap table and saves a summary to a JSON file.
        """
        logging.info("--- Generating Market Cap Summary ---")
        summary_data = {}
        summary_file_path = os.path.join("_data", "market_cap_data.json")

        try:
            cursor = conn.cursor()
            cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND (name LIKE '%_market_cap' OR name LIKE 'TOTAL_%');")
            tables = [row[0] for row in cursor.fetchall()]

            for table_name in tables:
                try:
                    df_last = pd.read_sql(f'SELECT * FROM "{table_name}" ORDER BY datetime_utc DESC LIMIT 1', conn)
@@ -151,24 +102,40 @@ class MarketCapFetcher:
                    summary_data[table_name] = df_last.to_dict('records')[0]
                except Exception as e:
                    logging.error(f"Could not read last record from table '{table_name}': {e}")

            if summary_data:
                summary_data['summary_last_updated_utc'] = datetime.now(timezone.utc).isoformat()

                with open(summary_file_path, 'w', encoding='utf-8') as f:
                    json.dump(summary_data, f, indent=4)
                logging.info(f"Successfully saved market cap summary to '{summary_file_path}'")
            else:
                logging.warning("No data found to create a summary.")

        except Exception as e:
            logging.error(f"Failed to generate summary: {e}")

    def _update_total_market_cap(self, conn):
        """Fetches the current total market cap and upserts it for the current date."""
        """
        Fetches the current total market cap and saves it for the current date.
        """
        logging.info("--- Processing Total Market Cap ---")
        table_name = "TOTAL_market_cap_daily"

        try:
            # --- FIX: Use the current date instead of yesterday's ---
            today_date = datetime.now(timezone.utc).date()
            today_dt = pd.to_datetime(today_date)
            today_ts = int(today_dt.timestamp() * 1000)

            cursor = conn.cursor()
            cursor.execute(f"SELECT name FROM sqlite_master WHERE type='table' AND name='{table_name}';")
            table_exists = cursor.fetchone()

            if table_exists:
                # Check if we already have a record for today
                cursor.execute(f"SELECT 1 FROM \"{table_name}\" WHERE date(datetime_utc) = ? LIMIT 1", (today_date.isoformat(),))
                if cursor.fetchone():
                    logging.info(f"Total market cap for {today_date} already exists. Skipping.")
                    return

            logging.info("Fetching current global market data...")
            url = f"{self.api_base_url}/global"
@@ -180,11 +147,10 @@ class MarketCapFetcher:

            if total_mc:
                df_total = pd.DataFrame([{
                    'datetime_utc': today_dt,
                    'timestamp_ms': today_ts,
                    'datetime_utc': pd.to_datetime(today_date),
                    'market_cap': total_mc
                }])
                self._upsert_market_cap_data(conn, table_name, df_total)
                df_total.to_sql(table_name, conn, if_exists='append', index=False)
                logging.info(f"Saved total market cap for {today_date}: ${total_mc:,.2f}")

        except requests.exceptions.RequestException as e:
@@ -192,6 +158,7 @@ class MarketCapFetcher:
        except Exception as e:
            logging.error(f"An error occurred while updating total market cap: {e}")

    def _update_stablecoin_aggregate(self, conn):
        """Fetches data for all stablecoins and saves the aggregated market cap."""
        logging.info("--- Processing aggregated stablecoin market cap ---")
@@ -201,6 +168,7 @@ class MarketCapFetcher:
            logging.info(f"Fetching historical data for stablecoin: {symbol}...")
            df = self._fetch_historical_data(coin_id, days=365)
            if not df.empty:
                df['coin'] = symbol
                all_stablecoin_df = pd.concat([all_stablecoin_df, df])
            time.sleep(2)

@@ -208,30 +176,31 @@ class MarketCapFetcher:
            logging.warning("No data fetched for any stablecoins. Cannot create aggregate.")
            return

        aggregated_df = all_stablecoin_df.groupby('timestamp_ms').agg(
            datetime_utc=('datetime_utc', 'first'),
            market_cap=('market_cap', 'sum')
        ).reset_index()
        aggregated_df = all_stablecoin_df.groupby(all_stablecoin_df['datetime_utc'].dt.date)['market_cap'].sum().reset_index()
        aggregated_df['datetime_utc'] = pd.to_datetime(aggregated_df['datetime_utc'])

        table_name = "STABLECOINS_market_cap"
        last_date_in_db = self._get_last_date_from_db(table_name, conn, is_timestamp_ms=True)
        last_date_in_db = self._get_last_date_from_db(table_name, conn)

        if last_date_in_db:
            aggregated_df = aggregated_df[aggregated_df['timestamp_ms'] > last_date_in_db]
            aggregated_df = aggregated_df[aggregated_df['datetime_utc'] > last_date_in_db]

        if not aggregated_df.empty:
            self._upsert_market_cap_data(conn, table_name, aggregated_df)
            aggregated_df.to_sql(table_name, conn, if_exists='append', index=False)
            logging.info(f"Successfully saved {len(aggregated_df)} daily records to '{table_name}'.")
        else:
            logging.info("Aggregated stablecoin data is already up-to-date.")

    def _update_market_cap_for_coin(self, coin_id: str, coin_symbol: str, conn):
        """Fetches and appends new market cap data for a single coin."""
        table_name = f"{coin_symbol}_market_cap"
        last_date_in_db = self._get_last_date_from_db(table_name, conn, is_timestamp_ms=True)

        last_date_in_db = self._get_last_date_from_db(table_name, conn)

        days_to_fetch = 365
        if last_date_in_db:
            delta_days = (datetime.now(timezone.utc) - datetime.fromtimestamp(last_date_in_db/1000, tz=timezone.utc)).days
            delta_days = (datetime.now() - last_date_in_db).days
            if delta_days <= 0:
                logging.info(f"Market cap data for '{coin_symbol}' is already up-to-date.")
                return
@@ -246,30 +215,24 @@ class MarketCapFetcher:
            return

        if last_date_in_db:
            df = df[df['timestamp_ms'] > last_date_in_db]
            df = df[df['datetime_utc'] > last_date_in_db]

        if not df.empty:
            self._upsert_market_cap_data(conn, table_name, df)
            df.to_sql(table_name, conn, if_exists='append', index=False)
            logging.info(f"Successfully saved {len(df)} new daily market cap records for {coin_symbol}.")
        else:
            logging.info(f"Data was fetched, but no new records needed saving for '{coin_symbol}'.")

    def _get_last_date_from_db(self, table_name: str, conn, is_timestamp_ms: bool = False):
        """Gets the most recent date or timestamp from a market cap table."""
    def _get_last_date_from_db(self, table_name: str, conn) -> pd.Timestamp:
        """Gets the most recent date from a market cap table as a pandas Timestamp."""
        try:
            cursor = conn.cursor()
            cursor.execute(f"SELECT name FROM sqlite_master WHERE type='table' AND name='{table_name}';")
            if not cursor.fetchone():
                return None

            col_to_query = "timestamp_ms" if is_timestamp_ms else "datetime_utc"
            last_val = pd.read_sql(f'SELECT MAX({col_to_query}) FROM "{table_name}"', conn).iloc[0, 0]

            if pd.isna(last_val):
                return None
            if is_timestamp_ms:
                return int(last_val)
            return pd.to_datetime(last_val)

            last_date_str = pd.read_sql(f'SELECT MAX(datetime_utc) FROM "{table_name}"', conn).iloc[0, 0]
            return pd.to_datetime(last_date_str) if last_date_str else None
        except Exception as e:
            logging.error(f"Could not read last date from table '{table_name}': {e}")
            return None
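The `is_timestamp_ms` flag changes the return type (plain `int` milliseconds vs. `pd.Timestamp`), and the call sites above pair each mode with the matching column. A small illustration with made-up values:

```python
import pandas as pd

last_ms = 1762732800000                   # is_timestamp_ms=True  -> int
last_ts = pd.to_datetime("2025-11-10")    # default               -> pd.Timestamp

df = pd.DataFrame({
    "timestamp_ms": [1762646400000, 1762732800000, 1762819200000],
    "datetime_utc": pd.to_datetime(["2025-11-09", "2025-11-10", "2025-11-11"]),
    "market_cap": [0.98e12, 1.00e12, 1.02e12],
})

# Integer comparison pairs with timestamp_ms, Timestamp comparison with
# datetime_utc; both keep only the 2025-11-11 row here.
print(df[df["timestamp_ms"] > last_ms])
print(df[df["datetime_utc"] > last_ts])
```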
@@ -282,7 +245,7 @@ class MarketCapFetcher:

        try:
            logging.debug(f"Fetching last {days} days for {coin_id}...")
            response = requests.get(url, headers=headers, params=params)
            response = requests.get(url, headers=headers)
            response.raise_for_status()
            data = response.json()

@@ -290,16 +253,9 @@ class MarketCapFetcher:
            if not market_caps: return pd.DataFrame()

            df = pd.DataFrame(market_caps, columns=['timestamp_ms', 'market_cap'])

            # --- FIX: Normalize all timestamps to the start of the day (00:00:00 UTC) ---
            # This prevents duplicate entries for the same day (e.g., a "live" candle vs. the daily one)
            df['datetime_utc'] = pd.to_datetime(df['timestamp_ms'], unit='ms').dt.normalize()

            # Recalculate the timestamp_ms to match the normalized 00:00:00 datetime
            df['timestamp_ms'] = (df['datetime_utc'].astype('int64') // 10**6)

            df.drop_duplicates(subset=['timestamp_ms'], keep='last', inplace=True)
            return df[['datetime_utc', 'timestamp_ms', 'market_cap']]
            df['datetime_utc'] = pd.to_datetime(df['timestamp_ms'], unit='ms')
            df.drop_duplicates(subset=['datetime_utc'], keep='last', inplace=True)
            return df[['datetime_utc', 'market_cap']]

        except requests.exceptions.RequestException as e:
            logging.error(f"API request failed for {coin_id}: {e}.")
@@ -308,6 +264,12 @@ class MarketCapFetcher:

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Fetch historical market cap data from CoinGecko.")
    parser.add_argument(
        "--coins",
        nargs='+',
        default=["BTC", "ETH", "SOL", "BNB", "HYPE", "ASTER", "ZEC", "PUMP", "SUI"],
        help="List of coin symbols to fetch (e.g., BTC ETH)."
    )
    parser.add_argument(
        "--log-level",
        default="normal",
@@ -316,6 +278,6 @@ if __name__ == "__main__":
    )
    args = parser.parse_args()

    fetcher = MarketCapFetcher(log_level=args.log_level)
    fetcher = MarketCapFetcher(log_level=args.log_level, coins=args.coins)
    fetcher.run()

@@ -1,2 +0,0 @@
# This file can be empty.
# It tells Python that 'position_logic' is a directory containing modules.
@@ -1,31 +0,0 @@
from abc import ABC, abstractmethod
import logging

class BasePositionLogic(ABC):
    """
    Abstract base class for all strategy-specific position logic.
    Defines the interface for how the PositionManager interacts with logic modules.
    """
    def __init__(self, strategy_name: str, send_order_callback, log_trade_callback):
        self.strategy_name = strategy_name
        self.send_order = send_order_callback
        self.log_trade = log_trade_callback
        logging.info(f"Initialized position logic for '{strategy_name}'")

    @abstractmethod
    def handle_signal(self, signal_data: dict, current_strategy_positions: dict) -> dict:
        """
        The core logic method. This is called by the PositionManager when a
        new signal arrives for this strategy.

        Args:
            signal_data: The full signal dictionary from the strategy.
            current_strategy_positions: A dict of this strategy's current positions,
                keyed by coin (e.g., {"BTC": {"side": "long", ...}}).

        Returns:
            A dictionary representing the new state for the *specific coin* in the
            signal (e.g., {"side": "long", "size": 0.1}).
            Return None to indicate the position for this coin should be closed/removed.
        """
        pass
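A minimal concrete implementation of this interface (the module is removed in this comparison) might have looked like the following sketch; `LongOnlyLogic` is hypothetical, not code from the repo:

```python
from position_logic.base_logic import BasePositionLogic

class LongOnlyLogic(BasePositionLogic):
    """Hypothetical logic module: opens longs on BUY, flattens on anything else."""

    def handle_signal(self, signal_data: dict, current_strategy_positions: dict) -> dict:
        coin = signal_data["coin"]
        size = signal_data["config"]["parameters"].get("size")
        current = current_strategy_positions.get(coin)

        if signal_data["signal"] == "BUY" and not current:
            self.send_order("default", "market_open", coin, is_buy=True, size=size)
            return {"coin": coin, "side": "long", "size": size}
        if signal_data["signal"] != "BUY" and current:
            self.send_order("default", "market_open", coin, is_buy=False,
                            size=current["size"], reduce_only=True)
            return None  # position closed/removed, per the contract above
        return current  # no change
```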
@@ -1,83 +0,0 @@
import logging
from position_logic.base_logic import BasePositionLogic

class DefaultFlipLogic(BasePositionLogic):
    """
    The standard "flip-on-signal" logic used by most simple strategies
    (SMA, MA Cross, and even the per-coin Copy Trader signals).

    - BUY signal: Closes any short, opens a long.
    - SELL signal: Closes any long, opens a short.
    - FLAT signal: Closes any open position.
    """
    def handle_signal(self, signal_data: dict, current_strategy_positions: dict) -> dict:
        """
        Processes a BUY, SELL, or FLAT signal and issues the necessary orders
        to flip or open a position.
        """
        name = self.strategy_name
        params = signal_data['config']['parameters']
        coin = signal_data['coin']
        desired_signal = signal_data['signal']
        signal_price = signal_data.get('signal_price', 0)

        size = params.get('size')
        leverage_long = int(params.get('leverage_long', 2))
        leverage_short = int(params.get('leverage_short', 2))
        agent_name = signal_data['config'].get("agent", "default").lower()

        # --- This logic now correctly targets a specific coin ---
        current_position = current_strategy_positions.get(coin)
        new_position_state = None  # Return None to close position

        if desired_signal == "BUY" or desired_signal == "INIT_BUY":
            new_position_state = {"coin": coin, "side": "long", "size": size}

            if not current_position:
                logging.warning(f"[{name}]-[{coin}] ACTION: Setting leverage to {leverage_long}x and opening LONG.")
                self.send_order(agent_name, "update_leverage", coin, is_buy=True, size=leverage_long)
                self.send_order(agent_name, "market_open", coin, is_buy=True, size=size)
                self.log_trade(strategy=name, coin=coin, action="OPEN_LONG", price=signal_price, size=size, signal=desired_signal)

            elif current_position['side'] == 'short':
                logging.warning(f"[{name}]-[{coin}] ACTION: Closing SHORT and opening LONG with {leverage_long}x leverage.")
                self.send_order(agent_name, "update_leverage", coin, is_buy=True, size=leverage_long)
                self.send_order(agent_name, "market_open", coin, is_buy=True, size=current_position['size'], reduce_only=True)
                self.log_trade(strategy=name, coin=coin, action="CLOSE_SHORT", price=signal_price, size=current_position['size'], signal=desired_signal)
                self.send_order(agent_name, "market_open", coin, is_buy=True, size=size)
                self.log_trade(strategy=name, coin=coin, action="OPEN_LONG", price=signal_price, size=size, signal=desired_signal)

            else:  # Already long, do nothing
                logging.info(f"[{name}]-[{coin}] INFO: Already LONG, no action taken.")
                new_position_state = current_position  # State is unchanged

        elif desired_signal == "SELL" or desired_signal == "INIT_SELL":
            new_position_state = {"coin": coin, "side": "short", "size": size}

            if not current_position:
                logging.warning(f"[{name}]-[{coin}] ACTION: Setting leverage to {leverage_short}x and opening SHORT.")
                self.send_order(agent_name, "update_leverage", coin, is_buy=False, size=leverage_short)
                self.send_order(agent_name, "market_open", coin, is_buy=False, size=size)
                self.log_trade(strategy=name, coin=coin, action="OPEN_SHORT", price=signal_price, size=size, signal=desired_signal)

            elif current_position['side'] == 'long':
                logging.warning(f"[{name}]-[{coin}] ACTION: Closing LONG and opening SHORT with {leverage_short}x leverage.")
                self.send_order(agent_name, "update_leverage", coin, is_buy=False, size=leverage_short)
                self.send_order(agent_name, "market_open", coin, is_buy=False, size=current_position['size'], reduce_only=True)
                self.log_trade(strategy=name, coin=coin, action="CLOSE_LONG", price=signal_price, size=current_position['size'], signal=desired_signal)
                self.send_order(agent_name, "market_open", coin, is_buy=False, size=size)
                self.log_trade(strategy=name, coin=coin, action="OPEN_SHORT", price=signal_price, size=size, signal=desired_signal)

            else:  # Already short, do nothing
                logging.info(f"[{name}]-[{coin}] INFO: Already SHORT, no action taken.")
                new_position_state = current_position  # State is unchanged

        elif desired_signal == "FLAT":
            if current_position:
                logging.warning(f"[{name}]-[{coin}] ACTION: Close {current_position['side']} position.")
                is_buy = current_position['side'] == 'short'  # To close a short, we buy
                self.send_order(agent_name, "market_open", coin, is_buy=is_buy, size=current_position['size'], reduce_only=True)
                self.log_trade(strategy=name, coin=coin, action=f"CLOSE_{current_position['side'].upper()}", price=signal_price, size=current_position['size'], signal=desired_signal)
            # new_position_state is already None, which will remove it

        return new_position_state
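Wiring the flip logic up only requires the two callbacks from the base class. A dry-run sketch with print stubs (hypothetical, and note this module is removed in this comparison, so it illustrates the pre-change behavior):

```python
from position_logic.default_flip_logic import DefaultFlipLogic

def fake_send_order(agent, action, coin, is_buy, size, reduce_only=False):
    print(f"ORDER  {agent} {action} {coin} buy={is_buy} size={size} reduce_only={reduce_only}")

def fake_log_trade(**kwargs):
    print(f"TRADE  {kwargs}")

logic = DefaultFlipLogic("sma_btc", fake_send_order, fake_log_trade)
signal = {
    "coin": "BTC",
    "signal": "BUY",
    "signal_price": 100000.0,  # made-up price
    "config": {"agent": "default",
               "parameters": {"size": 0.1, "leverage_long": 2, "leverage_short": 2}},
}
# No existing position -> sets leverage, opens a long, returns the new state
state = logic.handle_signal(signal, current_strategy_positions={})
print(state)  # {'coin': 'BTC', 'side': 'long', 'size': 0.1}
```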
@@ -1,170 +0,0 @@
import logging
import os
import sys
import json
import time
import multiprocessing
import numpy as np  # Import numpy to handle np.float64

from logging_utils import setup_logging
from trade_log import log_trade

class PositionManager:
    """
    (Stateless) Listens for EXPLICIT signals (e.g., "OPEN_LONG") from all
    strategies and converts them into specific execution orders
    (e.g., "market_open") for the TradeExecutor.

    It holds NO position state.
    """

    def __init__(self, log_level: str, trade_signal_queue: multiprocessing.Queue, order_execution_queue: multiprocessing.Queue):
        # Note: Logging is set up by the run_position_manager function

        self.trade_signal_queue = trade_signal_queue
        self.order_execution_queue = order_execution_queue

        # --- REMOVED: All state management ---

        logging.info("Position Manager (Stateless) started.")

    # --- REMOVED: _load_managed_positions method ---
    # --- REMOVED: _save_managed_positions method ---
    # --- REMOVED: All tick/rounding/meta logic ---

    def send_order(self, agent: str, action: str, coin: str, is_buy: bool, size: float, reduce_only: bool = False, limit_px=None, sl_px=None, tp_px=None):
        """Helper function to put a standardized order onto the execution queue."""
        order_data = {
            "agent": agent,
            "action": action,
            "coin": coin,
            "is_buy": is_buy,
            "size": size,
            "reduce_only": reduce_only,
            "limit_px": limit_px,
            "sl_px": sl_px,
            "tp_px": tp_px,
        }
        logging.info(f"Sending order to executor: {order_data}")
        self.order_execution_queue.put(order_data)

    def run(self):
        """
        Main execution loop. Blocks and waits for a signal from the queue.
        Converts explicit strategy signals into execution orders.
        """
        logging.info("Position Manager started. Waiting for signals...")
        while True:
            try:
                trade_signal = self.trade_signal_queue.get()
                if not trade_signal:
                    continue

                logging.info(f"Received signal: {trade_signal}")

                name = trade_signal['strategy_name']
                config = trade_signal['config']
                params = config['parameters']
                coin = trade_signal['coin'].upper()

                # --- NEW: The signal is now the explicit action ---
                desired_signal = trade_signal['signal']

                status = trade_signal

                signal_price = status.get('signal_price')
                if isinstance(signal_price, np.float64):
                    signal_price = float(signal_price)

                if not signal_price or signal_price <= 0:
                    logging.warning(f"[{name}] Signal received with invalid or missing price ({signal_price}). Skipping.")
                    continue

                # --- This logic is still needed for copy_trader's nested config ---
                # --- But ONLY for finding leverage, not size ---
                if 'coins_to_copy' in params:
                    logging.info(f"[{name}] Detected 'coins_to_copy'. Entering copy_trader logic...")
                    matching_coin_key = None
                    for key in params['coins_to_copy'].keys():
                        if key.upper() == coin:
                            matching_coin_key = key
                            break

                    if matching_coin_key:
                        coin_specific_config = params['coins_to_copy'][matching_coin_key]
                    else:
                        coin_specific_config = {}

                    # --- REMOVED: size = coin_specific_config.get('size') ---

                    params['leverage_long'] = coin_specific_config.get('leverage_long', 2)
                    params['leverage_short'] = coin_specific_config.get('leverage_short', 2)

                # --- FIX: Read the size from the ROOT of the trade signal ---
                size = trade_signal.get('size')
                if not size or size <= 0:
                    logging.error(f"[{name}] Signal received with no 'size' or invalid size ({size}). Skipping trade.")
                    continue
                # --- END FIX ---

                leverage_long = int(params.get('leverage_long', 2))
                leverage_short = int(params.get('leverage_short', 2))

                agent_name = (config.get("agent") or "default").lower()

                logging.info(f"[{name}] Agent set to: {agent_name}")

                # --- REMOVED: current_position check ---

                # --- Use pure signal_price directly for the limit_px ---
                limit_px = signal_price
                logging.info(f"[{name}] Using pure signal price for limit_px: {limit_px}")

                # --- NEW: Stateless Signal-to-Order Conversion ---

                if desired_signal == "OPEN_LONG":
                    logging.warning(f"[{name}] ACTION: Opening LONG for {coin}.")
                    # --- REMOVED: Leverage update signal ---
                    self.send_order(agent_name, "market_open", coin, True, size, limit_px=limit_px)
                    log_trade(strategy=name, coin=coin, action="OPEN_LONG", price=signal_price, size=size, signal=desired_signal)

                elif desired_signal == "OPEN_SHORT":
                    logging.warning(f"[{name}] ACTION: Opening SHORT for {coin}.")
                    # --- REMOVED: Leverage update signal ---
                    self.send_order(agent_name, "market_open", coin, False, size, limit_px=limit_px)
                    log_trade(strategy=name, coin=coin, action="OPEN_SHORT", price=signal_price, size=size, signal=desired_signal)

                elif desired_signal == "CLOSE_LONG":
                    logging.warning(f"[{name}] ACTION: Closing LONG position for {coin}.")
                    # A "market_close" for a LONG is a SELL order
                    self.send_order(agent_name, "market_close", coin, False, size, limit_px=limit_px)
                    log_trade(strategy=name, coin=coin, action="CLOSE_LONG", price=signal_price, size=size, signal=desired_signal)

                elif desired_signal == "CLOSE_SHORT":
                    logging.warning(f"[{name}] ACTION: Closing SHORT position for {coin}.")
                    # A "market_close" for a SHORT is a BUY order
                    self.send_order(agent_name, "market_close", coin, True, size, limit_px=limit_px)
                    log_trade(strategy=name, coin=coin, action="CLOSE_SHORT", price=signal_price, size=size, signal=desired_signal)

                # --- NEW: Handle leverage update signals ---
                elif desired_signal == "UPDATE_LEVERAGE_LONG":
                    logging.warning(f"[{name}] ACTION: Updating LONG leverage for {coin} to {size}x")
                    # 'size' field holds the leverage value for this signal
                    self.send_order(agent_name, "update_leverage", coin, True, size)

                elif desired_signal == "UPDATE_LEVERAGE_SHORT":
                    logging.warning(f"[{name}] ACTION: Updating SHORT leverage for {coin} to {size}x")
                    # 'size' field holds the leverage value for this signal
                    self.send_order(agent_name, "update_leverage", coin, False, size)

                else:
                    logging.warning(f"[{name}] Received unknown signal '{desired_signal}'. No action taken.")

                # --- REMOVED: _save_managed_positions() ---

            except Exception as e:
                logging.error(f"An error occurred in the position manager loop: {e}", exc_info=True)
                time.sleep(1)

# This script is no longer run directly, but is called by main_app.py
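Because the manager is stateless, its behavior is easy to exercise end to end with plain queues. A rough sketch (queue and key names follow the source above; the values are made up):

```python
import multiprocessing

trade_signal_queue = multiprocessing.Queue()
order_execution_queue = multiprocessing.Queue()

# A strategy-side signal shaped like the ones the manager consumes above
trade_signal_queue.put({
    "strategy_name": "sma_btc",
    "signal": "OPEN_LONG",
    "coin": "BTC",
    "signal_price": 100000.0,
    "size": 0.1,
    "config": {"agent": "default", "parameters": {}},
})

# After PositionManager.run() processes it, one standardized order lands on
# order_execution_queue:
# {"agent": "default", "action": "market_open", "coin": "BTC", "is_buy": True,
#  "size": 0.1, "reduce_only": False, "limit_px": 100000.0,
#  "sl_px": None, "tp_px": None}
```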
@@ -1,159 +0,0 @@
import os
import sys
import time
import json
import argparse
from datetime import datetime, timezone
from hyperliquid.info import Info
from hyperliquid.utils import constants
from dotenv import load_dotenv
import logging

from logging_utils import setup_logging

# Load .env file
load_dotenv()

class PositionMonitor:
    """
    A standalone, read-only dashboard for monitoring all open perpetuals
    positions, spot balances, and their associated strategies.
    """

    def __init__(self, log_level: str):
        setup_logging(log_level, 'PositionMonitor')

        self.wallet_address = os.environ.get("MAIN_WALLET_ADDRESS")
        if not self.wallet_address:
            logging.error("MAIN_WALLET_ADDRESS not set in .env file. Cannot proceed.")
            sys.exit(1)

        self.info = Info(constants.MAINNET_API_URL, skip_ws=True)
        self.managed_positions_path = os.path.join("_data", "executor_managed_positions.json")
        self._lines_printed = 0

        logging.info(f"Monitoring vault address: {self.wallet_address}")

    def load_managed_positions(self) -> dict:
        """Loads the state of which strategy manages which position."""
        if os.path.exists(self.managed_positions_path):
            try:
                with open(self.managed_positions_path, 'r') as f:
                    # Create a reverse map: {coin: strategy_name}
                    data = json.load(f)
                    return {v['coin']: k for k, v in data.items()}
            except (IOError, json.JSONDecodeError):
                logging.warning("Could not read managed positions file.")
        return {}

    def run(self):
        """Main loop to continuously refresh the dashboard."""
        try:
            while True:
                self.display_dashboard()
                time.sleep(5)  # Refresh every 5 seconds
        except KeyboardInterrupt:
            logging.info("Position monitor stopped.")

    def display_dashboard(self):
        """Fetches all data and draws the dashboard without blinking."""
        if self._lines_printed > 0:
            print(f"\x1b[{self._lines_printed}A", end="")

        output_lines = []
        try:
            perp_state = self.info.user_state(self.wallet_address)
            spot_state = self.info.spot_user_state(self.wallet_address)
            coin_to_strategy_map = self.load_managed_positions()

            output_lines.append(f"--- Live Position Monitor for {self.wallet_address[:6]}...{self.wallet_address[-4:]} ---")

            # --- 1. Perpetuals Account Summary ---
            margin_summary = perp_state.get('marginSummary', {})
            account_value = float(margin_summary.get('accountValue', 0))
            margin_used = float(margin_summary.get('totalMarginUsed', 0))
            utilization = (margin_used / account_value) * 100 if account_value > 0 else 0

            output_lines.append("\n--- Perpetuals Account Summary ---")
            output_lines.append(f"  Account Value: ${account_value:,.2f} | Margin Used: ${margin_used:,.2f} | Utilization: {utilization:.2f}%")

            # --- 2. Spot Balances Summary ---
            output_lines.append("\n--- Spot Balances ---")
            spot_balances = spot_state.get('balances', [])
            if not spot_balances:
                output_lines.append("  No spot balances found.")
            else:
                balances_str = ", ".join([f"{b.get('coin')}: {float(b.get('total', 0)):,.4f}" for b in spot_balances if float(b.get('total', 0)) > 0])
                output_lines.append(f"  {balances_str}")

            # --- 3. Open Positions Table ---
            output_lines.append("\n--- Open Perpetual Positions ---")
            positions = perp_state.get('assetPositions', [])
            open_positions = [p for p in positions if p.get('position') and float(p['position'].get('szi', 0)) != 0]

            if not open_positions:
                output_lines.append("  No open perpetual positions found.")
                output_lines.append("")  # Add a line for stable refresh
            else:
                self.build_positions_table(open_positions, coin_to_strategy_map, output_lines)

        except Exception as e:
            output_lines = [f"An error occurred: {e}"]

        final_output = "\n".join(output_lines) + "\n\x1b[J"  # \x1b[J clears to end of screen
        print(final_output, end="")

        self._lines_printed = len(output_lines)
        sys.stdout.flush()

    def build_positions_table(self, positions: list, coin_to_strategy_map: dict, output_lines: list):
        """Builds the text for the positions summary table."""
        header = f"| {'Strategy':<25} | {'Coin':<6} | {'Side':<5} | {'Size':>15} | {'Entry Price':>12} | {'Mark Price':>12} | {'PNL':>15} | {'Leverage':>10} |"
        output_lines.append(header)
        output_lines.append("-" * len(header))

        for position in positions:
            pos = position.get('position', {})
            coin = pos.get('coin', 'Unknown')
            size = float(pos.get('szi', 0))
            entry_px = float(pos.get('entryPx', 0))
            mark_px = float(pos.get('markPx', 0))
            unrealized_pnl = float(pos.get('unrealizedPnl', 0))

            # Get leverage
            position_value = float(pos.get('positionValue', 0))
            margin_used = float(pos.get('marginUsed', 0))
            leverage = (position_value / margin_used) if margin_used > 0 else 0

            side_text = "LONG" if size > 0 else "SHORT"
            pnl_sign = "+" if unrealized_pnl >= 0 else ""

            # Find the strategy that owns this coin
            strategy_name = coin_to_strategy_map.get(coin, "Unmanaged")

            # Format all values as strings
            strategy_str = f"{strategy_name:<25}"
            coin_str = f"{coin:<6}"
            side_str = f"{side_text:<5}"
            size_str = f"{size:>15.4f}"
            entry_str = f"${entry_px:>11,.2f}"
            mark_str = f"${mark_px:>11,.2f}"
            pnl_str = f"{pnl_sign}${unrealized_pnl:>14,.2f}"
            lev_str = f"{leverage:>9.1f}x"

            output_lines.append(f"| {strategy_str} | {coin_str} | {side_str} | {size_str} | {entry_str} | {mark_str} | {pnl_str} | {lev_str} |")

        output_lines.append("-" * len(header))

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Monitor a Hyperliquid wallet's positions in real-time.")
    parser.add_argument(
        "--log-level",
        default="normal",
        choices=['off', 'normal', 'debug'],
        help="Set the logging level for the script."
    )
    args = parser.parse_args()

    monitor = PositionMonitor(log_level=args.log_level)
    monitor.run()
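The flicker-free refresh relies on two ANSI escapes: `ESC[nA` moves the cursor up n lines, and `ESC[J` clears from the cursor to the end of the screen. A stripped-down sketch of the same technique:

```python
import sys
import time

lines_printed = 0
for tick in range(3):
    if lines_printed > 0:
        print(f"\x1b[{lines_printed}A", end="")  # cursor up over the old frame
    frame = [f"tick: {tick}", "status: ok"]
    print("\n".join(frame) + "\n\x1b[J", end="")  # redraw, clear leftovers
    lines_printed = len(frame)
    sys.stdout.flush()
    time.sleep(1)
```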
BIN requirements.txt (Binary file not shown)

152 resampler.py
@@ -5,7 +5,7 @@ import sys
import sqlite3
import pandas as pd
import json
from datetime import datetime, timezone, timedelta
from datetime import datetime, timezone

# Assuming logging_utils.py is in the same directory
from logging_utils import setup_logging
@@ -13,8 +13,7 @@ from logging_utils import setup_logging
class Resampler:
    """
    Reads new 1-minute candle data from the SQLite database, resamples it to
    various timeframes, and upserts the new candles to the corresponding tables,
    preventing data duplication.
    various timeframes, and appends the new candles to the corresponding tables.
    """

    def __init__(self, log_level: str, coins: list, timeframes: dict):
@@ -33,62 +32,6 @@ class Resampler:
        }
        self.resampling_status = self._load_existing_status()
        self.job_start_time = None
        self._ensure_tables_exist()

    def _ensure_tables_exist(self):
        """
        Ensures all resampled tables exist with a PRIMARY KEY on timestamp_ms.
        Attempts to migrate existing tables if the schema is incorrect.
        """
        with sqlite3.connect(self.db_path) as conn:
            for coin in self.coins_to_process:
                for tf_name in self.timeframes.keys():
                    table_name = f"{coin}_{tf_name}"
                    cursor = conn.cursor()
                    cursor.execute(f"PRAGMA table_info('{table_name}')")
                    columns = cursor.fetchall()
                    if columns:
                        # --- FIX: Check for the correct PRIMARY KEY on timestamp_ms ---
                        pk_found = any(col[1] == 'timestamp_ms' and col[5] == 1 for col in columns)
                        if not pk_found:
                            logging.warning(f"Schema migration needed for table '{table_name}'.")
                            try:
                                conn.execute(f'ALTER TABLE "{table_name}" RENAME TO "{table_name}_old"')
                                self._create_resampled_table(conn, table_name)
                                # Copy data, ensuring to create the timestamp_ms
                                logging.info(f"  -> Migrating data for '{table_name}'...")
                                old_df = pd.read_sql(f'SELECT * FROM "{table_name}_old"', conn, parse_dates=['datetime_utc'])
                                if not old_df.empty:
                                    old_df['timestamp_ms'] = (old_df['datetime_utc'].astype('int64') // 10**6)
                                    # Keep only unique timestamps, preserving the last entry
                                    old_df.drop_duplicates(subset=['timestamp_ms'], keep='last', inplace=True)
                                    old_df.to_sql(table_name, conn, if_exists='append', index=False)
                                    logging.info(f"  -> Data migration complete.")
                                conn.execute(f'DROP TABLE "{table_name}_old"')
                                conn.commit()
                                logging.info(f"Successfully migrated schema for '{table_name}'.")
                            except Exception as e:
                                logging.error(f"FATAL: Migration for '{table_name}' failed: {e}. Please delete 'market_data.db' and restart.")
                                sys.exit(1)
                    else:
                        self._create_resampled_table(conn, table_name)
            logging.info("All resampled table schemas verified.")

    def _create_resampled_table(self, conn, table_name):
        """Creates a new resampled table with the correct schema."""
        # --- FIX: Set PRIMARY KEY on timestamp_ms for performance and uniqueness ---
        conn.execute(f'''
            CREATE TABLE "{table_name}" (
                datetime_utc TEXT,
                timestamp_ms INTEGER PRIMARY KEY,
                open REAL,
                high REAL,
                low REAL,
                close REAL,
                volume REAL,
                number_of_trades INTEGER
            )
        ''')
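The migration path above is the standard SQLite idiom for changing a primary key, since SQLite cannot alter one in place: rename the old table, create the new schema, copy the rows across, drop the old table. A schematic of the same steps on an in-memory database with a toy table (the fixed `timestamp_ms` value is illustrative; the real code derives it per row in pandas):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "BTC_1h" (datetime_utc TEXT, close REAL)')  # old schema, no PK
conn.execute('INSERT INTO "BTC_1h" VALUES ("2025-11-10 00:00:00", 100000.0)')

conn.execute('ALTER TABLE "BTC_1h" RENAME TO "BTC_1h_old"')
conn.execute('CREATE TABLE "BTC_1h" '
             '(datetime_utc TEXT, timestamp_ms INTEGER PRIMARY KEY, close REAL)')
conn.execute('''INSERT INTO "BTC_1h" (datetime_utc, timestamp_ms, close)
                SELECT datetime_utc, 1762732800000, close FROM "BTC_1h_old"''')
conn.execute('DROP TABLE "BTC_1h_old"')
print(conn.execute('SELECT * FROM "BTC_1h"').fetchall())
```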
    def _load_existing_status(self) -> dict:
        """Loads the existing status file if it exists, otherwise returns an empty dict."""
@@ -108,14 +51,6 @@ class Resampler:
        self.job_start_time = datetime.now(timezone.utc)
        logging.info(f"--- Resampling job started at {self.job_start_time.strftime('%Y-%m-%d %H:%M:%S %Z')} ---")

        if '1m' in self.timeframes:
            logging.debug("Ignoring '1m' timeframe as it is the source resolution.")
            del self.timeframes['1m']

        if not self.timeframes:
            logging.warning("No timeframes to process after filtering. Exiting job.")
            return

        if not os.path.exists(self.db_path):
            logging.error(f"Database file '{self.db_path}' not found.")
            return
@@ -126,58 +61,37 @@ class Resampler:
            logging.debug(f"Processing {len(self.coins_to_process)} coins...")

            for coin in self.coins_to_process:
                source_table_name = f"{coin}_1m"
                logging.debug(f"--- Processing {coin} ---")

                try:
                    # Load the full 1m history once per coin
                    df_1m = pd.read_sql(f'SELECT * FROM "{source_table_name}"', conn, parse_dates=['datetime_utc'])
                    if df_1m.empty:
                        logging.warning(f"Source table '{source_table_name}' is empty. Skipping.")
                        continue
                    df_1m.set_index('datetime_utc', inplace=True)

                    for tf_name, tf_code in self.timeframes.items():
                        target_table_name = f"{coin}_{tf_name}"
                        source_table_name = f"{coin}_1m"
                        logging.debug(f"  Updating {tf_name} table...")

                        last_timestamp_ms = self._get_last_timestamp(conn, target_table_name)
                        last_timestamp = self._get_last_timestamp(conn, target_table_name)

                        query = f'SELECT * FROM "{source_table_name}"'
                        params = ()
                        if last_timestamp_ms:
                            query += ' WHERE timestamp_ms >= ?'
                            # Go back one interval to rebuild the last (potentially partial) candle
                            try:
                                interval_delta_ms = pd.to_timedelta(tf_code).total_seconds() * 1000
                            except ValueError:
                                # Fall back to a safe 32-day lookback for special timeframes
                                interval_delta_ms = timedelta(days=32).total_seconds() * 1000
                        # Get the new 1-minute data that needs to be processed
                        new_df_1m = df_1m[df_1m.index > last_timestamp] if last_timestamp else df_1m

                            query_start_ms = last_timestamp_ms - interval_delta_ms
                            params = (query_start_ms,)

                        df_1m = pd.read_sql(query, conn, params=params, parse_dates=['datetime_utc'])

                        if df_1m.empty:
                        if new_df_1m.empty:
                            logging.debug(f"  -> No new 1-minute data for {tf_name}. Table is up to date.")
                            continue

                        df_1m.set_index('datetime_utc', inplace=True)
                        resampled_df = df_1m.resample(tf_code).agg(self.aggregation_logic)
                        resampled_df = new_df_1m.resample(tf_code).agg(self.aggregation_logic)
                        resampled_df.dropna(how='all', inplace=True)

                        if not resampled_df.empty:
                            records_to_upsert = []
                            for index, row in resampled_df.iterrows():
                                records_to_upsert.append((
                                    index.strftime('%Y-%m-%d %H:%M:%S'),
                                    int(index.timestamp() * 1000),  # Generate timestamp_ms
                                    row['open'], row['high'], row['low'], row['close'],
                                    row['volume'], row['number_of_trades']
                                ))

                            cursor = conn.cursor()
                            cursor.executemany(f'''
                                INSERT OR REPLACE INTO "{target_table_name}" (datetime_utc, timestamp_ms, open, high, low, close, volume, number_of_trades)
                                VALUES (?, ?, ?, ?, ?, ?, ?, ?)
                            ''', records_to_upsert)
                            conn.commit()

                            logging.debug(f"  -> Upserted {len(resampled_df)} candles into '{target_table_name}'.")
                            # Append the newly resampled data to the target table
                            resampled_df.to_sql(target_table_name, conn, if_exists='append', index=True)
                            logging.debug(f"  -> Appended {len(resampled_df)} new candles to '{target_table_name}'.")

                        if coin not in self.resampling_status: self.resampling_status[coin] = {}
                        total_candles = int(self._get_table_count(conn, target_table_name))
@@ -197,6 +111,7 @@ class Resampler:
        """Logs a summary of the total candles for each timeframe."""
        logging.info("--- Resampling Job Summary ---")
        timeframe_totals = {}
        # Iterate through coins, skipping metadata keys
        for coin, tfs in self.resampling_status.items():
            if not isinstance(tfs, dict): continue
            for tf_name, tf_data in tfs.items():
@@ -214,11 +129,9 @@ class Resampler:
            logging.info(f"  - {tf_name:<10}: {total:,} candles")

    def _get_last_timestamp(self, conn, table_name):
        """Gets the millisecond timestamp of the last entry in a table."""
        """Gets the timestamp of the last entry in a table."""
        try:
            # --- FIX: Query for the integer timestamp_ms, not the text datetime_utc ---
            timestamp_ms = pd.read_sql(f'SELECT MAX(timestamp_ms) FROM "{table_name}"', conn).iloc[0, 0]
            return int(timestamp_ms) if pd.notna(timestamp_ms) else None
            return pd.read_sql(f'SELECT MAX(datetime_utc) FROM "{table_name}"', conn).iloc[0, 0]
        except (pd.io.sql.DatabaseError, IndexError):
            return None

@@ -238,6 +151,7 @@ class Resampler:
        self.resampling_status['job_start_time_utc'] = self.job_start_time.strftime('%Y-%m-%d %H:%M:%S')
        self.resampling_status['job_stop_time_utc'] = stop_time.strftime('%Y-%m-%d %H:%M:%S')

        # Clean up old key if it exists from previous versions
        self.resampling_status.pop('last_completed_utc', None)

        try:
@@ -253,24 +167,14 @@ def parse_timeframes(tf_strings: list) -> dict:
    tf_map = {}
    for tf_str in tf_strings:
        numeric_part = ''.join(filter(str.isdigit, tf_str))
        unit = ''.join(filter(str.isalpha, tf_str))  # Keep case for 'M'
        unit = ''.join(filter(str.isalpha, tf_str)).lower()

        key = tf_str
        code = ''
        if unit == 'm':
            code = f"{numeric_part}min"
        elif unit.lower() == 'w':
            code = f"{numeric_part}W-MON"
        elif unit == 'M':
            code = f"{numeric_part}MS"
            key = f"{numeric_part}month"
        elif unit.lower() in ['h', 'd']:
            code = f"{numeric_part}{unit.lower()}"
        else:
            code = tf_str
            logging.warning(f"Unrecognized timeframe unit in '{tf_str}'. Using as-is.")

        tf_map[key] = code
        if unit == 'm': code = f"{numeric_part}min"
        elif unit == 'w': code = f"{numeric_part}W"
        elif unit in ['h', 'd']: code = f"{numeric_part}{unit}"
        else: code = tf_str
        tf_map[tf_str] = code
    return tf_map
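A quick worked example of what `parse_timeframes` produces, assuming the first, case-sensitive variant shown above (where `'m'` means minutes and `'M'` means month-start):

```python
# Usage of parse_timeframes() as defined above:
print(parse_timeframes(["15m", "4h", "1d", "1w", "1M"]))
# {'15m': '15min', '4h': '4h', '1d': '1d', '1w': '1W-MON', '1month': '1MS'}
```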
79 review.md
@@ -1,79 +0,0 @@
# Project Review and Recommendations

This review provides an analysis of the current state of the automated trading bot project, proposes specific code improvements, and identifies files that appear to be unused or are one-off utilities that could be reorganized.

The project is a well-structured, multi-process Python application for crypto trading. It has a clear separation of concerns between data fetching, strategy execution, and trade management. The use of `multiprocessing` and a centralized `main_app.py` orchestrator is a solid architectural choice.

The following sections detail recommendations for improving configuration management, code structure, and robustness, along with a list of files recommended for cleanup.

---

## Proposed Code Changes

### 1. Centralize Configuration

- **Issue:** Key configuration variables like `WATCHED_COINS` and `required_timeframes` are hardcoded in `main_app.py`. This makes them difficult to change without modifying the source code.
- **Proposal:**
  - Create a central configuration file, e.g., `_data/config.json`.
  - Move `WATCHED_COINS` and `required_timeframes` into this new file.
  - Load this configuration in `main_app.py` at startup.
- **Benefit:** Decouples configuration from code, making the application more flexible and easier to manage (a minimal loader sketch follows this section).
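A minimal sketch of this proposal; the file layout and key names are the reviewer's suggestion, not existing code:

```python
import json
import os

CONFIG_PATH = os.path.join("_data", "config.json")
# Expected shape, per the proposal:
# {"WATCHED_COINS": ["BTC", "ETH"], "required_timeframes": ["15m", "1h", "4h"]}

def load_config(path: str = CONFIG_PATH) -> dict:
    """Load the central configuration at startup; fail loudly if it is missing."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

config = load_config()
WATCHED_COINS = config["WATCHED_COINS"]
required_timeframes = config["required_timeframes"]
```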

### 2. Refactor `main_app.py` for Clarity

- **Issue:** `main_app.py` is long and handles multiple responsibilities: process orchestration, dashboard rendering, and data reading.
- **Proposal:**
  - **Abstract Process Management:** The functions for running subprocesses (e.g., `run_live_candle_fetcher`, `run_resampler_job`) contain repetitive logic for logging, shutdown handling, and process looping. This could be abstracted into a generic `ProcessRunner` class.
  - **Create a Dashboard Class:** The complex dashboard rendering logic could be moved into a separate `Dashboard` class to improve separation of concerns and make the main application loop cleaner.
- **Benefit:** Improves code readability, reduces duplication, and makes the application easier to maintain and extend.

### 3. Improve Project Structure

- **Issue:** The root directory is cluttered with numerous Python scripts, making it difficult to distinguish between core application files, utility scripts, and old/example files.
- **Proposal:**
  - Create a `scripts/` directory and move all one-off utility and maintenance scripts into it.
  - Consider creating a `src/` or `app/` directory to house the core application source code (`main_app.py`, `trade_executor.py`, etc.), separating it clearly from configuration, data, and documentation.
- **Benefit:** A cleaner, more organized project structure that is easier for new developers to understand.

### 4. Enhance Robustness and Error Handling

- **Issue:** The agent loading in `trade_executor.py` relies on discovering environment variables by a naming convention (`_AGENT_PK`). This is clever but can be brittle if environment variables are named incorrectly.
- **Proposal:**
  - Explicitly define the agent names and their corresponding environment variable keys in the proposed `_data/config.json` file. The `trade_executor` would then load only the agents specified in the configuration.
- **Benefit:** Makes agent configuration more explicit and less prone to errors from stray environment variables.

---

## Identified Unused/Utility Files

The following files were identified as likely being unused by the core application, being obsolete, or serving as one-off utilities. It is recommended to **move them to a `scripts/` directory** or **delete them** if they are obsolete.

### Obsolete / Old Versions:
- `data_fetcher_old.py`
- `market_old.py`
- `base_strategy.py` (The one in the root directory; the one in `strategies/` is used).

### One-Off Utility Scripts (Recommend moving to `scripts/`):
- `!migrate_to_sqlite.py`
- `import_csv.py`
- `del_market_cap_tables.py`
- `fix_timestamps.py`
- `list_coins.py`
- `create_agent.py`

### Examples / Unused Code:
- `basic_ws.py` (Appears to be an example file).
- `backtester.py`
- `strategy_sma_cross.py` (A strategy file in the root, not in the `strategies` folder).
- `strategy_template.py`

### Standalone / Potentially Unused Core Files:
The following files seem to have their logic already integrated into the main multi-process application. They might be remnants of a previous architecture and may not be needed as standalone scripts.
- `address_monitor.py`
- `position_monitor.py`
- `trade_log.py`
- `wallet_data.py`
- `whale_tracker.py`

### Data / Log Files (Recommend archiving or deleting):
- `hyperliquid_wallet_data_*.json` (These appear to be backups or logs).

1 sdk/hyperliquid-python-sdk (Submodule)
Submodule sdk/hyperliquid-python-sdk added at 64b252e99d
@@ -5,42 +5,36 @@ import os
import logging
from datetime import datetime, timezone
import sqlite3
import multiprocessing
import time

from logging_utils import setup_logging
from hyperliquid.info import Info
from hyperliquid.utils import constants

class BaseStrategy(ABC):
    """
    An abstract base class that defines the blueprint for all trading strategies.
    It provides common functionality like loading data, saving status, and state management.
    It provides common functionality like loading data and saving status.
    """

    def __init__(self, strategy_name: str, params: dict, trade_signal_queue: multiprocessing.Queue = None, shared_status: dict = None):
    def __init__(self, strategy_name: str, params: dict, log_level: str):
        self.strategy_name = strategy_name
        self.params = params
        self.trade_signal_queue = trade_signal_queue
        # Optional multiprocessing.Manager().dict() to hold live status (avoids file IO)
        self.shared_status = shared_status

        self.coin = params.get("coin", "N/A")
        self.timeframe = params.get("timeframe", "N/A")
        self.db_path = os.path.join("_data", "market_data.db")
        self.status_file_path = os.path.join("_data", f"strategy_status_{self.strategy_name}.json")

        # --- ADDED: State variables required for status reporting ---
        self.current_signal = "INIT"
        self.last_signal_change_utc = None
        self.signal_price = None

        # Note: Logging is set up by the run_strategy function
        # This will be set up by the child class after it's initialized
        # setup_logging(log_level, f"Strategy-{self.strategy_name}")
        # logging.info(f"Initializing with parameters: {self.params}")

    def load_data(self) -> pd.DataFrame:
        """Loads historical data for the configured coin and timeframe."""
        table_name = f"{self.coin}_{self.timeframe}"

        periods = [v for k, v in self.params.items() if 'period' in k or '_ma' in k or 'slow' in k or 'fast' in k]
        # Dynamically determine the number of candles needed based on all possible period parameters
        periods = [v for k, v in self.params.items() if 'period' in k or '_ma' in k or 'slow' in k]
        limit = max(periods) + 50 if periods else 500

        try:
@@ -57,45 +51,11 @@ class BaseStrategy(ABC):

    @abstractmethod
    def calculate_signals(self, df: pd.DataFrame) -> pd.DataFrame:
        """The core logic of the strategy. Must be implemented by child classes."""
        """
        The core logic of the strategy. Must be implemented by child classes.
        """
        pass

    def calculate_signals_and_state(self, df: pd.DataFrame) -> bool:
        """
        A wrapper that calls the strategy's signal calculation, determines
        the last signal change, and returns True if the signal has changed.
        """
        df_with_signals = self.calculate_signals(df)
        df_with_signals.dropna(inplace=True)
        if df_with_signals.empty:
            return False

        df_with_signals['position_change'] = df_with_signals['signal'].diff()

        last_signal_int = df_with_signals['signal'].iloc[-1]
        new_signal_str = "HOLD"
        if last_signal_int == 1: new_signal_str = "BUY"
        elif last_signal_int == -1: new_signal_str = "SELL"

        signal_changed = False
        if self.current_signal == "INIT":
            if new_signal_str == "BUY": self.current_signal = "INIT_BUY"
            elif new_signal_str == "SELL": self.current_signal = "INIT_SELL"
            else: self.current_signal = "HOLD"
            signal_changed = True
        elif new_signal_str != self.current_signal:
            self.current_signal = new_signal_str
            signal_changed = True

        if signal_changed:
            last_change_series = df_with_signals[df_with_signals['position_change'] != 0]
            if not last_change_series.empty:
                last_change_row = last_change_series.iloc[-1]
                self.last_signal_change_utc = last_change_row.name.tz_localize('UTC').isoformat()
                self.signal_price = last_change_row['close']

        return signal_changed
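The INIT handling above gives each strategy exactly one "initial state" emission before settling into change detection. A compact, runnable restatement of just those transition rules on a made-up sequence (HOLD at INIT maps to plain "HOLD" in the real code):

```python
current = "INIT"
for new in ["BUY", "BUY", "SELL"]:
    if current == "INIT":
        current, changed = f"INIT_{new}", True
    elif new != current:
        current, changed = new, True
    else:
        changed = False
    print(current, changed)  # INIT_BUY True / BUY False / SELL True
    if current == "INIT_BUY":  current = "BUY"   # reset done by run_polling_loop
    if current == "INIT_SELL": current = "SELL"
```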
|
||||
|
||||
def _save_status(self):
|
||||
"""Saves the current strategy state to its JSON file."""
|
||||
status = {
|
||||
@ -105,62 +65,9 @@ class BaseStrategy(ABC):
|
||||
"signal_price": self.signal_price,
|
||||
"last_checked_utc": datetime.now(timezone.utc).isoformat()
|
||||
}
|
||||
# If a shared status dict is provided (Manager.dict()), update it instead of writing files
|
||||
try:
|
||||
if self.shared_status is not None:
|
||||
try:
|
||||
# store the status under the strategy name for easy lookup
|
||||
self.shared_status[self.strategy_name] = status
|
||||
except Exception:
|
||||
# Manager proxies may not accept nested mutable objects consistently; assign a copy
|
||||
self.shared_status[self.strategy_name] = dict(status)
|
||||
else:
|
||||
with open(self.status_file_path, 'w', encoding='utf-8') as f:
|
||||
json.dump(status, f, indent=4)
|
||||
except IOError as e:
|
||||
logging.error(f"Failed to write status file for {self.strategy_name}: {e}")
|
||||
|
||||
def run_polling_loop(self):
|
||||
"""
|
||||
The default execution loop for polling-based strategies (e.g., SMAs).
|
||||
"""
|
||||
while True:
|
||||
df = self.load_data()
|
||||
if df.empty:
|
||||
logging.warning("No data loaded. Waiting 1 minute...")
|
||||
time.sleep(60)
|
||||
continue
|
||||
|
||||
signal_changed = self.calculate_signals_and_state(df.copy())
|
||||
self._save_status()
|
||||
|
||||
if signal_changed or self.current_signal == "INIT_BUY" or self.current_signal == "INIT_SELL":
|
||||
logging.warning(f"New signal detected: {self.current_signal}")
|
||||
self.trade_signal_queue.put({
|
||||
"strategy_name": self.strategy_name,
|
||||
"signal": self.current_signal,
|
||||
"coin": self.coin,
|
||||
"signal_price": self.signal_price,
|
||||
"config": {"agent": self.params.get("agent"), "parameters": self.params}
|
||||
})
|
||||
if self.current_signal == "INIT_BUY": self.current_signal = "BUY"
|
||||
if self.current_signal == "INIT_SELL": self.current_signal = "SELL"
|
||||
|
||||
logging.info(f"Current Signal: {self.current_signal}")
|
||||
time.sleep(60)
|
||||
|
||||
def run_event_loop(self):
|
||||
"""
|
||||
A placeholder for event-driven (WebSocket) strategies.
|
||||
Child classes must override this.
|
||||
"""
|
||||
logging.error("run_event_loop() is not implemented for this strategy.")
|
||||
time.sleep(3600) # Sleep for an hour to prevent rapid error loops
|
||||
|
||||
def on_fill_message(self, message):
|
||||
"""
|
||||
Placeholder for the WebSocket callback.
|
||||
Child classes must override this.
|
||||
"""
|
||||
pass
|
||||
|
||||
|
||||
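The contract `BaseStrategy` expects from a child is small: implement `calculate_signals` to stamp a `signal` column (1, -1, or 0) onto the candle DataFrame, and the base class handles signal-change detection, status persistence, and the trade queue. A minimal sketch of a hypothetical child class, assuming the constructor signature shown in this diff; the momentum rule itself is illustrative, not from this repo:

```python
import pandas as pd

from strategies.base_strategy import BaseStrategy  # module path as used elsewhere in this diff


class MomentumStrategy(BaseStrategy):
    """Hypothetical child: signal follows the sign of the N-candle price change."""

    def calculate_signals(self, df: pd.DataFrame) -> pd.DataFrame:
        lookback = self.params.get('momentum_period', 10)  # assumed parameter name
        change = df['close'].diff(lookback)
        df['signal'] = 0
        df.loc[change > 0, 'signal'] = 1   # price rose over the lookback window
        df.loc[change < 0, 'signal'] = -1  # price fell over the lookback window
        return df
```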
@@ -1,353 +0,0 @@
import logging
import time
import json
import os
from datetime import datetime, timezone
from hyperliquid.info import Info
from hyperliquid.utils import constants

from strategies.base_strategy import BaseStrategy

class CopyTraderStrategy(BaseStrategy):
    """
    An event-driven strategy that monitors a target wallet address and
    copies its trades for a specific set of allowed coins.

    This strategy is STATELESS. It translates a target's fill direction
    (e.g., "Open Long") directly into an explicit signal
    (e.g., "OPEN_LONG") for the PositionManager.
    """
    def __init__(self, strategy_name: str, params: dict, trade_signal_queue, shared_status: dict = None):
        # --- MODIFIED: Pass the correct queue to the parent ---
        # The event-driven copy trader should send orders to the order_execution_queue
        # We will assume the queue passed in is the correct one (as setup in main_app.py)
        super().__init__(strategy_name, params, trade_signal_queue, shared_status)

        self.target_address = self.params.get("target_address", "").lower()
        self.coins_to_copy = self.params.get("coins_to_copy", {})
        # Convert all coin keys to uppercase for consistency
        self.coins_to_copy = {k.upper(): v for k, v in self.coins_to_copy.items()}
        self.allowed_coins = list(self.coins_to_copy.keys())

        if not self.target_address:
            logging.error("No 'target_address' specified in parameters for copy trader.")
            raise ValueError("target_address is required")
        if not self.allowed_coins:
            logging.warning("No 'coins_to_copy' configured. This strategy will not copy any trades.")

        self.info = None  # Will be initialized in the run loop

        # --- REMOVED: All local state management ---
        # self.position_state_file = ...
        # self.current_positions = ...

        # --- MODIFIED: Check if shared_status is None before using it ---
        if self.shared_status is None:
            logging.warning("No shared_status dictionary provided. Initializing a new one.")
            self.shared_status = {}

        self.current_signal = self.shared_status.get("current_signal", "WAIT")
        self.signal_price = self.shared_status.get("signal_price")
        self.last_signal_change_utc = self.shared_status.get("last_signal_change_utc")

        self.start_time_utc = datetime.now(timezone.utc)
        logging.info(f"Strategy initialized. Ignoring all trades before {self.start_time_utc.isoformat()}")

    # --- REMOVED: _load_position_state ---
    # --- REMOVED: _save_position_state ---

    def calculate_signals(self, df):
        # This strategy is event-driven, so it does not use polling-based signal calculation.
        pass

    def send_explicit_signal(self, signal: str, coin: str, price: float, trade_params: dict, size: float):
        """Helper to send a formatted signal to the PositionManager."""
        config = {
            # --- MODIFIED: Ensure agent is read from params ---
            "agent": self.params.get("agent"),
            "parameters": trade_params
        }

        # --- MODIFIED: Use self.trade_signal_queue (which is the queue passed in) ---
        self.trade_signal_queue.put({
            "strategy_name": self.strategy_name,
            "signal": signal,  # e.g., "OPEN_LONG", "CLOSE_SHORT"
            "coin": coin,
            "signal_price": price,
            "config": config,
            "size": size  # Explicitly pass size (or leverage for leverage updates)
        })
        logging.info(f"Explicit signal SENT: {signal} {coin} @ {price}, Size: {size}")

    def on_fill_message(self, message):
        """
        This is the callback function that gets triggered by the WebSocket
        every time the monitored address has an event.
        """
        try:
            # --- NEW: Add logging to see ALL messages ---
            logging.debug(f"Received WebSocket message: {message}")

            channel = message.get("channel")
            if channel not in ("user", "userFills", "userEvents"):
                # --- NEW: Added debug logging ---
                logging.debug(f"Ignoring message from unhandled channel: {channel}")
                return

            data = message.get("data")
            if not data:
                # --- NEW: Added debug logging ---
                logging.debug("Message received with no 'data' field. Ignoring.")
                return

            # --- NEW: Check for user address FIRST ---
            user_address = data.get("user", "").lower()
            if not user_address:
                logging.debug("Received message with 'data' but no 'user'. Ignoring.")
                return

            # --- MODIFIED: Check for 'fills' vs. other event types ---
            # This check is still valid for userFills
            if "fills" not in data or not data.get("fills"):
                # This is a userEvent, but not a fill (e.g., order placement, cancel, withdrawal)
                event_type = data.get("type")  # e.g., 'order', 'cancel', 'withdrawal'
                if event_type:
                    logging.debug(f"Received non-fill user event: '{event_type}'. Ignoring.")
                else:
                    logging.debug(f"Received 'data' message with no 'fills'. Ignoring.")
                return

            # --- This line is now safe to run ---
            if user_address != self.target_address:
                # This shouldn't happen if the subscription is correct, but good to check
                logging.warning(f"Received fill for wrong user: {user_address}")
                return

            fills = data.get("fills")
            logging.debug(f"Received {len(fills)} fill(s) for user {user_address}")

            for fill in fills:
                # Check if the trade is new or historical
                trade_time = datetime.fromtimestamp(fill['time'] / 1000, tz=timezone.utc)
                if trade_time < self.start_time_utc:
                    logging.info(f"Ignoring stale/historical trade from {trade_time.isoformat()}")
                    continue

                coin = fill.get('coin').upper()

                if coin in self.allowed_coins:
                    price = float(fill.get('px'))

                    # --- MODIFIED: Use the target's fill size ---
                    fill_size = float(fill.get('sz'))  # Target's size

                    if fill_size == 0:
                        logging.warning(f"Ignoring fill with size 0.")
                        continue

                    # --- NEW: Get the fill direction ---
                    # "dir": "Open Long", "Close Long", "Open Short", "Close Short"
                    fill_direction = fill.get("dir")

                    # --- NEW: Get startPosition to calculate flip sizes ---
                    start_pos_size = float(fill.get('startPosition', 0.0))

                    if not fill_direction:
                        logging.warning(f"Fill message missing 'dir'. Ignoring fill: {fill}")
                        continue

                    # Get our strategy's configured leverage for this coin
                    coin_config = self.coins_to_copy.get(coin)

                    # --- REMOVED: Check for coin_config.get("size") ---
                    # --- REMOVED: strategy_trade_size = coin_config.get("size") ---

                    # Prepare config for the signal
                    trade_params = self.params.copy()
                    if coin_config:
                        trade_params.update(coin_config)

                    # --- REMOVED: All stateful logic (current_local_pos, etc.) ---

                    # --- MODIFIED: Expanded logic to handle flip directions ---
                    signal_sent = False
                    dashboard_signal = ""

                    if fill_direction == "Open Long":
                        logging.warning(f"[{coin}] Target action: {fill_direction}. Sending signal: OPEN_LONG")
                        self.send_explicit_signal("OPEN_LONG", coin, price, trade_params, fill_size)
                        signal_sent = True
                        dashboard_signal = "OPEN_LONG"

                    elif fill_direction == "Close Long":
                        logging.warning(f"[{coin}] Target action: {fill_direction}. Sending signal: CLOSE_LONG")
                        self.send_explicit_signal("CLOSE_LONG", coin, price, trade_params, fill_size)
                        signal_sent = True
                        dashboard_signal = "CLOSE_LONG"

                    elif fill_direction == "Open Short":
                        logging.warning(f"[{coin}] Target action: {fill_direction}. Sending signal: OPEN_SHORT")
                        self.send_explicit_signal("OPEN_SHORT", coin, price, trade_params, fill_size)
                        signal_sent = True
                        dashboard_signal = "OPEN_SHORT"

                    elif fill_direction == "Close Short":
                        logging.warning(f"[{coin}] Target action: {fill_direction}. Sending signal: CLOSE_SHORT")
                        self.send_explicit_signal("CLOSE_SHORT", coin, price, trade_params, fill_size)
                        signal_sent = True
                        dashboard_signal = "CLOSE_SHORT"

                    elif fill_direction == "Short > Long":
                        logging.warning(f"[{coin}] Target action: {fill_direction}. Sending CLOSE_SHORT then OPEN_LONG.")
                        close_size = abs(start_pos_size)
                        open_size = fill_size - close_size

                        if close_size > 0:
                            self.send_explicit_signal("CLOSE_SHORT", coin, price, trade_params, close_size)

                        if open_size > 0:
                            self.send_explicit_signal("OPEN_LONG", coin, price, trade_params, open_size)

                        signal_sent = True
                        dashboard_signal = "FLIP_TO_LONG"

                    elif fill_direction == "Long > Short":
                        logging.warning(f"[{coin}] Target action: {fill_direction}. Sending CLOSE_LONG then OPEN_SHORT.")
                        close_size = abs(start_pos_size)
                        open_size = fill_size - close_size

                        if close_size > 0:
                            self.send_explicit_signal("CLOSE_LONG", coin, price, trade_params, close_size)

                        if open_size > 0:
                            self.send_explicit_signal("OPEN_SHORT", coin, price, trade_params, open_size)

                        signal_sent = True
                        dashboard_signal = "FLIP_TO_SHORT"

                    if signal_sent:
                        # Update dashboard status
                        self.current_signal = dashboard_signal  # Show the action
                        self.signal_price = price
                        self.last_signal_change_utc = trade_time.isoformat()
                        self.coin = coin  # Update coin for dashboard
                        self.size = fill_size  # Update size for dashboard
                        self._save_status()  # For dashboard

                        logging.info(f"Source trade logged: {json.dumps(fill)}")
                    else:
                        logging.info(f"[{coin}] Ignoring unhandled fill direction: {fill_direction}")
                else:
                    logging.info(f"Ignoring fill for unmonitored coin: {coin}")

        except Exception as e:
            logging.error(f"Error in on_fill_message: {e}", exc_info=True)

    def _connect_and_subscribe(self):
        """
        Establishes a new WebSocket connection and subscribes to the userFills channel.
        """
        try:
            logging.info("Connecting to Hyperliquid WebSocket...")
            self.info = Info(constants.MAINNET_API_URL, skip_ws=False)

            # --- MODIFIED: Reverted to 'userFills' as requested ---
            subscription = {"type": "userFills", "user": self.target_address}
            self.info.subscribe(subscription, self.on_fill_message)
            logging.info(f"Subscribed to 'userFills' for target address: {self.target_address}")

            return True
        except Exception as e:
            logging.error(f"Failed to connect or subscribe: {e}")
            self.info = None
            return False

    def run_event_loop(self):
        """
        This method overrides the default polling loop. It establishes a
        persistent WebSocket connection and runs a watchdog to ensure
        it stays connected.
        """
        try:
            if not self._connect_and_subscribe():
                # If connection fails on start, wait 60s before letting the process restart
                time.sleep(60)
                return

            # --- MODIFIED: Add a small delay to ensure Info object is ready for REST calls ---
            logging.info("Connection established. Waiting 2 seconds for Info client to be ready...")
            time.sleep(2)
            # --- END MODIFICATION ---

            # --- NEW: Set initial leverage for all monitored coins ---
            logging.info("Setting initial leverage for all monitored coins...")
            try:
                all_mids = self.info.all_mids()
                for coin_key, coin_config in self.coins_to_copy.items():
                    coin = coin_key.upper()
                    # Use a failsafe price of 1.0 if coin not in mids (e.g., new listing)
                    current_price = float(all_mids.get(coin, 1.0))

                    leverage_long = coin_config.get('leverage_long', 2)
                    leverage_short = coin_config.get('leverage_short', 2)

                    # Prepare config for the signal
                    trade_params = self.params.copy()
                    trade_params.update(coin_config)

                    # Send LONG leverage update
                    # The 'size' param is used to pass the leverage value for this signal type
                    self.send_explicit_signal("UPDATE_LEVERAGE_LONG", coin, current_price, trade_params, leverage_long)

                    # Send SHORT leverage update
                    self.send_explicit_signal("UPDATE_LEVERAGE_SHORT", coin, current_price, trade_params, leverage_short)

                    logging.info(f"Sent initial leverage signals for {coin} (Long: {leverage_long}x, Short: {leverage_short}x)")

            except Exception as e:
                logging.error(f"Failed to set initial leverage: {e}", exc_info=True)
            # --- END NEW LEVERAGE LOGIC ---

            # Save the initial "WAIT" status
            self._save_status()

            while True:
                try:
                    time.sleep(15)  # Check the connection every 15 seconds

                    if self.info is None or not self.info.ws_manager.is_alive():
                        logging.error(f"WebSocket connection lost. Attempting to reconnect...")

                        if self.info and self.info.ws_manager:
                            try:
                                self.info.ws_manager.stop()
                            except Exception as e:
                                logging.error(f"Error stopping old ws_manager: {e}")

                        if not self._connect_and_subscribe():
                            logging.error("Reconnect failed, will retry in 15s.")
                        else:
                            logging.info("Successfully reconnected to WebSocket.")
                            self._save_status()
                    else:
                        logging.debug("Watchdog check: WebSocket connection is active.")

                except Exception as e:
                    logging.error(f"An error occurred in the watchdog loop: {e}", exc_info=True)

        except KeyboardInterrupt:
            # --- MODIFIED: No positions to close, just exit ---
            logging.warning(f"Shutdown signal received. Exiting strategy '{self.strategy_name}'.")

        except Exception as e:
            logging.error(f"An unhandled error occurred in run_event_loop: {e}", exc_info=True)

        finally:
            if self.info and self.info.ws_manager and self.info.ws_manager.is_alive():
                try:
                    self.info.ws_manager.stop()
                    logging.info("WebSocket connection stopped.")
                except Exception as e:
                    logging.error(f"Error stopping ws_manager on exit: {e}")
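A minimal wiring sketch for the strategy above, assuming the queue and `Manager().dict()` plumbing that the comments attribute to `main_app.py`; the module path and parameter values are illustrative:

```python
import multiprocessing

from strategies.copy_trader import CopyTraderStrategy  # hypothetical module path

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    shared_status = manager.dict()         # read by the dashboard process
    order_queue = multiprocessing.Queue()  # consumed by the PositionManager

    params = {
        "agent": "copybot",
        "target_address": "0x0000000000000000000000000000000000000000",  # placeholder
        "coins_to_copy": {"BTC": {"leverage_long": 3, "leverage_short": 2}},
    }
    strategy = CopyTraderStrategy("copy_whale", params, order_queue, shared_status)
    strategy.run_event_loop()  # blocks: connects the mainnet WebSocket and runs the watchdog
```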
@@ -7,22 +7,27 @@ class MaCrossStrategy(BaseStrategy):
    A strategy based on a fast Simple Moving Average (SMA) crossing
    a slow SMA.
    """
-     # --- FIX: Changed 3rd argument from log_level to trade_signal_queue ---
    def __init__(self, strategy_name: str, params: dict, trade_signal_queue):
-         # --- FIX: Passed trade_signal_queue to the parent class ---
        super().__init__(strategy_name, params, trade_signal_queue)
-         self.fast_ma_period = self.params.get('short_ma') or self.params.get('fast') or 0
-         self.slow_ma_period = self.params.get('long_ma') or self.params.get('slow') or 0

    def calculate_signals(self, df: pd.DataFrame) -> pd.DataFrame:
-         if not self.fast_ma_period or not self.slow_ma_period or len(df) < self.slow_ma_period:
-             logging.warning(f"Not enough data for MA periods.")
+         # Support multiple naming conventions: some configs use 'fast'/'slow'
+         # while others use 'short_ma'/'long_ma'. Normalize here so both work.
+         fast_ma_period = self.params.get('short_ma') or self.params.get('fast') or 0
+         slow_ma_period = self.params.get('long_ma') or self.params.get('slow') or 0
+
+         # If parameters are missing, return a neutral signal frame.
+         if not fast_ma_period or not slow_ma_period:
+             logging.warning(f"Missing MA period parameters (fast={fast_ma_period}, slow={slow_ma_period}).")
            df['signal'] = 0
            return df

-         df['fast_sma'] = df['close'].rolling(window=self.fast_ma_period).mean()
-         df['slow_sma'] = df['close'].rolling(window=self.slow_ma_period).mean()
+         if len(df) < slow_ma_period:
+             logging.warning(f"Not enough data for MA periods {fast_ma_period}/{slow_ma_period}. Need {slow_ma_period}, have {len(df)}.")
+             df['signal'] = 0
+             return df
+
+         df['fast_sma'] = df['close'].rolling(window=fast_ma_period).mean()
+         df['slow_sma'] = df['close'].rolling(window=slow_ma_period).mean()

        # Signal is 1 for Golden Cross (fast > slow), -1 for Death Cross
        df['signal'] = 0
        df.loc[df['fast_sma'] > df['slow_sma'], 'signal'] = 1
        df.loc[df['fast_sma'] < df['slow_sma'], 'signal'] = -1
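The crossover rule reduces to comparing two rolling means, so the signal logic can be sanity-checked on synthetic prices without any repo imports:

```python
import pandas as pd

prices = pd.DataFrame({'close': [10, 11, 12, 13, 12, 11, 10, 9, 10, 12]})
fast, slow = 2, 4
prices['fast_sma'] = prices['close'].rolling(window=fast).mean()
prices['slow_sma'] = prices['close'].rolling(window=slow).mean()
prices['signal'] = 0
prices.loc[prices['fast_sma'] > prices['slow_sma'], 'signal'] = 1   # golden cross regime
prices.loc[prices['fast_sma'] < prices['slow_sma'], 'signal'] = -1  # death cross regime
print(prices[['close', 'signal']])
```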
@@ -6,20 +6,17 @@ class SingleSmaStrategy(BaseStrategy):
    """
    A strategy based on the price crossing a single Simple Moving Average (SMA).
    """
-     # --- FIX: Added trade_signal_queue to the constructor ---
    def __init__(self, strategy_name: str, params: dict, trade_signal_queue):
-         # --- FIX: Passed trade_signal_queue to the parent class ---
        super().__init__(strategy_name, params, trade_signal_queue)
-         self.sma_period = self.params.get('sma_period', 0)

    def calculate_signals(self, df: pd.DataFrame) -> pd.DataFrame:
-         if not self.sma_period or len(df) < self.sma_period:
-             logging.warning(f"Not enough data for SMA period {self.sma_period}.")
+         sma_period = self.params.get('sma_period', 0)
+
+         if not sma_period or len(df) < sma_period:
+             logging.warning(f"Not enough data for SMA period {sma_period}. Need {sma_period}, have {len(df)}.")
            df['signal'] = 0
            return df

-         df['sma'] = df['close'].rolling(window=self.sma_period).mean()
+         df['sma'] = df['close'].rolling(window=sma_period).mean()

        # Signal is 1 when price is above SMA, -1 when below
        df['signal'] = 0
        df.loc[df['close'] > df['sma'], 'signal'] = 1
        df.loc[df['close'] < df['sma'], 'signal'] = -1
@@ -4,9 +4,7 @@ import os
import sys
import json
import time
- # --- REVERTED: Removed math import ---
from datetime import datetime
- import multiprocessing

from eth_account import Account
from hyperliquid.exchange import Exchange
@@ -15,20 +13,20 @@ from hyperliquid.utils import constants
from dotenv import load_dotenv

from logging_utils import setup_logging
from trade_log import log_trade

# Load environment variables from a .env file
load_dotenv()

class TradeExecutor:
    """
-     Executes orders from a queue and, upon API success,
-     updates the shared 'opened_positions.json' state file.
-     It is the single source of truth for position state.
+     Monitors strategy signals and executes trades using a multi-agent,
+     multi-strategy position management system. Each strategy's position is
+     tracked independently.
    """

-     def __init__(self, log_level: str, order_execution_queue: multiprocessing.Queue):
-         # Note: Logging is set up by the run_trade_executor function
-
-         self.order_execution_queue = order_execution_queue
+     def __init__(self, log_level: str):
+         setup_logging(log_level, 'TradeExecutor')

        self.vault_address = os.environ.get("MAIN_WALLET_ADDRESS")
        if not self.vault_address:
@@ -41,18 +39,21 @@ class TradeExecutor:
            logging.error("No trading agents found in .env file.")
            sys.exit(1)

-         # --- REVERTED: Removed asset_meta loading ---
-         # self.asset_meta = self._load_asset_metadata()
-
-         # --- NEW: State management logic ---
-         self.opened_positions_file = os.path.join("_data", "opened_positions.json")
-         self.opened_positions = self._load_opened_positions()
-
-         logging.info(f"Trade Executor started. Loaded {len(self.opened_positions)} positions.")
+         strategy_config_path = os.path.join("_data", "strategies.json")
+         try:
+             with open(strategy_config_path, 'r') as f:
+                 self.strategy_configs = {name: config for name, config in json.load(f).items() if config.get("enabled")}
+             logging.info(f"Loaded {len(self.strategy_configs)} enabled strategies.")
+         except (FileNotFoundError, json.JSONDecodeError) as e:
+             logging.error(f"Could not load strategies from '{strategy_config_path}': {e}")
+             sys.exit(1)
+
+         self.status_file_path = os.path.join("_logs", "trade_executor_status.json")
+         self.managed_positions_path = os.path.join("_data", "executor_managed_positions.json")
+         self.managed_positions = self._load_managed_positions()

    def _load_agents(self) -> dict:
-         # ... (omitted for brevity, this logic is correct and unchanged) ...
        """Discovers and initializes agents from environment variables."""
        exchanges = {}
        logging.info("Discovering agents from environment variables...")
        for env_var, private_key in os.environ.items():
@@ -71,123 +72,129 @@ class TradeExecutor:
                logging.error(f"Failed to initialize agent '{agent_name}': {e}")
        return exchanges

-     # --- REVERTED: Removed asset metadata loading ---
-     # def _load_asset_metadata(self) -> dict: ...
-
-     # --- NEW: Position state save/load methods ---
-     def _load_opened_positions(self) -> dict:
-         """Loads the state of currently managed positions from a JSON file."""
-         if not os.path.exists(self.opened_positions_file):
-             return {}
+     def _load_managed_positions(self) -> dict:
+         """Loads the state of which strategy manages which position."""
+         if os.path.exists(self.managed_positions_path):
            try:
-                 with open(self.opened_positions_file, 'r', encoding='utf-8') as f:
+                 with open(self.managed_positions_path, 'r') as f:
+                     logging.info("Loading existing managed positions state.")
                    return json.load(f)
-             except (json.JSONDecodeError, IOError) as e:
-                 logging.error(f"Failed to read '{self.opened_positions_file}': {e}. Starting with empty state.", exc_info=True)
+             except (IOError, json.JSONDecodeError):
+                 logging.warning("Could not read managed positions file. Starting fresh.")
        return {}

-     def _save_opened_positions(self):
-         """Saves the current state of managed positions to a JSON file."""
+     def _save_managed_positions(self):
+         """Saves the current state of managed positions."""
        try:
-             with open(self.opened_positions_file, 'w', encoding='utf-8') as f:
-                 json.dump(self.opened_positions, f, indent=4)
-             logging.debug(f"Successfully saved {len(self.opened_positions)} positions to '{self.opened_positions_file}'")
+             with open(self.managed_positions_path, 'w') as f:
+                 json.dump(self.managed_positions, f, indent=4)
        except IOError as e:
-             logging.error(f"Failed to write to '{self.opened_positions_file}': {e}", exc_info=True)
+             logging.error(f"Failed to save managed positions state: {e}")

-     # --- REVERTED: Removed tick rounding function ---
-     # def _round_to_tick(self, price, tick_size): ...
+     def _save_executor_status(self, perpetuals_state, spot_state, all_market_contexts):
+         """Saves the current balances and open positions to a live status file."""
+         # This function is correct and does not need changes.
+         pass

    def run(self):
-         """
-         Main execution loop. Waits for an order and updates state on success.
-         """
-         logging.info("Trade Executor started. Waiting for orders...")
+         """The main execution loop with advanced position management."""
+         logging.info("Starting Trade Executor loop...")
        while True:
            try:
-                 order = self.order_execution_queue.get()
-                 if not order:
-                     continue
+                 perpetuals_state = self.info.user_state(self.vault_address)
+                 open_positions_api = {pos['position'].get('coin'): pos['position'] for pos in perpetuals_state.get('assetPositions', []) if float(pos.get('position', {}).get('szi', 0)) != 0}

-                 logging.info(f"Received order: {order}")
+                 for name, config in self.strategy_configs.items():
+                     coin = config['parameters'].get('coin')
+                     size = config['parameters'].get('size')
+                     # --- ADDED: Load leverage parameters from config ---
+                     leverage_long = config['parameters'].get('leverage_long')
+                     leverage_short = config['parameters'].get('leverage_short')

-                 agent_name = order['agent']
-                 action = order['action']
-                 coin = order['coin']
-                 is_buy = order['is_buy']
-                 size = order['size']
-                 limit_px = order.get('limit_px')
+                     status_file = os.path.join("_data", f"strategy_status_{name}.json")
+                     if not os.path.exists(status_file): continue
+                     with open(status_file, 'r') as f: status = json.load(f)

+                     desired_signal = status.get('current_signal')
+                     current_position = self.managed_positions.get(name)

+                     agent_name = config.get("agent", "default").lower()
                    exchange_to_use = self.exchanges.get(agent_name)
                    if not exchange_to_use:
-                         logging.error(f"Agent '{agent_name}' not found. Skipping order.")
+                         logging.error(f"[{name}] Agent '{agent_name}' not found. Skipping trade.")
                        continue

-                 response = None
+                     # --- State Machine Logic with Configurable Leverage ---
+                     if desired_signal == "BUY":
+                         if not current_position:
+                             if not all([size, leverage_long]):
+                                 logging.error(f"[{name}] 'size' or 'leverage_long' not defined. Skipping.")
+                                 continue

-                 if action == "market_open" or action == "market_close":
-                     reduce_only = (action == "market_close")
-                     log_action = "MARKET CLOSE" if reduce_only else "MARKET OPEN"
-                     logging.warning(f"ACTION: {log_action} {coin} {'BUY' if is_buy else 'SELL'} {size}")
+                             logging.warning(f"[{name}] ACTION: Open LONG for {coin} with {leverage_long}x leverage.")
+                             exchange_to_use.update_leverage(int(leverage_long), coin)
+                             exchange_to_use.market_open(coin, True, size, None, 0.01)
+                             self.managed_positions[name] = {"coin": coin, "side": "long", "size": size}
+                             log_trade(strategy=name, coin=coin, action="OPEN_LONG", price=status.get('signal_price', 0), size=size, signal=desired_signal)

-                     # --- REVERTED: Removed all slippage and rounding logic ---
-                     # The raw limit_px from the order is now used directly
-                     final_price = limit_px
-                     logging.info(f"[{agent_name}] Using raw price for {coin}: {final_price}")
+                         elif current_position['side'] == 'short':
+                             if not all([size, leverage_long]):
+                                 logging.error(f"[{name}] 'size' or 'leverage_long' not defined. Skipping.")
+                                 continue

-                     order_type = {"limit": {"tif": "Ioc"}}
-                     # --- REVERTED: Uses final_price (which is just limit_px) ---
-                     response = exchange_to_use.order(coin, is_buy, size, final_price, order_type, reduce_only=reduce_only)
-                     logging.info(f"Market order response: {response}")
+                             logging.warning(f"[{name}] ACTION: Close SHORT and open LONG for {coin} with {leverage_long}x leverage.")
+                             exchange_to_use.update_leverage(int(leverage_long), coin)
+                             exchange_to_use.market_open(coin, True, current_position['size'] + size, None, 0.01)
+                             self.managed_positions[name] = {"coin": coin, "side": "long", "size": size}
+                             log_trade(strategy=name, coin=coin, action="CLOSE_SHORT_&_REVERSE", price=status.get('signal_price', 0), size=size, signal=desired_signal)

-                     # --- NEW: STATE UPDATE ON SUCCESS ---
-                     if response.get("status") == "ok":
-                         response_data = response.get("response", {},).get("data", {})
-                         if response_data and "statuses" in response_data:
-                             # Check if the order status contains an error
-                             if "error" not in response_data["statuses"][0]:
-                                 position_key = order['position_key']
-                                 if action == "market_open":
-                                     # Add to state
-                                     self.opened_positions[position_key] = {
-                                         "strategy": order['strategy'],
-                                         "coin": coin,
-                                         "side": "long" if is_buy else "short",
-                                         "open_time_utc": order['open_time_utc'],
-                                         "open_price": order['open_price'],
-                                         "amount": order['amount'],
-                                         # --- MODIFIED: Read leverage from the order ---
-                                         "leverage": order.get('leverage')
-                                     }
-                                     logging.info(f"Successfully opened position {position_key}. Saving state.")
-                                 elif action == "market_close":
-                                     # Remove from state
-                                     if position_key in self.opened_positions:
-                                         del self.opened_positions[position_key]
-                                         logging.info(f"Successfully closed position {position_key}. Saving state.")
-                                     else:
-                                         logging.warning(f"Received close confirmation for {position_key}, but it was not in state.")
+                     elif desired_signal == "SELL":
+                         if not current_position:
+                             if not all([size, leverage_short]):
+                                 logging.error(f"[{name}] 'size' or 'leverage_short' not defined. Skipping.")
+                                 continue

-                                 self._save_opened_positions()  # Save state to disk
+                             logging.warning(f"[{name}] ACTION: Open SHORT for {coin} with {leverage_short}x leverage.")
+                             exchange_to_use.update_leverage(int(leverage_short), coin)
+                             exchange_to_use.market_open(coin, False, size, None, 0.01)
+                             self.managed_positions[name] = {"coin": coin, "side": "short", "size": size}
+                             log_trade(strategy=name, coin=coin, action="OPEN_SHORT", price=status.get('signal_price', 0), size=size, signal=desired_signal)

-                             else:
-                                 logging.error(f"API Error for {action}: {response_data['statuses'][0]['error']}")
-                         else:
-                             logging.error(f"Unexpected API response format: {response}")
-                     else:
-                         logging.error(f"API call failed, status: {response.get('status')}")
+                         elif current_position['side'] == 'long':
+                             if not all([size, leverage_short]):
+                                 logging.error(f"[{name}] 'size' or 'leverage_short' not defined. Skipping.")
+                                 continue

+                             logging.warning(f"[{name}] ACTION: Close LONG and open SHORT for {coin} with {leverage_short}x leverage.")
+                             exchange_to_use.update_leverage(int(leverage_short), coin)
+                             exchange_to_use.market_open(coin, False, current_position['size'] + size, None, 0.01)
+                             self.managed_positions[name] = {"coin": coin, "side": "short", "size": size}
+                             log_trade(strategy=name, coin=coin, action="CLOSE_LONG_&_REVERSE", price=status.get('signal_price', 0), size=size, signal=desired_signal)

-                 elif action == "update_leverage":
-                     leverage = int(size)
-                     logging.warning(f"ACTION: UPDATE LEVERAGE {coin} to {leverage}x")
-                     response = exchange_to_use.update_leverage(leverage, coin)
-                     logging.info(f"Update leverage response: {response}")
+                     elif desired_signal == "FLAT":
+                         if current_position:
+                             logging.warning(f"[{name}] ACTION: Close {current_position['side']} position for {coin}.")
+                             is_buy = current_position['side'] == 'short'
+                             exchange_to_use.market_open(coin, is_buy, current_position['size'], None, 0.01)
+                             del self.managed_positions[name]
+                             log_trade(strategy=name, coin=coin, action=f"CLOSE_{current_position['side'].upper()}", price=status.get('signal_price', 0), size=current_position['size'], signal=desired_signal)

-                 else:
-                     logging.warning(f"Received unknown action: {action}")
+                 self._save_managed_positions()

            except Exception as e:
-                 logging.error(f"An error occurred in the main executor loop: {e}", exc_info=True)
-                 time.sleep(1)
+                 logging.error(f"An error occurred in the main executor loop: {e}")

+             time.sleep(15)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run the Trade Executor.")
    parser.add_argument("--log-level", default="normal", choices=['off', 'normal', 'debug'])
    args = parser.parse_args()

    executor = TradeExecutor(log_level=args.log_level)
    try:
        executor.run()
    except KeyboardInterrupt:
        logging.info("Trade Executor stopped.")
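The new executor's state machine is driven entirely by `_data/strategies.json` plus the per-strategy `strategy_status_<name>.json` files. A sketch of a plausible config entry, with the keys inferred from what the loop reads (`enabled`, `agent`, and `parameters` with `coin`, `size`, `leverage_long`, `leverage_short`); all values here are illustrative:

```python
import json

# Hypothetical _data/strategies.json entry; key names inferred from the run() loop above.
strategies = {
    "btc_ma_cross": {
        "enabled": True,
        "agent": "agent1",
        "parameters": {
            "coin": "BTC",
            "size": 0.01,
            "leverage_long": 3,
            "leverage_short": 2,
            "short_ma": 20,   # consumed by MaCrossStrategy's normalization
            "long_ma": 50,
        },
    }
}
with open("_data/strategies.json", "w") as f:
    json.dump(strategies, f, indent=4)
```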
652  wallet_data.py
@ -1,652 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Hyperliquid Wallet Data Fetcher - FINAL Perfect Alignment
|
||||
==========================================================
|
||||
Complete Python script to pull all available data for a Hyperliquid wallet via API.
|
||||
|
||||
Requirements:
|
||||
pip install hyperliquid-python-sdk
|
||||
|
||||
Usage:
|
||||
python hyperliquid_wallet_data.py <wallet_address>
|
||||
|
||||
Example:
|
||||
python hyperliquid_wallet_data.py 0xcd5051944f780a621ee62e39e493c489668acf4d
|
||||
"""
|
||||
|
||||
import sys
|
||||
import json
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Optional, Dict, Any
|
||||
from hyperliquid.info import Info
|
||||
from hyperliquid.utils import constants
|
||||
|
||||
|
||||
class HyperliquidWalletAnalyzer:
|
||||
"""
|
||||
Comprehensive wallet data analyzer for Hyperliquid exchange.
|
||||
Fetches all available information about a specific wallet address.
|
||||
"""
|
||||
|
||||
def __init__(self, wallet_address: str, use_testnet: bool = False):
|
||||
"""
|
||||
Initialize the analyzer with a wallet address.
|
||||
|
||||
Args:
|
||||
wallet_address: Ethereum-style address (0x...)
|
||||
use_testnet: If True, use testnet instead of mainnet
|
||||
"""
|
||||
self.wallet_address = wallet_address
|
||||
api_url = constants.TESTNET_API_URL if use_testnet else constants.MAINNET_API_URL
|
||||
|
||||
# Initialize Info API (read-only, no private keys needed)
|
||||
self.info = Info(api_url, skip_ws=True)
|
||||
print(f"Initialized Hyperliquid API: {'Testnet' if use_testnet else 'Mainnet'}")
|
||||
print(f"Target wallet: {wallet_address}\n")
|
||||
|
||||
def print_position_details(self, position: Dict[str, Any], index: int):
|
||||
"""
|
||||
Print detailed information about a single position.
|
||||
|
||||
Args:
|
||||
position: Position data dictionary
|
||||
index: Position number for display
|
||||
"""
|
||||
pos = position.get('position', {})
|
||||
|
||||
# Extract all position details
|
||||
coin = pos.get('coin', 'Unknown')
|
||||
size = float(pos.get('szi', 0))
|
||||
entry_px = float(pos.get('entryPx', 0))
|
||||
position_value = float(pos.get('positionValue', 0))
|
||||
unrealized_pnl = float(pos.get('unrealizedPnl', 0))
|
||||
return_on_equity = float(pos.get('returnOnEquity', 0))
|
||||
|
||||
# Leverage details
|
||||
leverage = pos.get('leverage', {})
|
||||
leverage_type = leverage.get('type', 'unknown') if isinstance(leverage, dict) else 'cross'
|
||||
leverage_value = leverage.get('value', 0) if isinstance(leverage, dict) else 0
|
||||
|
||||
# Margin and liquidation
|
||||
margin_used = float(pos.get('marginUsed', 0))
|
||||
liquidation_px = pos.get('liquidationPx')
|
||||
max_trade_szs = pos.get('maxTradeSzs', [0, 0])
|
||||
|
||||
# Cumulative funding
|
||||
cumulative_funding = float(pos.get('cumFunding', {}).get('allTime', 0))
|
||||
|
||||
# Determine if long or short
|
||||
side = "LONG 📈" if size > 0 else "SHORT 📉"
|
||||
side_color = "🟢" if size > 0 else "🔴"
|
||||
|
||||
# PnL color
|
||||
pnl_symbol = "🟢" if unrealized_pnl >= 0 else "🔴"
|
||||
pnl_sign = "+" if unrealized_pnl >= 0 else ""
|
||||
|
||||
# ROE color
|
||||
roe_symbol = "🟢" if return_on_equity >= 0 else "🔴"
|
||||
roe_sign = "+" if return_on_equity >= 0 else ""
|
||||
|
||||
print(f"\n{'='*80}")
|
||||
print(f"POSITION #{index}: {coin} {side} {side_color}")
|
||||
print(f"{'='*80}")
|
||||
|
||||
print(f"\n📊 POSITION DETAILS:")
|
||||
print(f" Size: {abs(size):.6f} {coin}")
|
||||
print(f" Side: {side}")
|
||||
print(f" Entry Price: ${entry_px:,.4f}")
|
||||
print(f" Position Value: ${abs(position_value):,.2f}")
|
||||
|
||||
print(f"\n💰 PROFITABILITY:")
|
||||
print(f" Unrealized PnL: {pnl_symbol} {pnl_sign}${unrealized_pnl:,.2f}")
|
||||
print(f" Return on Equity: {roe_symbol} {roe_sign}{return_on_equity:.2%}")
|
||||
print(f" Cumulative Funding: ${cumulative_funding:,.4f}")
|
||||
|
||||
print(f"\n⚙️ LEVERAGE & MARGIN:")
|
||||
print(f" Leverage Type: {leverage_type.upper()}")
|
||||
print(f" Leverage: {leverage_value}x")
|
||||
print(f" Margin Used: ${margin_used:,.2f}")
|
||||
|
||||
print(f"\n⚠️ RISK MANAGEMENT:")
|
||||
if liquidation_px:
|
||||
liquidation_px_float = float(liquidation_px) if liquidation_px else 0
|
||||
print(f" Liquidation Price: ${liquidation_px_float:,.4f}")
|
||||
|
||||
# Calculate distance to liquidation
|
||||
if entry_px > 0 and liquidation_px_float > 0:
|
||||
if size > 0: # Long position
|
||||
distance = ((entry_px - liquidation_px_float) / entry_px) * 100
|
||||
else: # Short position
|
||||
distance = ((liquidation_px_float - entry_px) / entry_px) * 100
|
||||
|
||||
distance_symbol = "🟢" if abs(distance) > 20 else "🟡" if abs(distance) > 10 else "🔴"
|
||||
print(f" Distance to Liq: {distance_symbol} {abs(distance):.2f}%")
|
||||
else:
|
||||
print(f" Liquidation Price: N/A (Cross margin)")
|
||||
|
||||
if max_trade_szs and len(max_trade_szs) == 2:
|
||||
print(f" Max Long Trade: {max_trade_szs[0]}")
|
||||
print(f" Max Short Trade: {max_trade_szs[1]}")
|
||||
|
||||
print(f"\n{'='*80}")
|
||||
|
||||
def get_user_state(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get complete user state including positions and margin summary.
|
||||
|
||||
Returns:
|
||||
Dict containing:
|
||||
- assetPositions: List of open perpetual positions
|
||||
- marginSummary: Account value, margin used, withdrawable
|
||||
- crossMarginSummary: Cross margin details
|
||||
- withdrawable: Available balance to withdraw
|
||||
"""
|
||||
print("📊 Fetching User State (Perpetuals)...")
|
||||
try:
|
||||
data = self.info.user_state(self.wallet_address)
|
||||
|
||||
if data:
|
||||
margin_summary = data.get('marginSummary', {})
|
||||
positions = data.get('assetPositions', [])
|
||||
|
||||
account_value = float(margin_summary.get('accountValue', 0))
|
||||
total_margin_used = float(margin_summary.get('totalMarginUsed', 0))
|
||||
total_ntl_pos = float(margin_summary.get('totalNtlPos', 0))
|
||||
total_raw_usd = float(margin_summary.get('totalRawUsd', 0))
|
||||
withdrawable = float(data.get('withdrawable', 0))
|
||||
|
||||
print(f" ✓ Account Value: ${account_value:,.2f}")
|
||||
print(f" ✓ Total Margin Used: ${total_margin_used:,.2f}")
|
||||
print(f" ✓ Total Position Value: ${total_ntl_pos:,.2f}")
|
||||
print(f" ✓ Withdrawable: ${withdrawable:,.2f}")
|
||||
print(f" ✓ Open Positions: {len(positions)}")
|
||||
|
||||
# Calculate margin utilization
|
||||
if account_value > 0:
|
||||
margin_util = (total_margin_used / account_value) * 100
|
||||
util_symbol = "🟢" if margin_util < 50 else "🟡" if margin_util < 75 else "🔴"
|
||||
print(f" ✓ Margin Utilization: {util_symbol} {margin_util:.2f}%")
|
||||
|
||||
# Print detailed information for each position
|
||||
if positions:
|
||||
print(f"\n{'='*80}")
|
||||
print(f"DETAILED POSITION BREAKDOWN ({len(positions)} positions)")
|
||||
print(f"{'='*80}")
|
||||
|
||||
for idx, position in enumerate(positions, 1):
|
||||
self.print_position_details(position, idx)
|
||||
|
||||
# Summary table with perfect alignment
|
||||
self.print_positions_summary_table(positions)
|
||||
|
||||
else:
|
||||
print(" ⚠ No perpetual positions found")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return {}
|
||||
|
||||
def print_positions_summary_table(self, positions: list):
|
||||
"""
|
||||
Print a summary table of all positions with perfectly aligned columns.
|
||||
NO emojis in data cells - keeps them simple text only for perfect alignment.
|
||||
|
||||
Args:
|
||||
positions: List of position dictionaries
|
||||
"""
|
||||
print(f"\n{'='*130}")
|
||||
print("POSITIONS SUMMARY TABLE")
|
||||
print('='*130)
|
||||
|
||||
# Print header
|
||||
print("| Asset | Side | Size | Entry Price | Position Value | Unrealized PnL | ROE | Leverage |")
|
||||
print("|----------|-------|-------------------|-------------------|-------------------|-------------------|------------|------------|")
|
||||
|
||||
total_position_value = 0
|
||||
total_pnl = 0
|
||||
|
||||
for position in positions:
|
||||
pos = position.get('position', {})
|
||||
|
||||
coin = pos.get('coin', 'Unknown')
|
||||
size = float(pos.get('szi', 0))
|
||||
entry_px = float(pos.get('entryPx', 0))
|
||||
position_value = float(pos.get('positionValue', 0))
|
||||
unrealized_pnl = float(pos.get('unrealizedPnl', 0))
|
||||
return_on_equity = float(pos.get('returnOnEquity', 0))
|
||||
|
||||
# Get leverage
|
||||
leverage = pos.get('leverage', {})
|
||||
leverage_value = leverage.get('value', 0) if isinstance(leverage, dict) else 0
|
||||
leverage_type = leverage.get('type', 'cross') if isinstance(leverage, dict) else 'cross'
|
||||
|
||||
# Determine side - NO EMOJIS in data
|
||||
side_text = "LONG" if size > 0 else "SHORT"
|
||||
|
||||
# Format PnL and ROE with signs
|
||||
pnl_sign = "+" if unrealized_pnl >= 0 else ""
|
||||
roe_sign = "+" if return_on_equity >= 0 else ""
|
||||
|
||||
# Accumulate totals
|
||||
total_position_value += abs(position_value)
|
||||
total_pnl += unrealized_pnl
|
||||
|
||||
# Format all values as strings with proper width
|
||||
asset_str = f"{coin[:8]:<8}"
|
||||
side_str = f"{side_text:<5}"
|
||||
size_str = f"{abs(size):>17,.4f}"
|
||||
entry_str = f"${entry_px:>16,.2f}"
|
||||
value_str = f"${abs(position_value):>16,.2f}"
|
||||
pnl_str = f"{pnl_sign}${unrealized_pnl:>15,.2f}"
|
||||
roe_str = f"{roe_sign}{return_on_equity:>9.2%}"
|
||||
lev_str = f"{leverage_value}x {leverage_type[:4]}"
|
||||
|
||||
# Print row with exact spacing
|
||||
print(f"| {asset_str} | {side_str} | {size_str} | {entry_str} | {value_str} | {pnl_str} | {roe_str} | {lev_str:<10} |")
|
||||
|
||||
# Separator before totals
|
||||
print("|==========|=======|===================|===================|===================|===================|============|============|")
|
||||
|
||||
# Total row
|
||||
total_value_str = f"${total_position_value:>16,.2f}"
|
||||
total_pnl_sign = "+" if total_pnl >= 0 else ""
|
||||
total_pnl_str = f"{total_pnl_sign}${total_pnl:>15,.2f}"
|
||||
|
||||
print(f"| TOTAL | | | | {total_value_str} | {total_pnl_str} | | |")
|
||||
print('='*130 + '\n')
|
||||
|
||||
def get_spot_state(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get spot trading state including token balances.
|
||||
|
||||
Returns:
|
||||
Dict containing:
|
||||
- balances: List of spot token holdings
|
||||
"""
|
||||
print("\n💰 Fetching Spot State...")
|
||||
try:
|
||||
data = self.info.spot_user_state(self.wallet_address)
|
||||
|
||||
if data and data.get('balances'):
|
||||
print(f" ✓ Spot Holdings: {len(data['balances'])} tokens")
|
||||
for balance in data['balances'][:5]: # Show first 5
|
||||
print(f" - {balance.get('coin', 'Unknown')}: {balance.get('total', 0)}")
|
||||
else:
|
||||
print(" ⚠ No spot holdings found")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return {}
|
||||
|
||||
def get_open_orders(self) -> list:
|
||||
"""
|
||||
Get all open orders for the user.
|
||||
|
||||
Returns:
|
||||
List of open orders with details (price, size, side, etc.)
|
||||
"""
|
||||
print("\n📋 Fetching Open Orders...")
|
||||
try:
|
||||
data = self.info.open_orders(self.wallet_address)
|
||||
|
||||
if data:
|
||||
print(f" ✓ Open Orders: {len(data)}")
|
||||
for order in data[:3]: # Show first 3
|
||||
coin = order.get('coin', 'Unknown')
|
||||
side = order.get('side', 'Unknown')
|
||||
size = order.get('sz', 0)
|
||||
price = order.get('limitPx', 0)
|
||||
print(f" - {coin} {side}: {size} @ ${price}")
|
||||
else:
|
||||
print(" ⚠ No open orders")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return []
|
||||
|
||||
def get_user_fills(self, limit: int = 100) -> list:
|
||||
"""
|
||||
Get recent trade fills (executions).
|
||||
|
||||
Args:
|
||||
limit: Maximum number of fills to retrieve (max 2000)
|
||||
|
||||
Returns:
|
||||
List of fills with execution details, PnL, timestamps
|
||||
"""
|
||||
print(f"\n📈 Fetching Recent Fills (last {limit})...")
|
||||
try:
|
||||
data = self.info.user_fills(self.wallet_address)
|
||||
|
||||
if data:
|
||||
fills = data[:limit]
|
||||
print(f" ✓ Total Fills Retrieved: {len(fills)}")
|
||||
|
||||
# Show summary stats
|
||||
total_pnl = sum(float(f.get('closedPnl', 0)) for f in fills if f.get('closedPnl'))
|
||||
print(f" ✓ Total Closed PnL: ${total_pnl:.2f}")
|
||||
|
||||
# Show most recent
|
||||
if fills:
|
||||
recent = fills[0]
|
||||
print(f" ✓ Most Recent: {recent.get('coin')} {recent.get('side')} {recent.get('sz')} @ ${recent.get('px')}")
|
||||
else:
|
||||
print(" ⚠ No fills found")
|
||||
|
||||
return data[:limit] if data else []
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return []
|
||||
|
||||
def get_user_fills_by_time(self, start_time: Optional[int] = None,
|
||||
end_time: Optional[int] = None) -> list:
|
||||
"""
|
||||
Get fills within a specific time range.
|
||||
|
||||
Args:
|
||||
start_time: Start timestamp in milliseconds (default: 7 days ago)
|
||||
end_time: End timestamp in milliseconds (default: now)
|
||||
|
||||
Returns:
|
||||
List of fills within the time range
|
||||
"""
|
||||
if not start_time:
|
||||
start_time = int((datetime.now() - timedelta(days=7)).timestamp() * 1000)
|
||||
if not end_time:
|
||||
end_time = int(datetime.now().timestamp() * 1000)
|
||||
|
||||
print(f"\n📅 Fetching Fills by Time Range...")
|
||||
print(f" From: {datetime.fromtimestamp(start_time/1000)}")
|
||||
print(f" To: {datetime.fromtimestamp(end_time/1000)}")
|
||||
|
||||
try:
|
||||
data = self.info.user_fills_by_time(self.wallet_address, start_time, end_time)
|
||||
|
||||
if data:
|
||||
print(f" ✓ Fills in Range: {len(data)}")
|
||||
else:
|
||||
print(" ⚠ No fills in this time range")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return []
|
||||
|
||||
def get_user_fees(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get user's fee schedule and trading volume.
|
||||
|
||||
Returns:
|
||||
Dict containing:
|
||||
- feeSchedule: Fee rates by tier
|
||||
- userCrossRate: User's current cross trading fee rate
|
||||
- userAddRate: User's maker fee rate
|
||||
- userWithdrawRate: Withdrawal fee rate
|
||||
- dailyUserVlm: Daily trading volume
|
||||
"""
|
||||
print("\n💳 Fetching Fee Information...")
|
||||
try:
|
||||
data = self.info.user_fees(self.wallet_address)
|
||||
|
||||
if data:
|
||||
print(f" ✓ Maker Fee: {data.get('userAddRate', 0)}%")
|
||||
print(f" ✓ Taker Fee: {data.get('userCrossRate', 0)}%")
|
||||
print(f" ✓ Daily Volume: ${data.get('dailyUserVlm', [0])[0] if data.get('dailyUserVlm') else 0}")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return {}
|
||||
|
||||
def get_user_rate_limit(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get API rate limit information.
|
||||
|
||||
Returns:
|
||||
Dict containing:
|
||||
- cumVlm: Cumulative trading volume
|
||||
- nRequestsUsed: Number of requests used
|
||||
- nRequestsCap: Request capacity
|
||||
"""
|
||||
print("\n⏱️ Fetching Rate Limit Info...")
|
||||
try:
|
||||
data = self.info.user_rate_limit(self.wallet_address)
|
||||
|
||||
if data:
|
||||
used = data.get('nRequestsUsed', 0)
|
||||
cap = data.get('nRequestsCap', 0)
|
||||
print(f" ✓ API Requests: {used}/{cap}")
|
||||
print(f" ✓ Cumulative Volume: ${data.get('cumVlm', 0)}")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return {}
|
||||
|
||||
def get_funding_history(self, coin: str, days: int = 7) -> list:
|
||||
"""
|
||||
Get funding rate history for a specific coin.
|
||||
|
||||
Args:
|
||||
coin: Asset symbol (e.g., 'BTC', 'ETH')
|
||||
days: Number of days of history (default: 7)
|
||||
|
||||
Returns:
|
||||
List of funding rate entries
|
||||
"""
|
||||
end_time = int(datetime.now().timestamp() * 1000)
|
||||
start_time = int((datetime.now() - timedelta(days=days)).timestamp() * 1000)
|
||||
|
||||
print(f"\n📊 Fetching Funding History for {coin}...")
|
||||
try:
|
||||
data = self.info.funding_history(coin, start_time, end_time)
|
||||
|
||||
if data:
|
||||
print(f" ✓ Funding Entries: {len(data)}")
|
||||
if data:
|
||||
latest = data[-1]
|
||||
print(f" ✓ Latest Rate: {latest.get('fundingRate', 0)}")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return []
|
||||
|
||||
def get_user_funding_history(self, days: int = 7) -> list:
|
||||
"""
|
||||
Get user's funding payments history.
|
||||
|
||||
Args:
|
||||
days: Number of days of history (default: 7)
|
||||
|
||||
Returns:
|
||||
List of funding payments
|
||||
"""
|
||||
end_time = int(datetime.now().timestamp() * 1000)
|
||||
start_time = int((datetime.now() - timedelta(days=days)).timestamp() * 1000)
|
||||
|
||||
print(f"\n💸 Fetching User Funding Payments (last {days} days)...")
|
||||
try:
|
||||
data = self.info.user_funding_history(self.wallet_address, start_time, end_time)
|
||||
|
||||
if data:
|
||||
print(f" ✓ Funding Payments: {len(data)}")
|
||||
total_funding = sum(float(f.get('usdc', 0)) for f in data)
|
||||
print(f" ✓ Total Funding P&L: ${total_funding:.2f}")
|
||||
else:
|
||||
print(" ⚠ No funding payments found")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return []
|
||||
|
||||
def get_user_non_funding_ledger_updates(self, days: int = 7) -> list:
|
||||
"""
|
||||
Get non-funding ledger updates (deposits, withdrawals, liquidations).
|
||||
|
||||
Args:
|
||||
days: Number of days of history (default: 7)
|
||||
|
||||
Returns:
|
||||
List of ledger updates
|
||||
"""
|
||||
end_time = int(datetime.now().timestamp() * 1000)
|
||||
start_time = int((datetime.now() - timedelta(days=days)).timestamp() * 1000)
|
||||
|
||||
print(f"\n📒 Fetching Ledger Updates (last {days} days)...")
|
||||
try:
|
||||
data = self.info.user_non_funding_ledger_updates(self.wallet_address, start_time, end_time)
|
||||
|
||||
if data:
|
||||
print(f" ✓ Ledger Updates: {len(data)}")
|
||||
# Categorize updates
|
||||
deposits = [u for u in data if 'deposit' in str(u.get('delta', {})).lower()]
|
||||
withdrawals = [u for u in data if 'withdraw' in str(u.get('delta', {})).lower()]
|
||||
print(f" ✓ Deposits: {len(deposits)}, Withdrawals: {len(withdrawals)}")
|
||||
else:
|
||||
print(" ⚠ No ledger updates found")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return []
|
||||
|
||||
def get_referral_state(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get referral program state for the user.
|
||||
|
||||
Returns:
|
||||
Dict with referral status and earnings
|
||||
"""
|
||||
print("\n🎁 Fetching Referral State...")
|
||||
try:
|
||||
data = self.info.query_referral_state(self.wallet_address)
|
||||
|
||||
if data:
|
||||
print(f" ✓ Referral Code: {data.get('referralCode', 'N/A')}")
|
||||
print(f" ✓ Referees: {len(data.get('referees', []))}")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return {}
|
||||
|
||||
def get_sub_accounts(self) -> list:
|
||||
"""
|
||||
Get list of sub-accounts for the user.
|
||||
|
||||
Returns:
|
||||
List of sub-account addresses
|
||||
"""
|
||||
print("\n👥 Fetching Sub-Accounts...")
|
||||
try:
|
||||
data = self.info.query_sub_accounts(self.wallet_address)
|
||||
|
||||
if data:
|
||||
print(f" ✓ Sub-Accounts: {len(data)}")
|
||||
else:
|
||||
print(" ⚠ No sub-accounts found")
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
print(f" ✗ Error: {e}")
|
||||
return []
|
||||
|
||||
def fetch_all_data(self, save_to_file: bool = True) -> Dict[str, Any]:
|
||||
"""
|
||||
Fetch all available data for the wallet.
|
||||
|
||||
Args:
|
||||
save_to_file: If True, save results to JSON file
|
||||
|
||||
Returns:
|
||||
Dict containing all fetched data
|
||||
"""
|
||||
print("=" * 80)
|
||||
print("HYPERLIQUID WALLET DATA FETCHER")
|
||||
print("=" * 80)
|
||||
|
||||
all_data = {
|
||||
'wallet_address': self.wallet_address,
|
||||
'timestamp': datetime.now().isoformat(),
|
||||
'data': {}
|
||||
}
|
||||
|
||||
# Fetch all data sections
|
||||
all_data['data']['user_state'] = self.get_user_state()
|
||||
all_data['data']['spot_state'] = self.get_spot_state()
|
||||
all_data['data']['open_orders'] = self.get_open_orders()
|
||||
all_data['data']['recent_fills'] = self.get_user_fills(limit=50)
|
||||
all_data['data']['fills_last_7_days'] = self.get_user_fills_by_time()
|
||||
all_data['data']['user_fees'] = self.get_user_fees()
|
||||
all_data['data']['rate_limit'] = self.get_user_rate_limit()
|
||||
all_data['data']['funding_payments'] = self.get_user_funding_history(days=7)
|
||||
all_data['data']['ledger_updates'] = self.get_user_non_funding_ledger_updates(days=7)
|
||||
all_data['data']['referral_state'] = self.get_referral_state()
|
||||
all_data['data']['sub_accounts'] = self.get_sub_accounts()
|
||||
|
||||
# Optional: Fetch funding history for positions
|
||||
user_state = all_data['data']['user_state']
|
||||
if user_state and user_state.get('assetPositions'):
|
||||
all_data['data']['funding_history'] = {}
|
||||
for position in user_state['assetPositions'][:3]: # First 3 positions
|
||||
coin = position.get('position', {}).get('coin')
|
||||
if coin:
|
||||
all_data['data']['funding_history'][coin] = self.get_funding_history(coin, days=7)
|
||||
|
||||
print("\n" + "=" * 80)
|
||||
print("DATA COLLECTION COMPLETE")
|
||||
print("=" * 80)
|
||||
|
||||
# Save to file
|
||||
if save_to_file:
|
||||
filename = f"hyperliquid_wallet_data_{self.wallet_address[:10]}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
|
||||
with open(filename, 'w') as f:
|
||||
json.dump(all_data, f, indent=2, default=str)
|
||||
print(f"\n💾 Data saved to: {filename}")
|
||||
|
||||
return all_data
|
||||
|
||||
|
||||
def main():
    """Main execution function."""
    if len(sys.argv) < 2:
        print("Usage: python hyperliquid_wallet_data.py <wallet_address> [--testnet]")
        print("\nExample:")
        print("  python hyperliquid_wallet_data.py 0xcd5051944f780a621ee62e39e493c489668acf4d")
        sys.exit(1)

    wallet_address = sys.argv[1]
    use_testnet = '--testnet' in sys.argv

    # Validate wallet address format
    if not wallet_address.startswith('0x') or len(wallet_address) != 42:
        print("❌ Error: Invalid wallet address format")
        print("   Address must be in format: 0x followed by 40 hexadecimal characters")
        sys.exit(1)

    try:
        analyzer = HyperliquidWalletAnalyzer(wallet_address, use_testnet=use_testnet)
        data = analyzer.fetch_all_data(save_to_file=True)

        print("\n✅ All data fetched successfully!")
        print("\n📊 Summary:")
        # Guard against None results from fetches that failed and returned no data
        user_state = data['data']['user_state'] or {}
        spot_state = data['data']['spot_state'] or {}
        print(f"   - Account Value: ${user_state.get('marginSummary', {}).get('accountValue', 0)}")
        print(f"   - Open Positions: {len(user_state.get('assetPositions', []))}")
        print(f"   - Spot Holdings: {len(spot_state.get('balances', []))}")
        print(f"   - Open Orders: {len(data['data']['open_orders'])}")
        print(f"   - Recent Fills: {len(data['data']['recent_fills'])}")

    except Exception as e:
        print(f"\n❌ Fatal Error: {e}")
        import traceback
        traceback.print_exc()
        sys.exit(1)


if __name__ == "__main__":
    main()
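For driving the fetcher from another script instead of the CLI above, a minimal sketch, assuming hyperliquid_wallet_data.py is importable as a module (the constructor and fetch_all_data signature are the ones shown above):

# Sketch: programmatic use of the analyzer
from hyperliquid_wallet_data import HyperliquidWalletAnalyzer

analyzer = HyperliquidWalletAnalyzer(
    "0xcd5051944f780a621ee62e39e493c489668acf4d",  # address from the usage example
    use_testnet=False,
)
snapshot = analyzer.fetch_all_data(save_to_file=False)  # keep results in memory only
print(f"Open orders: {len(snapshot['data']['open_orders'])}")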
367
whale_tracker.py
367
whale_tracker.py
@ -1,367 +0,0 @@
import json
import os
import time
import requests
import logging
import argparse
import sys
from datetime import datetime, timedelta

# --- Configuration ---
# !! IMPORTANT: Update this to your actual Hyperliquid API endpoint !!
API_ENDPOINT = "https://api.hyperliquid.xyz/info"

INPUT_FILE = os.path.join("_data", "wallets_to_track.json")
OUTPUT_FILE = os.path.join("_data", "wallets_info.json")
LOGS_DIR = "_logs"
LOG_FILE = os.path.join(LOGS_DIR, "whale_tracker.log")

# Polling intervals (in seconds)
POLL_INTERVALS = {
    'core_data': 10,         # 5-15s range
    'open_orders': 20,       # 15-30s range
    'account_metrics': 180,  # 1-5m range
    'ledger_updates': 600,   # 5-15m range
    'save_data': 5,          # How often to write to wallets_info.json
    'reload_wallets': 60     # Check for wallet list changes every 60s
}


class HyperliquidAPI:
    """
    Client to handle POST requests to the Hyperliquid info endpoint.
    """
    def __init__(self, base_url):
        self.base_url = base_url
        self.session = requests.Session()
        logging.info(f"API Client initialized for endpoint: {base_url}")

    def post_request(self, payload):
        """
        Internal helper to send POST requests and handle errors.
        """
        response = None
        try:
            response = self.session.post(self.base_url, json=payload, timeout=10)
            response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)
            return response.json()
        except requests.exceptions.HTTPError as e:
            logging.error(f"HTTP Error: {e.response.status_code} for {e.request.url}. Response: {e.response.text}")
        except requests.exceptions.ConnectionError as e:
            logging.error(f"Connection Error: {e}")
        except requests.exceptions.Timeout:
            logging.error(f"Request timed out for payload: {payload.get('type')}")
        except json.JSONDecodeError:
            logging.error(f"Failed to decode JSON response. Response text: {response.text if response is not None else 'No response text'}")
        except Exception as e:
            logging.error(f"An unexpected error occurred in post_request: {e}", exc_info=True)
        return None

    def get_user_state(self, user_address: str):
        payload = {"type": "clearinghouseState", "user": user_address}
        return self.post_request(payload)

    def get_open_orders(self, user_address: str):
        payload = {"type": "openOrders", "user": user_address}
        return self.post_request(payload)

    def get_user_rate_limit(self, user_address: str):
        payload = {"type": "userRateLimit", "user": user_address}
        return self.post_request(payload)

    def get_user_ledger_updates(self, user_address: str, start_time_ms: int, end_time_ms: int):
        payload = {
            "type": "userNonFundingLedgerUpdates",
            "user": user_address,
            "startTime": start_time_ms,
            "endTime": end_time_ms
        }
        return self.post_request(payload)

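# Example: HyperliquidAPI(API_ENDPOINT).get_user_state("0xabc...") POSTs
# {"type": "clearinghouseState", "user": "0xabc..."} to the info endpoint and
# returns the decoded JSON dict, or None after logging the failure.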
class WalletTracker:
    """
    Main class to track wallets, process data, and store results.
    """
    def __init__(self, api_client, wallets_to_track):
        self.api = api_client
        self.wallets = wallets_to_track  # This is the list of dicts
        self.wallets_by_name = {w['name']: w for w in self.wallets}
        self.wallets_data = {
            wallet['name']: {"address": wallet['address']} for wallet in self.wallets
        }
        logging.info(f"WalletTracker initialized for {len(self.wallets)} wallets.")

    def reload_wallets(self):
        """
        Checks the INPUT_FILE for changes and updates the tracked wallet list.
        """
        logging.debug("Reloading wallet list...")
        try:
            with open(INPUT_FILE, 'r') as f:
                new_wallets_list = json.load(f)
            if not isinstance(new_wallets_list, list):
                logging.warning(f"Failed to reload '{INPUT_FILE}': content is not a list.")
                return

            new_wallets_by_name = {w['name']: w for w in new_wallets_list}
            old_names = set(self.wallets_by_name.keys())
            new_names = set(new_wallets_by_name.keys())

            added_names = new_names - old_names
            removed_names = old_names - new_names

            if not added_names and not removed_names:
                logging.debug("Wallet list is unchanged.")
                return  # No changes

            # Update internal wallet list
            self.wallets = new_wallets_list
            self.wallets_by_name = new_wallets_by_name

            # Add new wallets to wallets_data
            for name in added_names:
                self.wallets_data[name] = {"address": self.wallets_by_name[name]['address']}
                logging.info(f"Added new wallet to track: {name}")

            # Remove old wallets from wallets_data
            for name in removed_names:
                if name in self.wallets_data:
                    del self.wallets_data[name]
                    logging.info(f"Removed wallet from tracking: {name}")

            logging.info(f"Wallet list reloaded. Tracking {len(self.wallets)} wallets.")

        except (FileNotFoundError, json.JSONDecodeError, ValueError) as e:
            logging.error(f"Failed to reload and parse '{INPUT_FILE}': {e}")
        except Exception as e:
            logging.error(f"Unexpected error during wallet reload: {e}", exc_info=True)

    def calculate_core_metrics(self, state_data: dict) -> dict:
        """
        Performs calculations based on user_state data.
        """
        if not state_data or 'crossMarginSummary' not in state_data:
            logging.warning("Core state data is missing 'crossMarginSummary'.")
            return {"raw_state": state_data}

        summary = state_data['crossMarginSummary']
        account_value = float(summary.get('accountValue', 0))
        margin_used = float(summary.get('totalMarginUsed', 0))

        # Calculations
        margin_utilization = (margin_used / account_value) if account_value > 0 else 0
        available_margin = account_value - margin_used

        total_position_value = 0
        if 'assetPositions' in state_data:
            for pos in state_data.get('assetPositions', []):
                try:
                    # Use 'value' for position value
                    pos_value_str = pos.get('position', {}).get('value', '0')
                    total_position_value += float(pos_value_str)
                except (ValueError, TypeError):
                    logging.warning(f"Could not parse position value: {pos.get('position', {}).get('value')}")
                    continue

        portfolio_leverage = (total_position_value / account_value) if account_value > 0 else 0

        # Return calculated metrics alongside raw data
        return {
            "raw_state": state_data,
            "account_value": account_value,
            "margin_used": margin_used,
            "margin_utilization": margin_utilization,
            "available_margin": available_margin,
            "total_position_value": total_position_value,
            "portfolio_leverage": portfolio_leverage
        }
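    # Worked example with hypothetical numbers: accountValue=10000,
    # totalMarginUsed=2500 and a summed position value of 30000 give
    # margin_utilization = 2500/10000 = 0.25, available_margin = 7500,
    # portfolio_leverage = 30000/10000 = 3.0.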

    def poll_core_data(self):
        logging.debug("Polling Core Data...")
        # Use self.wallets which is updated by reload_wallets
        for wallet in self.wallets:
            name = wallet['name']
            address = wallet['address']
            state_data = self.api.get_user_state(address)
            if state_data:
                calculated_data = self.calculate_core_metrics(state_data)
                # Ensure wallet hasn't been removed by a concurrent reload
                if name in self.wallets_data:
                    self.wallets_data[name]['core_state'] = calculated_data
            time.sleep(0.1)  # Avoid bursting requests

    def poll_open_orders(self):
        logging.debug("Polling Open Orders...")
        for wallet in self.wallets:
            name = wallet['name']
            address = wallet['address']
            orders_data = self.api.get_open_orders(address)
            if orders_data:
                # TODO: Add calculations for 'pending_margin_required' if logic is available
                if name in self.wallets_data:
                    self.wallets_data[name]['open_orders'] = {"raw_orders": orders_data}
            time.sleep(0.1)

    def poll_account_metrics(self):
        logging.debug("Polling Account Metrics...")
        for wallet in self.wallets:
            name = wallet['name']
            address = wallet['address']
            metrics_data = self.api.get_user_rate_limit(address)
            if metrics_data:
                if name in self.wallets_data:
                    self.wallets_data[name]['account_metrics'] = metrics_data
            time.sleep(0.1)

    def poll_ledger_updates(self):
        logging.debug("Polling Ledger Updates...")
        end_time_ms = int(datetime.now().timestamp() * 1000)
        start_time_ms = int((datetime.now() - timedelta(minutes=15)).timestamp() * 1000)

        for wallet in self.wallets:
            name = wallet['name']
            address = wallet['address']
            ledger_data = self.api.get_user_ledger_updates(address, start_time_ms, end_time_ms)
            if ledger_data:
                if name in self.wallets_data:
                    self.wallets_data[name]['ledger_updates'] = ledger_data
            time.sleep(0.1)

    def save_data_to_json(self):
        """
        Atomically writes the current wallet data to the output JSON file.
        (No longer needs cleaning logic: wallets_data is kept clean by reload_wallets.)
        """
        logging.debug(f"Saving data to {OUTPUT_FILE}...")

        temp_file = OUTPUT_FILE + ".tmp"
        try:
            # Save the data to a temp file first
            with open(temp_file, 'w', encoding='utf-8') as f:
                json.dump(self.wallets_data, f, indent=2)
            # Atomic rename (move) over the target
            os.replace(temp_file, OUTPUT_FILE)
        except (IOError, TypeError) as e:
            # json.dump raises TypeError (not JSONDecodeError) for unserializable values
            logging.error(f"Failed to write wallet data to file: {e}")
            if os.path.exists(temp_file):
                os.remove(temp_file)
        except Exception as e:
            logging.error(f"An unexpected error occurred during file save: {e}")
            if os.path.exists(temp_file):
                os.remove(temp_file)

class WhaleTrackerRunner:
    """
    Manages the polling loop using last-run timestamps instead of a complex scheduler.
    """
    def __init__(self, api_client, wallets, shared_whale_data_dict=None):  # Kept arg for compatibility
        self.tracker = WalletTracker(api_client, wallets)
        self.last_poll_times = {key: 0 for key in POLL_INTERVALS}
        self.poll_intervals = POLL_INTERVALS
        logging.info("WhaleTrackerRunner initialized to save to JSON file.")

    def update_shared_data(self):
        """
        No longer called by the run loop; kept only to avoid breaking external imports.
        """
        logging.debug("No shared dict, saving data to JSON file.")
        self.tracker.save_data_to_json()

    def run(self):
        logging.info("Starting main polling loop...")
        while True:
            try:
                now = time.time()

                if now - self.last_poll_times['reload_wallets'] > self.poll_intervals['reload_wallets']:
                    self.tracker.reload_wallets()
                    self.last_poll_times['reload_wallets'] = now

                if now - self.last_poll_times['core_data'] > self.poll_intervals['core_data']:
                    self.tracker.poll_core_data()
                    self.last_poll_times['core_data'] = now

                if now - self.last_poll_times['open_orders'] > self.poll_intervals['open_orders']:
                    self.tracker.poll_open_orders()
                    self.last_poll_times['open_orders'] = now

                if now - self.last_poll_times['account_metrics'] > self.poll_intervals['account_metrics']:
                    self.tracker.poll_account_metrics()
                    self.last_poll_times['account_metrics'] = now

                if now - self.last_poll_times['ledger_updates'] > self.poll_intervals['ledger_updates']:
                    self.tracker.poll_ledger_updates()
                    self.last_poll_times['ledger_updates'] = now

                if now - self.last_poll_times['save_data'] > self.poll_intervals['save_data']:
                    self.tracker.save_data_to_json()
                    self.last_poll_times['save_data'] = now

                # Sleep for a short duration to prevent busy-waiting
                time.sleep(1)

            except Exception as e:
                logging.critical(f"Unhandled exception in main loop: {e}", exc_info=True)
                time.sleep(10)
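    # Scheduling note: last_poll_times starts at 0, so every task fires on the
    # first pass; afterwards a task runs only once `now - last_run` exceeds its
    # interval. With core_data=10, save_data=5 and a 1s tick, core state is
    # polled roughly every 10-11s and wallets_info.json rewritten every 5-6s.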


def setup_logging(log_level_str: str, process_name: str):
    """Configures logging for the script."""
    if not os.path.exists(LOGS_DIR):
        try:
            os.makedirs(LOGS_DIR)
        except OSError as e:
            print(f"Failed to create logs directory {LOGS_DIR}: {e}")
            return

    level_map = {
        'debug': logging.DEBUG,
        'normal': logging.INFO,
        'off': logging.NOTSET
    }
    log_level = level_map.get(log_level_str.lower(), logging.INFO)

    if log_level == logging.NOTSET:
        return

    handlers_list = [logging.FileHandler(LOG_FILE, mode='a')]

    if sys.stdout.isatty():
        handlers_list.append(logging.StreamHandler(sys.stdout))

    logging.basicConfig(
        level=log_level,
        format=f"%(asctime)s.%(msecs)03d | {process_name:<20} | %(levelname)-8s | %(message)s",
        datefmt='%Y-%m-%d %H:%M:%S',
        handlers=handlers_list
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Hyperliquid Whale Tracker")
    parser.add_argument("--log-level", default="normal", choices=['off', 'normal', 'debug'])
    args = parser.parse_args()

    setup_logging(args.log_level, "WhaleTracker")

    # Load wallets to track
    wallets_to_track = []
    try:
        with open(INPUT_FILE, 'r') as f:
            wallets_to_track = json.load(f)
        if not isinstance(wallets_to_track, list) or not wallets_to_track:
            raise ValueError(f"'{INPUT_FILE}' is empty or not a list.")
    except (FileNotFoundError, json.JSONDecodeError, ValueError) as e:
        logging.critical(f"Failed to load '{INPUT_FILE}': {e}. Exiting.")
        sys.exit(1)

    # Initialize API client
    api_client = HyperliquidAPI(base_url=API_ENDPOINT)

    # Initialize and run the tracker
    runner = WhaleTrackerRunner(api_client, wallets_to_track, shared_whale_data_dict=None)

    try:
        runner.run()
    except KeyboardInterrupt:
        logging.info("Whale Tracker shutting down.")
        sys.exit(0)
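The tracker reads its watch list from _data/wallets_to_track.json, which must be a non-empty JSON list of objects with name and address keys (see the loading code above). A minimal bootstrap sketch; the wallet name is illustrative and the address is reused from the earlier usage example:

# Sketch: create a minimal wallets_to_track.json in the shape the tracker expects
import json
import os

os.makedirs("_data", exist_ok=True)
wallets = [
    {"name": "whale-1", "address": "0xcd5051944f780a621ee62e39e493c489668acf4d"},
]
with open(os.path.join("_data", "wallets_to_track.json"), "w") as f:
    json.dump(wallets, f, indent=2)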