Examples
Advanced use cases and real-world applications of FinWorld
Example 1: Data Download and Processing
This example shows how to download and process financial data using the provided shell scripts.
Download Scripts
Use the provided download scripts in examples/download.sh:
```bash
#!/bin/bash

# Download HS300 data
python scripts/download/download.py --config configs/download/hs300/hs300_fmp_price_1day.py
python scripts/download/download.py --config configs/download/hs300/hs300_fmp_price_1min.py

# Download SP500 data
python scripts/download/download.py --config configs/download/sp500/sp500_fmp_price_1day.py
python scripts/download/download.py --config configs/download/sp500/sp500_fmp_price_1min.py

# Download DJ30 data
python scripts/download/download.py --config configs/download/dj30/dj30_fmp_price_1day.py
python scripts/download/download.py --config configs/download/dj30/dj30_fmp_price_1min.py
```
Processing Scripts
Use the provided processing scripts in examples/process.sh:
```bash
#!/bin/bash

# Process HS300 data
python scripts/process/process.py --config configs/process/hs300.py

# Process SP500 data
python scripts/process/process.py --config configs/process/sp500.py

# Process DJ30 data
python scripts/process/process.py --config configs/process/dj30.py
```
Key Features
- Multiple Data Sources: FMP, Alpaca, AkShare, TuShare
- Different Time Frames: Daily and minute-level data
- Batch Processing: Process multiple datasets simultaneously
- Data Quality: Built-in validation and cleaning
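To give a feel for the "validation and cleaning" stage, here is a minimal sketch of typical OHLCV sanity checks. The row layout `(timestamp, open, high, low, close, volume)` is an assumption for illustration, not FinWorld's actual schema or API.

```python
# Minimal sketch of OHLCV sanity checks, similar in spirit to the
# validation/cleaning the processing step performs. The row format is
# an assumption for illustration, not FinWorld's actual schema.
from datetime import date

def validate_ohlcv(rows):
    """Return the subset of rows that pass basic sanity checks.

    Each row is (timestamp, open, high, low, close, volume).
    Drops rows with inverted high/low, negative volume, open/close
    outside the [low, high] range, or out-of-order timestamps.
    """
    clean, last_ts = [], None
    for ts, o, h, l, c, v in rows:
        if h < l or v < 0:                     # impossible bar
            continue
        if not (l <= o <= h and l <= c <= h):  # open/close outside range
            continue
        if last_ts is not None and ts <= last_ts:  # out of order / duplicate
            continue
        clean.append((ts, o, h, l, c, v))
        last_ts = ts
    return clean

rows = [
    (date(2024, 1, 2), 10.0, 10.5, 9.8, 10.2, 1000),
    (date(2024, 1, 2), 10.2, 10.4, 9.9, 10.1, 500),   # duplicate timestamp
    (date(2024, 1, 3), 10.1, 9.9, 10.3, 10.0, 800),   # high < low
    (date(2024, 1, 4), 10.0, 10.6, 9.9, 10.4, 1200),
]
print(len(validate_ohlcv(rows)))  # 2 rows survive
```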
Example 2: Time Series Forecasting
This example demonstrates time series forecasting using different model architectures.
Training Scripts
Use the provided time series training scripts in examples/time.sh:
```bash
#!/bin/bash

# Train Autoformer model
CUDA_VISIBLE_DEVICES=4,5 torchrun --master_port=29510 --nproc_per_node=2 scripts/time/train.py --config configs/time/dj30_autoformer.py

# Train Crossformer model
CUDA_VISIBLE_DEVICES=4,5 torchrun --master_port=29511 --nproc_per_node=2 scripts/time/train.py --config configs/time/dj30_crossformer.py

# Train DLinear model
CUDA_VISIBLE_DEVICES=4,5 torchrun --master_port=29512 --nproc_per_node=2 scripts/time/train.py --config configs/time/dj30_dlinear.py

# Train ETSformer model
CUDA_VISIBLE_DEVICES=4,5 torchrun --master_port=29513 --nproc_per_node=2 scripts/time/train.py --config configs/time/dj30_etsformer.py
```
Key Features
- Multiple Architectures: Autoformer, Crossformer, DLinear, ETSformer
- Distributed Training: Multi-GPU support for faster training
- Different Datasets: DJ30, SP500, HS300, SSE50
- Performance Comparison: Easy to compare different models
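Comparing trained forecasters usually comes down to ranking them by held-out error. A small sketch of that comparison (the metric helpers are the standard MSE/MAE; the forecast values below are made up for illustration, not real model outputs):

```python
# Sketch: comparing forecasts from different models on a held-out series.
# Forecast values are illustrative, not real Autoformer/DLinear outputs.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 1.1, 0.9, 1.2]
forecasts = {
    "autoformer": [1.0, 1.0, 1.0, 1.1],
    "dlinear":    [0.9, 1.2, 0.8, 1.3],
}

# Rank models from lowest to highest MSE
ranked = sorted(forecasts, key=lambda m: mse(y_true, forecasts[m]))
for m in ranked:
    print(f"{m}: mse={mse(y_true, forecasts[m]):.4f} mae={mae(y_true, forecasts[m]):.4f}")
```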
Example 3: RL Trading with PPO
This example demonstrates reinforcement learning trading using the PPO (Proximal Policy Optimization) algorithm across multiple stocks.
Training Scripts
Use the provided PPO trading scripts in examples/ppo_trading.sh:
```bash
#!/usr/bin/env bash

# Train PPO trading agents for different stocks (one background job per symbol)
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/ppo/AAPL_ppo_trading.py &
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/ppo/AMZN_ppo_trading.py &
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/ppo/GOOGL_ppo_trading.py &
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/ppo/META_ppo_trading.py &
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/ppo/MSFT_ppo_trading.py &
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/ppo/TSLA_ppo_trading.py &

# Block until all background training jobs finish
wait
```
Key Features
- Multi-Stock Training: Train on multiple stocks simultaneously
- PPO Algorithm: Stable on-policy RL algorithm with clipped policy updates
- Risk Management: Built-in position limits and transaction costs
- Performance Metrics: Sharpe ratio, returns, drawdown analysis
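The performance metrics listed above are straightforward to compute from a return series. A sketch of the Sharpe ratio and maximum drawdown (the sample returns are illustrative, not output from a trained agent):

```python
import math

# Sketch of the performance metrics mentioned above, computed from a
# daily return series. Sample returns are illustrative only.
def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio (risk-free rate assumed zero)."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    std = math.sqrt(var)
    return (mean / std) * math.sqrt(periods_per_year) if std > 0 else 0.0

def max_drawdown(returns):
    """Largest peak-to-trough decline of the compounded equity curve."""
    equity, peak, mdd = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        mdd = max(mdd, (peak - equity) / peak)
    return mdd

returns = [0.01, -0.02, 0.015, -0.005, 0.02]
print(round(sharpe_ratio(returns), 3))
print(round(max_drawdown(returns), 4))  # worst dip: the -2% day after a +1% day
```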
Example 4: RL Trading with SAC
This example demonstrates reinforcement learning trading using the SAC (Soft Actor-Critic) algorithm.
Training Scripts
Use the provided SAC trading scripts in examples/sac_trading.sh:
```bash
#!/usr/bin/env bash

# Kill any existing SAC trading processes before relaunching:
# ps -ef | grep sac | grep -v grep | awk '{print $2}' | xargs kill -9

# Train SAC trading agents for different stocks (one background job per symbol)
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/sac/AAPL_sac_trading.py &
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/sac/AMZN_sac_trading.py &
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/sac/GOOGL_sac_trading.py &
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/sac/META_sac_trading.py &
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/sac/MSFT_sac_trading.py &
CUDA_VISIBLE_DEVICES=4 python scripts/rl_trading/train.py --config=configs/rl_trading/sac/TSLA_sac_trading.py &

# Block until all background training jobs finish
wait
```
Key Features
- SAC Algorithm: Sample-efficient off-policy RL algorithm
- Experience Replay: Better sample efficiency with replay buffer
- Entropy Regularization: Automatic exploration-exploitation balance
- Double Q-Learning: Reduces overestimation bias
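Two of the features above, entropy regularization and double Q-learning, meet in SAC's soft TD target. The sketch below is the textbook form of that target with made-up numbers, not FinWorld's internal implementation:

```python
# Illustration of two SAC ingredients listed above: the clipped
# double-Q target min(Q1, Q2) and the entropy bonus -alpha * log pi.
# Textbook formula with made-up numbers, not FinWorld internals.
def soft_td_target(reward, q1_next, q2_next, log_prob_next,
                   alpha=0.2, gamma=0.99, done=False):
    """r + gamma * (min(Q1, Q2) - alpha * log pi(a'|s')) for non-terminal steps."""
    if done:
        return reward
    soft_value = min(q1_next, q2_next) - alpha * log_prob_next
    return reward + gamma * soft_value

# min(1.2, 1.5) = 1.2; entropy bonus is -0.2 * (-1.0) = +0.2
print(soft_td_target(reward=0.05, q1_next=1.2, q2_next=1.5, log_prob_next=-1.0))
```

Taking the minimum of the two critics is what counters overestimation bias, and the `-alpha * log_prob` term is what rewards the policy for staying stochastic.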
Example 5: RL Portfolio with PPO
This example demonstrates reinforcement learning portfolio optimization using the PPO algorithm across different market indices.
Training Scripts
Use the provided PPO portfolio scripts in examples/ppo_portfolio.sh:
```bash
#!/usr/bin/env bash

# Train PPO portfolio agents for different indices (one background job each)
CUDA_VISIBLE_DEVICES=5 python scripts/rl_portfolio/train.py --config=configs/rl_portfolio/ppo/dj30_ppo_portfolio.py &
CUDA_VISIBLE_DEVICES=5 python scripts/rl_portfolio/train.py --config=configs/rl_portfolio/ppo/sse50_ppo_portfolio.py &
CUDA_VISIBLE_DEVICES=5 python scripts/rl_portfolio/train.py --config=configs/rl_portfolio/ppo/hs300_ppo_portfolio.py &
CUDA_VISIBLE_DEVICES=5 python scripts/rl_portfolio/train.py --config=configs/rl_portfolio/ppo/sp500_ppo_portfolio.py &

# Block until all background training jobs finish
wait
```
Key Features
- Multi-Market Portfolios: Optimize across DJ30, SSE50, HS300, and SP500 universes
- Risk-Adjusted Returns: Focus on Sharpe ratio and risk management
- Diversification: Built-in diversification constraints
- Rebalancing: Dynamic portfolio rebalancing strategies
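A portfolio policy ultimately has to emit weights that are non-negative and sum to one, and each rebalance implies turnover (and hence transaction cost). A sketch of one common action mapping, a softmax over raw policy scores; FinWorld's actual action space may differ:

```python
import math

# Sketch: mapping raw policy outputs to long-only portfolio weights via
# softmax, plus the turnover a rebalance implies. Illustrative only;
# FinWorld's actual action mapping may differ.
def softmax_weights(scores):
    """Positive weights summing to 1 (numerically stable softmax)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def turnover(old_w, new_w):
    """Fraction of the portfolio traded when moving old_w -> new_w."""
    return 0.5 * sum(abs(a - b) for a, b in zip(old_w, new_w))

old_w = [0.25, 0.25, 0.25, 0.25]          # start from equal weight
new_w = softmax_weights([0.1, 0.4, -0.2, 0.0])
print([round(w, 3) for w in new_w], round(turnover(old_w, new_w), 4))
```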
Example 6: RL Portfolio with SAC
This example demonstrates reinforcement learning portfolio optimization using the SAC algorithm.
Training Scripts
Use the provided SAC portfolio scripts in examples/sac_portfolio.sh:
```bash
#!/usr/bin/env bash

# Train SAC portfolio agents for different indices (one background job each)
CUDA_VISIBLE_DEVICES=5 python scripts/rl_portfolio/train.py --config=configs/rl_portfolio/sac/dj30_sac_portfolio.py &
CUDA_VISIBLE_DEVICES=5 python scripts/rl_portfolio/train.py --config=configs/rl_portfolio/sac/sse50_sac_portfolio.py &
CUDA_VISIBLE_DEVICES=5 python scripts/rl_portfolio/train.py --config=configs/rl_portfolio/sac/hs300_sac_portfolio.py &
CUDA_VISIBLE_DEVICES=5 python scripts/rl_portfolio/train.py --config=configs/rl_portfolio/sac/sp500_sac_portfolio.py &

# Block until all background training jobs finish
wait
```
Key Features
- Continuous Action Space: Fine-grained portfolio allocation
- Sample Efficiency: SAC's off-policy learning advantage
- Stable Training: Reduced variance in portfolio optimization
- Adaptive Strategies: Dynamic adjustment to market conditions
Getting Started with Examples
To run these examples, follow these steps:
1. Prepare Data: Download the required datasets using the download tutorials
2. Install Dependencies: Ensure all required packages are installed
3. Configure Environment: Set up your configuration files
4. Run Examples: Execute the example scripts
5. Analyze Results: Review performance metrics and visualizations
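The first stages of this checklist chain the same three scripts shown in the examples above. The sketch below builds the corresponding commands as a dry run (paths are taken from the example scripts; the chosen configs are just one possible combination); swap `print` for `subprocess.run` to actually execute them:

```python
import shlex

# Dry-run sketch of the download -> process -> train pipeline.
# Script and config paths are taken from the examples above; swap
# print for subprocess.run(cmd, check=True) to execute for real.
STAGES = [
    ("download", "scripts/download/download.py", "configs/download/dj30/dj30_fmp_price_1day.py"),
    ("process",  "scripts/process/process.py",   "configs/process/dj30.py"),
    ("train",    "scripts/rl_trading/train.py",  "configs/rl_trading/ppo/AAPL_ppo_trading.py"),
]

commands = [["python", script, "--config", config] for _, script, config in STAGES]
for (stage, _, _), cmd in zip(STAGES, commands):
    print(f"[{stage}] {shlex.join(cmd)}")
```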
💡 Pro Tips
- Start with simpler examples and gradually move to more complex ones
- Modify configurations to match your specific requirements
- Use the built-in visualization tools to analyze results
- Consider computational resources for large-scale examples
For more detailed information, check out our Tutorials and API Reference.