An asynchronous Python helper library for interacting with Ethereum RPC nodes, featuring retry logic and multi-node support.
- Multi-Node Support: Automatically distributes requests across multiple Ethereum RPC nodes
- Automatic Failover: Intelligent retry logic with exponential backoff
- Archive Mode: Support for both regular and archive nodes
- Rate Limiting: Built-in rate limiting to prevent overwhelming nodes
- Async/Await: Fully asynchronous API for non-blocking operations
- Batch Operations: Efficient batched RPC calls for better performance
- Advanced Logging: Comprehensive logging with multiple levels
- Web3.py Integration: Seamless integration with web3.py
```python
import asyncio

from rpc_helper.utils.models.settings_model import RPCConfigBase, RPCNodeConfig, ConnectionLimits
from rpc_helper.rpc import RpcHelper


async def main():
    # Create RPC configuration
    rpc_config = RPCConfigBase(
        full_nodes=[
            RPCNodeConfig(url="https://eth-mainnet.provider1.io"),
            RPCNodeConfig(url="https://eth-mainnet.provider2.io"),
        ],
        archive_nodes=[
            RPCNodeConfig(url="https://eth-mainnet-archive.provider.io"),
        ],
        force_archive_blocks=10000,  # Use archive nodes for blocks older than this
        retry=3,                     # Number of retries before giving up
        request_time_out=15,         # Seconds
        connection_limits=ConnectionLimits(
            max_connections=100,
            max_keepalive_connections=50,
            keepalive_expiry=300,
        ),
    )

    # Initialize RPC helper
    rpc = RpcHelper(rpc_settings=rpc_config)
    await rpc.init()

    # Get current block number
    block_number = await rpc.get_current_block_number()
    print(f"Current block number: {block_number}")

    # Get transaction details
    tx_hash = "0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
    tx_details = await rpc.get_transaction_from_hash(tx_hash)
    print(f"Transaction details: {tx_details}")


asyncio.run(main())
```
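Since every helper method is a coroutine, independent calls can be fanned out concurrently with `asyncio.gather`. A minimal sketch, assuming the initialized `rpc` instance from the Quick Start; `fetch_many` is a hypothetical helper, not part of the library:

```python
import asyncio

async def fetch_many(rpc, tx_hashes):
    # Fan out independent RPC calls concurrently; the helper's built-in
    # rate limiting and multi-node distribution still apply per request.
    return await asyncio.gather(
        *(rpc.get_transaction_from_hash(h) for h in tx_hashes)
    )
```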
```python
from web3 import Web3


async def contract_example(rpc):
    # rpc: an initialized RpcHelper instance (see Quick Start)

    # ERC-20 contract ABI (simplified for example)
    abi = [
        {
            "constant": True,
            "inputs": [{"name": "_owner", "type": "address"}],
            "name": "balanceOf",
            "outputs": [{"name": "balance", "type": "uint256"}],
            "type": "function",
        },
        {
            "constant": True,
            "inputs": [],
            "name": "symbol",
            "outputs": [{"name": "", "type": "string"}],
            "type": "function",
        },
    ]

    # Contract address (USDT on Ethereum)
    contract_address = "0xdAC17F958D2ee523a2206206994597C13D831ec7"

    # Create tasks for batch call
    tasks = [
        ("symbol", []),  # Get token symbol
        ("balanceOf", [Web3.to_checksum_address("0x1234567890123456789012345678901234567890")]),  # Get balance
    ]

    # Make batch call
    results = await rpc.web3_call(tasks, contract_address, abi)
    symbol, balance = results
    print(f"Token: {symbol}")
    print(f"Balance: {balance}")
```
```python
async def batch_processing_example(rpc):
    # rpc: an initialized RpcHelper instance (see Quick Start)

    # Process a range of blocks
    start_block = 15000000
    end_block = 15000010

    # Get blocks in batch
    blocks = await rpc.batch_eth_get_block(start_block, end_block)
    for block in blocks:
        print(f"Block {block['number']}: {len(block['transactions'])} transactions")
```
```python
from rpc_helper.rpc import get_event_sig_and_abi


async def event_logs_example(rpc):
    # rpc: an initialized RpcHelper instance (see Quick Start)

    # Contract address emitting the events
    contract_address = "0x1234567890123456789012345678901234567890"

    # Event ABI
    event_abi = {
        "anonymous": False,
        "inputs": [
            {"indexed": True, "name": "from", "type": "address"},
            {"indexed": True, "name": "to", "type": "address"},
            {"indexed": False, "name": "value", "type": "uint256"},
        ],
        "name": "Transfer",
        "type": "event",
    }

    # Event signature
    event_signatures = {
        "Transfer": "Transfer(address,address,uint256)"
    }

    # Get event signatures and ABIs
    event_sig, event_abi_dict = get_event_sig_and_abi(event_signatures, {"Transfer": event_abi})

    # Get logs for a block range
    from_block = 15000000
    to_block = 15000100
    logs = await rpc.get_events_logs(
        contract_address=contract_address,
        from_block=from_block,
        to_block=to_block,
        topics=[event_sig],  # Filter by event signature
        event_abi=event_abi_dict,
    )
    for log in logs:
        print(f"Transfer: {log['args']['from']} -> {log['args']['to']}, Value: {log['args']['value']}")
```
The library uses a custom `RPCException` class to provide detailed error information:

```python
try:
    # Make RPC call
    result = await rpc.get_transaction_from_hash(tx_hash)
except RPCException as e:
    print(f"RPC Error: {e}")
    print(f"Request: {e.request}")
    print(f"Response: {e.response}")
    print(f"Underlying Exception: {e.underlying_exception}")
    print(f"Extra Info: {e.extra_info}")
```
RPC Helper uses Loguru for logging and provides flexible configuration options.
By default, RPC Helper uses Loguru's standard console logging with module binding:
```python
from rpc_helper.rpc import RpcHelper

# Default logging - uses standard Loguru console output
rpc = RpcHelper(rpc_settings=rpc_config)
```
You can provide your own logger instance:
```python
from loguru import logger
from rpc_helper.rpc import RpcHelper

# Create a custom logger
custom_logger = logger.bind(service="MyService")
rpc = RpcHelper(rpc_settings=rpc_config, logger=custom_logger)
```
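Anything attached via `bind` lands in Loguru's `extra` dict, so a custom sink format can surface it. A small sketch using plain Loguru; the format string is illustrative:

```python
import sys
from loguru import logger

# Surface bound context in the log line via the `extra` dict
logger.remove()
logger.add(sys.stderr, format="{time:HH:mm:ss} | {level} | {extra[service]} | {message}")

custom_logger = logger.bind(service="MyService")
custom_logger.info("node pool initialized")
```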
To enable file logging, use the configuration functions:
```python
from pathlib import Path

from rpc_helper.utils.default_logger import configure_rpc_logging
from rpc_helper.utils.models.settings_model import LoggingConfig

# Configure file logging
logging_config = LoggingConfig(
    enable_file_logging=True,
    log_dir=Path("logs/rpc_helper"),
    file_levels={
        "INFO": True,
        "WARNING": True,
        "ERROR": True,
        "CRITICAL": True,
    },
    format="{time:YYYY-MM-DD HH:mm:ss} | {level} | {module} | {message}",
    rotation="100 MB",
    retention="7 days",
    compression="zip",
)

# Apply the configuration (call this once, typically at application startup)
configure_rpc_logging(logging_config)

# Now create RPC Helper instances - they will use the configured logging
rpc = RpcHelper(rpc_settings=rpc_config)
```
Enable debug and trace logging for troubleshooting:
```python
from rpc_helper.utils.default_logger import enable_debug_logging

# Enable debug logging to console
debug_handler_id = enable_debug_logging()

# Create RPC Helper with debug mode
rpc = RpcHelper(rpc_settings=rpc_config, debug_mode=True)
```
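The returned handler id is a regular Loguru handler id, so the extra debug sink can be switched off again without touching other sinks:

```python
from loguru import logger

# Disable the debug sink once troubleshooting is done
logger.remove(debug_handler_id)
```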
The `LoggingConfig` class supports these options:

```python
from pathlib import Path

from rpc_helper.utils.models.settings_model import LoggingConfig

config = LoggingConfig(
    # File logging
    enable_file_logging=True,           # Enable/disable file logging
    log_dir=Path("logs"),               # Directory for log files
    file_levels={                       # Which levels to log to files
        "DEBUG": True,
        "INFO": True,
        "WARNING": True,
        "ERROR": True,
        "CRITICAL": True,
    },
    # Console logging (optional - uses Loguru defaults if not specified)
    enable_console_logging=True,
    console_levels={
        "INFO": "stdout",     # Send INFO to stdout
        "WARNING": "stderr",  # Send WARNING+ to stderr
        "ERROR": "stderr",
        "CRITICAL": "stderr",
    },
    # Format and rotation
    format="{time:YYYY-MM-DD HH:mm:ss} | {level} | {module} | {message}",
    rotation="100 MB",        # Rotate when files reach this size
    retention="7 days",       # Keep logs for this long
    compression="zip",        # Compress rotated logs
    module_name="RpcHelper",  # Module name for log binding
)
```
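In practice you would typically set only the fields you care about. A minimal errors-only file logging sketch, assuming unset options fall back to the defaults shown above:

```python
from pathlib import Path
from rpc_helper.utils.models.settings_model import LoggingConfig

# Errors-only file logging; other options keep their defaults
minimal_config = LoggingConfig(
    enable_file_logging=True,
    log_dir=Path("logs"),
    file_levels={"ERROR": True, "CRITICAL": True},
)
```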
- Clone the repository:

  ```bash
  git clone https://github.com/powerloom/rpc-helper.git
  cd rpc-helper
  ```

- Install dependencies using Poetry:

  ```bash
  poetry install
  ```

- Install pre-commit hooks:

  ```bash
  poetry run pre-commit install
  ```
This project maintains high code quality standards using automated tools:
- black (v25.1.0) - Code formatting
- isort (v6.0.1) - Import sorting
- flake8 (v7.3.0) - Linting
- pre-commit (v4.2.0) - Git hooks
```bash
# Check code quality without making changes
./scripts/verify_code_quality.sh

# Auto-fix formatting issues
./scripts/verify_code_quality.sh --fix
```

```bash
# Check formatting
poetry run black --check rpc_helper/ tests/
poetry run isort --check-only rpc_helper/ tests/
poetry run flake8 .

# Apply formatting
poetry run black rpc_helper/ tests/
poetry run isort rpc_helper/ tests/
```
Pre-commit hooks automatically verify code quality before each commit:
```bash
# Install hooks (one-time setup)
poetry run pre-commit install

# Run manually on all files
poetry run pre-commit run --all-files
```
Checks performed:
- Python syntax validation
- Code formatting (black)
- Import sorting (isort)
- Linting (flake8)
- Large file detection
- YAML/JSON/TOML validation
- Merge conflict detection
Note: Pre-commit hooks do not automatically modify files. If issues are detected, the commit will be blocked and you'll need to fix them manually or run `./scripts/verify_code_quality.sh --fix`.
Run the test suite using pytest:
```bash
# Run all tests
poetry run pytest

# Run unit tests only
poetry run pytest tests/unit/

# Run integration tests only
poetry run pytest tests/integration/

# Run with coverage
poetry run pytest --cov=rpc_helper --cov-report=term-missing

# Run specific test markers
poetry run pytest -m unit
poetry run pytest -m integration
poetry run pytest -m "not slow"
```
GitHub Actions automatically runs on every push and pull request:
- Linting - Validates code formatting and style
- Testing - Runs test suite on Python 3.10, 3.11, and 3.12
- Coverage - Generates and uploads coverage reports
- Building - Builds distribution packages
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Run quality checks (`./scripts/verify_code_quality.sh`)
- Run tests (`poetry run pytest`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Write tests for new features
- Maintain or improve code coverage
- Follow existing code patterns and conventions
- Update documentation as needed
- Ensure all quality checks pass before submitting PR