Overview
evolveRL supports multiple Large Language Model (LLM) backends to power its agents. Currently supported providers:
- OpenAI (GPT-4o-mini)
- Anthropic (Claude-3-5-sonnet)
- LLaMA (Coming Soon)
LLM Configuration
LLM backends are configured using the `LLMConfig` class:
Configuration Options
- Name of the LLM model to use
- Type of LLM provider ("openai" or "anthropic")
- Maximum tokens in model responses
- Temperature for response generation (0.0 - 1.0)
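The options above map naturally onto a configuration object. The sketch below is illustrative: the field names (`model`, `provider`, `max_tokens`, `temperature`) and defaults are assumptions, so check the evolveRL source for the exact signature of `LLMConfig`.

```python
from dataclasses import dataclass

# Illustrative sketch of an LLMConfig-style object; the real evolveRL
# class may use different field names or defaults.
@dataclass
class LLMConfig:
    model: str                # e.g. "gpt-4o-mini" or "claude-3-5-sonnet"
    provider: str             # "openai" or "anthropic"
    max_tokens: int = 1024    # cap on tokens in the model's response
    temperature: float = 0.7  # 0.0 (deterministic) to 1.0 (most varied)

    def __post_init__(self) -> None:
        if self.provider not in ("openai", "anthropic"):
            raise ValueError(f"Unsupported provider: {self.provider}")
        if not 0.0 <= self.temperature <= 1.0:
            raise ValueError("temperature must be between 0.0 and 1.0")
```

Validating the provider and temperature at construction time surfaces configuration mistakes immediately rather than at the first API call.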
Using LLM Backends
Direct Usage
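Direct usage typically means constructing a backend and calling it with a prompt. The class and method names below are assumptions, and the `generate` call is stubbed so the example is self-contained; a real backend would call the provider's API.

```python
import os

# Hypothetical sketch of direct backend usage; evolveRL's actual
# client class and method names may differ.
class LLMBackend:
    def __init__(self, model: str, provider: str,
                 max_tokens: int = 1024, temperature: float = 0.7):
        self.model = model
        self.provider = provider
        self.max_tokens = max_tokens
        self.temperature = temperature
        # Real backends read credentials from the environment:
        env_key = "OPENAI_API_KEY" if provider == "openai" else "ANTHROPIC_API_KEY"
        self.api_key = os.environ.get(env_key)

    def generate(self, prompt: str) -> str:
        # Stub: a real implementation would call the provider API here.
        return f"[{self.provider}:{self.model}] response to: {prompt}"

llm = LLMBackend(model="gpt-4o-mini", provider="openai")
reply = llm.generate("Summarize the rules of the environment.")
```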
With Agents
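An agent usually receives its LLM backend at construction and delegates reasoning steps to it. The `Agent` and `StubLLM` classes below are hypothetical placeholders for evolveRL's actual agent API, included only to show the wiring.

```python
# Hypothetical sketch of wiring an LLM backend into an agent.
class StubLLM:
    """Stand-in for a real backend; echoes prompts so the example runs."""
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

class Agent:
    def __init__(self, llm) -> None:
        self.llm = llm  # any object exposing generate(prompt) -> str

    def act(self, observation: str) -> str:
        # The agent delegates its reasoning step to the LLM backend.
        return self.llm.generate(f"Observation: {observation}. What next?")

agent = Agent(StubLLM())
```

Keeping the backend behind a small `generate` interface makes it easy to swap providers, or a stub in tests, without touching agent logic.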
Environment Variables
Store your API keys as environment variables, for example in a .env file:
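A typical setup looks like the following. The key names (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`) are the providers' conventional ones; the `load_api_key` helper is an illustrative sketch, not part of evolveRL.

```python
import os

# Contents of a typical .env file (never commit this file):
#   OPENAI_API_KEY=sk-...
#   ANTHROPIC_API_KEY=sk-ant-...

def load_api_key(provider: str) -> str:
    """Read the provider's API key from the environment, failing loudly."""
    env_key = {"openai": "OPENAI_API_KEY",
               "anthropic": "ANTHROPIC_API_KEY"}[provider]
    key = os.environ.get(env_key)
    if not key:
        raise RuntimeError(f"Set {env_key} before using the {provider} backend")
    return key
```

To populate `os.environ` from a .env file, a common approach is the `python-dotenv` package's `load_dotenv()` call at program start.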
Model Selection
Choose models based on your needs:

OpenAI Models
- GPT-4o-mini
- o1-mini
Anthropic Models
- Claude-3-Opus
- Claude-3-5-Sonnet
Best Practices
- Error Handling: Always handle API errors gracefully
- Rate Limiting: Implement backoff strategies for API limits
- Cost Management: Monitor token usage and costs
- Model Selection: Start with simpler models and upgrade as needed
- Security: Never hardcode API keys in your code
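The error-handling and rate-limiting advice above can be sketched as a generic retry wrapper with exponential backoff and jitter. This is a common pattern, not evolveRL's built-in behavior; production code should catch the provider's specific rate-limit exception rather than bare `Exception`.

```python
import random
import time

def with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    """Wrap fn so failures retry with exponential backoff plus jitter."""
    def wrapper(*args, **kwargs):
        for attempt in range(max_retries):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == max_retries - 1:
                    raise  # out of retries: surface the error to the caller
                # Sleep 1s, 2s, 4s, ... plus jitter to spread out retries
                time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
    return wrapper
```

Usage: `safe_generate = with_backoff(llm.generate)` makes every call retry transient API errors before giving up.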

