💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c55d2c088f
```python
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
```
Accept executor kwargs in ModelsLabLLM.call
The call signature is incompatible with CrewAI's executor path: get_llm_response invokes llm.call(..., from_task=..., from_agent=..., response_model=...) (lib/crewai/src/crewai/utilities/agent_utils.py), but this method does not accept those keyword arguments. In normal Agent/Crew runs with this provider, Python raises TypeError for unexpected kwargs before any API request, so the provider cannot be used through the standard runtime flow.
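A minimal fix is to mirror `BaseLLM.call`'s keyword parameters and accept a `**kwargs` catch-all; a method-signature sketch (the `Any` annotations are placeholders, not the exact types `BaseLLM` declares):

```python
from typing import Any, Dict, List, Optional, Union

def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
    from_task: Optional[Any] = None,       # passed by get_llm_response
    from_agent: Optional[Any] = None,      # passed by get_llm_response
    response_model: Optional[Any] = None,  # passed by get_llm_response
    **kwargs: Any,  # tolerate any future executor-supplied keywords
) -> Union[str, Any]:
    ...
```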
```python
if tools and available_functions:
    return self._handle_function_calling(messages, tools, available_functions)
```
Handle native tool mode when available_functions is None
Native tool execution in CrewAI passes tool schemas with available_functions=None so the model returns tool calls for the executor to run (_invoke_loop_native_tools in crew_agent_executor.py). This implementation only enters tool handling when both tools and available_functions are set, so in native mode it silently skips tool-calling logic and does plain text generation even though supports_function_calling() returns True, preventing tool-enabled agents from emitting executable tool calls.
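One way to cover both paths is to branch on `available_functions` instead of requiring it; a sketch, where `_emit_native_tool_calls` and `_generate_text` are hypothetical helper names:

```python
if tools:
    if available_functions:
        # shim path: execute the matched function locally and feed it back
        return self._handle_function_calling(messages, tools, available_functions)
    # native path: return tool calls for the executor to run itself
    return self._emit_native_tool_calls(messages, tools)  # hypothetical helper
return self._generate_text(messages)  # hypothetical plain-text path
```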
Cursor Bugbot has reviewed your changes and found 4 potential issues.
```python
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
```
Missing call() parameters causes runtime TypeError
High Severity
The call() override is missing the from_task, from_agent, and response_model parameters that BaseLLM.call() declares and that CrewAI's agent_utils.py passes when invoking llm.call(...). Since the method also lacks a **kwargs catchall, this will raise a TypeError (got an unexpected keyword argument 'from_task') every time an agent tries to use this provider, making it completely non-functional.
```python
if attempt >= max_attempts - 1:
    raise RuntimeError(f"Failed to fetch {content_type} result: {str(e)}")
time.sleep(10)
attempt += 1
```
Polling swallows "failed" status RuntimeError, retries needlessly
Medium Severity
When the async poll returns "status": "failed", the code raises a RuntimeError on line 333, but the broad except Exception block on line 339 immediately catches it. Instead of propagating the failure, the method sleeps and retries the already-failed request up to 30 times (~5 minutes of wasted polling) before eventually raising a generic error that loses the original failure message.
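One fix is to treat `"failed"` as terminal and let it escape the broad handler; a sketch, where `fetch_url`, `payload`, and the response keys follow the surrounding code but are assumptions here:

```python
import time
import requests

for attempt in range(max_attempts):
    try:
        result = requests.post(fetch_url, json=payload, timeout=30).json()
        if result.get("status") == "success":
            return result["output"]
        if result.get("status") == "failed":
            # terminal state: surface the API's own failure message
            raise RuntimeError(f"{content_type} generation failed: {result.get('message')}")
    except RuntimeError:
        raise  # never retry a terminal failure
    except Exception as e:
        if attempt >= max_attempts - 1:
            raise RuntimeError(f"Failed to fetch {content_type} result: {e}")
    time.sleep(10)  # still processing or transient error: poll again
raise RuntimeError(f"Timed out waiting for {content_type} result")
```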
```python
elif any(keyword in latest_message for keyword in video_keywords):
    return self._generate_video(latest_message)
elif any(keyword in latest_message for keyword in audio_keywords):
    return self._generate_audio(latest_message)
```
Overly broad keywords cause false multimodal triggers
High Severity
The multimodal keyword lists include extremely common words like "draw", "picture", "render", "show me", "say", "speak", "voice", "sound", "video", "clip", "film". Combined with substring matching (in), normal text prompts such as "draw conclusions from…", "say more about…", "show me the analysis…", or "render a verdict" will incorrectly trigger expensive multimodal API calls instead of text generation, producing unusable responses for routine agent tasks.
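A stricter matcher can require generation intent next to a media noun, on word boundaries, instead of bare substrings; the patterns below are illustrative, not the PR's actual lists:

```python
import re

# "draw conclusions" and "render a verdict" no longer match, because no
# image noun follows the verb within the allowed window.
IMAGE_PATTERN = re.compile(
    r"\b(generate|create|draw|make)\b[\w\s,]{0,30}"
    r"\b(image|picture|illustration|photo)\b",
    re.IGNORECASE,
)

def wants_image(prompt: str) -> bool:
    return bool(IMAGE_PATTERN.search(prompt))

assert wants_image("Generate an image of a sunset over mountains")
assert not wants_image("Draw conclusions from the quarterly data")
```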
```python
messages.extend([
    {"role": "assistant", "content": f"I'll use the {func_name} function."},
    {"role": "function", "name": func_name, "content": str(result)}
])
```
Function calling mutates caller's messages list in-place
Medium Severity
_handle_function_calling calls messages.extend(...) which mutates the original messages list passed into call(). Since lists are passed by reference, this silently modifies the caller's message history with assistant/function entries, potentially corrupting the conversation state for subsequent calls or retry logic in the CrewAI framework.
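The usual fix is to copy the list before extending it; a sketch:

```python
# Shallow copy is enough here: entries are only appended, never edited.
working = list(messages)
working.extend([
    {"role": "assistant", "content": f"I'll use the {func_name} function."},
    {"role": "function", "name": func_name, "content": str(result)},
])
# ...continue the follow-up request with `working`, leaving the caller's
# `messages` history untouched.
```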


ModelsLab Provider for CrewAI
A comprehensive multi-modal LLM provider for CrewAI that integrates ModelsLab's powerful AI APIs, enabling your agents to generate text, images, videos, and audio content seamlessly within their workflows.
🚀 Key Features

- Text generation via ModelsLab's OpenAI-compatible `/uncensored_chat` endpoint
- Image, video, and audio generation via `/images/text2img`, `/video/text2video`, and `/tts`
- Optional keyword-based multi-modal routing behind an `enable_multimodal` flag
- A JSON-based function-calling shim compatible with CrewAI tools
📦 Installation
Or install individually:
🔑 Setup
🎯 Quick Start
Basic Text Generation Agent
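The original snippet did not survive extraction; a minimal sketch of what the basic setup presumably looks like (the import path and constructor arguments are assumptions):

```python
from crewai import Agent, Crew, Task
from modelslab_llm import ModelsLabLLM  # hypothetical import path

llm = ModelsLabLLM(api_key="your-modelslab-api-key")  # assumed constructor

writer = Agent(
    role="Content Writer",
    goal="Write short, clear copy",
    backstory="An experienced copywriter.",
    llm=llm,
)

task = Task(
    description="Write a two-sentence blurb for a smart kettle.",
    expected_output="A two-sentence product blurb.",
    agent=writer,
)

print(Crew(agents=[writer], tasks=[task]).kickoff())
```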
Multi-Modal Creative Agent
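For the multi-modal variant, presumably the only change is turning on the keyword router (`enable_multimodal` is the flag named in this PR; everything else is as above):

```python
creative_llm = ModelsLabLLM(
    api_key="your-modelslab-api-key",
    enable_multimodal=True,  # route image/video/audio prompts to media endpoints
)

artist = Agent(
    role="Creative Director",
    goal="Produce campaign visuals and narration",
    backstory="A multi-disciplinary creative.",
    llm=creative_llm,
)
```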
🎨 Multi-Modal Capabilities
The ModelsLab provider detects multi-modal requests by scanning prompts for keywords and routing them to the matching endpoint; a combined sketch follows the mode list below:
🖼️ Image Generation
🎬 Video Creation
🔊 Audio Generation
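Continuing the quick-start sketch above, illustrative prompts for each mode, mapped to the endpoints documented later in this README (the exact trigger keywords are implementation-defined; see the review notes above on how broad they currently are):

```python
llm = ModelsLabLLM(api_key="your-modelslab-api-key", enable_multimodal=True)

llm.call("Generate an image of a sunset over mountains")  # -> /images/text2img
llm.call("Create a video of waves crashing on a beach")   # -> /video/text2video
llm.call("Convert this text to speech: welcome aboard!")  # -> /tts
```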
⚙️ Advanced Configuration
Custom Configuration Options
Text-Only Mode
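Text-only mode presumably just leaves the router off (a sketch; the flag's default value is an assumption):

```python
text_llm = ModelsLabLLM(
    api_key="your-modelslab-api-key",
    enable_multimodal=False,  # every request goes to /uncensored_chat
)
```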
Multiple Agents with Different Capabilities
🛠️ Function Calling & Tools
ModelsLab LLM supports CrewAI's function calling:
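A sketch of the call-level tool interface, reusing `llm` from the quick-start sketch and assuming the OpenAI-style schema shape CrewAI uses; per this PR, the shim executes the matched entry from `available_functions` and feeds the result back into the chat:

```python
def get_weather(city: str) -> str:
    """Toy tool implementation for the sketch."""
    return f"Sunny and 22°C in {city}"

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = llm.call(
    "What's the weather in Paris?",
    tools=tools,
    available_functions={"get_weather": get_weather},
)
print(response)
```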
📚 Examples
Explore the comprehensive examples in `examples.py`, and run that file directly to try them.
🧪 Testing
Run the test suite with `pytest`.
🔧 Supported Models & Endpoints
| Capability | Endpoint |
| --- | --- |
| Text generation | `/uncensored_chat` (OpenAI-compatible) |
| Image generation | `/images/text2img` |
| Video generation | `/video/text2video` |
| Audio generation (TTS) | `/tts` |

🌟 Why Choose ModelsLab for CrewAI?
🏆 First Multi-Modal Provider
💸 Cost-Effective Enterprise Solution
🚀 Production-Ready
🔧 Developer-Friendly
📖 API Reference
`ModelsLabLLM`: the main class for CrewAI integration.
Methods

- `call(messages, tools, callbacks, available_functions)`: main generation method
- `supports_function_calling()`: returns `True` (supports CrewAI tools)
- `supports_stop_words()`: returns `True` (supports stop sequences)
- `get_context_window_size()`: returns the model's context window size

Convenience Functions
🤝 Contributing
We welcome contributions! Here's how to get started:
Development Setup
Running Tests
Code Style
Contributing Guidelines
1. Create a feature branch: `git checkout -b feature/amazing-feature`
2. Run the tests: `pytest`
3. Format the code with `black` and `isort`
4. Commit your changes: `git commit -m 'Add amazing feature'`
5. Push the branch: `git push origin feature/amazing-feature`

🐛 Issue Reporting
Found a bug? Have a feature request? Please open an issue.
📋 Roadmap
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🔗 Links
💬 Community & Support
Built with ❤️ for the AI agent community
Transform your CrewAI agents into multi-modal powerhouses with ModelsLab's comprehensive AI capabilities.
Note
Medium Risk
Introduces a new external-API-backed LLM provider with synchronous polling and heuristic keyword routing, which could affect reliability/latency and error handling but doesn’t modify existing core logic.
Overview
Adds a new `ModelsLabLLM` provider implementing `BaseLLM.call()` and routing requests to ModelsLab's API for standard chat (`/uncensored_chat`).
When `enable_multimodal` is on, the provider keyword-detects image/video/audio requests and calls the corresponding ModelsLab endpoints (with async polling support). It also includes a JSON-based, prompt-driven tool/function-calling shim that executes `available_functions` and feeds results back into the chat.
enable_multimodalis on, the provider keyword-detects image/video/audio requests and calls the corresponding ModelsLab endpoints (with async polling support) and also includes a JSON-based, prompt-driven tool/function-calling shim that executesavailable_functionsand feeds results back into the chat.Written by Cursor Bugbot for commit c55d2c0. This will update automatically on new commits. Configure here.