Hi! Thanks to the authors for the excellent paper and a code implementation that just works!
I enjoyed playing with the model and chatting with it about datasets.
I found 2 weak points:
- the model does not generate very diverse hypotheses to test
- the model's coding skills are far behind SOTA coding agents (which is understandable; the goal of the paper was not to create a perfect coding LLM)
This led me to an idea: what if, at steps like Analyze, Code, and Understand, we used not only the output of DeepAnalyze, but also called other agents for coding (like Claude Code or Codex) and powerful models for generating ideas on how else to explore the data (like ChatGPT or Gemini)?
Going one step further: at the Analyze step, several ideas could be generated and tested in parallel, and then the best-performing ideas used in the next steps.
How it could work together (see the sketch after this list):
- DeepAnalyze works as the orchestrator model and drives the process
- at the Analyze step, several sub-agents are started in parallel
- for each sub-agent, an external LLM is called to generate a diverse idea
- then, at the Code step, each idea is implemented by a coding agent
- at the Understand step, all results are collected and analyzed, and the best ideas are carried into the next research cycle
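
To make the loop concrete, here is a minimal Python sketch of one such research cycle. Everything in it is hypothetical: `call_idea_llm`, `run_coding_agent`, and `score_result` are placeholder stubs standing in for whatever client code would wrap the external idea-generation LLMs and coding agents; none of this is part of the DeepAnalyze codebase.

```python
import concurrent.futures

# Placeholder stubs -- NOT part of DeepAnalyze. Each would wrap an external
# service: an idea-generation LLM (e.g. ChatGPT/Gemini), a coding agent
# (e.g. Claude Code/Codex), and some way of scoring a result.

def call_idea_llm(context: str, seed: int) -> str:
    # Ask an external LLM for one exploration idea; vary the prompt/seed
    # per sub-agent to encourage diversity.
    return f"idea {seed}: explore {context!r} from angle {seed}"

def run_coding_agent(idea: str, context: str) -> str:
    # Have a coding agent implement and execute one idea; return its output.
    return f"results of running: {idea}"

def score_result(result: str) -> float:
    # Rank results, e.g. by a validation metric or an LLM-as-judge.
    return float(len(result))  # dummy scoring, just for the sketch

def research_cycle(context: str, n_ideas: int = 4, top_k: int = 2) -> list[str]:
    """One Analyze -> Code -> Understand cycle with parallel sub-agents."""
    # Analyze: each sub-agent asks an external LLM for a diverse idea.
    ideas = [call_idea_llm(context, seed=i) for i in range(n_ideas)]

    # Code: implement and run each idea in parallel via the coding agent.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_ideas) as pool:
        results = list(pool.map(lambda idea: run_coding_agent(idea, context), ideas))

    # Understand: collect all results and keep the best ideas for the next cycle.
    ranked = sorted(zip(ideas, results),
                    key=lambda pair: score_result(pair[1]), reverse=True)
    return [idea for idea, _ in ranked[:top_k]]

if __name__ == "__main__":
    best = research_cycle("the dataset under study")
    print("Ideas carried to the next cycle:", best)
```

In the real setup, DeepAnalyze itself would decide when to run this cycle and what context to pass in; the thread pool here just illustrates that the per-idea coding runs are independent and can be parallelized.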
In the end, we get these benefits:
- the SOTA research process from DeepAnalyze
- diverse idea exploration
- improved coding that allows much deeper analysis
If anyone is interested, I would like to collaborate on building a quick prototype and running experiments!