Seeking Advice on Incorporating GAIL with Modular-Agent #2
Comments
Hello, thanks for the kind words. The GAIL implementation in ML-Agents is pretty solid and should be the first thing you try if you are interested in it and are already using ML-Agents. There are some limitations, such as the discriminator being restricted to the same observations as your agent, but for many cases the existing features will be sufficient. Take a look at the DemonstrationRecorder component: https://github.com/Unity-Technologies/ml-agents/blob/release_2_verified_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations After recording demonstrations, you just need to update your config file to point to them and enable the GAIL reward. Let me know if you have specific questions as you experiment!
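For reference, enabling the GAIL reward in a recent ML-Agents trainer config looks roughly like the sketch below. The behavior name and demo path here are hypothetical placeholders, and the exact schema varies between releases (the release_2-era docs linked above use a flatter layout), so check the configuration docs for your installed version:

```yaml
behaviors:
  Worker:                      # hypothetical Behavior Name of your agent
    reward_signals:
      extrinsic:
        strength: 1.0
        gamma: 0.99
      gail:
        strength: 0.5              # weight of the imitation reward
        demo_path: Demos/Expert.demo   # hypothetical path to the recorded .demo file
```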
Heya! I was just curious if you ended up progressing on your GAIL experiments?
Thank you for following up. We experimented with ML-Agents' built-in GAIL functionality, but the results were not satisfactory: it did not perform as well as the DM methods in Modular-Agent when we trained the worker with only one motion clip. The main reason, I think, is the lack of control over the imitation reward function; the optimization of the discriminator was also hard to control. Therefore, it may be better to develop a standalone discriminator in PyTorch and integrate it into the training process. This work is still in progress. We may have some results by the end of this month, and I will keep you updated then.
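Since the comment above proposes a standalone PyTorch discriminator, here is a minimal sketch of what such a component might look like (all names are hypothetical, not actual Modular-Agent code): a binary classifier over (observation, action) pairs trained with a BCE loss, whose output is turned into an imitation reward for the policy.

```python
import torch
import torch.nn as nn

class GAILDiscriminator(nn.Module):
    """Binary classifier separating expert (label 1) from policy (label 0) pairs."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),  # raw logit
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1))

    def reward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # Imitation reward -log(1 - D(s, a)): larger when the discriminator
        # mistakes policy samples for expert samples.
        with torch.no_grad():
            d = torch.sigmoid(self.forward(obs, act))
            return -torch.log(1.0 - d + 1e-8)

def discriminator_step(disc, optimizer, expert_obs, expert_act, policy_obs, policy_act):
    """One gradient step on the discriminator; returns the scalar loss."""
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(expert_obs, expert_act)
    policy_logits = disc(policy_obs, policy_act)
    loss = (bce(expert_logits, torch.ones_like(expert_logits))
            + bce(policy_logits, torch.zeros_like(policy_logits)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Keeping the discriminator outside ML-Agents like this lets you choose its inputs and update schedule freely, which is exactly the controllability issue described above; the reward would then be fed back into the RL update alongside any task reward.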
Hello, thank you very much for your contribution. Modular-Agent has been incredibly useful and powerful for RL/IL training in Unity.
I have been using Modular-Agent for an interactive task in which the agent reaches its hand toward a target while imitating the demonstration's behavior. I am considering incorporating GAIL into training with a demonstration dataset in which the demonstrator completes the task with different poses to fit the changing environment.
Do you have any advice on using GAIL in this context? Is the GAIL functionality in the ML-Agents toolkit useful here, or would it be better to wrap the environment as a Gym environment?