Per #6 (comment), I'd like to try the DDPG RL agent (compared to the PPO agent). DDPG hyperparameters will need to be added to hypersearch, and some other code adjustments will likely be needed. I once had DQN support; when I removed it, I may have tailored the code to be too PPO-centric.
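For reference, a minimal sketch of what a DDPG hyperparameter space for the hypersearch step might look like. All parameter names, ranges, and the dict-based format here are illustrative assumptions, not the project's actual hypersearch schema:

```python
# Illustrative sketch only: a possible DDPG hyperparameter space for hypersearch.
# Names, ranges, and format are assumptions, not the project's actual schema.
import math
import random

DDPG_HYPERS = {
    # Actor/critic learning rates typically differ for DDPG.
    "actor_learning_rate":  ("log_uniform", 1e-5, 1e-3),
    "critic_learning_rate": ("log_uniform", 1e-4, 1e-2),
    # Soft target-network update coefficient (tau).
    "target_update_tau":    ("log_uniform", 1e-3, 1e-1),
    # Replay buffer and batch sizing.
    "memory_capacity":      ("choice", [50_000, 100_000, 500_000]),
    "batch_size":           ("choice", [32, 64, 128]),
    # Exploration noise scale (e.g. Ornstein-Uhlenbeck or Gaussian noise).
    "exploration_sigma":    ("uniform", 0.05, 0.4),
    "discount":             ("uniform", 0.95, 0.999),
}

def sample(space):
    """Draw one random configuration from the space above."""
    cfg = {}
    for name, (kind, *args) in space.items():
        if kind == "log_uniform":
            lo, hi = args
            cfg[name] = 10 ** random.uniform(math.log10(lo), math.log10(hi))
        elif kind == "uniform":
            lo, hi = args
            cfg[name] = random.uniform(lo, hi)
        else:  # "choice"
            cfg[name] = random.choice(args[0])
    return cfg
```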
This shouldn't be a problem. I have a tensorforce DDPG agent that I was hypersearching with hyperas in a different environment. I'll work on adapting it, since the learning characteristics differ between PPO and DDPG. I'd recommend a separate branch for this: it will need to be hypersearched initially over some wide ranges, and the number of search parameters should be kept small in each run (see the sketch below). Since there is already a DDPG branch, would it make sense to copy the v0.2 PPO branch over to it and then modify it for the DDPG agent params?
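A rough sketch of that staged approach: search a small subset of hypers at a time over wide ranges, holding the rest at fixed defaults. Everything here (names, defaults, the `train_and_score` stub) is hypothetical:

```python
# Hypothetical sketch of staged hypersearch: grid over a small subset of
# parameters per run, with all other hypers held at fixed defaults.
import itertools

DEFAULTS = {
    "actor_learning_rate": 1e-4,
    "critic_learning_rate": 1e-3,
    "target_update_tau": 1e-2,
    "batch_size": 64,
    "discount": 0.99,
}

# Stage 1: only the learning rates, over deliberately wide ranges.
STAGE_1 = {
    "actor_learning_rate":  [1e-5, 1e-4, 1e-3],
    "critic_learning_rate": [1e-4, 1e-3, 1e-2],
}

def run_stage(stage_space, train_and_score):
    """Grid over the staged subset; all other hypers stay at DEFAULTS."""
    best = None
    for values in itertools.product(*stage_space.values()):
        cfg = dict(DEFAULTS, **dict(zip(stage_space.keys(), values)))
        score = train_and_score(cfg)   # user-supplied training run
        if best is None or score > best[0]:
            best = (score, cfg)
    return best  # (best score, best config) from this stage
```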
Might as well make an ACKTR branch too. It would be a heavier modification, but I've seen at least one TensorFlow implementation that I was looking into, and I'd be interested in rigging up a non-tensorforce agent for testing.
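One way that could be wired up is behind a thin adapter, so the existing environment loop doesn't care which library the agent comes from. This is only a sketch under that assumption; the method names and the gym-style environment API are mine, not the project's:

```python
# Hypothetical adapter sketch: a minimal interface a non-tensorforce (e.g.
# ACKTR) agent could implement so the environment loop stays library-agnostic.
from abc import ABC, abstractmethod

class ExternalAgent(ABC):
    @abstractmethod
    def act(self, state):
        """Return an action for the current state."""

    @abstractmethod
    def observe(self, reward, terminal):
        """Record the outcome of the last action (used for learning updates)."""

def run_episode(env, agent: ExternalAgent):
    """Run one episode with a gym-style env; returns total episode reward."""
    state = env.reset()
    total, done = 0.0, False
    while not done:
        action = agent.act(state)
        state, reward, done, _ = env.step(action)
        agent.observe(reward, done)
        total += reward
    return total
```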