Deep reinforcement learning in ViZDoom

Requirements:

  • python3
  • Tensorflow version 1.2 with GPU support
  • ViZDoom version 1.1.2 (pip install vizdoom)
  • numpy
  • tqdm
  • ruamel.yaml
  • opencv version 3.1.0

To install Python dependencies:

sudo pip3 install -r requirements.txt
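
To sanity-check the installation (an optional step, not part of the repo; it only assumes the packages above installed cleanly):

# Optional: verify that the imports resolve and print the installed
# TensorFlow and OpenCV versions.
python3 -c "import tensorflow as tf, vizdoom, cv2; print(tf.__version__, cv2.__version__)"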

Implemented algorithms (one training script per algorithm):

  • A3C (Asynchronous Advantage Actor-Critic): train_a3c.py
  • ADQN (asynchronous DQN): train_adqn.py
  • DQN (Deep Q-Network): train_dqn.py

How to use:

Settings

All training scripts load settings from multiple YAML files (settings are combined; recurring keys are overwritten by the newest entries). By default, "settings/defaults/common_defaults.yml" and "settings/XXX_defaults.yml" will be loaded (XXX in {a3c, adqn, dqn}). To load additional settings, use the -s / --settings switch.

For convenience, the YAML files for individual scenarios are kept separately.

The default settings are not tuned for fast results and might produce no output for a very long time. Settings in the settings/examples directory should work out of the box, though.
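
As a sketch of the override mechanism: later files take precedence, so a small settings file of your own can overwrite keys from the defaults and examples. The exact schema isn't shown here; logfile and tf_logdir are settings mentioned in the Output section below, and the values are illustrative only.

# Hypothetical override file; key values are examples, not the repo's defaults.
cat > my_overrides.yml <<'EOF'
logfile: logs/my_run
tf_logdir: my_tensorboard_logs
EOF

# Recurring keys from basic_a3c.yml are overwritten by my_overrides.yml:
./train_a3c.py -s settings/examples/basic_a3c.yml my_overrides.yml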

Training (train_a3c.py / train_adqn.py / train_dqn.py):

./train_a3c.py -s <SETTINGS_FILES>

Example:

./train_a3c.py -s settings/examples/basic_a3c.yml 

# Using your own settings:
./train_a3c.py -s {YOUR_SETTINGS1} {YOUR_SETTINGS2} settings/basic.yml 

Output:

The TensorFlow logger level is set to 2 by default, so don't expect info logs from tf.

  • Lots of console output, including loaded settings, training/test results, and errors. The output is partly colored, so it might be difficult to read as raw text.
  • A log file with the same output as the console, at a path resembling {logfile}_{DATE_AND_TIME}.log (if logfile is specified).
  • TensorBoard scalar summaries with scores (min/mean/max/std) and the learning rate in {tf_logdir} (tensorboard_logs by default).
  • A TF model saved at a path resembling {models_path}/{scenario_tag}/{NETWORK_NAME}/{DATE_AND_TIME}.
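
To browse the scalar summaries, point TensorBoard at the log directory (the default {tf_logdir} value is used here):

# Launch TensorBoard on the default log directory:
tensorboard --logdir tensorboard_logs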

Watching (test_a3c.py / test_adqn.py / test_dqn.py):

usage: test_a3c.py [-h] [--episodes EPISODES_NUM] [--hide-window]
                   [--print-settings] [--fps FRAMERATE] [--agent-view]
                   [--seed SEED] [-o STATS_OUTPUT_FILE]
                   [--deterministic DETERMINISTIC]
                   MODEL_PATH

A3C: testing script for ViZDoom

positional arguments:
  MODEL_PATH            Path to trained model directory.

optional arguments:
  -h, --help            show this help message and exit
  --episodes EPISODES_NUM, -e EPISODES_NUM
                        Number of episodes to test. (default: 10)
  --hide-window, -hw    Hide window. (default: False)
  --print-settings, -ps
                        Print settings upon loading. (default: False)
  --fps FRAMERATE, -fps FRAMERATE
                        If window is visible, tests will be run with given
                        framerate. (default: 35)
  --agent-view, -av     If True, the window will display exactly what the
                        agent sees (with frameskip), not the smoothed-out
                        version. (default: False)
  --seed SEED, -seed SEED
                        Seed for ViZDoom. (default: None)
  -o STATS_OUTPUT_FILE, --output STATS_OUTPUT_FILE
                        File for output of stats (default: None)
  --deterministic DETERMINISTIC, -d DETERMINISTIC
If 1, tests will be deterministic. (default: 1)

Example:

# You need to have a pretrained model
./test_a3c.py models/basic/ACLSTMNet/09.04_11-21 -e 10 --seed 123
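
For a longer, headless evaluation with stats written to disk (the model path is the same illustrative one as above; stats.txt is just an example output file name):

# 100 headless, non-deterministic episodes, stats saved to a file:
./test_a3c.py models/basic/ACLSTMNet/09.04_11-21 -e 100 --hide-window -d 0 -o stats.txt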
