This is the code that we wrote to train the state-of-the-art VQA models described in our paper. Our ensemble of 7 models obtained 66.67% on real open-ended test-dev and 70.24% on real multiple-choice test-dev.
You can upload your own images and ask the model your own questions. Try the live demo!
We are releasing the “MCB + Genome + Att. + GloVe” model from the paper, which achieves 65.38% on real open-ended test-dev. This is our best individual model.
You can easily use this model with our evaluation code or with our demo server code.
In order to use our pretrained model:
- Compile the `feature/20160617_cb_softattention` branch of our fork of Caffe. This branch contains Yang Gao's Compact Bilinear layers (dedicated repo, paper), released under the BDD license, and Ronghang Hu's Soft Attention layers (paper), released under the BSD 2-clause license. A quick check that the custom layers were built is sketched after this list.
- Download the pre-trained ResNet-152 model.
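Once the fork is compiled and its `python/` directory is on your `PYTHONPATH`, you can verify that the custom layers were registered. A minimal sketch in Python; the layer type names below are our assumption, so verify them against the branch's layer definitions:

```python
# Sanity check that the custom Caffe fork exposes the MCB layers.
# The type names 'CompactBilinear' and 'SoftAttention' are assumptions;
# check the branch's layer registration if either shows up as MISSING.
import caffe

available = set(caffe.layer_type_list())
for layer_type in ('CompactBilinear', 'SoftAttention'):
    status = 'OK' if layer_type in available else 'MISSING'
    print('{}: {}'.format(layer_type, status))
```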
If you want to train from scratch, do the above plus:
- Download the VQA tools.
- Download the VQA real-image dataset.
- Optional: Install spaCy and download the GloVe vectors. The latest stable release of spaCy has a bug that prevents the GloVe vectors from working, so you need to install the HEAD version; see `train/README.md`. A quick vector sanity check is sketched after this list.
- Optional: Download Visual Genome data.
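As a quick check that the GloVe vectors are usable (a sketch, assuming a spaCy version where `spacy.load('en')` returns an English pipeline with word vectors):

```python
# Check that spaCy's English model actually carries word vectors.
# A zero vector_norm usually means the GloVe vectors are missing or broken.
import spacy

nlp = spacy.load('en')
token = nlp(u'apple')[0]
print('{} {}'.format(token.vector.shape, token.vector_norm))
```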
For data preprocessing instructions, see `preprocess/README.md`.
For training instructions, see `train/README.md`.
To generate an answers JSON file in the format expected by the VQA evaluation code and the VQA test server, use `eval/ensemble.py`. This script can also ensemble multiple models. Running `python ensemble.py` prints a help message describing the arguments to use.
The code that powers our live demo is in `server/`. To run it, you'll need to install Flask and change the constants at the top of `server.py`. Then just run `python server.py`, and the server will bind to `0.0.0.0:5000`.
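For orientation, the overall shape of such a demo server is a small Flask app bound to all interfaces on port 5000. The sketch below is illustrative only (the route name and handler body are hypothetical); the real `server.py` loads the model once at startup and runs inference inside its handlers:

```python
# Illustrative Flask skeleton only; not the repository's server.py.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/api/answer', methods=['POST'])  # hypothetical route
def answer():
    question = request.form.get('question', '')
    # ... run the VQA model on the uploaded image and question here ...
    return jsonify({'question': question, 'answer': 'placeholder'})

if __name__ == '__main__':
    # Matches the binding described above: all interfaces, port 5000.
    app.run(host='0.0.0.0', port=5000)
```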
The `server_tensorflow/` folder contains a TensorFlow version of the VQA demo. To run it, you need to install TensorFlow (v0.9.0 or higher). Before running the TensorFlow demos, download the trained models in TensorFlow format by running `download_tensorflow_model.sh` in `server_tensorflow/`. For image feature extraction, you also need to download the ResNet-152 model and install Caffe; the standard Caffe is sufficient here (it does not need to be the VQA branch). A sketch of the feature-extraction step follows.
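A minimal sketch of what that feature-extraction step looks like with standard pycaffe (file paths are placeholders, mean subtraction is omitted for brevity, and the blob you read depends on which features the demo expects, so check the demo code):

```python
# Extract ResNet-152 image features with standard (py)caffe.
import caffe

net = caffe.Net('ResNet-152-deploy.prototxt',    # placeholder path
                'ResNet-152-model.caffemodel',   # placeholder path
                caffe.TEST)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HxWxC -> CxHxW
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
transformer.set_raw_scale('data', 255)           # [0, 1] -> [0, 255]
# (Mean subtraction omitted; match the preprocessing the model expects.)

image = caffe.io.load_image('example.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
net.forward()
features = net.blobs['pool5'].data.copy()        # verify the blob name
print(features.shape)
```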
There is an IPython notebook demo at `notebook_example.ipynb` and a server demo that can be run with `python server.py`. To run them, change the constants at the top of `notebook_example.ipynb` and `server.py` to your own paths.
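The constants are just filesystem paths; the names below are hypothetical stand-ins for illustration, so use whatever the notebook and `server.py` actually define:

```python
# Hypothetical constant names for illustration only; open
# notebook_example.ipynb and server.py to see the real ones.
RESNET_DEPLOY_PATH = '/path/to/ResNet-152-deploy.prototxt'
RESNET_MODEL_PATH = '/path/to/ResNet-152-model.caffemodel'
TF_MODEL_DIR = '/path/to/server_tensorflow/data'
```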
This code and the pretrained model are released under the BSD 2-Clause license. See `LICENSE` for more information.
Please cite our paper if it helps your research:
```
@article{fukui16mcb,
  title={Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding},
  author={Fukui, Akira and Park, Dong Huk and Yang, Daylen and Rohrbach, Anna and Darrell, Trevor and Rohrbach, Marcus},
  journal={arXiv:1606.01847},
  year={2016},
}
```