
Multimodal Compact Bilinear Pooling for VQA

This is the code that we wrote to train the state-of-the-art VQA models described in our paper. Our ensemble of 7 models obtained 66.67% on real open-ended test-dev and 70.24% on real multiple-choice test-dev.
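The core idea, Multimodal Compact Bilinear (MCB) pooling, approximates the outer product of an image feature vector and a question feature vector by Count Sketch projections combined via FFT convolution. The following NumPy sketch illustrates the technique in general; the function names and the small default dimension are illustrative, not the repo's actual code (the paper uses a much larger output dimension, e.g. 16,000):

```python
import numpy as np

def count_sketch(x, h, s, d):
    # Project x into d dimensions: y[h[i]] += s[i] * x[i]
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def mcb(v, q, d=128, seed=0):
    """Compact bilinear pooling of vectors v and q into d dimensions."""
    rng = np.random.RandomState(seed)
    # Independent random hash and sign functions for each modality
    hv = rng.randint(d, size=v.size)
    sv = rng.choice([-1, 1], size=v.size)
    hq = rng.randint(d, size=q.size)
    sq = rng.choice([-1, 1], size=q.size)
    # Circular convolution of the two count sketches equals the
    # count sketch of the outer product; do it in the FFT domain.
    fv = np.fft.fft(count_sketch(v, hv, sv, d))
    fq = np.fft.fft(count_sketch(q, hq, sq, d))
    return np.real(np.fft.ifft(fv * fq))
```

Because both the sketch and the FFT are linear, the output is bilinear in the two inputs, which is what lets it stand in for the (much larger) explicit outer product.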

Live Demo

You can upload your own images and ask the model your own questions. Try the live demo!

Pretrained Model

We are releasing the “MCB + Genome + Att. + GloVe” model from the paper, which achieves 65.38% on real open-ended test-dev. This is our best individual model.

Download

You can easily use this model with our evaluation code or with our demo server code.

Prerequisites

In order to use our pretrained model:

  • Install our VQA branch of Caffe (the standard Caffe release is sufficient only for the TensorFlow demo described below).

If you want to train from scratch, do the above plus:

  • Download the VQA tools.
  • Download the VQA real-image dataset.
  • Optional: Install spaCy and download GloVe vectors. The latest stable release of spaCy has a bug that prevents GloVe vectors from working, so you need to install the HEAD version. See train/README.md.
  • Optional: Download Visual Genome data.

Data Preprocessing

See preprocess/README.md.

Training

See train/README.md.

Evaluation

To generate an answers JSON file in the format expected by the VQA evaluation code and the VQA test server, use eval/ensemble.py. The same script can also ensemble multiple models. Running python ensemble.py with no arguments prints a help message describing the required arguments.
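Ensembling VQA models typically means averaging each model's per-answer probabilities and taking the argmax over the shared answer vocabulary. The sketch below shows that general technique only; the function and argument names are hypothetical and do not reflect ensemble.py's actual interface:

```python
import numpy as np

def ensemble_answers(prob_list, answer_vocab):
    # prob_list: one (num_questions, num_answers) probability array per model
    avg = np.mean(prob_list, axis=0)          # average over models
    best = avg.argmax(axis=1)                 # best answer index per question
    return [answer_vocab[i] for i in best]
```

Averaging probabilities (rather than hard votes) lets a confident model outvote several uncertain ones, which is one common reason ensembles of this kind outperform their individual members.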

Demo Server

The code that powers our live demo is in server/. To run it, install Flask and change the constants at the top of server.py. Then run python server.py, and the server will bind to 0.0.0.0:5000.
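server.py follows the usual Flask pattern of module-level constants, route handlers, and an app.run call. As an illustration only, here is a minimal sketch of a Flask app that binds to 0.0.0.0:5000; the route, handler, and constant below are hypothetical and are not the repo's actual API:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical constant, analogous to the ones at the top of server.py
MODEL_PATH = "/path/to/model"

@app.route("/api/answer", methods=["POST"])
def answer():
    # A real handler would run the VQA model on the uploaded image
    # and question; this sketch just echoes the question back.
    question = request.form.get("question", "")
    return jsonify({"question": question, "answer": "placeholder"})

if __name__ == "__main__":
    # 0.0.0.0 makes the server reachable from other machines, not just localhost
    app.run(host="0.0.0.0", port=5000)
```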

TensorFlow Implementation

The server_tensorflow/ folder contains a TensorFlow version of the VQA demo. To run it, you need to install TensorFlow (v0.9.0 or higher).

Before running the TensorFlow demos, download the trained models in TensorFlow format by running download_tensorflow_model.sh in server_tensorflow/. For image feature extraction, you also need to download the ResNet-152 model and install Caffe; for this TensorFlow demo, the standard Caffe release is sufficient (the VQA branch is not needed).

There is an IPython notebook demo, notebook_example.ipynb, and a server demo that can be run with python server.py. Before running either, change the constants at the top of notebook_example.ipynb and server.py to match your paths.

License and Citation

This code and the pretrained model are released under the BSD 2-Clause license. See LICENSE for more information.

Please cite our paper if it helps your research:

@article{fukui16mcb,
  title={Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding},
  author={Fukui, Akira and Park, Dong Huk and Yang, Daylen and Rohrbach, Anna and Darrell, Trevor and Rohrbach, Marcus},
  journal={arXiv:1606.01847},
  year={2016},
}
