Commit 4c24ba0

Update README.md
1 parent aed8dae commit 4c24ba0

File tree: 1 file changed (+3, -10 lines)

README.md

Lines changed: 3 additions & 10 deletions
@@ -7,13 +7,7 @@ This source code is licensed under the Apache 2.0 license found in the LICENSE f
 ## Multimodal Prompt Tuning for Unified Sequence-to-Sequence Learning
 
 ### Overview
-
-This is the code to reproduce the experiments from the paper **"Prompt Tuning for Generative Multimodal Pretrained Models"** ([paper on arXiv](https://arxiv.org/abs/2208.02532)). This paper explores prompting methods for multimodal pretraining, focusing on the unified sequence-to-sequence learning framework rather than contrastive learning models. We investigate a series of parameter-efficient tuning methods and demonstrate that lightweight prompt tuning can achieve performance comparable to finetuning at much lower computation cost. More remarkably, we show that prompt tuning becomes more competitive with finetuning as the number of parameters increases.
-<br>
-
-### Results
-Experimental results on RefCOCO, RefCOCO+, RefCOCOg, SNLI-VE, COCO Captions and VQA
-![result](examples/result.png)
+This is the code for **"Prompt Tuning for Generative Multimodal Pretrained Models"** ([paper on arXiv](https://arxiv.org/abs/2208.02532)). This paper explores prompt tuning for generative multimodal pretrained models rather than contrastive learning models. We specifically focus on the unified sequence-to-sequence learning framework and implement it on our OFA models.
 <br>
 
 ### Requirements
@@ -27,7 +21,7 @@ Experimental results on RefCOCO, RefCOCO+, RefCOCOg, SNLI-VE, COCO Captions and
 ```bash
 pip install -r requirements.txt
 ```
-<br></br>
+<br>
 
 ### Datasets and Checkpoints
 See [datasets.md](datasets.md) and [checkpoints.md](checkpoints.md).
@@ -69,5 +63,4 @@ OFA/
 ├── trainer.py
 └── utils/
 ```
-
-<br></br>
+<br>
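The README this commit edits describes prompt tuning for a unified sequence-to-sequence model: only a small set of prompt parameters is trained while the pretrained backbone stays frozen. As a rough illustration of why this is parameter-efficient, here is a minimal sketch (not the repository's actual implementation; all sizes and names are hypothetical) that prepends a trainable prompt matrix to frozen input embeddings and compares parameter counts:

```python
import numpy as np

def prepend_prompts(input_embeds, prompt_embeds):
    """Prepend trainable prompt vectors to a batch of input embeddings.

    input_embeds:  (batch, seq_len, d_model) frozen token embeddings
    prompt_embeds: (prompt_len, d_model) trainable prompt matrix
    returns:       (batch, prompt_len + seq_len, d_model)
    """
    batch = input_embeds.shape[0]
    # Tile the same prompt across the batch, then concatenate along the
    # sequence dimension so the prompt tokens precede the real tokens.
    tiled = np.broadcast_to(prompt_embeds, (batch,) + prompt_embeds.shape)
    return np.concatenate([tiled, input_embeds], axis=1)

# Hypothetical sizes: a base-scale backbone vs. a short prompt.
d_model, prompt_len = 768, 64
backbone_params = 180_000_000          # frozen; never updated during tuning
prompt_params = prompt_len * d_model   # the only trainable parameters

x = np.zeros((2, 10, d_model))         # batch of 2 sequences, 10 tokens each
p = np.random.randn(prompt_len, d_model)
out = prepend_prompts(x, p)
print(out.shape)                        # (2, 74, 768)
print(prompt_params / backbone_params)  # tuned fraction is tiny (~0.03%)
```

Only `prompt_embeds` would receive gradient updates; the backbone's 180M parameters are a stand-in for a frozen pretrained model such as OFA.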
