README.md: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ All rights reserved.
This source code is licensed under the Apache 2.0 license found in the LICENSE file in the root directory.
-->

-## Multimodal Prompt Tuning for Unified Sequence-to-Sequence Learning
+## Prompt Tuning for Generative Multimodal Pretrained Models

### Overview
This is the code for **"Prompt Tuning for Generative Multimodal Pretrained Models"** ([paper on arXiv](https://arxiv.org/abs/2208.02532)). The paper explores prompt tuning for generative multimodal pretrained models rather than for contrastive learning models. We specifically focus on the unified sequence-to-sequence learning framework and implement prompt tuning on our OFA models.
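
As a rough illustration of the idea (not the repository's actual implementation), the sketch below shows prompt tuning on a generic sequence-to-sequence model: the pretrained weights are frozen and only a small set of learnable prompt embeddings, prepended to the encoder input, is updated. The `PromptTunedSeq2Seq` wrapper, the `backbone` interface, and all parameter names here are hypothetical.

```python
import torch
import torch.nn as nn


class PromptTunedSeq2Seq(nn.Module):
    """Minimal sketch of prompt tuning for a frozen seq2seq backbone.

    `backbone` is a placeholder for any encoder-decoder model whose forward
    accepts pre-computed input embeddings (`inputs_embeds`); the real OFA
    code organizes this differently.
    """

    def __init__(self, backbone: nn.Module, embed_dim: int, prompt_length: int = 64):
        super().__init__()
        self.backbone = backbone
        # Freeze all pretrained weights; only the prompt embeddings are trained.
        for param in self.backbone.parameters():
            param.requires_grad = False
        # Learnable "soft prompt": one embedding vector per virtual prompt token.
        self.prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

    def forward(self, src_embeds: torch.Tensor, **kwargs):
        # src_embeds: (batch, src_len, embed_dim) embeddings of the source tokens.
        batch_size = src_embeds.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        # Prepend the soft prompt to the encoder input sequence.
        encoder_inputs = torch.cat([prompts, src_embeds], dim=1)
        return self.backbone(inputs_embeds=encoder_inputs, **kwargs)
```

Under this sketch, only `model.prompt` would be handed to the optimizer (e.g. `torch.optim.Adam([model.prompt], lr=1e-3)`), so the trainable parameters remain a tiny fraction of the backbone.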