
Commit df9bd43

Update README.md
1 parent bf9b9f3 commit df9bd43

File tree: 1 file changed (+32, -1 lines)


README.md

Lines changed: 32 additions & 1 deletion
@@ -48,6 +48,7 @@ We support the inference of OFA in Hugging Face Transformers. Check the [README]
 
 
 # News
+* 2023.5.11: Two papers ([OFA-OCR](https://arxiv.org/abs/2212.09297) and [OFA-prompt](https://arxiv.org/abs/2208.02532)) are accepted by ACL. The evaluation scripts and checkpoints of OFA-OCR are released.
 * 2023.1.11: Released MuE (https://arxiv.org/abs/2211.11152), which significantly accelerates OFA with little performance degradation. Many thanks to the first author, Shengkun Tang (@Tangshengku). See the branch `feature/MuE` and [PR](https://github.com/OFA-Sys/OFA/pull/336) for more information.
 * 2022.12.20: Released OFA-OCR, a model for Chinese text recognition based on OFA. Check our [paper](https://arxiv.org/abs/2212.09297) and [demo](https://modelscope.cn/studios/damo/ofa_ocr_pipeline/summary).
 * 2022.12.7: Released MMSpeech, an ASR pre-training method based on OFA. Check our paper [here](https://arxiv.org/abs/2212.00500), and see [README_mmspeech.md](README_mmspeech.md) for further details.
@@ -581,7 +582,7 @@ To contact us, never hesitate to send an email to `[email protected]` o
 
 
 # Citation
-Please cite our paper if you find it helpful :)
+Please cite our papers if you find them helpful :)
 
 ```
 @article{wang2022ofa,
@@ -603,3 +604,33 @@ Please cite our paper if you find it helpful :)
 }
 ```
 <br></br>
+```
+@article{ofa_ocr,
+  author  = {Junyang Lin and
+             Xuancheng Ren and
+             Yichang Zhang and
+             Gao Liu and
+             Peng Wang and
+             An Yang and
+             Chang Zhou},
+  title   = {Transferring General Multimodal Pretrained Models to Text Recognition},
+  journal = {CoRR},
+  volume  = {abs/2212.09297},
+  year    = {2022}
+}
+```
+<br><br>
+```
+@article{ofa_prompt,
+  author  = {Hao Yang and
+             Junyang Lin and
+             An Yang and
+             Peng Wang and
+             Chang Zhou and
+             Hongxia Yang},
+  title   = {Prompt Tuning for Generative Multimodal Pretrained Models},
+  journal = {CoRR},
+  volume  = {abs/2208.02532},
+  year    = {2022}
+}
+```
