VAD Emotion Control in Visual Art Captioning via Disentangled Multi-modal Representation

This repository contains the source code for our paper:

Ryo Ueda, Hiromi Narimatsu, Yusuke Miyao, & Shiro Kumano.
VAD Emotion Control in Visual Art Captioning via Disentangled Multimodal Representation.
ACII2024.

Requirements

Train model

Minimal example command:

$ .venv/bin/python -m src.train.train \
    --artemis_data_path ${Path to ArtEmis} \
    --artemis_data_sep ${Appropriate Delimiter} \
    --nrc_vad_lexicon_data_path ${Path to NRC-VAD} \
    --dvisa_data_path ${Path to D-ViSA} \
    --wikiart_dirpath ${Path to WikiArt}

For more options, try:

$ .venv/bin/python -m src.train.train --help

or directly check out ./src/train/train.py.
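As a sketch of how the invocation above might be wrapped in a small launcher script, the following fills the placeholders from environment variables. All variable names and default paths here are hypothetical examples, not files shipped with the repository; adjust them to wherever you downloaded the ArtEmis, NRC-VAD, D-ViSA, and WikiArt data. The command is printed rather than executed so the script can be inspected first.

```shell
#!/bin/sh
# Hypothetical launcher for src.train.train.
# The paths below are placeholders -- point them at your local copies
# of the datasets before running.
ARTEMIS_PATH="${ARTEMIS_PATH:-$HOME/data/artemis.csv}"
ARTEMIS_SEP="${ARTEMIS_SEP:-,}"
NRC_VAD_PATH="${NRC_VAD_PATH:-$HOME/data/nrc_vad_lexicon.txt}"
DVISA_PATH="${DVISA_PATH:-$HOME/data/dvisa.csv}"
WIKIART_DIR="${WIKIART_DIR:-$HOME/data/wikiart}"

# Assemble the full training command as a single string.
CMD=".venv/bin/python -m src.train.train \
  --artemis_data_path $ARTEMIS_PATH \
  --artemis_data_sep $ARTEMIS_SEP \
  --nrc_vad_lexicon_data_path $NRC_VAD_PATH \
  --dvisa_data_path $DVISA_PATH \
  --wikiart_dirpath $WIKIART_DIR"

# Print the command; replace this line with `eval "$CMD"` to run it.
printf '%s\n' "$CMD"
```

Exporting any of the variables (e.g. `ARTEMIS_PATH=/mnt/datasets/artemis.csv ./train.sh`) overrides the placeholder defaults.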

Citation

@inproceedings{UedaNMK2024,
  author={Ueda, Ryo and Narimatsu, Hiromi and Miyao, Yusuke and Kumano, Shiro},
  booktitle={2024 12th International Conference on Affective Computing and Intelligent Interaction (ACII)},
  title={VAD Emotion Control in Visual Art Captioning via Disentangled Multimodal Representation},
  year={2024}
}
