Deep learning-based task-oriented and unified multi-task semantic communications

This repository is built upon BEiT and MAE; many thanks to their authors!

We will gradually upload the full version of the implementation.

Citation (Preprint Version)

@ARTICLE{10431795,
  author={Zhang, Guangyi and Hu, Qiyu and Qin, Zhijin and Cai, Yunlong and Yu, Guanding and Tao, Xiaoming},
  journal={IEEE Transactions on Communications}, 
  title={A Unified Multi-Task Semantic Communication System for Multimodal Data}, 
  year={2024},
  volume={},
  number={},
  pages={1-1},
  keywords={Task analysis;Semantics;Transmitters;Multitasking;Communication systems;Feature extraction;Decoding;Deep learning;dynamic overhead;multimodal data;multi-task semantic communication},
  doi={10.1109/TCOMM.2024.3364990}}

Usage

Clone

Clone this repository and enter the directory using the commands below:

git clone https://github.com/zhang-guangyi/t-udeepsc.git
cd t-udeepsc/

Requirements

Python 3.8.5 is recommended.

Install the required packages with:

pip install -r requirements.txt (Not provided yet)

If you have trouble installing a PyTorch build compatible with your CUDA version, we strongly recommend consulting the related documentation page: https://pytorch.org/get-started/previous-versions/.

In our work, we use the BERT model to initialize the text encoder. The pretrained weights should be placed in ./pretrain_models; they can be downloaded from the Hugging Face website.
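
A minimal, hedged sketch of loading locally stored BERT weights with the Hugging Face transformers library is shown below. The sub-directory name bert-base-uncased under ./pretrain_models is an assumption for illustration and may differ from the layout actually used in this repository.

from transformers import BertModel, BertTokenizer

# Assumed local layout: ./pretrain_models/bert-base-uncased/ containing the
# config, vocab, and weight files downloaded from Hugging Face.
local_dir = "./pretrain_models/bert-base-uncased"

tokenizer = BertTokenizer.from_pretrained(local_dir)
text_encoder = BertModel.from_pretrained(local_dir)

# Quick sanity check: encode a sample sentence.
inputs = tokenizer("semantic communication", return_tensors="pt")
outputs = text_encoder(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768) for bert-base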

Dataset Preparation

CIFAR10

Use torchvision; the dataset will be downloaded automatically. Then place the dataset in the path ./data/cifar.
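
For reference, a minimal sketch of downloading CIFAR-10 into the expected path with torchvision; the transform is a placeholder, since the actual preprocessing is defined by this repository's data-loading code.

import torchvision
import torchvision.transforms as transforms

# Download CIFAR-10 into ./data/cifar (the directory is created if missing).
transform = transforms.ToTensor()  # placeholder; the repo defines its own preprocessing
train_set = torchvision.datasets.CIFAR10(root="./data/cifar", train=True,
                                         download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(root="./data/cifar", train=False,
                                        download=True, transform=transform)
print(len(train_set), len(test_set))  # 50000 10000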

MOSEI and MOSI

Download the CMU-MOSI and CMU-MOSEI datasets from Google Drive and place the contents inside the ./data/msadata folder. Note that these are pre-computed splits.

SST2

This dataset is used for text sentiment analysis and text reconstruction. Since we use pytreebank in our implementation, the SST2 dataset will also be downloaded automatically. By default the dataset is placed in the .cache folder; you can move it to a location of your choice.
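
A minimal, hedged sketch of loading the treebank with pytreebank is shown below; how the sentiment labels and reconstruction targets are actually derived for training is defined by this repository's code, so the snippet only illustrates basic access to the data.

import pytreebank

# Downloads the Stanford Sentiment Treebank into the .cache folder on first use.
dataset = pytreebank.load_sst()

# Each split is a list of labeled parse trees.
for split in ("train", "dev", "test"):
    print(split, len(dataset[split]))

# Flatten one tree into (label, sentence) pairs; the first pair covers the full sentence.
label, sentence = dataset["train"][0].to_labeled_lines()[0]
print(label, sentence)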

VQAv2

We use image features extracted with the bottom-up-attention strategy, where each image is represented as a set of 2048-D features. The features for each image are stored in a .npz file. You can prepare the visual features yourself or download the extracted features from OneDrive or BaiduYun. The downloaded archive contains three files: train2014.tar.gz, val2014.tar.gz, and test2015.tar.gz, corresponding to the features of the train/val/test images of VQA-v2, respectively. You should place them as follows (a small loading sketch follows the directory listing):

|-- ./data/vqa_datasets
	|-- coco_extract
	|  |-- train2014.tar.gz
	|  |-- val2014.tar.gz
	|  |-- test2015.tar.gz
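
As a quick sanity check, the hedged NumPy snippet below inspects one extracted .npz feature file. The file name is hypothetical, and the key under which the 2048-D region features are stored depends on the extraction script, so the snippet lists the available keys rather than assuming a particular one.

import numpy as np

# Hypothetical path to one feature file after unpacking train2014.tar.gz;
# substitute a real file name from your extracted data.
path = "./data/vqa_datasets/coco_extract/train2014/COCO_train2014_000000000009.jpg.npz"

feats = np.load(path)
print(feats.files)  # names of the arrays stored in this .npz file

for key in feats.files:
    print(key, feats[key].shape)  # the region features should have shape (num_boxes, 2048)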

In addition, we use VQA samples from the Visual Genome dataset to expand the training set. The processed VG question and annotation files can be found on OneDrive or BaiduYun; place them as follows:

|-- ./data/vqa_datasets
	|-- vqa
	|  |-- VG_questions.json
	|  |-- VG_annotations.json

Then, you can run the following script to set up all the configurations needed for the experiments.

$ bash vqa_setup.sh

Running the script will:

  1. Download the QA files for VQA-v2.
  2. Unzip the bottom-up features.

Finally, the ./data/vqa_datasets folder will have the following structure:

|-- ./data/vqa_datasets
	|-- coco_extract
	|  |-- train2014
	|  |  |-- COCO_train2014_...jpg.npz
	|  |  |-- ...
	|  |-- val2014
	|  |  |-- COCO_val2014_...jpg.npz
	|  |  |-- ...
	|  |-- test2015
	|  |  |-- COCO_test2015_...jpg.npz
	|  |  |-- ...
	|-- vqa
	|  |-- v2_OpenEnded_mscoco_train2014_questions.json
	|  |-- v2_OpenEnded_mscoco_val2014_questions.json
	|  |-- v2_OpenEnded_mscoco_test2015_questions.json
	|  |-- v2_OpenEnded_mscoco_test-dev2015_questions.json
	|  |-- v2_mscoco_train2014_annotations.json
	|  |-- v2_mscoco_val2014_annotations.json
	|  |-- VG_questions.json
	|  |-- VG_annotations.json

Run

  1. The instructions are given in execute.sh. Use the following command to run the script; it will start training with the default hyperparameters. You can find the detailed hyperparameters in "base_args.py".
$ bash execute.sh
  2. All instructions are given in running_command.sh.

  3. All checkpoint files will be saved to the path set by "--output_dir"; a sub-path is created for each selected eval task.

  4. We recommend using a smaller learning rate and a larger batch size, since the BERT-initialized model is prone to overfitting.

Test

  1. Use the argument "--eval" to enable the test phase. Note that offline evaluation only supports the VQA 2.0 val split, which is the split considered in this work. To evaluate on the VQA 2.0 test-dev or test-std splits, upload the resulting result JSON file to EvalAI to obtain the scores on those splits.
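
For reference, the VQA result file expected by EvalAI is a JSON list of {"question_id", "answer"} entries. The hedged snippet below only illustrates the format with made-up predictions; the repository's own evaluation code is responsible for producing this file.

import json

# Illustrative predictions only: question_id -> predicted answer string.
predictions = {
    1001: "yes",
    1002: "2",
}

results = [{"question_id": qid, "answer": ans} for qid, ans in predictions.items()]

with open("vqa_results.json", "w") as f:
    json.dump(results, f)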

TODO

  • Implement the designed benchmark: task-oriented semantic communication (T-DeepSC) for the considered tasks, including image (classification/reconstruction) and text (classification/reconstruction), under analog transmission.
  • Implement the designed benchmark: T-DeepSC for VQA and MSA under analog transmission.
  • Implement digital transmission: vector quantization (VQ) and uniform scalar quantization (SQ); an illustrative sketch is given after this list.
  • Implement the 16QAM and QPSK modulations.
  • The basic version of the unified semantic communication system (U-DeepSC).
  • Dataset preparation.
  • Feature selection-based U-DeepSC.
  • Package requirements.
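
Since the digital-transmission items above have not been released yet, the snippet below is only a rough illustration (not this repository's implementation) of uniform scalar quantization followed by QPSK mapping of the resulting bit stream.

import numpy as np

def uniform_scalar_quantize(x, num_bits=2):
    """Uniformly quantize values clipped to [-1, 1] into 2**num_bits levels; returns level indices."""
    levels = 2 ** num_bits
    x = np.clip(x, -1.0, 1.0)
    return np.round((x + 1.0) / 2.0 * (levels - 1)).astype(int)

def qpsk_modulate(bits):
    """Map bit pairs to QPSK symbols with unit average power (0 -> +1/sqrt(2), 1 -> -1/sqrt(2))."""
    bits = bits.reshape(-1, 2)
    i = (1 - 2 * bits[:, 0]) / np.sqrt(2)
    q = (1 - 2 * bits[:, 1]) / np.sqrt(2)
    return i + 1j * q

# Toy example: quantize a feature vector to 2 bits per entry and modulate the bits.
features = np.tanh(np.random.randn(8))            # toy semantic features in [-1, 1]
indices = uniform_scalar_quantize(features, 2)    # integer levels in {0, 1, 2, 3}
bits = ((indices[:, None] >> np.arange(1, -1, -1)) & 1).flatten()
symbols = qpsk_modulate(bits)
print(symbols.shape)                              # one complex symbol per 2 bits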
