TextBrewer 0.2.0
New Features
- Now supports distributed data-parallel training with `torch.nn.parallel.DistributedDataParallel`! You can pass `local_rank` to the `TrainingConfig` to set up distributed training; see the sketch after this list. The detailed usage of `DistributedDataParallel` can be found in the PyTorch docs.
- We also added an example (Chinese NER task) to demonstrate how to use TextBrewer with distributed data-parallel training.
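
A minimal sketch of the distributed setup, assuming the standard `torch.distributed.launch` workflow; apart from `local_rank`, the `TrainingConfig` arguments shown here (`log_dir`, `output_dir`, etc.) are illustrative rather than part of this release note:

```python
import argparse

import torch
import torch.distributed as dist
from textbrewer import TrainingConfig

# torch.distributed.launch starts one process per GPU and passes --local_rank to each.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=-1)
args = parser.parse_args()

# Bind this process to its GPU and join the default process group.
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl")

# Passing local_rank enables distributed data-parallel training in TextBrewer;
# the remaining arguments are ordinary TrainingConfig options shown for context.
train_config = TrainingConfig(
    device="cuda",
    local_rank=args.local_rank,
    log_dir="./logs",
    output_dir="./saved_models",
)
```

The script would then be launched with one process per GPU, e.g. `python -m torch.distributed.launch --nproc_per_node=4 train.py`.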