
Commit f26ce7a

Create releasenotes.md
1 parent 2f33f8d commit f26ce7a

File tree

1 file changed: +21 -0 lines changed


releasenotes.md

+21
@@ -0,0 +1,21 @@
# Transformers Neuron 0.2.0 Release Notes

Date: 2023-02-24

## What's New?

- Added error handling to check whether the requested generated sequence length is valid for the model configuration (see the sketch after this list)
- Improved logging:
  - Reduced overly verbose compiler messages
  - Disabled lazy module warnings

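For illustration, the new check is roughly of the following shape. This is a minimal sketch that assumes a maximum-length field such as `n_positions` in the model configuration; the field name, function name, and error wording are assumptions, not the actual `transformers-neuronx` code.

```python
# Hypothetical sketch of the 0.2.0 sequence-length validation; `n_positions`
# and the error messages are assumptions, not the library's implementation.
def validate_sequence_length(requested_length: int, n_positions: int) -> None:
    """Fail fast when a generation request cannot fit the compiled model."""
    if requested_length <= 0:
        raise ValueError(f"Sequence length must be positive, got {requested_length}.")
    if requested_length > n_positions:
        raise ValueError(
            f"Requested sequence length {requested_length} exceeds the model's "
            f"maximum of {n_positions} positions; request a shorter sequence or "
            f"recompile the model with a larger maximum length."
        )

# Example: a model compiled for 2048 positions accepts a 1024-token request.
validate_sequence_length(1024, n_positions=2048)   # passes silently
# validate_sequence_length(4096, n_positions=2048) # would raise ValueError
```
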
## Bug Fixes

- Updated `src/transformers_neuronx/gptj/demo.py` to correctly use the `amp_callback` function from `transformers_neuronx.gpt2.demo` (illustrated in the snippet below)
- Extended the `gpt_demo.py` `save` function to support GPT-2 and GPT-J configs

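The GPT-J demo fix amounts to reusing the mixed-precision callback defined in the GPT-2 demo module. The snippet below only illustrates the corrected import named in the note above; the surrounding demo code and the exact call site are not shown here and may differ.

```python
# In src/transformers_neuronx/gptj/demo.py: the GPT-J demo now uses the
# amp_callback defined by the GPT-2 demo module, per the release note above.
from transformers_neuronx.gpt2.demo import amp_callback
```
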
# Transformers Neuron 0.1.0 Release Notes

Date: 2023-02-08

First release of `transformers-neuronx`, a new library that enables LLM inference on Inf2 and Trn1 instances using the Neuron SDK. `transformers-neuronx` contains optimized model implementations that are checkpoint-compatible with HuggingFace Transformers, and it currently supports Transformer decoder models such as GPT2, GPT-J, and OPT.
