diff --git a/README.md b/README.md
index ca5596e..d279d23 100644
--- a/README.md
+++ b/README.md
@@ -69,7 +69,7 @@ The **LongRoPE** model architecture is designed to extend the context window of
 
 The LongRoPE model extends the context window of large language models beyond 2 million tokens. Key components include:
 
-1. Rotary Position Encoding (RoPE):
+1. Rotary Position Encoding (RoPE):
 
 ```python
 class RoPEPositionalEncoding(nn.Module):
@@ -86,7 +86,7 @@ The LongRoPE model extends the context window of large language models beyond 2
         return sin_cos.view(*sin_cos.shape[:-2], -1)
 ```
 
-2. Non-uniform Interpolation:
+2. Non-uniform Interpolation:
 
 ```python
 def non_uniform_interpolation(pos_embed, extension_ratio, lambda_factors, n_hat):
@@ -101,6 +101,8 @@ The LongRoPE model extends the context window of large language models beyond 2
     return interpolated_pos
 ```
 
+3. Progressive Extension Strategy:
+
 ### Progressive Extension Strategy
 
 The architecture begins with a pre-trained LLM and extends its context window incrementally. Initially, the model is fine-tuned to handle a context length of 256k tokens. This progressive approach avoids the need for direct fine-tuning on extremely long texts, which are rare and computationally expensive to process. By gradually increasing the context length, the model can adapt more effectively to longer sequences.
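
The diff adds the new item 3 (Progressive Extension Strategy) without a snippet alongside the RoPE and interpolation examples. Below is a minimal sketch of what such a staged extension loop could look like; the helpers `search_lambda_factors` and `fine_tune` are hypothetical stand-ins that do not appear in this diff, so this illustrates the idea rather than the repository's actual implementation.

```python
# Hypothetical sketch only: search_lambda_factors and fine_tune are placeholder
# names standing in for an interpolation-factor search and a fine-tuning routine.

def search_lambda_factors(model, data, extension_ratio):
    """Placeholder for finding per-dimension rescale factors (lambda) to use
    with non_uniform_interpolation at the given extension ratio."""
    return [1.0 / extension_ratio] * 64  # stand-in values, not real output


def fine_tune(model, data, context_length, lambda_factors):
    """Placeholder for fine-tuning the model at the given context length."""
    return model


def progressive_extension(model, data,
                          base_length=4 * 1024,            # original context window
                          intermediate_length=256 * 1024,  # ~256k-token stage
                          target_length=2 * 1024 * 1024):  # beyond 2M tokens
    """Extend the context window in stages instead of one direct jump."""
    # Stage 1: adapt the pre-trained model to the intermediate 256k context.
    ratio = intermediate_length / base_length
    factors = search_lambda_factors(model, data, ratio)
    model = fine_tune(model, data, intermediate_length, factors)

    # Stage 2: extend again toward the 2M+ token target, starting from the
    # weights already adapted in stage 1 rather than from the base model.
    ratio = target_length / intermediate_length
    factors = search_lambda_factors(model, data, ratio)
    model = fine_tune(model, data, target_length, factors)

    return model
```

The second stage starts from the weights already adapted to 256k tokens, matching the paragraph above: the model is never fine-tuned directly from its base context length to 2M+ tokens.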