From 7a803d19425e294e25e3467cfbdcbf4f434d98b5 Mon Sep 17 00:00:00 2001
From: Joshua David
Date: Mon, 15 Jul 2024 22:56:18 -0700
Subject: [PATCH] Update README.md to be more detailed

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index d8c0cdf..4959939 100644
--- a/README.md
+++ b/README.md
@@ -70,6 +70,7 @@ The **LongRoPE** model architecture is designed to extend the context window of
 
 The LongRoPE model extends the context window of large language models beyond 2 million tokens. Key components include:
 
 1. Rotary Position Encoding (RoPE):
+   ```python
    class RoPEPositionalEncoding(nn.Module):
        def __init__(self, d_model, max_len=1000000, base=10000):
@@ -85,6 +86,7 @@ The LongRoPE model extends the context window of large language models beyond 2
           return sin_cos.view(*sin_cos.shape[:-2], -1)
 
 2. Non-uniform Interpolation:
+   ```python
    def non_uniform_interpolation(pos_embed, extension_ratio, lambda_factors, n_hat):
        d_model = pos_embed.shape[-1]
@@ -97,6 +99,7 @@ The LongRoPE model extends the context window of large language models beyond 2
            interpolated_pos[..., 2 * i + 1] *= scale
        return interpolated_pos
 
+
 ### Progressive Extension Strategy
 
 The architecture begins with a pre-trained LLM and extends its context window incrementally. Initially, the model is fine-tuned to handle a context length of 256k tokens. This progressive approach avoids the need for direct fine-tuning on extremely long texts, which are rare and computationally expensive to process. By gradually increasing the context length, the model can adapt more effectively to longer sequences.
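
Note for reviewers: the hunks above carry only a couple of lines of the README's code blocks as diff context. For orientation, a minimal self-contained sketch of a rotary position encoding module consistent with the visible `__init__` signature and the `sin_cos.view(...)` return line is shown below; everything beyond those two lines is an assumption, not the README's exact code.

```python
import torch
import torch.nn as nn

class RoPEPositionalEncoding(nn.Module):
    """Rotary position encoding: builds interleaved sin/cos features per position."""

    def __init__(self, d_model, max_len=1000000, base=10000):
        super().__init__()
        self.max_len = max_len
        # One inverse frequency per pair of embedding dimensions (standard RoPE formulation).
        inv_freq = 1.0 / (base ** (torch.arange(0, d_model, 2).float() / d_model))
        self.register_buffer("inv_freq", inv_freq)

    def forward(self, positions):
        # positions: integer tensor of token positions, shape (..., seq_len).
        angles = positions.unsqueeze(-1).float() * self.inv_freq      # (..., seq_len, d_model/2)
        sin_cos = torch.stack([angles.sin(), angles.cos()], dim=-1)   # (..., seq_len, d_model/2, 2)
        # Flatten the last two axes, matching the return line visible in the diff context.
        return sin_cos.view(*sin_cos.shape[:-2], -1)


# Example usage (shapes only):
enc = RoPEPositionalEncoding(d_model=512)
out = enc(torch.arange(1024).unsqueeze(0))   # -> (1, 1024, 512)
```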
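Similarly, the second hunk shows only the first and last lines of `non_uniform_interpolation`. Below is a sketch of the loop implied by the visible `interpolated_pos[..., 2 * i + 1] *= scale` line, assuming `pos_embed` is a `(seq_len, d_model)` sin/cos table and that positions before `n_hat` are left unscaled; this is one reading of LongRoPE's non-uniform interpolation, not a verbatim copy of the repository code.

```python
import torch

def non_uniform_interpolation(pos_embed, extension_ratio, lambda_factors, n_hat):
    # pos_embed: (..., seq_len, d_model) interleaved sin/cos table.
    # lambda_factors: one rescaling factor per frequency pair (length d_model // 2).
    # n_hat: number of initial positions left untouched.
    d_model = pos_embed.shape[-1]
    seq_len = pos_embed.shape[-2]
    positions = torch.arange(seq_len, device=pos_embed.device, dtype=pos_embed.dtype)
    interpolated_pos = pos_embed.clone()
    for i in range(d_model // 2):
        # Positions below n_hat keep their original encoding; later positions are
        # compressed by the per-dimension factor and the overall extension ratio.
        scale = torch.where(
            positions < n_hat,
            torch.ones_like(positions),
            torch.ones_like(positions) / (lambda_factors[i] * extension_ratio),
        )
        interpolated_pos[..., 2 * i] *= scale
        interpolated_pos[..., 2 * i + 1] *= scale
    return interpolated_pos
```

In LongRoPE the per-dimension factors and `n_hat` come out of a search over rescaling factors; here they are simply treated as given inputs.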
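The "Progressive Extension Strategy" section the patch touches describes a staged procedure: fine-tune at 256k first, then keep growing the window. A high-level sketch of that loop is below; the stage lengths after 256k and both helper functions are placeholders introduced for illustration, not functions from this repository.

```python
def search_rescale_factors(model, target_len):
    # Placeholder for the per-stage search over interpolation factors (hypothetical helper).
    d_model = 512
    return [1.0] * (d_model // 2), 0  # (lambda_factors, n_hat)

def fine_tune(model, target_len, lambda_factors, n_hat):
    # Placeholder for fine-tuning on sequences up to target_len (hypothetical helper).
    return model

def progressively_extend(model, stages=(256_000, 512_000, 1_024_000, 2_048_000)):
    # Grow the context window one stage at a time instead of jumping straight to 2M+ tokens.
    for target_len in stages:
        lambda_factors, n_hat = search_rescale_factors(model, target_len)
        model = fine_tune(model, target_len, lambda_factors, n_hat)
    return model
```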