From dfba72951c0fe28a7e4b44a96283527901f6a543 Mon Sep 17 00:00:00 2001
From: Joshua David
Date: Mon, 15 Jul 2024 23:12:02 -0700
Subject: [PATCH] Update README.md to be more detailed

---
 README.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/README.md b/README.md
index d279d23..0ebe938 100644
--- a/README.md
+++ b/README.md
@@ -103,6 +103,10 @@ The LongRoPE model extends the context window of large language models beyond 2
 "3." Progressive Extension Strategy:
 
+```python
+
+```
+
 ### Progressive Extension Strategy
 
 The architecture begins with a pre-trained LLM and extends its context window incrementally. Initially, the model is fine-tuned to handle a context length of 256k tokens. This progressive approach avoids the need for direct fine-tuning on extremely long texts, which are rare and computationally expensive to process. By gradually increasing the context length, the model can adapt more effectively to longer sequences.
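
The progressive extension strategy described in the patched README section could be sketched roughly as follows. This is a minimal illustration, assuming a doubling schedule from the 256k starting point up to a 2048k target; the function name, schedule, and the placeholder fine-tuning step are illustrative assumptions, not the repository's actual API.

```python
def extension_schedule(start_len: int = 256 * 1024,
                       target_len: int = 2048 * 1024) -> list[int]:
    """Return the intermediate context lengths for progressive extension.

    Assumes each stage doubles the context window until the target is
    reached, so the model is never fine-tuned directly on the longest
    (and rarest) sequences in one jump.
    """
    lengths = []
    length = start_len
    while length <= target_len:
        lengths.append(length)
        length *= 2
    return lengths


if __name__ == "__main__":
    for ctx_len in extension_schedule():
        # In the real pipeline, each stage would fine-tune the model at
        # this context length before moving to the next (hypothetical
        # step, omitted here).
        print(f"fine-tune stage at context length {ctx_len} tokens")
```

Each stage reuses the weights adapted at the previous, shorter length, which is the sense in which the README says the model "adapt[s] more effectively to longer sequences."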