llama.cpp updated
mgonzs13 committed Jul 4, 2024
1 parent a61906b commit 078f208
Showing 2 changed files with 4 additions and 3 deletions.
5 changes: 3 additions & 2 deletions llama_cli/llama_cli/api/__init__.py
@@ -60,6 +60,7 @@ def text_cb(feedback) -> None:
     goal = GenerateResponse.Goal()
     goal.prompt = prompt
     goal.sampling_config.temp = temp
-    llama_client.generate_response(goal, text_cb)
-    print("")
+    response = llama_client.generate_response(goal, text_cb)[0].response.text
+    if not response.endswith("\n"):
+        print("")
     rclpy.shutdown()
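The change above drops the unconditional blank line after generation: the full response text is captured from the action result, and a trailing newline is printed only when the model's output does not already end with one. A minimal sketch of the resulting call pattern is shown below; the import path, the llama_client object, and the feedback field names inside text_cb are assumptions for illustration and are not part of this diff.

import rclpy
from llama_msgs.action import GenerateResponse  # assumed module path


def generate(llama_client, prompt: str, temp: float) -> None:
    """Send a prompt and stream the reply, mirroring the updated code."""

    def text_cb(feedback) -> None:
        # Stream partial text as it arrives; these feedback field names
        # are an assumption, not shown in this diff.
        print(feedback.feedback.partial_response.text, end="", flush=True)

    goal = GenerateResponse.Goal()
    goal.prompt = prompt
    goal.sampling_config.temp = temp

    # Updated behavior from this commit: keep the full response text and
    # add a trailing newline only if the model did not already emit one.
    response = llama_client.generate_response(goal, text_cb)[0].response.text
    if not response.endswith("\n"):
        print("")

    rclpy.shutdown()

The endswith check avoids printing a double blank line when the streamed text already terminates with a newline, while still ensuring the next shell prompt starts on a fresh line.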
2 changes: 1 addition & 1 deletion llama_ros/llama_cpp
Submodule llama_cpp updated 50 files
+6 −5 .gitignore
+5 −1 CMakeLists.txt
+1 −1 ci/run.sh
+18 −1 common/common.cpp
+159 −44 convert_hf_to_gguf.py
+16 −3 convert_hf_to_gguf_update.py
+0 −0 convert_llama_ggml_to_gguf.py
+27 −7 examples/batched/batched.cpp
+1 −1 examples/llava/requirements.txt
+21 −1 examples/main/main.cpp
+8 −0 examples/tokenize/tokenize.cpp
+15 −0 gguf-py/gguf/constants.py
+15 −6 gguf-py/gguf/gguf_writer.py
+13 −3 gguf-py/gguf/tensor_mapping.py
+15 −0 include/llama.h
+2 −0 models/ggml-vocab-bert-bge.gguf.inp
+1 −0 models/ggml-vocab-bert-bge.gguf.out
+2 −0 models/ggml-vocab-command-r.gguf.inp
+1 −0 models/ggml-vocab-command-r.gguf.out
+2 −0 models/ggml-vocab-deepseek-coder.gguf.inp
+1 −0 models/ggml-vocab-deepseek-coder.gguf.out
+2 −0 models/ggml-vocab-deepseek-llm.gguf.inp
+1 −0 models/ggml-vocab-deepseek-llm.gguf.out
+2 −0 models/ggml-vocab-falcon.gguf.inp
+1 −0 models/ggml-vocab-falcon.gguf.out
+2 −0 models/ggml-vocab-gpt-2.gguf.inp
+1 −0 models/ggml-vocab-gpt-2.gguf.out
+2 −0 models/ggml-vocab-llama-bpe.gguf.inp
+1 −0 models/ggml-vocab-llama-bpe.gguf.out
+2 −0 models/ggml-vocab-llama-spm.gguf.inp
+1 −0 models/ggml-vocab-llama-spm.gguf.out
+2 −0 models/ggml-vocab-mpt.gguf.inp
+1 −0 models/ggml-vocab-mpt.gguf.out
+2 −0 models/ggml-vocab-phi-3.gguf.inp
+1 −0 models/ggml-vocab-phi-3.gguf.out
+2 −0 models/ggml-vocab-qwen2.gguf.inp
+1 −0 models/ggml-vocab-qwen2.gguf.out
+2 −0 models/ggml-vocab-refact.gguf.inp
+1 −0 models/ggml-vocab-refact.gguf.out
+2 −0 models/ggml-vocab-starcoder.gguf.inp
+1 −0 models/ggml-vocab-starcoder.gguf.out
+1,197 −0 poetry.lock
+44 −0 pyproject.toml
+3 −3 requirements.txt
+0 −0 requirements/requirements-convert_hf_to_gguf.txt
+0 −0 requirements/requirements-convert_hf_to_gguf_update.txt
+0 −0 requirements/requirements-convert_llama_ggml_to_gguf.txt
+2 −2 scripts/check-requirements.sh
+0 −2 src/CMakeLists.txt
+1,301 −159 src/llama.cpp
