llama.cpp updated
mgonzs13 committed Mar 26, 2024
1 parent 012aa18 commit e61fd0c
Showing 3 changed files with 5 additions and 5 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -33,8 +33,8 @@ $ colcon build
To run llama_ros with CUDA, install the [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit) and uncomment the following lines in the [CMakeLists.txt](llama_ros/CMakeLists.txt) of the llama_ros package:

```
-option(LLAMA_CUBLAS "llama: use cuBLAS" ON)
-add_compile_definitions(GGML_USE_CUBLAS)
+option(LLAMA_CUDA "llama: use CUDA" ON)
+add_compile_definitions(GGML_USE_CUDA)
```

## Usage
4 changes: 2 additions & 2 deletions llama_ros/CMakeLists.txt
@@ -6,8 +6,8 @@ if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
endif()

-# cuBLAS
-# option(LLAMA_CUBLAS "llama: use cuBLAS" ON)
-# add_compile_definitions(GGML_USE_CUBLAS)
+# CUDA
+# option(LLAMA_CUDA "llama: use CUDA" ON)
+# add_compile_definitions(GGML_USE_CUDA)

# find dependencies
find_package(ament_cmake REQUIRED)
2 changes: 1 addition & 1 deletion llama_ros/llama_cpp
Submodule llama_cpp updated 114 files
