Hi, I am facing an issue while installing llama-cpp-python using the command below:
# Example: cuBLAS
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
GPU Info (screenshot omitted)
The error I am getting:
(localgpt2) gpu1@GPU1-Ubuntu:~/localgpt2$ CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.3.1 --no-cache-dir
Collecting llama-cpp-python==0.3.1
Downloading llama_cpp_python-0.3.1.tar.gz (63.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.9/63.9 MB 115.3 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in /home/gpu1/install_home/anaconda3/envs/localgpt2/lib/python3.10/site-packages (from llama-cpp-python==0.3.1) (4.12.2)
Requirement already satisfied: numpy>=1.20.0 in /home/gpu1/install_home/anaconda3/envs/localgpt2/lib/python3.10/site-packages (from llama-cpp-python==0.3.1) (1.26.4)
Requirement already satisfied: diskcache>=5.6.1 in /home/gpu1/.local/lib/python3.10/site-packages (from llama-cpp-python==0.3.1) (5.6.1)
Requirement already satisfied: jinja2>=2.11.3 in /home/gpu1/.local/lib/python3.10/site-packages (from llama-cpp-python==0.3.1) (3.1.2)
Requirement already satisfied: MarkupSafe>=2.0 in /home/gpu1/.local/lib/python3.10/site-packages (from jinja2>=2.11.3->llama-cpp-python==0.3.1) (2.1.3)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [28 lines of output]
*** scikit-build-core 0.10.7 using CMake 3.30.5 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpy3ec99o0/build/CMakeInit.txt
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.34.1")
CMake Error at vendor/llama.cpp/CMakeLists.txt:98 (message):
LLAMA_CUBLAS is deprecated and will be removed in the future.
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)
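The CMake message itself points at the cause: llama.cpp removed the LLAMA_CUBLAS option, so any build that still passes it fails at configure time. A likely fix, assuming a CUDA toolkit with nvcc is installed on the machine, is to enable the CUDA backend through the replacement flag GGML_CUDA instead:

# LLAMA_CUBLAS was replaced by GGML_CUDA in current llama.cpp
CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.3.1 --no-cache-dir

If the wheel builds, the bindings expose llama_supports_gpu_offload(), which should report whether the compiled backend can actually offload layers to the GPU:

# Prints True when the installed build includes GPU offload support
python -c "from llama_cpp import llama_supports_gpu_offload; print(llama_supports_gpu_offload())"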