Issue Installing llama-cpp-python #831

Open
AnandMoorthy opened this issue Oct 28, 2024 · 1 comment

@AnandMoorthy

Hi, I am facing an issue while installing llama-cpp-python using the following command:

# Example: cuBLAS
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

GPU Info:

[image: GPU details screenshot]

The error I am getting:

(localgpt2) gpu1@GPU1-Ubuntu:~/localgpt2$ CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.3.1 --no-cache-dir
Collecting llama-cpp-python==0.3.1
Downloading llama_cpp_python-0.3.1.tar.gz (63.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.9/63.9 MB 115.3 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in /home/gpu1/install_home/anaconda3/envs/localgpt2/lib/python3.10/site-packages (from llama-cpp-python==0.3.1) (4.12.2)
Requirement already satisfied: numpy>=1.20.0 in /home/gpu1/install_home/anaconda3/envs/localgpt2/lib/python3.10/site-packages (from llama-cpp-python==0.3.1) (1.26.4)
Requirement already satisfied: diskcache>=5.6.1 in /home/gpu1/.local/lib/python3.10/site-packages (from llama-cpp-python==0.3.1) (5.6.1)
Requirement already satisfied: jinja2>=2.11.3 in /home/gpu1/.local/lib/python3.10/site-packages (from llama-cpp-python==0.3.1) (3.1.2)
Requirement already satisfied: MarkupSafe>=2.0 in /home/gpu1/.local/lib/python3.10/site-packages (from jinja2>=2.11.3->llama-cpp-python==0.3.1) (2.1.3)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [28 lines of output]
*** scikit-build-core 0.10.7 using CMake 3.30.5 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpy3ec99o0/build/CMakeInit.txt
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.34.1")
CMake Error at vendor/llama.cpp/CMakeLists.txt:98 (message):
  LLAMA_CUBLAS is deprecated and will be removed in the future.

  Use GGML_CUDA instead

Call Stack (most recent call first):
  vendor/llama.cpp/CMakeLists.txt:103 (llama_option_depr)

-- Configuring incomplete, errors occurred!

*** CMake configuration failed
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)
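
The CMake message itself names the fix: llama.cpp has deprecated the LLAMA_CUBLAS option in favor of GGML_CUDA. Below is a sketch of the corrected reinstall command, assuming the CUDA toolkit (nvcc) is installed and on PATH; the version pin matches the log above:

# Example: CUDA (GGML_CUDA replaces the deprecated LLAMA_CUBLAS)
CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.3.1 --no-cache-dir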

@ningo-agilityio

You can check this solution via this.
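
As a quick sanity check after reinstalling with the corrected flag (a minimal sketch; llama_cpp exposes a __version__ attribute):

# Verify the rebuilt package imports cleanly and report its version
python -c "import llama_cpp; print(llama_cpp.__version__)"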
