GPU Usage #6

Open
Boundzero opened this issue Aug 29, 2024 · 2 comments

@Boundzero

So I gave it a lengthy prompt to see if I can tell whether it's using my CPU or GPU. Now how do I tell if it's using the GPU? When I look at my CPU usage while it's writing the response, I see it go to between 51 and 54%.

@ai-joe-git (Owner)

To determine whether Belullama is using your GPU instead of your CPU, you can follow these steps (a combined check script is sketched after the list):

  1. Monitor GPU usage:

    • For NVIDIA GPUs, use the nvidia-smi command in a terminal:
      watch -n 1 nvidia-smi
      
    • This will show GPU usage, memory consumption, and processes using the GPU.
    • Look for Ollama or related processes in the list.
  2. Check Ollama logs:

    • Ollama logs whether it detected a GPU at startup.
    • If it runs as a systemd service, check the logs with:
      sudo journalctl -u ollama -f
      
    • If it runs inside a Docker container (as Belullama does), use docker logs on the Ollama container instead.
  3. Verify Ollama's GPU detection:

    • While a model is loaded, run:
      ollama ps
      
    • The PROCESSOR column shows whether the model is running on the GPU, on the CPU, or split between the two.
  4. Monitor system resources:

    • Use tools like htop or top to monitor CPU usage.
    • If CPU usage stays high while a response is generating, Ollama may be running on the CPU instead of the GPU.
  5. Check Ollama version:

    • Ensure you're using a GPU-compatible version of Ollama.
    • Run:
      ollama --version
      
  6. Verify CUDA installation:

    • For NVIDIA GPUs, ensure the driver and CUDA are properly installed:
      nvidia-smi
      nvcc --version
      
    • Note that nvcc is only present when the full CUDA toolkit is installed; to run Ollama, a working driver (plus the NVIDIA Container Toolkit when using Docker) is what matters.

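Putting steps 1–3 together, here is a minimal shell sketch of the checks. It assumes Ollama runs as a systemd service named ollama and that the NVIDIA driver tools are on your PATH; if Belullama runs Ollama inside a Docker container instead, replace the journalctl line with docker logs <container-name>.

    #!/usr/bin/env bash
    # check-gpu.sh: hypothetical helper that bundles the checks above.

    # 1. Is the NVIDIA driver visible at all?
    nvidia-smi || { echo "nvidia-smi not found: driver missing or not on PATH"; exit 1; }

    # 2. Where are loaded models placed? The PROCESSOR column of
    #    `ollama ps` shows GPU vs CPU placement for each running model.
    ollama ps

    # 3. Scan the service logs for GPU/CUDA detection messages
    #    (assumes Ollama runs as the systemd service "ollama").
    sudo journalctl -u ollama --no-pager | grep -iE 'cuda|gpu' | tail -n 20
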
If you're seeing high CPU usage (51–54%) during responses, it's possible that Ollama is falling back to the CPU. This could be due to:

  1. Incorrect GPU setup in Ollama
  2. Model not optimized for GPU use
  3. GPU drivers or CUDA not properly installed or recognized

To address this:

  1. Double-check the Belullama GPU installation process (see the Docker sketch below)
  2. Ensure your GPU drivers and CUDA are up to date
  3. Try explicitly running a model known to work on the GPU
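
For reference, this is how the upstream ollama/ollama Docker image is documented to start with GPU access; Belullama's compose setup may use different container and volume names, so treat those as assumptions here. GPU passthrough in Docker also requires the NVIDIA Container Toolkit on the host.

    # Requires the NVIDIA driver and nvidia-container-toolkit on the host.
    docker run -d --gpus=all \
      -v ollama:/root/.ollama \
      -p 11434:11434 \
      --name ollama \
      ollama/ollama

    # Run a model inside the container while watching nvidia-smi in
    # another terminal; GPU memory usage should jump when it loads.
    docker exec -it ollama ollama run llama3

The llama3 tag is just an example; any model you have already pulled will do.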

If issues persist, you might need to review the Belullama GPU installation script or check for any error messages during the setup process.

@Boundzero (Author)

So I tried your commands and none work; in my case it doesn't even pick up that Ollama is installed, even though I can run it. The -smi commands all return "not found". I can try reinstalling it, but can you perhaps put up a better from-scratch install guide for the NVIDIA beta? I had your original Docker version installed and then installed the NVIDIA one.
