error when trying to run the script #90

Open

AntouanK opened this issue Dec 12, 2022 · 15 comments
@AntouanK

I get this error when trying to run python ./virtual_webcam.py

No module named 'numpy.core._multiarray_umath'

More specifically:

❯ python ./virtual_webcam.py
2022-12-12 11:14:30.381112: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "/usr/lib/python3.10/site-packages/numpy/core/__init__.py", line 23, in <module>
    from . import multiarray
  File "/usr/lib/python3.10/site-packages/numpy/core/multiarray.py", line 10, in <module>
    from . import overrides
  File "/usr/lib/python3.10/site-packages/numpy/core/overrides.py", line 6, in <module>
    from numpy.core._multiarray_umath import (
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/run/media/antouank/evo1/_REPOS_/virtual_webcam_background/./virtual_webcam.py", line 7, in <module>
    import tensorflow as tf
  File "/home/antouank/.local/lib/python3.9/site-packages/tensorflow/__init__.py", line 37, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "/home/antouank/.local/lib/python3.9/site-packages/tensorflow/python/__init__.py", line 37, in <module>
    from tensorflow.python.eager import context
  File "/home/antouank/.local/lib/python3.9/site-packages/tensorflow/python/eager/context.py", line 26, in <module>
    import numpy as np
  File "/usr/lib/python3.10/site-packages/numpy/__init__.py", line 140, in <module>
    from . import core
  File "/usr/lib/python3.10/site-packages/numpy/core/__init__.py", line 49, in <module>
    raise ImportError(msg)
ImportError:

IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.

We have compiled some common reasons and troubleshooting tips at:

    https://numpy.org/devdocs/user/troubleshooting-importerror.html

Please note and check the following:

  * The Python version is: Python3.9 from "/opt/anaconda/bin/python"
  * The NumPy version is: "1.23.5"

and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.

Original error was: No module named 'numpy.core._multiarray_umath'
@AntouanK
Author

After installing/uninstalling packages I got it to run somehow, with (I think) GPU support in TensorFlow.
(I'm clueless about Python and its packaging system.)

I get this error now, the one the troubleshooting page refers to.

I tried the command it gives (with and without sudo), but it never creates a new device.
[screenshot]

The /dev/video2 I already have; I got it when I tried this repo.
The command it suggested was:

$ sudo modprobe v4l2loopback devices=1 exclusive_caps=1 video_nr=2 card_label="fake-cam"

and it seems to persist even when I log out/log in.
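A loaded module stays around until reboot anyway; to make it load with the same options on every boot, a minimal sketch (assuming a systemd-based distribution; the file names are just a convention):

# load the module at boot
echo "v4l2loopback" | sudo tee /etc/modules-load.d/v4l2loopback.conf
# pass the same options to it whenever it is loaded
echo 'options v4l2loopback devices=1 exclusive_caps=1 video_nr=2 card_label="fake-cam"' | sudo tee /etc/modprobe.d/v4l2loopback.conf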

@allo-
Owner

allo- commented Dec 12, 2022

Your user account needs read/write access to the devices. Depending on your distribution they should be owned by group video, and you can add your user account to that group. The less secure option is to use chmod 666 and allow everyone to read and write the devices.
Also make sure to use the right input and output devices. Your cam seems to register two video devices (0, 1) and probably only one of them is usable. Try a video player to check whether it can be used.
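A sketch of that check and fix (assuming the devices are owned by group video, as on most distributions):

# check which group owns the devices
ls -l /dev/video*
# add your user to the video group (takes effect after logging out and back in)
sudo usermod -aG video $USER
# less secure alternative: let everyone read/write the loopback device
sudo chmod 666 /dev/video2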

@AntouanK
Author

You're right.
Adding myself to the video group solved it.

My normal webcam is /dev/video0; I checked with VLC.

I get this error now:

❯ python ./virtual_webcam.py
Num GPUs Available:  1
Traceback (most recent call last):
  File "/run/media/antouank/evo1/_REPOS_/virtual_webcam_background/./virtual_webcam.py", line 17, in <module>
    import tfjs_graph_converter.api as tfjs_api
ModuleNotFoundError: No module named 'tfjs_graph_converter'

:)

@AntouanK
Author

If I do

╰─ pip install tfjs-graph-converter

then I end up back at the previous error again.

❯ python ./virtual_webcam.py
Num GPUs Available:  1
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Reloading config.
Traceback (most recent call last):
  File "/run/media/antouank/evo1/_REPOS_/virtual_webcam_background/./virtual_webcam.py", line 123, in <module>
    fakewebcam = FakeWebcam(config.get("virtual_video_device"), width, height)
  File "/home/antouank/.local/lib/python3.10/site-packages/pyfakewebcam/pyfakewebcam.py", line 54, in __init__
    fcntl.ioctl(self._video_device, _v4l2.VIDIOC_S_FMT, self._settings)
OSError: [Errno 22] Invalid argument

And I am in the video group this time.
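One thing that sometimes helps with this EINVAL from the VIDIOC_S_FMT ioctl (an assumption, not a confirmed fix) is recreating the loopback device cleanly with exclusive_caps=1 and pointing the config at the new node:

# make sure nothing has the device open, then recreate the loopback device
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback devices=1 exclusive_caps=1 video_nr=10 card_label="fake-cam"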

@AntouanK
Author

Also, why does it keep switching to the CPU?

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.

How can I make it use the GPU? (if it eventually runs)

I already have the CUDA packages installed.
[screenshot]

What else would I need?
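A quick sanity check that only confirms the driver and the TensorFlow build, not that the model actually runs on the GPU (a sketch):

# does the driver see the card?
nvidia-smi
# is this TensorFlow build compiled with CUDA, and does it list the GPU?
python -c "import tensorflow as tf; print(tf.test.is_built_with_cuda()); print(tf.config.list_physical_devices('GPU'))"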

@AntouanK
Author

AntouanK commented Dec 13, 2022

Today, the script seems to work fine.
I made the video10 device and it loads up right away.
I guess the video2 device I had was problematic, and I couldn't make a new one for some reason.

The GPU is still an issue though.

I get this when the script starts :

❯ python ./virtual_webcam.py
2022-12-13 08:48:30.190135: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-13 08:48:31.118689: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-12-13 08:48:31.118758: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-12-13 08:48:31.118769: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Reloading config.
Model: mobilenet (multiplier=0.5, stride=16)
Loading model...
done.

It's very, very slow on the CPU, like 4-5 fps.
I have an NVIDIA 4090, so I'd like to make use of it.
What can I do to make the script see the GPU?

Thank you.

PS
I read the TF documentation, and the test command shows that the library is seeing my GPU:

❯ python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2022-12-13 09:54:13.643644: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-13 09:54:14.510984: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/antouank/.conda/envs/virtual-webcam/lib/
2022-12-13 09:54:14.511351: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/antouank/.conda/envs/virtual-webcam/lib/
2022-12-13 09:54:14.511366: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

But then why is the script switching to the CPU?

@allo-
Owner

allo- commented Dec 13, 2022

Have a look at the tensorflow tutorials for your platform. It isn't always easy to get the right versions.

You need tensorflow-gpu, and it has to match the installed CUDA version. Different Python versions only support certain tensorflow versions, so one may have to try some combinations until it works.

Your errors look like you have a GPU-enabled tensorflow, but not the NVIDIA libraries for neural networks.

https://www.tensorflow.org/install/pip#software_requirements
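A sketch of how to compare the two sides: in recent TensorFlow versions, tf.sysconfig.get_build_info() reports which CUDA and cuDNN versions the installed wheel was built against, and nvidia-smi shows what the driver side provides:

# which TensorFlow is installed and which CUDA/cuDNN it expects
python -c "import tensorflow as tf; print(tf.__version__); print(tf.sysconfig.get_build_info())"
# what the driver side offers
nvidia-smi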

@AntouanK
Author

@allo-
Thanks for the response.

I've read this page 3-4 times by now.
Unfortunately, the steps it gives are not helping with the cuDNN issue.
Any idea how I can debug that one?
Maybe how to see what versions I have installed, or what I can try to install?
I've googled a lot but I cannot find any specific example.
:/

@allo-
Owner

allo- commented Dec 13, 2022

You should be able to get much of the stuff from your Linux distribution, but I think for cuDNN and some others you need a download from nvidia.com that requires an account (you can use a throwaway email for it; it just needs the account for downloading).
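One way to see which of those NVIDIA libraries the dynamic loader can actually find (a sketch):

# list the CUDA/cuDNN/TensorRT libraries visible to the loader
ldconfig -p | grep -E 'libcudart|libcudnn|libnvinfer'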

@AntouanK
Author

AntouanK commented Dec 13, 2022

I already have cuDNN installed.

[screenshot]

And I tried cudnn8-cuda11.0 (#9), but it fails to build.

What I'm trying now is this command:

conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0

It has literally been going on for almost 3 hours.

[screenshot]

I don't know what it's doing 😂

PS
Done after 3+ hours.
And of course, the script has the same output 😢
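One detail that is easy to miss with the conda-forge cudatoolkit/cudnn route: the libraries end up inside the conda environment, so TensorFlow only finds them if LD_LIBRARY_PATH points there (the earlier log already shows an LD_LIBRARY_PATH ending in the env's lib/ directory). A sketch, assuming the virtual-webcam env is active:

# make the CUDA/cuDNN libraries installed into the conda env visible to TensorFlow
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/
python ./virtual_webcam.py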

@allo-
Owner

allo- commented Dec 13, 2022

I know it's a mess to get the right versions, and I don't have good advice for your system either. Look at basic tutorials and FAQs and try to get the pieces together, or maybe ask in some help forums for CUDA/tensorflow or general deep learning topics.

When you find a definitive guide I'm happy to link to it, but I don't think I know what to recommend beyond what's on the tensorflow homepage myself.

Depending on your system it could install wheel packages, but when it builds from source, probably something on your system isn't supported.

The versions in my current virtual environment are:

Python 3.8.2

tensorflow==2.4.4
tensorflow-estimator==2.4.0
tensorflow-hub==0.9.0
tensorflowjs==3.3.0
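To mirror that environment, a sketch of pinning the same versions (it needs a Python 3.8 interpreter, since newer Pythons may not have wheels for tensorflow 2.4.x):

# inside a Python 3.8 environment
pip install tensorflow==2.4.4 tensorflow-estimator==2.4.0 tensorflow-hub==0.9.0 tensorflowjs==3.3.0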

@AntouanK
Author

How do you normally set it up?
Let's say you just cloned the repo on a Linux machine.
Do you use conda, or pip, or something else?
Maybe I can try to wipe all the packages I have, clone again, and follow the same steps you did.

@allo-
Owner

allo- commented Dec 13, 2022

I use a virtual environment and installed the packages with pip install -r requirements.txt. The dependencies should be installed automatically. Depending on the tensorflow version you may need a different numpy version.
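A minimal sketch of that setup, assuming the repository is already cloned and is the current directory:

# create and activate a virtual environment, then install the pinned dependencies
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt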

@AntouanK
Author

@allo-
After a long rabbit hole I managed to get a mediapipe/bazel/selfie_segmentation build that runs locally, and I can see myself at 60 fps with the background replaced using the GPU.
I got some help from here.

The issue now is that I have no clue how to use that binary/graph to redirect the output to a fake webcam video device (or how to configure the input/output in general).

Any idea?
I've been googling all morning, but I cannot find any example that explains how to connect it with what I have built.

@allo-
Owner

allo- commented Dec 20, 2022

When you use the virtual webcam background program with mediapipe, you configure the video devices the same way as when using resnet/mobilenet; you just cannot use many of the plugins, but segmentation should work as well as with other mediapipe code.

The standard is v4l2loopback for the video device, but akvcam would be a more modern solution; only its configuration is more involved. See #33 for the discussion and my example config.
