Error on inference #7

Hi, running the from_mxnet.py script on a HiKey 970 board (ARMv8/AArch64, Mali GPU), I get this error:

    LLVM ERROR: Only small and large code models are allowed on AArch64

If I disable LLVM, I get an error saying that it is disabled. Any suggestions? Thanks.
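For reference, a minimal sketch of cross-compiling on the host with an explicit AArch64 target (assuming a Relay-based TVM; frontend and build names vary across versions, and newer releases spell the triple flag -mtriple rather than -target):

    import tvm
    from tvm import relay
    from mxnet.gluon.model_zoo import vision

    # MobileNetV2 from the Gluon model zoo, as a stand-in for the script's own model.
    block = vision.mobilenet_v2_1_0(pretrained=True)
    mod, params = relay.frontend.from_mxnet(block, {"data": (1, 3, 224, 224)})

    # Mali (OpenCL) target for the kernels, plus an explicit AArch64 triple for the
    # host-side code. A missing or mismatched triple is one common trigger of the
    # "code models" LLVM error when building for 64-bit ARM boards.
    target = tvm.target.mali()
    target_host = "llvm -target=aarch64-linux-gnu"

    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build(mod, target=target,
                                         target_host=target_host, params=params)
    lib.export_library("net.tar")  # module to ship to the device over RPC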
Comments

Can you trace the error with pdb to see which line of the code generates this error? On the other hand, I would suggest building an independent TVM runtime on your host and using RPC to communicate with your HiKey 970 device, so that you only need to ensure the OpenCL backend on your device is available.
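To make the suggestion concrete (a sketch only; the IP address, port, and module name are placeholders, and graph/net.tar are assumed to come from a host-side build like the one above): the script can be stepped through with python -m pdb from_mxnet.py, and the RPC flow looks roughly like this:

    # On the HiKey 970, start TVM's RPC server (a runtime-only build with
    # OpenCL enabled is sufficient):
    #   python -m tvm.exec.rpc_server --host 0.0.0.0 --port 9090

    # On the host, connect, upload the cross-compiled module, and run on Mali:
    import numpy as np
    import tvm
    from tvm import rpc
    from tvm.contrib import graph_runtime

    remote = rpc.connect("192.168.1.100", 9090)  # placeholder device address
    remote.upload("net.tar")
    rlib = remote.load_module("net.tar")
    ctx = remote.cl(0)                           # OpenCL context on the Mali GPU

    module = graph_runtime.create(graph, rlib, ctx)
    module.set_input("data", tvm.nd.array(np.zeros((1, 3, 224, 224), "float32"), ctx))
    module.run()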
Thanks for the quick response. The error is in this line:
I think the root cause of this issue might be your OpenCL driver and the correct way to compile TVM with OpenCL, not anything related to running MobileNetV2 inference. Therefore, I would recommend posting any TVM-related questions on http://discuss.tvm.ai.
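For completeness, enabling the OpenCL backend when compiling TVM is done through config.cmake; a rough sketch of the usual device-side steps (paths and flags as in the standard TVM build docs):

    cd tvm && mkdir -p build && cp cmake/config.cmake build/
    # In build/config.cmake: set(USE_OPENCL ON); set(USE_LLVM OFF) on the device,
    # since the board only needs the runtime, not LLVM codegen.
    cd build && cmake .. && make runtime -j4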