
Ability to control GPU preference on Windows and Linux/BSD #323

Draft · wants to merge 1 commit into master
Conversation

@mosra (Owner) commented Feb 28, 2019

It's using the NvOptimusEnablement and AmdPowerXpressRequestHighPerformance executable-local symbols on Windows and the DRI_PRIME environment variable on Linux to control whether to use the integrated or the dedicated GPU. The DRI_PRIME part works as expected.

Problem is, according to our tests, simply adding NvOptimusEnablement to the application sources will force the app to use the dedicated GPU, no matter what the value is. Edit (Sept 2021): the value seems to have an effect after all; did the drivers get fixed since?

```cpp
// has to be in the main exe sources, not in any static library or DLL
extern "C" {
    __declspec(dllexport) int NvOptimusEnablement = 1; // 0 disables it
}
```

Which ... makes this quite useless, as the switch between integrated/dedicated GPU is done at compile time.
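For reference, the AMD counterpart mentioned at the top has the same shape; a sketch going by AMD's documented PowerXpress symbol (note the documented type is a DWORD, not an int):

```cpp
// Sketch of the AMD PowerXpress equivalent -- the same rule applies, it has
// to live in the main exe sources, not in a static library or DLL.
extern "C" {
    __declspec(dllexport) unsigned long AmdPowerXpressRequestHighPerformance = 1;
}
```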

Things to do:

  • extract just the DRI_PRIME part and commit it to master, since that works correctly (see the sketch after this list)
    • In addition to DRI_PRIME, MESA_LOADER_DRIVER_OVERRIDE=zink|iris|... can switch to some other driver as well, but ugh :/
    • there also seem to be __GLX_VENDOR_LIBRARY_NAME=nvidia and __NV_PRIME_RENDER_OFFLOAD=1 env vars specific to the NV driver that make it possible to switch between the Intel and NV card at runtime: https://gitlab.freedesktop.org/glvnd/libglvnd/-/issues/205#note_553296. It's basically what prime-run does internally: setting those env variables (together with a Vulkan one) and running the application.
  • Provide an overarching WindowlessApplication that chooses between GLX and EGL based on --magnum-device glx|egl|0|1|..., with either backend enabled depending on which of WITH_WINDOWLESS[GLX,EGL,...]APPLICATION is set
    • How to solve the problem of two flextGLInit() symbols being present, one for GLX and one for EGL?
    • Then there's no global switch choosing between GLX and EGL, so all the TARGET_HEADLESS and TARGET_DESKTOP_GLES defines that were only confusing can be thrown away
  • test on AMD PowerXpress; maybe at least there it works?
  • turn this into a compile-time option on the CMake user side (not the library builder side), at least (via some target property, for example)?
  • report a bug to NVidia (and then wait >5 years until we can use it)?
  • On macOS and GL there's also a possibility to pick the GPU -- https://gist.github.com/dbarnes/94ded353e16a579ba3da52d2c6261173 (or is it even worth bothering with nowadays?)
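For the DRI_PRIME item above, the Linux side boils down to roughly the following; a minimal sketch (the helper name is hypothetical), assuming Mesa reads the variable only at GL/EGL context creation time:

```cpp
#include <cstdlib>

// Hypothetical helper: request the dedicated GPU via Mesa's PRIME offloading.
// Has to run before any GL/EGL context is created, since the loader reads the
// environment only when the context is set up.
void requestDedicatedGpu(bool dedicated) {
    setenv("DRI_PRIME", dedicated ? "1" : "0", /*overwrite*/ 1);
}
```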

Commit: "Using the NvOptimusEnablement and AmdPowerXpressRequestHighPerformance executable-local symbols on Windows and using the DRI_PRIME environment variable on Linux."
@mosra (Owner, Author) commented Mar 4, 2019

Potential equivalent solution for macOS: https://github.com/CodySchrank/gSwitch

@mosra (Owner, Author) commented Mar 13, 2019

Tested yesterday with my Intel Radeon RX Vega M on Windows; AmdPowerXpressRequestHighPerformance doesn't seem to affect it (probably because it's an AMD chip disguised as an Intel card), so I still need someone with a real AMD Radeon to test.

@bmsq commented Mar 30, 2019

I stumbled across this today after having trouble getting the Primitives example working locally (it works fine on your website). I've gotten things working using the master branch and setting NvOptimusEnablement=1. Let me know if you need help testing this branch.

I have an older Dell XPS laptop with an Optimus chipset (GeForce GT 640M) running on Windows. Using the latest Dell video drivers resulted in a crash after the Phong shader failed to compile. After upgrading to the latest drivers from NVIDIA, the example ran but selected the Intel driver (HD Graphics 4000) and rendered a flat coloured cube. Exporting NvOptimusEnablement=1 resulted in the correct driver being selected and the example rendering correctly.

@mosra (Owner, Author) commented Mar 30, 2019

Oh well. What's the Intel driver version? I was doing some patching and workarounds for Intel drivers recently and managed to iron out all driver bugs on the recent GPUs (Intel 530 - 630), but I have nowhere to test the older ones.

Can you paste the engine output log here? It should show what extensions and driver workarounds it's using. As a random guess, can you try running the example with --magnum-disable-extensions "GL_ARB_direct_state_access"? This extension is particularly buggy there.

For the NvOptimusEnablement, the problem is that even setting it to 0 makes it choose the NVidia GPU, which means it's basically a compile-time option and thus useless.

@bmsq commented Mar 31, 2019

I'm pretty sure Intel HD Graphics 4000 is total rubbish and no amount of driver workarounds will change that. It's been a long time since I did any graphics programming (I'm liking Magnum BTW) and I don't recognize any of the modern GL extensions but I doubt the Intel GPU has the required features to run basic shaders.

Here is the output when running without the NVIDIA GPU:

```
Renderer: Intel(R) HD Graphics 4000 by Intel
OpenGL version: 3.3.0 - Build 8.15.10.2778
Using optional features:
    GL_ARB_texture_filter_anisotropic
    GL_ARB_vertex_array_object
Using driver workarounds:
    no-forward-compatible-core-context
    intel-windows-glsl-exposes-unsupported-shading-language-420pack
    no-layout-qualifiers-on-old-glsl
```

I'm guessing this output means GL_ARB_direct_state_access couldn't be the problem because it's not even supported. I ran with the requested arguments anyway, but it made no difference.

Not being able to set NvOptimusEnablement at runtime sucks; it seems like a pretty poor design decision on NVIDIA's behalf. I'm glad I found this pull request though, I could have spent days trying to figure out how to turn it on.

Thanks

@mosra (Owner, Author) commented Mar 31, 2019

Oh, right, this one is pretty old. It should still be capable of doing at least the WebGL1-level things.

If you have some free time, it would be great to see what the tests say on the Intel card. You can enable them with the BUILD_TESTS and BUILD_GL_TESTS CMake options and then rebuild (and reinstall) the project (it'll take a while, there's a lot of them). Then run ctest -V -R GLTest in the build directory and upload the full output here. One warning though -- while I hope it won't happen, with the old Intel drivers there's a possibility that some of these could trigger driver bugs that cause a GPU reset or even a blue screen. So be careful, save all your work etc. :)

Thank you!

@mosra (Owner, Author) commented Oct 16, 2019

With f7d7390, WindowlessEglApplication-based executables now accept a --magnum-device option, allowing you to select among various EGL devices. On Mesa 18.2 I can switch between three different ones (Intel, AMD and a software rasterizer).

I don't expect SDL/GLFW to implement EGL-based device selection anytime soon, so in the future I might look into replacing SDL/GLFW's own context creation with the EGL-based implementation where supported (opt-in under a TARGET_EGL CMake option, which TARGET_HEADLESS would be renamed to). Ideally this would be done at runtime as well, see the PR description.
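As a usage illustration, a minimal sketch of a windowless executable that picks this up; the class name is made up and the calls are to the best of my knowledge of Magnum's WindowlessEglApplication API. Running it as ./myapp --magnum-device 1 should then select the second EGL device:

```cpp
#include <Magnum/Platform/WindowlessEglApplication.h>

using namespace Magnum;

// Hypothetical app: the base class parses --magnum-device from the arguments
// forwarded here, so device selection needs no extra code on the app side.
class MyApp: public Platform::WindowlessEglApplication {
    public:
        explicit MyApp(const Arguments& arguments):
            Platform::WindowlessEglApplication{arguments} {}

        int exec() override {
            // ... do offscreen rendering on whichever device was selected ...
            return 0;
        }
};

MAGNUM_WINDOWLESSEGLAPPLICATION_MAIN(MyApp)
```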
