Suggestion Description
I've got a W6800 that is awesome, but I'm forced to schedule it exclusively to one workload at a time: either my Photoprism server for encoding acceleration, LocalAI for AI resources, or ffmpeg jobs for one-time encoding of raw footage. A GPU this big can be shared. Of course, if not managed properly on my end, apps can crash, but 32 GB is a LOT of headroom for one container.
I want the ability to assign more than one workload to this GPU. Bonus points if there's a way to do memory management, but that's not required at all.
Operating System
Arch Linux
GPU
W6800
ROCm Component
No response
This would also be really useful when using the GPU for light tasks like transcoding. This is a feature that the Intel GPU Device Plugin has had for a long time via its sharedDevNum option.
Is there any other way to do this? I'm using sharedDevNum for an Intel GPU, just as @judahrand mentioned above, but I'm looking for a way to do something similar with this AMD device plugin.
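For context, here is a rough sketch of how the Intel plugin's sharing works, which is the behavior being requested here for AMD. The flag and resource names come from the Intel GPU device plugin; the image tags and share count are illustrative assumptions, not a recommended configuration:

```yaml
# Intel GPU device plugin DaemonSet (excerpt): -shared-dev-num advertises
# each physical GPU as N allocatable devices to the scheduler.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: intel-gpu-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: intel-gpu-plugin
  template:
    metadata:
      labels:
        app: intel-gpu-plugin
    spec:
      containers:
        - name: intel-gpu-plugin
          image: intel/intel-gpu-plugin:0.26.0   # tag is illustrative
          args:
            - "-shared-dev-num=4"   # one GPU appears as 4 schedulable devices
---
# Multiple pods like this one can then each claim a "share" of the same
# physical GPU; the plugin does no memory isolation between them.
apiVersion: v1
kind: Pod
metadata:
  name: transcode-job
spec:
  containers:
    - name: ffmpeg
      image: linuxserver/ffmpeg   # image is illustrative
      resources:
        limits:
          gpu.intel.com/i915: "1"
```

Note this is purely scheduler-level oversubscription: every pod still sees the whole GPU and its full memory, which matches the "apps can crash if not managed" caveat in the original request. A hypothetical AMD equivalent would presumably advertise each device as multiple `amd.com/gpu` resources in the same way.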