
PMT objects in GPU Memory #3

Open
mormj opened this issue Aug 10, 2021 · 1 comment
Labels
enhancement New feature or request

Comments

@mormj (Contributor) commented Aug 10, 2021

Can the underlying memory structure be in GPU device memory so that packets of data residing in GPU memory can be passed around like a thrust vector?

mormj added the enhancement label Aug 10, 2021
@jsallay (Collaborator) commented Aug 13, 2021

Some quick research shows that FlatBuffers allows for custom memory allocators, so we could allocate a flatbuffer using CUDA unified memory. Since we run in threads, once the memory is on the GPU, any block in the same flowgraph would be able to access it there (which makes for some really cool GPU processing flows).
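
For reference, a minimal sketch of what that could look like, assuming the stock `flatbuffers::Allocator` interface and CUDA managed memory (this is just an illustration, not code from this repo):

```cpp
#include <cstdint>
#include <new>

#include <cuda_runtime.h>
#include <flatbuffers/flatbuffers.h>

// Allocator that hands FlatBuffers CUDA unified (managed) memory, so the
// finished buffer is directly addressable from both host code and kernels.
class cuda_managed_allocator : public flatbuffers::Allocator {
public:
    uint8_t* allocate(size_t size) override {
        void* p = nullptr;
        if (cudaMallocManaged(&p, size) != cudaSuccess) throw std::bad_alloc();
        return static_cast<uint8_t*>(p);
    }
    void deallocate(uint8_t* p, size_t /*size*/) override { cudaFree(p); }
};

// Usage sketch: build the flatbuffer straight into managed memory.
// cuda_managed_allocator alloc;
// flatbuffers::FlatBufferBuilder fbb(1024, &alloc, /*own_allocator=*/false);
```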

Just note that if we were to transfer the data between processes, we would have to pull the data off the GPU, serialize it, deserialize it, and then push it back to the GPU. This could lead to confusingly slow processing in cases where you are using multiple flowgraphs connected over zmq.
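
A rough illustration of that per-hop cost (the serialization and zmq steps are only sketched as comments, and `cross_process_hop` is a made-up name, not pmt API):

```cpp
#include <cstdint>
#include <vector>

#include <cuda_runtime.h>

// Each cross-process hop forces a host round-trip: device -> host, serialize,
// (zmq transport), deserialize, host -> device. Only the extra copies are shown.
void cross_process_hop(const uint8_t* d_in, size_t size, uint8_t* d_out) {
    std::vector<uint8_t> host(size);
    cudaMemcpy(host.data(), d_in, size, cudaMemcpyDeviceToHost);  // pull off the GPU
    // ... serialize, send over zmq, receive, deserialize ...
    cudaMemcpy(d_out, host.data(), size, cudaMemcpyHostToDevice); // push it back
}
```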

mormj pushed a commit to mormj/pmt that referenced this issue Nov 17, 2022