
Support cucim for slide manipulation #6

Open · mlathara opened this issue Feb 19, 2022 · 3 comments
Labels: enhancement (New feature or request)

Comments

@mlathara (Owner) commented Feb 19, 2022

We can potentially leverage cuCIM instead of openslide for working with the images. That should give some performance boost, since the work will happen on the GPU.
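
For a rough sense of what the swap looks like at the read-region level, something like the sketch below (just illustrative, not code from this repo; the path, tile size, and level are placeholders):

```python
# Minimal sketch (not this repo's code): reading one tile with OpenSlide vs cuCIM.
import numpy as np
import openslide
from cucim import CuImage

path = "slide.svs"                      # placeholder slide path
location, size, level = (0, 0), (256, 256), 0

# OpenSlide: read_region(location, level, size) returns a PIL RGBA image
slide = openslide.OpenSlide(path)
tile_os = np.asarray(slide.read_region(location, level, size).convert("RGB"))

# cuCIM: read_region(location, size, level) returns a CuImage region
img = CuImage(path)
tile_cucim = np.asarray(img.read_region(location, size, level))
```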

mlathara added the enhancement (New feature or request) label on Apr 27, 2022
@mlathara (Owner) commented

@roof12 Any updates on this?

@roof12 (Collaborator) commented Jun 8, 2022

On a branch I converted the tiling to use cuCIM. I wrote a script that uses this to test the execution time of tiling, separate from NVFLARE and training. The test used 3 mouse images, 105-154 MB each, read from an external USB drive. Each configuration was run six times and the mean was taken.

The cuCIM tiling was written against an older version of the Sarcoma code (from around March), and these timings use that version. I will update the branch to work with main and push it.
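
Roughly, the cuCIM tiling looks like the sketch below (simplified, not the exact branch code; the tile size and the level-0 dimension lookup via `resolutions` are illustrative):

```python
# Simplified sketch of cuCIM-based tiling with a worker pool (not the branch code).
from concurrent.futures import ProcessPoolExecutor
import numpy as np
from cucim import CuImage

TILE = 256                                        # illustrative tile size

def read_tile(args):
    path, x, y = args
    img = CuImage(path)                           # open inside the worker process
    return np.asarray(img.read_region((x, y), (TILE, TILE), 0))

def tile_slide(path, num_workers=1):
    img = CuImage(path)
    width, height = img.resolutions["level_dimensions"][0]
    coords = [(path, x, y)
              for y in range(0, height - TILE + 1, TILE)
              for x in range(0, width - TILE + 1, TILE)]
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(read_tile, coords))
```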

OpenSlide, 1 worker

Mean elapsed time: 26.269 s

python test-network.py 25.61s user 1.49s system 102% cpu 26.318 total
python test-network.py 25.43s user 1.52s system 102% cpu 26.207 total
python test-network.py 25.49s user 1.48s system 102% cpu 26.241 total
python test-network.py 25.51s user 1.54s system 102% cpu 26.288 total
python test-network.py 25.33s user 1.62s system 102% cpu 26.195 total
python test-network.py 25.56s user 1.54s system 102% cpu 26.364 total

cuCIM, 1 worker

Mean elapsed time: 25.000 s

python test-network.py 24.44s user 1.54s system 104% cpu 24.939 total
python test-network.py 24.44s user 1.62s system 104% cpu 25.039 total
python test-network.py 24.46s user 1.66s system 104% cpu 25.069 total
python test-network.py 24.36s user 1.63s system 104% cpu 24.920 total
python test-network.py 24.49s user 1.61s system 104% cpu 25.047 total
python test-network.py 24.46s user 1.57s system 104% cpu 24.985 total

cuCIM, 2 workers

Mean elapsed time: 13.649 s

python test-network.py 24.82s user 1.67s system 194% cpu 13.643 total
python test-network.py 24.76s user 1.58s system 193% cpu 13.600 total
python test-network.py 24.69s user 1.72s system 193% cpu 13.623 total
python test-network.py 24.66s user 1.66s system 193% cpu 13.590 total
python test-network.py 24.66s user 1.56s system 189% cpu 13.856 total
python test-network.py 24.68s user 1.51s system 192% cpu 13.579 total

cuCIM, 4 workers

Mean elapsed time: 7.811 s

python test-network.py 24.79s user 1.64s system 339% cpu 7.793 total
python test-network.py 24.50s user 1.67s system 337% cpu 7.745 total
python test-network.py 24.63s user 1.67s system 340% cpu 7.734 total
python test-network.py 24.88s user 1.59s system 332% cpu 7.950 total
python test-network.py 24.69s user 1.75s system 338% cpu 7.811 total
python test-network.py 24.67s user 1.84s system 338% cpu 7.832 total

cuCIM, 8 workers

Mean elapsed time: 5.082 s

python test-network.py 25.30s user 2.21s system 519% cpu 5.299 total
python test-network.py 25.16s user 2.07s system 544% cpu 4.997 total
python test-network.py 25.09s user 2.24s system 542% cpu 5.042 total
python test-network.py 25.50s user 2.06s system 543% cpu 5.076 total
python test-network.py 25.17s user 2.16s system 544% cpu 5.020 total
python test-network.py 25.58s user 2.04s system 545% cpu 5.062 total

@mlathara (Owner) commented Jun 8, 2022

@roof12 thanks for the update. This is cool! Yes, please do merge with the latest main - hopefully that won't make too much difference here, but I'm curious to see if the augmentation/normalization stuff that @afrankel-cc-tdi put in affects this in any way.

Some questions/comments:

  • The perf numbers are mostly self-explanatory, but I'm wondering about the % cpu portion. I thought we'd be delegating to the GPU for this, but the % cpu is high even in the cuCIM cases. Am I missing something?
  • Along those lines, what sort of GPU/CPU are you using here?
  • Not sure if Charles is comfortable giving you access to the servers, but if not, maybe you could share this script and we can try it out on the servers to see what sort of numbers we see there.
  • Might also be worth bumping up the number of workers for openslide to see how that scales (a rough harness is sketched below).
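
Something along these lines could serve as a quick harness for that (hypothetical, not test-network.py; it just wraps whatever tiling function is being measured):

```python
# Hypothetical harness (not the actual test-network.py): time a tiling function
# at several worker counts and average over runs, so OpenSlide scaling can be
# compared against the cuCIM numbers above.
import time

def time_tiling(tile_fn, path, workers, runs=6):
    elapsed = []
    for _ in range(runs):
        start = time.perf_counter()
        tile_fn(path, num_workers=workers)
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)

# e.g., with the tile_slide sketch above (or an OpenSlide equivalent):
# for n in (1, 2, 4, 8):
#     print(n, "workers:", time_tiling(tile_slide, "slide.svs", n))
```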
