- DICOM v4
- Status? DSC voted to stop discussions (what I heard)
- Lots of issues raised as part of brainstorming - what is DICOM going to do about it?
- Should be some communication to the community
- What I heard
- Some of the issues DICOM just isn't going to address
- Performance?
- FUD
- Fear of incompatibility
- Uncertainty about whether there would be enough resources
- Doubt about whether it's possible
- "Medical imaging is hard, you need $500/hr developers to succeed"
- nVidia GTC Feedback
- Very "Apple Like"
- New GPU Blackwell
- Faster
- More power efficient
- Monai Label and OHIF: https://www.nvidia.com/gtc/session-catalog/?search=DLIT61433&tab.allsessions=1700692987788001F1cG#/session/1694022362941001ovCu
- New nvLink adapter (connects GPUs together for more memory)
- 2TB/s bandwidth
- Sam Altman trying to raise $7T for "faster AI training hardware" (GPUs?)
- Medical Imaging AI/ML Training with GPUs (Foundation Model / Deep Learning)
- If you have a large dataset for training and want to use multiple GPUs, how is the data distributed to the different GPUs?
- For large numbers of parameters, you run out of memory, so you need to spread the training across connected GPUs
- Different ways to parallelize
- Assign different parts of the model to different GPUs
- Data parallelization - assign subsets of the training data to different GPUs, average the gradients from each GPU, and update the model (see the sketch after this list)
- Pipeline parallelization - different layers of the models are assigned to different GPUs
- Hybrid of the above three
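To make the data-parallel option concrete, here is a minimal sketch using PyTorch's `DistributedDataParallel` (my own illustration, not something shown at GTC). The linear model and synthetic tensors are placeholders for a real imaging model and dataset, and it assumes launching with `torchrun` so that `LOCAL_RANK` / `WORLD_SIZE` are set. Each GPU trains on its own shard of the data, and gradients are averaged across GPUs after every backward pass.

```python
# Minimal data-parallelism sketch with PyTorch DistributedDataParallel (DDP).
# Assumes one process per GPU, launched via:
#   torchrun --nproc_per_node=<num_gpus> train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and synthetic data standing in for a real imaging dataset.
    model = torch.nn.Linear(512, 2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(10_000, 512),
                            torch.randint(0, 2, (10_000,)))
    # DistributedSampler hands each GPU a disjoint subset of the training data.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()   # DDP all-reduces (averages) gradients across GPUs here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Model and pipeline parallelization use the same multi-process launch pattern but split the model's tensors or layers across GPUs instead of (or in addition to) the data, typically via frameworks built for that purpose rather than plain DDP.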
- More and more providers are operationalizing AI
- Duke, Rad Partners, Kaiser