The fastest way is to write your own dataset in PyTorch. If you take a look at dataset/brain_reader.py, you may follow its return types.
The brain_reader dataset returns a tuple of four elements. The first is the input 3D volume, a float32 torch tensor of shape [1, depth, height, width]. The second is a list of ground-truth bounding boxes for the objects in the volume, of shape [num_of_objects, 6]; the six elements for each object are its z, y, x, depth, height, and width. The third is a list of object category ids, one per bounding box in the second element. The last is a one-hot segmentation mask covering all classes, of shape [num_of_classes, depth, height, width].
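As a concrete illustration of that contract, here is a minimal sketch of a custom dataset that returns the same four elements. The class name, the zero-filled data, and the single hard-coded box are all hypothetical placeholders; a real dataset would subclass `torch.utils.data.Dataset` and load volumes and masks from your own files (numpy stands in for torch tensors here to keep the sketch self-contained):

```python
import numpy as np

class CustomVolumeDataset:
    """Sketch of a dataset matching brain_reader's four-element return
    format. In practice this would subclass torch.utils.data.Dataset
    and return float32 torch tensors instead of numpy arrays."""

    def __init__(self, num_samples=4, shape=(64, 96, 96), num_classes=3):
        self.num_samples = num_samples
        self.depth, self.height, self.width = shape
        self.num_classes = num_classes

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        d, h, w = self.depth, self.height, self.width
        # 1) input 3D volume: [1, depth, height, width], float32
        volume = np.zeros((1, d, h, w), dtype=np.float32)
        # 2) ground-truth boxes: [num_of_objects, 6],
        #    each row is (z, y, x, depth, height, width)
        boxes = np.array([[10, 20, 20, 8, 12, 12]], dtype=np.float32)
        # 3) category id for each box: [num_of_objects]
        labels = np.array([1], dtype=np.int64)
        # 4) one-hot segmentation mask: [num_of_classes, depth, height, width]
        mask = np.zeros((self.num_classes, d, h, w), dtype=np.float32)
        return volume, boxes, labels, mask
```

The important part is only the shapes and ordering of the four returned elements; how the data is actually read from disk is up to you.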
Hi @tanghaotommy, I also want to run your pipeline on an internal dataset at our institution. We have exported all DICOMs with binary masks to images in .nrrd format. What will I need to do, in addition to this, to train your model with our data?
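For data that already comes with binary masks (as in the .nrrd export described above), the bounding-box list and one-hot mask can be derived from the binary mask itself. Below is a minimal numpy sketch; the function name is hypothetical, it assumes a single foreground object (multiple objects would need connected-component labeling first), and it takes z, y, x to be the minimum corner of the box — check dataset/brain_reader.py for whether the repo expects corner or center coordinates:

```python
import numpy as np

def mask_to_target(binary_mask, num_classes=2, class_id=1):
    """Derive (boxes, labels, one_hot) in the brain_reader-style format
    from a single 3D binary mask of shape [depth, height, width]."""
    zs, ys, xs = np.nonzero(binary_mask)
    if len(zs) == 0:
        # empty mask: no objects
        boxes = np.zeros((0, 6), dtype=np.float32)
        labels = np.zeros((0,), dtype=np.int64)
    else:
        z0, y0, x0 = zs.min(), ys.min(), xs.min()
        z1, y1, x1 = zs.max(), ys.max(), xs.max()
        # tight box as (z, y, x, depth, height, width), corner convention
        boxes = np.array(
            [[z0, y0, x0, z1 - z0 + 1, y1 - y0 + 1, x1 - x0 + 1]],
            dtype=np.float32,
        )
        labels = np.array([class_id], dtype=np.int64)
    # one-hot mask: channel 0 is background, channel class_id is foreground
    one_hot = np.zeros((num_classes,) + binary_mask.shape, dtype=np.float32)
    fg = binary_mask.astype(np.float32)
    one_hot[class_id] = fg
    one_hot[0] = 1.0 - fg
    return boxes, labels, one_hot
```

The .nrrd volumes themselves can be loaded with the pynrrd package (`nrrd.read` returns the data array and a header) and then passed through a conversion like this inside your dataset's `__getitem__`.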
Hi Tang,
Thanks for your excellent work!
As title, could you please provide a training data format for custom dataset training? How can I preprocess my dataset to train my model?
Thanks!
Cheers,
Jiancheng