Draft: proof-of-concept for streamline to precomputed annotation, include CORS server for rendering in local neuroglancer #3

Draft -- wants to merge 7 commits into base branch ak-precomputed
Conversation

aaronkanzer
Contributor

@balbasty @ayendiki -- starting a branch here for work on trk to precomputed annotation conversion -- Note: I am testing this against Google's current neuroglancer, not our fork

To replicate locally/work with if you'd like:

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python trk_to_annotation.py

This will produce a few thousand small (1 KB-ish) coordinate files with the JSON properties necessary for rendering in neuroglancer -- my current obstacle is getting the precomputed annotation layer and the NIfTI layer (sub-I58_sample-hemi_desc-preproc_dwi_FA.nii.gz in this case) to be in the same voxel space
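For reference, here's roughly how I'm loading the inputs (a minimal sketch -- the .trk path is a placeholder; nibabel returns tractogram.streamlines already in RAS+ mm):

import nibabel as nib

# Load the tractogram -- nibabel applies the TRK header affine, so
# tractogram.streamlines is a list of (N, 3) point arrays in RAS+ mm
trk = nib.streamlines.load("path/to/streamlines.trk")  # placeholder path
streamlines = trk.tractogram.streamlines

# Load the FA volume -- its affine maps voxel indices into the same RAS+ mm space
fa = nib.load("sub-I58_sample-hemi_desc-preproc_dwi_FA.nii.gz")
print(fa.affine)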

Will keep you posted as things progress -- wanted to get code on GitHub to start the conversation

@aaronkanzer changed the title from "Ak precoputed annotation" to "Draft: proof-of-concept for streamline to precomputed annotation, include CORS server for rendering in local neuroglancer" on Dec 13, 2024
@aaronkanzer
Contributor Author

aaronkanzer commented Dec 13, 2024

@balbasty -- a couple of mini updates here with the script (things I am trying to tackle):

• It seems that the NIfTI and the precomputed annotations are still offset in neuroglancer -- i.e. they don't overlap when added as layers --

backend.ts:456 Error retrieving chunk [object Object]:3,3,3: RangeError: Offset is outside the bounds of the DataView

• It seems that the "empty" bytes are not being recognized by the precomputed backend for neuroglancer (whether or not we should have "empty" bytes at all, I'm not sure -- see the sketch after the error output below)

backend.ts:891 Uint8Array(8) [0, 0, 0, 0, 0, 0, 0, 0, buffer: ArrayBuffer(8), byteLength: 8, byteOffset: 0, length: 8, Symbol(Symbol.toStringTag): 'Uint8Array']
backend.ts:456 Error retrieving chunk [object Object]:5,5,2: Error: Expected at least 8 bytes
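If it helps clarify, my current understanding (possibly wrong) is that every spatial-index chunk should start with a little-endian uint64 annotation count, so an "empty" chunk would still be 8 zero bytes rather than a zero-length file -- a minimal sketch of what I mean (paths here are placeholders):

import struct

def write_empty_annotation_chunk(path):
    # My reading of the precomputed annotation spec: each spatial-index chunk
    # begins with a little-endian uint64 count of annotations, so an empty
    # chunk is 8 zero bytes (count = 0), not an empty file.
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", 0))

# e.g. for a grid cell containing no streamline points (placeholder path;
# assumes the spatial0 directory already exists):
write_empty_annotation_chunk("precomputed/spatial0/5_5_2")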

@balbasty

I think you want the tract coordinates to be in RAS mm, not in voxels. So don't apply the inverse affine, set the axes to 1 mm (not voxel_size), and compute the upper and lower bounds in RAS mm space. There's an example in the Google doc.

@aaronkanzer
Contributor Author

I think you want the tract coordinates to be in RAS mm, not in voxels. So don't apply the inverse affine, set the axes to 1 mm (not voxel_size), and compute the upper and lower bounds in RAS mm space. There's an example in the Google doc.

@balbasty thanks for the guidance -- just so I fully understand, do you mean this Google doc? https://docs.google.com/document/d/1TnDNvDigOxpNSuZBjHkve_epp7tbpswcxUUcBOaRV94/edit?tab=t.0

If so, do you mean:

 "dimensions": {"x": [1, "mm"], "y": [1, "mm"], "z": [1, "mm"]},
  "lower_bound": vertices.min(axis=0).tolist(), 
  "upper_bound": vertices.max(axis=0).tolist(), 

in your info JSON example?
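i.e. something like this for the full info, if I'm reading the precomputed annotation spec correctly (a rough sketch -- the bounds, grid_shape, and limit values below are placeholders):

import json

# Placeholder bounds -- in the real script these would be the global
# min/max over all streamline points in RAS mm
lower_bound = [-70.0, -110.0, -60.0]
upper_bound = [70.0, 80.0, 90.0]

info = {
    "@type": "neuroglancer_annotations_v1",
    "dimensions": {"x": [1, "mm"], "y": [1, "mm"], "z": [1, "mm"]},
    "lower_bound": lower_bound,
    "upper_bound": upper_bound,
    "annotation_type": "LINE",        # one LINE annotation per streamline edge
    "properties": [],
    "relationships": [],
    "by_id": {"key": "by_id"},
    "spatial": [
        {
            "key": "spatial0",
            "grid_shape": [1, 1, 1],  # placeholder: one chunk covering everything
            "chunk_size": [u - l for l, u in zip(lower_bound, upper_bound)],
            "limit": 10000,           # placeholder
        }
    ],
}

print(json.dumps(info, indent=2))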

@balbasty

Yes! Although it should really be a "double loop" minimum/maximum (over tracts, and over the edges within each tract)
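Something along these lines (a minimal sketch -- I'm assuming streamlines is the list of per-tract point arrays in RAS mm; the placeholder data is just to keep it runnable):

import numpy as np

# Placeholder data -- in practice, streamlines comes from the loaded tractogram
streamlines = [np.random.rand(10, 3) * 100 for _ in range(5)]

lower_bound = np.full(3, np.inf)
upper_bound = np.full(3, -np.inf)
for tract in streamlines:                                     # outer loop: over tracts
    lower_bound = np.minimum(lower_bound, tract.min(axis=0))  # inner min: over this tract's points
    upper_bound = np.maximum(upper_bound, tract.max(axis=0))  # inner max: over this tract's points

lower_bound = lower_bound.tolist()
upper_bound = upper_bound.tolist()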

@aaronkanzer
Contributor Author

Yes! Although it should really be a "double loop" minimum/maximum (over tracts, and over the edges within each tract)

What exactly is accomplished by the "double loop" that wouldn't be accomplished via a "single loop"? I tried to search online to see why, but couldn't really make full sense of it.
