
Using kimimaro for voxel skeletonization #81

Open
ZiYuanZiWan opened this issue Jul 2, 2023 · 12 comments
Labels
question Further information is requested

Comments

@ZiYuanZiWan

ZiYuanZiWan commented Jul 2, 2023

[screenshot: voxel structure in Abaqus]
Dear Seung lab:
I apologize for asking a question that may not be directly related to your expertise. Despite searching through numerous research papers on Google Scholar, I am still unable to complete my work. I have only been learning Python for two months; there are many foundational papers on skeletonization algorithms, but I am struggling to translate the concepts from those papers into working code. Most skeletonization algorithms have been extended to fields like medicine and botany, yet few Python libraries exist to help me complete my work.
Here is my problem: I have included a screenshot from a commercial finite element package called "Abaqus". As you can see, the structure resembles a topological structure. Since this software is designed for computation, it does not produce images like those available from medical instruments. Using the software's Python interface, I have written code to obtain the center coordinates of each voxel in the structure (in the software, each voxel has a fixed size and is a regular cube with equal length, width, and height), as well as the voxel size itself (e.g., 30). I have tried several Python libraries; most require images as input, while only a few accept numpy arrays. I have written some immature code to build a binary array from these center coordinates, which should let me apply skeletonization algorithms from some Python libraries.

```python
import numpy as np

def ske(mesh, x, y, z, openfile):
    me = int(mesh)
    # Total size of the canvas, in voxels
    depth = int(x // me)
    height = int(y // me)
    width = int(z // me)
    # Create an empty array with the same size as the voxel data
    voxels = np.zeros((depth, height, width), dtype=np.uint8)
    # Read the data source file (file: coord_26_cube.txt),
    # one "x y z" center coordinate per line
    with open(openfile, 'r') as file:
        centers = [tuple(map(int, line.split())) for line in file]
    print(len(centers))
    # Map each center point to its position in the voxel grid
    for center in centers:
        z = center[0] // me
        y = center[2] // me
        x = center[1] // me
        # Set the corresponding voxel to white
        voxels[z, y, x] = 255
    return voxels

v = ske(30, 90, 90, 90, 'coord_26point_cube.txt')
```

[attachment: coord_26_cube.txt]
Here I have attached a simple test file that demonstrates my code working under normal conditions. The file represents a 90x90x90 cube with voxel size 30, subdivided into 27 parts with one small cube removed, leaving 26 small cubes. The coordinates in the file are the positions of the center points.
Although I achieved some good results with smaller libraries, my next step is to obtain the endpoints of these skeletons and connect them accurately. Many Python libraries cover only part of this task, and I cannot find a single library that provides everything I need. Switching libraries also introduces the problem of inconsistent data formats. With limited time left before my deadline, I cannot afford to try each library one by one.
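For reference, the described test coordinates can be regenerated with a short sketch like the following. It assumes the removed cube is the central one at (45, 45, 45), which is my guess; the attached file may differ.

```python
import itertools

# Sketch: regenerate the 26 center coordinates for a 90x90x90 cube split
# into 27 voxels of size 30, with one voxel removed (assumed here to be
# the central one at (45, 45, 45) -- this choice is illustrative).
size, mesh = 90, 30
centers = [
    (x, y, z)
    for x, y, z in itertools.product(range(mesh // 2, size, mesh), repeat=3)
    if (x, y, z) != (45, 45, 45)
]
print(len(centers))  # 26
```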
I sincerely hope that you can provide me with some assistance, even if it is just a little guidance or inspiration. To be honest, GitHub is one of the best communities I have come across, which is why I dare to ask you directly. Thank you for taking the time to read this lengthy passage. I truly appreciate your help.

@william-silversmith
Contributor

Hi ZiYuanZiWan,

Thanks for writing in. I think I do not understand exactly what your issue is. Are you unable to extract a skeleton or are you having trouble joining together skeletons that have been extracted?

Have you generated Kimimaro skeletons and were they of sufficient quality? Note that TEASAR skeletons are trees and so cannot represent loops.

Will

@ZiYuanZiWan
Author

> Hi ZiYuanZiWan,
>
> Thanks for writing in. I think I do not understand exactly what your issue is. Are you unable to extract a skeleton or are you having trouble joining together skeletons that have been extracted?
>
> Have you generated Kimimaro skeletons and were they of sufficient quality? Note that TEASAR skeletons are trees and so cannot represent loops.
>
> Will

I am very sorry that I did not state my specific questions after describing the problem. Let me state them now:

  1. My original data are voxels, but because of the software, I cannot export a common format (i.e., the image formats most Python libraries require). I can only produce binary arrays through my own code, so I have not yet found a way to use Kimimaro to skeletonize binary arrays. Having read your paper, I admire the Kimimaro you have developed and hope to use it for all of my work. I hope I can perform skeletonization directly from the data file of center point coordinates plus a voxel size parameter.

  2. After skeletonization, I need the significant nodes of the skeleton, and then to simplify the skeleton into a spatial structure of points and lines without redundant connections (just as a simplified neuron leaves only cell bodies and synapses connected into a spatial structure). Afterwards, I want to merge segments that are too short or have similar features or slopes, to form a concise framework (I tried partitioning the point cloud, clustering, and connecting the clustered points, but the results were poor and too many features were lost).

The main problem is importing the data: I cannot provide the original data in an image format. Can I still use Kimimaro? I hope your profound knowledge can guide me in completing this task. I know it is not difficult for you, but for me it is quite challenging. Once again, my sincere gratitude.

@william-silversmith
Contributor

I see, so you are still having trouble rendering a numpy image? Your code for that small example worked for me. Would you be able to share your data? I might be working on a similar problem right now in terms of sparse points. If you're working with a mesh, the problem is that you'll need to fill in the image volumetrically after rendering the border.

In terms of simplifying the resulting skeleton, the Skeleton class that Kimimaro outputs has a .terminals() function that will pull out the end points of the skeleton. It also has a .downsample(factor) function to simplify the skeleton.

@william-silversmith added the question (Further information is requested) label on Jul 2, 2023
@ZiYuanZiWan
Author

> I see, so you are still having trouble rendering a numpy image? Your code for that small example worked for me. Would you be able to share your data? I might be working on a similar problem right now in terms of sparse points. If you're working with a mesh, the problem is that you'll need to fill in the image volumetrically after rendering the border.
>
> In terms of simplifying the resulting skeleton, the Skeleton class that Kimimaro outputs has a .terminals() function that will pull out the end points of the skeleton. It also has a .downsample(factor) function to simplify the skeleton.

Yes, I still cannot obtain a good numpy image.
There is still a bug in the code I posted: when the voxel size does not evenly divide the model and non-uniform voxels appear (for example, a 100 x 100 x 100 cube with a voxel size of 30 leaves a non-uniform voxel of 10), the index exceeds the array bounds (line 22: voxels[z, y, x] = 255) and unexpected pixels appear.
At present I cannot solve this problem myself. In addition, some papers I have read seem to let the authors specify the spacing between pixels (normally the spacing is 1, but in those papers it can be changed freely according to the voxel size), producing a numpy image that preserves the voxel-size feature. That is what I originally wanted to do, but the papers gave no hints and I could not reproduce it. My current idea for preserving voxel information is to keep a Python dictionary synchronized with the numpy image.
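A minimal sketch of one way to avoid the out-of-range index described above: size the grid with ceiling division, so a trailing partial voxel still gets a cell. The helper name grid_dims is my own, not from any library.

```python
import math

# Sketch: ceiling division when sizing the voxel grid, so an extent that
# the voxel size does not evenly divide (e.g. 100 with voxel size 30)
# still yields enough cells for the trailing partial voxel.
def grid_dims(extent, voxel_size):
    return math.ceil(extent / voxel_size)

print(grid_dims(90, 30))   # 3 -- evenly divisible
print(grid_dims(100, 30))  # 4 -- the partial voxel of size 10 gets a cell
```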
Finally, I can provide you with all my original data, but it is very late here (3:30 a.m.) and turning on the work computer might disturb others. I will pack the files and paste them here as soon as I get up tomorrow, along with visualization code to make the problem clearer. Thank you for pointing out how to implement my second idea in Kimimaro; I believe I will soon be able to use those functions :) .
(Perhaps due to a lack of knowledge, I have not fully understood your sentence: "If you're working with a mesh, the problem is that you'll need to fill in the image volumetrically after rendering the border.")

@william-silversmith
Contributor

Looking forward to seeing your data! Hope you get a good night's sleep!

> (Perhaps due to a lack of knowledge, I have not fully understood your sentence at the moment: "If you're working with a mesh, the problem is that you'll need to fill in the image volumetrically after rendering the border.")

In the code you provided, there was a variable named "mesh." If that referred to a 3D triangle mesh (or similar), then you would need to fill in the voxels that are inside the mesh as well as the boundary. However, there may be other interpretations of the word mesh, such as a grid, in which case this point may not be applicable.

@ZiYuanZiWan
Author

ZiYuanZiWan commented Jul 3, 2023

> Looking forward to seeing your data! Hope you got a good night's sleep!
>
> In the code you provided, there was a variable named "mesh." If that referred to a 3D triangle mesh (or similar), then you would need to fill in the voxels that are inside the mesh as well as the boundary. However, there may be other interpretations of the word mesh, such as a grid, in which case this point may not be applicable.

Thank you for your patience. It took me a significant amount of time to package my raw data and provide as much information as possible. Allow me to explain the purpose of each file in the zip folder [file.zip].

(1) testodb.odb: This is the most original database file type provided by the Abaqus software I use. It stores voxel information, also known as finite element elements, along with additional mechanical analysis results. However, considering that you may have difficulty opening or finding meaningful data in this file, I have extracted the precise coordinates of the eight corner points of the 3D cube mesh for you. These coordinates are named (2) voxels_coordinates.txt. Due to limitations in the numpy library, I have saved them in a 1D format, where each set of eight consecutive coordinates represents a 3D cube mesh.

(3) vox_ext.py: If you happen to have the Abaqus software, you can directly paste this code snippet into the command stream to obtain the voxels_coordinates.txt file and the unprocessed standard 8D numpy array.

(4-5) There is a static image and a dynamic image included in the zip folder, which showcase the original state of these voxels. You can envision them as a structure built with square-shaped blocks, similar to building with interlocking cubes.

Additionally, I would like to clarify that when I refer to "mesh," I specifically mean the 3D cube mesh. The Abaqus software also has a 3D triangle mesh available, but it is not commonly used. Currently, the bottleneck I face is whether I can convert these 3D cube meshes into standard numpy images, as this will determine my ability to use kimimaro for my work.

@ZiYuanZiWan
Author

ZiYuanZiWan commented Jul 3, 2023

Now I would like to provide a demonstration using another dataset [coord_1308points_wall.txt] to showcase the skeleton I have generated. I modified the code above to add visualization using the skimage library together with matplotlib. The skeleton algorithm used by skimage is [T.-C. Lee, R.L. Kashyap and C.-N. Chu, Building skeleton models via 3-D medial surface/axis thinning algorithms. Computer Vision, Graphics, and Image Processing, 56(6):462-478, 1994.]. Running this code shows a comparison between the resulting 'Skeleton' and the original data labeled 'Original Data'.

```python
import numpy as np
from skimage import morphology
import matplotlib.pyplot as plt

def ske(mesh, x, y, z, openfile):
    me = int(mesh)
    # Size the voxel grid
    depth = int(x // me)
    height = int(y // me)
    width = int(z // me)
    voxels = np.zeros((depth, height, width), dtype=np.uint8)
    # One "x y z" center coordinate per line
    with open(openfile, 'r') as file:
        centers = [tuple(map(int, line.split())) for line in file]
    print(len(centers))
    for center in centers:
        z = center[0] // me
        y = center[2] // me
        x = center[1] // me
        voxels[z, y, x] = 255
    print(voxels)
    binary_array = voxels
    # 3D medial axis thinning (Lee et al. 1994)
    skeleton = morphology.skeletonize_3d(binary_array)
    # Side-by-side visualization of the input and its skeleton
    fig = plt.figure(figsize=(10, 5))
    ax1 = fig.add_subplot(121, projection='3d')
    ax1.voxels(binary_array, facecolors='b', edgecolor='k')
    ax1.set_title('Original Data')
    ax2 = fig.add_subplot(122, projection='3d')
    ax2.voxels(skeleton, facecolors='r', edgecolor='k')
    ax2.set_title('Skeleton')
    plt.tight_layout()
    plt.show()
    return voxels, skeleton

v, st = ske(100, 2000, 2000, 2000, 'coord_1308points_wall.txt')
```

This is the second difficulty I encountered: apart from being unable to obtain a good numpy image, I cannot apply the endpoint operation I mentioned earlier to the skeleton once it has been extracted.
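As a side note, endpoints of a thinned binary skeleton can also be found generically: a skeleton voxel with exactly one 26-connected neighbor is an endpoint. This is a common heuristic, not an skimage or kimimaro API; a sketch assuming scipy is available:

```python
import numpy as np
from scipy import ndimage

# Sketch: endpoints of a binary 3D skeleton are voxels with exactly one
# 26-connected neighbor. A generic heuristic, not a skimage/kimimaro API.
def skeleton_endpoints(skeleton):
    sk = (skeleton > 0).astype(np.uint8)
    kernel = np.ones((3, 3, 3), dtype=np.uint8)
    kernel[1, 1, 1] = 0  # exclude the center voxel itself
    neighbors = ndimage.convolve(sk, kernel, mode="constant", cval=0)
    return np.argwhere((sk == 1) & (neighbors == 1))

# A straight 5-voxel line has exactly two endpoints.
line = np.zeros((5, 3, 3), dtype=np.uint8)
line[:, 1, 1] = 1
print(skeleton_endpoints(line))
```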

@william-silversmith
Contributor

Hi ZiYuanZiWan,

I took a look at your data. coord_1308points_wall.txt seems reasonable and I was able to visualize it easily.

[screenshot: microviewer rendering of coord_1308points_wall.txt]
```python
import numpy as np
import math
import microviewer

with open("coord_1308points_wall.txt", "rt") as f:
    lines = f.readlines()

lines = [
    [float(x) for x in line[:-1].split(" ")]
    for line in lines
]

arr = np.array(lines).astype(int)
print(arr.shape)

# Divide each axis by the GCD of its coordinates to get unit grid steps
div = np.array([
    math.gcd(*list(arr[:, 0])),
    math.gcd(*list(arr[:, 1])),
    math.gcd(*list(arr[:, 2]))
], dtype=int)
arr //= div
arr //= 2
# Shift coordinates so the minimum lands at the origin
arr -= np.min(arr, axis=0)

shape = np.max(arr, axis=0) + 1
img = np.zeros(shape, dtype=bool, order="F")
img[arr[:, 0], arr[:, 1], arr[:, 2]] = 1

microviewer.view(img)
```

However, voxels_coordinates.txt seems to be potentially corrupted?

[screenshot: microviewer rendering of voxels_coordinates.txt]
```python
import numpy as np
import math
import microviewer

with open("voxels_coordinates.txt", "rt") as f:
    lines = f.readlines()

lines = [
    [float(x) for x in line[:-1].split(" ")]
    for line in lines
]

arr = np.array(lines).astype(int)
print(arr.shape)

# Divide each axis by the GCD of its coordinates to get unit grid steps
div = np.array([
    math.gcd(*list(arr[:, 0])),
    math.gcd(*list(arr[:, 1])),
    math.gcd(*list(arr[:, 2]))
], dtype=int)
arr //= div
# arr //= 2
# Shift coordinates so the minimum lands at the origin
arr -= np.min(arr, axis=0)

shape = np.max(arr, axis=0) + 1
img = np.zeros(shape, dtype=bool, order="F")
img[arr[:, 0], arr[:, 1], arr[:, 2]] = 1

microviewer.view(img)
```

@ZiYuanZiWan
Author

> However, [voxels_coordinates.txt] seems to be potentially corrupted?

Hello, [voxels_coordinates.txt] is not corrupted; its data is simply recorded differently from [coord_1308points_wall.txt]. The [voxels_coordinates.txt] file takes a more detailed approach: every eight coordinate points represent one 3D cube mesh (coordinates 0-7 are the 0th cube, coordinates 8-15 the first cube, and so on), directly recording the eight vertices of each cube. There are 2400 3D coordinates in total, and 2400 / 8 = 300, so there are 300 cube meshes. I was not sure which method is more suitable for constructing numpy images, so I provided both formats.
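If it helps, the eight-vertices-per-cube format can be reduced to one center point per cube by averaging each consecutive group of eight vertices. A small sketch; cube_centers is my own illustrative helper, and it assumes one "x y z" vertex per row:

```python
import numpy as np

# Sketch: collapse the 8-vertices-per-cube format into one center point
# per cube by averaging each consecutive group of 8 vertices.
def cube_centers(vertices):
    verts = np.asarray(vertices, dtype=float)
    assert verts.shape[0] % 8 == 0, "expected 8 vertices per cube"
    return verts.reshape(-1, 8, 3).mean(axis=1)

# Unit cube example: its 8 corners average to (0.5, 0.5, 0.5).
corners = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(cube_centers(corners))  # [[0.5 0.5 0.5]]
```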

@william-silversmith
Contributor

Oh, in that case, you can use Kimimaro directly with the numpy image generated by my code for coord_1308points_wall.txt.

@ZiYuanZiWan
Author

Thank you for providing the code. I have now successfully used kimimaro for skeleton extraction, but the results do not seem very good, which should be related to the two parameters (scale and const). I also have a small idea: I could use skeletons already extracted by other Python libraries as data sources, and use kimimaro for endpoint extraction and skeleton simplification. However, I have not found documentation for [.terminals() and .downsample(factor)]. Do you have a manual, or could you provide a code demonstration? In any case, thank you very much. After my work is completed, I will place kimimaro in the most prominent position of the Acknowledgements.

@william-silversmith
Contributor

You can try using much smaller scale and const. Maybe scale=1 and const=1 (assuming you provide anisotropy=(1,1,1))? The shape is very thin. Try playing around with it.

```python
skels = kimimaro.skeletonize(...)
skel = skels[1]  # label 1, since it's a binary image
terminals = skel.terminals()  # endpoint vertices of the skeleton
ds_skel = skel.downsample(2)  # downsample factor of 2
```

I hope that helps! Thanks for the acknowledgement (assuming you are successful)!
