
Alan cleanup #234

Merged
merged 63 commits into from
Dec 11, 2024
Conversation

jvshields
Contributor

@jvshields jvshields commented Dec 4, 2024

This should massively improve the speed of large simulations. Rather than calculating the difference between each line's frequency and every frequency grid point, this does a binary search on the wavelength grid to find the frequency array index closest to the line, and then calculates how many indices out we should consider the line.
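A minimal sketch of the binary-search placement described above (the names `frequency_grid`, `line_nu`, and the helper itself are illustrative assumptions, not the PR's actual API):

```python
import numpy as np

def closest_frequency_index(frequency_grid, line_nu):
    """Return the index of the grid point closest to line_nu.

    Assumes frequency_grid is sorted in ascending order.
    """
    # np.searchsorted is a binary search: O(log N) per line instead of
    # computing the difference against every grid point.
    idx = np.searchsorted(frequency_grid, line_nu)
    # Clamp to the valid range, then pick the nearer of the two neighbors.
    if idx == 0:
        return 0
    if idx == len(frequency_grid):
        return len(frequency_grid) - 1
    if line_nu - frequency_grid[idx - 1] <= frequency_grid[idx] - line_nu:
        return idx - 1
    return idx

grid = np.array([1.0, 2.0, 4.0, 8.0])
print(closest_frequency_index(grid, 3.5))  # 3.5 is nearer to 4.0 -> index 2
```

For 3e5 lines against a 6e5-point grid, this replaces ~2e11 pairwise difference evaluations with ~3e5 logarithmic searches.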

Currently I'm just using (gamma + doppler width) * 20 * alpha_of_line / frequency_bin_width, with a minimum of 10 bins to each side of the line. This heuristic should probably be thought out more, but it seems to give a good first answer for how far out to consider each line.
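A sketch of how that per-line window heuristic might look, assuming the quoted expression is taken literally (all names and the helper are hypothetical, not the PR's code):

```python
def line_window_bins(gamma, doppler_width, alpha_of_line,
                     frequency_bin_width, min_bins=10):
    """Number of frequency bins to search on each side of a line.

    Uses the heuristic quoted in the PR description:
    (gamma + doppler_width) * 20 * alpha_of_line / frequency_bin_width,
    floored at min_bins so narrow lines still get a sane window.
    """
    bins = (gamma + doppler_width) * 20.0 * alpha_of_line / frequency_bin_width
    return max(int(bins), min_bins)

# A broad line gets a wide window; a narrow one falls back to the floor.
print(line_window_bins(1.0, 1.0, 1.0, 1.0))  # 40
print(line_window_bins(0.1, 0.1, 1.0, 1.0))  # 10 (floor applies)
```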

This also implements parallelization of this calculation.

As proof of implementation, simulations that used to be far too big to fit in memory on my laptop (6e5 wavelength points with 3e5 lines) now run in <5 minutes.
It was hard to get good time estimates on moria before this change, but simulations that could hang for between 30 minutes and an hour and a half when they had to use swap memory now finish in ~5 to 10 minutes and don't appear to have memory problems.

Contributor

@andrewfullard andrewfullard left a comment


I assume the removed broadening range was the old solution to the line search issue?

@jvshields
Contributor Author

jvshields commented Dec 11, 2024

I assume the removed broadening range was the old solution to the line search issue?

Correct. You used to specify a single search range for all lines, which wasn't great: broad lines need a wide range, which is wasteful for narrow ones. Now it's adaptive, though it does impose a minimum search region of +/- 10 spectral pixels.

Though this makes me realize that perhaps the better way to do this is to use the d_nu that already has to be calculated and maybe impose a minimum search region of 1 angstrom or so.

EDIT: I implemented this.
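A sketch of that follow-up idea under stated assumptions: derive the search half-width from the already-computed d_nu and impose a ~1 angstrom floor, converted to frequency at the line's position. The names, units (cgs), and the helper are all assumptions for illustration, not the merged implementation.

```python
C_CGS = 2.99792458e10       # speed of light, cm/s
MIN_DELTA_LAMBDA = 1e-8     # 1 angstrom, in cm

def search_half_width_nu(d_nu, line_nu):
    """Half-width of the per-line search region, in frequency space.

    Converts the 1 angstrom wavelength floor to a frequency width at
    line_nu via |d_nu| = c * d_lambda / lambda**2 = d_lambda * nu**2 / c,
    then takes the larger of that floor and the line's own d_nu.
    """
    min_d_nu = MIN_DELTA_LAMBDA * line_nu**2 / C_CGS
    return max(d_nu, min_d_nu)

# At ~5000 angstrom (nu ~ 6e14 Hz) the floor is ~1.2e11 Hz, so a line
# narrower than that gets the floor, and a broader one keeps its own width.
print(search_half_width_nu(1e12, 6e14))
```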


for line_index in numba.prange(len(line_nus)):
    line_nu = line_nus[line_index]
    thread_id = numba.get_thread_id()
Collaborator


Should double-check whether numba.get_thread_id() is safe/reliable to use as the index.
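The pattern being questioned is per-thread accumulation: each parallel worker writes only into its own row of a buffer, indexed by its thread id, and the rows are summed afterwards, so the id must be unique among concurrently running workers. A stdlib analogy of that pattern (explicit worker ids stand in for numba.get_thread_id(); the sizes and names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

n_workers, n_bins = 4, 8
per_thread = np.zeros((n_workers, n_bins))

def work(worker_id):
    # Each worker only touches its own row, so there is no race on the
    # shared buffer even though all workers run concurrently.
    for b in range(n_bins):
        per_thread[worker_id, b] += worker_id + 1

with ThreadPoolExecutor(max_workers=n_workers) as ex:
    list(ex.map(work, range(n_workers)))

# Reduce the per-worker rows into the final result.
total = per_thread.sum(axis=0)
print(total)  # each bin accumulates 1 + 2 + 3 + 4 = 10
```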

3 participants