
Difference in evolve_model between gadget2 and bridge #1041

Open
fredt00 opened this issue Mar 26, 2024 · 4 comments


fredt00 commented Mar 26, 2024

Hi,

I have a problem that I think is related to issue #679. I am trying to couple a live NFW halo in gadget2 with an orbiting test mass. However, I have noticed that no matter how long I pre-evolve the halo to let it reach equilibrium, as soon as I add it to bridge the halo rapidly expands. I have tried the suggestions made in #679, in particular ensuring that max_size_timestep in gadget2 is a few factors of two smaller than the bridge timestep (dt), and using a multiple of two times the bridge timestep in the unit converter, but the issue persists.

Attached is an example script that demonstrates the problem by evolving the same halo twice: first by calling evolve_model every bridge timestep, and then by evolving it through a bridge integrator. The resulting evolution is dramatically different. I ran the same test with BHTree, and in that case the evolution is the same whether I call evolve_model directly or via bridge. For gadget2, the evolution with and without bridge does match if I add it to the bridge with add_code instead of add_system.

gadget_testing.py.txt
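For reference, this is a stripped-down sketch of the kind of setup I mean (the Plummer halo and the numbers below are placeholders, not the values in the attached script):

# Minimal sketch of the setup (placeholder halo model and numbers;
# see gadget_testing.py.txt for the actual script).
from amuse.units import units, nbody_system
from amuse.ic.plummer import new_plummer_model   # stand-in for the live NFW halo
from amuse.community.gadget2.interface import Gadget2
from amuse.couple import bridge

dt_bridge = 1.0 | units.Myr

# unit converter built from a multiple of two times the bridge timestep, per #679
converter = nbody_system.nbody_to_si(1.0e10 | units.MSun, 2.0 * dt_bridge)

halo = new_plummer_model(10000, convert_nbody=converter)

gravity = Gadget2(converter)
# keep gadget2's maximum timestep a few factors of two below the bridge timestep
gravity.parameters.max_size_timestep = dt_bridge / 8.0
gravity.dm_particles.add_particles(halo)

# the halo expands when added with add_system; with add_code it matches
# the direct evolve_model run
system = bridge.Bridge(timestep=dt_bridge)
system.add_system(gravity, ())

t = 0.0 | units.Myr
t_end = 100.0 | units.Myr
while t < t_end:
    t += dt_bridge
    system.evolve_model(t)

gravity.stop()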

I'm also confused by the output file from gadget2: every (dt / max_size_timestep)-th step has a system step of 0, yet the time still increases:

Step= 93  t= 91  dt= 1 
Nf= 0000100000  total-Nf= 0011700000  ex-frac= 8.56557  iter= 1
work-load balance: 1.30849  max=0.104356 avg=0.079753 PE0=0.0842994
particle-load balance: 1.3368
max. nodes: 5556, filled: 0.694569
part/sec=83591.5 | 63883.9  ia/part=527.471 (0)

Step= 94  t= 92  dt= 0 
Nf= 0000100000  total-Nf= 0011800000  ex-frac= 14  iter= 1
work-load balance: 1.01866  max=9.64458 avg=9.46793 PE0=9.31209
particle-load balance: 1.0107
max. nodes: 4564, filled: 0.570557
part/sec=704.132 | 691.234  ia/part=100000 (0)

Step= 94  t= 92  dt= 0 
Nf= 0000100000  total-Nf= 0011900000  ex-frac= 8.62274  iter= 1
work-load balance: 1.39914  max=0.109665 avg=0.0783807 PE0=0.0941227
particle-load balance: 1.0107
max. nodes: 4564, filled: 0.570557
part/sec=85055 | 60790.9  ia/part=527.238 (0)

Step= 95  t= 93  dt= 1 
Nf= 0000100000  total-Nf= 0012000000  ex-frac= 8.56177  iter= 1
work-load balance: 1.30747  max=0.104649 avg=0.0800395 PE0=0.0844096
particle-load balance: 1.33185
max. nodes: 5513, filled: 0.689194
part/sec=83292.2 | 63704.8  ia/part=527.38 (0)

Am I implementing gadget2 incorrectly?

Thanks!
Fred


fredt00 commented Apr 23, 2024

I've narrowed down the problem to line 464 in bridge:
channel.copy_attributes(["vx","vy","vz"])
More generally, copying data to gadget2 between evolve_model calls seems to cause rounding errors in its timestep. The script test.py.txt demonstrates this; the output file for the case where data is copied to gadget2 shows the lines:

Begin Step 1, Time: 1, Systemstep: 0
Begin Step 2, Time: 2, Systemstep: 0
Begin Step 3, Time: 2.99994, Systemstep: 0
Begin Step 4, Time: 3.99994, Systemstep: 1
Begin Step 5, Time: 4, Systemstep: 0
Begin Step 6, Time: 5, Systemstep: 0

The two simulations diverge after step 5. Adding print(gravity.get_time_step().in_(units.Myr)) to the script shows a timestep of 0.999938964844 Myr at 4 Myr followed by 6.103515625e-05 Myr at 5 Myr, yet model_time still increases by 1 Myr. I've tried different multiples of dt in the unit converter and in parameters.max_size_timestep, but the evolution never matches the case where no data is copied from the interface to gadget2.
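For clarity, here is a condensed illustration of the pattern that triggers it (names and values are illustrative; test.py.txt is the actual reproducer): it pulls the current velocities out of gadget2 and pushes them straight back, unchanged, before every evolve_model call.

# Condensed illustration of the copy-then-evolve pattern that triggers the
# timestep rounding (illustrative; test.py.txt is the actual reproducer).
from amuse.units import units, nbody_system
from amuse.ic.plummer import new_plummer_model
from amuse.community.gadget2.interface import Gadget2

dt = 1.0 | units.Myr
converter = nbody_system.nbody_to_si(1.0e10 | units.MSun, 2.0 * dt)

halo = new_plummer_model(10000, convert_nbody=converter)

gravity = Gadget2(converter)
gravity.parameters.max_size_timestep = dt / 8.0
gravity.dm_particles.add_particles(halo)

from_code = gravity.dm_particles.new_channel_to(halo)
to_code = halo.new_channel_to(gravity.dm_particles)

t = 0.0 | units.Myr
t_end = 6.0 | units.Myr
while t < t_end:
    t += dt
    # pull the current velocities out of the code, then push them back
    # unchanged -- the same copy that bridge does at line 464
    from_code.copy_attributes(["vx", "vy", "vz"])
    to_code.copy_attributes(["vx", "vy", "vz"])
    gravity.evolve_model(t)
    print(gravity.model_time.in_(units.Myr),
          gravity.get_time_step().in_(units.Myr))

gravity.stop()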


stale bot commented Jun 23, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 28 days if no further activity occurs. Thank you for your contributions.

stale bot added the "stale" label Jun 23, 2024
stale bot closed this as completed Jul 21, 2024
LourensVeen (Collaborator) commented

It seems to me that this isn't fixed? In that case it should remain open until we can figure out what the problem is.

LourensVeen reopened this Jul 22, 2024
stale bot removed the "stale" label Jul 22, 2024

stale bot commented Sep 20, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 28 days if no further activity occurs. Thank you for your contributions.

stale bot added the "stale" label Sep 20, 2024
stale bot closed this as completed Oct 19, 2024
rieder added the "keep-open" label and removed the "stale" label Oct 19, 2024
rieder reopened this Oct 19, 2024