Particle memory requirements #13849
Is the 100^3 mesh a factor, or is the 7.44 kB seen even for, say, a 10^3 mesh?
Good question, I can check that.
This thread sheds some light. It seems that the allocatable components of derived types each carry their own per-allocatable metadata.
Okay, this seems like the cost of doing business, but it is very noticeable for the type of particle I have, where the arrays are all small. I'll leave this open for now in case inspiration strikes (because it can be pretty taxing for big vegetation cases). FYI @drjfloyd, yup, the ~7.5 kB/particle was the same on a 10^3 case with the same number of particles.
I took the system memory function in FDS and coded this snippet, in which a 1D derived type array ALL_ARRAY of real allocatable arrays is defined, along with an index derived type I_ARRAY plus a single storage allocatable array of reals, B:
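The snippet itself is not reproduced in the thread; a minimal sketch of the first allocation pattern described above (ALL_ARRAY, A, N, and M are names from the comment; the program structure is assumed) could look like:

```fortran
! Sketch of the 1st allocation: one allocatable A(:) per derived type entry.
PROGRAM MEM_TEST
IMPLICIT NONE
TYPE REAL_ARRAY_TYPE
   REAL, ALLOCATABLE :: A(:)               ! per-entry allocatable component
END TYPE REAL_ARRAY_TYPE
INTEGER, PARAMETER :: N = 1000000          ! number of derived type entries
INTEGER, PARAMETER :: M = 10               ! reals per entry
TYPE(REAL_ARRAY_TYPE), ALLOCATABLE :: ALL_ARRAY(:)
INTEGER :: J
ALLOCATE(ALL_ARRAY(N))                     ! N entries, each carrying its own array descriptor
DO J=1,N
   ALLOCATE(ALL_ARRAY(J)%A(M))             ! N small allocations of M reals each
ENDDO
END PROGRAM MEM_TEST
```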
To get the same amount of data, and access to it, we can allocate:
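Continuing the sketch above (I_ARRAY and B are names from the comment; the I1/I2 index fields are assumptions for illustration), the flat-storage equivalent might be:

```fortran
! Sketch of the 2nd allocation: an index type plus one flat storage array B.
! These declarations belong in the specification section of the program above.
TYPE INDEX_TYPE
   INTEGER :: I1, I2                       ! bounds of entry J's reals within B
END TYPE INDEX_TYPE
TYPE(INDEX_TYPE), ALLOCATABLE :: I_ARRAY(:)
REAL, ALLOCATABLE :: B(:)

! Executable part: one large allocation instead of N small ones.
ALLOCATE(I_ARRAY(N))
ALLOCATE(B(N*M))
DO J=1,N
   I_ARRAY(J)%I1 = (J-1)*M + 1
   I_ARRAY(J)%I2 = J*M                     ! entry J's data is B(I1:I2)
ENDDO
```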
Note that N=1000000 is the size of the derived type arrays and M is the number of reals per entry. So M defines how granular ALL_ARRAY is, i.e., how many reals are in a given ALL_ARRAY(J)%A(:) allocatable. I compiled the code with gfortran -O0 main.f90 and get the following cost for allocation of ALL_ARRAY (1st allocation) and I_ARRAY + B (2nd allocation) as a function of M:
- M = 1 (as granular as it gets):
- M = 10:
- M = 100:
I see a fixed cost of ~70+ MB after allocation of the internal allocatables, in line with what Kevin is stating (64 bytes of metadata per ALL_ARRAY(J)%A(:) allocatable, which at N=1000000 is ~64 MB of descriptor overhead alone). If I just do ALLOCATE(ALL_ARRAY(N)) I get a memory cost of 63.2 MB, i.e., 63.2 bytes per entry. This cost is incurred for each allocatable we use in the type.
Randy had an interesting idea on this. I will look at the feasibility of making a derived data type for thermally thin particles that would have many of the allocatable arrays hard-wired. This is kind of how droplets are handled, but we'd still need to be able to go through the solid phase thermally-thick routine to get the pyrolysis. I'll see.
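For illustration only, the kind of hard-wired layout being floated might look like this (the type name, field names, and the fixed bound NWP_MAX are all assumptions, not the actual FDS data structures):

```fortran
! Illustrative only: fixed-size component arrays avoid per-allocatable
! descriptors, at the cost of sizing every particle for the worst case.
INTEGER, PARAMETER :: NWP_MAX = 20         ! assumed maximum number of wall points
TYPE THERMALLY_THIN_PARTICLE_TYPE
   INTEGER :: N_WP                         ! wall points actually used (<= NWP_MAX)
   REAL :: TMP(NWP_MAX)                    ! temperatures; fixed size, no descriptor
   REAL :: RHO(NWP_MAX)                    ! densities; fixed size, no descriptor
END TYPE THERMALLY_THIN_PARTICLE_TYPE
```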
This case tests memory requirements with a large mesh (100^3). With just the mesh, I see about 1 GB of RAM usage. I then add a large number of particles (200,000) and see the RAM requirement jump to 2.6 GB. The thing that I can't reconcile is that the .out file shows that each particle should require 924 bytes, or 143 MB for all particles, a very small fraction of the 1.6 GB increase I see.
Printing out with calls to SYSTEM_MEM_USAGE after each particle is added in part.f90 shows closer to 7.44 KB added per particle. So the bulk of the RAM increase is definitely coming from the ALLOCATE_STORAGE subroutine, but I have yet to track down which component of that is adding more memory than the .out file suggests is needed.
part_storage_1LPC.txt
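For reference, a minimal SYSTEM_MEM_USAGE-style routine, called before and after each particle is added, might look like the sketch below. This assumes Linux and reads VmRSS from /proc/self/status; the actual FDS routine may differ:

```fortran
! Sketch of a SYSTEM_MEM_USAGE-style routine: returns resident set size in KB,
! or -1 if /proc/self/status is unavailable (Linux-only assumption).
SUBROUTINE SYSTEM_MEM_USAGE(VALUE_KB)
IMPLICIT NONE
INTEGER, INTENT(OUT) :: VALUE_KB
CHARACTER(200) :: LINE
INTEGER :: LU, IOS
VALUE_KB = -1
OPEN(NEWUNIT=LU,FILE='/proc/self/status',ACTION='READ',IOSTAT=IOS)
IF (IOS/=0) RETURN
DO
   READ(LU,'(A)',IOSTAT=IOS) LINE
   IF (IOS/=0) EXIT
   IF (LINE(1:6)=='VmRSS:') THEN
      READ(LINE(7:),*) VALUE_KB            ! /proc reports this value in kB
      EXIT
   ENDIF
ENDDO
CLOSE(LU)
END SUBROUTINE SYSTEM_MEM_USAGE
```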