Replies: 3 comments 5 replies
-
Hi @trparry, many thanks for trying the Ahmed body! This is on my list too for a study on grid resolution. You're right, using super fine cells everywhere isn't feasible. I've been thinking about dynamic adaptive grid refinement for a long time now, and I haven't found a simple solution to shortcut the potentially years of implementation. It is certainly not a near-term extension. If it were easy, someone else with far more resources than me would have already done it and cracked the CFD industry. SpaceX have a nice demo, but that's only 2D and there have been no updates in 7 years. I'll continue to think about it and experiment. I have a rough idea of the steps required to make it work, but I don't expect any significant progress on it in the next year, especially since I have only 0-2 days a week to work on FluidX3D next to my full-time job. Looking at alternatives, there are a couple of simpler options that I will check:
-
Hello @ProjectPhysX, after changing my boundary conditions as I mentioned in issue #60, I was able to get an accurate result on the Ahmed body using the exact settings found here. I got within 3.69% of the original experimental value from here. This was achieved with a 1024 x 512 x 512 = 268M grid on an NVIDIA RTX A6000 with a total compute time of about 200 minutes (this would have taken 17 hours on 16 CPU cores, including local mesh refinement, in OpenFOAM; see mesh 4 of the test case I linked). The 1024 dimension is the streamwise direction, where the edge of one voxel is about 4mm, so the length of the Ahmed body was only 256 voxels. The drag coefficient was time-averaged over 0.35 s of simulation time. The force on the body is quite noisy, but the time-averaged result is much smoother (I think commercial FVM codes time-average the result anyway to remove the noise caused by the massive vortex shedding over the body). I did not render it because that would have taken more time.
The remaining error might be due to boundary effects, an under-resolved body, the near-body cell size, or all of these combined. This makes sense because the SimScale case here has 5 meshes of increasing resolution; the fourth one is approximately equal to mine in terms of near-body cell size (4mm), and they get 3.72% error. Getting within 1.9% is still not very feasible for me without a better GPU, because I would need many more cells to reach about 1-2mm near-body resolution (for my domain that would require a 2048 x 1024 x 1024 = 2.14 billion cell mesh). I've seen you run cases above 10 billion cells, so if you would like to try this I would be very grateful! This also demonstrates that a domain of 4L x 2L x 2L (L = length of the Ahmed body, 1044mm) is capable of mitigating most of the boundary effects.
3.69% might be a lot of error for some FVM folks, but it depends on what you are doing: are you computing flows in a nuclear reactor, or just trying to get an idea of the performance of a UAV? For many applications that aren't life-or-death I think it's quite useful! For example, 16 CPU cores are pretty accessible to most people, and there are now gaming GPUs that can beat the A6000, so FluidX3D significantly increases the time efficiency of the average CFD user computing on their local machine (ignoring HPC). To my knowledge this is the first validation test for the subgrid turbulence model in FluidX3D.
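For anyone who wants to sanity-check the numbers above, here is a minimal C++ sketch of the arithmetic (domain 4L x 2L x 2L resolved by 1024 x 512 x 512) plus a generic running-mean helper of the kind I used to time-average the noisy per-step drag coefficient; the `RunningMean` struct is my own illustration and not part of the FluidX3D API:

```cpp
#include <cstdio>

// incremental (running) mean: time-averages a noisy per-step drag coefficient
// without storing the whole series (generic helper, not FluidX3D API)
struct RunningMean {
    double mean = 0.0;
    unsigned long long n = 0ull;
    void add(double x) { mean += (x - mean) / (double)(++n); }
};

int main() {
    // domain: 4L x 2L x 2L with L = 1.044 m (Ahmed body), resolved by 1024 x 512 x 512 voxels
    const double L = 1.044;              // Ahmed body length [m]
    const double dx = 4.0 * L / 1024.0;  // voxel edge length [m], ~4.08 mm
    printf("voxel size: %.2f mm, body length: ~%d voxels\n", 1e3 * dx, (int)(L / dx));

    RunningMean cd; // feed cd.add(Cd_instantaneous) once per time step over the averaging window
    return 0;
}
```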
-
I am no expert, but I am just wondering why we need the mesh refinement concept in LBM as well.
-
Hello! I'm looking at different CFD codes for my thesis and FluidX3D caught my eye. After experimenting with it quite a bit, I've unfortunately come to the conclusion that it won't work for me unless some form of local grid refinement is added. This holds true even when I use multiple GPUs.
I experimented with the Ahmed body, copying all of the conditions from this test case. They do a grid refinement study and are very transparent about it; you can open up the case and see all of the settings. SimScale uses OpenFOAM in the background, which probably isn't that fast compared to other commercial FVM codes, but it's an easily accessible baseline to start making comparisons to LBM.
Here is the original experimental paper on the Ahmed body from 1984.
To get the drag coefficient within 1.94% of the value in the original paper, they needed about 24M cells (which included local mesh refinement) and 515 core-hours (I think they used 64 CPU cores), which works out to about 8 hours of wall-clock compute time.
The tetrahedral cells near the Ahmed body had a max edge length of 2mm, and the far-field cells were a Cartesian grid with much larger cell dimensions. The domain is 6m x 5m x 13.5m. They could potentially reduce the size of the domain, but this was not investigated.
In FluidX3D, getting an equivalent 2mm cell size near the body (ignoring the fact that we only use Cartesian meshes, not tetrahedral ones) would require about 50 billion cells with no local grid refinement. If we use larger cells near the body the accuracy suffers; just look at Figure 4 of SimScale's report. Even if we can reduce the domain size to something smaller, ~50 billion cells is far too large even for 4x NVIDIA A100 SXM4 40GB. That gives about 160 GB of VRAM, and looking at the max single-domain grid resolution table on FluidX3D's GitHub page, 160 GB will probably only give me a 3-4 billion cell grid. I've tried using 2x A100s on the Ahmed body and the results seem to be consistent with my predictions. Unless someone has access to many GPUs or other HPC resources, 1.94% accuracy is infeasible without grid refinement (my institution doesn't have access to HPC resources besides what can be found in places like Amazon AWS, but 64 CPU cores are much cheaper than 10-20 GPUs).
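To make the cell-count and VRAM estimates reproducible, here is a small C++ sketch of the arithmetic. The ~55 bytes per cell figure is the FP16-compressed D3Q19 memory footprint listed in the FluidX3D README; treat it as an assumption if you run a different configuration:

```cpp
#include <cstdio>

int main() {
    // SimScale domain 6 m x 5 m x 13.5 m at a uniform 2 mm cell size (no local refinement)
    const double dx = 0.002;                                    // cell edge length [m]
    const double cells = (6.0 / dx) * (5.0 / dx) * (13.5 / dx); // 3000 * 2500 * 6750
    printf("uniform cells: %.1f billion\n", cells / 1e9);       // ~50.6 billion

    // memory estimate: ~55 bytes/cell for FP16-compressed D3Q19 (README figure, assumption)
    const double bytes_per_cell = 55.0;
    printf("VRAM needed: %.1f TB\n", cells * bytes_per_cell / 1e12);                    // ~2.8 TB
    printf("cells that fit in 160 GB: %.1f billion\n", 160e9 / bytes_per_cell / 1e9);   // ~2.9 billion
    return 0;
}
```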
I would love to use this software because I think it's awesome and it will probably bring LBM into wider use in the long run, but without local grid refinement the speed of the code is offset by the sheer number of cells a uniform mesh requires. FluidX3D is still probably useful at low Reynolds numbers, where we might be able to get away with a coarser grid near the body (e.g. your Stokes drag setup), but the Ahmed body test here used Re = 4.29×10^6, which requires a fine mesh near the body to resolve the turbulent boundary layer. Am I missing anything here, or can anyone recommend any validation strategies to alleviate this problem? I've also been looking at Wabbit, which uses adaptive grid refinement based on wavelets. If FluidX3D implemented something like this it could be a game changer. This is probably very difficult to implement on a GPU, but SpaceX did a talk where they mentioned similar grid refinement methods and their code ran on GPUs, so perhaps it's possible.
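As a quick cross-check of the quoted Reynolds number, it follows from the standard Ahmed-body test conditions; the sketch below assumes the 60 m/s inlet velocity of the 1984 experiment and air with ν ≈ 1.46×10⁻⁵ m²/s (both are assumptions, not stated in this thread):

```cpp
#include <cstdio>

int main() {
    const double U  = 60.0;     // freestream velocity [m/s] (assumed, from the original 1984 experiment)
    const double L  = 1.044;    // Ahmed body length [m]
    const double nu = 1.46e-5;  // kinematic viscosity of air [m^2/s] (assumed)
    printf("Re = %.3g\n", U * L / nu); // ~4.29e6, matching the value quoted above
    return 0;
}
```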