OO Design of the time_derivative method #4
Rouson's approach for solving the Burgers equation is simply to change the integrand concept: the integrand becomes a container of the whole spatial domain (FOODiE being blind to its eventual discretization). Considering a PDE problem, a concrete integrand type could be something like:

```fortran
type, extends(integrand) :: conservative_field
  integer :: Ni, Nj, Nk                      ! Number of discretization cells along each Cartesian direction.
  integer :: gc                              ! Number of ghost cells necessary for boundary condition imposition.
  real, allocatable, dimension(:, :, :) :: U ! Conservative state [1-gc:Ni+gc, 1-gc:Nj+gc, 1-gc:Nk+gc].
  contains
    procedure :: t => residuals
    ...
endtype conservative_field
```

This should work with the current abstract calculus pattern. I will test it on Monday.
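To make the whole-domain idea concrete, here is a minimal sketch of what the residuals function bound above could look like; the 1D Burgers-like flux, the unit grid spacing Dx and the assumption that the ghost cells of self%U are already filled are placeholders of mine, not FOODiE or OFF code:

```fortran
! Illustrative sketch only: the spatial operator acts on the whole block at
! once, so the stencil dependencies between neighbouring cells stay inside the
! concrete type and FOODiE never sees them.
pure function residuals(self, n, t) result(dState_dt)
  class(conservative_field), intent(IN) :: self      !< Whole-domain integrand field.
  integer(I_P), optional,    intent(IN) :: n         !< Time level.
  real(R_P),    optional,    intent(IN) :: t         !< Time.
  class(integrand), allocatable         :: dState_dt !< Residuals over the whole domain.
  real, parameter                       :: Dx = 1.0  !< Placeholder grid spacing.
  integer                               :: i, j, k   !< Cell counters.

  allocate(dState_dt, source=self) ! same concrete type and shape as self
  select type(dState_dt)
  type is(conservative_field)
    do k = 1, self%Nk
      do j = 1, self%Nj
        do i = 1, self%Ni
          ! placeholder stencil: central difference of the inviscid Burgers flux u**2/2,
          ! consuming the neighbours (and, at the block faces, the ghost cells)
          dState_dt%U(i, j, k) = -(self%U(i+1, j, k)**2 - self%U(i-1, j, k)**2) / (4.0 * Dx)
        enddo
      enddo
    enddo
  endselect
endfunction residuals
```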
I'm sorry that I haven't commented sooner and won't have the time to comment in detail now. I just finished the final leg of a lengthy international business trip and am leaving for a family vacation road trip within the next few hours.

For now, I'll simply add the following: I might now classify Abstract Calculus as an anti-pattern for performance reasons. I still really like it, and I believe similar concepts can be judiciously applied in ways that a compiler might be able to optimize. In fact, I probably won't move away from Abstract Calculus myself. I'll just warn users of any software I write that performance analysis is highly recommended for any code in which performance matters. In Fortran, performance is king, so it's likely to be of concern for most Fortran programmers (rightly or wrongly).

As always, I believe that any discussion of performance must include hard data: runtime profiles and scalability analyses for the application in question on the platform in question. Be wary of generalizations, including the ones I wrote above.
Thank you very much, I was very curious to read about your experience. My main concern is just the lack of performance. In my previous attempt to (ab)use operator overloading in OFF, I faced a broken speedup that, without the overloaded operators, was very good. However, as you said in your great book, premature optimization is dangerous. As soon as possible I will measure the performance; this will probably happen when the 1D Riemann solver test is ready. I hope you can give us more suggestions in the future, please do not leave us :-)
Remember, too, that the optimizations for object-oriented Fortran will …
Yes, Chris, you are right!
I have found a possible shortcoming... Consider the current time_derivative interface:

```fortran
pure function time_derivative(self, n, t) result(dState_dt)
!---------------------------------------------------------------------------------------------------------------------------------
!< Time derivative function of integrand class, i.e. the residuals function.
!---------------------------------------------------------------------------------------------------------------------------------
import :: integrand, R_P, I_P
class(integrand),       intent(IN) :: self      !< Integrand field.
integer(I_P), optional, intent(IN) :: n         !< Time level.
real(R_P),    optional, intent(IN) :: t         !< Time.
class(integrand), allocatable      :: dState_dt !< Result of the time derivative function of integrand field.
!---------------------------------------------------------------------------------------------------------------------------------
endfunction time_derivative
```
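For reference, this is how an ODE solver consumes that deferred binding without knowing the concrete type behind it; a rough sketch assuming the usual deferred `+`, `*` and `=` bindings of the abstract calculus pattern (the subroutine name is illustrative, not FOODiE's actual API):

```fortran
! Illustrative sketch: a forward Euler step written only against the abstract
! integrand; U%t() dispatches to the concrete residuals function, whatever
! spatial discretization (single point or whole block) lives behind it.
subroutine euler_explicit_step(U, Dt, t)
  class(integrand), intent(INOUT) :: U  !< Field to be advanced.
  real(R_P),        intent(IN)    :: Dt !< Time step.
  real(R_P),        intent(IN)    :: t  !< Current time.

  U = U + U%t(t=t) * Dt ! residuals, scaling and update are all deferred bindings
endsubroutine euler_explicit_step
```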
The OOP abstract calculus like Rouson's one, already implemented and tested with the Lorenz equations example, could be limiting or problematic...

As a matter of fact, in the Lorenz equations the time derivative (hereafter the residuals) function is very simple to compute. In the more complex scenario of a non-linear PDE system, the residuals are a very complex function (involving computations on large stencils). Generally, solving a PDE means applying a spatial operator that builds the residuals, which are then advanced in time by an ODE solver like FOODiE.

Consequently, the design of the residuals function is crucial. Moreover, for multi-step/multi-stage schemes like the RK ones, the residuals function is evaluated many times for each single time step. This also requires smart handling of boundary conditions.

In my previous experience, the residuals function is a procedure operating on a whole set of integrand fields (a block of finite volumes that is self-consistent for each time step, i.e. boundary conditions, space operator...). Now (in the present integrand definition), the residuals function seems to be designed to operate on a single-valued field. Obviously, one can assume that the Lorenz field example is just the time integration of a single space location and apply it to the whole domain. For the Lorenz example this is simple, but for a non-linear PDE the residuals at each space location depend on other space locations...

In such a case, one solution could be to reduce the integrand time derivative function to a simple lookup of an array where the residuals have been computed beforehand over the whole domain, taking the non-linearity into account. This could work for a simple scheme like the explicit Euler one, but I think it is not feasible for more complex schemes like the RK ones (see the sketch below).
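To make the multi-stage point concrete, here is a rough sketch (illustrative only, written against the same assumed deferred `+`, `*` and `=` bindings; the subroutine name is mine) of a two-stage explicit midpoint RK step: the second stage needs the residuals, and fresh boundary conditions, of the intermediate state U1, which did not exist before the step started, so a table of residuals precomputed on the old solution cannot serve it.

```fortran
! Illustrative sketch: explicit midpoint (2-stage RK). The second %t() call
! acts on the stage state U1, created inside the step, so its residuals (and
! its ghost cells) cannot have been tabulated before the step began.
subroutine rk2_midpoint_step(U, Dt, t)
  class(integrand), intent(INOUT) :: U  !< Field to be advanced.
  real(R_P),        intent(IN)    :: Dt !< Time step.
  real(R_P),        intent(IN)    :: t  !< Current time.
  class(integrand), allocatable   :: U1 !< Intermediate stage state.

  allocate(U1, source=U)                 ! stage workspace, same concrete type as U
  U1 = U + U%t(t=t) * (0.5_R_P * Dt)     ! stage 1: half step using the residuals of U
  U  = U + U1%t(t=t + 0.5_R_P * Dt) * Dt ! stage 2: uses the residuals of the NEW state U1
endsubroutine rk2_midpoint_step
```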
Any suggestions?
I will try to study Rouson's Burgers solution better, it being a more complex example than the Lorenz one.