Albany as a Library
As the use of Albany expands, there are cases in which the target problem does not fit well into the existing range of generality, and users must either add capability to the Piro-Thyra-NOX-LOCA stack or write their own alternatives. We refer to that stack and the Albany classes directly associated with it as the "Driver" portion of Albany.
Two examples where the Driver stack was augmented to handle more cases are:
- Adaptivity: the LOCA/NOX adaptive stepping system was added
- Dynamics: support for `x_dot_dot` was added through the Driver stack
Two examples where new `Thyra::ModelEvaluator`s were created to support coupled problems are:
- QCAD: coupling a Poisson equation with Schrodinger's equation.
- SchwarzMultiscale: coupling multiple scales
In those cases, `Albany::SolverFactory::createAndGetAlbanyAppT` is extended to construct a different system of `Thyra::ModelEvaluator`s.
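
As a rough illustration of the composition pattern such coupled solvers rely on, here is a plain-C++ sketch; it deliberately does not use the real `Thyra::ModelEvaluator` interface, and `SubModel` and `CoupledModel` are hypothetical names introduced only for this example:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Each physics block exposes a residual evaluation; in Albany these would be
// full Thyra::ModelEvaluator implementations rather than this toy interface.
struct SubModel {
  virtual ~SubModel() = default;
  virtual void evalResidual(const std::vector<double>& x,
                            std::vector<double>& f) const = 0;
};

// A coupled model owns several sub-models and presents them to the solver as
// one residual over the block solution vector (off-diagonal coupling terms
// are omitted in this sketch).
struct CoupledModel {
  std::vector<std::unique_ptr<SubModel>> blocks;

  void evalResidual(const std::vector<std::vector<double>>& x,
                    std::vector<std::vector<double>>& f) const {
    for (std::size_t i = 0; i < blocks.size(); ++i)
      blocks[i]->evalResidual(x[i], f[i]);
  }
};
```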
In pursuit of a one-way coupled Thermal-Plasticity problem with adaptivity, we are considering creating a separate executable that omits the Piro driver layer but continues to reuse the code in which the simulation logic mainly resides, namely the various `PHX::Evaluator` classes as well as the `Albany::Discretization` logic. We refer to this set of reused components as the "Library" portion of Albany.
This page documents the plan and progress toward this approach.
This GraphViz plot illustrates our current plan regarding which major Albany classes fall into the Driver versus Library categories, as well as some key relationships that couple these classes:
In this particular case, the idea is to loop over a number of time steps, where each step solves one time step of a Thermal problem, feeds the resulting temperature as an input to one time step of a Plasticity problem (Elasticity initially), and then adapts the mesh at the end of the time step. The simulation is transient, and Backward Euler should be a sufficient time integrator. We likely won't need LOCA functionality; control of the time delta is the extent of our needs there. The following document contains an explanation of the algorithm used:
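
As a rough sketch of this per-step control flow (hedged illustration only: `solveThermalStep`, `solveMechanicsStep`, and `adaptMesh` are hypothetical placeholders, not existing Albany or Library calls), the loop might look like:

```cpp
#include <vector>

struct Field { std::vector<double> values; };  // placeholder for a nodal field

// Hypothetical placeholders standing in for Library-level solves and mesh
// adaptation; real implementations would drive the reused PHX::Evaluator and
// Albany::Discretization machinery.
Field solveThermalStep(double t, double dt, const Field& T_prev) { return T_prev; }
Field solveMechanicsStep(double t, double dt, const Field& T, const Field& u_prev) { return u_prev; }
void adaptMesh(Field& T, Field& u) {}

int main() {
  const int num_steps = 10;
  const double dt = 0.1;  // Backward Euler advances by this time delta
  double t = 0.0;

  Field temperature;
  Field displacement;

  for (int step = 0; step < num_steps; ++step) {
    t += dt;

    // One Backward Euler step of the Thermal problem.
    temperature = solveThermalStep(t, dt, temperature);

    // One step of the Plasticity (initially Elasticity) problem, taking the
    // just-computed temperature as an input field (one-way coupling).
    displacement = solveMechanicsStep(t, dt, temperature, displacement);

    // Adapt the mesh at the end of the step; fields are transferred in place.
    adaptMesh(temperature, displacement);
  }
  return 0;
}
```

In Albany proper, each of these calls would presumably assemble its residual and Jacobian through the reused `PHX::Evaluator` field managers rather than through free functions as shown here.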
The current plan includes reusing a single Discretization object at each time step: resize its knowledge of `neq`, etc., based on the next problem to solve, solve that problem, and then resize again for the next problem. This is affordable if we assume adaptation occurs just prior, so the data structures need rebuilding anyway.
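
A minimal sketch of this resize-and-reuse cycle, assuming one equation per node for the Thermal problem and three for Mechanics in 3D; `resizeToProblem` is a hypothetical method named only for illustration, not an existing `Albany::Discretization` member:

```cpp
#include <cstddef>

// Toy stand-in for Albany::Discretization.
struct Discretization {
  std::size_t neq = 0;  // number of PDE equations per node

  void resizeToProblem(std::size_t new_neq) {
    // Rebuild DOF maps, graphs, and workset structures for new_neq equations.
    // This cost is acceptable because mesh adaptation at the end of the
    // previous step has invalidated those structures anyway.
    neq = new_neq;
  }
};

int main() {
  Discretization disc;

  for (int step = 0; step < 3; ++step) {
    disc.resizeToProblem(1);  // Thermal: one temperature equation per node
    // ... assemble and solve the thermal time step ...

    disc.resizeToProblem(3);  // Mechanics: three displacement equations per node
    // ... assemble and solve the mechanics time step ...

    // ... adapt the mesh before the next pass ...
  }
  return 0;
}
```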