Description
Hi there,
I'm currently working on a topology optimisation problem involving convection-diffusion heat transfer. In this setup, I repeatedly solve the primal PDEs and compute sensitivities of certain physical quantities (e.g. a thermal objective) with respect to a design variable field `gamma`, using the adjoint approach provided by `firedrake.adjoint`.
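For context, the sensitivity computation follows the usual `firedrake.adjoint` taping pattern. Here is a minimal sketch of what I mean, with a simple `gamma`-dependent diffusion problem standing in for my actual convection-diffusion system and an arbitrary functional standing in for the thermal objective:

```python
from firedrake import *
from firedrake.adjoint import *

continue_annotation()

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)

gamma = Function(V, name="gamma")  # design variable (control)
gamma.assign(0.5)

# Placeholder state equation: gamma-dependent diffusion instead of the real
# convection-diffusion system.
T = Function(V, name="T")
v = TestFunction(V)
kappa = Constant(1.0) + gamma**2
F = kappa * inner(grad(T), grad(v)) * dx - Constant(1.0) * v * dx
solve(F == 0, T, bcs=DirichletBC(V, 0.0, "on_boundary"))

# Placeholder thermal objective and its sensitivity with respect to gamma.
J = assemble(T * T * dx)
rf = ReducedFunctional(J, Control(gamma))
dJdgamma = rf.derivative()

# Consistency check of the tape: convergence rates should approach 2.
h = Function(V)
h.assign(0.01)
taylor_test(rf, gamma, h)

pause_annotation()
```

In the real problem the state equation is the coupled flow/temperature system described below, with `gamma` as the control.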
The challenge is that the fluid region only occupies part of the design domain, so I'm using `SubMesh(mesh, region_f)` to restrict the flow solve to the fluid region. The velocity field `u_local` is solved on this submesh. Then I interpolate the velocity back to the global mesh to compute the temperature field T over the full domain:

`u_global.interpolate(u_local, subset=region_f)`

This works for the forward solve, and the optimisation proceeds. However, I often encounter seemingly random adjoint failures at some optimisation step: not at the same iteration, and not consistently reproducible. The failures happen more frequently when running with more MPI processes. The error appears during the adjoint solve, not the forward solve.
Before switching to `interpolate(...)`, I used `project(...)` to transfer the velocity field to the global mesh, and it worked stably under serial execution. But since `project()` is not supported across meshes in parallel, I moved to `interpolate(...)`.
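For reference, here is a reduced, stand-alone version of the kind of transfer I mean, with two independent meshes playing the roles of the fluid submesh and the global mesh. The `allow_missing_dofs` / `default_missing_val` keywords reflect my understanding of the cross-mesh interpolation API and are not necessarily equivalent to the `subset=` route I use above:

```python
from firedrake import *

# Two independent meshes stand in for the fluid submesh and the global mesh:
# the "fluid" mesh covers only part of the design domain.
mesh_f = RectangleMesh(8, 16, 0.5, 1.0)   # [0, 0.5] x [0, 1]
mesh_g = UnitSquareMesh(16, 16)           # full domain [0, 1]^2

V_f = VectorFunctionSpace(mesh_f, "CG", 2)
V_g = VectorFunctionSpace(mesh_g, "CG", 2)

x, y = SpatialCoordinate(mesh_f)
u_local = Function(V_f, name="u_local")
u_local.interpolate(as_vector([y * (1.0 - y), 0.0]))  # dummy velocity field

# Cross-mesh interpolation: dofs of V_g that lie outside mesh_f are filled
# with default_missing_val rather than raising an error.
u_global = Function(V_g, name="u_global")
u_global.interpolate(u_local, allow_missing_dofs=True, default_missing_val=0.0)
```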
So my questions are:

1. Is this use of `interpolate(..., subset=...)` safe and supported during adjoint computations?
2. Is there a more robust or recommended approach to transferring data from a SubMesh to the global Mesh that works with `firedrake.adjoint` and in parallel?
I'd greatly appreciate any suggestions or workarounds. Thank you very much!