The adjoint solver is nothing but the transpose of the forward solver. Suppose we have an equation

$$A(p)\,u = b,$$

where $A(p)$ is a differential operator depending on a parameter $p$ (for example, the discretization of a PDE whose coefficients depend on $p$). Then it is easy to invert it and solve for our solution $u = A^{-1}b$.
However, what if we want something else, like a functional of interest $J(u)$? Usually this is a functional we want to minimize, and common optimization methods need the gradient

$$\frac{dJ}{dp} = \frac{\partial J}{\partial u}\,\frac{\partial u}{\partial p},$$

where $\frac{\partial u}{\partial p}$ can be deduced from the explicit solution (if we have one); otherwise, differentiating $A(p)\,u = b$ with respect to $p$ gives another equation:

$$\frac{\partial A}{\partial p}\,u + A\,\frac{\partial u}{\partial p} = 0,$$

which means

$$\frac{\partial u}{\partial p} = -A^{-1}\,\frac{\partial A}{\partial p}\,u,
\qquad\text{so}\qquad
\frac{dJ}{dp} = -\frac{\partial J}{\partial u}\,A^{-1}\,\frac{\partial A}{\partial p}\,u.$$

Transpose it: define the adjoint variable $\lambda$ by

$$A^{\top}\lambda = \left(\frac{\partial J}{\partial u}\right)^{\!\top},
\qquad\text{so that}\qquad
\frac{dJ}{dp} = -\lambda^{\top}\,\frac{\partial A}{\partial p}\,u.$$

We do this because $\frac{\partial J}{\partial u}$ is a single vector, so $\lambda$ can be easily solved through the adjoint solver $A^{\top}$.
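To see the saving explicitly (a sketch in the notation above, with $m$ the number of parameters): computing $\frac{\partial u}{\partial p}$ directly would take one linear solve per parameter, whereas the adjoint route takes a single transposed solve,

$$\underbrace{A\,\frac{\partial u}{\partial p_i} = -\frac{\partial A}{\partial p_i}\,u}_{m\ \text{solves, one per parameter}}
\qquad\text{vs.}\qquad
\underbrace{A^{\top}\lambda = \left(\frac{\partial J}{\partial u}\right)^{\!\top}}_{\text{one solve, independent of }m}$$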
Another problem is the derivative of $A$, i.e. $\frac{\partial A}{\partial p}$. If $A$ is simply an $n \times n$ matrix and there are $m$ dofs of $p$, then $\frac{\partial A}{\partial p}$ will be a vector of $m$ matrices of size $n \times n$ (a third-order tensor). However, it is not a problem numerically, because we never form this tensor explicitly; we only need the contractions $\frac{\partial A}{\partial p_i}\,u$. The procedure is:
- solve the forward problem $A\,u = b$ and get $u$;
- solve the adjoint problem $A^{\top}\lambda = \left(\frac{\partial J}{\partial u}\right)^{\!\top}$ and get $\lambda$;
- calculate the matrix multiplications: roughly, $\frac{dJ}{dp_i} = -\lambda^{\top}\,\frac{\partial A}{\partial p_i}\,u$ for each parameter $p_i$.
This is much faster if $\frac{\partial A}{\partial p}$ is sparse.
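As a concrete illustration, here is a minimal sketch in Python with NumPy/SciPy, assuming an affine parameterization $A(p) = A_0 + \sum_i p_i A_i$ (so $\frac{\partial A}{\partial p_i} = A_i$) and the hypothetical functional $J(u) = \frac{1}{2}\lVert u - u_{\mathrm{target}}\rVert^2$; the specific operator, sizes, and densities are illustrative, not from the text above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 50, 3                   # n dofs of u, m dofs of p (illustrative sizes)
rng = np.random.default_rng(0)

# Sparse operator A(p) = A0 + sum_i p_i * Ai, so dA/dp_i = Ai (an assumption);
# dA/dp is stored as a list of m sparse matrices, never as a dense tensor.
A0 = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
dA = [0.1 * sp.random(n, n, density=0.05, random_state=rng, format="csr")
      for _ in range(m)]

p = rng.normal(size=m)
b = rng.normal(size=n)
u_target = rng.normal(size=n)  # hypothetical target defining J

def assemble(p):
    A = A0
    for p_i, A_i in zip(p, dA):
        A = A + p_i * A_i
    return A.tocsc()           # csc suits the sparse direct solve below

# 1. forward solve: A u = b
A = assemble(p)
u = spla.spsolve(A, b)

# 2. adjoint solve: A^T lam = (dJ/du)^T, where dJ/du = (u - u_target)^T
lam = spla.spsolve(A.T.tocsc(), u - u_target)

# 3. contract: dJ/dp_i = -lam^T (dA/dp_i) u -- only sparse mat-vec products
grad = np.array([-lam @ (A_i @ u) for A_i in dA])

# finite-difference check of the adjoint gradient
def J(p):
    return 0.5 * np.sum((spla.spsolve(assemble(p), b) - u_target) ** 2)

eps = 1e-6
fd = np.array([(J(p + eps * e) - J(p - eps * e)) / (2 * eps)
               for e in np.eye(m)])
print(np.allclose(grad, fd, atol=1e-5))  # expected: True
```

Note that step 3 costs only $m$ sparse matrix-vector products, so the single adjoint solve dominates the gradient computation.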