This repository contains code for reconstructing missing neural network input data via backpropagation. The method is inspired by Roche et al. (2023).
The idea is that the trained weights of a neural network can be used to reconstruct missing input data: the missing values are treated as optimizable parameters and updated via backpropagation, while the network's weights remain fixed, as sketched below.
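A minimal sketch of this loop, assuming a trained autoencoder `model`, an observed sample `x_observed`, and a boolean `missing_mask` marking the missing entries; the self-reconstruction loss, the optimizer, and the hyperparameters are illustrative choices, not necessarily the repository's exact settings:

```python
import torch
import torch.nn.functional as F

def reconstruct(model, x_observed, missing_mask, steps=200, lr=0.1, init=None):
    """Optimize the missing entries of `x_observed` by backpropagation."""
    for p in model.parameters():
        p.requires_grad_(False)  # the trained weights stay fixed

    # The candidate values for the missing entries are the only parameters.
    z = init.clone() if init is not None else torch.randn_like(x_observed)
    z.requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        # Splice the candidates into the observed sample at the masked positions.
        x = torch.where(missing_mask, z, x_observed)
        loss = F.mse_loss(model(x), x)  # autoencoder reconstruction error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    x = torch.where(missing_mask, z, x_observed).detach()
    return x, z.detach(), loss.item()
```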
We extend Roche et al.'s approach by instantiating multiple neural network instances, which reduces the risk of getting stuck in a local minimum during optimization. The most promising candidate is then selected for the final optimization.
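One plausible reading of this multi-instance strategy, reusing the `reconstruct` sketch above: several short warm-up runs start from different random initializations, and only the lowest-loss candidate receives the full optimization budget. The candidate count and step budgets are illustrative assumptions.

```python
def reconstruct_multi(model, x_observed, missing_mask,
                      n_candidates=8, warmup_steps=50, final_steps=500):
    # Each instance starts from its own random initialization (see
    # `reconstruct` above), so the short warm-up runs explore different
    # basins of the loss landscape.
    runs = [reconstruct(model, x_observed, missing_mask, steps=warmup_steps)
            for _ in range(n_candidates)]

    # Select the most promising candidate: the warm-up run with the lowest loss.
    _, best_z, _ = min(runs, key=lambda r: r[2])

    # Only the selected candidate receives the full optimization budget.
    x_hat, _, final_loss = reconstruct(model, x_observed, missing_mask,
                                       steps=final_steps, init=best_z)
    return x_hat, final_loss
```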
We evaluate the method on the MNIST dataset; a full empirical evaluation can be found in the publication above. For our evaluation, we trained a simple autoencoder on the complete MNIST dataset and then masked individual samples with a zero tensor to simulate missing values.
Figure: Sample & masked sample
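The masking step could look like the following sketch; the random stand-in image and the patch location are illustrative, with a real MNIST sample taking the place of `x`:

```python
import torch

# A stand-in for a single MNIST image; in the repository this would be a
# real sample of shape (1, 28, 28). The patch location is illustrative.
x = torch.rand(1, 28, 28)

missing_mask = torch.zeros_like(x, dtype=torch.bool)
missing_mask[:, 10:18, 10:18] = True      # mark an 8x8 patch as missing

x_masked = x.clone()
x_masked[missing_mask] = 0.0              # mask the sample with a zero tensor
```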
We included only the missing values as optimizable parameters in the optimizer and optimized them over multiple iterations.
Figure: Reconstruction of the masked sample over different iterations of optimization
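A sketch of how such per-iteration snapshots could be produced and plotted, reusing `model`, `x_masked`, and `missing_mask` from the sketches above (with the network weights already frozen, as in `reconstruct`); the snapshot iterations and figure layout are illustrative assumptions:

```python
import matplotlib.pyplot as plt
import torch
import torch.nn.functional as F

# Record intermediate reconstructions during a single optimization run.
z = torch.randn_like(x_masked)
z.requires_grad_(True)
optimizer = torch.optim.Adam([z], lr=0.1)  # only the missing values are optimized
snap_at, snapshots = {0, 10, 50, 200}, {}

for step in range(201):
    x = torch.where(missing_mask, z, x_masked)
    if step in snap_at:
        snapshots[step] = x.detach().clone()
    loss = F.mse_loss(model(x), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# One panel per recorded iteration, as in the figure above.
fig, axes = plt.subplots(1, len(snapshots))
for ax, (step, img) in zip(axes, snapshots.items()):
    ax.imshow(img.squeeze(), cmap="gray")
    ax.set_title(f"step {step}")
    ax.axis("off")
plt.show()
```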
python >= 3.8
torch >= 2
numpy >= 1.23
matplotlib >= 3.8
Licensed under the MIT License.