This release adds:
- backward support for dot product, softmax, linear layer, MLP, and MSE loss
- a toy dataset: the moons dataset
- the SGD optimization algorithm
- a nice example that trains a full MLP on the moons dataset
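The pieces above fit together roughly as follows. This is a minimal NumPy sketch that assumes nothing about this library's actual API: the dataset generator, the hand-derived backward pass, and all names here are illustrative, not the project's own. It trains a 2-16-1 MLP with full-batch SGD and MSE loss on a synthetic moons dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "two moons" dataset: two interleaving half-circles with noise.
def make_moons(n=200, noise=0.1):
    t = rng.uniform(0.0, np.pi, n // 2)
    upper = np.stack([np.cos(t), np.sin(t)], axis=1)
    lower = np.stack([1.0 - np.cos(t), 0.5 - np.sin(t)], axis=1)
    X = np.concatenate([upper, lower]) + rng.normal(0.0, noise, (n, 2))
    y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)]).reshape(-1, 1)
    return X, y

X, y = make_moons()

# One-hidden-layer MLP (2 -> 16 -> 1): tanh hidden, sigmoid output, MSE loss.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

lr, first_loss = 1.0, None
for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    loss = np.mean((p - y) ** 2)
    if first_loss is None:
        first_loss = loss

    # Backward pass: hand-derived gradients of the MSE loss.
    dp = 2.0 * (p - y) / len(X)          # d loss / d p
    dz2 = dp * p * (1.0 - p)             # through the sigmoid
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T
    dz1 = dh * (1.0 - h ** 2)            # through the tanh
    dW1 = X.T @ dz1; db1 = dz1.sum(0)

    # Vanilla SGD update (full-batch gradient descent here).
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = np.mean((p > 0.5) == (y > 0.5))
```

An autograd library's job is to produce exactly the backward block above automatically, by recording the forward ops and replaying their grad fns in reverse.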
What needs to be done next:
- Use ndarray as a backend and extend it to array composition.
- Implement a full MLP
- Implement some basic optimization algorithms (SGD)
- Plug everything together and train a simple network on a simple task!
- Work directly on batches of data
- Add more grad fns, more loss functions, and array manipulations such as max
- Allow the use of views in the autograd graph
- Monitor memory leaks and performance
- Have fun and implement state-of-the-art techniques (dropout, etc.)
- Use GPU acceleration
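One roadmap item, adding grad fns for array manipulations like max, can be sketched as a forward/backward pair. This is a hypothetical illustration (the function names are not the project's API): the forward saves the arg-max index as context, and the backward routes the upstream gradient entirely to that winning element.

```python
import numpy as np

# Hypothetical grad fn pair for `max` over a whole array.
def max_forward(x):
    idx = np.argmax(x)           # flat index of the maximum
    return x.flat[idx], idx      # value, plus context saved for backward

def max_backward(grad_out, idx, shape):
    grad_in = np.zeros(shape)
    grad_in.flat[idx] = grad_out  # only the arg-max element receives gradient
    return grad_in
```

The same save-context-then-scatter pattern covers other selection ops (argmax-based pooling, indexing, views).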