Robust Optimisation Techniques for Parameter Search in Neural Networks - Digit Recognition
Object detection and classification have long been prominent problems in computer vision. In this paper we propose a set of novel optimization techniques for searching over the parameters of a neural network in a digit recognition setting. We tackle the problem of digit recognition using two feedforward neural network architectures, optimizing them with variants of stochastic gradient descent (SGD) and with simulated annealing (SA). Two neural networks were constructed for the SGD experiments: one with a single layer and another with two layers. The test data are taken from the MNIST database. We also conducted a qualitative comparison of the two SGD-trained networks and the SA method. We found that the two- and one-layer SGD networks saturate at almost the same time, but both the learning time and the accuracy are considerably better for the two-layer network. The results for SA are promising, since its accuracy was still increasing.
Keywords - Neural Networks, Stochastic Gradient Descent, Simulated Annealing, MNIST, digit recognition.
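To illustrate the simulated-annealing parameter search the abstract refers to, the sketch below applies SA to the weights of a single-layer logistic unit on a hypothetical toy dataset (a stand-in for MNIST; the dataset, cooling schedule, and perturbation scale are all illustrative assumptions, not the paper's actual configuration):

```python
import math
import random

# Hypothetical toy dataset standing in for MNIST: 2-D points, two classes.
DATA = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 1), ((1.0, 1.0), 1)]

def predict(w, x):
    # Single-layer "network": one logistic unit, weights w = (w1, w2, bias).
    z = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1.0 / (1.0 + math.exp(-z))

def loss(w):
    # Mean squared error over the toy dataset.
    return sum((predict(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def anneal(steps=5000, t0=1.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    best, best_loss = list(w), loss(w)
    t = t0
    for _ in range(steps):
        # Propose a random perturbation of one weight.
        cand = list(w)
        cand[rng.randrange(3)] += rng.gauss(0.0, 0.5)
        delta = loss(cand) - loss(w)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / t).
        if delta < 0 or rng.random() < math.exp(-delta / t):
            w = cand
        if loss(w) < best_loss:
            best, best_loss = list(w), loss(w)
        t *= cooling  # geometric cooling schedule
    return best, best_loss

if __name__ == "__main__":
    w, final_loss = anneal()
    print(final_loss)
```

The same accept/reject loop extends directly to the weight vector of a multi-layer network; only `predict` and the dimensionality of `w` change.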