The remarkable versatility and efficiency of the brain make it important to understand its principles in order to push the boundaries of modern computing. Many models in neuroscience capture the detailed biological properties of the brain, but they do not perform well on benchmark tasks. Other systems perform well but do not operate the way the brain does, and therefore offer no insight into its mechanisms. A recent model by Diehl and Cook [1] is a neuromorphic system built from spiking neural networks (SNNs) with spike-timing-dependent plasticity (STDP) that achieves very good results on MNIST, a benchmark dataset of handwritten digits for classification algorithms. However, the way this system converts the MNIST images into spikes does not mimic how neuromorphic sensors record information. Therefore, in this paper, we test the method proposed by Diehl and Cook on N-MNIST (neuromorphic MNIST), in which the MNIST images were recorded with a silicon retina, a DVS sensor moved in a pattern mimicking the saccade-like movements of the eye. The output of the camera consists of per-pixel events analogous to retinal spikes. We examine how the original system had to be changed to accommodate the new dataset, and find that, with parameters adapted to the characteristics of the new dataset, the system achieves better accuracy: a 400-neuron network reaches 93.68% accuracy on a smaller 3-class training subset, and an 800-neuron network reaches 80.63% accuracy on the full 10-class N-MNIST dataset. We also observe that the output neurons form clusters, each responding to a particular class, which makes the system suitable for stacking in a hierarchical manner.
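To make the learning rule concrete, the following is a minimal sketch of a generic pair-based STDP weight update. The parameter values and function name are illustrative assumptions, not the paper's actual formulation; Diehl and Cook's model uses a conductance-based variant with synaptic traces, which this simplified exponential-window form only approximates.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Pair-based STDP: update weight w given a presynaptic spike at
    t_pre and a postsynaptic spike at t_post (times in ms).
    All parameter values here are illustrative, not from the paper."""
    dt = t_post - t_pre
    if dt >= 0:
        # Pre-spike precedes post-spike: potentiation, decaying with |dt|.
        w += a_plus * np.exp(-dt / tau_plus)
    else:
        # Post-spike precedes pre-spike: depression, decaying with |dt|.
        w -= a_minus * np.exp(dt / tau_minus)
    # Keep the weight within its allowed range.
    return float(np.clip(w, 0.0, w_max))
```

Under this rule, causal spike pairs (pre before post) strengthen a synapse and anti-causal pairs weaken it, which is the mechanism that lets output neurons become selective for recurring input patterns such as digit classes.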