The visual cortex of the brain classifies visual information following a scheme that Convolutional Neural Networks (CNNs) mimic. Specialised hardware accelerators are currently used as CPU co-processors for mobile applications, and they are moving closer to the sensors so that their output can be computed at the edge, yielding lower latency and power consumption. In this demonstration we use a dynamic vision sensor (inspired by the neural cells of the retina) as the visual input of the NullHop CNN accelerator, deployed on an MPSoC FPGA and mounted on a Summit-XL mobile robot. The visual information is processed and classified at the edge to properly command the robot towards a target destination. The low latency of the CNN accelerator allows several histograms to be processed before each movement decision is taken. A distance sensor mounted on the robot ensures that direction changes are made at the right distance for proper path following.
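The pipeline above accumulates dynamic vision sensor events into histograms, classifies each histogram with the CNN accelerator, and only then issues a movement command. The following is a minimal sketch of that idea, not the actual NullHop or Summit-XL software: the frame size, the event format `(x, y, polarity)`, and the vote threshold are illustrative assumptions.

```python
import numpy as np

def events_to_histogram(events, width=64, height=64):
    """Accumulate DVS events (x, y, polarity) into a 2D histogram
    that can be fed to a CNN as an input frame.
    Frame size and event format are assumptions for illustration."""
    hist = np.zeros((height, width), dtype=np.float32)
    for x, y, _polarity in events:
        hist[y, x] += 1.0
    return hist

def decide_command(class_votes, min_votes=3):
    """Issue a movement command only when one class wins enough of the
    recently processed histograms; otherwise defer the decision.
    The threshold value is a hypothetical parameter."""
    values, counts = np.unique(np.asarray(class_votes), return_counts=True)
    if counts.max() >= min_votes:
        return int(values[np.argmax(counts)])
    return None
```

Because several histograms are classified before each decision, a simple vote like this filters out spurious single-frame classifications while keeping the overall reaction time low.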