Transferring the success of deep learning models to consumer electronic devices requires building models small enough to fit on resource-constrained hardware. Because embedded and mobile devices lack the power budget, processing speed, and memory of the latest GPU technology, it is desirable to create neural networks that are significantly smaller without sacrificing accuracy. A recently introduced data augmentation technique called "Smart Augmentation" has been experimentally shown to be effective at this task. In this paper, we show how Smart Augmentation can be used to train models that are significantly smaller than their equivalently performant counterparts, and thus more viable for deployment on consumer devices.