This study investigates the robustness of different neural network architectures under geometric transformations, including translation, rotation, and scaling. The objective is to evaluate the performance and resilience of these architectures in real-world scenarios where such transformations commonly occur. Extensive experiments are conducted on benchmark datasets, comparing convolutional neural networks (CNNs), residual networks (ResNets), fully connected networks (FCNs), and transformer-based architectures. The study assesses the generalization capabilities and the efficiency of these architectures when data augmentation techniques are employed. The results aim to identify which architectures exhibit superior performance and the fastest adaptation to transformed inputs.
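To make the augmentation setup concrete, the following is a minimal sketch, assuming a PyTorch/torchvision pipeline (the study does not specify a framework), of how translation, rotation, and scaling can be applied as data augmentation during training; the parameter ranges are illustrative, not the study's actual settings.

```python
# Minimal sketch of a geometric augmentation pipeline of the kind
# described above (assumes torchvision; ranges are illustrative).
from torchvision import transforms
from PIL import Image

# RandomAffine covers all three transformations in one operation:
# rotation up to +/-30 degrees, translation up to 10% of image size,
# and scaling between 80% and 120% of the original size.
geometric_augmentation = transforms.Compose([
    transforms.RandomAffine(
        degrees=30,            # rotation range in degrees
        translate=(0.1, 0.1),  # max horizontal/vertical shift (fraction)
        scale=(0.8, 1.2),      # scaling factor range
    ),
    transforms.ToTensor(),
])

# Example: augment a single image (torchvision expects a PIL Image here).
img = Image.new("RGB", (32, 32))          # placeholder image
augmented = geometric_augmentation(img)   # tensor of shape (3, 32, 32)
print(augmented.shape)
```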