This paper considers federated learning (FL) in a tactical network (TN), adopting computationally efficient over-the-air aggregation (OTA) to train a global model at a parameter server (PS). Memory-limited edge devices ideally need to store a deep neural network (DNN) model that is compact in size yet highly accurate. This work explores how the compression technique of pruning affects model accuracy when the model parameters are aggregated OTA. Further, we quantify the effects of Rayleigh fading and additive white Gaussian noise (AWGN) on the accuracy of the DNN model at different signal-to-noise ratios (SNRs), exploring the size-accuracy-SNR trade-off for both uncompressed and pruned versions of the DNN model. Simulation results, particularly at high SNR, show little difference in accuracy among the uncompressed, 30%-pruned, and 50%-pruned models, while the sizes of the pruned models are reduced by roughly 0.23x and 0.40x relative to the uncompressed model, respectively. Pruning the model to 70% yields a 0.58x reduction in size, but at the cost of a significant drop in accuracy, even at high SNR. Depending on the resources available at the edge devices, these size-accuracy-SNR trade-offs can be exploited, as various overlapping trends are observed in the simulation results.
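The pipeline summarized above (prune local model weights, then aggregate them over a noisy fading channel) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names are ours, pruning is assumed to be unstructured magnitude pruning, and the Rayleigh fading is idealized away via perfect transmit-side channel inversion so that only AWGN at the chosen SNR perturbs the aggregate.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_by_magnitude(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights
    (unstructured magnitude pruning; an illustrative assumption, since the
    abstract does not specify the pruning criterion)."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value serves as the pruning threshold
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def ota_aggregate(local_weights, snr_db):
    """Analog over-the-air aggregation: the PS receives the superposition of
    all devices' (pruned) weight vectors plus AWGN, then averages. Each
    device is assumed to pre-scale by 1/h to invert its Rayleigh channel
    gain h (idealized; real systems truncate deep fades)."""
    d = local_weights[0].size
    rx = np.zeros(d)
    for w in local_weights:
        h = rng.rayleigh(scale=1.0)   # per-device Rayleigh channel magnitude
        rx += (h * w.ravel()) / h     # transmit-side inversion cancels h
    # AWGN power set from the target receive SNR
    sig_pow = np.mean(rx ** 2)
    noise_pow = sig_pow / (10 ** (snr_db / 10))
    rx += rng.normal(0.0, np.sqrt(noise_pow), d)
    return rx / len(local_weights)
```

At high SNR the noise term vanishes and the aggregate approaches the plain average of the (pruned) local models, matching the observation that accuracy differences shrink in that regime.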