In this paper, we address image classification by capturing contextual dependencies with spatial and channel attention mechanisms. Unlike previous work on feature fusion, we propose an attention module that operates along both the spatial and channel dimensions: it derives an attention map from each dimension and multiplies the maps into the feature map for feature refinement. Because the module is lightweight, it can be easily embedded into existing network architectures. The channel attention module models the relationships among feature channels, selectively enhancing informative channels and suppressing less useful ones. The spatial attention module aggregates the features at each location by weighting the features of all locations, so that similar features reinforce one another regardless of their spatial distance. We evaluate our module through experiments on the ImageNet-1K and CIFAR-100 datasets.
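The refinement scheme described above — deriving a gate per channel and a gate per spatial location, then multiplying each into the feature map — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the real module learns its attention maps, whereas the placeholder gates here are simply sigmoids of pooled statistics, chosen only to show the tensor shapes and the broadcast multiplications involved.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    """Rescale each channel by a gate derived from its global statistics.

    x: feature map of shape (C, H, W). A real module would learn the
    gating function; the sigmoid of the per-channel global average used
    here is an illustrative placeholder, not the paper's design.
    """
    gap = x.mean(axis=(1, 2))          # (C,)  squeeze: global average pool
    gate = sigmoid(gap)                # (C,)  per-channel attention weights
    return x * gate[:, None, None]     # broadcast: enhance/suppress channels

def spatial_attention(x):
    """Rescale each spatial position by a gate pooled across channels.

    Again a placeholder gate (sigmoid of the channel-wise mean) stands in
    for the learned spatial attention map.
    """
    pooled = x.mean(axis=0)            # (H, W) aggregate over channels
    gate = sigmoid(pooled)             # (H, W) spatial attention map
    return x * gate[None, :, :]        # broadcast: reweight each location

def attention_module(x):
    """Apply channel refinement followed by spatial refinement."""
    return spatial_attention(channel_attention(x))

feat = np.random.rand(8, 4, 4).astype(np.float32)   # toy (C, H, W) features
refined = attention_module(feat)
assert refined.shape == feat.shape     # refinement preserves the feature shape
```

Because both stages are elementwise multiplications that preserve the input shape, a module of this form can be dropped between any two layers of a backbone network, which is what makes the lightweight-embedding claim plausible.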