Fast and accurate semantic analysis of natural disaster images is crucial for rational rescue planning and resource allocation. However, the scarcity of meticulously labelled datasets, together with popular general-purpose methods' neglect of region-of-interest scale variations, leads to undesirable performance. In this paper, we propose a novel triple-strip attention mechanism (TSAM), which can be plugged into general networks, to address the generalization problem on disaster images. Our TSAM accumulates the features of three parallel strip attentions (row strip attention, column strip attention, and channel strip attention), and the output is multiplied with the original input features for further processing. The mechanism effectively compensates for the loss of global context inherent in convolution and enhances network performance by weighting features more comprehensively along both the spatial and channel dimensions. In addition, we employ both compression and expansion operations in the weighting step to reduce the number of parameters, leading to negligible computational overhead. Experiments validate that our TSAM outperforms other state-of-the-art methods on natural disaster segmentation. Owing to its concise form, plug-and-play design, and strong performance gains, our TSAM can be combined with many existing neural networks for improved performance.
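As a rough illustration of the mechanism described above, the following NumPy sketch (not the authors' implementation; the tensor layout, the reduction ratio `r`, and the random placeholder weights are all assumptions) pools the input along rows, columns, and channels, passes each strip descriptor through a compression-expansion weighting, accumulates the three attentions, and multiplies the result with the original input:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def excite(v, r=4):
    """Compression-expansion weighting of a 1-D strip descriptor:
    project a length-n vector down to n // r and back up, keeping the
    parameter count low. Weights here are random placeholders."""
    n = v.shape[0]
    m = max(n // r, 1)
    w1 = rng.standard_normal((m, n)) * 0.1  # compression
    w2 = rng.standard_normal((n, m)) * 0.1  # expansion
    return w2 @ np.maximum(w1 @ v, 0.0)    # ReLU in between

def tsam(x, r=4):
    """Triple-strip attention sketch for an input of shape (C, H, W)."""
    c, h, w = x.shape
    # Row strip: pool across width -> one descriptor per (channel, row).
    row = x.mean(axis=2)                                                  # (C, H)
    row_att = np.stack([excite(row[i], r) for i in range(c)])[:, :, None]  # (C, H, 1)
    # Column strip: pool across height -> one descriptor per (channel, column).
    col = x.mean(axis=1)                                                  # (C, W)
    col_att = np.stack([excite(col[i], r) for i in range(c)])[:, None, :]  # (C, 1, W)
    # Channel strip: global pool -> one descriptor per channel.
    chan = x.mean(axis=(1, 2))                                            # (C,)
    chan_att = excite(chan, r)[:, None, None]                             # (C, 1, 1)
    # Accumulate the three strips, squash to (0, 1), reweight the input.
    att = sigmoid(row_att + col_att + chan_att)  # broadcasts to (C, H, W)
    return x * att
```

Because the attention map lies strictly in (0, 1) and multiplies the input elementwise, the module preserves the input's shape, which is what makes it plug-and-play inside existing networks.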