Domain adaptation transfers knowledge learned from a source domain to a target domain by reducing the inter-domain data distribution discrepancy. However, existing domain adaptation algorithms perform worse on sensor datasets than on image datasets because they neglect the intra-domain distribution discrepancy. The long duration of collecting a raw sensor data segment introduces a shift over time, and this shift distribution varies with sensor type and wearing position, causing time-series distribution discrepancies both within and across domains. To solve this problem, we design a new model, the Time Series Adaptation Network (TSAN), and a new loss, the Time series Contrastive Loss (TCL). TSAN uses a siamese network and "packs" the samples divided from the same segment into the network as one input. TCL is then defined on the similarity of the "unpacked" network outputs, which guides the model to learn time-independent features. In particular, TSAN can be used as a plug-in combined with existing domain adaptation algorithms, so that the intra- and inter-domain distribution discrepancies are addressed simultaneously. We conduct extensive experiments with eight existing domain adaptation algorithms on sensor-based cross-domain human activity recognition (HAR) tasks, covering three Routine Activity Recognition (RAR) datasets and four Parkinson's tremor Detection (PD) datasets. The results show that TSAN improves all eight algorithms by an average of 5.4% on RAR and 2.2% on PD.
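To make the TCL idea concrete, the following is a minimal sketch, assuming a plain cosine-similarity formulation: given the feature vectors that a shared (siamese) encoder produces for the samples "unpacked" from one segment, the loss penalizes pairwise dissimilarity, pushing the encoder toward time-independent features. The function name, the cosine choice, and the averaging scheme are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def time_series_contrastive_loss(features: np.ndarray) -> float:
    """Illustrative TCL-style loss (assumed formulation, not the paper's exact one).

    `features` has shape (k, d): k feature vectors from a shared siamese
    encoder, one per sample divided from the same raw segment. The loss is
    the mean pairwise cosine *dissimilarity* across those samples, so it is
    0 when all samples from a segment map to the same direction in feature
    space, i.e. when the features are time-independent.
    """
    # L2-normalize each feature vector so dot products are cosine similarities.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ normed.T                     # (k, k) pairwise cosine similarities
    k = features.shape[0]
    off_diag = sims[~np.eye(k, dtype=bool)]     # exclude each sample's self-similarity
    return float(1.0 - off_diag.mean())         # 0 when all samples align
```

Under this reading, TSAN would add such a term to whichever inter-domain adaptation loss it is plugged into, so intra-segment and inter-domain discrepancies are optimized jointly.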