The main task of change detection is to segment moving objects from the background. Recently, deep learning-based change detection methods have attracted much attention, but they tend to classify stationary objects as foreground. The reason is that they identify objects mainly by appearance features, while the same object looks similar whether it is moving or stationary. To address this problem, we propose a novel network, the Motion-augmented Change Detection Network (MCDNet), which distinguishes moving objects using both motion and appearance information. To this end, we first introduce a Motion-augmented Background Model (MBM) that simulates the scene background without any foreground objects and can be dynamically updated by the predicted mask. In this way, motion information is implicitly highlighted by comparing the current frame with the background. Second, we design an Attention Memory Module (AMM) that stores past features and uses them to guide the segmentation of the current frame, facilitating the extraction of motion and appearance features. Experiments on two challenging public benchmarks (i.e., AGVS and CDnet2014) demonstrate that our proposed method achieves compelling performance against state-of-the-art methods.
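The background-model idea described above can be illustrated with a minimal NumPy sketch. Note that the function names, the running-average update rule, and the absolute-difference motion cue are our illustrative assumptions for exposition, not the paper's actual MBM formulation:

```python
import numpy as np

def update_background(background, frame, pred_mask, alpha=0.05):
    """Running-average update of a background model.

    The update is applied only where the predicted mask marks a pixel as
    background (mask == 0), so foreground objects are not absorbed into
    the model. Illustrative sketch, not the paper's exact update rule.
    """
    bg_region = (pred_mask == 0)[..., None]  # broadcast mask over channels
    return np.where(bg_region,
                    (1.0 - alpha) * background + alpha * frame,
                    background)

def motion_cue(background, frame):
    """Per-pixel difference between the current frame and the background
    model, averaged over channels; large values suggest motion."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff.mean(axis=-1)
```

Because the predicted mask gates the update, a stationary foreground object keeps differing from the stored background and thus continues to produce a strong cue, which is the behavior the abstract attributes to the MBM.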