The visual perception module in autonomous vehicles relies on deep learning to realize environmental perception, including object detection, semantic segmentation, and depth estimation. Although deep neural networks have achieved state-of-the-art performance on various computer vision tasks, researchers have found that their accuracy drops significantly when carefully crafted small perturbations are added to their inputs; such an attack is called an adversarial attack. The vulnerability of deep neural networks to adversarial attacks poses potential security risks for autonomous driving perception systems. Because perception is the first stage of the autonomous driving pipeline, any erroneous output from the perception module can lead to serious consequences. In the existing literature, there are few studies on how adversarial attacks against the perception module affect the subsequent modules, or on how those modules might cope with such attacks. This paper addresses that gap by investigating the impact of adversarial attacks on the object detection algorithm on the multi-object tracking module, and proposes a solution that uses a Kalman filter with packet-loss compensation to mitigate the reduction in multi-object tracking accuracy under adversarial attack.
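The core idea of the proposed mitigation can be illustrated with a minimal sketch (not the paper's implementation, and simplified to one dimension): an adversarially suppressed detection is treated like a lost packet, so on frames where the detector produces no measurement the Kalman filter runs only its prediction step and the track coasts on the motion model instead of being dropped. All class names, parameters, and the toy scenario below are illustrative assumptions.

```python
import numpy as np

class KalmanTrack:
    """1-D constant-velocity Kalman filter with packet-loss compensation:
    a missing measurement (z is None) triggers predict-only coasting."""

    def __init__(self, x0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, 0.0])                 # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
        self.H = np.array([[1.0, 0.0]])              # we observe position only
        self.Q = q * np.eye(2)                       # process-noise covariance
        self.R = np.array([[r]])                     # measurement-noise covariance

    def step(self, z=None):
        # Predict step: always runs, even when the detection is suppressed.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        if z is not None:                            # update only if a detection arrived
            y = np.array([z]) - self.H @ self.x      # innovation
            S = self.H @ self.P @ self.H.T + self.R  # innovation covariance
            K = self.P @ self.H.T @ np.linalg.inv(S) # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                             # current position estimate

# Hypothetical scenario: an object moves at 1 m/s, and an adversarial attack
# suppresses the detector's output on frames 3-5 (z is None there).
truth = [float(t) for t in range(10)]
detections = [z if t not in (3, 4, 5) else None for t, z in enumerate(truth)]
track = KalmanTrack(x0=0.0)
estimates = [track.step(z) for z in detections]
```

During the attacked frames the estimate keeps advancing with the learned velocity, so the track survives the gap and is corrected again once detections resume; without this compensation, a tracker that requires a detection every frame would terminate the track and fragment the trajectory.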