With extensive applications and remarkable performance, deep reinforcement learning has become one of the most important technologies researchers focus on. Reinforcement learning has been applied in many domains, such as robotics, recommendation systems, and healthcare. These systems collect data about the environment or about users, which may contain sensitive information and thus pose a real privacy risk if disclosed. In this work, we aim to preserve the privacy of the data used in deep reinforcement learning with a Double Deep Q-Network in continuous space by adopting the differentially private SGD (DP-SGD) method, which injects noise into the gradient. In our experiments, we apply different amounts of noise in two separate settings to demonstrate the effectiveness of this method.
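The core DP-SGD operation referred to above can be sketched as follows: clip each per-example gradient to a fixed norm bound, average the clipped gradients, and add Gaussian noise before the parameter update. This is a minimal NumPy illustration, not the paper's implementation; the function name `dp_sgd_step` and the parameter values (`clip_norm`, `noise_multiplier`) are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD gradient step (sketch): clip each per-sample gradient
    to L2 norm <= clip_norm, average, then add Gaussian noise scaled by
    noise_multiplier * clip_norm / batch_size."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # scale down any gradient whose norm exceeds the clipping bound
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound and batch size
    sigma = noise_multiplier * clip_norm / len(per_sample_grads)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Example: a batch of four per-sample gradients for a 3-parameter model
grads = [np.array([3.0, 4.0, 0.0]), np.array([0.1, 0.2, 0.3]),
         np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
noisy_grad = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1)
```

Increasing `noise_multiplier` strengthens the privacy guarantee at the cost of noisier updates, which is the trade-off the experiments vary.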