The visualization of 3D models is a hot topic in computer vision and human-computer interaction. Demand for 3D models has grown with their increasing use in animated characters, virtual reality, and augmented reality. Interacting with a 3D model through a mouse and keyboard is tedious, inefficient, and complex, because multiple kinds of operations are required to view the model properly from all sides. It is therefore essential to improve user interaction with 3D systems. In this paper, a new method is introduced that uses the Microsoft Kinect v2 to detect the human body and its joints. First, we train the system to recognize specific gestures; each recognized gesture then triggers a specific operation on an object in the proposed environment.
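As a minimal illustration of the gesture-to-operation pipeline described above, the sketch below classifies a horizontal hand "swipe" from a sequence of tracked hand-joint positions and maps it to a rotation of the 3D model. All joint data here is synthetic, and the function names and threshold are hypothetical; a real system would stream joint coordinates from the Kinect v2 SDK's body-tracking API rather than hard-code them.

```python
# Hypothetical sketch, not the paper's actual implementation.
# Joint x-positions are in metres; a real pipeline would obtain them
# per frame from the Kinect v2 body-tracking stream.

def detect_swipe(hand_x_positions, threshold=0.3):
    """Return 'right', 'left', or None from the net hand displacement."""
    displacement = hand_x_positions[-1] - hand_x_positions[0]
    if displacement > threshold:
        return "right"
    if displacement < -threshold:
        return "left"
    return None

def apply_gesture(model_yaw_deg, gesture, step=15.0):
    """Rotate the model's yaw angle in response to a recognized gesture."""
    if gesture == "right":
        return model_yaw_deg + step
    if gesture == "left":
        return model_yaw_deg - step
    return model_yaw_deg

# Synthetic frames: the right hand moves about 0.55 m to the right.
frames = [-0.25, -0.10, 0.05, 0.20, 0.30]
gesture = detect_swipe(frames)       # "right"
new_yaw = apply_gesture(0.0, gesture)  # 15.0
```

In practice the recognizer would operate on full skeleton data (e.g., hand position relative to the spine) and on many more gesture classes, but the structure — recognize a gesture, then dispatch the corresponding model operation — is the same.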