In this paper, we introduce the Dynamic Hand Gesture Dataset (DHGD), tailored for skeleton-based gesture recognition. This novel dataset comprises nine distinct hand gestures, each performed by twenty participants, and is designed to facilitate intuitive image manipulation, including actions such as rotation and scaling. Each gesture sequence is captured over 70 frames using an RGB-D camera system. Hand key points are detected in each frame using MediaPipe Hands, with each key point represented by its three-dimensional coordinates. Additionally, we evaluate the recognition accuracy on the nine dynamic hand gestures using a Spatial-Temporal Graph Convolutional Network (ST-GCN), selected as a representative graph convolutional network baseline to validate the dataset’s efficacy.
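To make the data layout described above concrete, the following sketch shows one plausible way a captured sequence (70 frames, the 21 MediaPipe Hands landmarks, 3D coordinates each) could be rearranged into the (N, C, T, V, M) tensor layout that ST-GCN implementations commonly expect. The function name and exact layout are illustrative assumptions, not the authors' released code.

```python
import numpy as np

# Assumed per-sequence layout from the paper's description:
# 70 frames x 21 MediaPipe hand landmarks x 3 coordinates (x, y, z).
NUM_FRAMES = 70
NUM_KEYPOINTS = 21  # MediaPipe Hands landmark count
NUM_CHANNELS = 3    # (x, y, z)

def to_stgcn_input(sequence: np.ndarray) -> np.ndarray:
    """Reshape a (T, V, C) gesture sequence into the (N, C, T, V, M)
    layout commonly used by ST-GCN (single hand, so M = 1).

    This is an illustrative helper, not the dataset's official API.
    """
    assert sequence.shape == (NUM_FRAMES, NUM_KEYPOINTS, NUM_CHANNELS)
    # (T, V, C) -> (C, T, V), then add batch (N) and person (M) axes.
    x = sequence.transpose(2, 0, 1)
    return x[np.newaxis, ..., np.newaxis]

# Random coordinates stand in for actual MediaPipe output.
seq = np.random.rand(NUM_FRAMES, NUM_KEYPOINTS, NUM_CHANNELS)
print(to_stgcn_input(seq).shape)  # (1, 3, 70, 21, 1)
```

A batch of such tensors, together with the 21-joint hand-skeleton adjacency graph, would then be fed to the ST-GCN benchmark.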