With the rise of smart mobile and wearable devices, emerging technologies such as gesture recognition have been widely deployed in practice. However, their application to specific complex scenarios, such as gesture recognition in the flight cockpit, remains immature. In view of this, this paper first presents a flight cockpit gesture simulation dataset (FCGS), and then proposes a flight cockpit gesture recognition algorithm, YOLOv7H, which is reconstructed from YOLOv7: the backbone adopts the HorNet family of networks, which effectively combines the advantages of vision Transformers and CNNs, and a parameter-free attention module (SimAM) is added to the head network, improving the representation ability of the convolutional network while remaining flexible and effective. In our experiments, precision increased from 92.469% for YOLOv7 to 98.053%, recall increased from 97.486% to 99.994%, and mAP increased from 99.448% to 99.500%. For flight cockpit gesture recognition, YOLOv7H achieves higher recognition accuracy than current mainstream object detection algorithms at nearly the same recognition speed. It offers advantages in processing natural interaction information and provides an effective means of human-computer interaction. The source code will be published at https://github.com/Dennis-Chen2021/aircraft-cockpit-gesture-recognition.
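
As a reference for the attention component mentioned above, the sketch below shows the standard parameter-free SimAM formulation applied to a convolutional feature map. It is a minimal illustration only: the exact placement inside the YOLOv7H head, the class name `SimAM`, and the regularization constant `e_lambda` are assumptions, not taken from the paper's released code.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free SimAM attention: re-weights each activation by an
    energy-based importance score, adding no learnable parameters."""
    def __init__(self, e_lambda: float = 1e-4):  # e_lambda is an assumed default
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map, e.g. the output of a head convolution
        _, _, h, w = x.shape
        n = h * w - 1
        # Squared deviation of each activation from its per-channel spatial mean
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # Per-channel variance estimate over spatial positions
        v = d.sum(dim=[2, 3], keepdim=True) / n
        # Inverse energy: larger for activations that stand out from their channel
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        # Gate the feature map with a sigmoid of the inverse energy
        return x * torch.sigmoid(e_inv)

# Minimal usage example on a dummy head feature map
if __name__ == "__main__":
    attn = SimAM()
    feat = torch.randn(1, 256, 20, 20)
    out = attn(feat)
    print(out.shape)  # torch.Size([1, 256, 20, 20])
```

Because the module introduces no trainable weights, it can be inserted after existing head convolutions without changing the parameter count, which is consistent with the flexibility claimed above.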