Students' behavior reflects their engagement and degree of participation in the classroom, so automatically detecting student behavior from classroom cameras is particularly valuable. However, classroom scenes often contain densely packed students and many small objects, which makes detection difficult. To address this problem, we introduce a dynamic head to detect student behavior effectively. First, we collect a large number of videos from real university classrooms and annotate them to build a student behavior dataset, treating student behavior recognition as an object detection task. Second, we propose YOLOv5-Dy, a model that incorporates a dynamic head combining scale, spatial, and task attention mechanisms, which weigh the importance of features from different perspectives. Experimental results show that the improved model adapts better to the student behavior detection task. Finally, we compare our method with state-of-the-art detectors and find that the proposed approach performs better on the student behavior dataset.
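The three attention stages of a dynamic head can be illustrated with a simplified sketch. The following NumPy code is our own minimal stand-in, not the paper's implementation (which is built from learned projections and deformable convolutions); it only shows the idea of applying scale, spatial, and task attention sequentially to a feature tensor:

```python
import numpy as np

def dynamic_head_attention(feats):
    """Hypothetical sketch of the dynamic head's three sequential attentions.

    feats: array of shape (L, S, C) -- L pyramid levels, S spatial
    positions, C channels (a flattened stand-in for real FPN features).
    """
    L, S, C = feats.shape

    # Scale attention: weight each pyramid level by a sigmoid of its
    # global average activation, so informative levels dominate.
    level_stats = feats.mean(axis=(1, 2), keepdims=True)         # (L, 1, 1)
    scale_w = 1.0 / (1.0 + np.exp(-level_stats))                 # sigmoid
    feats = feats * scale_w

    # Spatial attention: softmax over spatial positions within each level
    # (a crude stand-in for the deformable sampling used in practice).
    spatial_logits = feats.mean(axis=2, keepdims=True)           # (L, S, 1)
    spatial_w = np.exp(spatial_logits - spatial_logits.max(axis=1, keepdims=True))
    spatial_w = spatial_w / spatial_w.sum(axis=1, keepdims=True)
    feats = feats * (S * spatial_w)  # rescale so magnitudes stay comparable

    # Task attention: gate channels with a sigmoid of per-channel statistics,
    # letting different channel groups specialize for different output tasks.
    chan_stats = feats.mean(axis=(0, 1), keepdims=True)          # (1, 1, C)
    task_w = 1.0 / (1.0 + np.exp(-chan_stats))
    feats = feats * task_w

    return feats
```

In the real dynamic head each stage has learned parameters and the three blocks can be stacked repeatedly; here the weights are derived from simple feature statistics purely to make the sequential structure concrete.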