This paper presents the classification of image sequences of diacritics in the single-handed Czech sign language alphabet. Since diacritics are expressed by the motion of the hand, the classification is performed with a Long Short-Term Memory (LSTM) recurrent neural network. The dataset is annotated with the MediaPipe framework, and the neural network is built with the TensorFlow computational library. The paper describes the flow of the proposed method, data acquisition, preprocessing, and training. The reported results comprise the classification success rate on the validation dataset and tests on whole signed words and sentences; the overall success rate was around 88%.
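Since the abstract hinges on LSTM-based sequence classification, the core recurrence can be illustrated with a minimal sketch. This is not the paper's implementation: it uses NumPy instead of TensorFlow, and all shapes, weights, and the gate ordering (input, forget, candidate, output) are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    x: input vector (d,), e.g. flattened MediaPipe hand keypoints.
    h_prev, c_prev: hidden and cell state (n,).
    W: ((d+n), 4n) stacked gate weights; b: (4n,) bias.
    Gate ordering here is an assumption: i, f, g, o."""
    z = np.concatenate([x, h_prev]) @ W + b
    n = h_prev.shape[0]
    i = sigmoid(z[:n])          # input gate
    f = sigmoid(z[n:2 * n])     # forget gate
    g = np.tanh(z[2 * n:3 * n]) # candidate cell update
    o = sigmoid(z[3 * n:])      # output gate
    c = f * c_prev + i * g      # new cell state
    h = o * np.tanh(c)          # new hidden state
    return h, c

def classify_sequence(seq, W, b, W_out, b_out):
    """Run the LSTM over a sequence of frames and apply a softmax
    readout to the final hidden state (a common, assumed design)."""
    n_h = W_out.shape[0]
    h = np.zeros(n_h)
    c = np.zeros(n_h)
    for x in seq:
        h, c = lstm_step(x, h, c, W, b)
    logits = h @ W_out + b_out
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()
```

In practice the paper's pipeline would use TensorFlow's built-in LSTM layer rather than a hand-rolled cell; the sketch only shows the mechanism by which hand-motion sequences are reduced to a class probability vector.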