Sign language is the linguistic system adopted by the Deaf to communicate. The lack of fully-fledged Automatic Sign Language Recognition (ASLR) technologies contributes to the numerous difficulties that deaf individuals face in the absence of an interpreter, such as in private health appointments or in emergency situations. A challenging problem in the development of reliable ASLR systems is that sign languages rely not only on manual gestures but also on facial expressions and other non-manual markers. This paper proposes adopting the Facial Action Coding System (FACS) to encode sign language facial expressions. However, state-of-the-art Action Unit (AU) recognition models mostly target about two dozen AUs, typically those related to the expression of emotions. We adopted Brazilian Sign Language (Libras) as our case study and identified more than one hundred AUs (with substantial overlap with other sign languages). We then implemented and evaluated a novel AU recognition model architecture that combines SqueezeNet with geometric features. Our model obtained 88% accuracy over 119 classes. Combined with state-of-the-art gesture recognition, our model is ready to improve sign disambiguation and to advance ASLR.