Using a digital collection of more than 52,000 posters spanning different years, designers, topics, and clients, we study the feasibility of comparing posters based on their typographic features. To this end, we explore the possibility of training a model to classify serif types without knowledge of the font or the character. We also investigate how to train a vector-based image model that groups together fonts with similar features. Specifically, we compare state-of-the-art image classification methods, namely EfficientNet-B2 and the Vision Transformer Base model with different patch sizes, against the state-of-the-art fine-grained image classification method TransFG on the serif classification task. We also evaluate the use of the DeepSVG model to learn to group fonts with similar features. Our investigation reveals that fine-grained image classification methods are better suited to the serif classification task, and that leveraging character labels helps the model learn more meaningful font similarities.