Glaucoma identification is often performed manually by medical professionals, which is time-consuming. Automated image analysis, for example of retinal fundus images, can speed up the detection and treatment of this disease. In this study, twenty-six deep learning models pretrained for object recognition are compared as potential feature extractors for glaucoma detection from retinal fundus images. We used a template matching algorithm to automate cropping around the optic nerve head at three different scales, and extracted features from both the cropped and full versions of the images using the pretrained networks. Cropped-image features were concatenated with full-image features to create expanded feature sets. We then conducted extensive ten-fold cross-validation experiments with random forest and optimised logistic regression base classifiers to estimate the accuracy of models trained on each feature set individually and on various combinations of feature sets from the full and cropped images. The best feature extractor for glaucoma detection was the residual network (ResNet), which achieved a cross-validated AUC-ROC of 0.97 in conjunction with the random forest classifier on concatenated features. The experimental results indicate that the ResNet architecture is best suited as a feature extractor for glaucoma identification from retinal fundus images, but only when both feature sets (full and cropped) are utilised.
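The evaluation protocol described above (concatenating full-image and cropped-image features, then scoring a random forest with ten-fold cross-validated AUC-ROC) can be sketched as follows. This is a minimal illustration assuming scikit-learn; the feature arrays are random stand-ins for the actual pretrained-network (e.g. ResNet) features, and all sizes and hyperparameters here are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical stand-ins for CNN features: in the study these would come
# from a pretrained network applied to the full fundus image and to the
# optic-nerve-head crop, respectively.
rng = np.random.default_rng(0)
n_images = 200
full_feats = rng.normal(size=(n_images, 512))   # full-image feature set
crop_feats = rng.normal(size=(n_images, 512))   # cropped-image feature set
labels = rng.integers(0, 2, size=n_images)      # glaucoma / healthy (dummy)

# Concatenate the two feature sets to form the expanded feature set.
X = np.concatenate([full_feats, crop_feats], axis=1)

# Ten-fold cross-validated AUC-ROC with a random forest base classifier.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=cv, scoring="roc_auc")
print(f"mean AUC-ROC over 10 folds: {scores.mean():.3f}")
```

With random labels the mean AUC hovers around 0.5; with real features and labels this same loop yields the cross-validated scores reported in the study.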