Understanding the external characteristics of objects to be grasped is crucial for enhancing the dexterity of a robotic hand. Ontology-based knowledge representation (KR) approaches offer novel opportunities for designing effective object recognition modules in the grasping domain. This paper presents OntOGrasp, an ontology for recognizing grasped objects that encompasses concepts, attributes, and relationships concerning hand kinematics and the structural components of the object. To generate a grasp kinematics dataset, a series of grasping experiments was conducted on carefully selected objects. A semantic feature selection approach, leveraging knowledge graph embedding, was then introduced to identify practical features from the ontology. The grasp dataset was filtered to the data corresponding to the selected features, which were used to train classification algorithms for object recognition. Integrating the semantically suggested features significantly improved recognition accuracy over the complete feature set: accuracy increased by 1% to 9% across all classification algorithms, accompanied by a notable reduction in false positives and false negatives during object prediction. This improvement substantiates the importance of knowledge modeling that incorporates practical, experimentally characterized aspects of a domain.