Artificial intelligence (AI)-based predictive modeling has become increasingly sought after in materials science for training property prediction models, owing to its ability to extract and exploit data-driven information from materials data. However, current methods typically rely on limited, hand-engineered, fixed-length representations derived from composition-based information alone, making the model input a bottleneck when handling small and specialized training datasets. In this paper, we study and propose a representation learning (RL) method that is both applicable and adaptive for generalized use across domains. We introduce an RL technique that extracts pre-activation-based representations from a model pre-trained as a deep neural network in order to maximize accuracy. We train models for inorganic material properties using composition-based numerical vectors representing the elemental fractions (EF) of the materials, leveraging source models trained on large datasets to build target models on small datasets, and compare their performance against traditional machine learning (ML), deep neural network, and RL-based graph neural network (GNN) models trained from scratch (SC) with EF as input or with more informative physical attributes (PA) as input, as well as against conventional transfer learning (TL)/RL techniques. Using large $(\sim 345K)$ datasets for source model training and small computational $(\sim 28K)$ and experimental $(\sim 2K)$ datasets for target model training and testing, we show that the proposed RL methods significantly improve model accuracy over the SC models and conventional TL/RL techniques, across all data sizes and properties, while using only EF as input. A statistical significance analysis based on p-values confirms that the observed accuracy improvement of the proposed RL model over the SC, RL-based GNN, and conventional TL/RL models is indeed significant.
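The pipeline summarized above can be illustrated with a minimal, self-contained sketch. Everything here is a hypothetical stand-in, not the paper's actual implementation: the data are synthetic rows that merely mimic elemental-fraction (EF) vectors, the "source" model is a tiny one-hidden-layer NumPy MLP rather than the paper's deep network, and ridge regression stands in for the target model. The key step it demonstrates is extracting the hidden layer's pre-activation values from the pre-trained source model and using them as the learned representation for a small target dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic stand-in for composition data (EF-like vectors).
def make_data(n, d=10):
    x = rng.random((n, d))
    x /= x.sum(axis=1, keepdims=True)      # rows sum to 1, like elemental fractions
    w = np.sin(np.arange(d))               # arbitrary nonlinear "property" target
    y = np.tanh(x @ w) + 0.01 * rng.standard_normal(n)
    return x, y

# "Source" model: a one-hidden-layer MLP trained on the large dataset.
def train_source(x, y, hidden=32, lr=0.1, epochs=500):
    d = x.shape[1]
    w1 = rng.standard_normal((d, hidden)) * 0.3
    b1 = np.zeros(hidden)
    w2 = rng.standard_normal(hidden) * 0.3
    b2 = 0.0
    n = len(x)
    for _ in range(epochs):
        z = x @ w1 + b1                    # pre-activations: the learned representation
        h = np.maximum(z, 0.0)             # ReLU
        err = h @ w2 + b2 - y
        # Plain gradient descent on MSE loss.
        dh = np.outer(err, w2) * (z > 0)
        w2 -= lr * (h.T @ err / n); b2 -= lr * err.mean()
        w1 -= lr * (x.T @ dh / n);  b1 -= lr * dh.mean(axis=0)
    return w1, b1

def pre_activation_features(x, w1, b1):
    """Transferred representation: hidden-layer values BEFORE the nonlinearity."""
    return x @ w1 + b1

# Target model: closed-form ridge regression on the transferred features.
def fit_ridge(f, y, lam=1e-3):
    a = np.c_[f, np.ones(len(f))]          # append a bias column
    return np.linalg.solve(a.T @ a + lam * np.eye(a.shape[1]), a.T @ y)

x_src, y_src = make_data(2000)             # stands in for the large source dataset
x_tgt, y_tgt = make_data(100)              # stands in for the small target dataset

w1, b1 = train_source(x_src, y_src)
f_tgt = pre_activation_features(x_tgt, w1, b1)
coef = fit_ridge(f_tgt, y_tgt)
pred = np.c_[f_tgt, np.ones(len(f_tgt))] @ coef
mse = float(np.mean((pred - y_tgt) ** 2))
print(f_tgt.shape, round(mse, 4))
```

In a real setting the source network would be trained on the large $(\sim 345K)$ dataset and the target model fit and evaluated on the small $(\sim 28K$ or $\sim 2K)$ datasets; the sketch only shows where the pre-activation representation enters the pipeline.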