Locomotion Mode Detection (LMD) has been a prominent research area in recent years. It facilitates estimating travel expenses and travel time, mitigating traffic congestion, and related applications. LMD incorporates readings from multiple inertial sensors, including the accelerometer, gyroscope, and magnetometer. The existing literature on LMD covers different aspects, but none of it emphasizes the recognition of unseen class labels together with the deployment of deep learning models on Resource-Constrained Devices (RCDs). In addition, the dynamic variation of resources during inference with a deep learning model deployed on an RCD remains unexplored. Therefore, we propose a deep learning-based mechanism that incorporates the concept of zero-shot learning to recognize both seen and unseen class labels (locomotion modes). Next, we solve an optimization problem to determine the optimal deep learning model to deploy on RCDs. Further, we perform an experimental analysis on a publicly available dataset to study the impact of seen classes, unseen classes, and RAM occupancy. The results demonstrate the effectiveness of the proposed schemes in recognizing seen and unseen locomotion modes on RCDs.