Millimeter-wave (MMW) radar is becoming an essential sensing technology in smart environments owing to its illumination- and weather-independent sensing capability. These capabilities have been widely explored and integrated into intelligent vehicle systems, typically using industry-grade MMW radars. However, industry-grade MMW radars are often expensive and difficult to obtain for deployable, community-oriented smart environment applications. Commercially available MMW radars, on the other hand, pose hidden underlying challenges that have yet to be well investigated for tasks such as object and activity recognition, real-time person tracking, and object localization. Such tasks are frequently addressed with image and video data, which are relatively easy for an individual to obtain, interpret, and annotate. However, image and video data depend on illumination and weather conditions, are vulnerable to occlusion, and inherently raise privacy concerns for individuals. It is therefore crucial to investigate whether commercially available MMW radars can serve as a viable alternative sensing mechanism that removes these dependencies and preserves privacy. Before championing MMW radar, several questions about its practical feasibility and performance under different operating environments need to be answered. To address these concerns, in this study we have collected a dataset using a commercially available MMW radar, the Automotive mmWave Radar (AWR2944) from Texas Instruments, and report the optimal experimental settings for object recognition performance using several deep learning algorithms. Moreover, our robust data collection procedure allows us to systematically study and identify potential challenges in the object recognition task under cross-ambience scenarios. We explore potential approaches to overcome these underlying challenges and report extensive experimental results.