The growing demand for high-quality streaming video has intensified research into Video Super-Resolution (VSR). Implementing VSR on the client side enhances video resolution without requiring additional bandwidth, instead leveraging local or edge computing resources. The abundance of high-quality video content and the relative ease of VSR dataset generation have made Deep Neural Network-based VSR (DNN-VSR) approaches increasingly popular. Such datasets are typically built by pairing down-sampled high-resolution videos with their low-resolution counterparts as training instances. However, most existing DNN-VSR techniques focus on enhancing down-sampled videos, e.g. via Bicubic Interpolation (BI), without accounting for the codec loss inherent in video streaming, which limits their practicality. This work evaluates five state-of-the-art (SOTA) DNN-VSR algorithms, comparing their performance on streaming videos processed with Fast Forward Moving Picture Experts Group (FFmpeg) to emulate codec loss. Our analysis also incorporates subjective testing to address the limitations of objective metrics for VSR evaluation. We conclude with a discussion of the results and outline directions for future work in this domain.
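As an illustration of the degradation model the abstract critiques, the following is a minimal sketch of how DNN-VSR training pairs are commonly generated by bicubic down-sampling; the function name, scale factor, and use of Pillow are illustrative assumptions, not details from this paper.

```python
# Hypothetical sketch: generate a low-resolution training frame from a
# high-resolution one via bicubic interpolation (BI), the degradation
# most DNN-VSR methods assume. Real streaming pipelines add codec loss
# on top of this, which is the gap this work examines.
from PIL import Image


def make_lr(hr: Image.Image, scale: int = 4) -> Image.Image:
    """Down-sample an HR frame by `scale` using bicubic interpolation."""
    w, h = hr.size
    return hr.resize((w // scale, h // scale), Image.BICUBIC)


if __name__ == "__main__":
    hr_frame = Image.new("RGB", (256, 256))  # placeholder HR frame
    lr_frame = make_lr(hr_frame, scale=4)
    print(lr_frame.size)  # the 4x-reduced spatial resolution
```

In a streaming setting, a closer emulation would additionally encode and decode the clip (e.g. with FFmpeg and a lossy codec) before training, which is the codec loss this study injects.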