Upper limb kinematic analysis, employed in the clinical assessment of motor function and in rehabilitation training, has traditionally been performed manually with a goniometer. A current trend is to deploy alternative technologies and devices, including low-cost yet accurate RGB cameras, to reduce manual effort. Among these, deep learning-based camera methods have been investigated to provide the same ease and accessibility as a manual handheld goniometer. The key to measuring upper limb Range of Motion (ROM) with a camera is to estimate the upper limb joints accurately. Many existing joint estimation algorithms focus on improving accuracy while setting efficiency aside, so it remains challenging to apply them to low-capacity, budget-friendly devices, which are in high demand in clinical scenarios. We propose a lightweight and fast deep learning model that estimates human pose and uses the predicted joints to measure ROM for upper limb joints. Unlike other human pose estimation methods that learn and predict all major joints of the human body, the proposed model focuses only on the upper limb, which improves accuracy and reduces prediction overhead. To further reduce model size and latency, the model is built on a compact neural network architecture, and its parameters are quantized to 8-bit precision. As a result, our model runs 4.1 times faster and is 15.5 times smaller than a full-sized state-of-the-art human pose estimation model. The proposed method is further evaluated on different upper limb functional tasks. Results show that it achieves satisfactory accuracy in ROM measurement and a high degree of agreement with a goniometer. Compared with goniometer-based ROM measurement, the presented method is easier to operate and can be performed remotely, while still retaining good accuracy.
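To make the joint-to-ROM step concrete, the angle at a joint can be computed from three estimated keypoints as the angle between the two limb-segment vectors meeting at that joint. The sketch below is illustrative only; the function name and the shoulder/elbow/wrist keypoint layout are assumptions, not the paper's actual implementation.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by keypoints a-b-c
    (e.g. shoulder-elbow-wrist for the elbow flexion angle)."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    ba, bc = a - b, c - b
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    # Clip guards against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical 2D keypoints from a pose estimator (image coordinates):
shoulder, elbow, wrist = (0.0, 1.0), (0.0, 0.0), (1.0, 0.0)
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```

The ROM for a functional task would then be the range (max minus min) of this angle over the frames of a recorded movement.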