Edge intelligence has already become a reality: millions of mobile and IoT devices currently run deep learning tasks, and deep learning inference is one of the mainstream AI workloads on edge devices. However, the basic overhead characteristics of deep learning inference tasks on edge devices remain unclear. In this paper, we analyze in depth the overheads of running deep learning inference jobs on edge devices with OpenFaaS, a mainstream open-source platform that provides serverless functions in edge environments. Our study reveals that the performance of deep learning inference tasks is significantly affected by model size and by resource contention on edge devices, which can lead to a 3X performance degradation. For instance, the model size of ResNet-50 is 11X that of ShuffleNet, while its inference time is 30X that of ShuffleNet on the same inference tasks. In addition, we find that the network environment can jeopardize the performance of edge applications; e.g., a poor network connection increases CPU overhead. Based on these insights, we offer suggestions for designing more efficient serverless platforms and resource management strategies for edge computing.
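As a rough illustration of the kind of measurement behind these numbers, the sketch below times HTTP requests against a serverless inference endpoint. It is a minimal, hedged example: the gateway URL and function name in the comment are hypothetical placeholders, not the paper's actual setup.

```python
import time
import urllib.request


def measure_latency(url: str, payload: bytes, runs: int = 10) -> float:
    """Average wall-clock latency (seconds) of POSTing `payload` to `url`."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        req = urllib.request.Request(url, data=payload)
        with urllib.request.urlopen(req) as resp:
            resp.read()  # include response transfer in the measured time
        total += time.perf_counter() - start
    return total / runs


# Hypothetical OpenFaaS gateway and function name:
# avg = measure_latency("http://127.0.0.1:8080/function/resnet50-infer",
#                       image_bytes)
```

End-to-end timing like this captures both inference cost and network transfer, so comparing models of different sizes (e.g., ResNet-50 vs. ShuffleNet) under varying network conditions reflects the combined overheads the study discusses.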