Edge computing is known to provide lower latency for compute services. Its principal strategy is to utilize compute resources physically and/or logically close to the source of data. A number of prior works have studied scheduling tasks so as to minimize the overall latency of task completion. Recently, however, safety-critical applications such as Factory 4.0 and teleoperated vehicles have been proposed that rely on edge computing for their operation, and such applications require consistently low latency. Thus, in this demo, we focus on task scheduling that reduces tail latency, where the tail is defined as the 90th percentile or above. We design a deep reinforcement learning strategy that selectively duplicates tasks across multiple edge devices to mitigate the effect of uncertainty in both network latency and execution time. We evaluate our technique under different types of network and compute loads, and show that our strategy provides faster task completion times than existing baseline strategies.
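The intuition behind duplication can be illustrated with a minimal simulation (a sketch of the general idea, not the paper's actual method: the heavy-tailed latency distribution and its parameters are assumptions made here for illustration). Running a task on two devices and keeping the earliest result replaces one latency sample with the minimum of two, which sharply lowers the 90th percentile:

```python
import random

def percentile(values, p):
    """Return the p-th percentile via nearest-rank on sorted data."""
    ordered = sorted(values)
    index = int(p / 100 * (len(ordered) - 1))
    return ordered[index]

random.seed(0)

def sample_latency():
    # Hypothetical heavy-tailed completion time: mostly fast,
    # occasionally very slow (network or compute stragglers).
    return random.lognormvariate(0, 1)

# Run each task once vs. duplicated on two devices (keep the fastest).
single = [sample_latency() for _ in range(10_000)]
duplicated = [min(sample_latency(), sample_latency()) for _ in range(10_000)]

p90_single = percentile(single, 90)
p90_duplicated = percentile(duplicated, 90)
print(f"p90 single: {p90_single:.2f}, p90 duplicated: {p90_duplicated:.2f}")
```

The trade-off is that duplication doubles the load on the edge devices, which is why the demo's scheduler duplicates only selectively, learning when the extra load is worth the tail-latency reduction.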