Large-scale deep learning training (DLT) jobs are time-consuming and are usually run in distributed cluster environments. However, existing DLT frameworks such as TensorFlow lack ad hoc optimizations for parallelism and scheduling, which leads to severely degraded efficiency. Because of this, researchers must choose appropriate scheduling algorithms for cluster jobs. Given the expense of hardware resources, it is necessary to use a job scheduling simulator (JSS) to verify the performance of different scheduling algorithms in advance.
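To illustrate why simulation is useful for comparing scheduling algorithms before committing real hardware, the following is a minimal sketch (not the simulator described here) of a single-node, non-preemptive event-driven simulation. The `Job` fields, policy names, and workload values are illustrative assumptions; real DLT simulators model GPUs, placement, and preemption.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    arrival: float   # submission time
    duration: float  # run time on one worker

def simulate(jobs, policy):
    """Non-preemptive single-worker simulation; returns average job completion time (JCT)."""
    pending = sorted(jobs, key=lambda j: j.arrival)
    t, total_jct, done = 0.0, 0.0, []
    ready = []
    while pending or ready:
        # admit every job that has arrived by the current time
        while pending and pending[0].arrival <= t:
            ready.append(pending.pop(0))
        if not ready:                 # idle until the next arrival
            t = pending[0].arrival
            continue
        # pick the next job according to the scheduling policy
        key = (lambda j: j.duration) if policy == "SJF" else (lambda j: j.arrival)
        ready.sort(key=key)
        job = ready.pop(0)
        t += job.duration             # run the job to completion
        total_jct += t - job.arrival
        done.append(job.name)
    return total_jct / len(jobs)

# Hypothetical workload: three jobs submitted at t=0 with different run times.
jobs = [Job("A", 0, 10), Job("B", 0, 2), Job("C", 0, 4)]
print(simulate(jobs, "FIFO"))  # FIFO average JCT
print(simulate(jobs, "SJF"))   # SJF average JCT
```

Even this toy model exposes the kind of trade-off a JSS is meant to surface: for this workload, shortest-job-first yields a lower average completion time than FIFO, without ever occupying a real GPU.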