A trend in today's data centers is to deploy heterogeneous storage devices to meet the different storage demands of various big data workloads. For example, many nodes are equipped with both SSDs and HDDs, and HDFS has introduced a heterogeneous-storage-aware feature to adapt to such hybrid storage clusters. However, current task schedulers on big data processing platforms (such as Hadoop and Spark) consider only the overhead of network data transmission, exploiting the data locality principle. On heterogeneous storage clusters, task completion time is also affected by the speed of the storage devices (SSDs or HDDs) where the data are stored. Ignoring these speed differences results in poor utilization of high-speed devices such as SSDs. In this paper, we propose a task scheduling strategy for heterogeneous storage clusters called H-Scheduler. The key idea of H-Scheduler is to differentiate the speeds of storage devices by storage type. It classifies tasks by both data locality and storage type, and redefines the priorities of the different task classes according to both storage device speed and data locality to reduce job execution time. We implemented H-Scheduler in Spark, and the experimental results show that H-Scheduler can reduce job execution time by up to 73.6%, depending on the workload characteristics and the data distribution among the different types of storage devices.
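The classification described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the concrete priority ordering over the locality/storage-type classes (e.g. placing local-SSD tasks first) is an assumption made for the example, since the abstract only states that priorities combine both dimensions.

```python
from dataclasses import dataclass

# Hypothetical priority table: H-Scheduler classifies tasks by both data
# locality and storage type; the specific ranking below is illustrative only.
PRIORITY = {
    ("local", "SSD"): 0,   # no network transfer, fast device
    ("local", "HDD"): 1,
    ("remote", "SSD"): 2,
    ("remote", "HDD"): 3,  # network transfer plus slow device
}

@dataclass
class Task:
    task_id: int
    locality: str  # "local" or "remote" relative to the requesting executor
    storage: str   # "SSD" or "HDD", where the task's input block resides

def schedule(tasks):
    """Order tasks by the joint (locality, storage type) priority class."""
    return sorted(tasks, key=lambda t: PRIORITY[(t.locality, t.storage)])

tasks = [Task(1, "remote", "HDD"), Task(2, "local", "SSD"),
         Task(3, "local", "HDD"), Task(4, "remote", "SSD")]
print([t.task_id for t in schedule(tasks)])  # → [2, 3, 4, 1]
```

A locality-only scheduler would treat tasks 2 and 3 (and 1 and 4) as equivalent; differentiating by storage type is what lets the scheduler keep the faster SSDs busy.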