To improve the productivity and safety of cranes, deep reinforcement learning (DRL) has received widespread attention as a framework for developing automated control methods. However, a major challenge of DRL is its low sample efficiency, which is further exacerbated by the operational and kinematic characteristics of cranes. Our study proposes an approach to improving sample efficiency when training control policies for two subtasks: horizontal transportation and sway suppression. To do this, we built a simulation environment and defined its state representation and reward function. We then performed experiments to determine whether three DRL techniques (reward shaping, curriculum learning, and generative adversarial imitation learning) can mitigate the sample efficiency degradation caused by these operational and kinematic characteristics. The results show that the techniques used in our experiments effectively improve the sample efficiency and learning performance of the DRL model for crane operation.
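To illustrate the first of these techniques, a dense shaped reward for the transportation and sway-suppression subtasks might combine progress toward the target with penalties on the load's sway angle and sway rate. This is a minimal sketch under assumed state variables and weights; the function name, arguments, and coefficients are illustrative and not the paper's actual reward design.

```python
def shaped_reward(dist, prev_dist, sway_angle, sway_rate,
                  w_progress=1.0, w_sway=0.5, w_rate=0.1):
    """Illustrative shaped reward for crane control.

    dist, prev_dist : horizontal distance of the load to the target
                      at the current and previous time steps
    sway_angle      : load sway angle (rad)
    sway_rate       : angular velocity of the sway (rad/s)
    """
    # Positive when the load moved closer to the target this step.
    progress = prev_dist - dist
    # Penalize both the sway angle and how fast it is changing.
    sway_penalty = w_sway * abs(sway_angle) + w_rate * abs(sway_rate)
    return w_progress * progress - sway_penalty
```

A dense signal of this form gives the agent feedback at every step instead of only at task completion, which is the general mechanism by which reward shaping can improve sample efficiency.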