Reinforcement learning (RL) has emerged as a promising approach for robot manipulation tasks. However, the data-intensive nature and substantial training time of online RL make it unsafe or impractical in many real-world settings, including robotics. This paper empirically investigates the feasibility, generalization, and adaptability of offline RL for robot manipulation tasks in comparison with online RL. We apply several state-of-the-art algorithms, including AWAC, CQL, and IQL, to robot push tasks to examine the feasibility and practicality of offline RL for robotic manipulation. We also investigate how characteristics of the offline dataset, such as its size, exploration ratio, and degree of randomization, affect offline RL performance. The generalization and adaptability of these algorithms are assessed in unseen environments with varied object properties and physics settings. The results demonstrate that offline RL not only achieves promising performance but also generalizes better than online RL. In terms of adaptation, offline RL attains significant performance improvements with only a small number of online fine-tuning steps. These findings underline the potential of offline RL as an effective and practical approach for real-world robot manipulation tasks.
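
To make the class of algorithms concrete, the following is a minimal, self-contained sketch of the expectile value regression at the core of IQL, one of the offline RL methods evaluated here. The network sizes, expectile parameter, and synthetic batch dimensions are illustrative assumptions, not the experimental setup used in this paper.

```python
import torch
import torch.nn as nn

def expectile_loss(diff: torch.Tensor, tau: float = 0.7) -> torch.Tensor:
    """Asymmetric L2 loss: |tau - 1(diff < 0)| * diff^2, as in IQL."""
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

# Hypothetical dimensions for a push-task observation/action space.
obs_dim, act_dim, batch = 12, 4, 256

q_net = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1))
v_net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, 1))
v_opt = torch.optim.Adam(v_net.parameters(), lr=3e-4)

# One value-function update on a (synthetic) offline batch: V(s) is regressed
# toward Q(s, a) under the expectile loss, which never queries the Q-function
# at out-of-distribution actions -- the property that makes IQL usable on a
# fixed dataset without online interaction.
s = torch.randn(batch, obs_dim)
a = torch.randn(batch, act_dim)
with torch.no_grad():
    q = q_net(torch.cat([s, a], dim=-1))
loss = expectile_loss(q - v_net(s), tau=0.7)
v_opt.zero_grad()
loss.backward()
v_opt.step()
print(f"value loss: {loss.item():.4f}")
```

In a full pipeline this update would iterate over the offline dataset alongside Q-function and advantage-weighted policy updates; the same trained agent can then be fine-tuned with a small number of online interaction steps, the adaptation setting studied above.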