This paper investigates reinforcement learning for autonomous exploration of unknown environments. Autonomous exploration is crucial in many applications, such as urban search and rescue, security inspection, and environmental mapping. Traditional frontier-based approaches generalize poorly across highly complex scenarios, while learning-based approaches can adapt to diverse environments but are harder to train to convergence. We therefore build a hierarchical exploration framework on top of frontier information. Within it, we propose a reinforcement learning-based local decision model that uses deep neural networks to learn an exploration strategy directly from the environment. To avoid becoming trapped in local optima, we also introduce a global rescue module that guides the robot back onto a productive exploration path. Compared with other hierarchical methods, our framework is more efficient and robust across many environments, substantially reducing both total completion time and path length.
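The hierarchical structure described above can be sketched as a simple decision loop: a local policy selects nearby frontiers, and a global rescue step takes over when the local policy stalls. The sketch below is purely illustrative; the function names, the random stand-in for the learned local model, and the nearest-frontier rescue heuristic are all assumptions, not the paper's actual method.

```python
import math
import random

def nearest_frontier(pos, frontiers):
    # Global rescue heuristic (assumed): pick the closest remaining frontier.
    return min(frontiers, key=lambda f: math.dist(pos, f))

def local_policy(pos, frontiers):
    # Stand-in for the learned local decision model: choose a random
    # frontier within a hypothetical sensing radius of 5.0 units.
    nearby = [f for f in frontiers if math.dist(pos, f) < 5.0]
    return random.choice(nearby) if nearby else None

def explore(start, frontiers, max_steps=100):
    pos, path = start, [start]
    frontiers = set(frontiers)
    for _ in range(max_steps):
        if not frontiers:
            break  # all frontiers visited: exploration complete
        goal = local_policy(pos, frontiers)
        if goal is None:
            # Local model is stuck: fall back to the global rescue module.
            goal = nearest_frontier(pos, frontiers)
        frontiers.discard(goal)
        pos = goal
        path.append(pos)
    return path
```

The key design point the sketch captures is the fallback: the global module only intervenes when the local decision model cannot make progress, so most decisions stay cheap and local.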