Thanks to advances in computer vision, 6-DoF robot arms can now plan an optimal trajectory to a given target. However, 6-DoF semantic grasp planning still struggles with occluded targets in cluttered scenes, where the arm may not be able to grasp the target in a single shot. Existing attempts to grasp an occluded object typically re-arrange the cluttered scene in the hope of exposing the target for a more viable trajectory. However, such methods require multiple viewpoints to model the scene and make global re-arrangement plans, and the re-arrangement itself takes a considerable number of steps. In our work, we propose a decision-making algorithm that combines occlusion prediction (BCNet) with a grasp pose planning algorithm (GraspNet), enabling the robot arm to understand the relative positions of the objects in a scene. Our method relies on a single viewpoint, so the robot arm does not need to observe the scene from all sides. In addition, our method is target-motivated: we grasp only the objects relevant to the target instead of re-arranging every object in the scene, providing a novel and efficient solution to grasping occluded targets.
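The target-motivated decision loop can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual implementation: `predict_occlusion_graph`, `plan_grasp`, and the dictionary-based scene representation are hypothetical stand-ins for BCNet, GraspNet, and the perception pipeline.

```python
# Hypothetical sketch of a target-motivated grasping loop: clear only the
# objects that occlude the target (directly or transitively), then grasp it.

def predict_occlusion_graph(scene):
    # Stand-in for BCNet: maps each object to the list of objects occluding it.
    return scene["occluders"]

def plan_grasp(scene, obj):
    # Stand-in for GraspNet: returns a grasp pose (here, just a label) for obj.
    return f"grasp_pose({obj})"

def grasp_target(scene, target):
    """Return grasp plans for the target's occluders, then the target itself.

    Objects unrelated to the target are never touched, in contrast to
    scene re-arrangement methods that manipulate the whole clutter.
    """
    occluders = predict_occlusion_graph(scene)
    order = []

    def clear(obj):
        # Recursively clear whatever occludes obj before grasping obj itself.
        for blocker in occluders.get(obj, []):
            clear(blocker)
        if obj not in order:
            order.append(obj)

    clear(target)
    return [plan_grasp(scene, obj) for obj in order]
```

For example, if a lid occludes a box and the box occludes the cup we want, only those three objects are grasped, in lid-box-cup order, regardless of how many other objects clutter the scene.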