This paper presents a refinement framework that enhances the accuracy of interactive image segmentation by exploiting all available semantic cues. Interactive image segmentation iteratively improves segmentation masks from an input image and user annotations. The information available in this process ranges from low-level visual features, such as colors and textures, to high-level semantic information, such as user annotations and intermediate segmentation results. Despite tremendous effort toward segmenting overall object shapes, existing methods underutilize these semantic cues, yielding segmentation masks with unsatisfactory boundary quality. The proposed framework first extracts confidence guidance maps, then suppresses or lifts the predicted probabilities of confident pixels, and finally refines the segmentation boundaries using color similarities as a basis and prediction confidence as guidance. Experimental results demonstrate that the framework incurs low computational cost and significantly boosts existing methods on standard benchmarks.
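The three-stage pipeline described above can be illustrated with a minimal sketch. This is not the paper's actual implementation; the confidence definition, the threshold `conf_thresh`, the window `radius`, and the Gaussian color kernel with bandwidth `sigma_color` are all illustrative assumptions, chosen only to show the idea of snapping confident pixels and smoothing uncertain boundary pixels with color-similarity weights gated by neighbor confidence.

```python
import numpy as np

def refine_mask(image, prob, conf_thresh=0.8, sigma_color=0.1, radius=2):
    """Illustrative sketch of the described pipeline (assumed details):
    1) derive a confidence map from the predicted probabilities,
    2) suppress/lift probabilities of confident pixels toward 0/1,
    3) re-estimate uncertain (boundary) pixels as a weighted average of
       neighbors, weighted by color similarity and neighbor confidence.
    image: (H, W, 3) float array, prob: (H, W) foreground probabilities."""
    conf = np.abs(prob - 0.5) * 2.0  # 0 = uncertain, 1 = fully confident
    p = prob.copy()
    p[(conf >= conf_thresh) & (prob > 0.5)] = 1.0   # lift confident foreground
    p[(conf >= conf_thresh) & (prob <= 0.5)] = 0.0  # suppress confident background

    h, w = prob.shape
    out = p.copy()
    for y in range(h):
        for x in range(w):
            if conf[y, x] >= conf_thresh:
                continue  # refine only low-confidence (boundary) pixels
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch_c = image[y0:y1, x0:x1, :]
            patch_p = p[y0:y1, x0:x1]
            patch_g = conf[y0:y1, x0:x1]
            # color similarity as the basis, prediction confidence as guidance
            diff = patch_c - image[y, x, :]
            w_color = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma_color ** 2))
            wgt = w_color * patch_g + 1e-8
            out[y, x] = np.sum(wgt * patch_p) / np.sum(wgt)
    return out
```

On a toy two-color image, confident pixels are snapped to 0 or 1, while pixels near the color edge inherit the label of their most color-similar confident neighbors, which is the intended boundary-sharpening effect.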