Visual degradation often occurs during image acquisition. Existing color-based superpixel methods frequently fail to generate accurate superpixels under color degradation because the color information becomes ambiguous. To eliminate this ambiguity, we propose a novel approach, DFASP (Depth-Fused Adaptive Superpixel), which fuses depth information with color to generate accurate superpixels under visual degradation. Furthermore, we design an adaptive mechanism that automatically balances color and depth information during pixel clustering. We compare our method with state-of-the-art methods on public datasets and on our own dataset. Under visual degradation, the proposed method produces more accurate object contours than color-based approaches. The experimental results demonstrate that our method substantially outperforms popular methods in boundary adherence and regularity.
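The core idea of fusing depth with color under an adaptive weight can be illustrated with a SLIC-style clustering distance. The sketch below is a minimal, hypothetical formulation, not the paper's actual DFASP algorithm: the functions `fused_distance` and `adaptive_alpha`, the weighting scheme, and all parameter names are illustrative assumptions.

```python
import numpy as np

def fused_distance(color_px, color_c, depth_px, depth_c, xy_px, xy_c,
                   alpha, S, m=10.0):
    """SLIC-style distance that fuses color and depth features.

    alpha in [0, 1] trades off depth against color; an adaptive
    mechanism (as in the paper) would set it per pixel. The exact
    formulation here is a hypothetical sketch, not the paper's.
    S is the superpixel grid interval, m the compactness weight.
    """
    d_color = np.linalg.norm(color_px - color_c)   # color distance
    d_depth = abs(depth_px - depth_c)              # depth distance
    d_xy = np.linalg.norm(xy_px - xy_c)            # spatial distance
    d_feat = (1.0 - alpha) * d_color + alpha * d_depth
    return np.sqrt(d_feat ** 2 + (d_xy / S) ** 2 * m ** 2)

def adaptive_alpha(local_color_var, eps=1e-6):
    """Heuristic weight: rely more on depth where local color variance
    is low (color is ambiguous). Purely illustrative."""
    return 1.0 / (1.0 + local_color_var + eps)
```

For example, in a degraded region with near-zero local color variance, `adaptive_alpha` approaches 1, so the clustering distance is driven almost entirely by depth.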