In human body pose estimation, manifold learning is a useful technique for reducing the dimensionality of 2D image and 3D body configuration data. Body pose is most commonly estimated from silhouettes derived from single images or image sequences. A major challenge in applying manifold estimation, however, is its sensitivity to silhouette variation. In this paper, we propose a novel approach to handling viewpoint-induced silhouette variation: we introduce biased label distances for learning manifolds that can represent variations in viewpoint, pose, and 3D body configuration. We demonstrate the effectiveness of the approach on both a synthetic and a real-world dataset.