Traditional learning methods solve classification problems in a training stage by exploiting the intrinsic relationships between available training instances and binary class labels. Label distribution learning instead aims to learn models that faithfully capture the connection between instances and probabilistic labels, and such models can then be used to predict the label distributions of unseen instances. Existing methods mainly focus on learning prediction models independently, and adaptive learning models are still desired for complicated scenarios. In this work, a novel approach to label distribution learning is proposed, in which an unfolded generator is adopted to produce class-specific samples for each class. As a consequence, the original problem can be solved conveniently, and the method extends naturally to parallel computing. Furthermore, the information volume of each instance is used to predict the ideal probabilistic labels, and it is approximately preserved as the sum of the information volumes associated with the individual classes. Experimental results on various real-world data sets demonstrate that the proposed method achieves competitive performance compared with existing methods.
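To make the label distribution learning setting concrete, the sketch below fits a linear softmax model to target label distributions by minimizing KL divergence. This is a generic baseline illustration of the task, not the unfolded-generator method proposed in this work; all function names and data here are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_ldl(X, D, lr=0.5, epochs=500):
    """Fit W so that softmax(X @ W) approximates target distributions D,
    minimizing sum_i KL(D_i || softmax(X_i @ W)) by gradient descent."""
    n, d = X.shape
    c = D.shape[1]
    W = np.zeros((d, c))
    for _ in range(epochs):
        P = softmax(X @ W)
        # Gradient of the KL objective with respect to the logits is (P - D).
        W -= lr * X.T @ (P - D) / n
    return W

# Toy data: 2 features, 3 classes, targets drawn from a linear softmax model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
D = softmax(X @ rng.normal(size=(2, 3)))

W = fit_ldl(X, D)
P = softmax(X @ W)
mean_kl = np.mean(np.sum(D * (np.log(D) - np.log(P)), axis=1))
print(f"mean KL divergence: {mean_kl:.4f}")
```

A predicted row of `P` is itself a distribution over classes, which is what distinguishes this setting from ordinary single-label classification.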