Learning-based methods are gradually making their way into 3D saliency estimation, but ground-truth annotations are too scarce to train a 3D saliency network directly. We propose a novel attention-embedding strategy for 3D saliency estimation that applies an attention-embedding scheme directly to 3D meshes. With this method, the network is trained in a weakly supervised manner: it requires no saliency annotations yet generalizes well across object categories such as animals, furniture, cars, and people. Experimental results show that our approach is comparable with existing state-of-the-art methods. We also apply the saliency results to mesh simplification; evaluations on the simplified models show that visually significant parts are retained during saliency-aware simplification.