Recently, volumetric video has gained growing research interest, as it enables immersive and realistic experiences by representing the full volume of 3D content. However, due to the limited storage space and transmission bandwidth of common applications, volumetric videos inevitably suffer from compression and simplification distortions, which severely harm users' quality of experience (QoE). Moreover, current volumetric video quality assessment (VVQA) research focuses mainly on full-reference or reduced-reference metrics, which cannot be applied in the absence of reference information. Therefore, in this paper, we propose a novel deep-learning-based no-reference VVQA method built on multi-view learning. Specifically, we first project volumetric videos into 2D video sequences from various viewpoints. A 3D-CNN backbone is then utilized to extract quality-aware features from the projected video sequences. Finally, a quality regression module fuses the features learned from the multiple viewpoints and jointly regresses them into quality scores. Experimental results show that our method outperforms current state-of-the-art objective VVQA metrics on the vsenseVVDB2 database, which validates the effectiveness of the proposed method.
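The project-then-assess pipeline described above can be sketched as follows. This is a minimal NumPy illustration only: the orthographic z-buffer projection, the hand-crafted "quality features" standing in for 3D-CNN features, and the linear regressor with assumed pretrained weights `w`, `b` are all simplified stand-ins, not the paper's actual implementation.

```python
import numpy as np

def project_views(points, colors, viewpoints, res=64):
    """Orthographically project a colored point cloud onto a 2D image
    plane for each viewpoint (axis index, sign), yielding one grayscale
    image per view. Simplified stand-in for the projection step."""
    views = []
    for axis, sign in viewpoints:
        keep = [a for a in range(3) if a != axis]
        uv = points[:, keep]
        # normalize image-plane coordinates into [0, res)
        uv = (uv - uv.min(0)) / (np.ptp(uv, axis=0) + 1e-8) * (res - 1)
        depth = sign * points[:, axis]
        img = np.zeros((res, res))
        zbuf = np.full((res, res), -np.inf)
        for (u, v), d, c in zip(uv.astype(int), depth, colors):
            if d > zbuf[u, v]:          # z-buffer: keep nearest point
                zbuf[u, v], img[u, v] = d, c
        views.append(img)
    return views

def quality_features(img):
    """Toy quality-aware features (mean, std, mean gradient magnitude
    as a crude sharpness cue), in place of learned 3D-CNN features."""
    gy, gx = np.gradient(img)
    return np.array([img.mean(), img.std(), np.hypot(gx, gy).mean()])

def predict_quality(views, w, b):
    """Fuse per-view features by averaging, then regress to a score
    with a linear model (w, b assumed pretrained)."""
    fused = np.mean([quality_features(v) for v in views], axis=0)
    return float(fused @ w + b)

# --- usage on random data (six axis-aligned viewpoints) ---
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, (5000, 3))     # hypothetical point cloud
cols = rng.uniform(0, 1, 5000)         # per-point luminance
six_views = [(a, s) for a in range(3) for s in (+1, -1)]
views = project_views(pts, cols, six_views)
score = predict_quality(views, w=np.array([0.5, 2.0, 1.0]), b=1.0)
```

In the paper's setting, each viewpoint would yield a temporal sequence of frames fed to a 3D-CNN; here a single frame per view suffices to show the projection, feature extraction, and multi-view fusion stages.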