Current data-driven salt body interpretation methods mostly rely on 2-D seismic slices and fully labeled training data. When the 2-D salt body predictions of such methods are reassembled into 3-D seismic space, the spatial continuity of the salt body distribution is lost. Because salt body labels are difficult to acquire in the field, it becomes crucial to guide the learning of 3-D networks with sparse 2-D labels. We propose a 3-D salt body segmentation method based on multiview collaborative regularization, called 3-D multiview co-regularization (SALT-MVCR). We design a dual-view collaborative training paradigm for voxel-level seismic data and propose a regional loss function suited to sparse 2-D salt body labels, which addresses the difficult problem of learning from asymmetrically supervised samples. In addition, a cross-view prediction consistency loss is designed to improve the segmentation model's understanding of the salt body by restricting the parameter search space of each single view and eliminating the stitching artifacts that arise when 2-D predictions are spliced into a volume. Experimental results show that, after supervised training with only 1.56% of the salt body labels, the method achieves a Dice index of 90.6%. Visualizations of the 3-D salt body distribution further demonstrate that 3-D SALT-MVCR can interpret the complete salt body end-to-end from the 3-D seismic volume and outperforms previous state-of-the-art methods in segmentation performance.
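The two loss terms described above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions: the function names, the mean-squared form of the cross-view consistency term, and the masked binary cross-entropy for the sparse regional loss are illustrative choices, not the paper's exact formulations.

```python
import numpy as np

def cross_view_consistency_loss(pred_view_a, pred_view_b):
    """Penalize disagreement between two views' voxel-wise salt
    probability maps over the same 3-D volume (illustrative
    mean-squared form; the paper's exact definition may differ)."""
    assert pred_view_a.shape == pred_view_b.shape
    return float(np.mean((pred_view_a - pred_view_b) ** 2))

def sparse_region_loss(pred, labels, mask):
    """Binary cross-entropy computed only on voxels covered by the
    sparse 2-D label slices (mask == 1); unlabeled voxels contribute
    nothing, so supervision can be restricted to a few slices."""
    eps = 1e-7
    p = np.clip(pred, eps, 1.0 - eps)
    bce = -(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))
    return float(np.sum(bce * mask) / max(np.sum(mask), 1.0))
```

In this sketch, supervision flows only through the masked voxels, while the consistency term couples the two views everywhere, which is how a small labeled fraction (here, 1.56% of slices) can still constrain the full 3-D prediction.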