Magnetic resonance imaging (MRI) provides detailed anatomical information that is critical for radiologists in assessment and diagnosis. Complementary information obtained from multi-contrast MRI can further improve the visibility of internal structures, especially when detecting abnormalities. However, acquiring contrast-enhanced MRI is usually time-consuming, expensive, or requires contrast agent injection. Medical image synthesis has been demonstrated to be an effective alternative. Within the scope of this project, we aim to provide a non-invasive method for synthesizing contrast-enhanced MRI from a given MRI modality. We present different generative frameworks that learn the mappings between T1-weighted and contrast-enhanced T1-weighted MRI. The frameworks jointly exploit cross-modality features to address the challenging complexity of the synthesis task. The methods are trained on a multimodal brain MRI dataset containing samples of different contrasts. Quantitative assessment was conducted by computing the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The synthesized outputs are sharp with low distortion, demonstrating the practical potential of this study.
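The evaluation metrics mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the authors' evaluation code; note that the SSIM shown here uses global image statistics, whereas standard library implementations (e.g. `skimage.metrics.structural_similarity`) compute it over a sliding Gaussian window.

```python
import numpy as np

def psnr(ref, syn, data_range=1.0):
    """Peak signal-to-noise ratio between a reference and a synthesized image."""
    mse = np.mean((ref.astype(np.float64) - syn.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

def ssim_global(ref, syn, data_range=1.0):
    """Simplified SSIM computed from global statistics (illustrative only)."""
    ref = ref.astype(np.float64)
    syn = syn.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), syn.mean()
    var_x, var_y = ref.var(), syn.var()
    cov = ((ref - mu_x) * (syn - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Higher PSNR indicates lower pixel-wise distortion, while SSIM (bounded above by 1) captures perceived structural similarity, which is why the two metrics are commonly reported together for synthesis tasks.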