This paper presents a multimodal late fusion technique for the classification of cardiac normality and abnormality, integrating 12-lead ECG image data with electronic health record (EHR) data. Cardiac diseases are a major health burden worldwide, and improved diagnostic methods are needed to help patients. An electrocardiogram (ECG) report alone may not be conclusive for predicting cardiac normality or abnormality. In recent studies, the integration of ECG images and EHR data has emerged as a promising approach for improving cardiac disease detection and diagnosis. The EHR data include critical parameters such as blood pressure (BP), oxygen saturation (SpO2), body mass index (BMI), and blood sugar level. These parameters are obtained, along with the 12-lead ECG images, from our own multimodal database (the CardioHTDC database). The proposed multimodal late fusion technique employs a deep learning architecture: the outputs of the individual deep learning models are concatenated to create a combined feature vector that incorporates both visual and structured information. A 2D convolutional neural network (CNN) is designated as model I, responsible for feature extraction from the 12-lead ECG images, while a multi-layer perceptron (MLP) is designated as model II, tasked with processing the EHR data to capture patterns that help predict early cardiac disease. The experimental results demonstrate that the proposed late fusion technique provides a significant improvement, achieving the maximum accuracy on the cardiac multimodal dataset; an accuracy of 72.2% was achieved using the multimodal late fusion technique. The lower accuracy of the algorithms is attributed to the multimodal cardiac dataset itself and the non-availability of other existing datasets for comparison.
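
The late fusion design described above can be illustrated with a minimal sketch, assuming PyTorch; the layer sizes, the number of EHR features, and the class names are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ECGImageBranch(nn.Module):
    """Model I: 2D CNN feature extractor for 12-lead ECG images."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a fixed-size descriptor
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class EHRBranch(nn.Module):
    """Model II: MLP over structured EHR parameters (BP, SpO2, BMI, blood sugar)."""
    def __init__(self, n_features=4, out_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, out_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.mlp(x)

class LateFusionClassifier(nn.Module):
    """Concatenates the two branch outputs into a combined feature vector
    and predicts cardiac normality vs. abnormality."""
    def __init__(self):
        super().__init__()
        self.image_branch = ECGImageBranch(out_dim=64)
        self.ehr_branch = EHRBranch(n_features=4, out_dim=16)
        self.head = nn.Linear(64 + 16, 2)  # two classes: normal / abnormal

    def forward(self, ecg_image, ehr):
        fused = torch.cat([self.image_branch(ecg_image),
                           self.ehr_branch(ehr)], dim=1)
        return self.head(fused)

# Example forward pass: a batch of 8 ECG images and 8 EHR feature vectors.
model = LateFusionClassifier()
logits = model(torch.randn(8, 3, 224, 224), torch.randn(8, 4))
```

The key design point is that each modality is encoded independently and fusion happens only at the feature-vector level, so either branch can be retrained or replaced without altering the other.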