In recent years, neurofeedback applications that assist meditation have become increasingly popular. A key component of such applications is the ability to accurately decode the state of meditation from electroencephalography (EEG) signals in real time, with as little calibration as possible. This work investigates the problem of cross-subject mindfulness meditation decoding from EEG signals. To this end, a dataset containing EEG recordings from Novice and Expert meditators is employed. First, Riemannian Space Data Alignment (RSDA) is performed in a session-wise and subject-specific manner to tackle subject variability and within-session shifts. Then, after a comparative study of features commonly used in meditation research, the performance of feature engineering methods is compared against a deep learning-based approach for decoding the EEG state of meditation. For the deep learning approach, EEGNet is employed, an architecture with a small number of learnable parameters that is widely used in the Brain-Computer Interface (BCI) field. EEGNet applied to Riemannian-space-aligned EEG signals achieves the highest decoding performance using the smallest time segments. The results show that EEGNet can effectively extract relevant features from EEG signals for decoding the state of meditation over small time segments, which has important implications for developing more effective, calibration-free neurofeedback applications for facilitating meditation.