The literature offers many examples showing that, contrary to what one might expect, multimodal remote sensing analysis can be suboptimal. Given the high computational cost typically incurred by multimodal investigation in order to properly extract information from multiple sources, there is a need to assess its actual benefit during image preprocessing. This assessment becomes especially important when targeting transfer learning in remote sensing, where understanding the actual relationship between the different sensors is fundamental to accurately characterizing the scenes under analysis. In this work, we derive a reliability metric by means of an information-theoretic approach. The proposed metric estimates how much confidence can be placed in the available datasets when characterizing each pixel in the region of interest. Experimental results on real datasets show how this quantity can be used to improve scene understanding and to enhance multisensor transfer learning.
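The abstract does not specify the exact form of the metric, but a per-pixel, information-theoretic reliability score between two co-registered modalities can be illustrated with a simple proxy: normalized mutual information computed over a sliding window. The function name `local_mutual_information` and the `window`/`bins` parameters below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def local_mutual_information(a, b, window=7, bins=8):
    """Illustrative per-pixel reliability proxy: mutual information
    between two co-registered single-band images, estimated over a
    sliding window and normalized to [0, 1] by the smaller marginal
    entropy.  This is a sketch, not the paper's actual metric."""
    assert a.shape == b.shape, "modalities must be co-registered"
    # Quantize each image into `bins` levels for histogram estimation.
    qa = np.digitize(a, np.quantile(a, np.linspace(0, 1, bins + 1)[1:-1]))
    qb = np.digitize(b, np.quantile(b, np.linspace(0, 1, bins + 1)[1:-1]))
    h, w = a.shape
    r = window // 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            wa = qa[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].ravel()
            wb = qb[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].ravel()
            # Joint histogram of the two quantized windows.
            joint = np.zeros((bins, bins))
            for x, y in zip(wa, wb):
                joint[x, y] += 1
            joint /= joint.sum()
            pa = joint.sum(axis=1)
            pb = joint.sum(axis=0)
            # I(A;B) = sum_{a,b} p(a,b) log[p(a,b) / (p(a) p(b))]
            nz = joint > 0
            mi = np.sum(joint[nz] *
                        np.log(joint[nz] / (pa[:, None] * pb[None, :])[nz]))
            ha = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
            hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
            # Normalized MI: 1 when one modality determines the other,
            # near 0 when the windows are statistically unrelated.
            out[i, j] = mi / min(ha, hb) if min(ha, hb) > 0 else 0.0
    return out
```

Pixels where the two modalities carry strongly shared information receive scores near 1, while pixels where the sensors disagree score low; such a map could, in the spirit of the abstract, flag regions where fusing the modalities (or transferring a model between them) is likely to help or hurt.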