Real-world databases today are particularly vulnerable to noisy, missing, and inconsistent data because of their large size (often several terabytes or more) and because they are frequently aggregated from multiple, heterogeneous sources. Low-quality data leads to poor mining results. To prepare images for better analysis, the images in a dataset are pre-processed using a variety of currently available techniques. In this work, several of these pre-processing methods are discussed, and it is observed that filtering techniques such as the Gabor filter are more popular than other state-of-the-art techniques such as interpolation and kernel-based methods.
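As a concrete illustration of the Gabor filtering mentioned above, the sketch below builds a real-valued Gabor kernel from its standard formulation (a Gaussian envelope modulating a sinusoidal carrier) and applies it to a toy image with plain NumPy. All parameter values (kernel size, sigma, wavelength, etc.) are illustrative defaults chosen here, not values from this work.

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0,
                 gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: Gaussian envelope times cosine carrier.

    Parameter defaults are illustrative, not taken from the paper.
    """
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame by theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

def apply_gabor(image, kernel):
    """Filter an image with the kernel (zero-padded, 'same'-sized output).

    With psi=0 the kernel is even-symmetric, so this correlation
    equals convolution.
    """
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)),
                    mode="constant")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Toy input: a sinusoidal grating varying along x, which a theta=0
# Gabor kernel is tuned to respond to.
img = np.tile(np.sin(np.linspace(0, 4 * np.pi, 64)), (64, 1))
response = apply_gabor(img, gabor_kernel(theta=0.0))
print(response.shape)
```

In practice one would typically use an optimized routine such as `cv2.getGaborKernel` with `cv2.filter2D` from OpenCV, or `skimage.filters.gabor`, rather than the naive loop above; the loop is kept only to make the operation explicit.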