In the realm of biometrics, iris recognition is increasingly regarded as a cornerstone of future identity verification and seamless security. A major challenge in iris recognition lies in accurately segmenting the iris and pupil regions in an image and then matching the extracted features against a provided input. This paper introduces a machine learning (ML)-based approach to improve the accuracy of matching raw inputs, demonstrated through experiments conducted on the Multimedia University (MMU) dataset. Gaussian filtering and histogram stretching are applied to reduce image noise and enhance the dataset images. The pre-processed images then undergo segmentation to isolate the iris and pupil regions. A log-Gabor filter is employed to extract image features, which are subsequently normalized. The proposed method uses Pearson's correlation-based feature selection to improve matching accuracy by retaining only relevant features, and then applies the Hamming distance to match the resulting binary codes. The results indicate that the ML-based Pearson's correlation coefficient model achieves an accuracy of 99.40% and a specificity of 92.90% on the MMU dataset, ensuring accurate classification compared with existing methods, namely Pupil Candidate Bank for Non-Cooperative Iris Recognition (PCB) and Swift Iris Recognition at a distance based on novel pupil segmentation (RSRDS).
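The pre-processing and matching steps described above can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the function names, kernel sizes, and stretch range are assumptions, and real systems typically use library routines (e.g. OpenCV) rather than the naive convolution shown here.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 2-D Gaussian kernel, normalized to sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def gaussian_filter(img, size=5, sigma=1.0):
    # naive 2-D convolution with edge padding (noise reduction step)
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

def histogram_stretch(img, lo=0.0, hi=255.0):
    # linearly map the image's intensity range onto [lo, hi]
    mn, mx = float(img.min()), float(img.max())
    return (img - mn) * (hi - lo) / (mx - mn) + lo

def hamming_distance(code_a, code_b):
    # fraction of disagreeing bits between two equal-length binary codes,
    # as used for the final template-matching decision
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    return np.count_nonzero(a ^ b) / a.size
```

In a full pipeline, the segmentation and log-Gabor feature-extraction stages would sit between the stretching and matching steps; the Hamming distance is then thresholded to decide whether two iris codes belong to the same subject.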