Augmented reality (AR) technologies have recently gained substantial attention in industry due to their potential for on-the-job training and assistance across diverse industrial settings. However, personalizing AR instructions and feedback interventions to individual user needs and skill levels remains an underexplored area of research. This paper aims to bridge this gap by combining eye tracking data with computer vision to examine the gaze and pupil behaviors of individuals with varying levels of expertise performing AR-guided procedural tasks. The main goal is to investigate the relationship between eye tracking data, visual attention, and expertise through four research questions concerning (1) differences in fixation and saccade duration between novices and experts, (2) variations in visual attention allocation to action-relevant areas of interest (AOIs) between novices and experts, (3) the influence of expertise on scanpaths and transitions between AOIs, and (4) the correlation between pupil size variations and fixation/saccade behaviors. Findings from a human-subjects study on two procedural tasks are reported. The study uses synchronized gaze, pupillometry, and egocentric video to analyze gaze interactions with AOIs and background stimuli based on object detection models. This research advances our understanding of the relationship between gaze behaviors, visual attention, and expertise, offering new insights into enabling adaptive and personalized interventions in AR. These insights are particularly relevant to AR use cases centered on training or on-the-job assistance.