Background: To ensure that nurse practitioners are adequately prepared for practice, assessment of their consultation skills is required. Objective Structured Clinical Examinations (OSCEs) have been widely used to assess medical students' consultation skills and, more recently, those of postgraduate nursing students. There are potential benefits to using a combination of examiners from nursing and medicine, but this approach has not been evaluated. The aim of this study was therefore to evaluate whether examiner profession was a source of score variance.

Methods: Nurses undertaking a postgraduate advanced assessment course were assessed using an OSCE that specifically examined their consultation skills. The OSCE consisted of four simulated-patient history-taking stations, each with two examiners (one senior doctor and one advanced nurse). Score reliability was evaluated by internal consistency, and sources of variance were examined using ANOVA.

Results: The examination was taken by 28 candidates. Score reliability was satisfactory (alpha 0.7). There was no difference in station scores between examiner professions. There were differences between individual examiners in scoring and standard setting for individual stations, but not for aggregated results. The main source of variance in candidate scores (35%) was related to case content.

Discussion: Scores from experienced doctors and nurses did not differ when examining consultation history-taking skills; the differences observed were not related to professional background. The main source of variance was content related, which may reflect the more specialist practice of the nurses and raises questions regarding this form of generalist assessment.