INTRODUCTION: To determine whether two popular artificial intelligence (AI) chatbots, ChatGPT and Bard, provide high-quality information concerning the procedure description, risks, benefits, and alternatives of various ophthalmological surgeries.

METHODS: ChatGPT and Bard were prompted with questions pertaining to the description, potential risks, benefits, alternatives, and implications of not proceeding with various surgeries across different subspecialties of ophthalmology. Six common ophthalmic procedures were included in our analysis. Two comprehensive ophthalmologists and one sub-specialist graded each response independently using a five-point Likert scale.

RESULTS: Likert grading for accuracy was significantly higher for ChatGPT than for Bard (4.5±0.6 vs 3.8±0.8, p<0.0001). ChatGPT generally outperformed Bard even when questions were stratified by type of ophthalmological surgery. There was no significant difference between ChatGPT and Bard in response length (2104.7±271.4 characters vs 2441.0±633.9 characters, p=0.12). ChatGPT responded significantly more slowly than Bard (46.0±3.0 seconds vs 6.6±1.2 seconds, p<0.0001).

CONCLUSIONS: Both ChatGPT and Bard may offer accessible and high-quality information relevant to the informed consent process for various ophthalmic procedures. Nonetheless, both AI chatbots overlooked the probability of adverse events, limiting their utility and potentially presenting patients with information that is difficult to interpret.