We explore the potential of using ChatGPT to automatically generate educational assessment questions. Despite ChatGPT's growing promise across natural language tasks, including question generation, producing content that is both reliable and desirable remains a challenge. This issue is especially critical in educational assessment, where the quality of the generated content and its alignment with the assessed material are paramount. This research examines questions produced by ChatGPT, focusing on their alignment with the levels of Bloom's Taxonomy and on their overall fluency and coherence. To assess the quality of the generated questions, we define several generation scenarios and evaluate the resulting questions using both automated metrics and manual examination. Our empirical analysis shows that ChatGPT tends to produce high-quality questions; however, these questions fall predominantly into the Remember, Understand, and Apply categories of Bloom's Taxonomy, and the model exhibits limited capacity for generating questions that require higher-order reasoning. Further improvements are therefore needed, particularly in the diversity of question types across Bloom's Taxonomy, to ensure that the generated questions are suitable for effective classroom assessment.